• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Randomness in Evolution: Valid and Invalid Usage

Oh, 24 pages of pointless bickering. Let me solve half of that dilemma: when creationists and "intelligent design" people use the word "random" to describe evolution, they are using it wrong.

More like when they are describing evolution, they are using it wrong.

When behavior is random or arbitrarily indifferent, but some actions are preferred over others, the thingamajig's inhabitants will seem to use the preferred actions. And if current actions depend on previous ones, then you may get evolution (biological or not).

Whether or not you believe this, if you are easily entertained and know how to write code, then you can play around with this. Write a class for a dot and make a bunch of them. Make them move around randomly, with some directions better than others: up or down, say, or toward/away from the mouse if you have the right library. If you get it right and you're patient, you'll see random become predictable.
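If you're curious what that looks like concretely, here is a minimal sketch in Python, with no graphics (the `Dot` class, the bias value, and the step counts are all my own choices, not anything prescribed above): each dot's moves are individually random, but "toward the target" is slightly preferred, and the population drifts predictably.

```python
import random

class Dot:
    """A dot taking unit steps on a line; each step is random but biased."""
    def __init__(self):
        self.x = 0.0

    def step(self, bias=0.55):
        # Each move is random, but +1 ("toward the target") is slightly
        # preferred over -1 ("away") -- the analogue of selection.
        self.x += 1 if random.random() < bias else -1

random.seed(1)
dots = [Dot() for _ in range(500)]
for _ in range(1000):
    for d in dots:
        d.step()

mean_x = sum(d.x for d in dots) / len(dots)
# Individually random, collectively predictable: the mean drifts to
# roughly (2*bias - 1) * steps = 100 for bias=0.55 over 1000 steps.
print(mean_x)
```

Any single dot wanders unpredictably; the swarm as a whole heads one way.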
 

You're getting close to the central issue that people have been bickering over. For the moment I'll suspend all the practical objections that I might make about pseudo-random number generators, and we'll assume that your dot class is 'random'. I've got three questions.

1. When you make some directions 'better' than others, you are skewing the distribution of the random numbers your program produces. If the random number generator naturally produced numbers with that distribution, you would probably think there was something wrong with it, yes?

2. What if you used an exactly specified set of floating-point numbers? For some such sets you will get rounding error in the insignificant digits when multiplying. Do you consider this random?

3. For this third one, let us presume that your dots are, in fact, random. So you have a system (or program) made up of random dots. Would you also call the overall behavior of your program random, despite the fact that those dots (plural) reliably follow the mouse?

All of these questions are very much central to the discussion in this thread.

Incidentally, this dot program you describe sounds an awful lot like a program for the Mac dashboard.....
 
Yes indeed... Is a random number or random pattern generator itself random?

Nope. Having random components does not a random process make. :)
 
Zosima, I know enough to recognise that a digital computer simulation of a chaotic system is not in itself a chaotic system.

And that an analogue computer simulation is.

The digital simulation contains numbers that have been translated into high or low voltages on transistors; patterns of these voltages are then altered to duplicate the mathematical operations being performed on them. This is no more a chaotic physical system than a pen-and-pencil calculation of these numbers.
 
Zosima, I know enough to recognise that a digital computer simulation of a chaotic system is not in itself a chaotic system.

Seriously, it seems you are missing something fundamental, and all this arguing is getting to be less fun. So I'll type up a mathematical explanation of how these systems work later tonight. If you have a degree in physics and still remember the math it took to get it, then you should have no trouble understanding what I'm saying. Unless you have some new and highly creative objection after that point, we're done, whether you agree or not.

I'll type it up in ~6 hours.

In case you're curious, the highlights will be: #1 only mathematical models can be truly chaotic. #2 analog, pencil-and-paper, and digital physical systems can all approximate chaotic systems equally well. #3 Any chaotic model applied to a natural system will, at best, be chaotic on some finite time/distance interval, after which it will cease to be chaotic. #4 Any chaotic model applied to a natural system will have a minimum sensitivity to perturbation.
 
It's good to see you back. :)
Good to be back and have some time to post.

Well, we might be able to come to some sort of agreement with respect to things lying on the continuum so I'll come back to that at the end.

It seems to me that you start talking about this idea of 'technically random' whenever you are having problems with whatever definition you put forward, but as far as I can tell your definitions of 'random' and 'technically random' are identical. They are both "having a state or value that depends on chance", and both assign the term random to things that are clearly not random.
I have given some examples of things that lie in both categories. Gas pressure, being due to the number and momentum of particles hitting the container, will vary as you measure it, but to such a small extent that it is only technically random, not practically so.

Compare that to a simple random number generator, where a noise source is compared to a reference voltage, and depending on the result a logical high ("1") or logical low ("0") is output. The resulting string of 1s and 0s is random in both senses, not just technically but also in practice.
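A toy model of that comparator scheme, sketched in Python (the noise source is simulated as Gaussian here; the function name and parameters are my own invention): sample the noise, compare to the reference voltage, and emit a bit.

```python
import random

random.seed(7)

def noise_bit(reference=0.0, noise_sd=1.0):
    # Sample the "noise source" (modeled as zero-mean Gaussian) and compare
    # it to the reference voltage: above -> logical 1, below -> logical 0.
    return 1 if random.gauss(0.0, noise_sd) > reference else 0

bits = [noise_bit() for _ in range(10000)]
ones = sum(bits)
print(ones)  # close to 5000: unbiased in the long run
```

With the reference set at the noise's mean, the 1s and 0s come out uniform and uncorrelated, which is what makes the stream random in practice as well as in principle.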

Also, I'm not sure how you appointed yourself the arbiter of the definition of technically random, but that seems to be the claim we're disputing at the moment. (If we were talking about practically random we wouldn't be talking about definitions and statistics, we'd be talking about the processes of evolution.)



1. You have identified some techniques for selecting samples. The first one is the only one actually called random sampling. You did the same thing with distributions, inserting 'random' into the name to try to support your point. These would normally be called 'simple random sampling', 'systematic sampling', and 'stratified sampling', in the same way that you started calling the distributions 'Poisson random distribution', 'Normal random distribution', etc. They are normally called the 'Poisson distribution' and the 'Normal distribution'.
I want to get the definition of random out of the way, since it is useless to discuss how it applies to evolution as long as we are working from different definitions. Second, I haven't appointed myself arbiter; the examples I gave above aren't ones I supplied.

Not true; random sampling is different from simple random sampling. Wikipedia doesn't do a bad job of explaining it. From the end of the first paragraph...

This process and technique is known as Simple Random Sampling, and should not be confused with Random Sampling.

And you can look at both definitions at Wikipedia, and note that one of the examples they give of a random sample is a stratified sample.

Actually, you can find "Gaussian random distribution" in textbooks and on the web. If you search Google for "gaussian random" you will find articles, several technical, on Gaussian random distributions or on generating Gaussian random numbers.

Stats books talk about random numbers having non-uniform distributions; so do books on stochastic processes; and my computer science textbook on numerical methods discusses generating non-uniformly distributed random numbers using the transform and acceptance-rejection methods.
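For anyone curious, here is a minimal Python sketch of the two methods just named, applied to an exponential and a triangular density respectively (the particular target densities are my choices for illustration):

```python
import math
import random

random.seed(3)

def exponential_inverse(rate=1.0):
    # Inverse-transform method: if U ~ Uniform(0,1), then -ln(U)/rate
    # is distributed Exponential(rate). Use 1-U to avoid log(0).
    return -math.log(1.0 - random.random()) / rate

def triangular_rejection():
    # Acceptance-rejection for f(x) = 2x on [0,1], with the uniform
    # density as proposal and envelope constant M = 2: accept a
    # candidate x with probability f(x) / (M * 1) = x.
    while True:
        x = random.random()
        if random.random() <= x:
            return x

n = 100000
exp_mean = sum(exponential_inverse() for _ in range(n)) / n
tri_mean = sum(triangular_rejection() for _ in range(n)) / n
print(exp_mean, tri_mean)  # near 1.0 and 2/3, the theoretical means
```

Both start from uniform random numbers and reshape them into the desired non-uniform distribution.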

I haven't simply added random to these, and have not appointed myself an arbiter.

2. Note that these are practical techniques for getting samples that are random; they are not perfect, and the failures you mention (like small correlations) represent deviations from the ideal (random).

3. Clearly, depending on what they are sampling, their results may differ significantly from random (they may only get one result). The point being to distinguish patterns in the outcome from patterns in the sampling.

4. I thought it noteworthy that at the point where anyone in statistics would say 'random number between 1 and 43', you say 'unbiased'. That is telling, insofar as you are manipulating your descriptions to prevent them from displaying exactly the characteristics that ideal randomness shows.

5. These techniques can yield random samples if we know something about the structure of what we are sampling, so that we add no information via a lack of uniformity of selection or a correlation between selections. For example, we would get a random sampling via systematic sampling if we knew that the order of the objects was uncorrelated with the values we were sampling from them. A second example: we use stratified sampling when we know there is enough heterogeneity in the overall population that it could bias our sample (i.e., create a strong correlation between our samples, or eliminate uniformity because one subgroup is much larger than the others).
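A small Python sketch of that first example (the population, its periodic structure, and the sampling interval are all invented for illustration): systematic sampling is badly biased when the ordering is correlated with the values and aligned with the sampling interval, but behaves like a fair random sample once the ordering is shuffled away.

```python
import random
import statistics

random.seed(11)

# Population with a periodic structure: every 25th item is larger.
values = [random.gauss(50, 10) + (20 if i % 25 == 0 else 0)
          for i in range(10000)]
pop_mean = statistics.mean(values)  # about 50.8

def systematic_sample(data, k=25, start=0):
    # Systematic sampling: every k-th element from a fixed offset.
    return data[start::k]

# Ordering correlated with the values and aligned with k: badly biased,
# because the sample hits only the boosted items (mean near 70).
biased = statistics.mean(systematic_sample(values))

# Shuffle so ordering is uncorrelated with the values: now it is fair.
shuffled = values[:]
random.shuffle(shuffled)
fair = statistics.mean(systematic_sample(shuffled))

print(pop_mean, biased, fair)
```

The technique is identical in both runs; only the correlation between ordering and values changes, and that is what decides whether the sample is effectively random.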
2. As I pointed out, random sample and simple random sample do not mean the same thing.

3. Sure, if there is no variance in the population being sampled, then the outcome of sampling will be determined.

4. Of course I didn't use the word random there. It's stupid to use a word whose definition we are discussing in a position that would lead to ambiguity. And since random doesn't mean uniform, as pointed out previously, it would be downright wrong. There was no manipulation, because random was the wrong word for the occasion.

5. I don't think you said what you wanted to there, as simple random sample and random sample are not the same.

I made this example about a Dirac delta when you were talking about 'Poisson Random Distributions' (actually called Poisson distributions), i.e. inferring that any distribution over a statistical variable is random. The 'Dirac delta random distribution' disproves this suggestion.
Poisson distributions can be population distributions or random distributions.

Also, as Mijo reminds us so frequently, if the Dirac delta function is defined over the reals or the complex numbers, we can never quite be sure that it won't have another outcome. This means we can construct examples with a Dirac delta that are completely opposed to our intuitions, insofar as they are completely predictable and yet still dependent on chance.
I don't agree with Mijo on that, though I haven't followed that line of argument much.

When one defines a probability function over the reals, the function is the probability density function, and you can get the probability of the result lying in any interval by integrating the probability density over that interval. If the density function is δ(x − k), then any interval not including k will have probability 0, and any interval including k will have probability 1. That seems about as certain as can be.
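To spell out that integral in the thread's notation (X here is just a label for the variable whose density is the shifted delta; this is the previous paragraph in symbols):

[latex]$$P(a \le X \le b) = \int_{a}^{b} \delta(x-k)\,dx = \begin{cases} 1, & k \in [a,b] \\ 0, & k \notin [a,b] \end{cases}$$[/latex]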
I'm sorry if this wasn't clear, but this is the point I was trying to demonstrate to you with this example: that there is a difference, and that the fact that we use a Gaussian distribution to model a process does not necessarily mean the population that produced it is random. Even if you do call it a "Gaussian random distribution".

This point was also a response to your laundry list of distributions. The point is similar: the distribution is separate from the process. I think that is why the necessity of no correlation is so crucial as well. You may get statistical distributions similar to the ones you mention when you randomly sample a random event many times, especially if you are summing the outcomes. But the distribution of a single random event will never be distributed that way.
The distribution of thermal noise is approximately Gaussian. If you sample the voltage on a resistor, the probability distribution of that single sample is Gaussian.
ETA: all these statistical techniques that you and Mijo keep bringing up may have random in the name, but "random distributions", "random variables", and "random samples" can all be used with a variety of random and non-random processes. What they have in common is the assumption that they are unbiased with respect to the way they are generated. In other words, they clearly reflect the nature of the process itself, because the sampling or generative process is ideally uncorrelated and uniform. To claim that all the processes they describe are also random is clearly a mistake, as is claiming that they are necessarily the statistics of random processes. The only process that is ideally random is the uncorrelated and uniform one.
No, there is no assumption that they are unbiased in the way they are formed. However, if you wanted to measure such a process and get an accurate picture, an unbiased sampling is the best way to go about finding the process's bias.
Again, your technical definition and your practical definition are indistinguishable. It seems like the definition you advocate alternates between "anything that is not determinate" and "systems described using statistics", between "practical" and "technical".
No more so than the terms technical and practical are indistinguishable. I've given examples above and elsewhere.

I'd like to move on to this idea of a continuum to see if we can find some common ground:
If your 'technical' definition is 'anything that is not determinate', then there is no way to define a spectrum. Where is the far end of the spectrum, the end opposite determinate, anchored? Under your definition all answers to that question are equally good. Is the purely random end of the spectrum a Poisson distribution? A Gaussian distribution? A chi-squared distribution?
Technically, something that is random can have any distribution that doesn't have a variance of 0. Deterministic systems can have similar distributions. What is important in the technical definition is that one can get different results from the same starting conditions.

If we use uniformly distributed and uncorrelated as the definition, then we have a clear and singular answer. We would say that a flat, horizontal, uniform distribution is the ideally random distribution of a single event. The distribution of an ideally deterministic single event would be a vertical Dirac delta distribution. One end is vertical, one end is horizontal; they fall on opposite ends of a spectrum. Anything in between is a deviation from ideal randomness and a deviation from ideal determinism.
For multiple events we would want to calculate the correlation over a sequence of samples (with value on the y axis and sample number on the x).
If the correlation is 1.0 the sequence is ideally deterministic; if the correlation is 0.0 the sequence is ideally random.
1. I get a clear answer as well.

2. How are you calculating the correlation, by the standard equation I assume?

3. Non-uniform distributions will generate correlations of 0 as well as uniform ones. They would just have to be independent of the sampling process. If you're numbering samples in the order you take them, it would be sufficient that the process be independent of time.

4. Deterministic systems can produce correlations of 0. If you periodically asked me for a number and I gave alternating 1s and 0s, you would get an incredibly low correlation to the sample number N. Compare that to a random (by your definition) string of bits.
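That claim is easy to check numerically; here is a short Python sketch (the sequence length is an arbitrary choice of mine):

```python
import math

# Deterministic alternating sequence 0,1,0,1,... correlated against its
# sample number: the Pearson correlation is essentially zero.
n = 1000
xs = list(range(n))
ys = [i % 2 for i in xs]

mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
sy = math.sqrt(sum((y - my) ** 2 for y in ys))
r = cov / (sx * sy)
print(r)  # magnitude well under 0.01, despite zero randomness
```

So zero correlation by itself cannot distinguish this perfectly predictable sequence from a random bit string; you would need a different statistic (e.g. correlation at lag 1) to tell them apart.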

Walt

P.S. If I have time before leaving for Vegas, I will give some examples of how heredity or self-correlation in a system can make it more unpredictable. Depending on how much time I have, I might go into those traits of evolution that make it distinctly different from some common random systems like gas pressure, fire detectors, etc.
 
Zosima, I know enough to recognise that a digital computer simulation of a chaotic system is not in itself a chaotic syatem.

me said:
The highlights will be: #1 only mathematical models can be truly chaotic. #2 analog, pencil-and-paper, and digital physical systems can all approximate chaotic systems equally well. #3 Any chaotic model applied to a natural system will, at best, be chaotic on some finite time/distance interval, after which it will cease to be chaotic. #4 Any chaotic model applied to a natural system will have a minimum sensitivity to perturbation.

The simplest way to define a system's sensitivity to initial conditions mathematically is via a Lyapunov exponent. Here's how it works:

Imagine a function:
[latex]$$U(t)$$[/latex]
We can define the error of the system as:
[latex]$$\delta U(t) = \parallel U_{1}(t) - U_{2}(t)\parallel$$[/latex]
Then we can write an expression for exponential error growth as follows:
[latex]$$\delta U(t) = \delta U(0) e^{\lambda t} \hspace{2 mm} for \hspace{2 mm} \lambda > 0$$[/latex]
Constant error:
[latex]$$\delta U(t) = \delta U(0) e^{\lambda t} \hspace{2 mm} for \hspace{2 mm} \lambda = 0$$[/latex]
Decreasing error(stability):
[latex]$$\delta U(t) = \delta U(0) e^{\lambda t} \hspace{2 mm} for \hspace{2 mm} \lambda < 0$$[/latex]

A couple of things become clear from this definition. #1 The error between two different solutions must be able to grow indefinitely; if the error growth stops, the system is no longer chaotic on that interval. #2 Whether a system is chaotic by some definition depends on how we define our norm.

For example, if we use a Euclidean norm to describe the error in a theoretical orbital system, it is clear from conservation of angular momentum that the greatest error growth we could have in the system is:
[latex]$$\delta U(t) = t \Sigma v$$[/latex]
If we're talking about a population system then its error growth follows from the exponential growth description:
[latex]$$\delta P(t) = | P(0)_{1} e^{r t} \hspace{2 mm} - \hspace{2 mm} P(0)_{2} e^{r t} |$$[/latex]
Since the difference between two exponentials is an exponential we can be sure it will meet the exponential sensitivity condition.

For some systems we might be concerned with the error growth within some bounded region. For those we might use a more probabilistic norm, but for our purposes it doesn't really matter.

From these definitions it is clear we can have a discrete system that meets this definition. For example:
[latex]$$U(t) = c4^{t}$$[/latex]
Defined over the non-negative integers.

The problem with chaos in physical systems is that they cannot exhibit growth that is exponential forever. Eventually the system will be bounded from above. For example, a population system will eventually run out of resources for growth; an orbital system (with a probabilistic norm) will either collapse into its center of mass or be thrown apart (due to the specific solution of the system, or tidal forces); an analog computational system will reach a maximal wavelength (constrained by energy); a digital computer will run out of memory; a pencil-and-paper system will run out of patience.

So a system can only be truly chaotic in a theoretical world where the system can run forever and has an infinite amount of resources. That's #1

Now #2 follows from our definition. Any system that exhibits growth and has a norm has the capacity to show this particular sensitivity. If that norm over two particular versions of the system grows exponentially, it meets our definition of exponential error growth. A digital system can use the number of megabytes it takes to store a particular state (or difference in states), a pen-and-paper system can use the time it takes to write down an answer, an analog system can use the amplitude of a waveform. The particular representation doesn't matter. Look up iterative maps for more examples of discrete computational chaotic systems.
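For a concrete iterative-map example of the kind just mentioned, here is a sketch in Python (the map, starting points, and step count are my own choices): the logistic map at r = 4, where two nearby trajectories diverge at an average rate of about ln 2 per step until the error saturates against the bounded interval [0, 1], illustrating both the exponential growth and the eventual bound.

```python
def logistic(x, r=4.0):
    # One step of the logistic map, a standard discrete chaotic system.
    return r * x * (1.0 - x)

x1, x2 = 0.400000, 0.400001  # two nearby initial conditions
errors = []
for _ in range(40):
    x1, x2 = logistic(x1), logistic(x2)
    errors.append(abs(x1 - x2))

# The error grows roughly like e^(lambda * t) with lambda = ln 2 for
# r = 4, until it saturates near the size of the bounded interval [0,1].
print(errors[0], errors[15], errors[-1])
```

A starting difference of one part in a million is amplified to order one within a few dozen steps, after which the bound on the state space caps any further error growth, exactly the finite-interval behavior described above.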

#3 follows from #1. A physical system will exhibit exponential error growth until it exhausts its resources, then it will cease to be chaotic. For example, a system of billiard balls loses energy to heat through the inelasticity of its collisions. Eventually the balls cease to move and the error becomes constant.

#4 For the condition of exponential error growth to be met:
[latex]$$\delta U(t) = \parallel U_{1}(t) - U_{2}(t)\parallel \neq 0$$[/latex]
Otherwise:
[latex]$$\delta U(t) = 0 e^{\lambda t} = 0$$[/latex]

This means that under our chosen norm we have to be able to resolve a difference. Our norm could be a stepwise norm...for example:
[latex]$$\delta U(t) = \lfloor U_{1} - U_{2} \rfloor $$[/latex]

In a physically bounded system with a probabilistic norm, the difference will always be bounded, at minimum, by the Planck length, but often at a level above that. Here are some examples:

In a bounded system with a probabilistic norm, this means the difference in state needs to be resolvable over the noise in the system (as the probability of the system is constant below this point). This can be thermal noise in a particulate system, or imperfections in the material of an analog system. In a genetic system this might be the genetic difference required to create a phenotypic change. In a computational system this might be the number of RS bits required to protect against cosmic rays. In a population system this might be the difference in predator fitness required to kill one more prey.

So here's the conclusion. Physical systems are only approximations of chaotic systems. Any physical modality can admit a chaotic model.
If you want to claim that a physical system is chaotic you need to: #1 provide a mathematical model of that system; #2 identify the norm under which it is chaotic; #3 identify the range over which it is expected to exhibit chaotic behavior; #4 identify the initial conditions and demonstrate that the difference between them is non-zero under this norm; #5 provide evidence that the model is a good fit. What you don't need to do is: #1 wave hands; #2 fail to provide a model that meets the criteria above; #3 misquote or post claims without warrants.
 
And for those less physics inclined, I did once provide this article about the difference between randomness and chaos: http://www.newscientist.com/article.ns?id=dn11858

Non-random behaviour

Brembs and colleagues analysed the resulting flight records using increasingly sophisticated models of random behaviour. Were the flies' decisions random, like the result of a coin flip? No. Did they fit a coin-flip model in which the probability of "heads" varied randomly? Again, no.

Nor could they be explained by a series of random inputs, or a series of random inputs combined in non-random ways.

Instead, the researchers found that the flies' behaviour bears the hallmark of chaos – a non-random process that is nevertheless unpredictable, like the weather. No one has yet been able to adequately explain how chaos arises.

Chaotic advantage

The chaotic control gives flies' flight a spontaneity that might be evolutionarily advantageous when searching for food, say, or when a female tries to avoid an unwanted male. And, unlike true randomness, evolution can fine-tune the level of this spontaneity, Brembs says.



Understanding the terms you are using, and making sure your audience understands them, is the first step of clear communication and useful modeling. But if obfuscation is your goal, use language however it pleases you to do so. Words can mean whatever you want them to mean if you have a "higher purpose".
 
Why isn't this an example of a random evolutionary event:

From New Scientist

Mostly, the patterns Lenski saw were similar in each separate population. All 12 evolved larger cells, for example, as well as faster growth rates on the glucose they were fed, and lower peak population densities.

But sometime around the 31,500th generation, something dramatic happened in just one of the populations – the bacteria suddenly acquired the ability to metabolise citrate, a second nutrient in their culture medium that E. coli normally cannot use.

Indeed, the inability to use citrate is one of the traits by which bacteriologists distinguish E. coli from other species. The citrate-using mutants increased in population size and diversity.



In the meantime, the experiment stands as proof that evolution does not always lead to the best possible outcome. Instead, a chance event can sometimes open evolutionary doors for one population that remain forever closed to other populations with different histories.

That argument, that a chance event can open (or close) evolutionary "doors", has been part of my argument, although I talked about "niches".


I notice that articulett is claiming to have put me on ignore again:

And jimbob... I've explained my point a thousand times. The nuts get to the top through probabilities, I supposed... but that IS irrelevant to understanding how they always seem to end up there. And I have you on ignore. Don't bother asking me leading questions you cannot comprehend the answer to anyhow. That is mijo-esque. I've been there; done that. You can have the last word. I refuse to let you inflict it on me, however.

(Your obfuscation regarding probabilities is fantastic, however, if you don't really want people to understand the basic science that ensures that the big nuts will settle on top... if, instead, you hope that they'll be open to the idea that there is a plot amongst nut sellers to make it look like there are more big nuts then there actually are. kudos.)


I say that Dawkins, in The Extended Phenotype, uses a probabilistic treatment of natural selection.

What is the alternative "nonprobabilistic" treatment?

That is obviously somehow a dishonest question.
 
For some life forms, they evolved to try something new or different when the old stuff isn't working...
 
I have given some examples of things that lie in both categories. Gas pressure, being due to the number and momentum of particles hitting the container, will vary as you measure it, but to such a small extent that it is only technically random, not practically so.
1. This is the central point of contention. I would agree that it is not wrong to call any individual particle in the gas 'technically random', or just 'random'. But the moments of the gas are neither practically nor technically random. As you say, the number and momentum of the particles is the cause of pressure. In an idealized closed container (no particle leakage, elastic collisions, and rigid walls) the number of particles and their momentum will be strictly constant (due to the fact that the container is closed, and due to conservation of momentum). So technically it is non-random and non-varying.
For example, pressure is:
P = N m v_rms^2 / (3V)
where
N is the number of particles (constant due to closure),
v_rms is the root-mean-square speed of the particles (constant due to conservation of momentum and energy),
m is the particle mass (constant due to closure),
V is the volume (a system-wide constant).
I will note that practically there will be variation. Practically, the gas will leak energy/momentum into the walls of the container, and it will radiate energy out of the system from there. This will cause a steady decline in temperature.

2. It sounds like you're saying that the system is 'technically random' because there may be some variation in the insignificant digits of the measured moments of a gas. This is equivalent to saying everything is technically random. There is no physical entity that can be measured to infinite precision, and there will always be 'quantum variation'. Thus the definition you put forth is devoid of meaning. (A definition that applies to everything equally fails to communicate information; see Shannon's information entropy if you are confused on this point.)

3. Also, no measurement will be to infinite precision. Discussions of any physical quantity need to be put in the context of some experimental method. If our instrument cannot measure down to an accuracy at which we see any variation, then it is fair to call that system non-random and deterministic, insofar as our experiment is concerned. If we can measure a small variation, we need to be careful how we phrase our statements about the system. We might say the system is not completely deterministic, that it has no variation in the significant digits, and that it has variation in the insignificant digits. If there is no pattern in this variation (uniform and uncorrelated), we might say there is random variation in the insignificant digits of our measurement. It would be folly to call the whole system random, or even variable, if the significant digits of our measurement are constant.

Compare that to a simple random number generator, where a noise source is compared to a reference voltage, and depending on the result a logical high ("1") or logical low ("0") is output. The resulting string of 1s and 0s is random in both senses, not just technically but also in practice.

Practically yes; technically, maybe. Actually, at one time it was not uncommon to find systematic variation in the physical measurements used by random number generators that were thought secure. That is part of the reason software moved to pseudo-random number generators: the noise-based physical sources ended up being less random than the pseudo-random ones. Admittedly, technology has improved, and cryptographic necessity has brought back the importance of random numbers generated from physical systems these days.

I want to get the definition of random out of the way, since it is useless to discuss how it applies to evolution as long as we are working from different definitions. Second, I haven't appointed myself arbiter; the examples I gave above aren't ones I supplied.
What I refer to is your tendency to shift to the claim that "well, whatever points you're making, the system is still technically random". The points I've been making apply to both your definitions, so you can't logically retreat to another, nearly identical definition that you name more authoritatively.

Not true; random sampling is different from simple random sampling. Wikipedia doesn't do a bad job of explaining it. From the end of the first paragraph...

This process and technique is known as Simple Random Sampling, and should not be confused with Random Sampling.

And you can look at both definitions at wikipedia, and note that one of the examples they give of random sample is a stratified sample.

Actually, you can find "Gaussian random distribution" in textbooks and on the web. If you search Google for "gaussian random" you will find articles, several technical, on Gaussian random distributions or on generating Gaussian random numbers.

The point I'm trying to make is not that it is wrong to have random in the name, but that its presence in the name doesn't add to your claim. For example, you seem to make a big deal of the fact that the word 'random' is in 'Gaussian random distribution'. As you so aptly mention further down in your post, it is not the distribution that is random. In fact, a Gaussian distribution is entirely deterministic.
Gpdf(x) = (1/sqrt(2*pi)) * exp(-x^2/2)
(This is with a mean of 0 and a variance of 1)
The fact is that a large number of measurements may often be modeled with a Gaussian distribution due to the central limit theorem. We might call this a population or a sample distribution, depending on context.
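The central-limit-theorem point can be demonstrated in a few lines of Python (the sample sizes here are arbitrary choices of mine): sums of uniform draws, each anything but Gaussian individually, produce a distribution with the predicted Gaussian mean, spread, and roughly the Gaussian's 68% one-sigma coverage.

```python
import random
import statistics

random.seed(5)

# Sum 48 Uniform(0,1) draws; by the central limit theorem the sums are
# approximately Gaussian with mean 24 and variance 48/12 = 4 (sd = 2).
sums = [sum(random.random() for _ in range(48)) for _ in range(20000)]

m = statistics.mean(sums)
s = statistics.stdev(sums)
# About 68% of a Gaussian falls within one standard deviation of the mean.
inside = sum(1 for v in sums if abs(v - m) <= s) / len(sums)
print(m, s, inside)
```

The summing, not any randomness "in the distribution", is what makes the result Gaussian, which is the distinction being argued here.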

Stats books talk about random numbers having non-uniform distributions; so do books on stochastic processes; and my computer science textbook on numerical methods discusses generating non-uniformly distributed random numbers using the transform and acceptance-rejection methods.

Yet a random number generator that isn't uniformly distributed is considered non-random in computer science. A cryptographic random number generator that isn't uniformly distributed is no random number generator at all. Stats books talk about 'random variables' having non-uniform distributions, and we've already mentioned that they consider distributions like the Dirac delta to be distributions over random variables as well.

If we want this conversation to go on forever, we can both mine our textbooks and the scientific literature for examples that support our respective cases. The bottom line is that usage of the term isn't consistent from author to author, researcher to researcher, field to field. So which usage should we choose? I would assert strongly that the definition used by practitioners in the field of evolution is most applicable, and we've already heard the top minds in the field denying randomness in evolution.

2. As I pointed out, random sample and simple random sample do not mean the same thing.
Yes, this was addressed above. I'm not sure how it denies the fact that correlations, skews, and similar deviations are departures from randomness.

3. Sure, if there is no variance in the population being sampled, then the outcome of sampling will be determined.
Exactly the point. For example, in your last post you mention 'systematic random sampling', but insofar as systematic sampling is a deterministic process, it is non-random. And you've just agreed that the object being sampled is not necessarily random either.

So why is the word 'random' in the name at all? What does it mean? It describes the capacity of the technique to introduce neither bias nor correlation into the sampled distribution. If the samples themselves are uniformly distributed and uncorrelated (in the appropriate variable space of the problem), they will do exactly that. In that way a random sampling technique can form an accurate impression of the object being studied. This corresponds well to the definition I put forth, whereas the definition of random as 'non-zero variance' doesn't seem to explain this usage at all.

Exactly which technique should be used depends a lot on the object of study. We might use a systematic technique if we know the objects of interest are in an unordered (randomly ordered) list. We might use stratified sampling when we believe there is a numerical or representational bias between groups. (For example, in a survey by phone, certain groups may be more likely to have a telephone; if we want to eliminate this skew, we would do well to sample the different groups separately and then recombine appropriately.)
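The phone-survey scenario can be sketched like this (the groups, sizes, and answers are all hypothetical toy numbers): sample each stratum separately, then recombine the per-stratum estimates weighted by stratum size.

```python
import random

def stratified_mean(strata, per_stratum, seed=1):
    """Estimate a population mean by sampling each stratum separately,
    then recombining the estimates weighted by stratum size."""
    rng = random.Random(seed)
    total = sum(len(s) for s in strata)
    estimate = 0.0
    for stratum in strata:
        picks = rng.sample(stratum, min(per_stratum, len(stratum)))
        estimate += (len(stratum) / total) * (sum(picks) / len(picks))
    return estimate

# Hypothetical survey: a large group and a small group with different answers.
phone_owners = [1.0] * 900   # answers cluster near 1.0
no_phone     = [0.0] * 100   # answers cluster near 0.0
est = stratified_mean([phone_owners, no_phone], per_stratum=30)
```

Because each group is sampled on its own and then reweighted, the small group cannot be drowned out by the sampling process, which is the "eliminate this skew" point above.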

4. Of course I didn't use the word random there. It's stupid to use a word whose definition we are discussing in a position that would lead to ambiguity. And since random doesn't mean uniform, as pointed out previously, it would be downright wrong. There was no manipulating; random was simply the wrong word for the occasion.
I thought that interesting. A uniform and uncorrelated number is exactly what you were talking about; under my definition there would be no ambiguity, and it certainly would be called random in any technical description. It only becomes ambiguous when the advocated definition is so broad that the original word is drained of meaning.

5. I don't think you said what you wanted to there, as simple-random sample and random sample are not the same.
I do understand the difference. 'Simple random sample' versus 'all other types of random sampling' is a false dichotomy. If you didn't follow my previous explanation, the one above may be clearer. The basic idea is that the technique ought to be selected to eliminate bias and correlation from the sampling process itself. In practice, other variables will be optimized as well when deciding on a technique: cost, speed, practicality, error. A systematic sample can be much easier to implement when its use is not problematic.


Poisson distributions can be population distributions or random distributions.
I don't think you are using these terms quite correctly, or at least you are twisting 'random distribution' to put it into opposition with 'population distribution' to support your definition. You were much clearer in the previous post, when you distinguished between a 'probability distribution' (or 'population distribution') and a 'sample distribution'.

But I get that a population and sample can both have a Poisson distribution.


I don't agree with Mijo on that, though I haven't followed that line of argument much.

When one defines a probability function over the reals, the function is the probability density function, and you get the probability of the result lying in any interval by integrating the probability density over that interval. If the density function is the Dirac delta δ(x − k), then any interval not including k has probability 0, and any interval including k has probability 1. That seems about as certain as can be.
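A quick numeric sketch of this claim (the function names and numbers here are mine): for a Dirac-delta density at k the interval probability reduces to an indicator, while for a smooth density like the standard Gaussian it comes from integrating the density, done here through the closed-form CDF in terms of erf.

```python
import math

def delta_prob(k, a, b):
    """P(X in [a, b]) when the density is a Dirac delta at k:
    the integral picks up all the mass iff k lies inside the interval."""
    return 1.0 if a <= k <= b else 0.0

def gaussian_cdf(x):
    """Standard Gaussian CDF, in closed form via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_prob(a, b):
    """P(X in [a, b]) for a standard Gaussian: difference the CDF,
    i.e. integrate the density over the interval."""
    return gaussian_cdf(b) - gaussian_cdf(a)
```

For the delta case the answer is always exactly 0 or 1, which is the "as certain as can be" point: a perfectly legitimate distribution with no uncertainty in it at all.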

Hey now, you're arguing my points: that the moments (or partial moments) of a system are constant and regular. This integral is exactly what we compute when we calculate the characteristic moments of a gas (density, velocity, temperature, pressure, etc.), and they are exact.

The point is that if you are looking at any particular realization of a sample distribution over the reals, you cannot necessarily be sure that the value will be the expected one, regardless of the distribution; you can only be almost sure. What this means is that if we use a non-zero-variance definition of random, there is not even a theoretical way to be certain a system is non-random. Although I would agree that this sort of digression is a bit of a red herring.

The distribution of thermal noise is approximately gaussian. If you sample the voltage on a resistor, the probability distribution of that single sample is gaussian.
Yep, that's noise all right. This is just a rehash of the gas issue discussed above. The voltage is a function of temperature, which follows from the random motion of individual molecules. It is governed by the central limit theorem and is not random (remember Ohm's law). The single sample would only be random if it didn't tend toward that limit (as per my stated definition of random).

No, there is no assumption that they are unbiased in the way they are formed. However, if you want to measure such a process and get an accurate picture, unbiased sampling is the best way to go about finding the process's bias.
This is covered in detail above.

No more so than the terms technical and practical are indistinguishable. I've given examples above and elsewhere.
You distinguish them by saying that 'practically random' has more variance than 'technically random'. That is the only commonality in your distinction, and it is just another way of saying "I reserve the right to draw the distinction wherever it suits my purposes."


Technically, something that is random can have any distribution that doesn't have a variance of 0. Deterministic systems can have similar distributions. What is important in the technical definition is that one can get different results for the same starting conditions.
Two bad, meaningless definitions. I've addressed them both, but I've hit the first a little more explicitly, so I'm just going to address the second one here.

1. The terms 'different' and 'same' are ambiguous. Since no two starting conditions are ever exactly the same, there is no identical repeatability and no way to tell whether a system is random under this definition. The contrary point holds for the result: all results are necessarily different. You couldn't even design an experiment to test this in theory, since you can never measure to infinite precision whether the starts are the same and the ends are different.

2. This claims that random is synonymous with non-deterministic, and clearly they are not. We can construct a thought experiment that proves the point. Imagine a phenomenon. We're in thought-experiment land, so let's imagine we can control for every variable and actually get identical conditions. Our first measurement will be 1 or 0 with some probability, and every subsequent measurement in that session will be the negation of the previous value:
[latex]$$System: S(t) $$[/latex]
[latex]$$p(S(0)=1) = 0.5$$[/latex]
[latex]$$S(t-1) = 0 \Rightarrow S(t) = 1$$[/latex]
[latex]$$S(t-1) = 1 \Rightarrow S(t) = 0$$[/latex]
So is this system random or deterministic? We've stipulated that we control all variables, so conditions are identical, yet our first measurement may be 1 or it may be 0; we don't know. Yet after we measure that first value, the system behaves in a completely reliable way. The answer seems to be that neither label is entirely adequate, and we would more accurately call this a mixed system, or a non-deterministic (but also non-random) system. You might argue that this thought experiment is unrealistic, that it is impossible because it postulates an uncaused event and because the observer divides the sequence of measurements into sessions. But this is not too different from a quantum experiment testing Bell's inequality, where the observer collapses the wavefunction, so an initial probabilistic measurement determines the system. It necessarily excludes this same-start/different-result definition.
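The thought experiment above is easy to simulate (a sketch; the session mechanics follow the stipulation in the text): only the first measurement involves chance, and everything after it is forced by the negation rule.

```python
import random

def run_session(steps, seed=None):
    """The mixed system S(t): the first measurement is 0 or 1 with
    probability 1/2; every later measurement negates the previous one."""
    rng = random.Random(seed)
    s = 1 if rng.random() < 0.5 else 0
    trace = [s]
    for _ in range(steps - 1):
        s = 1 - s          # S(t) is the negation of S(t-1)
        trace.append(s)
    return trace

trace = run_session(10, seed=42)
```

Different seeds stand in for different sessions: across sessions the first value is unpredictable, but within a session every value after the first is completely determined.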

1. I get a clear answer as well.
I think you miss my point. What I'm saying is that there are tons of systems that seem to fall between random and determinate. My definition creates a spectrum (along two dimensions) by placing uniform-and-uncorrelated at one extreme and singular (Dirac) and correlated at the other. This organizes our conceptual space over a continuous field (R×R) and allows us to situate the intermediate examples in the appropriate locations in that space. This is an advantage. Your definition has a certain binary quality, such that it is unable to situate intermediate cases in a way that agrees with our intuitions. This is a disadvantage.

2. How are you calculating the correlation, by the standard equation I assume?
No, I'm talking about lack of correlation in a more fundamental sense; in all senses, really. For example, there is a certain class of pseudo-random number generators that seemed to show lack of correlation and uniformity; in other words, they seemed to produce random numbers. What was later discovered was that if the numbers were plotted in a higher-dimensional space, they formed clear and identifiable bands. So I don't think any single definition of correlation is sufficient.
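The banding story matches the best-known real case, IBM's RANDU generator, whose consecutive triples famously fall on just fifteen planes in 3-D. The generator below is the standard RANDU recurrence; the demonstration of the hidden linear relation is my own sketch.

```python
def randu(seed, n):
    """The classic RANDU generator: x_{k+1} = 65539 * x_k mod 2^31.
    Its outputs look uniform in one dimension, yet consecutive triples
    satisfy x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2^31), so in 3-D they all
    lie on a handful of planes."""
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % (2 ** 31)
        xs.append(x)
    return xs

xs = randu(seed=1, n=1000)
# Every consecutive triple satisfies the linear relation exactly.
flat = all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % (2 ** 31) == 0
           for k in range(len(xs) - 2))
```

This is why no single correlation test suffices: RANDU passes naive pairwise checks while hiding a perfect three-term dependency.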

3. Non-uniform distributions will generate correlations of 0 just as uniform ones will; they would just have to be independent of the sampling process. If you number samples in the order you take them, it would be sufficient that the process be independent of time.
If (un)correlation followed from uniformity, or uniformity from (un)correlation, I wouldn't require two constraints to describe randomness; I would just mention the more primary one. Only both together create random.

4. Deterministic systems can produce correlations of 0. If you periodically asked me for a number and I gave alternating 1s and 0s, you would get an incredibly low correlation to the sample index N. Compare that to a random (your definition) string of bits.

See my example above: any single correlation algorithm is inadequate.
If you plot your alternating 1s and 0s against an appropriate choice of basis, a correlation will show.
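For the alternating 1s and 0s, the structure appears as soon as you correlate the sequence with itself at a lag, rather than against the sample index. A sketch using the textbook sample-autocorrelation formula:

```python
def autocorr(xs, lag):
    """Sample autocorrelation of a sequence with itself at a given lag."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

bits = [i % 2 for i in range(1000)]  # 0, 1, 0, 1, ...
r1 = autocorr(bits, 1)  # -1.0: each value perfectly anti-predicts the next
r2 = autocorr(bits, 2)  # +1.0: the sequence repeats with period 2
```

The lag-1 and lag-2 autocorrelations are both at their extremes, exposing the determinism that a correlation against the index alone would miss.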

Just to conclude, I'm getting kind of bored with this discussion.
In the interests of expediency, I'd like to work toward a compromise. Since a definition is based on usage, and common usage is based on a consensus understanding of language, as long as any one of us (or Mijo) is willing to assert their definition regardless of evidence to the contrary, we're never going to be done. So it might be best to start looking for a compromise definition and stop asserting the universal truth of our own. Since usage varies (as I mention above), it seems silly to insist there is one correct definition. The definition is whatever people agree it should be, and different groups will have come to different agreements. So what definition, practical, technical, or otherwise, is appropriate to this discussion?
 
For some life forms, they evolved to try something new or different when the old stuff isn't working...

I take issue with virtually every word in that post.

Firstly:

Organisms do not evolve "to" do anything. Nor do they "try" things.

Secondly:

If the "old stuff" wasn't working, they would become extinct.

A random variation increased the reproductive success of holders of this variation, and so the new variation spread. If the holders of the variation (mutation) are still competing with organisms without this mutation, then their success will reduce the reproductive success of organisms without the mutation.
 
You're getting close to the central issue that people have been bickering over. For the moment I'll suspend all the practical objections that I might make about pseudo-random number generators, and we'll assume that your dot class is 'random'. I've got three questions.

1. When you make some directions 'better' than others, you are skewing the distribution of the random numbers your program produces. If the random number generator naturally produced numbers with that distribution, you would probably think there was something wrong with it, yes?

Sorry about that. I've come up with a slightly better one on my calculator. Essentially: start with an initial random behavior, and then (without affecting the randomness or semi-randomness) fuzzily show preference; that might mean doing a re-roll or something. The end result may not be fully random (and will become more deterministic over time if you get it right), but the initial behaviors are. Or you can have previous behaviors give rise to new chaotic (I'm tired of saying random) behavior, such as evolving feet for better survival while leaving drunks the ability to wander around aimlessly.
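A minimal 1-D sketch of that "fuzzy preference" idea (the bias parameter and all the numbers here are hypothetical, not the calculator program): every individual step is random, but up-steps are slightly preferred, so the swarm as a whole drifts predictably.

```python
import random

def drift_dot(steps, bias=0.6, seed=7):
    """A 1-D 'dot' taking +/-1 steps. Each step is random, but the
    preferred direction (+1) is chosen with probability bias > 0.5,
    so the dot drifts predictably even though each step is chancy."""
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        x += 1 if rng.random() < bias else -1
    return x

# Average final position of 50 dots: close to steps * (2*bias - 1) = 200.
final = sum(drift_dot(1000, seed=s) for s in range(50)) / 50
```

Watched frame by frame any one dot is all over the place; averaged over the swarm, "random becomes predictable", which is the point of the exercise.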

I found a cool applet that had four simple "amoebas" wandering around. They had four genetic traits: speed, size, angular speed, and stinger length. The rule was that if an amoeba was stung, it died and was replaced by a copy of the killer with tiny random mutations. The whole system evolves, and differently each time I ran it: predominately going the route of wide ones with short, slow stingers, or of small ones with fast stingers held far out. Then again, you may point out that natural ecosystems are vastly more complex, and that cells have many more traits and aren't simple 2-D geometric primitives.
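A stripped-down sketch of that applet's dynamics, with a single hypothetical trait (the rules and numbers are mine, not the applet's): random mutation supplies the variation, and the fight supplies the non-random selection.

```python
import random

def evolve(pop_size=20, rounds=2000, seed=3):
    """Toy mutation-plus-selection loop: each individual is one trait
    value (stinger length, say). Each round a random pair fights; the
    larger trait wins, and the loser is replaced by a mutated copy of
    the winner. Selection steadily drives the trait upward."""
    rng = random.Random(seed)
    pop = [0.0] * pop_size
    for _ in range(rounds):
        i, j = rng.sample(range(pop_size), 2)
        winner, loser = (i, j) if pop[i] >= pop[j] else (j, i)
        pop[loser] = pop[winner] + rng.gauss(0.0, 0.1)  # copy + tiny mutation
    return pop

pop = evolve()
avg = sum(pop) / len(pop)  # drifts well above the starting value of 0
```

The mutations are symmetric around zero, so any directionality in the outcome comes entirely from selection, not from the random component.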

2. What if you used an exactly specified set of floating point numbers? For some sets of floating point numbers, you will get rounding error in the insignificant digits when multiplying. Do you consider this random?

My program used only integers (TI-83+ SE). Rounding may, in some instances, appear random; it depends on the implementation. Some physics engines can appear random due to poorly handled rounding. I don't consider it random, though if I experienced it through a black-box implementation of a program, it might seem random.
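A two-line illustration of why rounding error looks noisy but is not random: the classic 0.1 + 0.2 case. The error is real, but it is perfectly reproducible.

```python
a = 0.1 + 0.2       # in binary floating point, not exactly 0.3
exact = (a == 0.3)  # False: the error hides in the last few bits
error = abs(a - 0.3)  # tiny, but not zero

b = 0.1 + 0.2
reproducible = (a == b)  # True: the 'wrong' answer is identical every run
```

That reproducibility is the giveaway: a black-box observer might mistake the error for noise, but rerunning the computation gives the same wrong digits every time.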

3. For this third one, let us presume that your dot is, in fact, random. So you have a system (or program) made up of random dots. Would you also call the overall behavior of your program random, despite the fact that those dots (plural) reliably follow the mouse?

No... yes... no-ish. It's random-ish, but normally in this program it will pretty much settle down and buzz around the mouse. Overall, it starts here and ends up there, and it follows rules (go towards X) regardless of how. When I watched a dot frame by frame (a little under 0.5 fps with ten dots), it was all over the place. After a while, it did get where I wanted it to go (top left). Each direction was random (the calculator is good with whatever seed it uses), but if the direction wasn't up-left, the dot did a re-roll and settled for that. A light breeze, but a breeze nonetheless. For me, there was definite movement within a minute.

All of these questions are very much central to the discussion in this thread.

Incidentally, this dot program you describe sounds an awful lot like a program for the Mac dashboard.....

Actually, this was just a curiosity I played around with when I first started learning how to code. A purely random coincidence. And this isn't the only program of this nature.

Hey, I was bored. Roaming dots are fun.
 
Zosima, I know enough to recognise that a digital computer simulation of a chaotic system is not in itself a chaotic system.
OOOOkay, um, what is a chaotic simulation then, like a strange attractor?

Wow.
And that an analogue computer simulation is.

The digital simulation contains numbers that have been translated into high or low voltages on transistors; patterns of these voltages are then altered to duplicate the mathematical operations being performed on them. This is no more a chaotic physical system than a pen-and-paper calculation of these numbers.

And what produces bifurcation and the Mandelbrot set? Pointless semantics.

Start with the weather random number generator and you will get chaos.
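The bifurcation remark can be made concrete with the logistic map, the standard toy chaotic system (this sketch is mine, not anything from the thread): two nearly identical starting points diverge completely, yet the rule itself is fully deterministic and reruns bit-for-bit identically.

```python
def logistic(x0, r, steps):
    """Iterate the logistic map x -> r*x*(1-x), whose bifurcation
    diagram is the one alluded to above. At r = 4 it is chaotic
    but stays confined to [0, 1]."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two starts differing by one part in ten million; the tiny difference
# is amplified exponentially with each iteration.
a = logistic(0.4000000, 4.0, 50)
b = logistic(0.4000001, 4.0, 50)
gap = abs(a - b)
```

Which is exactly the semantic tangle here: the digital simulation is deterministic and repeatable, yet it faithfully reproduces the sensitive dependence that makes the underlying system chaotic.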
 
I take issue with virtually every word in that post.

Firstly:

Organisms do not evolve "to" do anything. Nor do they "try" things.

Secondly:

If the "old stuff" wasn't working, they would become extinct.

A random variation increased the reproductive success of holders of this variation, and so the new variation spread. If the holders of the variation (mutation) are still competing with organisms without this mutation, then their success will reduce the reproductive success of organisms without the mutation.

I agree that many organisms do not evolve 'to do things'.
I take exception to many of your words.

Um, variation need not be random, but please continue to abuse the term 'variation'. There are means of variation that are not 'random', but whatever.
 
When I say an organism "evolved to" do something, most people understand that those with the propensity in their genes preferentially survived and reproduced. When we say ducks' corkscrew-shaped members evolved to fit the reproductive tracts of females, discouraging rape, I trust that everyone who actually understands evolution understands that these are the environmental forces which "naturally selected" the genes coding for these traits.

I don't expect a creationist to understand or agree with anything I say. They can't. It spoils what they want to believe about their self-appointed expertise in whatever it is they imagine themselves to be experts in.
 
One of the problems with the language of evolutionary biologists is that they tend to say things like "evolved in order to..." or "calculates resources for optimal whatever...", but it is important to understand that they do not mean it literally!

Biologists do not literally mean that birds whip out calculators to figure out how much yolk to distribute into how many eggs to achieve optimal survival of their progeny, but rather that genes evolved to hone the ability, "instinctually", within the bird's body.

However, biologists will still use such verbiage as a shortcut when talking among other biologists (and sometimes to the public, in the mistaken belief that they will "get it"), because it's just easier to say "birds calculate egg yolk distribution" than to ramble on about what actually happens. Other biologists will "get it".
 
I think that no matter how you define random, there are means of variation that aren't.

But I shan't play the "defining random" game any more. I quoted definitions from peer-reviewed sources; those claiming evolution is random have not. They keep claiming their vague definition is the "technically correct" one, whatever that means... I guess, by their own imagined expert authority, anything that contains any randomness IS random, and/or anything related to probability, or containing any part related to probability, is random.

On my planet, that makes 'random' a completely useless word, unless your goal is to obfuscate understanding of evolution so as not to convey how natural selection works.
 
