
Randomness in Evolution: Valid and Invalid Usage

Since so many people are so sure that I don't understand probability theory, I would like them to explain exactly where I am misunderstanding it, particularly with respect to the relationship between predictability and randomness.

Here's my response:

zosima said:
From Wiktionary:
Statistical Randomness:
"the property of a numeric sequence of containing no recognizable patterns or regularities; exemplified in the results of an ideal die roll."

Or as I mentioned many posts ago: uncorrelated and uniformly distributed.

A smoke detector is a non-random system that takes a random input (or, more accurately, random inputs). For all intents and purposes it is deterministic.

As there are recognizable patterns and regularities, it is clear that evolution, even individual selections within evolution, is non-random. The more interesting argument that some people have been having with Jimbob is whether evolution is deterministic or not, which I think has fair arguments on both sides.

The mistake you seem to be making, over and over and over again, is to assume that because you think evolution is non-deterministic, it must therefore be random.

This is a common mistake made in introductory logic classes.
If X->~Y this does not mean that ~X->Y
Above: Let X mean deterministic and Y mean random.

If a system is deterministic it is not random, but if a system is not deterministic it is not necessarily random.

(more commonly this fallacy is stated X->Y does not imply that ~X->~Y)

It identifies the fallacy and the correct definition of random.

Any process that has a non-uniform probability distribution will exhibit statistical regularities, which makes it non-random. Continuous probability distributions that are not a Dirac delta will not be deterministic. A system can be both non-uniform and not a Dirac delta, and thus neither random nor deterministic. Your mistake is to call a non-deterministic system 'random'.
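To put numbers on those three cases (notation mine, added for illustration, not from the post above): a deterministic variable concentrates all probability at one point, [latex]$P(X=x_0)=1$[/latex] (a Dirac delta); a random one, on this definition, is uniform and uncorrelated, e.g. [latex]$P(X=k)=\frac{1}{6}$[/latex] for each face of a fair die; and something like a biased coin with [latex]$P(X=1)=0.7$[/latex], [latex]$P(X=0)=0.3$[/latex] is neither deterministic nor random.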

The interesting argument with evolution isn't whether it is random or deterministic. Strictly speaking, it cannot be random, because the fact that complex structures develop constitutes a regularity that prevents a fit to the definition of randomness. Also strictly speaking, the only way it can be deterministic is if the universe is deterministic, an issue that is not specific to evolution.

The interesting question is how much the fundamental variables have to be changed to create a significant difference on some timescale. Will a butterfly flapping its wings change the entire fate of a species? Will evolution still converge to the same result even if we pound the planet with meteors?

Arguing over whether evolution is random is a boring semantic argument anyway.
 
zosima-

The fatal flaw in your argument is that systems that have uniform probabilities also exhibit statistical regularity. I have pointed this out to you before:


If I were wrong about randomness and predictability, larger sample sizes would not, in most cases, increase the power of statistical tests. Also, you would not be able to say that each number on a standard, fair, six-sided die would come up roughly one sixth of the time, or that the arithmetic mean of a series of rolls would be approximately 3.5. Moreover, you would not be able to say that, as the number of rolls increased, the proportion of rolls showing each number would approach one sixth, or that the arithmetic mean of the series would approach 3.5.

Why do you refuse to address it as a counterexample to your central premise?
 
There really aren't such things as random processes... stochastic processes which contain random inputs are sometimes called "random processes"-- but it's not the process itself that is random. The fact that something is a "process" means that it has direction... If you call something random just because parts are random... then every process is a random process-- childbirth, film developing, puberty, reproduction, making lasagna, getting a diploma, seatbelt studies, the mechanisms of fire alarm activation--

"Random process" is about as useful as a "variable process" or an "upside-down process" or a "magical process"-- it doesn't really mean anything, and when it comes to reproduction and exponential growth of the best reproducers, it is completely uninformative and only used by creationists.

Sure, randomness can lead to order... but the order in evolution and the appearance of design doesn't come from randomness (as stated repeatedly by actual experts and those who teach the subject)-- that's more like spiral galaxies and spheres-- the order in evolution comes from natural selection. And nobody but a creationist obfuscates understanding of natural selection by insisting on calling it "random". It's muddled and laughable. It makes you sound like a creationist. It has no meaning and confuses more than it clarifies.
There is a lot of misinformation in that post. The inputs are not what make a process stochastic, but the nature of the internal workings. That is, given the same input (whether that input was generated randomly or not), a process that always goes to the same state(s) is a deterministic one. A process whose future states are indeterminate given the inputs is stochastic.

Whether the output or state of the system is, or appears, random depends on the nature of the system. A simple system like a fire detector will be, for all intents and purposes, deterministic, even if it is technically a random system. A simple system like a random number generator, where noise is measured and a stream of ones and zeros is produced from that noise, is random.

There are numerous qualities of evolution that make it very distinct from the fire alarm mechanism. They've been discussed at length in other threads.

Walt
 
zosima said:
It identifies the fallacy and the correct definition of random.

Any process that has a non-uniform probability distribution will exhibit statistical regularities, which makes it non-random. Continuous probability distributions that are not a Dirac delta will not be deterministic. A system can be both non-uniform and not a Dirac delta, and thus neither random nor deterministic. Your mistake is to call a non-deterministic system 'random'.

The interesting argument with evolution isn't whether it is random or deterministic. Strictly speaking, it cannot be random, because the fact that complex structures develop constitutes a regularity that prevents a fit to the definition of randomness. Also strictly speaking, the only way it can be deterministic is if the universe is deterministic, an issue that is not specific to evolution.

The interesting question is how much the fundamental variables have to be changed to create a significant difference on some timescale. Will a butterfly flapping its wings change the entire fate of a species? Will evolution still converge to the same result even if we pound the planet with meteors?

Arguing over whether evolution is random is a boring semantic argument anyway.
It actually gives a definition of a random sequence, and an incorrect one at that. For one, a sequence generated by an ideal die may, and probably will, have some bias, and there is a possibility (though unlikely for very long sequences) that it will have discernible patterns.

People in everyday use do not exclusively use 'random' for equiprobable things. People in the physical sciences use it for other probability distributions as well, as do statisticians and mathematicians.

Walt
 
zosima-

The fatal flaw in your argument is that systems that have uniform probabilities also exhibit statistical regularity.

The Wiktionary definition cites a die roll, which is uniformly distributed and uncorrelated. In the definition I mention, not only must the distribution be uniform, it must also be uncorrelated.

Is the type of statistical regularity you're talking about what you state in the example below? If yes, I'll address it, but if it is something else, please do illuminate me.

Why do you refuse to address it as a counterexample to your central premise?

I told you why I wouldn't address it when you first posted it: because it was off-topic in the thread you posted it in. Of course, you made me ask you to post it in the right spot 3-4 times before you got the point, but... I'm pretty used to that by now. I'll address your point now that you've pasted it into the correct thread. That said, I'm surprised I have to explain such a basic point to someone who professes so much knowledge about statistics.

If I roll a single die, all outcomes are equally likely. That single roll is random.
But the properties of that variable are not the same as the properties of other variables. For example, the expected value (the probability-weighted mean) is not random; in fact, it is a constant. The probability 1/6 is also constant and non-random. You are conflating regularity in the values of the statistics with what those statistics tell us about potential regularities in the system. Notice that the official definition above cites an individual die roll, not a series of rolls, which is what statistical regularities require. This is the very crux of the point that has been made in this thread. A system made out of random components can have an aggregate behavior that is not random. That is the point of the law of large numbers.
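A minimal simulation sketch of that point (my own illustration in Python, not part of the original exchange): each individual roll stays unpredictable, while the running mean and the per-face proportions settle toward the constants 3.5 and 1/6.

[code]
import random

random.seed(1)  # reproducible illustration

counts = {face: 0 for face in range(1, 7)}
running_sum = 0

for n in range(1, 1_000_001):
    face = random.randint(1, 6)      # the individual roll: random
    counts[face] += 1
    running_sum += face
    if n in (100, 10_000, 1_000_000):
        props = ", ".join(f"{counts[f] / n:.3f}" for f in range(1, 7))
        print(f"n={n:>9}  mean={running_sum / n:.4f}  proportions: {props}")

# The mean converges to 3.5 and every proportion to 1/6 ~ 0.167:
# regular aggregate statistics built out of random components.
[/code]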

A good way of thinking about it is this:

Ask yourself whether it is possible to come up with a strategy for a 'game' that will win you money in the long run. If you can do better than breaking even, then it is not random (assuming the odds are fair).

If the game is to guess the number when a die is rolled, and a win pays out 5:1, there is no strategy I can pick that will ever do better than breaking even. This is a random game. Guessing 3.5 or 1/6 isn't going to get you anywhere; in fact, these numbers aren't even part of the game.

If the game is to guess the sum of two dice and it pays out 11:1, then the strategy of guessing 7 will do better than even.
If the game is to guess the mean within +/- 0.5 after 10,000 rolls, I'd do well to guess 3.5 (with some appropriate payout; the pattern is an n-1:1 payout, where n is the number of outcomes).

Now, to see why the constraint of being uncorrelated and uniform is imposed, imagine betting on the outcome of every 6th roll in a series. If it is a normal die (uncorrelated and uniform), I can't expect to do any better than even in this game, regardless of strategy. If it is a weighted die (non-uniform), say with a 50% chance of a 1 and a 10% chance for each other outcome, then (with a 5:1 payout) the strategy of always playing 1 can expect to win money.

Here's another system: let's say we have a die that always rolls 1 the first time you roll it, 2 the second, 3 the third, 4 the fourth, 5 the fifth, 6 the sixth, 1 the seventh...
This system has a uniform distribution, but the correlation in the system makes it non-random. If I'm playing the game where I get to bet on every 6th roll, I'll do well to always play 6 and can expect to win money.

Each of these games is different. The fact that they all involve a die does not make them all random, but they are all probabilistic. The reason I can't win money betting on individual values of the ideal die is that it is random: there is no strategy that is better than any other. The reason I can win money when betting on the mean is that the mean is non-random. But no matter how much money I win betting on the mean, I'll never be able to make money betting on the ideal die.
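A sketch of the four games above as expected-value calculations (my own illustration; one unit staked per bet, payouts as stated):

[code]
from fractions import Fraction as F

# Game 1: guess a single fair-die roll, 5:1 payout. Every guess has the same EV.
print("fair die, any guess:", F(1, 6) * 5 - F(5, 6) * 1)          # 0

# Game 2: guess the sum of two dice, 11:1 payout.
def p_sum(s):
    return F(sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s), 36)

print("two dice, guess 7:", p_sum(7) * 11 - (1 - p_sum(7)) * 1)   # +1
print("two dice, guess 2:", p_sum(2) * 11 - (1 - p_sum(2)) * 1)   # -2/3

# Game 3: weighted die, P(1) = 1/2 and P(k) = 1/10 otherwise, 5:1 payout.
print("weighted die, guess 1:", F(1, 2) * 5 - F(1, 2) * 1)        # +2

# Game 4: the cycling die (1, 2, 3, 4, 5, 6, 1, ...), betting on every 6th roll.
# Every 6th roll is certain to be a 6, so guessing 6 wins 5 units every time.
print("cycling die, guess 6 on every 6th roll:", 5)
[/code]

Only the first game gives no strategy an edge; the other three have exploitable regularities (non-uniformity or correlation).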

Walter Wayne said:
It actually gives a definition of a random sequence, and an incorrect one at that. For one, a sequence generated by an ideal die may, and probably will, have some bias, and there is a possibility (though unlikely for very long sequences) that it will have discernible patterns.

People in everyday use do not exclusively use 'random' for equiprobable things. People in the physical sciences use it for other probability distributions as well, as do statisticians and mathematicians.

An ideal die will not have a bias, although a finite sequence may vary from the expected value. It does not take a very long sequence at all for it to be without patterns. We can only be 100% sure about the properties of a theoretical system; there is always the possibility that a real-world system, or a finite random sequence, will not appear to be so. Telling me that is a trivial, non-constructive objection, and it doesn't change anything about the issue at hand.

If you don't like that definition, use the definition I've provided: uniform and uncorrelated. I provide a detailed explanation above, and it also dovetails with an interpretation of 'people in everyday use'. Although if you're talking about what everyday people think random is (whatever that means), then there is really no sense in discussing it, because what everyday people think will vary heavily from person to person and culture to culture. Moreover, evidence indicates that everyday people have huge misconceptions about evolution, so do we really care what they think random is and whether it applies to evolution? Which scientists call a non-random probability distribution random? The ones on TV?

Finally, if you happen to have a definition of random that isn't just an exclamation that random is a synonym for non-deterministic, I'd be interested in hearing it.
 
An ideal die will not have a bias, although a finite sequence may vary from the expected value. It does not take a very long sequence at all for it to be without patterns. We can only be 100% sure about the properties of a theoretical system.
The problem is your definition doesn't mention a process, like the rolling of an ideal die; it mentions the property of a sequence. Second, it is a wiki, and one with zero discussion. Get me a more authoritative source.
 
The problem is your definition doesn't mention a process, like the rolling of an ideal die; it mentions the property of a sequence. Second, it is a wiki, and one with zero discussion. Get me a more authoritative source.

#1 I welcome a better definition; you haven't provided one, and I've defended the logic behind my definition in detail. In the absence of reasoned opposition I don't need anything more authoritative.

#2 I think the idea of having no patterns, of being unpredictable, is more colloquial than the more rigorous definition I've been providing: uniform and uncorrelated.

#3 The distinction between a process and a sequence of numbers is not significant in this context. The measurements taken of a system at different times constitute a sequence of numbers describing the state vector of the system. If the numbers are random, that indicates the system is random.

#4 It's already been addressed why 'random process' is not even really a significant distinction. If you disagree, once again I ask you to define a random process in a meaningful way.

#5 None of the points that I've made have relied on the authority of Wiktionary.
 
Your sources don't matter, zosima-- these people imagine themselves smarter than them. I gave them a peer-reviewed paper that said "evolution is not random"-- but to them it must be... the same people... spouting the same nothingness for the same unknown reasons.

To them, it must be "correct" (whatever that means) to say that "scientists think that all this came about randomly". Scientists, of course, don't say that.

But it's the favorite straw of creationists for some reason... and so all semantic burble leads back to the "evolution is random" nothingness.
 
Here are some easily accessible definitions of "deterministic" and "stochastic", with citations to other published works:

Deterministic

A Turing machine is called deterministic if there is always at most one instruction associated with a given present internal state/tape state pair [latex]$(q,s)$[/latex]. Otherwise, it is called nondeterministic (Itô 1987, p. 137).

In prediction theory, let [latex]$\{X_t\}$[/latex] be a weakly stationary process, and let [latex]$M_t(X)$[/latex] be a subspace spanned by the [latex]$X_s$[/latex] (with [latex]$s\leq t$[/latex]). If [latex]$M_t(X)$[/latex] is independent of [latex]$t$[/latex], so that [latex]$M_t(X)=M(X)$[/latex] for every [latex]$t$[/latex], then [latex]$\{X_t\}$[/latex] is said to be deterministic (Itô 1987, p. 1463).
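A toy way to see the Turing-machine clause (my sketch, not from Itô): encode the transition table as a map from (state, symbol) pairs to sets of instructions; determinism just says every set has at most one element.

[code]
# Each (state, tape_symbol) pair maps to a set of instructions
# (new_state, symbol_to_write, head_move).
deterministic_table = {
    ("q0", "0"): {("q1", "1", "R")},
    ("q0", "1"): {("q0", "1", "R")},
}
nondeterministic_table = {
    ("q0", "0"): {("q1", "1", "R"), ("q0", "0", "L")},  # two choices
}

def is_deterministic(table):
    """At most one instruction per (state, symbol) pair."""
    return all(len(instrs) <= 1 for instrs in table.values())

print(is_deterministic(deterministic_table))      # True
print(is_deterministic(nondeterministic_table))   # False
[/code]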

Stochastic


Stochastic is synonymous with "random." The word is of Greek origin and means "pertaining to chance" (Parzen 1962, p. 7). It is used to indicate that a particular subject is seen from the point of view of randomness. Stochastic is often used as the counterpart of the word "deterministic," which means that random phenomena are not involved. Therefore, stochastic models are based on random trials, while deterministic models always produce the same output for a given starting condition.
 
Walter Wayne said:
There is a lot of misinformation in that post. The inputs are not what make a process stochastic, but the nature of the internal workings. That is, given the same input (whether that input was generated randomly or not), a process that always goes to the same state(s) is a deterministic one. A process whose future states are indeterminate given the inputs is stochastic.

Whether the output or state of the system is, or appears, random depends on the nature of the system. A simple system like a fire detector will be, for all intents and purposes, deterministic, even if it is technically a random system. A simple system like a random number generator, where noise is measured and a stream of ones and zeros is produced from that noise, is random.

There are numerous qualities of evolution that make it very distinct from the fire alarm mechanism. They've been discussed at length in other threads.

Walt

Indeed, and the biggest difference is that an ionisation smoke detector is not a chaotic system. A slight difference in the decay rate leads to a slight difference in the ionisation current, which leads to a slight difference in the "apparent" smoke density and the output signal. A slight difference in the gain (transconductance) of the amplifying transistor similarly leads to a slight difference in the sensitivity of the detector, so this is compensated for during factory testing.

With the smoke detector slight differences in inputs lead to slight differences in outputs.

With chaotic, and other divergent systems, slight differences in inputs lead to vastly different outcomes, and the differences increase over time.
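A standard toy demonstration of that divergence (my illustration; the logistic map at r = 4 is a textbook chaotic system, unrelated to smoke detectors or ecosystems specifically):

[code]
# Two trajectories of the chaotic logistic map x -> 4x(1 - x),
# started a mere 1e-10 apart.
x, y = 0.2, 0.2 + 1e-10
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

# The separation grows roughly exponentially until it saturates at the size
# of the attractor: slight input differences, vastly different outcomes.
[/code]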

This seems to be what happens in ecosystems. Many biological systems exhibit chaotic behaviour, and the consensus is that the weather does too. There are lots of positive feedback loops in biological evolution, and this means that disruptive mutations can significantly affect the fitness landscape for other organisms and alter the course of evolution.

The emergence of grasses might have been one. On a smaller level, but significant for the survival of millions of individuals, the mutation that made Spanish flu so deadly was another.
 
#1 I welcome a better definition, you haven't provided one and I've defended the logic behind my definition in detail. In absense of reasoned opposition I don't need anything more authoritative.
Random: having a state or value that depends on chance.
#2 I think the idea of having no patterns, being unpredictable is more colloquial than the more rigorous definition I've been providing. That is uniform and uncorrelated
So you can predict the sum of two die rolls? That isn't uniform. You can predict the position of someone on a snakes-and-ladders board on the second turn, after seeing the result of their first turn? That is correlated with the result of the first turn.
#3 The distinction between a process and a sequence of numbers in this context is not significant. The measurements that are taken of a system at different times will constitute a sequence of numbers that will describe the state vector of the system. If the numbers are random that indicates the system is random.
But if you define whether a process is random by observing only the sequence produced, a sequence of randomly generated numbers may produce a non-uniform distribution. A deterministic process can also produce unbiased sequences.
#4 Its already been addressed why random process is not even really a significant distinction. If you disagree, once again I ask you to define a random process in a meaningful way.
In other posts, I've pointed out the difference between technically random and random with significant variation. I am not simply arguing that evolution is merely technically random; I have mentioned that several times before.

[nitpick]Just because every physical process may be technically random doesn't mean the definition isn't meaningful. During the period of classical physics, there were people who thought physical processes were technically deterministic. Deterministic still meant something. But this is just a derail.[/nitpick]
#5 None of the points that I've made have relied on the authority of Wiktionary.

Walt
 
Random: having a state or value that depends on chance.
So you can predict the sum of two die rolls? That isn't uniform. You can predict the position of someone on a snakes-and-ladders board on the second turn, after seeing the result of their first turn? That is correlated with the result of the first turn.

You can predict the sum of two die rolls with accuracy better than random chance. And if you allow a confidence interval, you can provide an exact prediction.
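A small sketch of what that means in practice (my illustration): the two-dice sum is non-uniform, so guessing 7 beats chance, and widening an interval around 7 buys any confidence level you like.

[code]
from fractions import Fraction as F

dist = {}
for a in range(1, 7):
    for b in range(1, 7):
        dist[a + b] = dist.get(a + b, F(0)) + F(1, 36)

print("P(sum = 7) =", dist[7])   # 1/6, vs 1/11 for a blind guess over 2..12

# Widen the interval around 7 until it captures at least 90% probability.
for width in range(6):
    p = sum(dist[s] for s in range(7 - width, 7 + width + 1))
    if p >= F(9, 10):
        print(f"P(7 - {width} <= sum <= 7 + {width}) = {p}")
        break
[/code]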

I'm sorry, I don't understand the snakes-and-ladders example.

But if you define whether a process is random by observing only the sequence produced, a sequence of randomly generated numbers may produce a non-uniform distribution. A deterministic process can also produce unbiased sequences.

Both these examples go to my point about theoretical systems vs. best fits to the real world. I'm not arguing that these systems can't be deceptive. We often make mistakes in science and fit a model that later appears wrong in light of more evidence. But given enough data, the nature of each of these systems means that both will approach their correct descriptions.

For example, a pseudorandom number generator will always eventually give itself away, because it necessarily has cycles. After a certain period it will begin to repeat and be obviously deterministic.
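A deliberately tiny linear congruential generator makes the cycling visible (my sketch; real PRNGs have enormous periods, but the principle is the same):

[code]
def tiny_lcg(seed, a=5, c=3, m=16):
    """Toy LCG: x -> (a*x + c) mod m, so at most m distinct states."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = tiny_lcg(seed=7)
print([next(gen) for _ in range(24)])
# After 16 outputs the sequence repeats exactly: the generator has
# given itself away as deterministic.
[/code]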

On your second point
In other posts, I've pointed out the difference between technically random and random with significant variation. I am not simply arguing that evolution is merely technically random; I have mentioned that several times before.

Could you elaborate? I'm not sure what you mean by this.

[nitpick]Just because every physical process may be technically random doesn't mean the definition isn't meaningful. During the period of classical physics, there were people who thought physical processes were technically deterministic. Deterministic still meant something. But this is just a derail.[/nitpick]

I'll agree it's meaningful, but that sort of randomness wouldn't apply meaningfully to evolution, in the sense that it would apply equally and trivially to everything if true. In other words, it is outside the scope of this discussion.

Also, I don't think quantum effects are random; they are probabilistic.
To generate a true random bit from the outcome of a non-uniform quantum effect, you need to run the process twice, but switch what counts for true and false in each trial. You only count the result if the two trials agree; if they don't, you run two more trials. (There are other techniques, but they all involve more than one trial.)
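What this describes is essentially von Neumann's debiasing trick; the flip-the-interpretation phrasing above is equivalent to the usual "keep a pair only when the two results differ" form. A sketch assuming independent, identically biased trials:

[code]
import random

def biased_trial(p_true=0.7):
    """Stand-in for a non-uniform binary measurement."""
    return random.random() < p_true

def unbiased_bit(p_true=0.7):
    # Run the process in pairs. A disagreeing pair is equally likely to be
    # (True, False) or (False, True), so returning the first result is fair.
    while True:
        x, y = biased_trial(p_true), biased_trial(p_true)
        if x != y:
            return x

random.seed(0)
bits = [unbiased_bit() for _ in range(100_000)]
print(sum(bits) / len(bits))   # ~0.5, despite the 0.7-biased source
[/code]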

jimbob said:
Indeed, and the biggest difference is that an ionisation smoke detector is not a chaotic system. A slight difference in the decay rate leads to a slight difference in the ionisation current, which leads to a slight difference in the "apparent" smoke density and the output signal. A slight difference in the gain (transconductance) of the amplifying transistor similarly leads to a slight difference in the sensitivity of the detector, so this is compensated for during factory testing.

With the smoke detector slight differences in inputs lead to slight differences in outputs.

Agreed, with the qualification that the output isn't a slight difference in a measured signal; it is strictly 'smoke or no smoke'.

With chaotic, and other divergent systems, slight differences in inputs lead to vastly different outcomes, and the differences increase over time.

With the qualification that the system needs to be sensitive to the changes in the input.

This seems to be what happens in ecosystems. Many biological systems exhibit chaotic behaviour, and the consensus is that the weather does too. There are lots of positive feedback loops in biological evolution, and this means that disruptive mutations can significantly affect the fitness landscape for other organisms and alter the course of evolution.

The emergence of grasses might have been one. On a smaller level, but significant for the survival of millions of individuals, the mutation that made Spanish flu so deadly was another.

I've got a couple of points here.

#1 Just to be clear, I understand this to be from the perspective of the 'history of evolution' or 'path of evolution'. If we're talking about evolution in the sense of how a species changes in response to changes in its environment (which may or may not be chaotic or random), this point is inapplicable. (I.e., it doesn't apply to the argument made from Bayesian logic some pages back.)

#2 The 'consensus' I got from the 'chaotic' thread was that the jury was still out with respect to whether the weather is chaotic. Moreover, the 'consensus' I got was that we can't tell whether a system is chaotic unless we are talking about a mathematical model and are in agreement about its quality of fit to the evidence.

#3 I appreciate your intuition on these issues, but is there any reason you believe these systems are chaotic?

#4 How do you know that these systems are not just complex?

#5 How do you know that, if these systems are chaotic, they are sensitive to random inputs?

#6 What do you contend these inputs are?

#7 How do you know that these systems are not constrained by negative feedback: in the form of energy limitations and physical landscape for ecosystems; in terms of inertia, viscosity, and energy for weather?

#8 What are the relative scales of the development rate of 'disruptive mutations' vs. 'positive feedback' in speciation? How do we know the scales are comparable?

#9 How do we know that this 'positive feedback' doesn't just affect the speed at which the solution is generated, and not the substantive result of the solution?

Your assertions are all well and good, but they're just as hand-wavy as your assertions about chaos theory, which were quite vigorously shot down. Unless you can answer these questions with evidence and reason, all you are really saying is that, despite the testimony of people in the field, you have a strong personal conviction, a feeling, that the results of evolution involve significant chaotic effects. Which, IMO, is terribly unpersuasive.
 
An example of my skepticism with respect to chaos: while many systems in nature might seem to have exponential growth in their error, many curves in nature that look exponential end up being sigmoidal. While they exhibit a large derivative near the normalized origin, resource limitations ultimately reduce the growth rate to zero.

Bacterial populations are an excellent example. Initially the population doesn't increase quickly in absolute terms, because there aren't very many bacteria; growth then accelerates, peaks as the bacteria begin to compete for resources, and quickly decelerates to a steady state. The population-versus-time curve looks like a sigmoid; its derivative looks like a normal distribution.
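A quick sketch of that shape (my illustration, using the standard logistic growth formula rather than any particular bacterial dataset):

[code]
import math

K, r, N0 = 1_000_000.0, 0.5, 100.0   # carrying capacity, rate, initial size

def logistic(t):
    """Closed-form logistic growth: a sigmoidal population curve."""
    return K / (1 + ((K - N0) / N0) * math.exp(-r * t))

prev = logistic(0)
for t in range(5, 41, 5):
    n = logistic(t)
    print(f"t={t:2d}  N={n:12.0f}  growth this step={n - prev:10.0f}")
    prev = n

# N(t) traces a sigmoid; the per-step growth rises, peaks near N = K/2,
# then falls back toward zero as resources run out.
[/code]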

Why exclude this possibility?
 
Moreover-- the exponential growth allows for more variety to select from-- more potential "winners" in the evolution game. That's the essence of evolution-- not how "randomly" this variety was acquired!
 
If you play semantic scavenger hunt and extrapolate correctly, you, too, might finally understand that evolution is random.

And then you'd be as smart as Mijo.

So insisting that people use proper and rigorous scientific terminology is playing "semantic scavenger hunt"?

The definitions I provided are widely accepted by mathematicians, statisticians, physical scientists, and biologists of many different stripes.

Why can't evolutionary biologists (as you choose to portray them) standardize their terminology?
 
Moreover-- the exponential growth allows for more variety to select from-- more potential "winners" in the evolution game. That's the essence of evolution-- not how "randomly" this variety was acquired!
That's a good summary of the point of this thread, I think.

Why can't evolutionary biologists (as you choose to portray them) standardize their terminology?
They do! But, just like mathematicians, statisticians, physical scientists, etc., the standard terms are adjusted according to the context of the study.
 
They do! But, just like mathematicians, statisticians, physical scientists, etc., the standard terms are adjusted according to the context of the study.

Uh....the other definitions by which evolutionary biologists claim evolution is not random are not consistent with the understanding of randomness needed to meaningfully practice statistics and to use statistical analysis to demonstrate that evolution does occur.
 
Last edited:
Uh....the other definitions by which evolutionary biologists claim evolution is not random are not consistent with the understanding of randomness needed to meaningfully practice statistics and to use statistical analysis to demonstrate that evolution does occur.
Who said they are not consistent?

Maybe they are, but are only relevant to certain aspects, or certain models of evolution; not the whole entire thing. Selection is non-random, by practically any definition of the word, for one thing... and that is true no matter how random (or not) mutations happen to be.
 
