
Randomness in Evolution: Valid and Invalid Usage

Why pick an analogy?

Because it's cheap and easy. The third dimension was explained to Mr. A. Square by analogy. Isn't natural philosophy sometimes more easily understood through relation to other, more readily understood concepts?
 
Because it's cheap and easy. The third dimension was explained to Mr. A. Square by analogy. Isn't natural philosophy sometimes more easily understood through relation to other, more readily understood concepts?

An analogy is fine if you are using it to define a term. An analogy is not sufficient to make claims about the properties of a system if you intend the analogy to 'stand in' for the system.

Maybe I should say "why pick a bad analogy?" If species were fluids, okay... but they are not. The differences between a fluid and a species are large enough that you will never be able to prove anything or make a persuasive claim with the comparison. The bare claim that a species is somehow a fluid is so under-defined that it doesn't establish anything. For example: is an evolutionary fluid in laminar flow or turbulent flow? Is it viscid or inviscid? Can I apply the Navier-Stokes equations to solve evolutionary problems? Is it a compressible or incompressible fluid? More like a liquid or more like a gas? Is it frozen in flux? What shape container does it fill? How do I interpret mass, pressure, density, temperature, momentum, and velocity for an 'evolutionary fluid'? By the time you've answered all these questions, you could have made significant progress reasoning about the question directly. And when it comes down to it, there are huge outstanding questions about fluids themselves that we can't answer. So why not pick a system we can reason about?

So is your analogy accurate? No. Does it share some properties with evolution? Perhaps. Can we apply conclusions about your fluids to evolution? Not Bloody Likely.
 
When I use an analogy, I don't mean to include the unsaid baggage. I assumed other people did the same. If that's not the case, then I'm sorry.

So to be thorough and explicit: I don't want to include all the baggage of analogies that I'm not talking about. There, problem solved. Let's move on to more productive talk, something related to the topic rather than misunderstandings over the nuances of the English language. Unless it has become fashionable to speak of Science and Mathematics in verse.
 
When I use an analogy, I don't mean to include the unsaid baggage. I assumed other people did the same. If that's not the case, then I'm sorry.

So to be thorough and explicit: I don't want to include all the baggage of analogies that I'm not talking about. There, problem solved. Let's move on to more productive talk, something related to the topic and not misunderstandings over the nuances of the English language. Unless it has become fashionable to speak of Science and Mathematics in verse.

What you consider baggage others may not. But I think verse would definitely be a step in the wrong direction from analogy. Generally, an explicit, perhaps even mathematical, example is a good way to make a claim, but if you don't have one... I'm hoping this thread will just die. All points have been made in triplicate.
 
Reading this thread is like seeing Zosima the Buck defending his territory as the young bucks step up with bravado to challenge, and fail gently but firmly every time. For all I know Zosima is younger than his challengers, but he understands physics, he understands logic, and he understands how to convey information so that this non-expert in physics understands. And he understands what the experts in biology are saying and is able to convey that understanding to most anyone, except those most in need of thinking of themselves as superior "explainers".

Just my opinion, of course.

*applause and whistles from sidelines*
 
Sorry for the delay. The combination of being busy and an internet outage means I haven't addressed some things in a bit.

"Random: having a state or value depends on chance."

So then you're saying that everything is random? In legal terminology this definition has a fatal flaw: no bright line. From this definition there is no way to tell what it doesn't apply to. In other words, a term that applies to everything communicates no information, because you know just as much about whatever you are talking about before it was said as after.
First, if you want a layman's definition, expect "no bright line" for anything that lies along a continuum. You also interpreted it without the context of my other posts, where I explicitly mentioned that some things are only random in the most technical sense of the word.
Note that these distributions are statistics that can result from many samples of processes. If the process is truly random any individual sample is #1 uncorrelated, #2 uniform.
Again, random does not mean uniform or uncorrelated. From statistics, the field you mentioned:

Simple random sampling: The one most people are familiar with.
Each sample is unbiased and uncorrelated. To be precise, the most often used variant, simple random sampling without replacement, is unbiased but does have some correlation, though it is usually small.

Systematic random sampling: Involves taking every nth member of the population. For example, if you wanted to sample 400 of the 17470 members of this board, you would generate an unbiased number between 1 and 43, sample that member on your list, and then every 43rd member after that.

Stratified random sampling: Used often in political polls. This involves dividing the population into strata or classes and then sampling from within each of them. So you could do independent surveys of low-income, middle-income, and high-income families to make sure that your sample doesn't accidentally miss one class. Your sample is no longer uncorrelated (and may be biased, depending on the particular brand of stratified random sampling you use).

Random does not mean uniformly distributed and uncorrelated.
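As a sketch of the three schemes (my own illustration, not from any poster; the population size follows the example above, but the strata and their boundaries are made up):

```python
import random

# Hypothetical population: 17470 board members, as in the example above.
population = list(range(1, 17471))

# Simple random sampling (without replacement): unbiased, essentially uncorrelated.
simple = random.sample(population, 400)

# Systematic sampling: one random starting point, then every 43rd member.
start = random.randint(1, 43)
systematic = population[start - 1::43][:400]

# Stratified sampling: sample within each stratum so no class is missed
# (the three strata here are purely illustrative).
strata = {
    "low": population[:6000],
    "middle": population[6000:12000],
    "high": population[12000:],
}
stratified = [m for group in strata.values() for m in random.sample(group, 134)]
```

Note that only the first is conventionally called random sampling outright; in the other two, randomness enters only through the starting point or the within-stratum draws.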

To better clarify my point: if these are going to constitute examples of non-uniform random systems, you need to explain why they should be considered random. I don't think the use of the term 'random variable' is persuasive either.

#1 'Random variable' is used with distributions that are clearly not random. For example, I could have a Dirac delta random variable, i.e., a distribution consisting of a Dirac delta centered on 0. It is a distribution over a random variable, yet it has only one possible outcome. (We could do the same thing over a discrete distribution as well, in case you want to pick some nits.)
Still, we would be talking about a random variable, yet a non-random system. Is the Dirac delta function also random? Where do you draw the line?
How does "having a value depending on chance", or "not being predictable", or "not determined by past state" correspond to a Dirac delta function?
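A toy sketch of the point (my own code; the choice of 0 as the single outcome follows the example above):

```python
import random

def delta_rv():
    """A 'random variable' whose distribution is a delta centered on 0."""
    return random.choices([0], weights=[1.0])[0]  # only one possible outcome

draws = [delta_rv() for _ in range(1000)]
# Formally this is sampling a random variable, yet nothing about
# the outcome actually depends on chance: every draw is 0.
```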

#2 Probability distributions represent statistics of systems, but the systems they describe may or may not be random. For example, I can generate a binomial distribution by rolling a die many times over a number of trials and counting the numbers as they come up. Alternatively, I could write a computer program that returns 1-6 with exactly the same frequencies as the die, but in some sorted order. Both could generate a statistical fit described by the binomial distribution. The computer would be deterministic, and clearly so; the die would not.
There is a difference between a population distribution and a probability distribution, and you appear to be conflating the two here. If I look at the sides of my fair die, I will see the population distribution: 1 side with 1 pip, 1 side with 2 pips, ... etc. If I take a fair single sample of that population (by rolling the die), the probability distribution of that sample will be equal to the relative population distribution of the die. That last bit is the foundation of how simple random sampling works.
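The die-versus-program point in #2 can be sketched in a few lines (my own toy code, not from the thread):

```python
import random
from collections import Counter

# Empirical counts from a sequence of (simulated) die rolls.
rolls = [random.randint(1, 6) for _ in range(600)]
die_freq = Counter(rolls)

# A deterministic program emitting each face the same number of times, in sorted order.
deterministic = sorted(face for face, n in die_freq.items() for _ in range(n))

# Both sequences have identical frequency statistics...
assert Counter(deterministic) == die_freq
# ...but one is (pseudo)random and the other entirely predictable.
```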
#3 The distributions themselves are not random at all. For example, if I want to generate a normal distribution, all I have to do is evaluate the probability density function for the normal distribution (I'm not going to write it out here). This is a simple and deterministic mathematical operation. If you think that the particular distributions you've mentioned happen to describe random processes, you're going to have to explain why, because I don't see any reason why it should be assumed.
I am not sure what that has to do with anything. A uniform distribution is also describable in the same manner: for discrete uniform distributions, the probability of each possible outcome is 1 in the total number of outcomes.
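As a sketch of point #3 (my own code): evaluating a density function, normal or uniform, is plain deterministic arithmetic.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of the normal distribution: deterministic arithmetic."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def uniform_pdf(x, a=0.0, b=1.0):
    """Density of the continuous uniform distribution: equally deterministic."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

# Calling either function twice with the same input always gives the same value;
# nothing about the curve itself is random.
```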
From a perspective that jibes more closely with human intuition, I think the term random is not so binary as just 'random or not'. It would be more accurate to talk about how random a system is. A system with a uniformly distributed and uncorrelated output is as ideal a random system as you can get. A system that always outputs the same value or the same sequence of values is as deterministic as you can get.
...
I agree with this, which is why I mentioned the difference between random in only a technical sense compared to one that is random "in practice."

Walt
 
Sorry for the delay. The combination of being busy and an internet outage means I haven't addressed some things in a bit.
Its good to see you back. :)

First, if you want a layman's definition, expect "no bright line" for anything that lies along a continuum. You also interpreted it without the context of my other posts, where I explicitly mentioned that some things are only random in the most technical sense of the word.
Well, we might be able to come to some sort of agreement with respect to things lying on the continuum so I'll come back to that at the end.

It seems to me that you start talking about this idea of 'technically random' whenever you are having problems with whatever definition you put forward, but as far as I can tell your definitions of 'random' and 'technically random' are identical. They are both "having a state or value depends on chance", which assigns the term random to things that are clearly not random.

Also, I'm not sure how you appointed yourself the arbiter of the definition of technically random, but that seems to be the claim we're disputing at the moment. (If we were talking about practically random we wouldn't be talking about definitions and statistics, we'd be talking about the processes of evolution.)

Again, random does not mean uniform or uncorrelated. From statistics, the field you mentioned:

Simple random sampling: The one most people are familiar with.
Each sample is unbiased and uncorrelated. To be precise, the most often used variant, simple random sampling without replacement, is unbiased but does have some correlation, though it is usually small.

Systematic random sampling: Involves taking every nth member of the population. For example, if you wanted to sample 400 of the 17470 members of this board, you would generate an unbiased number between 1 and 43, sample that member on your list, and then every 43rd member after that.

Stratified random sampling: Used often in political polls. This involves dividing the population into strata or classes and then sampling from within each of them. So you could do independent surveys of low-income, middle-income, and high-income families to make sure that your sample doesn't accidentally miss one class. Your sample is no longer uncorrelated (and may be biased, depending on the particular brand of stratified random sampling you use).

1. You have identified some techniques for selecting samples. The first one is the only one actually called random sampling. You did the same thing with distributions, where you inserted 'random' into the name to try to support your point. These would normally be called 'simple random sampling', 'systematic sampling', and 'stratified sampling', in the same way that the distributions you started calling the 'Poisson random distribution', 'Normal random distribution', etc. are normally called the 'Poisson distribution' and the 'Normal distribution'.

2. Note that these are practical techniques for getting samples that are random; they are not perfect, and the failures you mention (like small correlations) represent deviations from the ideal (random).

3. Clearly, depending on what they are sampling, their results may differ significantly from random (they may only get one result). The point being to distinguish patterns in their outcomes from patterns in the sampling.

4. I thought it noteworthy that at the point where anyone in statistics would say 'random number between 1 and 43', you say 'unbiased'. That is telling, insofar as you are manipulating your descriptions to prevent them from displaying exactly the characteristics that ideal randomness shows.

5. These techniques can yield random samples if we know something about the structure of what we are sampling, so that we add no information via a lack of uniformity of selection or a correlation between selections. For example, we would get a random sample via systematic sampling if we knew that the order of the objects we were sampling was uncorrelated with the values we were sampling from those objects. A second example: we use stratified sampling when we know there is sufficient heterogeneity in our overall population that it could bias our sample (i.e., create a strong correlation between our samples, or eliminate uniformity because one subgroup is much larger than the others).

How does "having a value depending on chance", or "not being predictable", or "not determined by past state" correspond to a Dirac delta function?

I made this example about a Dirac delta when you were talking about 'Poisson random distributions' (actually called Poisson distributions), i.e., inferring that any distribution over a statistical variable is random. The 'Dirac delta random distribution' disproves this suggestion.

Also, as Mijo reminds us so frequently, if the Dirac delta function is defined over the reals or the complex numbers, we can never quite be sure that it won't have another outcome. This means we can construct examples with a Dirac delta that are completely opposed to our intuitions, insofar as they are completely predictable and yet still dependent on chance.

There is a difference between a population distribution and a probability distribution, and you appear to be conflating the two here. If I look at the sides of my fair die, I will see the population distribution: 1 side with 1 pip, 1 side with 2 pips, ... etc. If I take a fair single sample of that population (by rolling the die), the probability distribution of that sample will be equal to the relative population distribution of the die. That last bit is the foundation of how simple random sampling works.
I'm sorry if this wasn't clear, but this is the point I was trying to demonstrate to you with this example: that there is a difference, and the fact that we use a Gaussian distribution to model a process does not necessarily mean the population that produced it is random. Even if you do call it a "Gaussian random distribution".


I am not sure what that has to do with anything. A uniform distribution is also describable in the same manner: for discrete uniform distributions, the probability of each possible outcome is 1 in the total number of outcomes.
This point was also a response to your laundry list of distributions, and it is similar: the distribution is separate from the process. I think that is why the necessity of no correlation is so crucial as well. You may get statistical distributions similar to the ones you mention when you randomly sample a random event many times, especially if you are summing the outcomes. But the distribution of a single random event will never be distributed that way.

ETA: all these statistical techniques that you and Mijo keep bringing up may have 'random' in the name, but "random distributions", "random variables", and "random samples" can all be used in combination with a variety of different random and non-random processes. The commonality they share is the assumption that they are unbiased with respect to the way they are generated. In other words, they clearly reflect the nature of the process itself, because the sampling or generative process is ideally uncorrelated and uniform. To claim that all the processes they describe are also random, or even that they are necessarily the statistics of random processes, is clearly a mistake. The only process that is ideally random is the uncorrelated and uniform one.

I agree with this, which is why I mentioned the difference between random in only a technical sense compared to one that is random "in practice."

Walt

Again, your technical definition and your practical definition are indistinguishable. The definition you advocate seems to alternate between "anything that is not determinate" and "systems described using statistics", between "practical" and "technical".


I'd like to move on to this idea of a continuum to see if we can find some common ground:
If your 'technical' definition is 'anything that is not determinate', then there is no way to define a spectrum. Where is the far end of the spectrum, the end opposite determinate, anchored? Under your definition all answers to that question are equally good. Is the purely random end of the spectrum a Poisson distribution? A Gaussian distribution? A chi-squared distribution?

If we use uniformly distributed and uncorrelated as the definition then we have a clear and singular answer. We would say that a flat, horizontal, uniform distribution is the ideally random distribution of a single event. The distribution of an ideally deterministic single event would be a vertical, dirac-delta distribution. One end is vertical, one end is horizontal, they fall on opposite ends of a spectrum. Anything that falls in between would be a deviation from ideal randomness and a deviation from ideal determinism.
For multiple events we would want to calculate the correlation between a sequence of samples, (with value on the y axis and sample number on the x)
If the correlation is 1.0 the sequence is ideally deterministic, if the correlation is 0.0 the sequence is ideally random.
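A minimal sketch of that correlation test (my own code, standard library only; the ramp and noise sequences are illustrative):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

idx = list(range(1000))                 # sample number (x axis)
ramp = [2 * i + 5 for i in idx]         # an ideally deterministic sequence
noise = [random.random() for _ in idx]  # an ideally random sequence

r_ramp = pearson(idx, ramp)    # essentially 1.0
r_noise = pearson(idx, noise)  # close to 0
```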

Incidentally, that is why I use this definition of 'technically random', in the same way that I don't call any event whose distribution differs from the Dirac delta 'technically determinate'.
 
What you consider baggage others may not. But I think verse would definitely be a step in the wrong direction from analogy. Generally, an explicit, perhaps even mathematical, example is a good way to make a claim, but if you don't have one... I'm hoping this thread will just die. All points have been made in triplicate.

For the love of the Gods! Metaphor then. It's now a blasted metaphor. The verse statement was a joke. How could it not be?

Mathematical claim: Random relative to whom? Deterministic relative to whom? Is it plausible to say we can map out the future evolution of most species? Is there some black-box mathematical function that spits out the genetic code of some animal's ancestors from millions of years ago? Is there a computer program that can accurately predict who'll be what at any given time?
 
For the love of the Gods! Metaphor then. It's now a blasted metaphor. The verse statement was a joke. How could it not be?
It really seems like you didn't even skim the body of available posts. Not even the recent ones. You make some vague analogy to Brownian motion, an analogy that has been repeated over and over and over in this thread. How do you expect me to respond? I think I have every reason to be impatient.

Mathematical claim: Random relative to whom? Deterministic relative to whom? Is it plausible to say we can map out the future evolution of most species? Is there some black-box mathematical function that spits out the genetic code of some animal's ancestors from millions of years ago? Is there a computer program that can accurately predict who'll be what at any given time?

All I meant to say, is that I would appreciate it if you make a claim that is precise and original.

With your questions you're asking me to rewrite and/or summarize the many things that people have already said, when the record is right in front of your face. Generally, when complete strangers ask me to engage in time-consuming tasks, my generosity is limited. So please, do the research yourself and read the thread.
 
Alright then, refresh me: determinism is relative to whom? Who is the observer? Omniscient space-faring beings, or people like us with our limited knowledge of the biosphere?

In order for us to regard a natural system as determined, we need sufficient knowledge. If we lack it, the system may appear to be chaotic. Given that we are immersed in the system, we are even more limited. Our current empirical knowledge does not give us the privilege of declaring evolution deterministic to us. We need more data and a better understanding of evolution, and we always will.
 
Alright then, refresh me: determinism is relative to whom? Who is the observer? Omniscient space-faring beings, or people like us with our limited knowledge of the biosphere?
I guess it depends, there is no one consensus.

In order for us to regard a natural system as determined, we need sufficient knowledge. If we lack it, the system may appear to be chaotic. Given that we are immersed in the system, we are even more limited. Our current empirical knowledge does not give us the privilege of declaring evolution deterministic to us. We need more data and a better understanding of evolution, and we always will.

I agree with that. We don't know for sure until we get more evidence. This is particularly true when it comes to understanding whether a system is chaotic or not.

I think most people have been arguing from the perspective of what omniscient beings might know, although some people seem to claim that human use of probabilistic models implies that the system is random, without the need for additional inquiry.

I think some of the arguments that are important to answering this question have to do with the rate of mutation in a species, the importance of drift relative to selection, the significance of punctuated equilibrium, whether change is generated spontaneously, how we interpret infinitesimally improbable events and what chaotic systems, if any, serve as inputs to the process of evolution.

Pretty much all of these have been touched on, but knowledge and interpretations vary wildly.
 
And there seems to be a pretty good explanation among experts: while it makes sense to call mutations random (though they aren't strictly so), because they happen without respect to how they will affect the organism they code for, natural selection determines which of these mutations will multiply exponentially (having the chance to accumulate more mutations) and which will die out. This is the essence of evolution, and it accounts for evolved complexity and seeming design when there is no intent behind the process.
 
1. Again, appeal to authority is not persuasive when the quotes you use do not include the person's reasoning and you cannot defend any reasoning that sensibly supports your claims.

I wasn't exactly appealing to authority; when I took my physics degree, the consensus was that chaotic systems quickly required resolution down to the quantum level to make predictions. I suspect that Sol Invictus is a little more current than I am, but he has confirmed that the consensus hasn't changed much.

Here is one of the other posts that I was looking for (why restate something that someone else has written lucidly?)

Originally Posted by shemp
Since on any given throw, we don’t know in advance which of these values the dice will add up to, we say that the result is random. However, I say that the result is not random. Instead, I say that the result is predetermined by the conditions of the throw (such as the position of the dice in the thrower’s hand, the speed of the throw, the spin placed on the dice by the throw, the quality of the felt on the table, the hardness of the table, the hardness of the rail at the end of the table, the temperature of the dice, various qualities of the surrounding air [such as temperature, humidity, and air movement], along with other possible intangibles). The throw only appears to be random to the observer because he does not have all of this information and the capability to process it to determine the outcome of the throw.
I disagree, and most physicists would, too. Ultimately, the outcome of the throw is based not only upon random quantum phenomena, but upon values that are in principle unmeasurable; that is, not merely we cannot measure them, but they do not in principle have a determined value. The values of variables that depend upon them, therefore, are stochastic probability distributions, whose individual outcomes cannot be predicted from any prior knowledge of the state of the system, no matter how detailed. At any of numerous critical moments during the throw, the outcome of the dice roll can be influenced by a single quantum event, which is in principle truly random.

One might constrain particular throws; for example, it is possible that a sufficiently skilled thrower could alter the probability to favor some outcomes over others. Or the dice can be loaded, increasing the probability of certain outcomes. One will never, however, no matter how fine the control, absolutely determine the outcome. It is in principle impossible to do so under the laws of physics our universe operates on.

Originally Posted by shemp
So is there really no randomness in the non-quantum world? I think there is not. I think that every action at this level is predetermined by the physical conditions preceding it. This would mean that non-quantum randomness is merely an interpretation that we use to explain these actions.
We have shown (to a certainty of over two hundred standard deviations, a truly astounding level of certainty) by experiment that indeterminate (uncertain in the sense of Heisenberg's Uncertainty Principle) quantum values are not merely unmeasurable, but in fact cannot have a definite value. The experiment is called the Aspect Experiment; you can find a discussion of this experiment on this forum by searching on that term. It is therefore incorrect, even if you do maintain that every quantum action is predetermined by physical conditions, to state that the outcome is determinate; that is impossible, since the physical conditions are not merely unmeasurable but nonexistent.

Originally Posted by shemp
1. Is there really randomness in the macro, non-quantum world, or is it just an illusion and a lack of information and computing power?
According to the outcome of experiments, there really is randomness in the non-quantum world, and it springs from:

Originally Posted by shemp
2. Similarly, is there really randomness in the quantum world?
Yes, again, according to the outcome of experiments.

Originally Posted by shemp
3. If the answers to questions 1 and 2 are different, where can we draw the line separating the two?
It is not a sharp line, but there are areas that are definitely on one side or definitely on the other. As has been stated, the line is somewhere above the size of a molecule. The proof of this is an experiment that appears to contradict the Second Law of Thermodynamics, but confirms a derivation of that Law known as the Fluctuation Theorem. Details are available upon request; I'll have to google it up, and if you just want to argue philosophy, it's not worth my while. If you're interested in the hard evidence, I can provide it.

Originally Posted by shemp
4. Is the question of the existence of “free will” related to these questions, or not? Can free will exist without randomness?
On that, I have an opinion, but it is not grounded in the previous questions. I'll therefore answer (out of order) that I don't know whether it can, and I don't know if it is.

And another link discussing how classical chaotic systems are affected by quantum uncertainty:

Ronald F. Fox of the Georgia Institute of Technology in Atlanta and his colleagues have taken a different tack. They looked at the behavior of a special, hypothetical physical system that could be treated either as a purely classical problem -- in which case it would display chaos -- or as a quantum problem. By comparing how the system's quantum version varies depending on whether the corresponding classical version shows chaotic behavior, the researchers hoped to identify characteristics of the quantum system that could be tied to chaotic behavior in the classical system.

"We found that there is such a property," Fox says.

In a quantum system, the Heisenberg uncertainty principle determines how precisely two variables -- such as position and momentum -- can be defined simultaneously. At the same time, a given variable has a certain probability distribution representing the range of values it may have. When the corresponding classical system is chaotic, Fox and his collaborators find that this probability distribution, initially as narrow as the uncertainty principle allows, becomes extremely broad, growing exponentially as the system evolves. "For a classical object, one normally thinks of these quantum fluctuations [expressed by the probability distribution] as very, very small and ignorable," Fox says. "We argue that when the dynamics is chaotic, these quantum fluctuations grow very large."


Not forgetting the earlier link discussing billiard balls, from my OP on the other thread:

jimbob said:
Here is a discussion about a very simple system (from the Israel physical society)

You can apply this rule to snooker balls as well as molecules. One knows from bitter experience that snooker or pool exhibits sensitive dependence on initial conditions: a slight miscue of the cue-ball produces a big miss! If the balls are bouncing around a frictionless snooker table in a perfect vacuum (otherwise they will just stop moving after one or two collisions) then we might typically have d=1 metre and r=3 cm, so our map is q_{n+1} = 33 q_n. The growth in recoil angle uncertainty in the trajectory of a ball as it bounces off other balls is therefore pretty dramatic. In fact, if you hit the ball as accurately as Heisenberg's quantum Uncertainty Principle allows any physical process to be determined by observation, then only about 12 collisions are needed to amplify this uncertainty up to more than 90 degrees!

Twenty-four collisions ahead, and there are twelve sets of collisions where the accuracy required would be beyond the uncertainty principle.
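A rough sketch of the amplification argument (my own code; the per-collision factor, taken as d/r for d = 1 m and r = 3 cm, and the initial angular uncertainty are illustrative assumptions, not the quoted article's exact figures):

```python
import math

# Illustrative assumptions (not the quoted article's exact figures):
GROWTH = 33.0         # per-collision amplification, roughly d/r = 1 m / 3 cm
theta = 1.0e-19       # assumed Heisenberg-limited initial angular uncertainty, radians
target = math.pi / 2  # 90 degrees

collisions = 0
while theta < target:
    theta *= GROWTH
    collisions += 1
# With these numbers, roughly a dozen collisions amplify the
# uncertainty in the recoil angle past 90 degrees.
```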

I did give my reasons for stating how I came up with a rough figure for how far in advance you might be able to predict weather. Although these figures were rough and based on the simplifying assumptions that I stated, the rough result tallied pretty well with the couple of months that I understood was the best guess when I was an undergraduate.

zoosima said:
1. Being specific about the positioning of the planet and other large gravitational bodies is not the same as being influenced by quarks. As far as I know, there is going to be just as much vacuum fluctuation on one side of Pluto as the other. Again, it is a huge stretch to go from solar-sized bodies to the influence of single quarks and leptons.

2. If the effects were to be significant, it would take longer than the life of the universe for them to become so. We get 2 million years of accuracy with the predictions we can make now. Let's assume we get 1 meter of accuracy today (which is generous; it is probably much worse than that). The Planck length is 1.616252 × 10^-35 meters, which means it would take 2e6/1.616252e-35, or about 2e41 years. That is longer than even the longest estimates of how long it will take for the heat death of the universe. Incidentally, the orbit will have decayed due to tidal forces long before then as well.
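The arithmetic in point 2 can be checked directly (a sketch of the rough estimate as stated, nothing more):

```python
PLANCK_LENGTH = 1.616252e-35  # metres
YEARS_PER_METRE = 2.0e6       # the rough claim: ~2 million years of accurate
                              # prediction per metre of present-day accuracy

# Scaling the predictive horizon down to Planck-scale perturbations:
horizon_years = YEARS_PER_METRE / PLANCK_LENGTH
# horizon_years is on the order of 1e41, far beyond the expected
# lifetime of the universe.
```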

3. If other non-quantum events intervene before the 2e41 years it will take for quantum effects to be significant, then quantum effects will not be significant. For example, the non-quantum effect of the Andromeda galaxy colliding with ours will show up in only 2.5 billion years from now.

4. This is what I was talking about with respect to effective granularity. Whether randomness exists or not, it is not necessarily an input to a chaotic system. Assuming it is, justifying it with hand-waving, and failing to do even the simplest calculations to verify your claims, is what I mean by being 'intellectually lazy'
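For what it's worth, the arithmetic in point 2 checks out to within rounding. A minimal sketch on the post's own assumptions (2e6 years of prediction horizon per meter of accuracy, scaled linearly down to the Planck length):

```python
# Reproduce point 2's back-of-envelope estimate on its own assumptions.
horizon_per_meter = 2e6        # years of accurate prediction at 1 m accuracy
planck_length = 1.616252e-35   # meters

years = horizon_per_meter / planck_length
print(f"{years:.1e} years")    # order 1e41 years, the ballpark quoted above
```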

So there are other factors that also make it unpredictable. I didn't deny that. I am saying that even if these factors didn't exist, the system itself contains enough sensitivity to initial conditions to make its behaviour undetermined beyond a certain timescale.


zoosima said:
That is a huge amount of extrapolation from a few sentences of a pop-sci article. Do a little bit of research on black and grey squirrels and we learn that they did not develop from a single random mutation but are common in the Americas, where they developed as part of a long evolutionary process. http://en.wikipedia.org/wiki/Black_Squirrel

Also from your article:
"At the time when grey squirrels were new to the UK, black squirrels started to be noticed on a Hertfordshire common. The first sighting is believed to be as early as 1912."
This quote seems to claim simultaneous introduction of black and grey squirrels. As the Wikipedia article shows, black and grey squirrels have differential success depending on geography. So a population of black and grey squirrels was introduced to the UK, and the subtype that was most appropriate for the UK became dominant. So the only thing that you might characterize as random is the introduction, but that was probably by humans. So do you want to call human behavior random? Do you want to call it directed? Either way it is not an issue that is particular to evolution in nature.

Was there a small initial population of black squirrels in England in 1912? Were they lucky to breed? Why isn't random human action an evolutionary force?

The discovery of penicillin was an accident. That has had a massive effect on the subsequent evolution of many bacteria populations. Why isn't that another random factor?
 
An analogy is fine if you are using it to define a term. An analogy is not sufficient to make claims about the properties of a system, if you intend the analogy to 'stand in for the system'

Maybe I should say "why pick a bad analogy?" If species are fluids, okay... but they are not. The differences between a fluid and a species are large enough that you will never be able to prove anything or make a persuasive claim with it. The simple claim that a species is somehow a fluid is so incredibly under-defined that it doesn't establish any claim. For example, is an evolutionary fluid a fluid with laminar flow or turbulent flow? Is it viscid or inviscid? Can I apply the Navier-Stokes equations to solve evolutionary problems? Is it a compressible or incompressible fluid? More like a liquid or more like a gas? Is it frozen in flux? What shape container does it fill? How do I interpret mass, pressure, density, temperature, momentum, velocity for an 'evolutionary fluid'? By the time you've answered all these questions, you could have made significant progress reasoning about the question directly. Oh yes, and when it comes down to it, there are huge outstanding questions about fluids themselves that we can't answer. So why not pick a system we can reason about?

So is your analogy accurate? No. Does it share some properties with evolution? Perhaps. Can we apply conclusions about your fluids to evolution? Not Bloody Likely.

I take it you didn't enjoy "River out of Eden" much.
 
And there seems to be a pretty good explanation amongst experts that while it makes sense to call mutations random (though they aren't strictly so), because they happen without respect to how they will affect the organism they code for, natural selection determines which of these mutations will multiply exponentially (having the chance to accumulate more mutations) and which will die out--this is the essence of evolution... and this accounts for evolved complexity and seeming design when there is no intent behind the process.

And these experts also implicitly (and sometimes explicitly) use probabilistic treatments of natural selection.

From The Extended Phenotype:

"A selection pressure as weak as 1 in 1000 would take only a few thousand generations to push an initially rare mutation to fixation".

"If the selection pressure we are discussing is very strong, that is if one replicator makes its possessors very much more likely to survive than its alleles do..."

Dawkins invokes a probabilistic treatment, as he would have to, given his background.

Articulett, how is this not a probabilistic treatment of natural selection?
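Dawkins' "few thousand generations" figure can be sanity-checked with the textbook deterministic selection recurrence. A sketch in Python; the starting frequency of 1% ("initially rare") is my own illustrative assumption, not something from the book:

```python
def generations_to_fixation(s, p=0.01, threshold=0.99):
    """Iterate the standard selection recurrence p' = p(1+s) / (1 + p*s)
    until the allele frequency passes the threshold (near-fixation)."""
    gens = 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

print(generations_to_fixation(0.001))   # roughly 9,000 generations
```

With s = 0.001 (a "selection pressure as weak as 1 in 1000") the allele goes from 1% to 99% in on the order of 10^4 generations, the same order of magnitude as Dawkins' remark.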
 
Articulett, how is this not a probabilistic treatment of natural selection?

Jimbob - how do you not get that a probabilistic treatment doesn't necessitate an acausal relationship?
 
And jimbob... I've explained my point a thousand times. The nuts get to the top through probabilities, I suppose... but that IS irrelevant to understanding how they always seem to end up there. And I have you on ignore. Don't bother asking me leading questions you cannot comprehend the answer to anyhow. That is mijo-esque. I've been there; done that. You can have the last word. I refuse to let you inflict it on me, however.

(Your obfuscation regarding probabilities is fantastic, however, if you don't really want people to understand the basic science that ensures that the big nuts will settle on top... if, instead, you hope that they'll be open to the idea that there is a plot amongst nut sellers to make it look like there are more big nuts than there actually are. Kudos.)
 
Dawkins invokes a probabilistic treatment, as he would have to, given his background.

Articulett, how is this not a probabilistic treatment of natural selection?
(My answer to this question might be considered a severe oversimplification. So, if anyone else on the thread wants to correct me or expand upon it, please do so: )

Perhaps the easiest way to think about it is that the probability ultimately doesn't matter. The non-randomness of selection would allow the adaptation (or mutation) to propagate under an enormous range of possibilities and probabilities. Whether a selection pressure is as weak as 1 in 1000 or as strong as 999 in 1000, the adaptation would eventually reach fixation in the species.

Perhaps the timing would be different, but the outcome would still be empirically predictable.
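That claim can be illustrated numerically with the standard deterministic selection recurrence. In this sketch I map the 1-in-1000 and 999-in-1000 pressures onto selection coefficients s = 0.001 and s = 0.999, which is my reading of the post, not necessarily the author's:

```python
def generations_to_near_fixation(s, p=0.01, threshold=0.99):
    # Deterministic selection recurrence: p' = p(1 + s) / (1 + p*s).
    gens = 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# Both a weak and a strong pressure carry the allele to near-fixation;
# only the number of generations differs.
weak = generations_to_near_fixation(0.001)
strong = generations_to_near_fixation(0.999)
print(weak, strong)
```

Both runs terminate at the same threshold; the weak pressure just takes thousands of generations where the strong one takes a handful, which is exactly the "timing differs, outcome doesn't" point.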
 
I wasn't exactly appealing to authority; when I took my physics degree, the consensus was that chaotic systems quickly required resolution down to the quantum level to make predictions. I suspect that Sol Invictus is a little more current than me, but he has confirmed that the consensus hasn't changed much.
Apparently you didn't learn too much... do you have any conception of how large a number 2e41 years is?

Here is one of the other posts that I was looking for (why restate something that someone else has written lucidly?)
Especially when you seem incapable of making your points lucidly.

And another link discussing how classical chaotic systems are affected by quantum uncertainty:
You seem to be missing the point here Jimbo. Some classical systems may interact with quantum systems. All the examples of other people making 'authoritative claims' involve people claiming that it is possible for there to be quantum interactions between classical systems and quantum systems.

You make the huge mistake of assuming that all chaotic systems will interact with quantum systems. Do you really not understand various levels of granularity? It seems like we had identified the point exactly in the other chaos thread. In case you forgot... not all chaotic systems are sensitive to quantum effects. Macroscopic systems may or may not be chaotic, and if they are chaotic, whether they are QM-sensitive varies; but a general rule of thumb is that the further removed they are from QM in scale, the less likely it is that QM is going to be significant. When I say insignificant, I mean that the probability of the outcome of the system being changed by quantum effects in the entire history of the universe is less than nanoscopic.

At this point you might as well be talking about how likely it is for all my molecules to quantum teleport across the room. Both the arguments have the same form:

Jimbo: QM says that it is always possible that the molecules in your body could simultaneously teleport across the room.
Me: Yes, but statistics indicates that it will never happen.
Jimbo: Well look here, I found this article where a real scientist (omg!) says that particles quantum teleport all the time.
Me: I just did the calculation and it shows that the probability of many particles teleporting simultaneously in the lifespan of a billion, billion, billion, billion universes is much less than one.
Jimbo: Yeah, but it could still happen
Me: *Facepalm/Weeps for the educational system of whatever country jimbo is from*

Not forgetting the earlier link discussing billiard balls, from my OP on the other thread:

We also came up with some other facts from that thread. Let's see, what were they?
#1 Billiard balls in the real world will never be QM sensitive because of friction.
#2 Technically, only mathematical systems can be truly chaotic.
#3 Being committed to the fact that chaotic systems can exist in the real world commits oneself to the fact that chaotic systems will have various degrees of granularity (or sensitivity to effects on different scales).

I did give my reasons for stating how I came up with a rough figure for how far in advance you might be able to predict the weather. Although these figures were rough and based on the simplifying assumptions that I stated, the rough result tallied pretty well with the couple of months that I understood was the best guess when I was an undergraduate.
The problem with the math used was that it was wrong. You used a model that no credible scientist would even shake a stick at. You assumed that since the quality of the Met Office weather model improved by a factor of 3 over 20 years, if you just gave a more detailed input to that model it could produce as accurate a prediction as you want.

They're using a discrete computational model. Do you have any idea how silly and inappropriate linear extrapolation is? Do you realize that at some point no amount of rainfall accuracy is going to help predictions? Do you have any idea how complex rainfall models are? Do you even understand the details of the Met Office's model? This wasn't just a few simplifying assumptions; this was making things up. You don't get to make those assumptions unless you have good reason, and you certainly don't, especially when you claim these models are non-linear. I.e., why would you make linear assumptions about their capacity to predict?

This would be like me saying: "Republicans won the presidential election by -500K votes in 2000, and they won by 3M votes in 2004, thus I can accurately predict that the republicans will win the 2008 presidential election by 7M votes."

There is a certain point when simplifying assumptions go from helpful to daft. You went way past that point with your example.
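The underlying problem can be made concrete: when forecast error grows exponentially, the prediction horizon scales only logarithmically with input precision, so linear extrapolation wildly overstates what better inputs buy you. A sketch with a hypothetical error-doubling time of one day (an illustrative number, not the Met Office's actual figure):

```python
import math

def horizon(epsilon, tolerance=1.0, growth_per_day=2.0):
    """Days until an initial error epsilon grows past tolerance,
    assuming the error multiplies by growth_per_day each day."""
    return math.log(tolerance / epsilon) / math.log(growth_per_day)

# A millionfold improvement in input accuracy buys only ~20 extra days:
print(horizon(1e-3), horizon(1e-9))
```

Under these assumptions, going from millimetre-scale to nanometre-scale input error extends the useful forecast by about 20 days, not by a factor of a million. That is why extrapolating "factor of 3 better in 20 years" linearly is meaningless.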


So there are other factors that also make it unpredictable. I didn't deny that. I am saying that even if these factors didn't exist, the system itself contains enough sensitivity to initial conditions to make its behaviour undetermined beyond a certain timescale.

It becomes undetermined about 2e41 years after the heat death of the universe, assuming nothing intervenes. Do you understand how big this number is? Let's assume that the heat death of the universe occurs at 1e40 years (it is probably closer to 1e20). Then how many years will it be before your orbit is sensitive to quantum effects? 1.9e41... Your claims are asinine. 2e41 years after the sun has burned out, 2e41 years after the Andromeda galaxy has collided with ours. Quantum uncertainty will never show up, because Pluto will have collided with something before then, a decidedly un-quantum effect.

Actually, before quantum uncertainty in the initial conditions is significant, Pluto's protons and neutrons will have decayed into their constituent quantum particles.

Was there a small initial population of black squirrels in England in 1912? Were they lucky to breed? Why isn't random human action an evolutionary force?
We know there was some initial population. We know that the black squirrels came from the Americas because they are related. We know they bred and became successful because they were well suited to the environment. You would have a stronger case if the grey squirrels had become successful, as they were poorly suited to the environment. As it is, what happened is exactly what a deterministic theory would predict.

When we first started talking about this I made the statement unambiguously that when it comes to human action all bets are off. The reason I say that is that a discussion of whether human behavior is random is even more inane than the discussion we are currently having. Are you really trying to tell me that you know who introduced those squirrels and what their reasoning was?

The discovery of penicillin was an accident. That has had a massive effect on the subsequent evolution of many bacteria populations. Why isn't that another random factor?

Again, you really want to talk about human motivations? The best you'll get out of this line of discussion is that humans are random, but I think most people will be less likely to concede that point than they will evolution. I can only assume that you are pursuing this because you've failed to establish your claim via any more rational lines of inquiry.

Moreover, are you claiming that you know that no one else would have developed antibiotics if it hadn't been discovered when it was? If you can predict alternate histories, then nothing is random, so either way you lose.

Also:
"The discovery of penicillin is attributed to Scottish scientist Sir Alexander Fleming in 1928 and the development of penicillin for use as a medicine is attributed to the Australian Nobel Laureate Howard Walter Florey.
However, several others had noted earlier the bacteriostatic effects of Penicillium: The first published reference appears to have been in 1875, when it was reported to the Royal Society in London by John Tyndall[1]. Ernest Duchesne documented it in his 1897 paper; however it was not accepted by the Institut Pasteur because of his young age. In March 2000, doctors at the San Juan de Dios Hospital in San Jose (Costa Rica) published manuscripts belonging to the Costa Rican scientist and medical doctor Clodomiro (Clorito) Picado Twight (1887-1944). The manuscripts explained Picado's experiences between 1915 and 1927 about the inhibitory actions of the fungi of genera Penic. Clorito Picado had reported his discovery to the Paris Academy of Sciences, yet did not patent it, even though his investigation had started years before Fleming's."

Simultaneity in human discovery is common. Especially because there is evidence that penicillin had been noted elsewhere, I have strong reason to believe its discovery was inevitable.

One final point. Why did you disappear from the chaos thread? It seemed we were making some real progress until you decided to flee.
 
Oh, 24 pages of pointless bickering. Let me solve half of that dilemma: when creationists and "intelligent design" people use the word "random" to describe evolution, they are using it wrong.
 
