
Randomness in Evolution: Valid and Invalid Usage

Cyborg, he has been failing to grasp the concept and repeating the same inanities for over a year... Like Mijo, he is convinced that it's because he understands more than everyone else (including actual experts)... it isn't ever, ever going to change. He cannot grasp it.

He is so muddled in his understanding that I don't think he can actually convey a comprehensible understanding of evolution to anyone... ever. It's garbled, jumping tenses, and mixing up the model with specific examples.

For what it's worth, you are clear to me. I have no idea if the same old people are clear to anyone... or if they are even clear to each other. Jimbob's whole "identical inputs"/different outputs argument is so convoluted because he's not talking about IDENTICAL inputs at all... he tosses in a new variable or an event, which isn't identical, and then says that this makes the outcome probabilistic, per his non sequitur of an example. If this life wasn't "destined" (whatever that means) because some random thing could have affected the outcome -- then, to jimbob, evolution is random. To Mijo it's random so long as random things (described as "anything to do with probability") are involved in natural selection.

It's whacked. It's nonsense. You try too hard to make a dent in the impenetrable. Their descriptions could apply equally to poker... and they would be just as garbled and uninformative about what poker is or how the game is played as they are about evolution.
 
Cyborg, with no redundant information it would look like random noise, but not be: every time you uncompress it, it turns into Shakespeare; every time you attempt to uncompress random noise, it turns into random noise. That is a difference.

Are you disagreeing with the statement that chaotic systems produce random outputs over long timescales? I have said that over short timescales chaotic systems are deterministic (perhaps I didn't stress that sufficiently), and that is true. But over long timescales, although the outputs depend on previous conditions, slight errors get magnified until, far enough forward, accurate prediction would require accuracy at the quantum level. Beyond this point the output is not only unpredictable with respect to the initial conditions, it is random, because it is determined in part by future random events.
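
A minimal sketch of that magnification of slight errors, using the logistic map as a stand-in chaotic system (my choice of illustration, not an example anyone in the thread has given):

# Logistic map x -> r*x*(1-x) at r = 4: a textbook chaotic system.
# Two runs starting one millionth apart diverge completely, even
# though every step is perfectly deterministic.
r = 4.0
a, b = 0.200000, 0.200001

for step in range(1, 31):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step}: {a:.6f} vs {b:.6f} (gap {abs(a - b):.6f})")

By step 30 the two trajectories have nothing in common, which is exactly the short-timescale-deterministic, long-timescale-unpredictable behaviour described above.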

Articulett, could you explain what Cyborg means to me please? I thought I understood, but disagreed with, his point, but now I have no idea.


Articulett, my contention is quite simple. Many biological systems are chaotic; natural selection is certainly affected by chaotic systems, for example the weather. In fact natural selection has all the hallmarks of a chaotic system. In other words it is random, because what reproduces is governed in part by chance. This long-term randomness is a fundamental aspect of chaotic systems, and is uncontroversial.

Over long timescales, this chaotic nature means that the direction of evolution is likely to change randomly. A single mutation in a disease can and does cause a vast change in the selective pressures affecting a population. This is a random change in "direction".
 
Cyborg, with no redundant information it would look like random noise, but not be: every time you uncompress it, it turns into Shakespeare; every time you attempt to uncompress random noise, it turns into random noise. That is a difference.

That's a pretty bold statement: you're telling me there's no interpretation under which I could take "real" random noise and turn it into "meaningful" information.

I don't know what magic mathematical function it is that you have that can distinguish "real" random noise from "fake" random noise but I'd like to see it.

(I.e. if you can't see that this is mathematically impossible you will never correct your understanding).
 
So if you run the uncompression algorithm on random noise you expect to get meaningful information?

Should you try some other algorithm, the compressed works of Shakespeare could very well produce a random-seeming result. However, we are talking about using a known decompression algorithm, and that is highly unlikely* to produce anything meaningful on a similarly sized random stream of numbers.

There is implicit information within the random noise, which concerns the distribution of the random noise.



*The whole argument about monkeys and typewriters, and entropy, can show this. A Google search on Project Gutenberg shows that the text, when zipped, is 2.15 MB. Now this won't be perfect compression, but it gives an idea: about 17 megabits.

Even if the compressed works were "only" 2 million bits long, this is still 2^(2,000,000) possible combinations. I feel confident in saying that that particular arrangement of bits won't happen by chance.
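
To put rough numbers on that footnote (a back-of-the-envelope sketch in Python; the 2.15 MB figure is from the post above, the rest is just unit conversion):

import math

zipped_bits = 2_150_000 * 8        # 2.15 MB zipped -> ~17.2 million bits
print(f"zipped works: about {zipped_bits / 1e6:.1f} million bits")

# Even the generous "only 2 million bits" case:
digits = 2_000_000 * math.log10(2)
print(f"2^(2,000,000) is about 10^{digits:,.0f} possible sequences")

A 1-in-10^602,060 shot is a reasonable thing to feel confident about.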
 
All use of antibiotics based upon penicillin is accidental because the original discovery was accidental.

The use isn't accidental, but there was a sudden random alteration in the fitness landscape for certain disease-causing bacteria that occurred as a consequence of the discovery of antibiotics and their subsequent use.

It isn't one of the best examples of evolutionary direction being altered by random events, but you could argue that it is an example.
 
So if you run the uncompression algorithm on random noise you expect to get meaningful information?

You haven't read this carefully: there is no "the" uncompression algorithm.


Er yes there is:

Look - the case of sequences with very few bits is both trivial and uninteresting for these purposes. Obviously we are not going to be able to decide whether 10 is a random sequence, as opposed to 01 or 11 for example. The interesting cases are sequences with many bits.

Yes, they are trivial cases, but this is simply about getting you to accept the principle of the thing and abandon the language trickery you have latched onto.

If you won't accept the trivialities then it's pointless dealing with larger cases.

Do you or do you not understand why it is invalid to pretend you have compressed information down to a single bit if you need to have that information represented in full in the algorithm for decompressing the data?

I don't understand why this disconnect is occurring: if I specify the algorithm:

print 101101110111000010110101011101001111010010100001

To compress:

101101110111000010110101011101001111010010100001

And then give it the input:

1

Then the number of bits I have encoded this sequence to is 49 at the very minimum - assuming no bits to describe the "print" part of the algorithm. I have gained absolutely nothing; I have merely shifted the input into the algorithm.

Shifting input into the algorithm DOES NOT WORK because we consider the complexity of the algorithm and input.

Algorithm AND input.
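
A toy way to make that bookkeeping concrete (the cost function and both "programs" below are my own illustration, not a real Kolmogorov-complexity measure):

SEQ = "101101110111000010110101011101001111010010100001"  # the 49 bits above

# "Compressor" that hides the whole sequence inside the program:
program_a, input_a = f"print('{SEQ}')", "1"

# Honest baseline: a generic program plus the sequence as its input:
program_b, input_b = "import sys; print(sys.argv[1])", SEQ

def cost(program, data):
    # crude count: 8 bits per program character, plus the input bits
    return 8 * len(program) + len(data)

print("hide-in-program:", cost(program_a, input_a), "bits")
print("honest baseline:", cost(program_b, input_b), "bits")
# Either way, the 49 sequence bits get paid for somewhere.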

Nonsense. I've given you an algorithm, which I described in one short English sentence, which can compress any sequence - no matter how many bits it contains - down to one bit.

This?

EDIT - another one is an algorithm that just produces every possible sequence of increasing numbers of bits, like 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, etc. Very easy to specify, and will produce any finite sequence.

And how do you propose to specify which sequence this algorithm is to produce without ending up back where you started?
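
Here is that enumerator as a Python sketch of my own, along with the catch the question above is pointing at: to pick out one particular sequence you must supply its index, and the index of an n-bit string is itself about n bits long.

from itertools import count, product

def all_bitstrings():
    # Yields "0", "1", "00", "01", "10", "11", "000", ... forever.
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

target = "10110"  # short stand-in; the 49-bit sequence works the same way
for index, s in enumerate(all_bitstrings()):
    if s == target:
        print(f"'{target}' is item {index}: naming the index takes "
              f"~{index.bit_length()} bits vs {len(target)} bits of string")
        break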


cyborg said:
Do you or do you not understand why it is invalid to pretend you have compressed information down to a single bit if you need to have that information represented in full in the algorithm for decompressing the data?

You have been arguing that you do know the decompression algorithm.

So if a randomly generated sequence of 2 million bits could be compressed, even slightly, it wouldn't be random?

You need to reread any information theory lecture notes you have.

  • You can't predict what the next number in a random sequence will be.
  • If you can predict what the next set of numbers in a sequence will be, then you don't need the sequence; the information is redundant. This means that you can compress information until you can't predict what the numbers will be.
  • This does mean that perfectly compressed data is unpredictable. It does not mean that it is impossible to analyse a sequence of previously generated random data and compress it. There will be (random) patterns in the random data, and these could be compressed (see the sketch below).
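
A quick illustration of that last bullet, assuming Python's zlib as the compressor: a skewed random source is still random, but it carries fewer than 8 bits of entropy per byte, so its (random) patterns compress.

import random, zlib

random.seed(1)

# Uniform over all 256 byte values: ~8 bits of entropy per byte,
# so zlib can't shrink it (it actually grows slightly).
uniform = bytes(random.randrange(256) for _ in range(100_000))

# Uniform over just 4 byte values: random, but only 2 bits of
# entropy stored in each 8-bit byte, so it compresses well.
skewed = bytes(random.choice(b"ABCD") for _ in range(100_000))

for name, data in [("uniform", uniform), ("skewed", skewed)]:
    print(f"{name}: {len(data):,} -> {len(zlib.compress(data)):,} bytes")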
 
Dancing David, do you see my point that chance events do not only affect the selection of individuals but also the direction of evolution, and that over long timescales this means the direction of evolution will have changed randomly?
 
Er yes there is:

I cannot fathom why you have highlighted the parts you have - at all.

You have been arguing that you do know the decompression algorithm.

No - I've said that the information comprising the algorithm must be included with any input when declaring what any sequence is encoded to. That does not fix the behaviour of the algorithm.

It does not mean that it is impossible to analyse a sequence of previously generated random data and compress it. There will be (random) patterns in the random data and these could be compressed.

And you know that your random generator is random how now?

Oh right, it's because it generates random data... except when it doesn't. Then it's still random because it's a random generator. Because you said it is.

That's a big FAIL on grasping the point.
 
And you know that your random generator is random how now?

Oh right, it's because it generates random data... except when it doesn't. Then it's still random because it's a random generator. Because you said it is.

That's a big FAIL on grasping the point.

Lottery machines are probably sufficiently random, due to the same chaotic dynamics that govern snooker balls. The number of counts from a Geiger counter in a short enough timeframe would also be random. You would have to use numbers generated by some physical process, but that isn't a conceptual problem.

A simple analogy:

If a die is shaken two million times, it would not be surprising to see patterns in the results. These could quite possibly be compressed. Just because a number is generated by a random process doesn't mean it can't be compressed. A sequence of two million sixes is just as likely (1 in 6^2,000,000) as any other particular sequence; the order is important if you are turning the sequence into a single number.
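
This is easy to try, assuming Python's random and zlib modules: a fair die written out as ASCII digits carries log2(6) ≈ 2.58 bits of entropy per roll inside 8-bit characters, so even a perfectly random sequence of rolls compresses to roughly a third of its size.

import random, zlib

random.seed(42)
rolls = "".join(random.choice("123456") for _ in range(2_000_000))

packed = zlib.compress(rolls.encode())
print(f"{len(rolls):,} rolls -> {len(packed):,} bytes "
      f"(~{8 * len(packed) / len(rolls):.2f} bits per roll)")
# Random process, compressible output: the redundancy is in the
# 8-bits-per-digit encoding, not in the rolls themselves.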
 
These could quite possibly be compressed.

You will inevitably have patterns - because a pattern can be as short as "11" or "00" etc...

You could even have really long runs. And they might be compressible.

But you have to deal with the entire sequence.

The rule of thumb for compression is that for every increase you can make in one area, there will be another area where compression must decrease. You can't get around the entropy of the sequence.

It is absolutely no good having a compressible sequence if compressing it requires you to expand other sequences. This is why attempting to compress random sequences generally fails, and it is the trap you have fallen into here.
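
The counting argument behind that rule of thumb, as a sketch of my own: there are 2^n strings of length n but only 2^n - 1 strings shorter than n, so no lossless scheme can shrink every input.

n = 10
inputs = 2 ** n                              # all n-bit strings
shorter = sum(2 ** k for k in range(n))      # lengths 0 .. n-1
print(f"{inputs} possible inputs, only {shorter} shorter outputs")
# At least one n-bit input must map to something at least n bits long.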
 
Here is a discussion about compression of random bitstreams:

Note that the random data is less compressible than the data which the author claims is nonrandom. However it is still compressible.

This whole discussion about whether compressible data is random or not has little bearing on the OP or the use of the term random in evolution.

I object to the term "nonrandom", which implies "inevitable".

Back to the OP:

Do you understand my point about chaotic systems having random outcomes over long timescales?

I have said that in stable environments you might not need to discuss randomness, and the word "random" might be confusing; however, I see no difficulty in simply telling the truth by adding the phrase "tend(s) to".

At first glance it seems reasonable to state that evolution doesn't need chaotic systems to work; however, any implementation that I can think of will have the replicators altering each other's fitness environments, i.e. there would be many positive feedback loops, so I'd think evolutionary systems would all be chaotic, although I am not certain.

Could you tell me how your approach is remotely helpful for discussing evolutionary history? It is more complex, and it doesn't reflect reality.
 
Why does "nonrandom" imply inevitable to you, jimbob, when it comes to evolution? Is a poker hand inevitable? You've already agreed that's it's akin to evolution as far as the first part being "random" and the second part (playing the game) being probabilistic.

We'd describe Poker by saying the cards are dealt randomly and then the outcome of the game is determined by how the game is played. We'd describe evolution by saying the mutations happen randomly, but the outcome is determined by natural selection. The details as to "how the game is played", like the details of "natural selection", are more important than "probabilities" in conveying understanding of the process, right?

Why are you hearing something that isn't there in the description of evolution as given by the experts? You don't seem to hear it when we talk about Poker.
 
Hi Walt, thanks for the response!

I will remind you that what you said was "It ain't evolution as we know it without mutation."

Now the deal is that allele combinations and the control genes for growth can be very important in the variability of traits.

Hazel eyes are caused by the combination of the double dominant green with the dominant brown; blue is the double recessive. So green is double green, hazel is green:brown, and brown can be double brown or brown:blue.

The point being that regardless of the ontology of the variation, there can be and is variation that is not dependent upon mutation, which is the point I was trying to make and which the other two posters, and perhaps you, seem to want to ignore.
I am not ignoring those posters; I am disagreeing with them.

The variation you point to, how did that come about? Yes, natural selection will cause variation by changing the frequencies of alleles, and possibly by creating new combinations of alleles. However, if the frequency of one particular allele is 0, it will remain at 0.

Evolution, as we know it, does not only explain such things as the different breeds of dog. It is the uniting factor of biology; it describes how we are related to the platypus and every other organism. Evolution without mutation does not explain the descent from prokaryotes to all the species today. In other words, evolution without mutation is not evolution as we know it.

Walt
 
Indeed, Walter Wayne. Without mutation, multicellular organisms wouldn't have arisen. (Actually, given the chemistry of DNA and RNA, it would also be impossible for there to have been no mutation.)

Articulett said:
Why does "nonrandom" imply inevitable to you, jimbob, when it comes to evolution? Is a poker hand inevitable? You've already agreed that's it's akin to evolution as far as the first part being "random" and the second part (playing the game) being probabilistic.
A better analogy might be a game akin to Dungeons and Dragons, where the traits modulate how lucky an organism needs to be to reproduce. That is still a poor analogy, but it's getting closer.

A game of poker, where every so often (very rarely) the rules change slightly, so different hands are strong, and where sometimes weak hands can beat strong ones...

We'd describe Poker by saying the cards are dealt randomly and then the outcome of the game is determined by how the game is played. We'd describe evolution by saying the mutations happen randomly, but the outcome is determined by natural selection. The details as to "how the game is played", like the details of "natural selection", are more important than "probabilities" in conveying understanding of the process, right?
Fair enough, but I am pointing out that natural selection is probabilistic, i.e. an organism could seem "fit" but fail to reproduce due to bad luck (a gust of wind at the wrong time). It makes little sense to claim that that organism wasn't really "fit" because that doesn't really agree with observation.

A real example:


  • Barn owls could have (say) 12 chicks per parent.
  • In some areas the population of barn owls is stable.
  • This means that, on average, one offspring per parent reproduces.
  • Most owlets will be similar to their parents.

I would argue that once you have removed "runts" or other grossly unfit owlets, most would be fairly similar, so an "average" owlet might have (say) a 10% chance of reproducing.

This means that any individual beneficial trait is unlikely to spread. However, over the entire population, some will do so. It also means that deleterious traits are likely to die out very quickly. A 10% increase in "fitness" gives an 11% chance of reproducing, whilst a 10% decrease gives a 9% chance of reproducing.

Chance is more important in determining which owlets survive than subtle differences in traits. That doesn't mean that you can't assess which traits are beneficial, but at that level of detail, you need a probabilistic treatment of natural selection.
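
A hypothetical Monte Carlo sketch of those numbers (the 12-chick brood and the ~10% reproduction chance come from the posts above; the branching-process framing and every name below are mine): follow a single owlet lineage and ask how often it ever gets established.

import random

def lineage_establishes(p, brood=12, max_gens=100, cap=500):
    """One owlet lineage: each carrier raises `brood` chicks, and each
    chick independently survives to reproduce with probability p. The
    lineage counts as established once it passes `cap` carriers."""
    carriers = 1
    for _ in range(max_gens):
        if carriers == 0:
            return False
        if carriers >= cap:
            return True
        carriers = sum(1 for _ in range(carriers * brood)
                       if random.random() < p)
    return carriers >= cap

def establishment_rate(p, trials=1000):
    return sum(lineage_establishes(p) for _ in range(trials)) / trials

# Stable baseline: mean offspring 12 * (1/12) = 1.0
print("average owlet:", establishment_rate(1 / 12))
# ~10% fitness edge: mean offspring 1.1 -- most such lineages still die out
print("10% advantage:", establishment_rate(1.1 / 12))

Run like this, essentially no average lineage establishes, and even the 10% advantage only establishes somewhere around one time in five or six: that is the sense in which chance swamps subtle trait differences for any individual.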

How do you describe what "a selective pressure of as little as 1:1000" actually means without invoking probabilities?

Again, I wonder if this is a semantic difference over the word "random".

I think it is misleading to describe any chaotic system as "nonrandom", as over long enough timescales the behaviour is random.

Evolutionary algorithms can have nonrandom selection, and are good examples for describing how powerful evolution is. However, as soon as you are dealing with biological evolution (or indeed any imperfect self-replication*) there are multiple feedback loops and the system is almost certainly chaotic, which means it will certainly be random over long timescales.


*e.g. biological evolution.
 
