
Randomness in Evolution: Valid and Invalid Usage

You obviously have no idea what "trivial case" means.

I know what "any" means. "Trivially" he's wrong.

He's still wrong non-trivially:

http://www.faqs.org/faqs/compression-faq/part1/section-8.html

Here's the important bit:

Theorem:
No program can compress without loss *all* files of size >= N bits, for
any given integer N >= 0.

Proof:
Assume that the program can compress without loss all files of size >= N
bits. Compress with this program all the 2^N files which have exactly N
bits. All compressed files have at most N-1 bits, so there are at most
(2^N)-1 different compressed files [2^(N-1) files of size N-1, 2^(N-2) of
size N-2, and so on, down to 1 file of size 0]. So at least two different
input files must compress to the same output file. Hence the compression
program cannot be lossless.
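The counting step in that proof can be checked directly for small N. A quick sketch (not from the FAQ itself, just verifying its arithmetic):

```python
# Check the pigeonhole count behind the proof: there are 2**N files of
# exactly N bits, but only 2**N - 1 binary strings strictly shorter than
# N bits (counting the empty string), so a lossless compressor that
# shortens every N-bit file is impossible.
for N in range(1, 12):
    n_inputs = 2 ** N
    n_shorter_outputs = sum(2 ** k for k in range(N))  # lengths 0 .. N-1
    assert n_shorter_outputs == n_inputs - 1
print("pigeonhole count verified for N = 1 .. 11")
```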
 
Are you talking about evolution as it works in nature, or some model of evolution?
 
There is nothing but "some model of evolution". Nature doesn't go about thinking, "ah yes, this is how evolution works so this is what I must do."
 
No it can't.

Simple counter-example for you - compress this sequence:

1

That's easy to compress - beep means the sequence "0", no beep means "1". Since you gave "1", I send nothing.

If you don't want to count that, all you have to do is amend what I said to apply to all finite sequences of at least 2 bits.

I know what "any" means. "Trivially" he's wrong.

He's still wrong non-trivially:

http://www.faqs.org/faqs/compression-faq/part1/section-8.html

Here's the important bit:

No, you either didn't read or didn't understand what I wrote. What I said was that any finite sequence can always be compressed. What the theorem says is that no one algorithm can compress all sequences of length N and larger. Those are totally different statements, and they are not in contradiction.

Now - what is your definition of a finite random sequence?
 
No it can't.

Simple counter-example for you - compress this sequence:

1

That's easy to compress - no signal means the sequence "1", a signal means "0". Since you gave "1", I send no bits (no signal).

If you don't want to count that, all you have to do is amend what I said to apply to all finite sequences of at least 2 bits.

I know what "any" means. "Trivially" he's wrong.

He's still wrong non-trivially:

http://www.faqs.org/faqs/compression-faq/part1/section-8.html

Here's the important bit:

No, you either didn't read or didn't understand what I wrote. What I said was that any finite sequence can always be compressed. What the theorem says is that no one algorithm can compress all sequences of length N and larger. Those are totally different statements, and they are not in contradiction.

Now - what is your definition of a finite random sequence?

Ooh, ooh, I have an example of a finite random sequence: it is described in this post:

That's a Red Herring.

It doesn't matter. The billiard balls can only respond to the unfolding situation. That doesn't make their response random. I fail to see why you can't make this distinction.

The response to an event is determined. The occurrence of the event is not. However, since the response is determined, identical sets of events produce identical sets of responses. It's not hard to understand.

Replace a die with a list of numbers. Take all your quantum events that affect all your snooker balls and make them the same time after time. Does the same thing happen? Yes.

What is so hard to ****ing understand?


Replace a die with a list of numbers. Take all your quantum events that affect all your snooker balls and make them the same time after time. Does the same thing happen? Yes.

That would seem to be a finite random list. I'll bet there is no simpler algorithm that can generate these numbers every time, other than that list.
 
That would seem to be a finite random list. I'll bet there is no simpler algorithm that can generate these numbers every time, other than that list.

I'm not sure I understand your example. You want to make a list of numbers based on some presumably random process (like positions of snooker balls after 24 bounces)?

That's trivial to compress - whatever it is, just call it 1.

The point is, I don't think there's any way to tell if something is random (or even define the term) if you only have a finite number of instances of it. You need an infinite number, which we never have.
 
You and the other two are hung up on mutation, it is NOT the only thing that natural selection acts upon.
I never said it was.

Do you really think that arctic foxes are white because of a mutation? Do I really have brown eyes because of a mutation (both my parents have hazel eyes)?
"Arctic foxes are white because of mutation" does not follow from my statement about the importance of mutation as a component of evolution. Evolution explains a lot more than why your eye colour changed from one generation to the next.

By the way, if I recall, hazel eyes are a blend of the co-dominant brown and blue. Thus your eyes, as compared to your parents', are purely hereditary. But the blue gene your parents have is believed to be a recent mutation, so their hazel eyes are the result of mutation and selection. They cannot be said to be caused by just mutation, or just selection.

Link
The team, whose research is published in the journal Human Genetics, identified a single mutation in a gene called OCA2, which arose by chance somewhere around the northwest coasts of the Black Sea in one single individual, about 8,000 years ago.
So your parents' eyes were hazel because of mutation ... and selection, recombination, drift, ....

Walt
 
That would seem to be a finite random list. I'll bet there is no simpler algorithm that can generate these numbers every time, other than that list.

I'm not sure I understand your example. You want to make a list of numbers based on some presumably random process (like positions of snooker balls after 24 bounces)?

That's trivial to compress - whatever it is, just call it 1.

The point is, I don't think there's any way to tell if something is random (or even define the term) if you only have a finite number of instances of it. You need an infinite number, which we never have.

Sorry, I was being slightly snide. (In the context of the surrounding posts, the first few numbers wouldn't be random, but later ones would be).

My real question was aimed at Cyborg, which was:

Does he consider this to be a list of random numbers?

What about a table of numbers obtained by rolling a set of dice repeatedly?

I think he is saying that all random numbers consist of

"Sequences for which the shortest algorithm (with input included) for producing them is the sequence itself." so the sequence can't be compressed.

Wouldn't that make any perfectly compressed communication random?

I also can't see why it wouldn't require something magical to prevent lists of random numbers from containing patterns: 10% of random numbers will divide by 10, and 1% will divide by 100, if you turn the sequence into one long number (which is what we are doing in typing it).

I have another sequence which consists of three numbers:

2 4 6

I can't think of another way of defining those three numbers, and no more than those three, that is shorter, without some prior information (like calling it one).
 
That's easy to compress - beep means the sequence "0", no beep means "1". Since you gave "1", I send nothing.

I thought you might take that out.

No, you either didn't read or didn't understand what I wrote. What I said was that any finite sequence can always be compressed.

No. You cannot compress a random sequence with any algorithm and gain any reduction in information required to express it.

For example, if you were to say, "I can compress 10 by creating an algorithm that outputs 10 if the input is 1," then, per the definition I gave earlier, which includes the size of the algorithm, the algorithm must at the very least contain the string "10" and hence be at least as large as the sequence it's trying to encode - meaning you gain nothing by encoding it as the sequence "1", or even "".

Compression is not recursively applicable - your statement, "Any finite sequence can always be compressed," would allow one to repeatedly apply compression - say, reducing the sequence by 1 bit each time - until there are no bits left in the sequence. This is clearly nonsense. If you used a different algorithm at every stage, every algorithm must be included in the count of how much information was required - you can't magic up the data from nowhere.

This means:

That's trivial to compress - whatever it is, just call it 1.

Is a cheat that I've already covered.

So before you accuse me of not reading you properly I suggest you make sure you're not assuming what you think I'm saying.

Wouldn't that make any perfectly compressed communication random?

Absolutely - so should any decently compressed file.
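This can be seen with any off-the-shelf compressor. A quick sketch using Python's standard-library zlib module (the exact byte counts depend on the library version, but the direction of the comparison doesn't):

```python
import zlib

patterned = b"01" * 500                 # highly regular 1000-byte string
once = zlib.compress(patterned, 9)      # compresses dramatically
twice = zlib.compress(once, 9)          # the output already looks random...

assert len(once) < len(patterned)
# ...so a second pass gains nothing; it only adds header overhead
assert len(twice) > len(once)
print(len(patterned), len(once), len(twice))
```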
 
No. You cannot compress a random sequence with any algorithm and gain any reduction in information required to express it.


Let's do an example. Suppose you claim the sequence 10 is random. Well, here's my compression algorithm:

input |||| output
10 |||| 0
00 |||| 1
01 |||| 00
11 |||| 10

This algorithm compresses 10 and 00, but not 01 or 11 - so it is perfectly consistent with the theorem you quoted (of course), but it also compresses the sequence you claimed was random.

Now, because I specified an algorithm that can handle any input losslessly, it took a while. But I can also simply do this:

input |||| output
anything |||| 0

That compresses any sequence.

Now, you wanted an algorithm to produce a sequence... that's easy too. Here's a very short algorithm: produce a random (or pseudo-random, doesn't matter) sequence of 1's and 0's. Eventually the given sequence will be produced. :)

EDIT - another one is an algorithm that just produces every possible sequence of increasing numbers of bits, like 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, etc. Very easy to specify, and it will produce any finite sequence.
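That enumeration is easy to write down. A sketch (this version also emits the empty sequence first, before 0 and 1):

```python
from itertools import count, product

def all_bitstrings():
    """Yield every finite bit string, in order of increasing length."""
    for length in count(0):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = all_bitstrings()
print([next(gen) for _ in range(7)])
# -> ['', '0', '1', '00', '01', '10', '11']
```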
 
I never said it was.

"Arctic foxes are white because of mutation" does not follow from my statement about the importance of mutation as a component of evolution. Evolution explains a lot more than why your eye colour changed from one generation to the next.

By the way, if I recall, hazel eyes are a blend of the co-dominant brown and blue. Thus your eyes, as compared to your parents', are purely hereditary. But the blue gene your parents have is believed to be a recent mutation, so their hazel eyes are the result of mutation and selection. They cannot be said to be caused by just mutation, or just selection.

Link

So your parents' eyes were hazel because of mutation ... and selection, recombination, drift, ....

Walt


Hi Walt, thanks for the response!

I will remind you that what you said was "It ain't evolution as we know it without mutation."

Now the deal is that allele combinations and the control genes for growth can be very important in the variability of traits.

Hazel eyes are caused by the combination of the double dominant green with the dominant brown; blue is the double recessive. So green is double green, hazel is green:brown, and brown can be double brown or brown:blue.

The point being that regardless of the origin of the variation, there can be and is variation that is not dependent upon mutation. Which is the point I was trying to make, and which the other two posters, and perhaps you, seem to want to ignore.

But to say that this requires the mutational accident would be like saying that all penicillin-based antibiotics are dependent upon the accident of the mold growth.

Which is nonsense - of course the trait was needed and accidentally noticed.

However, the growth of the mold in large solution vats, the refinement of the material to extract the penicillin, and the concentration of the active agent are all very deliberate and very controlled.

So which is the 'more' important part, the fortuitous discovery or the deliberate development of the medication? To say that modern antibiotics are all 'accidental' because they were discovered by 'accident' is exactly comparable to saying all natural selection is 'random' because it has a 'random' element.

Is the treatment of a severe bacterial infection and sepsis really accidental?
 
YOU HAVE NOT INCLUDED THE SIZE OF THE ALGORITHM IN YOUR CLAIM THAT THE SEQUENCE IS COMPRESSED.

Need I point this out yet again?

You seem to be rather confused.

First, the algorithm I gave is clearly more or less as simple as is possible. Moreover I described it in one short sentence, which bounds its complexity, and it compresses ANY sequence no matter what the length is.

Second, the complexity of a compression algorithm isn't usually the point. If I had an algorithm that could compress any sequence losslessly it would violate the theorem you quoted above, regardless of how complex the algorithm itself was. That theorem has nothing to do with the complexity of the algorithm - it simply proves that no such algorithm exists.
 
You seem to be rather confused.

I am not.

First, the algorithm I gave is clearly more or less as simple as is possible.

It is not simple enough to have a size of zero.

Second, the complexity of a compression algorithm isn't usually the point.

It is here.

You are trying to prove that a sequence can be compressed arbitrarily by playing language tricks. You have not understood the point of the definition - you have to include this information. You do not get to magic up zero-bit-length algorithms that hide arbitrary-bit-length data.
 
It is not simple enough to have a size of zero.

Look - the case of sequences with very few bits is both trivial and uninteresting for these purposes. Obviously we are not going to be able to decide whether 10 is a random sequence, as opposed to 01 or 11 for example. The interesting cases are sequences with many bits.

You are trying to prove that a sequence can be compressed arbitrarily by playing language tricks. You have not understood the point of the definition - you have to include this information. You do not get to magic up zero-bit-length algorithms that hide arbitrary-bit-length data.

Nonsense. I've given you an algorithm, which I described in one short English sentence, which can compress any sequence - no matter how many bits it contains - down to one bit. The information required to specify my algorithm is obviously bounded, but the sequences it will compress are not. So your definition does not suffice.
 
ETA: Cyborg, if you used a compression algorithm on the works of Shakespeare, so that it could be compressed no further, then by your definition isn't the result a random number?

If not why?

If you said that no general algorithm could compress all random signals, and in general, random signals do not compress much, then I would agree with you.

But that can't be what you are claiming. Because I could say that no general algorithm could compress multiple results of the snooker ball example, which you claim is nonrandom for some reason.
 
Look - the case of sequences with very few bits is both trivial and uninteresting for these purposes. Obviously we are not going to be able to decide whether 10 is a random sequence, as opposed to 01 or 11 for example. The interesting cases are sequences with many bits.

Yes, they are trivial cases, but this is simply about getting you to accept the principle of the thing and abandon the language trickery you have latched onto.

If you won't accept the trivialities then it's pointless dealing with larger cases.

Do you or do you not understand why it is invalid to pretend you have compressed information down to a single bit if you need to have that information represented in full in the algorithm for decompressing the data?

I don't understand why this disconnect is occurring: if I specify the algorithm:

1|print 101101110111000010110101011101001111010010100001

To compress:

101101110111000010110101011101001111010010100001

And then give it the input:

1

Then the number of bits I have encoded this sequence to is 49 at the very minimum - assuming no bits to describe the "print" part of the algorithm. I have gained absolutely nothing but shifting the input into the algorithm.

Shifting input into the algorithm DOES NOT WORK because we consider the complexity of the algorithm and input.

Algorithm AND input.
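The tally being insisted on here can be written out explicitly. A sketch (charging the "print" opcode itself zero bits, as in the post above):

```python
# The "print <literal>" program must embed the full sequence, so the
# total description length (algorithm + input) cannot beat the raw sequence.
sequence = "101101110111000010110101011101001111010010100001"

algorithm_bits = len(sequence)   # the literal embedded in the program
input_bits = 1                   # the single-bit input "1"
total = algorithm_bits + input_bits

print(len(sequence), total)      # -> 48 49
assert total >= len(sequence)    # nothing is saved by shifting input into the algorithm
```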

Nonsense. I've given you an algorithm, which I described in one short English sentence, which can compress any sequence - no matter how many bits it contains - down to one bit.

This?

EDIT - another one is an algorithm that just produces every possible sequence of increasing numbers of bits, like 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, etc. Very easy to specify, and it will produce any finite sequence.

And how do you propose to specify which sequence this algorithm is to produce without ending up back where you started?
 
But that can't be what you are claiming. Because I could say that no general algorithm could compress multiple results of the snooker ball example, which you claim is nonrandom for some reason.

I did not say it is "non-random" - I said it is trivially easy to create a deterministic set of data from a random source and then have the same results time and time again.

The important part of an interacting system is not whether or not some behaviour is "random" or "non-random" - that is mathematically undecidable (see the link) - the important part is the deterministic relationships, i.e. what things will have a causal effect on other things and, conversely, what things will not have a causal effect on other things.
 
