mijopaalmc
Philosopher - Joined Mar 10, 2007 - Messages 7,172
No it can't.
Simple counter-example for you - compress this sequence:
1
You obviously have no idea what "trivial case" means.
Theorem:
No program can compress without loss *all* files of size >= N bits, for
any given integer N >= 0.
Proof:
Assume that the program can compress without loss all files of size >= N
bits. Compress with this program all the 2^N files which have exactly N
bits. All compressed files have at most N-1 bits, so there are at most
(2^N)-1 different compressed files [2^(N-1) files of size N-1, 2^(N-2) of
size N-2, and so on, down to 1 file of size 0]. So at least two different
input files must compress to the same output file. Hence the compression
program cannot be lossless.
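For reference, the counting step in that proof is easy to verify mechanically. Here is a minimal Python sketch (not from the FAQ, just an illustration) that enumerates the 2^N inputs and the 2^N - 1 possible shorter outputs for a small N:

```python
from itertools import product

N = 3  # small enough to enumerate everything

# All 2^N distinct files of exactly N bits.
inputs = ["".join(bits) for bits in product("01", repeat=N)]

# Every possible output strictly shorter than N bits:
# 2^0 + 2^1 + ... + 2^(N-1) = 2^N - 1 strings in total.
outputs = ["".join(bits) for k in range(N) for bits in product("01", repeat=k)]

print(len(inputs), "inputs vs", len(outputs), "shorter outputs")  # 8 vs 7

# By the pigeonhole principle, any compressor that maps every N-bit file
# to a strictly shorter file must send two different inputs to the same
# output, so it cannot be decoded losslessly.
assert len(inputs) > len(outputs)
```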
I know what "any" means. "Trivially" he's wrong.
He's still wrong non-trivially:
http://www.faqs.org/faqs/compression-faq/part1/section-8.html
Here's the important bit - the theorem quoted above.
That's easy to compress - no signal means the sequence "1", a signal means "0". Since you gave "1", I send no bits (no signal).
If you don't want to count that, all you have to do is amend what I said to apply to all finite sequences of at least 2 bits.
No, you either didn't read or didn't understand what I wrote. What I said was that any finite sequence can always be compressed. What the FAQ theorem says is that no one algorithm can compress all sequences of length N and larger. Those are totally different statements, and they are not in contradiction.
Now - what is your definition of a finite random sequence?
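To make that distinction concrete, here is a hedged Python sketch (the name make_codec and the sample bytes are invented for illustration): for any one fixed finite sequence you can build a codec that shrinks that sequence to nothing, at the cost of slightly expanding every other input, which is exactly why the claim does not contradict the theorem quoted above.

```python
def make_codec(target: bytes):
    """Build an encoder/decoder pair tailored to one specific sequence.

    The target compresses to zero bytes; every other input grows by one
    byte. The 'savings' are really hidden in the decoder, which has to
    contain the target itself.
    """
    def encode(data: bytes) -> bytes:
        if data == target:
            return b""          # the chosen sequence costs nothing
        return b"\x00" + data   # everything else pays a 1-byte penalty

    def decode(code: bytes) -> bytes:
        if code == b"":
            return target
        return code[1:]

    return encode, decode


encode, decode = make_codec(b"some particular finite sequence")
assert decode(encode(b"some particular finite sequence")) == b"some particular finite sequence"
assert decode(encode(b"anything else")) == b"anything else"
print(len(encode(b"some particular finite sequence")))  # 0 bytes
```

The trick, of course, is that the sequence now lives inside the decoder rather than in the transmitted message.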
That's a Red Herring.
It doesn't matter. The billiard balls can only respond to the unfolding situation. That doesn't make their response random. I fail to see why you can't make this distinction.
The response to an event is determined. The occurrence of the event is not. However, because the response is determined, identical sets of events produce identical sets of responses. It's not hard to understand.
Replace a die with a list of numbers. Take all your quantum events that affect all your snooker balls and make them the same time after time. Does the same thing happen? Yes.
What is so hard to ****ing understand?
That would seem to be a finite random list. I'll bet there is no simpler algorithm that can generate these numbers every time, other than that list.
I never said it was. You and the other two are hung up on mutation; it is NOT the only thing that natural selection acts upon.
"Artic foxes are white because of mutation" does not follow my from my statement about the importance of mutation as a component of evolution. Evolution explains alot more than why your eye colour from last generation to this one.Do you really think that artic foxes are white because of a mutation? Do I really have brown eyes because of a mutation (both my parents have hazel eye).
So your parents' eyes were hazel because of mutation ... and selection, recombination, drift, .... The team, whose research is published in the journal Human Genetics, identified a single mutation in a gene called OCA2, which arose by chance somewhere around the northwest coasts of the Black Sea in one single individual, about 8,000 years ago.
That would seem to be a finite random list. I'll bet there is no simpler algorithm that can generate these numbers every time, other than that list.
I'm not sure I understand your example. You want to make a list of numbers based on some presumably random process (like positions of snooker balls after 24 bounces)?
That's trivial to compress - whatever it is, just call it 1.
The point is, I don't think there's any way to tell if something is random (or even define the term) if you only have a finite number of instances of it. You need an infinite number, which we never have.
That's easy to compress - beep means the sequence "0", no beep means "1". Since you gave "1", I send nothing.
No, you either didn't read or didn't understand what I wrote. What I said was that any finite sequence can always be compressed.
Wouldn't that make any perfectly compressed communication random?
No. You cannot compress a random sequence with any algorithm and gain any reduction in information required to express it.
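That is easy to check empirically with an off-the-shelf compressor (a rough sketch; zlib stands in here for "any algorithm", which of course it is not):

```python
import os
import zlib

random_data = os.urandom(100_000)        # effectively incompressible
patterned_data = b"0123456789" * 10_000  # highly redundant, same length

print("random:   ", len(random_data), "->", len(zlib.compress(random_data, 9)))
print("patterned:", len(patterned_data), "->", len(zlib.compress(patterned_data, 9)))
# Typical result: the random bytes come out slightly *larger* than they
# went in, while the patterned bytes shrink to a few hundred bytes.
```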
I never said it was.
"Artic foxes are white because of mutation" does not follow my from my statement about the importance of mutation as a component of evolution. Evolution explains alot more than why your eye colour from last generation to this one.
By the way, if I recall, hazel eyes are a blend of the co-dominant brown and blue. Thus your eye colour, as compared to your parents', is purely hereditary. But the blue gene your parents have is believed to be a recent mutation, so their hazel eyes are the result of mutation and selection. They cannot be said to be caused by just mutation, or just selection.
Link
So your parents' eyes were hazel because of mutation ... and selection, recombination, drift, ....
Walt
That compresses any sequence.
YOU HAVE NOT INCLUDED THE SIZE OF THE ALGORITHM IN YOUR CLAIM THAT THE SEQUENCE IS COMPRESSED.
Need I point this out yet again?
You seem to be rather confused.
First, the algorithm I gave is clearly about as simple as is possible.
Second, the complexity of a compression algorithm isn't usually the point.
It is not simple enough to have a size of zero.
You are trying to prove that a sequence can be compressed arbitrarily by playing language tricks. You have not understood the point of the definition - you have to include this information. You do not get to magic up zero-bit-length algorithms that hide arbitrary-bit-length data.
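One way to make the "include the size of the algorithm" point concrete, as a sketch under the assumption that the decoder's stored state counts toward the description length:

```python
sequence = b"the finite sequence we claim to 'compress' down to one bit"

# The "call it 1" scheme: the transmitted message is a single symbol, but
# the decoder must already hold the original sequence verbatim in order
# to reproduce it on demand.
transmitted = b"1"
decoder_state = sequence  # what the receiver has to be given in advance

total_description = len(transmitted) + len(decoder_state)
print(len(sequence), "bytes in,", total_description, "bytes of message plus decoder")

# The message plus the decoder it relies on is never shorter than the
# original sequence: the information has been relocated, not reduced.
```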
Look - the case of sequences with very few bits is both trivial and uninteresting for these purposes. Obviously we are not going to be able to decide whether 10 is a random sequence, as opposed to 01 or 11 for example. The interesting cases are sequences with many bits.
Nonsense. I've given you an algorithm, which I described in one short English sentence, which can compress any sequence - no matter how many bits it contains - down to one bit.
EDIT - another one is an algorithm that just produces every possible sequence of increasing numbers of bits, like 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, etc. Very easy to specify, and will produce any finite sequence.
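That enumerator is easy to write down (a sketch; note that producing every finite sequence somewhere in an infinite list is not the same as compressing any particular one, since a sequence's index in the list is roughly as long as the sequence itself):

```python
from itertools import count, product

def all_bit_strings():
    """Yield every finite bit string in order of length: 0, 1, 00, 01, 10, 11, ..."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = all_bit_strings()
print([next(gen) for _ in range(12)])
# ['0', '1', '00', '01', '10', '11', '000', '001', '010', '011', '100', '101']
```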
But that can't be what you are claiming, because I could say that no general algorithm could compress multiple results of the snooker ball example, which you claim is nonrandom for some reason.