Missing genetic information refutes neo-Darwinism

BTW, do you know that from a logical viewpoint reductionism implies that a just-hatched chick is less ordered (complex) than the just-fertilized egg, because only processes which increase entropy are possible?

If you can't grasp the difference between the formal concepts of 'complex' and 'ordered' and the informal ones, you will not be able to make your case - it is that simple.

Your understanding of logic is shallow enough to get you into serious trouble.
 
Humans have more than 20 000 properties, so the genes cannot be an intricate description of a human.
O.k. this is finally causing me to de-lurk.

I call BS. The reason being that the only rigorous scientific framework applicable - Information Theory - has had this covered for a long time. It is called "Kolmogorov Complexity" and it basically establishes a measure of how "compressible" any given data set is.
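To illustrate the idea (a minimal Python sketch of my own; note that a real compressor only yields an upper bound on Kolmogorov Complexity, so it can show that a string is simple but never prove that it is complex):

[code]
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: an upper bound on K(data)."""
    return len(zlib.compress(data, 9))

repetitive = b"AB" * 10_000      # highly ordered, 20 000 bytes
random_ish = os.urandom(20_000)  # incompressible with high probability

print(compressed_size(repetitive))  # tiny: the pattern is found
print(compressed_size(random_ish))  # close to 20 000: no pattern found
[/code]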

Apart from how vague and unscientific the "more than 20 000 properties" claim is, one would at least need to establish that these claimed "more than 20 000 properties" are indeed "more than 20 000" independent degrees of freedom, because otherwise it looks just like the apparent "Complexity" of the decimal expansion of "PI", which, in fact, is the result of a rather simple algorithm.

And: NO! The Kolmogorov Complexity of any given data set cannot be calculated by anything less than brute force. Thus, I wonder how the claimant (Earthborn) came to the conclusion that these supposed "more than 20 000 properties" are all independent of each other.

Calculating the compressibility of data of this order of magnitude by brute force is way beyond what the combined computing resources of this particular planet can achieve today. Thus, it can be argued that the claimant (Earthborn) must have been sent in from the future, where such computing may be possible.
 
Well, I'm still waiting for a reply to my question. Wobegong (or whatever) told me that souls were material, and I asked him/her where to find them. Still no reply. Really, there are microscopes now that can see individual atoms, so wherever these things are, we can find them. All I need is directions. And no South Africa map jokes, OK?
 
The reason being that the only rigorous scientific framework applicable - Information Theory - has had this covered for a long time.
I guess that silly old thing called "biology" can safely be ignored. :nope:

Apart from how vague and unscientific the "more than 20 000 properties" claim is, one would at least need to establish that these claimed "more than 20 000 properties" are indeed "more than 20 000" independent degrees of freedom, because otherwise it looks just like the apparent "Complexity" of the decimal expansion of "PI", which, in fact, is the result of a rather simple algorithm.
There are about 20 000 genes. If the genome is a compressed version of the phenotype, then they would be more analogous to the bytes within the rather simple algorithm for Pi.

And: NO! The Kolmogorov Complexity of any given data set cannot be calculated by anything less than brute force.
Kolmogorov complexity can only be defined for a specified programming language, in which it is the shortest possible program that can be written to produce the string. I could define a programming language in which Pi is compressed to 1 bit, if I define 1 as "the rather simple algorithm for Pi" and 0 as "NOT the rather simple algorithm for Pi". Therefore the Kolmogorov Complexity of any given data set depends on the language used to describe it. It is not some absolute metaphysical quantity.
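Spelled out as a toy sketch (hypothetical names of my choosing, with a Leibniz series standing in for "the rather simple algorithm"):

[code]
def pi_algorithm(n_terms: int = 1_000_000) -> float:
    """The rather simple algorithm for Pi (Leibniz series: slow, but short)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def decode(bit: int) -> float:
    """Decoder for the degenerate 1-bit 'language' described above."""
    if bit == 1:
        return pi_algorithm()  # 1 means "the rather simple algorithm for Pi"
    raise ValueError("0 means 'NOT the rather simple algorithm for Pi'")

print(decode(1))  # ~3.141592; in this language the description of Pi is 1 bit
[/code]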

There is no programming language in biology, therefore "Kolmogorov complexity" is irrelevant.

Thus, I wonder how the claimant (Earthborn) came to the conclusion that these supposed "more than 20 000 properties" are all independent of each other.
They are not independent, and they are also not independent of the physical environment in which they develop.

Calculating the compressibility of data of this order of magnitude by brute force is way beyond what the combined computing resources of this particular planet can achieve today. Thus, it can be argued that the claimant (Earthborn) must have been sent in from the future, where such computing may be possible.
Quite possible. I can compress it quite easily by defining the language in which I describe these datasets as follows: 1 = Me, 0 = Not Me. I have calculated the Kolmogorov complexity of my phenotype and it is 1 bit.

We people from the future don't like to use concepts that are irrelevant to the issue at hand, though. We use all that computing power for more important issues.
 
Huntsman said:
Well, he's right, technically, that (so far as we know, on the macro level) only entropy-neutral or entropy-increasing processes are possible.

But that's general, not in regards to every part of an interacting system.
Hence my "Oh, please." I was reacting to the old canard that evolution is impossible because all we can get everywhere is an increase in entropy.

And, of course, people forget to note that this problem would also rule out life from any source.

~~ Paul
 
Hence my "Oh, please." I was reacting to the old canard that evolution is impossible because all we can get everywhere is an increase in entropy.

And, of course, people forget to note that this problem would also rule out life from any source.

~~ Paul

I figured you probably knew, but I wanted to make sure none of our studio audience misunderstood (and make sure you knew, while I was at it ;)).

As to life, well, yeah. They also forget it would rule out, well, pretty much everything in existence :)
 
And in humans there are only about 20 000 of them; humans have more than 20 000 properties, so the genes cannot be an intricate description of a human.

And a deck of cards has only 52 cards, therefore you can't have more than 52 different hands of poker. :rolleyes:

The main insight of panpsychists such as Nicholas of Cusa (1401-1464) was the recognition that plants and animals do not grow from dead matter, but are built up by invisible animated entities with the involvement of perception and intelligence.

Just so I can be clear on this point...are you referring to pixies, elves, or leprechauns? The distinction may be an important one.....
 
It is a fact that the information in the genetic make-up of a human is a far cry from what is needed to transform a fertilized egg into even just a human body, let alone into a person with intelligence and consciousness....

I don't know what the information of the used parts of the human genome is....

Hmm...I am undecided. Should I laugh or cry?
 
Kolmogorov complexity can only be defined for a specified programming language, in which it is the shortest possible program that can be written to produce the string. I could define a programming language in which Pi is compressed to 1 bit, if I define 1 as "the rather simple algorithm for Pi" and 0 as "NOT the rather simple algorithm for Pi". Therefore the Kolmogorov Complexity of any given data set depends on the language used to describe it. It is not some absolute metaphysical quantity.

What is this 'Pi' you speak of in this language you have defined?
 
Division by zero

If a computer program can perform "extremely different" tasks depending on input parameters, then from a purely logical point of view we must conclude: either the program contains the information corresponding to all tasks which can be switched on by parameters, or the parameters themselves constitute the information needed for the tasks.
Remember the freshman algebra parlour trick that uses a hidden division by zero to prove that two equals one? As I've tried to point out to John Hewitt in the "Five Critiques..." thread, if "information" is anything that might be treated as meaningful by a goal-driven, pattern-sensitive agent, then the term is utterly vacuous: a big, fat, ZERO.
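For anyone who has forgotten it, one version of that trick runs as follows; every step is legitimate except the division by a - b, which is zero:

[code]
\begin{align*}
  a &= b \\
  a^2 &= ab \\
  a^2 - b^2 &= ab - b^2 \\
  (a + b)(a - b) &= b(a - b) \\
  a + b &= b && \text{(here we divide by } a - b = 0\text{)} \\
  2b &= b \\
  2 &= 1
\end{align*}
[/code]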

Meaning is a qualitative property, not a quantitative one. Information theory is concerned with quantity ONLY.
 
I guess that silly old thing called "biology" can safely be ignored.
TTBOMK, biology does not have its own independent scientific framework dealing with the quantification of information and stuff.
There are about 20 000 genes. If the genome is a compressed version of the phenotype, then they would be more analogous to the bytes within the rather simple algorithm for Pi.
Not exactly a bad analogy. Of course, we have to consider issues like self-modifying code, code interpreters being generated "on the fly", massive parallel processing and so on.

Plus, of course, there are environmental factors during development that can be considered to have a "random" influence on the phenotype. I reckon not even the most extreme kind of creationist would argue that the environment does not add at least some information.
Kolmogorov complexity can only be defined for a specified programming language,
Not true. All it requires is a description language for strings to be specified.
I could define a programming language in which Pi is compressed to 1 bit, if I define 1 as "the rather simple algorithm for Pi" and 0 as "NOT the rather simple algorithm for Pi". Therefore the Kolmogorov Complexity of any given data set depends on the language used to describe it. It is not some absolute metaphysical quantity.
It certainly is not. What you are overlooking, however, is that for any given string you cannot prove that it is complex by means other than brute force. This is independent of the language used!

Your example is not equivalent to what you were asserting originally: that a string of - say - 20 000 characters can be proven to be complex. It cannot, and therefore you cannot base an argument on the assertion that a string of 20 000 characters has (at least) a certain complexity.

Because you cannot know.
There is no programming language in biology, therefore "Kolmogorov complexity" is irrelevant.
Semantics. A programming language is not required.
I can compress it quite easily by defining the language in which I describe these datasets as follows: 1 = Me, 0 = Not Me. I have calculated the Kolmogorov complexity of my phenotype and it is 1 bit.
This way you have proven yourself to be not complex. Your original assertion, however, requires proof of the opposite. Good luck!
 
BTW, do you know that from a logical viewpoint reductionism implies that a just-hatched chick is less ordered (complex) than the just-fertilized egg, because only processes which increase entropy are possible?
No, I do not "know" that, but I do know that you're talking complete rubbish.

(The open/closed-system confusion is pointless in this context.)
Utterly pointless, so why are you confusing open and closed systems?

A chick is an open system.

No Nobel Prize for you, my lad.
 
Kolmogorov complexity can only be defined for a specified programming language, in which it is the shortest possible program that can be written to produce the string.

Er, no. I don't have my copy of Li and Vitanyi to hand for the direct citations, but this isn't the right definition. Kolmogorov complexity is not defined for languages but for specific Universal Turing Machines, and since any one UTM can implement any other in a constant-length program, K-complexity is universal for any string to within that constant.

As such, it's a property of the string, not of the computing environment.
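From memory (so treat the exact formulation as approximate), the invariance theorem says that for any two universal machines U and V there is a constant c_UV, depending only on the machines and not on the string, such that

[code]
K_U(x) \le K_V(x) + c_{UV} \quad \text{for every string } x.
[/code]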

I could define a programming language in which Pi is compressed to 1 bit, if I define 1 as "the rather simple algorithm for Pi" and 0 as "NOT the rather simple algorithm for Pi".

You could; but the Kolmogorov complexity of Pi in this programming language would not simply be 1, but 1 plus the length of the UTM that compiles and executes that program. Or perhaps shorter, if there is a more direct representation.

Therefore the Kolmogorov Complexity of any given data set depends on the language used to describe it. It is not some absolute metaphysical quantity.

Completely and totally wrong.

It's quite reasonable, then, to use Kolmogorov complexity as a measure of the information contained in a digital biological system, as long as it does something recognizably Turing-like or that can be implemented as a Turing machine. DNA transcription and replication can, so it's reasonable to talk about the K-complexity of a given piece of DNA or a given protein that results directly from transcription. Of course, when you start getting into the non-digital processes (for example, enzymes working on (analog) proteins instead of on (digital) RNA), the model breaks down.

The real problem here is not that Kolmogorov complexity is necessarily inappropriate, but that the units upon which this argument is based are inappropriate. First, a "gene" is not a "bit"; far from it. A typical gene will contain hundreds or thousands of base pairs, each of which contains (naively) two bits of information. The second is, of course, that the TM upon which biology "runs" also contains information -- human body temperature, for example, is relatively fixed and the development process can rely on it. This is why mammals like humans have shorter genomes than reptiles; mammals don't need to deal with a sudden in utero "cold snap" -- and any assessment of K-complexity has to be able to take this information into account as well.
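To put rough numbers on that naive estimate (back-of-the-envelope only; the per-gene length is an assumed round figure):

[code]
genes = 20_000               # approximate human gene count
base_pairs_per_gene = 1_000  # "hundreds or thousands": pick a round figure
bits_per_base_pair = 2       # 4 possible bases = log2(4) = 2 bits (naively)

total_bits = genes * base_pairs_per_gene * bits_per_base_pair
print(total_bits)                  # 40 000 000 bits
print(total_bits / 8 / 1_000_000)  # = 5.0 megabytes, before any compression
[/code]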

Good luck.
 
Paul:

Well, he's right, technically, that (so far as we know, on the macro level) only entropy-neutral or entropy-increasing processes are possible.

But that's general, not in regards to every part of an interacting system.

The decrease in entropy locally for a life-form is more than balanced by the increase in entropy that life form creates in the environment around it.

mmm... Not really. For a process to occur spontaneously, you have to have a negative free energy change associated with it, following this equation:

ΔG = ΔH - TΔS

where G is the Gibbs free energy, H is the enthalpy of reaction, T is the temperature in Kelvin and S is the entropy.

Basically, you can have decreased entropy in a process if you input energy. Guess what happens in living systems? They need energy for continued existence, to balance a decrease in entropy. This energy comes from chemical reactions, and in our own food chain, ultimately from the sun (but there are extremophile organisms which are independent of this).
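To make this concrete, a minimal sketch (textbook-style values for water freezing, used purely as an illustration): both ΔH and ΔS are negative, so the sign of ΔG flips with temperature, and the entropy-decreasing process is spontaneous below the melting point (at the melting point itself ΔG is essentially zero: equilibrium).

[code]
# Sign of deltaG = deltaH - T*deltaS for liquid water -> ice (per mole).
delta_H = -6010.0  # J/mol: freezing releases heat (enthalpy decreases)
delta_S = -22.0    # J/(mol*K): ice is more ordered (entropy decreases)

for T in (263.15, 273.15, 283.15):  # -10 C, 0 C, +10 C in kelvin
    delta_G = delta_H - T * delta_S
    verdict = "spontaneous" if delta_G < 0 else "not spontaneous"
    print(f"T = {T:.2f} K: deltaG = {delta_G:8.1f} J/mol -> {verdict}")
[/code]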

If no process which decreases entropy were possible, how could you explain the formation of crystals? The binding of substrate to enzyme? The separation of oil and water in an oil-water mix?

the Kemist
 
All it requires is a description language for strings to be specified.
I didn't claim that it had to be Turing complete, so you can call it what you will. It is still not something that exists in biology.

Your example is not equivalent to what you were asserting originally: that a string of - say - 20 000 characters can be proven to be complex.
I never said anything of the sort. My assertion was that the genome is not a compressed description of the phenotype. Furthermore, I said "genes", not "characters".

Because you cannot know.
I don't have to know. Even if you can write a program that produces a person's entire DNA in fewer bytes than it costs to store that DNA, it does not prove me wrong because what you say is irrelevant.

Plus, of course, there are environmental factors during development that can be considered to have a "random" influence on the phenotype.
Which disproves that the properties of the phenotype are coded in their entirety in the genome.

This way you have proven yourself to be not complex.
No, I have proven that how far you can compress information depends on the language used. It does not make the thing that the information refers to any less complex.
 
It's quite reasonable, then, to use Kolmogorov complexity as a measure of the information contained in a digital biological system, as long as it does something recognizably Turing-like or that can be implemented as a Turing machine. DNA transcription and replication can, so it's reasonable to talk about the K-complexity of a given piece of DNA or a given protein that results directly from transcription.
No doubt. But I am talking about the complexity of the phenotype and whether its complexity can be compressed into the information within the genome. And obviously it can't, as the development of the phenotype depends on factors outside the genome. Just as you said that the K-complexity is not simply 1 but 1 "plus the length of the UTM that compiles and executes that program", the K-complexity here is the length of the genome plus that of the environment in which the genome is used. And that is not a Turing machine.

First, a "gene" is not a "bit"; far from it. A typical gene will contain hundreds or thousands of base pairs, each of which contains (naively) two bits of information.
I also noticed how wuschel magically changed my "20 000 genes" into "20 000 characters".
 
biology does not have its own independent scientific framework dealing with the quantification of information and stuff.
The question then seems to be: "how may information theory be applied to biology, and what are the strongest conclusions that can be reached by doing so?"

If what the biologist is most interested in are such qualities as top running speed, tolerance for heat, swimming ability, or aggressiveness (leaving aside intelligence and consciousness), what is the starting point for an information theoretical approach?

Before beginning a search of the organism's genome, perhaps a practice exercise, using something less fuzzy. How about an example which (unlike a biological organism, as Earthborn points out) does develop in a very predictable way from a set of explicit plans comprising a detailed description of all its intricacies:

Where in the detailed set of plans for an automobile would we expect to find INFORMATION about such interesting properties as cornering ability, center of mass, fuel economy, or speed on acceleration? If we quantified the amount of information in the plans, in information theoretical terms, would we be any closer to answering questions about these properties?
 
If what the biologist is most interested in are such qualities as top running speed, tolerance for heat, swimming ability, or aggressiveness (leaving aside intelligence and consciousness), what is the starting point for an information theoretical approach?

Why on earth should biologists restrict their interests to such things?


Where in the detailed set of plans for an automobile would we expect to find INFORMATION about such interesting properties as cornering ability, center of mass, fuel economy, or speed on acceleration? If we quantified the amount of information in the plans, in information theoretical terms, would we be any closer to answering questions about these properties?

No, but we would almost certainly be closer to answering questions about manufacturing costs and tolerances. It would be a poor auto manufacturer who designed a machine for the best possible cornering ability without asking himself at some point in the process if it would be physically possible to build the thing with the equipment and finances at his disposal....
 
mmm... Not really. For a process to occur spontaneously, you have to have a negative free energy change associated with it, following this equation:

ΔG = ΔH - TΔS

where G is the Gibbs free energy, H is the enthalpy of reaction, T is the temperature in Kelvin and S is the entropy.

Basically, you can have decreased entropy in a process if you input energy. Guess what happens in living systems? They need energy for continued existence, to balance a decrease in entropy. This energy comes from chemical reactions, and in our own food chain, ultimately from the sun (but there are extremophile organisms which are independent of this).

If no process which decreases entropy were possible, how could you explain the formation of crystals? The binding of substrate to enzyme? The separation of oil and water in an oil-water mix?

the Kemist

Kemist:

Yes, but recall, we're ignoring the open/closed bit, so we instead have to consider a universal balance for all processes. The energy input must be included in the process, and that energy input created entropy as well. Which was my point. You can decrease entropy locally, which is what you stated, but that requires a corresponding increase of entropy generally, in your example where the energy was created/input/whatever. I think we're agreeing, just using differing scales :)
 
I didn't claim that it had to be Turing complete, so you can call it what you will. It is still not something that exists in biology.
These are called "models", and believe it or not, they are used in science a lot.

If I were allowed to post links, I would post this one:

www[dot]google[dot]com[slash]search?q=kolmogorov+complexity+biology+DNA

Surprise: They do use K-complexity in biology as well!

Which disproves that the properties of the phenotype are coded in their entirety in the genome.

I never claimed they are.

No, I have proven that how far you can compress information depends on the language used. It does not make the thing that the information refers to any less complex.

You're not getting it:

You
Cannot
Prove
Complexity

All you can prove is the lack of.

In your statement:

Earthborn said:
And in humans there are only about 20 000 of them; humans have more than 20 000 properties, so the genes cannot be an intricate description of a human.

You assert complexity WRT "more than 20 000 properties"; if you don't, your argument is a non sequitur.

Now I did point out to you that you cannot prove complexity, therefore, your assertion is necessarily unfounded and thus cannot be used to support the conclusion.

This does not disprove the conclusion, mind. It just invalidates your argument.
 
