
Random mutations cannot explain evolution of humans

Typical dogmatic reasoning: "Despite contradictions and lacking explanations, our beliefs are nevertheless correct."
I expressed no beliefs about how axon guidance works. Nor did I claim that the beliefs that I don't have were correct. Nor did the views that I didn't express have contradictions in them.

My point, let me restate it, is that lacking a complete explanation for something does not allow you to conclude that psychons are doing it, still less to take the absence of a complete explanation as evidence for psychons.

A typical sign of dogmatism is ignoring logical inconsistencies. For instance: On the one hand, the fact that computer-implemented algorithms cannot be improved by randomly changing bits or bytes ...
This is not true.

Another sign of dogmatism is ignoring facts.

Look up, e.g. Tierra.

On the other hand, the highly complex architecture of the human brain at birth is explained by the assumption that the necessary information is somehow generated by an algorithm. Do you have any idea, how enzymes could implement such an algorithm?
Just as I challenged you to find an enzyme without a gene, now I'd like you to find a metabolic process without an enzyme or functional RNA.

And by the way, panpsychism (or better: pandualism) is a fully legitimate hypothesis with a long tradition. I know that it is a long and difficult process to substantially change one's own world view, e.g. from pure materialism to the recognition of psychons. For me, this step was not so difficult, because already in former lives I had considered panpsychism as a reasonable scientific hypothesis.


Not even the assumption that proteins are fully coded by the DNA is true. The genetic code includes twenty amino acids. Apart from these amino acids many proteins contain other amino acids and other components ...
I am not sure which of many facts well-known to biologists you're trying to refer to, but the mechanisms underlying these facts are well-known and do not involve psychons doing magic.

Now, let's make it simple. Find me a polypeptide without a gene. It's a simple question. Don't waffle, don't try to teach your grandmother to suck eggs, find a polypeptide without a gene.

The genetic code is not universally valid as it was initially assumed. Several exceptions have been found.
I know that; I have no idea why you mention it.

Genes of plants and animals regularly contain non-coding sequences. These introns must be cut out from the RNA copies of the genes. The information indicating which regions represent no code and must be removed is not coded.
Wrong.

Some introns even cut out themselves. In several cases, RNA nucleotides are changed, deleted or inserted (RNA editing) before translation starts. In order to produce correct proteins, ribosomes sometimes skip nucleotides instead of translating them. Even from the translated sequences sometimes parts are cut out before protein folding starts. All this is not coded!
Wrong.
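To make that "Wrong" concrete: the splice boundaries are themselves sequence features (nuclear introns essentially always begin with GT and end with AG at the DNA level). Here is a deliberately oversimplified sketch of my own, not real spliceosome behavior, that recovers the exon sequence using nothing but those coded signals:

```python
import re

def splice(pre_mrna: str) -> str:
    """Cut introns out of a pre-mRNA-like string (toy model).

    Simplification: treat any stretch beginning with the donor signal
    'GT' and ending at the first following acceptor signal 'AG' as an
    intron. Real splicing also needs branch points and (for nuclear
    introns) the spliceosome, but the point stands: the boundary
    information is carried by the sequence itself, i.e. it IS coded.
    """
    return re.sub(r"GT[ACGT]*?AG", "", pre_mrna)

# exon - intron - exon; the intron is flanked by GT ... AG
pre_mrna = "ATGGCC" + "GTATGTTTTTCAG" + "TACGGA"
mature = splice(pre_mrna)
```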

After translation, many amino acid sequences efficiently take on a stable form. Biotechnologically produced random sequences do not fold into a protein.
Every polypeptide is a polypeptide. Do you mean that most of them don't have a stable tertiary structure?

The common explanation is that proteins have been selected during evolution to fold properly. Yet, if only a very small proportion of possible sequences take on a stable form, only this small proportion can undergo selection of protein function, and the probability that random mutations destroy stability is very high. Furthermore, it is improbable that a protein, selected for a stable form, also acts as a catalyst for complex functions.
It would be even more unlikely if an unstable structure could act as a catalyst.

On the other hand, there are related proteins of similar form and function, whose amino acid sequences have drifted apart substantially.
Yes, biology is robust, isn't it?

There are even cases where the completely different amino acid sequences, corresponding to different reading frames of a given RNA sequence (frameshift), result in correct proteins or parts of proteins.
I know that, too. This is just standard creationist fare. "It's complicated, so it can't have evolved".

But the fact (which you denied above) that computer programs and electronic circuits and so forth produced by variation and selection show similar complexity despite a similar apparent lack of robustness shows that your incredulity is misplaced.
 
For instance: On the one hand, the fact that computer-implemented algorithms cannot be improved by randomly changing bits or bytes, is declared irrelevant because "genes are not computer programs".
It might be relevant if it were true, but your claim is false.

First, randomized algorithms are incredibly important in computer science. There is an entire complexity class, bounded-error probabilistic polynomial time (BPP), which is foundational to the theory of quantum computing. Second, computer programs have been evolved using random evolutionary algorithms for all sorts of tasks. Theorems have been proved this way, novel FPGA designs have been created, and neural networks and other cognitive systems have been generated and honed this way. Don't assume that it's not done in research just because you aren't aware of it being done in Windows, or Word, or IE.
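For anyone who doubts this, the core of an evolutionary algorithm fits in a dozen lines. The sketch below is my own toy (a (1+1) evolutionary algorithm on the trivial OneMax fitness function, not any of the research systems mentioned above); random bit flips plus non-random selection reliably improve the bit string:

```python
import random

def evolve_onemax(n_bits=64, generations=2000, seed=0):
    """(1+1) evolutionary algorithm on OneMax (fitness = number of 1s).

    Mutation: flip each bit independently with probability 1/n
    (purely random variation). Selection: keep the mutant only if it
    is at least as fit as its parent (non-random survival). That
    combination is what climbs the hill; neither step alone does.
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    start_fitness = sum(parent)
    for _ in range(generations):
        child = [bit ^ (rng.random() < 1.0 / n_bits) for bit in parent]
        if sum(child) >= sum(parent):  # selection is not random
            parent = child
    return start_fitness, sum(parent)

start, end = evolve_onemax()
```

Fitness never decreases under this acceptance rule, so the final string is strictly better than the random starting point for any reasonable run length.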

Wogoga said:
On the other hand, the highly complex architecture of the human brain at birth is explained by the assumption that the necessary information is somehow generated by an algorithm. Do you have any idea, how enzymes could implement such an algorithm?
Yes... I've been talking to Dr. Kitten about this very topic in this very thread. The algorithm is as follows:

Neural connections are very roughly specified by axonal guidance cues (I've got to mention my favorite again, Sonic Hedgehog (Shh)). Randomly, some connections are good and others bad, some useful, some not. Activity at sensory systems then serves to hone the rest of the brain's connections. This normally involves eliminating connections that are too weak or too strong. Neurons that are under-stimulated often commit suicide (a process called apoptosis).

That is the rough outline of the algorithm; if you want the details, then you should go to grad school in neuroscience, in a lab that studies developmental neurobiology, because only very specialized people know the majority of the details.
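Just to caricature that outline in code (a toy model of my own, with made-up numbers, not real neuroscience): start with random connection strengths, let activity jiggle them, and prune whatever falls outside a viable band:

```python
import random

def refine(n_synapses=1000, rounds=50, low=0.2, high=0.9, seed=1):
    """Toy model of activity-dependent synaptic refinement.

    Coarse wiring first (random strengths, standing in for guidance
    cues like Shh), then repeated 'activity' perturbs each strength,
    and anything too weak or too strong is eliminated (pruning /
    apoptosis). What survives is shaped by use, not spelled out
    connection-by-connection in advance.
    """
    rng = random.Random(seed)
    strengths = [rng.random() for _ in range(n_synapses)]
    for _ in range(rounds):
        strengths = [s + rng.uniform(-0.05, 0.05) for s in strengths]
        strengths = [s for s in strengths if low < s < high]  # prune
    return strengths

surviving = refine()
```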


Also Wogoga, you've already conceded that complex adaptations can be generated by random mutations as long as the mutations independently aren't harmful. So your only chance of making a useful, potentially believable, quality argument is to show that there is some complex multi-step adaptation where a necessary intermediate step was harmful.
You have yet to even suggest something hypothetical, much less something plausible. In absence of this you'll be denying the overall conclusion to evolution even though every objection you've made has been carefully and methodically addressed by several people.

As an analogy to how you're behaving: imagine we're in a rowboat on a lake. You claim there is a hole in the boat and that we're going to die. I examine the boat, find no hole, and show you this. I show you that there is no water in the bottom of the boat. I show you that the edge of the boat is just as high above the water at one time as it is five minutes later. I also note that even if the boat were sinking, we're wearing life preservers and the shore is only thirty or forty feet away, so we're certainly not going to die even if the boat does sink. You insist that the boat is still sinking; you say it must not be a hole, it must be termites eating the boat's frame, and even if we don't drown we'll probably catch some disease from the water that will kill us anyway. Do you see how this works? The process repeats: you continually insist on the same conclusion, but the reasons you claim for believing it change each round. Do you see how presupposing the conclusion is illogical, how it's silly, how it doesn't make sense?
 
There might be an average amount of information, but there is no way to assume it's not a variable mapping. We know it is a variable mapping, because some amino acids can be coded in more than one way. And that doesn't even take things like protein structure into account.

You're talking about the difference between interpretation and representation.

A bit stream of 8 binary digits represents at most 8 bits of information. A DNA sequence of 4 base pairs contains at most 8 bits of information. Mapping one to the other is trivial.
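The triviality of the mapping is easy to show (a sketch of my own; the letter-to-bits assignment is arbitrary, and any fixed 2-bit code works):

```python
# Arbitrary fixed 2-bit code: each base carries log2(4) = 2 bits,
# so 4 bases <-> 8 bits <-> 1 byte.
ENCODE = {"A": "00", "C": "01", "G": "10", "T": "11"}
DECODE = {bits: base for base, bits in ENCODE.items()}

def dna_to_bits(seq: str) -> str:
    return "".join(ENCODE[base] for base in seq)

def bits_to_dna(bits: str) -> str:
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

encoded = dna_to_bits("GATC")
```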

When it comes to how one may interpret that information that is dependent on the language - and since one may have an infinite number of languages there's an infinite number of interpretations with perhaps an infinite amount of information that could be represented.

The "language" of chemistry is clearly highly complex and so is the interpretation of the meaning of DNA.

But this complexity is not to be found within the encoding of the DNA itself - this is a classic error of reasoning, of the type wogoga is making when he says there is not enough information within the DNA.
 
I understand how information theory works. I also understand the mathematics behind it, and I know that bit ends up being a bit because of the application of the base 2 logarithm in Shannon's theory.

So far, so good.

Logarithms in other bases tell you just as much. That's why I would say it's a measurement but it's not a unit.

Er,.... huh?

If you take the log in other bases, you get other units. The most common alternative formulation is by using natural logs (base e), in which case the units that you get are called "nats." Nats used to be more commonly used in Europe than in the US (BTL was a US company, after all), but I almost never see them used any more.

But,.... so what? Meters are units of length, and centimeters are also units of length, and there's a well-established correspondence between the two --- but that doesn't invalidate either one of them.


In the sense that it has no correspondence to some calibrated physical property.

Well, if you're measuring a non-physical entity, I'd be surprised if your measurement would correspond to any specific physical property -- how many inches correspond to a farad? Length and capacitance are incommensurable.

The log of a probability, or distribution, or number of states, also doesn't have units.

Certainly it does. Depending upon the base of the log, you get bits, or nats, or something less usual --- and the units are critical, because you can't subtract bits from nats without conversion, any more than you can subtract six feet from two meters and get a meaningful -4.
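The conversion between the two units is nothing more than a change of log base (a quick sketch of my own; ln 2, about 0.693, is the bits-to-nats factor):

```python
import math

def entropy(probs, base=2.0):
    """Shannon entropy of a discrete distribution.

    base=2 gives bits, base=e gives nats; the number changes, the
    quantity measured does not, exactly as with meters vs. feet.
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
h_bits = entropy(fair_coin, base=2)        # 1 bit
h_nats = entropy(fair_coin, base=math.e)   # ln 2, about 0.693 nats
```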


Or I could just take the much more conservative tack and say, it's not SI, it doesn't have an SI correspondence, and it certainly isn't derived from a combination of SI units.

So? There's not an SI standard "dollar" either, but prices are most certainly measurements, and a "dollar" is certainly a unit of price. There are, in fact, many units of price that do not correspond to any physical object (when was the last time you saw a single Turkish lira, or a US mill?) At my university, courseloads are measured (and paid for) in "credits," and of course there's no SI standard credit, either.

And the "dollar" isn't a physical unit, so it doesn't correspond to any physical process. It is, nevertheless, an economic unit, just as a "credit" is an educational one. (And my "credits" may or may not correspond with those at your "university," because our unit bases are different. If you're using a trimester system and I'm not, for example....)
 
@Dr Kitten

It's kinda funny, because I think we both understand what each other is saying. So I'm just gonna lay it out one more time. If you disagree, that's fine, but I don't think more discussion can add that much more to what I say.

There is a difference between a measure and a unit. Many measures are unitless. What you are saying is that taking a logarithm of a unitless quantity will somehow add a unit. But if that were the case then whenever you take a logarithm it would have to add a unit. That is not the case. Thus it is a measure without units.

If I look at your examples: the dollar is pretty interesting, because even though it's not SI, countries maintain huge apparatuses to precisely calibrate it. It allows us to relate abstract numerical quantities to a physical amount of value, and doing so is no simple feat. Note: Log_2(n dollars) != n dollar-bits

The farad too is very demonstrative. It, of course, is not equal to length, but it actually is commensurable, that is part of what makes it a little different than something purely abstract. F=C^2/(N*m) The farad helps us take some numerical quantity into a specific amount of physical capacitance. The idea of what does and does not constitute a unit actually makes a lot of sense if you look at it in terms of the unit cancellation equations that often show up in physics and chemistry.

I said this before, but I'll say it again: the usage of bits and bytes in computer science is different from the sense in which you would normally use a unit, in the sense that you use a unit to take an abstract quantity and establish a correlation to something in the physical world. You might say it's the difference between Unit and unit: a precise technical term and a colloquial use that bears some relation to the technical term. The colloquial use is also a more convenient way of talking about something that you don't want to have to explain every time you type it. So in the sense that we can use it as a shorthand to talk about a logarithm of a count or a probability, a way to relate two purely abstract quantities, then yes, it's a unit. But it's not a unit in the sense that Unit is used more technically in the empirical sciences to turn an abstract relation into a physical relation.

I think we can agree on that, yes? If you don't agree, that's fine, but I don't think any more discussion on the topic is going to add anything that isn't available in our previous posts. So rebut me one more time, if you must, and as long as you are as nice and as reasonable as you were in your previous posts, I intend to leave the issue as it stands.
 
On the one hand, the fact that computer-implemented algorithms cannot be improved by randomly changing bits or bytes, is declared irrelevant because "genes are not computer programs".


You haven't heard of "genetic algorithms"? I spent something like five years of my life doing exactly that --- randomly changing bits and bytes in computer-implemented algorithms and looking to see which of the random changes resulted in performance improvements.


By far the most important principle of life is neither mutation nor selection but the finalistic principle of reproduction. Organisms must at first be able to create viable copies of themselves (see also).

(In the case of humans, reproduction implies the formation of far more than 10^20 a priori rather improbable uphill chemical bonds. There are uncountable possible errors in cell replication. The assumption that the only errors worth a mention are correctly bonded DNA changes, and that a substantial proportion of these changes even has positive effects, is totally unjustified within reductionist materialism.)

In the case of genetic algorithms we have in principle on the one hand data sequences and on the other hand algorithms creating new data sequences using already existing data sequences. Whether we call such data sequences genotypes, phenotypes, individuals or something else is irrelevant. Relevant to this discussion however is that such data sequences are passive entities manipulated and copied by a computer program, which cannot improve itself by randomly changing its own bits or bytes (see also).

So genetic algorithms do not resemble real life very much. They are much closer to handwritten chain letters, which also "are capable of evolution" if humans do the real work of creating new individuals. Computer simulations such as Tierra are likewise based on the chain-letter principle rather than on the principle of biological evolution.

Cheers, Wolfgang
 
Relevant to this discussion however is that such data sequences are passive entities manipulated and copied by a computer program, which cannot improve itself by randomly changing its own bits or bytes
This far into the discussion, and you are already forgetting that Natural Selection is not a theory that relies on purely random changes.

Natural Selection is an algorithm that non-randomly selects from available variety.

Arguing that complex features cannot come about by randomly changing bits is not a valid argument. Evolutionary biologists will already agree with you. That is why the selection process is non-random: dependent on environmental and other fitness-landscape factors.

Computer simulations such as Tierra are likewise based on the chain-letter principle rather than on the principle of biological evolution.
It makes no difference what principles you think it is based on. The fact that it works is an admission that evolution is plausible. Tierra represents an idea of how it is possible for the complexity of life to emerge from an evolutionary process. Arguing that such a thing is "impossible" is no longer viable.
 
In the case of genetic algorithms we have in principle on the one hand data sequences and on the other hand algorithms creating new data sequences using already existing data sequences. Whether we call such data sequences genotypes, phenotypes, individuals or something else is irrelevant. Relevant to this discussion however is that such data sequences are passive entities manipulated and copied by a computer program, which cannot improve itself by randomly changing its own bits or bytes (see also).
In exactly the same way, the laws of chemistry cannot change themselves by "manipulating their own bits and bytes".

The analogy is fairly precise.

---

Here's a question for you. Why did you think that this was a good analogy when you wrote that "computer-implemented algorithms cannot be improved by randomly changing bits or bytes", and now think that this is a bad analogy when it turned out that you were talking rubbish and know that computer-implemented algorithms can be improved by randomly changing bits or bytes?

If the claim that this couldn't happen was an argument on your side, then surely the fact that it can is an argument on ours.
 
In the case of genetic algorithms we have in principle on the one hand data sequences and on the other hand algorithms creating new data sequences using already existing data sequences. Whether we call such data sequences genotypes, phenotypes, individuals or something else is irrelevant. Relevant to this discussion however is that such data sequences are passive entities manipulated and copied by a computer program, which cannot improve itself by randomly changing its own bits or bytes (see also).

You're obviously not a LISP or Prolog programmer.

The distinction you wish to create between data and algorithms is a false one. Any data stream can be executed; any algorithm can be encoded as data. In some languages there is an algorithmic transcription process; in others, the data is directly executable as an algorithm.

So,... no.

And as Dr. A. pointed out, the computer analogy is yours. Why were self-modifying computer programs a good model of evolution twenty posts ago and now a bad one?
 
Substitute "known" for "assumed", and that's about right.


So your credo is:

It is known that the active genetic information of less than 0.1 Gigabyte is essentially enough to determine many thousand enzyme species, differentiation into more than two hundred cell types, the highly complex anatomy of the human body at all levels, the complex brain architecture at birth, human learning capacity, instinctive behaviour and even talents.

I'm sure that at the latest in a future life you will wonder how educated persons at the beginning of the third millennium were able to believe in such a thing. Have you ever thought how much information is needed in order to build e.g. a computer?


All you need to do is find a protein the synthesis of which is not directed by DNA.


I'm not sure whether psychons able to do that still subsist. But even if they still exist, it may be rather difficult to detect them: because of their inefficient replication, they may nowhere reach a high enough concentration to be detectable with our current biotechnological means. (The same was true before the 1980s of, e.g., viruses such as HIV.) And to demonstrate definitively that such proteins cannot somehow be the result of uncommon splicing and composition, of frame shifts or of changes analogous to error correction, could be quite difficult.

However, it is a quite obvious consequence of the hypothesis of a continuous evolution that, before the invention of the highly complex translation, proteins were somehow able to create copies of themselves without DNA and RNA. A quote from the psychon theory:

During evolution, psychon-animated molecules have been joining together into ever bigger units. Animated molecules such as amino acids and nucleotides at some point began to form chains. By specialization, psychons emerged which dominated such chains. Proteins are conceivable which replicate by adding corresponding amino acids to one chain end until an identical protein can split off. Reproduction by base pairing of two complementary strands is even more efficient. The invention of translation, a complex symbiosis of various ribosomal psychons, was certainly one of the most essential steps during the evolution of life.

Cheers, Wolfgang
 
So your credo is:

It is known that the active genetic information of less than 0.1 Gigabyte is essentially enough to determine many thousand enzyme species, differentiation into more than two hundred cell types, the highly complex anatomy of the human body at all levels, the complex brain architecture at birth, human learning capacity, instinctive behaviour and even talents.

That is unfortunately a complete misrepresentation of the process of embryological development. The idea that DNA contains "all" the information necessary to build a living creature is at best outdated and at worst active deception.

Creatures are built as a result of an interaction between the DNA and the chemical and physical environment in which that is expressed. As a simple example of this --- sex development in mammals seems to be almost purely genetically determined. For reptiles, however, it is a complex combination of genes and environment (mostly temperature). Alligators are good examples. An egg incubated above about 90 degrees Fahrenheit will always be male. Below 86 degrees, it will always be female. Between 86 and 90 degrees, it is genetically determined. What's "really" going on is that the expression of DNA is a complex chemical process, and the speed and outcome of almost any chemical process is temperature-dependent (try seeing how much salt you can dissolve in hot vs. cold water for a simple example).

But from an information-theoretic perspective, this means that sex (in alligators) is not a function of genetics, but of genetics PLUS the environment. The environment contains information that is crucial to the proper development of the organism.
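That decision rule is small enough to write down (my own toy encoding of the thresholds quoted above; real temperature-dependent sex determination is messier):

```python
def alligator_sex(temp_f: float, genetic_sex: str) -> str:
    """Toy model of temperature-dependent sex determination.

    Thresholds follow the rough figures above (degrees Fahrenheit).
    Outside the 86-90 degree window the environment alone decides;
    only inside it does the genotype carry the bit.
    """
    if temp_f > 90:
        return "male"
    if temp_f < 86:
        return "female"
    return genetic_sex  # genetics decides only in the middle band
```

Note that for most incubation temperatures the second argument is simply ignored: the information determining the outcome came from the environment, not the genome.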

Mammals don't have this issue, of course, because mammals are incubated at roughly uniform temperature. But in this case, again, the proper temperature (and environment otherwise) is crucial to the proper development of the organism; the mother's womb as well as the DNA provides "information."

You are trying to use "psychons" to fill in the information-theoretic holes. Why not use something that actually exists, like a mother?
 
I'm sure that at the latest in a future life you will wonder how educated persons at the beginning of the third millennium were able to believe in such a thing. Have you ever thought how much information is needed in order to build e.g. a computer?

Hmm, I'm guessing that I'll definitely need to know the position of every single atom in the computer. That's got to be the smallest representation possible!
 
So your credo is:

It is known that the active genetic information of less than 0.1 Gigabyte is essentially enough to determine many thousand enzyme species, differentiation into more than two hundred cell types, the highly complex anatomy of the human body at all levels, the complex brain architecture at birth, human learning capacity, instinctive behaviour and even talents.

I'm sure that at the latest in a future life you will wonder how educated persons at the beginning of the third millennium were able to believe in such a thing.
Well, that was amusing.

Have you ever thought how much information is needed in order to build e.g. a computer?
Yes. And I know that it is much smaller than the amount of information you can fit on a computer.

I'm not sure whether psychons able to do that still subsist.
Well, that was honest. So, are you going to drop the assertion that DNA can't code for all the proteins from your website?

But even if they still exist, it may be rather difficult to detect them ...
I'm not asking you to detect the psychons, just a polypeptide without a gene. For starters.

However, it is a quite obvious consequence of the hypothesis of a continuous evolution that, before the invention of the highly complex translation, proteins were somehow able to create copies of themselves without DNA and RNA. A quote from the psychon theory:
It is not at all obvious. For example, many people think that RNA came before proteins.
 
Genes of plants and animals regularly contain non-coding sequences. These introns must be cut out from the RNA copies of the genes. The information indicating which regions represent no code and must be removed is not coded. Some introns even cut out themselves. In several cases, RNA nucleotides are changed, deleted or inserted (RNA editing) before translation starts. In order to produce correct proteins, ribosomes sometimes skip nucleotides instead of translating them. Even from the translated sequences sometimes parts are cut out before protein folding starts. All this is not coded!



Some quotes from SkepticWiki:

"A gene may have no introns; it may have dozens. In eukaryotic organisms (i.e. organisms whose cells have nuclei, such as plants and animals) the typical gene consists mostly of introns. An average protein-coding gene will be about 8000 nucleotides long, whereas an average piece of mature mRNA after splicing will only be 1200 nucleotides long (figures from Campbell and Reece, Biology)."

"Group 2 introns are also self-splicing, with no assistance whatsoever: purely as a result of the sequence of bases in the RNA, they curl themselves up and snip themselves out of the RNA, with the excised intron ending up in what is known as a lariat structure --- a loop of RNA with a tail."

"Nuclear introns are far from being self-splicing: rather, they are spliced with the aid of a rather complicated bit of cellular machinery called a spliceosome".

According to the psychon thesis, group 2 introns are fully self-splicing because corresponding psychons have survived. Folding of a chain into an enzymatically active form always depends on corresponding psychons, which are limited in number. So one can predict that such introns cannot always cut themselves out if genes with such introns are expressed at too high a rate.

This can be verified by experiment: Take an intron which is not very widespread, transfer it into a crucial and widely expressed gene of a fast growing organism. At some point in time, it will become impossible to further increase the genetically modified organism in number, because introns will no longer be able to cut themselves out, thus impeding the correct expression of the crucial gene. This is similar to the principle of psychon-deficit diseases.

You assume that some RNA sequences cannot be used as information carriers, because physical and chemical laws will induce these sequences to cut themselves out (and to correctly bond the two open ends of the two remaining information sequences). This seems very unrealistic to me.

In the case of introns which depend on spliceosomes you probably assume: the information about which parts must be removed from the mRNA, and which of the remaining parts have to be put together in which way (alternative splicing), is somehow coded in the DNA of the spliceosome or of other enzymes. That this is a rather implausible explanation can be seen better if we deal with the case where, by "programmed frameshifting", a ribosome corrects a fatal mutation consisting of an insertion of one base pair.

The hypothesis that such a fatal mutation coincides with other mutations being able to correct the error can be excluded. However, if we accept the simple and logically consistent psychon thesis, then such strange error-correction behaviours can easily be explained, because memory is a fundamental principle of psychons (and of life in general).

The psychons involved in the corresponding protein biosynthesis had so often in the past created correct proteins, and had become so accustomed to creating correct proteins, that they were simply able to ignore the newly introduced fatal mutation. Think about the monks of the past who copied the same texts again and again by hand. They certainly were able to recognize and correct some errors in such texts, even if they didn't understand the texts.

Cheers, Wolfgang
 
According to the psychon thesis, group 2 introns are fully self-splicing because corresponding psychons have survived.
And this conclusion is utterly untrue, since we know the biochemistry of group 2 introns, and they do it all according to the laws of chemistry with no magic psychons required.

Now, if this stuff about psychons was a real scientific hypothesis which really implied the necessity of psychons for self-splicing of group 2 introns, then this fact would disprove the psychon hypothesis.

But it isn't and it doesn't so it won't.

This can be verified by experiment: Take an intron which is not very widespread, transfer it into a crucial and widely expressed gene of a fast growing organism. At some point in time, it will become impossible to further increase the genetically modified organism in number, because introns will no longer be able to cut themselves out, thus impeding the correct expression of the crucial gene.
I don't quite follow this, but when you say this can be experimentally verified, do you actually mean that there are some experiments which have been done which verify it, or do you mean that you know of no facts that prevent you from daydreaming about an experiment that proves you right?

You assume that some RNA sequences cannot be used as information carriers, because physical and chemical laws will induce these sequences to cut themselves out (and to correctly bond the two open ends of the two remaining information sequences). This seems very unrealistic to me.
I do not understand what you think I am assuming or why, which suggests that I am not.

In the case of introns which depend on spliceosomes you probably assume: the information about which parts must be removed from the mRNA, and which of the remaining parts have to be put together in which way (alternative splicing), is somehow coded in the DNA of the spliceosome or of other enzymes.
Yes, apart from the word "assume", which is a funny term to apply to a belief supported by all the data. But if you will do as I asked and show me a process that is not dependent on enzymes or functional RNA, then I shall rethink my position. Knock yourself out.

That this is a rather implausible explanation can be seen better if we deal with the case where, by "programmed frameshifting", a ribosome corrects a fatal mutation consisting of an insertion of one base pair.

The hypothesis that such a fatal mutation coincides with other mutations being able to correct the error can be excluded. However, if we accept the simple and logically consistent psychon thesis, then such strange error-correction behaviours can easily be explained, because memory is a fundamental principle of psychons (and of life in general).

The psychons involved in the corresponding protein biosynthesis had so often in the past created correct proteins, and had become so accustomed to creating correct proteins, that they were simply able to ignore the newly introduced fatal mutation. Think about the monks of the past who copied the same texts again and again by hand. They certainly were able to recognize and correct some errors in such texts, even if they didn't understand the texts.

Cheers, Wolfgang
Summary: you don't understand programmed frameshifting, so psychons do it by magic.
 
I've been trying to tell people: it's not magic or psychons, it's the Spaghetti Monster. He weaves the nucleotides in our genome in such a way as to make us tastier for when he sweeps us up in the night and eats us to create demographic decline. I don't know why you people can't see the obviousness and simplicity of the Spaghetti Monster. It must be that you scientists are too frightened by the truth and are worried about losing your funding... how terribly unethical. Shame on you, you evidence-believing, parsimonious sell-outs!
 
Just out of curiosity (and forgive me if I missed the relevant post), has wogoga ever given us a testable hypothesis regarding psychons? If not, I suggest focusing on that: Until he does develop such a testable hypothesis he:
1. is not really engaging in science
2. has no right to criticize another idea that does help us develop testable hypotheses
3. should probably read a good, basic book on how science is done, if anyone has any recommendations
4. will probably rant and rave in incredibly ill-informed ways until he takes points #1 to 3 into consideration.

That is all.
 
One thing is sure: the information cannot come from the DNA, simply because the DNA does not contain enough information.
I think you misunderstand the role of DNA in embryology. DNA is NOT like a blueprint. There is NO one-to-one mapping of genes to parts of the body or brain.

DNA is more like a recipe. Several genes act with each other, and the emergent system is one we call a complete life form.

In a cake recipe, for example, there is no one-to-one mapping of a letter in the text to a specific piece in the cake. The whole recipe works, together, to make the whole cake.

So, of course it would seem like there is "not enough information" in DNA to do everything it does. It's not a matter of "how much" information, but how it is used.

And, as others have mentioned, physics plays a role in providing "information" into the system, as well.
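A tiny illustration of the recipe-vs-blueprint point (my own example, using the textbook "algae" L-system): a recipe of about ten bytes generates a structure thousands of symbols long, because the information lies in the iterated process, not in a symbol-for-part mapping.

```python
def l_system(axiom: str, rules: dict, steps: int) -> str:
    """Iteratively rewrite a string using a tiny rule set.

    The rules are the 'recipe'; the output is the 'organism'. No
    symbol of the rules maps one-to-one onto a piece of the output,
    yet the output is vastly larger than the recipe.
    """
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

# Lindenmayer's original algae system: two rules, one-symbol axiom.
organism = l_system("A", {"A": "AB", "B": "A"}, steps=20)
```

After 20 rewriting steps the two-rule recipe has produced a string over ten thousand symbols long, and its structure depends on the rewriting process as a whole, not on any "part list" inside the rules.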
 