Annoying creationists

Is there a point in bringing up what Einstein’s beliefs were? Looks like a textbook example of fallacious appeal to authority. This is simply more wasted screen space; another addition to the multitude of intellectually dishonest debate tactics plaguing this thread.
 
Paul, Dr Schneider designed a selection process that has very high resolution. You are evolving very small groups of bases, typically only 6 bases wide, with a comparison weight matrix typically 5 bases wide. He then sets a threshold that allows for partial matches of the base sequences to allow for selection. This gives the selection process a very high precision that I don't believe exists in reality.
Can you explain why this is "high precision"?
Also, do you believe that partially matched binding sites don't provide adherence for transcription factors in vivo?
 
No, I don't trust such instinct. On the other hand, it would help if we could find a way of formally describing those data sets that can arise through evolution.
This is being done; it is part of the field of evolution. However, this is a separate issue from separating what was designed from what evolved. Again, I hold to the notion that ID infers no process. Science looks for the way things happened, regardless of a "designer". Evolution is the best-fitting model we have, and as we learn more the theory will improve as well.

An example occurs to me, following an earlier posting on this thread from someone whom I fear I cannot remember. One evident difference between living things and watches is that watch "design" exists on only one scale, millimetres to centimetres. This is what one would expect of a designer trying to achieve an end, whereas biological organisms have design right down to the molecular level, something one would expect of evolution.

That is just handwaving, of course, but it seems it could be expressed in a more formal way. I think there may be other aspects of such data sets too that could be analysed formally.
The length-scale idea can't work because it's a moving boundary. As our microchips acquire ever finer features, this explanation fails. Another example: liposomes are designed drug carriers, and they have order on the scale of nanometers.

To repeat, evolution and ID are not discussing the same thing. ID is a philosophy not a science.
 

If less_than_three had read this thread carefully, he would have seen that for free-living prokaryotes the shortest known genome lengths are 700k base pairs, and for free-living eukaryotes the shortest known genome lengths are 1.3M base pairs, both much, much longer than the 256-base length used by Dr Schneider. The mutation rates given by Adequate of 1.7E-8 and from your own reference of 2.14x10^-8 are much, much lower than the 4E-3 value used by Dr Schneider. Now, Dr Schneider did use a small population size of 64 in his published case, and said the following:
First, I have made no claims to have read this thread carefully; in fact, I'll freely admit to being only a part-time lurker in this particular thread. The topic is not one that particularly interests me, but the nonsense being spewed out is entertaining. However, whether I have read the thread carefully or not is irrelevant to what I said, as I was simply pointing out your obvious attempt to weasel out of answering the questions asked. Paul asked you specific questions and you completely evaded answering them.

In fact, even now you continue to do so. You simply claim you've already answered them, and then go on to claim it is my fault (and, by implication, everyone else's) for not already knowing the answer because I haven't read the thread well enough. Now, you may or may not be correct in your claim that you've already provided answers, but that claim alone isn't going to cut it, as it reeks of dishonest debate tactics and will fool very few people here. If you have already answered the questions, it should be easy enough to repeat the values you used; or, if it is too inconvenient to retype the answers, simply provide a link or the post number where you've done so. That would be the reasonable way to answer such questions. Not only would it help your side of the debate, but showing that you've clearly already answered these questions of Paul's might give reason to question Paul's motive for re-asking them. The fact that you opted for evasion over a simple answer, while not providing direct reason to show you are wrong, certainly doesn't provide any reason for anyone to think you are correct either.

Now for your convenience I’ll repeat and number the questions for you.

1. What were the realistic genome lengths that you used?
2. What were the realistic mutation rates that you used?
3. What were the realistic population sizes that you used?
4. What were the realistic values for any other parameters you may have used? (A catch all since Paul’s original question was more open-ended than just the list of three parameters you stated.)
5. What were the resulting data sets that you obtained using the above parameters?

No dodging this time. Please provide a direct answer to or link/post # to where you’ve already provided an answer to each and every one of these questions.


PS – Do you play with the text formatting just to make quoting you unnecessarily complicated? :confused: :)
 
If kjkent1 would do his homework, he would find that Dr Schneider addressed this issue. You could allow for extinction.

I wonder how your God feels about your constantly using ad hominem as a means of making your points?

If I were to use your method of argumentation in the courtroom, I'd be writing checks to the court for contempt every five minutes.

Paul A thought my idea re mortality rate had some merit, and I assume he's done a fair amount of homework to understand EV. So, why don't you tell us all why this suggestion is such an obvious dead end?

Alternatively, you could continue to debate like a child if you wish. But, as I've already mentioned -- I doubt that your Lord will be impressed with your behavior.
 
Annoying Creationists

If less_than_three, kjkent1 and other evolutionarians think they can discuss Dr Schneider's ev program without studying the model and what he has written about it, perhaps they should post their comments on the paranormal forum on this web site. Otherwise, read the thread, read Dr Schneider's writings on his program, and stop whining like crybabies. Are there any evolutionarians out there who are not thin-skinned crybabies?
 
I've been having an email debate with my own creationist. As an example of order arising from mechanical action, I mentioned that if you dump a trillion tons of dirt into empty space, eventually it will form a sphere. Not a cube, or a pyramid; always a sphere. Thus showing that order (a specific shape) can arise out of disorder and simple physical laws.

His response was to assert that there was "no reason for matter to aggregate on its own."

Yes... that's right: Creationists have not yet discovered gravity.

:D
 
I wonder how your God feels about your constantly using ad hominem as a means of making your points?
They all do this.

Indeed, my creationist even accused me of ad hominem, and explained that that was what people do when they want to silence questions that are too hard to answer...

One of the most blatant examples of projection I have ever seen.
 
If less_than_three, kjkent1 and other evolutionarians think they can discuss Dr Schneider's ev program without studying the model and what he has written about it, perhaps they should post their comments on the paranormal forum on this web site. Otherwise, read the thread, read Dr Schneider's writings on his program, and stop whining like crybabies. Are there any evolutionarians out there who are not thin-skinned crybabies?

I'd love to get you on a witness stand. Your responses would positively get you a night behind bars.

I've already read all of Schneider's explanations, and I see nothing in Schneider's writing that explains why a different die off rate is irrelevant. I also see nothing that shows how different environmental stresses would not have a marked effect on the outcome of a creature at any point during its evolution.

So, would you kindly explain it for me?
 
If less_than_three, kjkent1 and other evolutionarians think they can discuss Dr Schneider's ev program without studying the model and what he has written about it, perhaps they should post their comments on the paranormal forum on this web site. Otherwise, read the thread, read Dr Schneider's writings on his program, and stop whining like crybabies. Are there any evolutionarians out there who are not thin-skinned crybabies?

Waaa!!

The big bad boy hurt me!!!
 
Kleinman said:
If less_than_three, kjkent1 and other evolutionarians think they can discuss Dr Schneider's ev program without studying the model and what he has written about it, perhaps they should post their comments on the paranormal forum on this web site. Otherwise, read the thread, read Dr Schneider's writings on his program, and stop whining like crybabies. Are there any evolutionarians out there who are not thin-skinned crybabies?
kjkent1 said:
I'd love to get you on a witness stand. Your responses would positively get you a night behind bars.

I have had you evolutionarians call me a beast; I warn you, stay away from the entrance to my cave.
kjkent1 said:
I've already read all of Schneider's explanations, and I see nothing in Schneider's writing that explains why a different die off rate is irrelevant. I also see nothing that shows how different environmental stresses would not have a marked effect on the outcome of a creature at any point during its evolution.

I doubt you have read this entire thread let alone Dr Schneider’s entire web site dedicated to this computer model. There are probably close to 100 pages of information related to ev on his site.
kjkent1 said:
So, would you kindly explain it for me?

I’ll give you one instance of when Dr Schneider talked about his selection process, and then you will have to do your own homework.
The following quotes are taken from An Analysis of Batten's Criticism of the Ev Model page. This page is linked from Dr Schneider’s ev-blog page. If you have trouble finding the page, I’ll repost the links which have been posted multiple times on this thread.
Don Batten said:
For example, the selection coefficient is extremely high, the genome is extremely small, the mutation rate high, no possibility of extinction is permitted, etc
Dr Schneider said:
the selection coefficient is extremely high. Why does a dandelion produce so many seeds? One plant could repopulate the entire yard! Why does a man or a sea urchin make millions of sperm? The answer is easy: most don't survive. But this objection is again a distraction from the point of the simulation, which was to demonstrate generation of information. Since it does that, the objection is moot.
Dr Schneider said:

the mutation rate high. Perhaps Batten did not read the paper carefully. In Ev the mutation rate is only an order of magnitude higher than HIV. So instead of the simulation taking 700 generations it would take 7000. At 20 minutes per generation, this is only about 100 days. We have 2 to 3 billion years available.

the genome is extremely small. This is not relevant to the problem. If you make the genome larger it only slows down the simulation (you are welcome to try it yourself) but the same results will be obtained. If you don't believe that claim, do the simulation yourself.

no possibility of extinction is permitted. This is not relevant to the problem of information gain since extinction would merely stop the simulation. Besides, the simulation was explicitly designed to model gain of information in a new genetic control system that the organism is not yet dependent on. Logically, any new genetic control system has this property! So any extinctions that occur at the start are irrelevant. The relative survival will cause the evolution of the sites. If you don't believe this, try a simulation with extinction allowed.

Dr Schneider used a mutation rate 2 orders of magnitude higher than the HIV mutation rate and at least 3-4 orders of magnitude higher than those seen in any free living organism.

The genome length is unrealistically small and is relevant to the rate of information gain. I did do the simulation myself and as you lengthen the genome the rate of convergence of the model becomes so profoundly slow that macroevolution becomes mathematically impossible. As Dr Schneider recommends, “try it yourself”.

Dr Schneider’s argument that extinction is not relevant to the problem of information gain is incorrect. Extinction is an observed phenomenon. I have not raised this issue because even with Dr Schneider’s very liberal selection criteria, the model is still far too slow to explain macroevolution when realistic genome lengths and mutation rates are used in his model. You can find other discussions that Dr Schneider has made with reference to his selection process used in his program if you are willing to take the time to study his web site.
DHR said:

There, there DHR, just take your medicine and you will feel much better afterwards.
 
If less_than_three, kjkent1 and other evolutionarians think they can discuss Dr Schneider's ev program without studying the model and what he has written about it, perhaps they should post their comments on the paranormal forum on this web site. Otherwise, read the thread, read Dr Schneider's writings on his program, and stop whining like crybabies. Are there any evolutionarians out there who are not thin-skinned crybabies?
You’ve managed to produce yet another post containing nothing but fallacies, evasions, and unsubstantiated claims. Good for you. If nothing else you’re consistent with your deceitful debate tactics.

Now, provide the answers to the questions asked, or admit that you cannot. Then find the post and quote me on where I said (or even implied) I was an evolutionarian, which, by the way, is not a word; it appears to be an ineffective and witless insult. You can then proceed to show where I have "whined like a crybaby". I'm merely requiring you to hold an honest debate and to follow the rules expected in such. Thus far, you appear completely unable to do so. That gives the impression that your argument holds no merit.
 
. . . Paul, can’t you tell the difference between a bird’s nest and a bunch of grass and sticks blown by the wind into a pile? . .

See my unanswered post #733:
This is the same flawed argument (and thoroughly debunked) that William Dembski nattered on about for several years.

Still looks like you are pushing the same argument as Dembski.
 
I have had you evolutionarians call me a beast; I warn you, stay away from the entrance to my cave.
<shrug> Your ad hominems don't rise to the level of anything legally actionable. If they ever do, I'll let you know via personal service of summons. Meanwhile, if you'd rather not discuss this subject with me, then just let me know and I'll stop asking you questions.

OTOH, if you want to educate me, then answer my questions.

kleinman said:
I doubt you have read this entire thread let alone Dr Schneider’s entire web site dedicated to this computer model. There are probably close to 100 pages of information related to ev on his site.
Irrelevant.

kleinman said:
Dr Schneider used a mutation rate 2 orders of magnitude higher than the HIV mutation rate and at least 3-4 orders of magnitude higher than those seen in any free living organism.
Maybe "macro"-evolution doesn't happen gradually. Maybe evolution via punctuated equilibrium due to extreme environmental stress on an organism population is the norm.

That's why I'm raising the issue. If we are to modify EV's selection mechanism, then we may as well do it realistically.

kleinman said:
The genome length is unrealistically small and is relevant to the rate of information gain. I did do the simulation myself and as you lengthen the genome the rate of convergence of the model becomes so profoundly slow that macroevolution becomes mathematically impossible. As Dr Schneider recommends, “try it yourself”.
Yes, I know. This is your prima facie case, and you are relentlessly compounding it. It's an interesting case, in my opinion, but not necessarily a winner.

kleinman said:
Dr Schneider’s argument that extinction is not relevant to the problem of information gain is incorrect. Extinction is an observed phenomenon. I have not raised this issue because even with Dr Schneider’s very liberal selection criteria, the model is still far too slow to explain macroevolution when realistic genome lengths and mutation rates are used in his model.
Dr. Schneider's model proves information gain due to mutation and selection. The rate of mutation and method of selection/environmental stress is THE issue, because as you have demonstrated, unless some realistic combination of mutation and selection causes a profound increase in evolutionary change, then either the model is flat wrong, or there is something else present in the evolutionary process which remains unknown.
 
It seems to be in the area of expertise of evolutionarians to rule out God in the orderly harmony of what exists. I don't know what mysterious expertise they have to draw this conclusion, but they certainly seem sure of it.

I want to know what mysterious expertise anyone has to have any conclusions about what most people would say is a thing that is inherently beyond human comprehension.

Besides, it's entirely clear that ol' Al was using a poetic god when the full body of his quotes on the matter is considered. Either way, it's pretty irrelevant: either you're saying you're cleverer than all 'evolutionists' and hence your opinions on gods are superior to theirs, or you're saying everyone's too stupid and hence you're perfectly justified in making any conclusion you want.
 
Annoying Creationists

This is the same flawed argument (and thoroughly debunked) that William Dembski nattered on about for several years.
fishbob said:
Still looks like you are pushing the same argument as Dembski.

I have only read Dr Schneider's rebuttal to Dembski's criticisms of the ev model. Apparently Dembski's argument is that Dr Schneider has somehow smuggled information into the computer model. My argument is different. I believe that Dr Schneider has written a plausible mathematical model for random point mutations and natural selection. My assertion is that when realistic parameters are used in his model, the rate of information acquisition becomes profoundly slow, too slow to allow for macroevolution by the mechanism of random point mutations and natural selection. Fishbob, why don't you try Dr Schneider's model? Paul Anagnostopoulos, moderator for this forum, wrote the Java version of the program and made it simple to use. Try increasing the genome length in the model and see what happens to the generations for convergence.
Kleinman said:
Dr Schneider used a mutation rate 2 orders of magnitude higher than the HIV mutation rate and at least 3-4 orders of magnitude higher than those seen in any free living organism.
kjkent1 said:
Maybe "macro"-evolution doesn't happen gradually. Maybe evolution via punctuated equilibrium due to extreme environmental stress on an organism population is the norm.

That's why I'm raising the issue. If we are to modify EV's selection mechanism, then we may as well do it realistically.

The point you are missing is that when you use realistic mutation rates and genome lengths, it takes huge numbers of generations to evolve only a few loci in the ev model. Paul has not made very many extrapolations with the data from ev that I agree with, but one that I think we are close on is that the evolution of 16 binding sites (each 6 bases wide, for a total of 96 loci) on a 100k genome with a mutation rate of 10^-6 and a population of 1,000,000 would take 200,000,000 generations to evolve only 96 loci.

If you could find a selection process that would speed up convergence in ev, I think evolutionarians would raise a glass to you from one end of this forum to the other. Delphi would raise a glass either way.
Kleinman said:
The genome length is unrealistically small and is relevant to the rate of information gain. I did do the simulation myself and as you lengthen the genome the rate of convergence of the model becomes so profoundly slow that macroevolution becomes mathematically impossible. As Dr Schneider recommends, “try it yourself”.
kjkent1 said:
Yes, I know. This is your prima facie case, and you are relentlessly compounding it. It's an interesting case, in my opinion, but not necessarily a winner.

Again, what you don't realize is that it forces evolutionarians to take the position that all fundamental genes and genetic control systems have to evolve on very short genomes. You cannot evolve any long sequence of bases on a long genome by random point mutations and natural selection.
Kleinman said:
Dr Schneider’s argument that extinction is not relevant to the problem of information gain is incorrect. Extinction is an observed phenomenon. I have not raised this issue because even with Dr Schneider’s very liberal selection criteria, the model is still far too slow to explain macroevolution when realistic genome lengths and mutation rates are used in his model.
kjkent1 said:
Dr. Schneider's model proves information gain due to mutation and selection. The rate of mutation and method of selection/environmental stress is THE issue, because as you have demonstrated, unless some realistic combination of mutation and selection causes a profound increase in evolutionary change, then either the model is flat wrong, or there is something else present in the evolutionary process which remains unknown.

Virtually everything is unknown in abiogenesis and the theory of evolution. There is no plausible explanation of how RNA bases could form in the primordial soup; no plausible explanation of how these bases could link up nonenzymatically to form the first functional ribozymes; no plausible explanation of how these ribozymes could initiate protein synthesis and form DNA; no plausible explanation of how this collection of chemicals could remain stable and in one place long enough to combine and form the first living thing; and no explanation of what kind of selection process would allow for such chemical reactions. And now ev shows that once you have living creatures, random point mutations and natural selection are so slow that you don't have enough time to evolve any fundamental gene or genetic control system. Abiogenesis and the theory of evolution are a long list of speculations, each with its own set of highly improbable unknowns.

Your own evolutionarian mathematical model argues against your case.
 
The point you are missing is that when you use realistic mutation rates and genome lengths, it takes huge numbers of generations to evolve only a few loci in the ev model.
I'm not missing this point -- you repeat it frequently, and I am acknowledging that it is important.

However, the issue, "now," is whether the generational requirements would be reduced given other "realistic" parameters, such as actual organism survival rates, external environmental stresses, other types of genetic changes, such as recombination, and different types of mutation "mistakes" (i.e., I note that the EV program only codes two types of mistakes), none of which appears to have been contemplated by the EV model.

Also, on the other side of the coin, someone could impose a specific set of changes at some specific point (a "design"), and then continue to run the program, to see whether the design would start to substantially deteriorate, and how long that deterioration would require. If deterioration occurred quickly, it would suggest a rather crappy design, and it would also suggest that continuous intervention by the designer would be necessary to maintain the species continuum.

kleinman said:
If you could find a selection process that would speed up convergence in ev, I think evolutionarians would raise a glass to you from one end of this forum to the other. Delphi would raise a glass either way.
General legal advice: stop suggesting that Delphi has a drinking problem. If it turns out in reality that he has none, and that persons who know him for who he really is, think less of him as the result of your comments here, then you could find yourself to be the "natural selection" for the role of defendant in a libel/false light attribution suit. Getting your actual ID and contact info would be trivially easy via a subpoena served on randi.org.

kleinman said:
Again, what you don't realize is that it forces evolutionarians to take the position that all fundamental genes and genetic control systems have to evolve on very short genomes. You cannot evolve any long sequence of bases on a long genome by random point mutations and natural selection.
This is not necessarily a fatal flaw in the theory of evolution. Maybe this is exactly what happens.

PS. I'm not an "evolutionarian." I prefer not to believe that magic rules the universe. However, if this ultimately is the case, then I'll just have to deal with it.

PPS. Schneider's published paper states that its purpose is to "demonstrate that Rsequence can indeed evolve into Rfrequency." That is the thesis statement and that is the only bit of science which was likely peer reviewed. You admit that this is proven by the program, so your other complaints about the publication are not particularly relevant.

If you want to show that EV specifically refutes long genomes by point mutation and natural selection, then it falls to you to submit your research for publication. If you are rejected, THEN you may have something worthy of a serious complaint (i.e.: $$$).
 
I have only read Dr Schneider's rebuttal to Dembski's criticisms of the ev model. Apparently Dembski's argument is that Dr Schneider has somehow smuggled information into the computer model. My argument is different. I believe that Dr Schneider has written a plausible mathematical model for random point mutations and natural selection. My assertion is that when realistic parameters are used in his model, the rate of information acquisition becomes profoundly slow, too slow to allow for macroevolution by the mechanism of random point mutations and natural selection. Fishbob, why don't you try Dr Schneider's model? Paul Anagnostopoulos, moderator for this forum, wrote the Java version of the program and made it simple to use. Try increasing the genome length in the model and see what happens to the generations for convergence.

I have no facility or real interest in running (or technical capability to run) computer models. There is no reason for me to try to duplicate what people with more skills are already doing.

I was not in any way referring to anything Dembski had to say about the ev model. What Dembski has to say about anything is not relevant.

I am saying that your argument against evolution appears to share the same flawed basis as Dembski's argument against evolution. The difference being that your argument focuses on the Schneider model, while Dembski's argument used statistics.
 
For those who might be interested, here's a review of some details of how ev actually works:

1. The model starts with a population of individually generated random genomes -- sequences of random characters from the set {a, c, g, t} whose length is G+w-1, where

G = the number of possible binding sites (a user-controllable parameter)
and
w = the width, in bases, of each binding site or "site width" (a user-controllable parameter)

Every position in the genome is considered a possible binding site except for the (w-1) bases at the end of the genome, which can't be binding sites because there's not enough genome left to bind with.
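
(For concreteness, here's a minimal Python sketch of this setup. It is not ev's actual source -- the thread has been discussing Paul's Java version -- and all names are mine; the 256 figure is the published genome size quoted earlier, taken here as G.)

import random

def make_genome(G, w):
    # Random genome of length G + w - 1 over {a, c, g, t}. Indices
    # 0 .. G-1 are the potential binding sites; the final w - 1 bases
    # exist only to complete the last site's window.
    return [random.choice("acgt") for _ in range(G + w - 1)]

population = [make_genome(G=256, w=6) for _ in range(64)]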

2. The initial portion of the genome encodes a weight matrix, an array of numbers representing a weight for each of the four possible bases at each of the w positions of a potential binding site. Each base represents two binary digits (a = "00", c = "01", g = "10", t = "11"). The number of bases used to specify each entry in the weight matrix, or "weight width," is a user-controllable parameter. The gene sequences encoding the weight matrix values are converted into numbers using twos-complement notation, so the allowed values have a roughly symmetrical negative-to-positive range (and a single mutation to the most significant digit can cause a wide swing). Since the sequences start out random, obviously the numbers in the weight matrix also start out random.

3. Following the weight matrix, there is a threshold region of the same number of bases as encode each entry in the weight matrix (parameter "weight width") which is decoded the same way, as a twos-complement (positive or negative) binary number.
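
(Items 2 and 3 both hinge on the same bases-to-signed-integer decoding, so here is an illustrative Python sketch of it; the helper name is mine, not the program's.)

BASE_BITS = {"a": 0b00, "c": 0b01, "g": 0b10, "t": 0b11}

def decode_twos_complement(bases):
    # Each base contributes 2 bits; the result is read as a signed
    # (twos-complement) integer, so n bases span -2**(2n-1) .. 2**(2n-1)-1.
    value = 0
    for b in bases:
        value = (value << 2) | BASE_BITS[b]
    bits = 2 * len(bases)
    if value >= 1 << (bits - 1):   # sign bit set, so the value is negative
        value -= 1 << bits
    return value

print(decode_twos_complement("attt"))  # 00111111 -> 63
print(decode_twos_complement("gttt"))  # 10111111 -> -65; one base changed,
                                       # the sign flipped: the "wide swing"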

4. The remainder of the genome is the region in which binding sites may be located. A user-controllable parameter specifies the number of binding sites. The user can specify whether the binding sites are located at evenly spaced intervals or randomly, and if randomly, with or without the possibility of overlapping. The binding site locations are set at the outset, do not move, and are the same for all individuals.

The purpose of the simulation is to demonstrate that the information necessary to "find" or bind to the binding sites, and not to any other sites in the genome, evolves in the genome through random mutation and selection. The weight matrix, threshold, binding sites, and all other non-binding sites evolve together to reach a configuration in which the weight matrix yields an above-threshold result at the binding sites and a below-threshold result at all other sites. The resulting evolved genomes exhibit a property that appears to meet IDers' definition of irreducible complexity, because the binding site sequences and the weight matrix sequence must, and do, match up to each other in order for the binding sites to function.

It's important to note that while the binding sites are located in the region of the genome following the threshold, every position in the genome (except a few at the very end as already noted), including within the weight matrix and threshold regions, is considered a potential binding site where an unwanted binding "mistake" can occur.

5. The key operation in ev's model of natural selection is counting the number of "mistakes" each creature has. To count the mistakes, the model first reads the weight matrix and the threshold value encoded in the genome. Using the weight matrix, the model determines a binding value starting at each base in the genome (except the last few at the end). For instance, for the 101st possible binding site, starting at the 101st base in the genome, if the 101st base is "t" and the weight matrix entry for [0, t] is 52, then 52 is added to the binding strength. If the 102nd base is also "t" and the weight matrix entry for [1, t] is -120, then -120 is added to the running total. If the 103rd base is "a" and the weight matrix entry for [2, a] is 21, then 21 is added. And so forth, until the total over all w positions (w = the specified binding site width) has been summed.

The sum or total binding strength is compared to the threshold value. If the site is a binding site, and the binding strength is less than the threshold, then that counts as a "mistake" -- the binding mechanism fails to bind to the useful binding site. If the site is not a designated binding site, and the binding strength is greater than the threshold, that also counts as a "mistake" -- the binding mechanism is binding needlessly to a position that's not a useful binding site.
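
(Combining items 4 and 5, the mistake count for one creature might be sketched like this. Again illustrative Python, not the program's source; weights[k][base] simply stands in for however the decoded matrix is stored.)

def count_mistakes(genome, weights, threshold, binding_sites, w):
    # binding_sites is the fixed set of designated site positions;
    # every position except the last w - 1 is scanned.
    mistakes = 0
    for site in range(len(genome) - (w - 1)):
        strength = sum(weights[k][genome[site + k]] for k in range(w))
        if site in binding_sites:
            if strength < threshold:   # fails to bind a useful site
                mistakes += 1
        elif strength > threshold:     # binds needlessly to a non-site
            mistakes += 1
    return mistakes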

6. Selection in ev is based entirely on number of mistakes. All the individuals in the population are sorted by their total number of mistakes. They are then compared in pairs starting at the beginning and end of the sorted list. That is, the first (fewest mistakes) creature on the list is compared with the last (most mistakes), then the second creature is compared with the second to last creature, and so forth. If the creature from the first half of the list has fewer mistakes than the creature from the second half of the list, the bottom-half creature's genome is erased and replaced with a copy of the top-half creature's genome. If they are tied with the same number of mistakes, both survive. It's not unusual, depending on the parameters, to have a large "tied" population in each generation.

(In response to certain critical comments, Dr. Schneider installed and tested variant tie-breaking methods that can be invoked by user-settable flags. In one version, ties are broken by a 50-50 random choice. In another, whichever creature happened to be sorted into the first half of the list wins a tie, which makes survival dependent on the arbitrary internal behavior of the sorting algorithm.)

One subtlety that might be worth noting is that even with the original algorithm in which both creatures survive a tie, which creatures survive is still sometimes dependent on the arbitrary internal behavior of the sorting algorithm. Suppose the population has the following numbers of mistakes, after sorting:

4 4 4 4 5 5 5 5 5 5 5 5 6 9

The 6, the 9, and the two fives who happen to be in the unlucky third-to-last and fourth-to-last position will be matched up against the 4's and consequently replaced. The remaining 5's are all matched up against each other, are tied, and so all survive. Ev counts the number of deaths, and also makes a separate count of numbers of deaths for which other creatures with the same number of mistakes survived that generation, so experimenters can judge for themselves whether this occurrence has any significant effect.
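
(In code, the whole selection step is roughly the following sketch; the real program caches each creature's mistake count rather than recomputing it, but the pairing logic is as described.)

def select(population, mistakes_of):
    # Sort best-first, then pair the i-th creature with the i-th from
    # the end. The worse of each pair is overwritten by a copy of the
    # better; exact ties leave both creatures untouched.
    population.sort(key=mistakes_of)
    n = len(population)
    for i in range(n // 2):
        if mistakes_of(population[i]) < mistakes_of(population[n - 1 - i]):
            population[n - 1 - i] = list(population[i])   # genome copied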

7. Random mutation in ev is straightforward and works as one might expect. All bases in a genome are subject to random mutation with equal probability. User-settable parameters can define the mutation rate as either the expected number per creature per generation, or the expected number per base per creature per generation. Regardless of which option is used for setting the parameter, if the expected number of mutations per creature per generation has a fractional part, then whether that fractional part results in a mutation is randomly determined (with a probability equal to the fraction) for each creature for each generation.
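
(A sketch of that scheme; the fractional-part handling is the only subtle part. Whether ev forces the replacement base to differ from the original is a detail I won't vouch for here.)

import random

def mutate(genome, mutations_per_genome):
    # The whole part is always applied; the fractional part is applied
    # with probability equal to the fraction, per creature per generation.
    n = int(mutations_per_genome)
    if random.random() < mutations_per_genome - n:
        n += 1
    for _ in range(n):
        pos = random.randrange(len(genome))
        genome[pos] = random.choice("acgt")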

The numbered items above are all plain facts about how ev works, easily verified by examining the code and/or reading the provided documentation. What follows is an evaluation of that behavior, based on experience with actual runs.

- Selection Model -

Two facts are very important for understanding ev's behavior: One, that selection is based solely on the number of mistakes; and two, that after a relatively brief initial period at the start of a run, most of the mistakes in the population require multiple point changes in the genome to eliminate.

There is no positive selective value for a creature for any improvement short of eliminating a mistake. A mutation that reduces the magnitude of a mistake, without eliminating the mistake, does not confer any advantage, except for the chance of a subsequent mutation that eliminates the mistake the rest of the way. (Of course, subsequent mutations can also make the mistake worse again, or create a new mistake somewhere else that will cause that creature to be selected out.) Furthermore, a mutation that makes an existing mistake worse in magnitude does not confer any selective disadvantage either. (This remains equally true if either of the two alternative tie-breaking methods are used.)

With large genomes and/or low mutation rates, the population eventually settles out so that most of the individuals have not only the same number of mistakes, but mistakes at all the same locations. (These are binding site mistakes; non-binding-site mistakes tend to be much more quickly selected out.) Creatures receiving mutations that increase their number of mistakes are the ones selected out each generation; the others all remain tied. The appearance of an individual with one fewer mistake is a rare event. When it happens, the individual with one fewer mistake, unless it is exceedingly unlucky with subsequent mutations, quickly multiplies and its descendants replace the entire rest of the population in a few generations. This can result in a loss of diversity at the remaining binding sites -- the individual with one fewer mistake might have worse than average values at its remaining mistake locations, for instance. Because of this, and because there are fewer and fewer improvements left to make, there tend to be an increasing number of generations between reductions in the number of mistakes as the number of mistakes decreases.

- Population -

Because multiple changes to the genome are typically required to eliminate any one population-wide mistake, Kleinman's argument that large increases in population should have little effect on the number of generations to reach a "perfect creature" (no mistakes), because the probability of any one specific mutation occurring in the population per generation approaches 1 with a population on the order of the genome length, is invalid. The probabilities of a given combination of 2 or more mutations obviously do not approach 1 until the population reaches the order of successive powers of the genome length -- which microbial populations in nature can easily do, up to at least the third power. This prediction is consistent with test results. Every series of test runs with increasing populations has continued to show reductions in number of generations for as long as the data series is extended. Kleinman points out that the rate of reduction decreases as the population increases, but has not given any reason why we should expect otherwise if the curve has an exponent of, say, .5 or .33. Therefore Kleinman's assertion that large populations make no difference is contradicted on both theoretical and experimental grounds.

One detail that should be kept in mind as tests with higher populations are contemplated is ev's current relatively crude method of distributing mutations. In ev, if the mutation rate is 1 per genome per generation, it means each creature undergoes exactly 1 mutation. This is a reasonable approximation at low populations, but it might change the behavior significantly at higher populations. For instance, suppose the population were 10^12 individuals, with a genome length of 10^6 and a mutation rate of 1 per genome per generation. Furthermore, suppose that a certain population-wide mistake can be eliminated by two mutations, but each of those mutations individually creates a new mistake. (For instance, one mutation might change the weight matrix, and another mutation in a non-mistake binding site prevents the change from causing a new mistake there). If mutations were truly distributed randomly through the population at an expected rate of 1/genome-generation, the chance per generation of both mutations occurring simultaneously in the same individual would be about 1 in 10^13, so population-wide it would be about 1 in 10 per generation. But with mutations distributed as exactly one per creature per generation as ev does it, the probability of that same event is zero.
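
(The arithmetic behind that comparison, on my assumption that the extra order of magnitude -- 10^-13 rather than 10^-12 -- comes from each mutation also having to land on the right one of the three alternative bases:)

genome_len = 10**6
population = 10**12
mu = 1.0 / genome_len        # per-base chance at 1 mutation/genome/generation

# One creature receiving both specific useful changes in one generation:
# each target base must mutate AND become the one useful base of three.
p_both = (mu / 3) ** 2       # about 1.1e-13 per creature per generation
print(population * p_both)   # about 0.11 expected events per generation

# Under ev's exactly-one-mutation-per-creature scheme, the probability
# of the same event is zero, as stated above.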

- Genome Length -

As the genome length in ev is increased, with other parameters held constant, several different effects occur.

1. If the mutation rate per genome is held constant, then the chance of any given base mutating decreases proportionally. This reduces the effective mutation rate of the "key" parts of the genome -- the weight matrix, threshold, and binding sites. Of course, a mutation to any other part of the genome can be significant if it causes a non-binding-site mistake, but such mistakes are relatively less likely to occur and are quickly selected out when they do occur. So, the net effect is that convergence toward "perfect creature" slows down.

2. The longer the genome, the more information is required to specify the locations of the binding sites in the genome. The amount (in bits) of information required to find a given binding site is displayed by the program as the value Rfrequency. Thus, the genome has to evolve more information per binding site to find the binding sites.

Furthermore, the binding sites can only contain a certain maximum amount of information, which is 2 bits per base or 2*(site width) bits total. This limit is called Rcapacity (but it is not displayed by the program). As Rfrequency increases above about (Rcapacity - 2), the convergence slows down rapidly, and when Rfreq >= Rcapacity, the population doesn't converge at all. (In nature, according to Dr. Schneider's work, Rfrequency tends to approximately Rcapacity/2.)

With ev's default binding site width of 6 (Rcapacity = 12), and with its default 16 binding sites, Rfrequency starts getting close to Rcapacity (to 10.0) at a genome length of 16,384. Convergence slows down rapidly beyond that, and doesn't happen at all at or above a genome length of 65,000.

This is not merely an effect of genome length alone. It can easily be seen at shorter genome lengths, where convergence is normally rapid, by reducing the site width to reduce Rcapacity. For instance, I've run 1024 bases, 8 binding sites (Rfreq = 7), site width 3 (Rcapacity = 6), for over 7,000,000 generations without ever seeing a non-mistake binding site appear. I have to admit that while I understand the information theory explanation of why this occurs (basically, you can't put seven gallons of water in a six-gallon bucket), I don't understand it intuitively on the what-happens-next level of how the simulation runs. However, it may be directly related to the phenomena described in the next item.
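
(For what it's worth, the Rfrequency and Rcapacity arithmetic above is easy to check, assuming Rfrequency = log2(possible positions / number of sites) and treating the genome length as G, since G + w - 1 is nearly G:)

from math import log2

def r_frequency(genome_len, n_sites):
    return log2(genome_len / n_sites)   # bits needed to locate one site

def r_capacity(site_width):
    return 2 * site_width               # 2 bits per base

print(r_capacity(6))           # 12
print(r_frequency(16384, 16))  # 10.0, i.e. Rcapacity - 2: slowdown begins
print(r_frequency(65536, 16))  # 12.0 >= Rcapacity: no convergence
print(r_frequency(1024, 8))    # 7.0 vs Rcapacity(3) = 6: the
                               # seven-gallons-in-a-six-gallon-bucket run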

3. With longer genomes, there are noticeable differences in how the evolution progresses, especially in the early stages. With a short genome, the initial selection often favors individuals that, by chance, have fewer binding-site mistakes to begin with. But with longer genomes, there are many more non-binding sites, and so the weight of selection shifts to favoring individuals with fewer non-binding-site mistakes. With long random genomes, the individuals with the fewest non-binding-site mistakes are the ones with high threshold values and few positive values anywhere in the weight matrix, so the population quickly acquires those characteristics. This results in a population in which every binding site is a mistake in every individual. Elimination of mistakes is slow, because the same kinds of changes likely to eliminate a binding site mistake -- decreases to the threshold or increases in weight values -- are also likely to cause multiple non-binding-site mistakes to appear.

4. Longer genomes increase the memory requirements to run the program, and increase the real time necessary to run the model per generation. This is irrelevant to the results of tests as they apply to evolution, but it has a big effect on what tests can be performed practically.

Kleinman has reported that the generations to convergence increase dramatically as longer genome lengths are tested. However, what he's seeing is largely the result of effects 1 and 2. To my recollection he's never reported the results of any tests at any binding site width other than the default, so his runs never converge past genome lengths of about 50,000 bases.

Paul has reported tests using a constant mutation rate per base (controlling effect #1) and starting with a large site width (controlling effect #2 within the practical limits of genome lengths for his test runs), finding that the generations to convergence increase linearly with the genome length. This is despite effect #3 and despite all the limitations of the selection model discussed previously.

Thus, Kleinman's claims of evolution becoming "profoundly slow" with "realistic" parameters for genome length and mutation rate (see below) are not supported by the evidence.

- Mutation Rate -

By all accounts and according to all tests so far, reducing the mutation rate has a linear effect on increasing the generations to convergence, for reasons that should be intuitively obvious. Except for cases such as I described above, where simultaneous mutations might be advantageous but individually fatal, there's no difference to a creature whether it receives 10 mutations, one every 100 generations on average, or 10 mutations in the same generation, or somewhere in between. The effect of the mutation rate only becomes complex when it becomes very high (many orders of magnitude higher than what Kleinman accepts as "realistic"), resulting in a mutation load that slows down or even prevents evolutionary progress.

Even if one accepts the claim that the ancient prokaryotes that are the closest scenario from nature to what ev simulates must have mutation rates similar to present-day microorganisms (and no evidence whatsoever has been offered to support that claim), such a mutation rate (versus the 1 per 512 base rate that Paul used in his genome-length series) only accounts for a further increase in number of generations of a factor of about 10^4, which, combined with the linear effects of expanding to "realistic" genome lengths, still does not result in evolution that's "profoundly slow" by known evolutionary time scales. There are also sound mathematical and experimental reasons to expect that higher populations would indeed compensate for lower mutation rates. For instance, if (unlike in ev) mutations were truly randomly distributed, a large fraction of the population would receive significantly more than the expected number of mutations in any given generation.

- Sex Sells Everything, Even Evolution -

In attempting to apply quantitative results (however questionable) of ev to questions of the evolution rate of humans and other eukaryotes, Kleinman has rejected any hypothesis that sexual reproduction can account for faster, more efficient evolution. While it is true that recombination alone does not create additional mutations, mutations alone do not control the rate of information increase. The generation of combinations of mutations and the selection of such combinations is critical, as should be patently obvious to anyone who, like Kleinman, has run the ev model and observed that, taking the population into account, it can take enough generations to converge for every possible point mutation to have occurred tens, hundreds, or thousands of times over along the way. Clearly it matters what combinations of mutations appear in which individuals, and sexual reproduction generates new combinations much more efficiently while allowing the population to assimilate a considerably higher mutation load.

This is well known to every engineer who designs and uses genetic algorithms.

It's also well known to Kleinman, or at least it should be. One mathematical model comparing asexual to sexual reproduction is given by MacKay, available at www.inference.phy.cam.ac.uk/mackay/itprnn/ps/265.280.pdf. (Figure 19.1 sums up the difference recombination makes very succinctly.) I'm indebted to Kleinman for pointing me to the MacKay monograph in the first place, and I've pointed out the significance of the MacKay model to Kleinman on several occasions, to no apparent avail.
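
(For readers unfamiliar with the mechanism under discussion: ev itself has no recombination, but the operation is a one-liner in any genetic algorithm. This is generic one-point crossover, not anything taken from ev or from MacKay.)

import random

def recombine(parent_a, parent_b):
    # The child takes a prefix from one parent and the remainder from
    # the other, so mutations that arose in different lineages can be
    # combined in a single generation.
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]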

- Further Research -

I'm glad to hear that Paul is contemplating some experiments with modified selection models. I've been experimenting with two ideas: one is to include other selective factors such as the mean or maximum mistake magnitude, and the other is to randomly or periodically alter the effective threshold value (representing e.g. day/night temperature cycles affecting the binding strength) so as to give a selective advantage to more robust binding strengths and lesser-magnitude mistakes (which are more likely to become non-mistakes with an altered threshold value). This is going slowly because for convenience and flexibility, I'm using a slow-running scripting language (Lingo/Shockwave) for my test programs. But I'm not under any deadline.

Selecting based on worst mistake magnitude (as a tie-breaker for total number of mistakes) has produced the most interesting results so far. For one thing, it appears to cause the number of generations between successively smaller numbers of mistakes to decrease as the number of remaining mistakes decreases, instead of increasing as it does in plain ev.

Some other ideas:

- Use Gray binary instead of twos-complement for the weights and threshold encodings. This guarantees that it's always possible to make a one-unit incremental change by a single mutation, whereas regular binary can get trapped, unable to make a needed small increase or small decrease without changing multiple digits. Example: attt = 00111111; if this needs to be higher but cttt = 01111111 is too high, then two separate mutations are needed to reach a value in between. (A conversion sketch follows at the end of this list.)

- To represent the presence of already-evolved genes in the genome, designate sections of the genome as gene regions, with some fixed chance that any mutation to a gene region is immediately fatal. (However, some provision must also be made to allow mutations that eliminate mistakes.) Not only is this reasonably realistic, but some quick calculations and small-scale experiments suggest that, as long as the overall mutation load is not too high, this can speed up convergence by selecting in favor of the portion of the population receiving modifications to the evolving (threshold, weight matrix, and binding site) parts of the genome, essentially compensating in part for the genome being longer at low mutation rates.
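
(Regarding the Gray-binary idea, here is the standard conversion, nothing ev-specific:)

def to_gray(n):
    return n ^ (n >> 1)   # adjacent integers differ by exactly one bit

def from_gray(g):
    n = 0
    while g:              # undo the XOR cascade
        n ^= g
        g >>= 1
    return n

# The trap from the example disappears: 63 and 64 are one bit apart.
for v in (63, 64, 65):
    print(v, format(to_gray(v), "08b"))
# 63 -> 00100000, 64 -> 01100000, 65 -> 01100001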

Respectfully,
Myriad
 
Excellent, Myriad. Thanks for the detailed summary.

Furthermore, the binding sites can only contain a certain maximum amount of information, which is 2 bits per base or 2*(site width) bits total. This limit is called Rcapacity (but it is not displayed by the program).
Rcapacity is displayed on the Data & Statistics panel.

~~ Paul
 