Paul C. Anagnostopoulos
Nap, interrupted.
This is another thread in the continuing saga of the Ev evolution simulation program. I have run across a behavior I don't understand, and I thought perhaps someone here might have a flash of insight.
Ev simulates the evolution of DNA binding sites:
http://www.lecb.ncifcrf.gov/~toms/paper/ev/
Certain critics have been complaining that the mutation rates used in experiments are much too high to be realistic. They insist that we run experiments with mutation rates on the order of 1 per million DNA bases, rather than the 1 per 256 bases we often run.
So I ran a set of experiments with a fixed population (64) and a fixed chromosome length (1000). I varied the mutation rate from 1 per million bases up to 1 per 200 bases. The number of generations to converge on a creature with perfect DNA binding varied according to the equation [latex]$g = 0.9m^{1.02}$[/latex], where g is generations and m is the number of bases per mutation (the inverse of the mutation rate). That is just about linear, as one would expect. I therefore conclude that high mutation rates can be extrapolated to lower ones with no fuss.
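For anyone who wants to see the arithmetic, a power law like that is just a straight line in log-log space. Here is a minimal Python sketch, using made-up placeholder numbers rather than the actual run data, showing how such an exponent is read off:

[code]
import numpy as np

# Hypothetical (made-up) data: m = bases per mutation, g = generations to converge.
# These are placeholders, NOT the actual Ev run results.
m = np.array([200, 1_000, 10_000, 100_000, 1_000_000], dtype=float)
g = np.array([180, 950, 9_500, 95_000, 900_000], dtype=float)

# A power law g = a * m^b is a straight line in log-log space:
# log g = log a + b * log m, so fit a first-degree polynomial.
b, log_a = np.polyfit(np.log(m), np.log(g), 1)
a = np.exp(log_a)
print(f"g = {a:.2f} * m^{b:.2f}")   # an exponent near 1 means roughly linear scaling
[/code]

An exponent close to 1 is what justifies extrapolating from the high rates to the realistic low ones.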
Then, for some reason, I decided to run another series of experiments with a fixed population (64) and a fixed per-base mutation rate (1/16,000). I'm varying the chromosome length upward from 512 bases by factors of 2. I would expect the number of generations to evolve a perfect creature to remain constant, because the probability of mutating any non-junk DNA base in the chromosome remains constant. However, I'm seeing what appears to be a factor of 2 increase in the number of generations required as the chromosome increases in length by a factor of 2.
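To spell out the expectation: with the per-base rate held fixed, the chance that any particular functional base mutates in a generation is the same no matter how long the chromosome is; only the expected total number of mutations per genome grows with length. A quick back-of-the-envelope sketch (assuming a simple independent per-base mutation model, which may not match Ev's internals exactly):

[code]
# Per-base mutation probability is held fixed; chromosome length doubles each step.
mu = 1 / 16_000            # fixed mutations per base per generation

for length in (512, 1024, 2048, 4096):
    # Chance that one particular (functional) base mutates this generation
    # does not depend on chromosome length:
    p_site = mu
    # Expected mutations anywhere in the genome grows linearly with length:
    expected_per_genome = mu * length
    print(f"L={length:5d}  P(given base mutates)={p_site:.6f}  "
          f"E[mutations/genome]={expected_per_genome:.4f}")
[/code]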
Does anyone have any thoughts on why this should be the case?
~~ Paul
