Annoying creationists

Kleinman said:
That’s not a very good answer to his question, Mr Rcapacity.
I'm not sure what else to say. Ev does not model gene duplication.

Let’s see if I can explain this to you, Mr Rcapacity. When the genome is lengthened beyond a certain point in ev, the errors in the non-binding site region dominate the selection process and stop the evolution of binding sites. So what you call a “one mutation/selection process” is in actuality two selection conditions. One condition is the selection for binding sites on the binding site region of the genome and the other condition is selection for no binding sites on the non-binding site region of the genome. Do you understand what I mean, Mr Rcapacity?
Yes, that is a fine way of looking at it. In fact, there are three selection conditions: binding at the sites, spurious binding within the gene, spurious binding outside the gene.

However, your idea of spurious bindings dominating the selection process needs more thought. It certainly has an effect on the evolution, but I very much doubt it "stops" it. The stopping is due to Rcapacity problems. You'll remember I ran some experiments where we crossed the Rcapacity threshold with only a slight increase in the ratio of non-binding-site size to binding-site size, and evolution was thwarted. I very much doubt that's due to the increase in the ratio.

But you can always run some experiments to see if you can get evolution to "stop" without running up against Rcapacity. If this "stopping" is some sort of fundamental feature of evolution, you ought to be able to invoke it without requiring huge genomes.

~~ Paul
 
I, too, look forward to the document. I have a few suspicions about possible flaws in Kleinman's arguments, but it would be pointless to argue against something without that something being well-presented to begin with.

Take your time--edit well...
 
Mr Scott said:
Paul, is what Alan saying here true? In a large population, when different genes in the same organism duplicate and mutate in parallel, does evolution, as simulated by Ev, slow down?
Paul said:
Ev does not simulate gene duplication.
Kleinman said:
That’s not a very good answer to his question, Mr Rcapacity.
Paul said:
I'm not sure what else to say. Ev does not model gene duplication.
Your stylized model of random point mutation and natural selection does include two selection conditions operating in parallel. These competing conditions not only cause evolution as simulated by ev to slow down, but can also bring evolution to a complete standstill.
Kleinman said:
Let’s see if I can explain this to you, Mr Rcapacity. When the genome is lengthened beyond a certain point in ev, the errors in the non-binding site region dominate the selection process and stop the evolution of binding sites. So what you call a “one mutation/selection process” is in actuality two selection conditions. One condition is the selection for binding sites on the binding site region of the genome and the other condition is selection for no binding sites on the non-binding site region of the genome. Do you understand what I mean, Mr Rcapacity?
Paul said:
Yes, that is a fine way of looking at it. In fact, there are three selection conditions: binding at the sites, spurious binding within the gene, spurious binding outside the gene.
You have added the third selection condition to the model since we started our discussion. Did adding the third selection condition speed up or slow down evolution when all three selection conditions are included in a simulation?
Paul said:
However, your idea of spurious bindings dominating the selection process needs more thought. It certainly has an effect on the evolution, but I very much doubt it "stops" it. The stopping is due to Rcapacity problems. You'll remember I ran some experiments where we crossed the Rcapacity threshold with only a slight increase in the ratio of non-binding-site size to binding-site size, and evolution was thwarted. I very much doubt that's due to the increase in the ratio.
Your Rcapacity concept needs more thought. What makes you think that the weight matrix will no longer find a match on the genome just because the genome exceeds a particular length? The failure of ev to converge is not due to a failure of the weight matrix finding binding sites; it is due to the weight matrix finding erroneous sites in the non-binding site region. When these errors dominate the selection process, no evolution of binding sites can occur in the binding site region. You have the program set up to count errors. See for yourself which errors are driving the selection process. When the genome is short, errors in the binding site region drive selection and the binding sites rapidly evolve. As errors in the non-binding site region increase, it becomes more and more difficult for ev to select for the evolution of binding sites until ultimately the errors in the non-binding site region dominate and no evolution of binding sites can occur.
Paul said:
But you can always run some experiments to see if you can get evolution to "stop" without running up against Rcapacity. If this "stopping" is some sort of fundamental feature of evolution, you ought to be able to invoke it without requiring huge genomes.
We already have experiments. We have the original form of ev with two selection conditions, which fails to converge with short genome lengths (far shorter than any realistic length for a free-living creature). You then introduced a third selection condition, which is spurious binding in the binding site region. Does this third condition speed up or slow down evolution? This concept goes to the core of Mr Scott’s question. Do multiple selection conditions acting in parallel speed up or slow down evolution? Ev shows that multiple selection conditions acting in parallel slow down evolution. There is no reason to believe that this effect would be countered by adding other mutation mechanisms such as gene duplication or indels. So as you take your stylized model of random point mutations and natural selection and make it more realistic, the number of generations for convergence will become larger and larger as you include more and more selection processes. Paul, the theory of evolution is mathematically impossible.
 
Kleinman said:
Your stylized model of random point mutation and natural selection does include two selection conditions operating in parallel. These competing conditions not only cause evolution as simulated by ev to slow down, but can also bring evolution to a complete standstill.
What does this have to do with whether Ev models gene duplication?

You have added the third selection condition to the model since we started our discussion. Did adding the third selection condition speed up or slow down evolution when all three selection conditions are included in a simulation?
Ev has always scored mistake points for missed bindings, spurious bindings within the gene, and spurious bindings outside the gene. What was added in November 2005 was the ability to specify different mistake points for these three situations.

Your Rcapacity concept needs more thought. What makes you think that the weight matrix will no longer find a match on the genome just because the genome exceeds a particular length?
It's not the length that matters, but Rfrequency and Rsequence. In order to evolve a perfect creature, Rseq must approach Rfreq. But if the binding sites are too narrow relative to the genome size, then they cannot contain a pattern (as shown in the sequence logo) that is unique enough for the transcription factor to match it but nowhere else.
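For concreteness, a minimal sketch of the quantities in play, assuming Schneider's definition Rfrequency = log2(G/gamma) (G = genome length, gamma = number of binding sites) and the formula Rcapacity = 2 * bindingsitewidth that Paul gives later in the thread:

[code]
from math import log2

def r_frequency(genome_size: int, num_sites: int) -> float:
    # Bits needed to locate num_sites binding sites in the genome:
    # Rfrequency = log2(G / gamma).
    return log2(genome_size / num_sites)

def r_capacity(site_width: int) -> int:
    # Maximum information a site of site_width bases can carry,
    # at 2 bits per base: Rcapacity = 2 * bindingsitewidth.
    return 2 * site_width

print(r_frequency(1024, 8))  # 7.0 bits (the parameters Paul uses below)
print(r_capacity(3))         # 6 bits: too narrow, Rseq cannot reach Rfreq
print(r_capacity(4))         # 8 bits: wide enough to hold the needed pattern
[/code]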

The failure of ev to converge is not due to a failure of the weight matrix finding binding sites; it is due to the weight matrix finding erroneous sites in the non-binding site region. When these errors dominate the selection process, no evolution of binding sites can occur in the binding site region.
It's not that the matrix fails to match binding sites, but that it too easily succeeds in matching elsewhere. This is certainly affected by the genome size, but I see no reason why, barring Rcapacity issues, it should ever halt evolution entirely. What would cause the discontinuity? If you think it does, run some Ev models and find out when and why.

You have the program set up to count errors. See for yourself which errors are driving the selection process. When the genome is short, errors in the binding site region drive selection and the binding sites rapidly evolve. As errors in the non-binding site region increase, it becomes more and more difficult for ev to select for the evolution of binding sites until ultimately the errors in the non-binding site region dominate and no evolution of binding sites can occur.
I agree with everything except your conclusion.

We already have experiments. We have the original form of ev with two selection conditions, which fails to converge with short genome lengths (far shorter than any realistic length for a free-living creature).
The only relevant experiments we have are the ones I ran, mentioned above. If you think you have others, present the parameters.

On the other hand, if you are simply going to refuse to accept the fact that there is an Rcapacity issue, no reason to waste our time.

~~ Paul
 

Let’s see if we can figure out your logic. Your own computer model shows your theory to be mathematically impossible, but because taxpayers have been bamboozled into paying for research into an impossible theory, that somehow makes it a better theory. I wonder how much private money is put into researching your ridiculous theory.

I see that kleinman has told Lie #1 again.

No new lies, eh, kleinman?

If you're planning to recite lies until reality disappears, why do it in public? 'Cos, you know, if this strange magical ritual doesn't overturn the laws of nature, people will laugh at you and think you're off your head.

Yes. The mathematical proof is:
Selection process to evolve a new gene from the beginning = f

As a mathematician, I can tell the difference between a mathematical proof and the screaming, gibbering, and twitching of a lunatic trying to hide from reality. That was the latter; and if you are unable to do any maths to back up your crackpot hypothesis, then drooling out nonsense like this is no substitute.
 
Kleinman said:
Your stylized model of random point mutation and natural selection does include two selection conditions operating in parallel. These competing conditions not only cause evolution as simulated by ev to slow down, but can also bring evolution to a complete standstill.
Paul said:
What does this have to do with whether Ev models gene duplication?
Again I post Mr Scott’s question.
Mr Scott said:
Paul, is what Alan saying here true? In a large population, when different genes in the same organism duplicate and mutate in parallel, does evolution, as simulated by Ev, slow down?
You evade the key part of his question by simply saying ev does not model gene duplication. The key part of his question concerns what effect multiple selection processes have on the rate of evolution. Ev shows that multiple selection processes slow down evolution.
Kleinman said:
You have added the third selection condition to the model since we started our discussion. Did adding the third selection condition speed up or slow down evolution when all three selection conditions are included in a simulation?
Paul said:
Ev has always scored mistake points for missed bindings, spurious bindings within the gene, and spurious bindings outside the gene. What was added in November 2005 was the ability to specify different mistake points for these three situations.
I stand corrected on this point. What happens when you weight mistakes for missed binding sites to the maximum value while setting spurious binding in and out of the binding site region to 0? Then what happens as you increase the weights for spurious binding sites?
Kleinman said:
Your Rcapacity concept needs more thought. What makes you think that the weight matrix will no longer find a match on the genome just because the genome exceeds a particular length?
Paul said:
It's not the length that matters, but Rfrequency and Rsequence. In order to evolve a perfect creature, Rseq must approach Rfreq. But if the binding sites are too narrow relative to the genome size, then they cannot contain a pattern (as shown in the sequence logo) that is unique enough for the transcription factor to match it but nowhere else.
The following is your first public post concerning your Rcapacity concept from the Evolutionisdead forum:
Paul said:
What's happening is that Rfrequency is approaching Rcapacity, the information capacity of the binding sites. In this experiment, Rcapacity = 12 bits. At Rfrequency = Rcapacity, the number of generations to evolve a perfect creature is infinite. This is what we're seeing in our data sets.
This issue has nothing to do with the ability of the “transcription factor” (weight matrix) to match a binding site but nowhere else. The weight matrix will find matches anywhere on the genome where the sequence of bases is appropriate. As you lengthen the non-binding site region, you have more potential sites for a match. It is this increase in potential sites in the non-binding site region which slows down and ultimately stops the evolution of the binding sites.
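A rough back-of-envelope sketch of this point, assuming the usual information-theory heuristic that a random position matches a pattern carrying Rseq bits with probability about 2^-Rseq (an illustration only, not ev's literal computation):

[code]
def expected_chance_matches(nonsite_length: int, r_seq_bits: float) -> float:
    # Expected spurious matches grow linearly with the region length.
    return nonsite_length * 2 ** (-r_seq_bits)

print(expected_chance_matches(1024, 7.0))  # ~8 chance matches
print(expected_chance_matches(4096, 7.0))  # ~32: a longer region offers more spurious sites
[/code]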
Kleinman said:
The failure of ev to converge is not due to a failure of the weight matrix finding binding sites; it is due to the weight matrix finding erroneous sites in the non-binding site region. When these errors dominate the selection process, no evolution of binding sites can occur in the binding site region.
Paul said:
It's not that the matrix fails to match binding sites, but that it too easily succeeds in matching elsewhere. This is certainly affected by the genome size, but I see no reason why, barring Rcapacity issues, it should ever halt evolution entirely. What would cause the discontinuity? If you think it does, run some Ev models and find out when and why.
The weight matrix does not find matches more easily on the non-binding site region of the genome as you lengthen the genome; you simply have more potential sites for erroneous binding. Set the weighting factor for errors in the non-binding site region to 0 and your Rcapacity problem disappears.
Kleinman said:
You have the program set up to count errors. See for yourself which errors are driving the selection process. When the genome is short, errors in the binding site region drive selection and the binding sites rapidly evolve. As errors in the non-binding site region increase, it becomes more and more difficult for ev to select for the evolution of binding sites until ultimately the errors in the non-binding site region dominate and no evolution of binding sites can occur.
Paul said:
I agree with everything except your conclusion.
So what did you mean when you said this?
Paul said:
At Rfrequency = Rcapacity, the number of generations to evolve a perfect creature is infinite. This is what we're seeing in our data sets.
Kleinman said:
We already have experiments. We have the original form of ev with two selection conditions, which fails to converge with short genome lengths (far shorter than any realistic length for a free-living creature).
Paul said:
The only relevant experiments we have are the ones I ran, mentioned above. If you think you have others, present the parameters.
I’m talking about the experiments you ran varying the parameters, which show that erroneous binding slows down evolution. Those are good experiments. I think I’ll co-opt the results.
Paul said:
On the other hand, if you are simply going to refuse to accept the fact that there is an Rcapacity issue, no reason to waste our time.
Oh, there is an “Rcapacity” issue, but the explanation is that competing selection processes slow down and ultimately stop evolution in the ev model. This phenomenon that ev is demonstrating also occurs in reality.

Now Paul, it is not my aim to waste your time; my aim is to annoy you.
 
So kleinman is trying to "annoy" us?

I find him rather like a soap opera - dull and repetitive, but you just can't stop watching because of the stilted acting and laughable script.
A human perpetuum mobile of circular argumentation.
Great entertainment!
 
You evade the key part of his question by simply saying ev does not model gene duplication. The key part of his question concerns what effect multiple selection processes have on the rate of evolution. Ev shows that multiple selection processes slow down evolution.

I stand corrected on this point. What happens when you weight mistakes for missed binding sites to the maximum value while setting spurious binding in and out of the binding site region to 0? Then what happens as you increase the weights for spurious binding sites?
We can't use zero for a mistake value.

Zeroing one of the weight variables in ev prevents ev from evaluating the existence of that type of mistake during selection. But, the mistake remains in the genome. So, by zeroing out a mistake, we would be introducing a programming error.

The only way to evolve towards a "perfect creature" is to permit ev's selection method to find all the mistakes. Reweighting the mistake values will change the importance of a particular mistake in determining survival fitness. But, if we set a mistake to zero, we are allowing that mistake to be completely ignored, which means it has no importance to survival.

In the former case, eventually all mistakes will be eliminated via selection and a legitimate "perfect creature" will appear. In the latter case, the mistake will be ignored, and the perfect creature which appears isn't actually perfect. It's still full of the mistake(s) which have been set to zero.
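A minimal sketch of this weighting scheme, with hypothetical weights and mistake counts (not ev's actual source):

[code]
def mistake_score(missed, spurious_in_gene, spurious_outside,
                  w_missed=1, w_in=1, w_out=1):
    # Higher score means less fit; each mistake type contributes its
    # count times its weight to the creature's total.
    return (w_missed * missed
            + w_in * spurious_in_gene
            + w_out * spurious_outside)

# With w_out = 0, spurious bindings outside the gene are invisible to
# selection, so a creature scoring 0 can still be full of them:
print(mistake_score(0, 0, 5, w_out=0))  # 0: "perfect" only on paper
print(mistake_score(0, 0, 5))           # 5: the mistakes still count
[/code]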
 
Gargoyle said:
So kleinman is trying to "annoy" us?
Trying? Ask Paul whether I’m successful at annoying evolutionists. The only thing I am better at is showing that the theory of evolution is mathematically impossible using the ev computer model.
Gargoyle said:
I find him rather like a soap opera - dull and repetitive, but you just can't stop watching because of the stilted acting and laughable script.
A human perpetuum mobile of circular argumentation.
Great entertainment!
I’m looking for something in Gargoyle’s post that breaks the pattern of dull and repetitive in this thread, but it just isn’t there. Perhaps you should try something new and unusual; for example, claim moving goalposts, or try yelling strawman. Oh, wait, there it is: you post gifs and jpegs. That certainly breaks the dull and repetitive nature of this thread. How clever you evolutionists are.
Kleinman said:
You evade the key part of his question by simply saying ev does not model gene duplication. The key part of his question concerns what effect multiple selection processes have on the rate of evolution. Ev shows that multiple selection processes slow down evolution.

I stand corrected on this point. What happens when you weight mistakes for missed binding sites to the maximum value while setting spurious binding in and out of the binding site region to 0? Then what happens as you increase the weights for spurious binding sites?
kjkent1 said:
We can't use zero for a mistake value.
Sure you can, kjkent1; it’s just a computer program. Don’t tell me you are a member of ASPCPC, the American Society for the Prevention of Cruelty to Perfect Creatures.
kjkent1 said:
Zeroing one of the weight variables in ev prevents ev from evaluating the existence of that type of mistake, during selection. But, the mistake remains in the genome. So, by zeroing out a mistake, we would be introducing a programming error.
Set the value to 10^-10. We don’t want you to divide by zero. Your theory might blow up.
kjkent1 said:
The only way to evolve towards a "perfect creature" is to permit ev's selection method to find all the mistakes. Reweighting the mistake values will change the importance of a particular mistake in determining survival fitness. But, if we set a mistake to zero, we are allowing that mistake to be completely ignored, which means it has no importance to survival.
Just what is the function of the “perfect creature”?
kjkent1 said:
In the former case, eventually all mistakes will be eliminated via selection and a legitimate "perfect creature" will appear. In the latter case, the mistake will be ignored, and the perfect creature which appears isn't actually perfect. It's still full of the mistake(s) which have been set to zero.
I thought you were arguing that Rsequence -> Rfrequency is how convergence should be measured?
 
Sure you can, kjkent1; it’s just a computer program. Don’t tell me you are a member of ASPCPC, the American Society for the Prevention of Cruelty to Perfect Creatures.

Set the value to 10^-10. We don’t want you to divide by zero. Your theory might blow up.
This is not a divide by zero or a limit of a function issue. It's a programming state issue. The program selects by evaluating mistakes based on their weight. A mistake with a very small value will still be evaluated -- albeit with less selective weight than a mistake with a higher value -- but a mistake with a zero value will not be evaluated at all.
Just what is the function of the “perfect creature”?
A perfect creature for ev purposes is a genetic sequence with neither missing nor spurious bindings.
I thought you were arguing that Rsequence -> Rfrequency is how convergence should be measured?
I was indeed arguing that Rseq->Rfreq is how evolution should be measured. However, Paul explained that ev doesn't select for convergence, but rather that convergence is an artifact of ev selecting against missing and spurious bindings.

I believe that Paul's explanation is correct, and it further explains what you would describe as micro- vs. macro-evolution, because a perfect creature can either evolve slowly over time, or it can simply appear by random chance between generations. And, when the latter occurs, that creature's improvements quickly take over the entire population (in ev, at the rate of 2^generation(s)).
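To make that takeover rate concrete, a rough sketch assuming ev-style selection in which the worse half of the population is replaced by copies of the better half each generation, so a superior lineage roughly doubles per generation:

[code]
pop_size = 64
lineage = 1   # one superior creature appears by chance
generations = 0
while lineage < pop_size:
    lineage = min(pop_size, lineage * 2)  # 2**g growth until fixation
    generations += 1
print(generations)  # 6 generations to sweep a population of 64
[/code]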

If we presume that ev is modeling a binding site region contained within the genome of an already existing creature (and we must, because otherwise, the ev creatures are all already alive and functioning at the start of every program run, even though none have yet evolved any genetic material), then such random accidents would explain how new functionality can appear rapidly, and thereby avoid the slower micro-evolutionary process, which you argue proves evolution mathematically impossible.

Also, it's worth noting that there is no requirement in nature that Rseq->Rfreq. It happens to be that "on average," Rseq->Rfreq is an attribute of genes in functioning organisms. But, it's just an average, which doesn't mean that a gene must have a perfect set of bindings to function. As far as I'm aware, no one has expressed the minimum requirements for a functioning gene. It could be that convergence may be sufficient, but not necessary for viable life functions.
 
Let's run a little experiment to demonstrate the Rcapacity problem.

population 64
genome size 1024
binding sites 8
mutations/generation 4

This makes Rfrequency = 7.

Now, let's vary the weight and binding site widths, starting with 2 and 1, and increasing by 1. Here are the results:

weight/site width, Rcapacity, generations to perfect creature

1/2, 2, ---
2/3, 4, ---
3/4, 6, ---
5/4, 8, 19137
6/5, 10, 2742

Notice that the genome size is constant and the ratio of nonbinding-site genome to binding-site genome is roughly constant. We can keep it more constant by increasing the genome size to 2048, making Rfrequency = 8. The mutations/generation is 8.

weight/site width, Rcapacity, generations to perfect creature

1/2, 2, ---
2/3, 4, ---
3/4, 6, ---
5/4, 8, ---
6/5, 10, 4105
7/6, 12, 4117

It's an amazing thing, don't you think?

~~ Paul
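Read strictly, these runs fit the criterion that convergence requires Rcapacity to exceed Rfrequency; at equality, by Paul's earlier remark, the expected number of generations is infinite. A quick check against the site widths implied by the tables (a hedged reconstruction, not Paul's code):

[code]
from math import log2

def should_converge(genome_size, num_sites, site_width):
    # Converges only if Rcapacity = 2 * width strictly exceeds Rfrequency.
    return 2 * site_width > log2(genome_size / num_sites)

# First series (Rfrequency = 7), site widths 1..5: only the last two converge.
print([should_converge(1024, 8, w) for w in range(1, 6)])
# Second series (Rfrequency = 8), widths 1..6: width 4 gives Rcapacity = 8,
# exactly Rfrequency, so it stalls; only widths 5 and 6 converge.
print([should_converge(2048, 8, w) for w in range(1, 7)])
[/code]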
 
Kleinman said:
Oh, there is an “Rcapacity” issue, but the explanation is that competing selection processes slow down and ultimately stop evolution in the ev model. This phenomenon that ev is demonstrating also occurs in reality.
Convince us by running a series of experiments that shows evolution stopping, but not at the Rcapacity boundary.

The problem, see, is that you refuse to acknowledge that a binding site too narrow to contain a unique sequence logo simply cannot evolve a perfect creature, regardless of the size of the genome.

~~ Paul
 
Kleinman said:
Sure you can, kjkent1; it’s just a computer program. Don’t tell me you are a member of ASPCPC, the American Society for the Prevention of Cruelty to Perfect Creatures.
Kleinman said:

Set the value to 10^-10. We don’t want you to divide by zero. Your theory might blow up.
kjkent1 said:
This is not a divide by zero or a limit of a function issue. It's a programming state issue. The program selects by evaluating mistakes based on their weight. A mistake with a very small value (the program only allows integers) will still be evaluated -- albeit more slowly than a mistake with a higher value -- but a mistake with a zero value will not be evaluated at all.

What you need to consider is what the program is evaluating. What is the selection pressure that has been defined by Dr Schneider? It is this definition which is applied to increase the information content in the genome by the mutation and selection process, and it is this definition which is the contrivance required in order to make the genomes evolve. There may be a basis in reality to include a selection condition that says the identification of a binding site where one should not exist represents a harmful mutation, because an inappropriate protein would be transcribed. But as a basis for studying the mathematics of mutation and selection, setting these mistakes in the non-binding site region to 0 to illustrate why ev won’t converge is useful for identifying how this process works.
Kleinman said:
Just what is the function of the “perfect creature”?
kjkent1 said:
A perfect creature for ev purposes is a genetic sequence with neither missing nor spurious bindings.
And that function of the “perfect creature” is based on the two selection conditions you have just described. If you add another selection condition on the perfect creatures (such as evolving a different set of binding sites), what do you think would happen?
Kleinman said:
I thought you were arguing that Rsequence -> Rfrequency is how convergence should be measured?
kjkent1 said:
I was indeed arguing that Rseq->Rfreq is how evolution should be measured. However, Paul explained that ev doesn't select for convergence, but rather that convergence is an artifact of ev selecting against missing and spurious bindings.
You can use either the “perfect creature” or the Rseq->Rfreq convergence criterion, and neither will converge when you use Dr Schneider’s selection conditions and the genome exceeds the length defined by Paul’s Rcapacity variable. The reason is that spurious bindings dominate the selection process and no binding sites evolve.
kjkent1 said:
I believe that Paul's explanation is correct, and it further explains what you would describe as micro- vs. macro-evolution, because a perfect creature can either evolve slowly over time, or it can simply appear by random chance between generations. And, when the latter occurs, that creature's improvements quickly take over the entire population (in ev, at the rate of 2^generation(s)).
If you are going to take the position that life appeared strictly by random chance, you had better stick with your string cheese theory and 10^500 alternative universes.
kjkent1 said:
If we presume that ev is modeling a binding site region contained within the genome of an already existing creature (and we must, because otherwise, the ev creatures are all already alive and functioning at the start of every program run, even though none have yet evolved any genetic material), then such random accidents would explain how new functionality can appear rapidly, and thereby avoid the slower micro-evolutionary process, which you argue proves evolution mathematically impossible.
Nothing appears rapidly in ev when you use realistic genome lengths and mutation rates. Nothing appears in ev at all without a selection process. And there is no selection process that can evolve a gene from the beginning. You can’t select for something that doesn’t exist.
kjkent1 said:
Also, it's worth noting that there is no requirement in nature that Rseq->Rfreq. It happens to be that "on average," Rseq->Rfreq is an attribute of genes in functioning organisms. But, it's just an average, which doesn't mean that a gene must have a perfect set of bindings to function. As far as I'm aware, no one has expressed the minimum requirements for a functioning gene. It could be that convergence may be sufficient, but not necessary for viable life functions.
I don’t believe what you are saying is physiologically or biologically true. The rates of chemical reactions in living things can be affected by how well enzymes bind to reactants. Differences in binding sites can affect the affinity of binding proteins. A realistic selection process would take these differences into account. So not only do you have competing selection processes that would slow down the evolutionary process, you have subtle differences in a given selection process that would have to be taken into account to make a realistic mathematical model. Each of these conditions reveals the mathematical impossibility of the theory of evolution. You have no selection process that can evolve a gene from the beginning, and competing selection processes stop the evolutionary process.
Kleinman said:
Oh, there is an “Rcapacity” issue, but the explanation is that competing selection processes slow down and ultimately stop evolution in the ev model. This phenomenon that ev is demonstrating also occurs in reality.
Paul said:
Convince us by running a series of experiments that shows evolution stopping, but not at the Rcapacity boundary.
Paul, you have already run the experiment. Don’t you recall having difficulty understanding why the generations for convergence do not go up linearly with increasing genome length and a fixed mutation rate as a function of the number of bases? You raised this question on the following thread: http://www.internationalskeptics.com/forums/showthread.php?t=67488

When you ignore the errors in the non-binding site region, you uncouple the rate of convergence from the genome length.
Paul said:
The problem, see, is that you refuse to acknowledge that a binding site too narrow to contain a unique sequence logo simply cannot evolve a perfect creature, regardless of the size of the genome.
The problem is you can’t remember the experiments you have done. Why did Unnamed’s selection process converge for much larger genomes than Dr Schneider’s selection process? Both selection processes use the same weight matrix and site widths.
 
Kleinman said:
Paul, you have already run the experiment. Don’t you recall having difficulty understanding why the generations for convergence do not go up linearly with increasing genome length and a fixed mutation rate as a function of the number of bases?
Alan! Kleinman! You're claiming that evolution will stop dead. I'm not disputing that its speed varies. Demonstrate that it stops dead.

The problem is you can’t remember the experiments you have done. Why did Unnamed’s selection process converge for much larger genomes than Dr Schneider’s selection process? Both selection processes use the same weight matrix and site widths.
Because the selection procedure was not as discrete as the standard one, allowing for selection among creatures with the same mistake counts.

~~ Paul
 
Kleinman said:
You can use either the “perfect creature” or the Rseq->Rfreq convergence criterion, and neither will converge when you use Dr Schneider’s selection conditions and the genome exceeds the length defined by Paul’s Rcapacity variable. The reason is that spurious bindings dominate the selection process and no binding sites evolve.
Rcapacity is not a function of genome size!

Hey Alan, care to comment on the experiments I reported above?

~~ Paul
 
Why did Unnamed’s selection process converge for much larger genomes than Dr Schneider’s selection process? Both selection processes use the same weight matrix and site widths.
Unnamed's selection process weighted mistakes by the sum of their strengths, rather than by their aggregate number. This effectively made a creature with more mistakes much less fit and a creature with fewer mistakes much more fit for the purpose of determining whether or not to select it for survival.

That's all the algorithm does, although you've consistently claimed otherwise.
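A hedged sketch of the contrast kjkent1 describes, with made-up strength values (not Unnamed's actual code):

[code]
mistakes = [0.75, 0.5, 0.25]    # hypothetical per-mistake "strengths"

count_score = len(mistakes)     # standard ev: 3 mistakes, an integer
strength_score = sum(mistakes)  # Unnamed: 1.5, a finer-grained score

# Two creatures with the same mistake count can now be ranked by how
# severe their mistakes are, which is why selection becomes less "discrete".
print(count_score, strength_score)
[/code]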
 
Kleinman said:
Paul, you have already run the experiment. Don’t you recall having difficulty understanding why the generations for convergence do not go up linearly with increasing genome length and a fixed mutation rate as a function of the number of bases?
Paul said:
Alan! Kleinman! You're claiming that evolution will stop dead. I'm not disputing that its speed varies. Demonstrate that it stops dead.
Paul, it is already demonstrated. You call it an Rcapacity effect, but it is due to the dominance of spurious binding in the non-binding site region.
Kleinman said:
The problem is you can’t remember the experiments you have done. Why did Unnamed’s selection process converge for much larger genomes than Dr Schneider’s selection process? Both selection processes use the same weight matrix and site widths.
Paul said:
Because the selection procedure was not as discrete as the standard one, allowing for selection among creatures with the same mistake counts.
How does that differ from varying the weight on the different errors in your model? Weight the errors for missed binding sites differently than the spurious binding errors and see whether your estimate for Rcapacity changes. Post your version on the evjava web page and I’ll generate the data to show this.
Kleinman said:
You can use either the “perfect creature” or the Rseq->Rfreq convergence criterion, and neither will converge when you use Dr Schneider’s selection conditions and the genome exceeds the length defined by Paul’s Rcapacity variable. The reason is that spurious bindings dominate the selection process and no binding sites evolve.
Paul said:
Rcapacity is not a function of genome size!
It is if you vary the weights for the different types of errors.
Paul said:
Hey Alan, care to comment on the experiments I reported above?
Paul said:
Let's run a little experiment to demonstrate the Rcapacity problem.

population 64
genome size 1024
binding sites 8
mutations/generation 4

This makes Rfrequency = 7.

Now, let's vary the weight and binding site widths, starting with 2 and 1, and increasing by 1. Here are the results:

weight/site width, Rcapacity, generations to perfect creature

1/2, 2, ---
2/3, 4, ---
3/4, 6, ---
5/4, 8, 19137
6/5, 10, 2742

Notice that the genome size is constant and the ratio of nonbinding-site genome to binding-site genome is roughly constant. We can keep it more constant by increasing the genome size to 2048, making Rfrequency = 8. The mutations/generation is 8.

weight/site width, Rcapacity, generations to perfect creature

1/2, 2, ---
2/3, 4, ---
3/4, 6, ---
5/4, 8, ---
6/5, 10, 4105
7/6, 12, 4117

It's an amazing thing, don't you think?

I’m not sure what you are trying to demonstrate here. You’ve run one series with 5 cases, of which only two converged, and another series with 6 cases, of which only two converged. The last two cases in the first series have a weight width greater than the site width and the last three cases in the second series have weight widths greater than the site width. I expect you transposed your numbers. Assuming that is the case, all you have shown is that your Rcapacity value gives a good estimate when the spurious binding errors dominate with Dr Schneider’s selection process. Change the weights for the different errors and you will find that the Rcapacity value will change.
 
Kleinman said:
Paul, it is already demonstrated. You call it an Rcapacity effect, but it is due to the dominance of spurious binding in the non-binding site region.
How large do I have to make the genome before you'll agree that the trivial difference in the ratios of nonbinding-site size to binding-site size is irrelevant? See below.

How does that differ from varying the weight on the different errors in your model? Weight the errors for missed binding sites differently than the spurious binding errors and see whether your estimate for Rcapacity changes. Post your version on the evjava web page and I’ll generate the data to show this.
Unnamed took into account how close each organism was to matching another binding site. The current model does not do that, regardless of parameters.

It is if you vary the weights for the different types of errors.
[latex]$R_\mathrm{capacity} = 2 \cdot \mathit{bindingsitewidth}$[/latex]

I’m not sure what you are trying to demonstrate here. You’ve run one series with 5 cases, of which only two converged, and another series with 6 cases, of which only two converged.
Note carefully at what point each converged: just after Rcapacity was large enough to accommodate Rfrequency. And in each set of experiments, the genome size is constant.

~~ Paul
 
Kleinman said:
Paul, it is already demonstrated. You call it an Rcapacity effect, but it is due to the dominance of spurious binding in the non-binding site region.
Paul said:
How large do I have to make the genome before you'll agree that the trivial difference in the ratios of nonbinding-site size to binding-site size is irrelevant? See below.
I don’t understand what you are asking. The ratio of the non-binding site region size to the binding site region size will always affect the rate of convergence with Dr Schneider’s selection process.
Kleinman said:
How does that differ from varying the weight on the different errors in your model? Weight the errors for missed binding sites differently than the spurious binding errors and see whether your estimate for Rcapacity changes. Post your version on the evjava web page and I’ll generate the data to show this.
Paul said:
Unnamed took into account how close each organism was to matching another binding site. The current model does not do that, regardless of parameters.
So what? When you weight the errors in the binding site region differently than those in the non-binding site region you will affect the rate of convergence of ev. If you give no weight to the errors in the non-binding site region, you will uncouple the convergence of ev from the genome length and your Rcapacity problem will disappear.
Kleinman said:
It is if you vary the weights for the different types of errors.
Paul said:
[latex]$R_\mathrm{capacity} = 2 \cdot \mathit{bindingsitewidth}$[/latex]
Nice use of fonts. It’s also an interesting coincidence that 2*bindingsitewidth gives a value that matches the point where ev no longer converges with Dr Schneider’s selection process, but it is the errors in the non-binding site region which are preventing convergence.
Kleinman said:
I’m not sure what you are trying to demonstrate here. You’ve run one series with 5 cases, only two converged and another series with 6 cases, only two converged.
Paul said:
Note carefully at what point each converged: just after Rcapacity was large enough to accommodate Rfrequency.
So your equation gives an interesting estimate of where ev fails to converge with Dr Schneider’s selection process, but your equation does not explain why ev does not converge. That explanation is that the non-binding site errors dominate the calculation and prevent binding sites from evolving.

The point here is that it is the competition of the two selection processes (spurious binding in the non-binding site region and unlocated binding sites in the binding site region) that causes the failure of convergence when the first selection condition dominates.

I’ll go around on this topic as many times as you want. You already understand that if you rewrote ev to try to evolve two independent sets of binding sites, with two different sets of selection processes on each genome, you would slow down the evolutionary process. Selection processes by their very nature are competitive phenomena. A good mutation and a bad mutation occurring on the same genome would have countering effects. Unless you ensure that you have two different good mutations on the same genome at the same time, selection processes will be working against each other. The more selection processes in action on a population at a given time, the more likely the evolutionary process will be stymied.
 
I’ll go around on this topic as many times as you want. You already understand that if you rewrote ev to try to evolve two independent sets of binding sites, with two different sets of selection processes on each genome, you would slow down the evolutionary process. Selection processes by their very nature are competitive phenomena. A good mutation and a bad mutation occurring on the same genome would have countering effects. Unless you ensure that you have two different good mutations on the same genome at the same time, selection processes will be working against each other. The more selection processes in action on a population at a given time, the more likely the evolutionary process will be stymied.

Evolution is not nearly that simple. If two mutations arise in a genome, their value to the organism is independently calculated, and the sum of that calculation is used to determine the overall fitness of the organism. For example, if 10 mutations arise, 9 of which are deleterious, but the last mutation adds a large evolutionary benefit, then the overall fitness of the organism will be higher than the wild type.
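A hedged illustration of that additive-fitness point, with invented effect sizes:

[code]
wild_type = 0.0
effects = [-0.01] * 9 + [0.5]   # nine mildly deleterious, one large benefit
mutant = wild_type + sum(effects)

# Independent effects are summed; the net result here is positive, so the
# mutant is fitter than the wild type despite carrying nine bad mutations.
print(mutant > wild_type)  # True
[/code]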
 