Intelligent Evolution?

Are you saying they need to be 'disallowed' in the sense that they would flaw the analogy if they weren't, and, as such, it's not a fair analogy because it's too far 'off beam'? If so, I don't consider that biological reproductive systems, so far as their ability to replicate and introduce mutations goes, are any different from the automaton in the AA. We're not here to consider how life (or the automaton) arose in the first place. Accepting that they do exist, we're taking the analogy forward from that point. Don't forget that after we've taken the AA as far as we can in terms of drawing conclusions I will be seeking to replace the automaton with Sam. The only reason we've introduced the automaton is to remove any possible suggestion that it's an intelligent being. As soon as we agree that Sam was behaving exactly the same way as the automaton does we can bring Sam back on board. Thereafter, at the appropriate time, we can reconcile Sam's behaviour with Ollie's, to dispel the notion of 'design', and hence 'intelligence', in 'ID'!

I'm not saying it's an unfair analogy, but I am saying that if you start considering why the robot does what it does you run into all sorts of problems with the analogy. Robots are not made of components that could have occurred through self-replication, therefore at some point you need a designer who decides what the robot should be like and what it should do. You run into first cause problems. Limiting the scope of the analogy avoids these problems.

Using Sam instead of an automaton introduced unnecessary elements, as the causes of behavior start to come into play. As I have said previously, Sam decided that he wanted to sell products like Ollie, he decided that his criterion for success was sales, and he decided to use a random system to do it.



That's right, PROVIDED we agree here and now to remove the notion that things displaying complexity must, by necessity, have been 'designed'. In other words, we have to divorce 'complexity' from 'intent and forethought' and entertain, at least for the start of the remainder of this discussion, that complexity can come about by chance. Clearly, if you can't agree to that then you are, by definition, a Creationist!
Of course complexity can arise without intent and forethought. In nature and biology that is the case most of the time. However, in science and technology intent and forethought factor in much more prominently.



I believe this statement is essentially tautological. If you disagree with the analogy then you're forced to draw this conclusion. Once you accept the analogy this criticism dissolves.

I think one example, when proven, can be shown to extend to all instances, except any blue sky innovations that just happened to succeed against all odds. I don't think there are many, if any, examples of blue sky species in nature!
I disagree here. In order for the original analogy to work, the majority of technological development processes would have to work as the AA does. Since the majority do not work from random changes but from changes applied with intent and forethought, the majority of members in the analogy's class of comparison do not share the trait in question, rendering the analogy misleading.


You'll need to show to me where 'intent' occurs in the AA. I'm assuming you're alluding to the selection process, in which case we might need to describe the 'marketplace' (environment), possibly by way of example, in order to introduce some tangibility that we can both easily relate to.
Intent does not exist within the AA, one of the reasons I have no problem using it as an analogy. However, intent exists with Sam. Sam intends to sell his electronics. Sam intends to use the electronics kit to do so. Sam intends to proceed via random processes within the limitations he has intended to set. Without Sam intending to do this, the electronics would not combine with increasing complexity.

Using Sam as an analogy doesn't lead to disputing ID; it leads to a creator god that designs the systems of evolution. Observe:

The IDer's god intends to develop creation according to his goals (complexity, humanity, etc). The IDer's god (may) intend to use evolution to do so. The IDer's god (may) intend to proceed via random processes within the limitations he has intended to set. Without the IDer's god intending to do this, life would not increase in complexity.

This analogy fits better than one to real evolution. In your Sam analogy you have an indispensable creator which proceeds to an ultimate goal of its choosing, rather than the mindless process which is the strength of the AA:evolution analogy.

What some people have tried to do here is say that because the capacity for intent comes from evolutionary processes, the exercise of intent should not be considered a real or separate process. This is wrong for four reasons.

1. Not every result of evolutionary processes plays a role in future evolutionary processes. For example, evolution has given rise to varying types of human earlobes, yet those earlobes are not feeding back into the evolutionary process in any meaningful way.

2. Intent is a key aspect in the argument your analogy is trying to disprove. God intends to create the world and does so through a given process. Using an analogy to an actor who intends to create something through a given process is pedagogically inappropriate regardless of the underlying ontological status of intent.

3. Intent is a legitimate way to describe the mental phenomenon of choosing a course of action that one intends to follow. While I personally do not think there is such a thing as libertarian free will, compatibilist free will suffices for providing a meaningful philosophical definition of intent.

4. Denying the existence of intent makes the analogy convoluted to the point of incomprehensibility. You end up trying to show that evolution can happen without intent, which doesn't exist. If this is the argument you want to use, you should simply argue the non-existence of intent. Then you win the intent/evolution debate automatically because non-existent things can not be part of a process.
 
It's not that intent doesn't exist... it's that you are overemphasizing what it is and what it means and where it comes from. When genomic information is replicated... the intent of the organism has nothing to do with the outcome... the only intent necessary is the intent to have sex or to do whatever it is that will make copying the information more likely... that is part of the program.... the program (DNA) has something about it that gets itself copied... it does not "care" what the intent of the replicators is... the same is true of blueprints... blueprints will code for items so long as there is a reason to use them and copy them and tweak them... it doesn't matter what humans think the intent is or why humans copy them or build houses or are programmed to be interested in building things that enhance their drives and preferences... from the point of view of the information-- getting copied is all that is important-- not why or whether it's intelligent or whether there's intent or what the intent is...

If the information is "copy worthy"-- that is, it can get copied in the environment it finds itself in-- it can become a part of an evolving information system... a genome, a language, a blueprint, a branch of technology. If it isn't "copy worthy"-- if it doesn't get itself copied... it dies. Intent is just the reason humans give for the things they do... the information they process... it may be accurate, it may not, and it may be somewhere in between. But it's irrelevant to the analogy. As is the nebulous term "intelligence". Intelligence and intent can be used to describe a spider building a web in regards to the analogy. To understand how web building evolved... or why it seems designed-- you don't need to know the spider's intent or apply some outside poorly defined force called "intelligence". Humans evolved as information generators, assimilators, replicators, and recombiners as well as environmental selectors of other information processing systems.
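The "copy worthy" idea above can be sketched as a tiny simulation. Everything here is an illustrative assumption, not a claim about any real replicator: the `copy_worthy` scoring rule, the toy pool of messages, and the environment that happens to favour short ones are all made up for the sake of the sketch.

```python
import random

def generation(pool, copy_worthy, capacity=100):
    """One round of blind copying: each piece of information gets copied
    in proportion to how 'copy worthy' it is in this environment.
    Whatever fails to get itself copied simply dies out."""
    weights = [copy_worthy(item) for item in pool]
    if not pool or not any(weights):
        return []
    return random.choices(pool, weights=weights, k=capacity)

# Hypothetical environment in which shorter messages happen to copy better.
worthiness = lambda s: 1.0 / len(s)

random.seed(3)
pool = ["a long rambling message", "short", "mid-sized text"]
for _ in range(20):
    pool = generation(pool, worthiness)
```

Run repeatedly, the pool drifts toward whatever copies best in this environment; nothing in the loop knows or cares why, which is exactly the point being made above.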
 

Brain Science Podcast episode 25 is an interview with author Rolf Pfeifer, Director of the Artificial Intelligence Laboratory at the University of Zurich. The focus of the conversation was the importance of embodiment. Brains (and intelligence) cannot be understood separate from their interaction with the body and the physical world. Pfeifer explains how this realization has led the field of artificial intelligence away from a pure computational approach to one he calls embodied artificial intelligence. His interview is spiced with numerous examples that demonstrate why this approach is relevant to those of us who are interested in the human brain.
Episode Highlights:
  • A brief overview of artificial intelligence
  • An introduction to biorobotics
  • Why artificial intelligence and biorobotics are relevant to understanding the brain
  • The meaning of complexity and emergence
  • Why the close coupling of the sensory and motor systems is essential to intelligence
  • Applying design principles to understanding intelligence
  • http://brainsciencepodcast.com/
Keith Stanovich holds the Canada Research Chair of Applied Cognitive Science at the Department of Human Development and Applied Psychology, University of Toronto. His research areas include the psychology of reasoning and rationality and the psychology of reading, which explores what happens in the brain and to the brain through the process of reading. Recently, he was named one of the 25 most productive educational psychologists. His many books include How to Think Straight about Psychology, Who Is Rational?: Studies of Individual Differences in Reasoning, and The Robot’s Rebellion: Finding Meaning in the Age of Darwin.

In this discussion with D.J. Grothe, Stanovich talks about his book
The Robot’s Rebellion: Finding Meaning in the Age of Darwin, which is about “Universal Darwinism” and its implications for widely and deeply held beliefs such as God, free will, and the concept of the self. He explores the gene’s eye view of life and also memes as self-replicating units of culture, and how these selfish replicators use humans as vehicles for their own purposes, even as they might not be in the best interest of humans. He shows some ways that we may overcome, or rebel against, these forces to construct meaning from our existence. http://www.pointofinquiry.org/

At this point, it is only willful ignorance that could make a person conclude that the analogy is flawed and not the reasoning of the person who doesn't understand the analogy. This is an analogy well understood by most experts in artificial intelligence and evolution... After pages of explanations and detailed links, if you haven't concluded that your failure to understand the analogy is your own misunderstanding of natural selection and the notion of the Selfish Gene... then you are willfully being ignorant in order to win some imaginary point or game in your head. If experts use the analogy readily... and many people who have no training in evolution can relate to it and use it to understand evolution-- then obviously the failure to understand is a blind spot in the reader... and not a weakness in the analogy. Those with the blind spot seem incapable of making the leap that experts readily make despite multiple links and careful explanations about the process. They are purposefully negating explanations that would make it clearer for them in order to keep themselves confused so they can believe that it's the analogy that is confusing and not their own muddled thinking.

If you don't want to understand the analogy-- admit it. But don't try to elevate your imagined expertise by pretending the problem lies with the analogy and with everybody else's reasoning. The true problem is your imagined expertise in a subject where you are not as knowledgeable as you imagine, and your inability to recognize those who know more than you and might give you an understanding you lack. You can't learn even the simplest things, it seems, when you assume you know all there is to know on a topic. You are not communicating anything of value to anyone. You are purposefully keeping yourself from understanding and trying to put down those who explain the facts better than you do. You are being willfully ignorant and attacking others to elevate your own imagined expertise on a subject that no one else considers you an expert in.
 
I disagree; in an appropriate environment (unrealistically benign), an organism will begin the process of self-replication at its inception. It will "run along the path", following its inherent behaviours, and will eventually reproduce. The process begins at inception, but is usually interrupted. This "interruption" is the selection (or "culling").

With your system, the (arbitrarily defined) trigger is what starts the system producing an imperfect copy. This is the receipt of sales information.

Natural-selection on self-replicating systems acts to interrupt the replication process, whilst your system can only work with an actual trigger to instigate the copying process.

This is because without self-replication, the selection of the "organism" would not affect the remotely-stored "copying instructions". Something else is needed.

OK, I'm prepared to accept the notion of an organism beginning the process of [self-]replication at inception. I'm not entirely sure, and whether it matters, what you mean exactly by 'inception', but I'll assume it to signify the time at which the organism begins to form. Please feel free to correct me if that's wrong, and it matters. I'm also prepared to accept the notion of the default scenario being that unless the organism is 'culled' as it 'runs along the path', i.e. de-selected, it will inevitably reproduce. Let's contrast that now with the AA:

So, each electronic device begins the process of [self-]replication at inception, i.e. the automaton starts to assemble each electronic device following the instructions for the previous one it assembled. Once assembled (complete with random 'mutations'), the electronic device is despatched (born, let's say) to the marketplace (environment), where it 'runs along the path' following its inherent behaviours (sits in the showroom, or on a shelf somewhere, with its features and characteristics prominently displayed for all to see). Let's assume that it will inevitably sell, meaning that proceeds are received, which are read as a signal that the device has outperformed the competition (survived), such that additional components are purchased, and the automaton is instructed to repeat the process, unless, of course, it is 'culled', i.e. it doesn't sell, denoting that the competition has outperformed it, just like you've assumed the organism will inevitably reproduce unless it is 'culled' by the competition.

There is, as we can plainly see, absolutely no difference between the two! The 'arbitrarily' defined trigger in the AA patently is NOT the proceeds of sale, as you erroneously believe. Inception, as in your biological description, can equally denote the starting point of the [self-]replication process.

We can also plainly see that you are clearly wrong in asserting that the AA 'can only work with an actual trigger to instigate the copying process'. The copying process can be considered an inevitability, unless 'interrupted' by 'culling', just like in your biological example. Again, absolutely no difference!

If it's the lines of communication that are confusing you jimbob, let's introduce a few wires and cables, plus a bit of hardware, that automate the process whereby the selling of the device is automatically registered back at the production plant and sets the automaton in motion. Hell, it could even operate like the mini-bars in up-market hotels which register removal of a product by a pressure sensor and send a signal to the computer system which automatically bills you (and, no doubt, instructs Housekeeping to [self-]replicate the product, sorry, replace the product by placing another in the fridge!).
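For concreteness, the replicate, mutate, cull cycle described above can be sketched in a few lines of Python. Every name and number here (`mutate`, `survives`, the scoring rule, the environment's bar) is a made-up stand-in for the AA's marketplace, not part of the analogy as stated:

```python
import random

def mutate(design):
    """Copy the previous design with one small random change,
    as the automaton does: no foresight, no goal."""
    child = list(design)
    op = random.choice(["add", "remove", "tweak"])
    if op == "add" or not child:
        child.insert(random.randrange(len(child) + 1), random.randrange(10))
    elif op == "remove":
        child.pop(random.randrange(len(child)))
    else:
        child[random.randrange(len(child))] = random.randrange(10)
    return child

def survives(design, environment):
    """The 'marketplace': replication is the default, interrupted only
    when the design is culled. Purely for illustration, a design is
    culled when its score falls below the environment's bar."""
    return sum(design) >= environment

def lineage(generations, environment=5):
    """Run the cycle: each surviving 'sale' triggers the next copy."""
    design = [random.randrange(10)]   # a single starting component
    history = [design]
    for _ in range(generations):
        child = mutate(design)
        if not survives(child, environment):
            break                     # culled: the lineage is interrupted
        design = child
        history.append(design)
    return history
```

Whether a lineage runs long or is culled early depends only on the random draws and the environment, which is exactly the parallel being drawn between the AA and the biological case.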
 
I'm not saying it's an unfair analogy, but I am saying that if you start considering why the robot does what it does you run into all sorts of problems with the analogy.

Why do we need to consider or preclude considering why the robot does what it does? We know, understand and appreciate exactly why the robot does what it does. Allow me to remind you:

The robot simply assembles electronic components according to the very same pattern it followed in assembling the previous device, except that it makes a small, random change each time. It has no intelligence; it doesn't 'know' what it's assembling, or what use, if any, the devices will have. Hell, it doesn't even recognize the components that it's assembling. They could just as well be nuts, bolts, washers, springs, levers, valves, etc. so far as the robot's 'concerned'. Now, we can wind this scenario right back to the point where all the robot had at its 'disposal' was a single piece of wire, or a rudimentary PCB, or a single transistor. As Dawkins succinctly puts it in 'The Blind Watchmaker', if you can't envisage winding right back to that point from the radio receiver that the automaton might now be making, just wind back far enough that you can envisage a realistic change, and repeat, and repeat, and repeat ...
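Dawkins's own illustration of this 'wind back and repeat' logic is the 'weasel' program from The Blind Watchmaker, which a rough sketch can reproduce. The litter size and mutation rate below are arbitrary choices, and, as Dawkins himself stresses, the fixed target phrase is a pedagogical simplification (real selection has no distant goal); the sketch only shows how cumulative small random changes outpace single-step luck:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(attempt):
    # count positions that happen to match; the selection test
    return sum(a == t for a, t in zip(attempt, TARGET))

def mutate(parent, rate=0.05):
    # copy the parent with small random changes, character by character
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(seed=0, litter=100, max_gen=5000):
    """Cumulative selection: breed copies with small random changes and
    keep whichever happens to score best, generation after generation."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for generation in range(1, max_gen + 1):
        offspring = [parent] + [mutate(parent) for _ in range(litter)]
        parent = max(offspring, key=score)
        if parent == TARGET:
            return generation
    return None
```

A random 28-character string has essentially no chance of matching in one step, yet the cumulative loop converges in a few hundred generations at most, which is the 'repeat, and repeat, and repeat' of the paragraph above.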

Robots are not made of components that could have occurred through self-replication, therefore at some point you need a designer who decides what the robot should be like and what it should do. You run into first cause problems. Limiting the scope of the analogy avoids these problems.

Now you're falling into the same trap as jimbob here. You're extending the scope of the analogy to a point which raises the question of how the 'replicators' came into being. If you extend that line of questioning further you will reach a point where we're asking ourselves how life, and indeed the Universe, ever got started. That's a whole different subject from evolution. The concept of evolution pre-supposes that life exists in the first place, and that within that 'life' a reproduction mechanism exists. That reproduction mechanism need only replicate and mutate, which I don't believe anybody is associating with ID. The automaton is simply the reproduction mechanism in the AA, so unless we go down a track whereby we see the need to introduce an explanation of how the basic biology behind reproduction works, then equally there is no need to question how the automaton came into being. If you really see this as a sticking point then let's bring Sam back into the picture right now instead. You can't have it both ways!

Using Sam instead of an automaton introduced unnecessary elements, as the causes of behavior start to come into play.

Only in your mind. Let's eliminate these 'behaviours' once and for all. I'll continue as though I've changed the Sam & Ollie scenario slightly. The changes will be self-evident from my responses below; please don't see it as criticism of your argument:

As I have said previously, Sam decided that he wanted to sell products like Ollie ...

No he didn't. He was let loose with an electronics set and asked simply to 'join bits together'.

... he decided that his criterion for success was sales ...

No he didn't. He has no concept of 'sales'. Periodically (we could say just before bedtime, for argument's sake), Sam's 'creations' (in whatever form they may then be) are taken from him by daddy and offered for sale.

... and he decided to use a random system to do it.

No he didn't. He just adopts a random system, because he knows no better.

That's the scenario; that's what we now have to work with. If you think Sam's character is even too unrealistic for the purpose of a hypothetical debate then replace him with the automaton, as I've described above, that can't think or feel, but is simply programmed to assemble parts randomly with absolutely no perception of why, and what for!

Of course complexity can arise without intent and forethought. In nature and biology that is the case most of the time. However, in science and technology intent and forethought factor in much more prominently.

Of course it does, and I'm trying to explain why that is, and why, in the absence of it, nothing would be any different, in principle, except that we'd still be riding around on horseback instead of driving around in automobiles. But believe me, the automobile is coming, for sure, just that you'll never see it during your generation, or the next, or the next ...

I disagree here. In order for the original analogy to work, the majority of technological development processes would have to work as the AA does. Since the majority do not work from random changes but from changes applied with intent and forethought, the majority of members in the analogy's class of comparison do not share the trait in question, rendering the analogy misleading.

You seem to be forgetting why and how the AA arose. It's a contraction of the original analogy simply to unravel and clarify some basic principles. You need to forget about the original analogy for the time being and focus on the AA. Once we reach agreement over what the AA demonstrates and tells us, then we can begin to extend it back towards the original analogy.

Intent does not exist within the AA, one of the reasons I have no problem using it as an analogy. However, intent exists with Sam. Sam intends to sell his electronics. Sam intends to use the electronics kit to do so. Sam intends to proceed via random processes within the limitations he has intended to set. Without Sam intending to do this, the electronics would not combine with increasing complexity.

I've eliminated Sam's 'intentions' above. I trust you accept my slight alteration to the Sam & Ollie story. If it helps, visualize Sam as without sight (mentally retarded too, if we need to go that far!), but capable of joining electronics parts together in a random fashion (you see, Sam's rapidly becoming a blind watchmaker!).

Using Sam as an analogy doesn't lead to disputing ID ...

It sure does, when you get your head around it (see above)!

... it leads to a creator god that designs the systems of evolution ...

It sure doesn't, when you get your head around it (see above)!

The IDer's god intends to develop creation according to his goals (complexity, humanity, etc) ...

Sam now has no intentions or goals (blind watchmaker!).

The IDer's god (may) intend to use evolution to do so.

Sam now has no intentions (blind watchmaker!).

The IDer's god (may) intend to proceed via random processes within the limitations he has intended to set.

Randomness is surely the default position in the absence of intent and forethought. Sam has no intentions (or forethought) or perceived limitations (blind watchmaker!).

Without the IDer's god intending to do this, life would not increase in complexity.

Sam now has no intentions (blind watchmaker!).

This analogy fits better than one to real evolution. In your Sam analogy you have an indispensable creator ...

Indispensable 'blind replicator' would be a more accurate description!

... which proceeds to an ultimate goal of its choosing ...

Blind watchmaker!

... rather than the mindless process which is the strength of the AA:evolution analogy.

Hey, let's go the whole hog if it helps: deaf, dumb and blind! Hell, we can even amputate his legs, if you like!

What some people have tried to do here is say that because the capacity for intent comes from evolutionary processes, the exercise of intent should not be considered a real or separate process.

I am inclined to agree with you on this. Doesn't impact on the analogy, though!

Over to you now. I suggest we either go with Sam as the replicator OR the automaton, but I trust I've amply demonstrated above that in the slightly revised Sam & Ollie story Sam is essentially now acting as an automaton. Personally, I'd prefer to run with the AA from here on in, as I feel I'm belittling my very intelligent son somewhat! ;)
 
It's not that intent doesn't exist... it's that you are overemphasizing what it is and what it means and where it comes from. When genomic information is replicated... the intent of the organism has nothing to do with the outcome... the only intent necessary is the intent to have sex or to do whatever it is that will make copying the information more likely... that is part of the program.... the program (DNA) has something about it that gets itself copied... it does not "care" what the intent of the replicators is... the same is true of blueprints... blueprints will code for items so long as there is a reason to use them and copy them and tweak them... it doesn't matter what humans think the intent is or why humans copy them or build houses or are programmed to be interested in building things that enhance their drives and preferences... from the point of view of the information-- getting copied is all that is important-- not why or whether it's intelligent or whether there's intent or what the intent is...

If the information is "copy worthy"-- that is, it can get copied in the environment it finds itself in-- it can become a part of an evolving information system... a genome, a language, a blueprint, a branch of technology. If it isn't "copy worthy"-- if it doesn't get itself copied... it dies. Intent is just the reason humans give for the things they do... the information they process... it may be accurate, it may not, and it may be somewhere in between. But it's irrelevant to the analogy. As is the nebulous term "intelligence". Intelligence and intent can be used to describe a spider building a web in regards to the analogy. To understand how web building evolved... or why it seems designed-- you don't need to know the spider's intent or apply some outside poorly defined force called "intelligence". Humans evolved as information generators, assimilators, replicators, and recombiners as well as environmental selectors of other information processing systems.

I have to say articulett, that whilst I whole-heartedly agree with just about everything you've written in this thread, I am struggling a little to understand your view on what we mean by intent, and its implications. I'm sure we all, here, can relate to intent in the context of the technological world we see around us. The distinction between something occurring through intent and otherwise is usually readily apparent in the outcome. I know, for example, that the computer on which I'm typing right now came about through the application of human intent. The fact that humans have evolved the characteristic of intent to the extent that we can apply it to processes beyond our mere subsistence, to my mind, does not mean that we can simply ignore it as a differentiator when comparing technological design with natural evolution.

What I'm trying to do, and it seems so obvious in my mind, is demonstrate that whilst humans do indeed possess and can apply intent, all that serves to do is leapfrog the many iterations that would otherwise inevitably have to be endured in a technological development process. In the absence of intent such leapfrogging would not be possible, but technology could still develop through random trial and error with retention of what works.

We actually see this happening in our daily lives. When I tie my tie in a morning I start by assessing the length of the back compared to the front, from experience, then I tie the knot. Sometimes the resultant length isn't quite right, so I untie it, adjust the starting lengths slightly, and try again. On a good day it's right first time; on a blonde day it can take up to four attempts!

There are two alternatives to this approach:

Try different starting lengths front and back randomly. This would include 'extremes', where it will be obvious that the result will be unsatisfactory. I could still do that, though, keeping on trying until I get the right result, but be late for work most days! or,

Record, from a successful tying, how long the front, or back, needs to be and tie my tie each morning by measurement.

Let's number the three approaches 1, 2 and 3: my usual method first, then the two alternatives in the order above. Now, alternative 1 involves a degree of intent. I know what result I'm seeking and set the parameters roughly where I think they need to be. Pretty soon I get a 'result'. Alternative 2 involves absolutely no intent (other than knowing what the objective is), and alternative 3 is based completely on intent, and should get a result first time every time.

To my mind, technological development is typically a combination of 1 and 3 above. Some aspects are trial and error, usually with limiting parameters; others are wholly pre-determined. I doubt that anything falls under alternative 2, for obvious reasons explained previously in this thread, BUT, as demonstrated with the tie-tying example, ALL THREE METHODS EVENTUALLY ACHIEVE THE SAME RESULT; THE ONLY DIFFERENCE IS TIMESCALE!!!
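The three methods can be compared directly in a toy simulation. The target length, tolerance, and search window below are invented purely for illustration; only the ordering of the attempt counts matters:

```python
import random

TARGET = 50       # the 'right' starting length, in arbitrary units
TOLERANCE = 2     # close enough to look presentable

def attempts_experienced(seed):
    """Alternative 1: some intent; experience narrows the search window."""
    rng = random.Random(seed)
    tries = 0
    while True:
        tries += 1
        if abs(rng.randrange(45, 56) - TARGET) <= TOLERANCE:
            return tries

def attempts_random(seed):
    """Alternative 2: no intent at all; guess anywhere, extremes included."""
    rng = random.Random(seed)
    tries = 0
    while True:
        tries += 1
        if abs(rng.randrange(0, 101) - TARGET) <= TOLERANCE:
            return tries

def attempts_measured():
    """Alternative 3: full intent; measure once, right first time."""
    return 1
```

All three terminate at the same result. Averaged over many mornings, the purely random method takes roughly an order of magnitude more attempts than the experience-bounded one, and the measured method always takes exactly one: the only difference is timescale.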

Now, having written that, I'm not sure if you're saying the same thing, or if you consider that 'intent' can really be factored out of the analogy comparison.
 
Why do we need to consider or preclude considering why the robot does what it does? We know, understand and appreciate exactly why the robot does what it does. Allow me to remind you:

The robot simply assembles electronic components according to the very same pattern it followed in assembling the previous device, except that it makes a small, random change each time. It has no intelligence; it doesn't 'know' what it's assembling, or what use, if any, the devices will have. Hell, it doesn't even recognize the components that it's assembling. They could just as well be nuts, bolts, washers, springs, levers, valves, etc. so far as the robot's 'concerned'. Now, we can wind this scenario right back to the point where all the robot had at its 'disposal' was a single piece of wire, or a rudimentary PCB, or a single transistor. As Dawkins succinctly puts it in 'The Blind Watchmaker', if you can't envisage winding right back to that point from the radio receiver that the automaton might now be making, just wind back far enough that you can envisage a realistic change, and repeat, and repeat, and repeat ...

See below.

Now you're falling into the same trap as jimbob here. You're extending the scope of the analogy to a point which raises the question of how the 'replicators' came into being. If you extend that line of questioning further you will reach a point where we're asking ourselves how life, and indeed the Universe, ever got started. That's a whole different subject from evolution. The concept of evolution pre-supposes that life exists in the first place, and that within that 'life' a reproduction mechanism exists. That reproduction mechanism need only replicate and mutate, which I don't believe anybody is associating with ID. The automaton is simply the reproduction mechanism in the AA, so unless we go down a track whereby we see the need to introduce an explanation of how the basic biology behind reproduction works, then equally there is no need to question how the automaton came into being. If you really see this as a sticking point then let's bring Sam back into the picture right now instead. You can't have it both ways!
This was the point I was making in the last part you quoted, that the scope of the analogy needed to be limited before the point "which raises the question of how the 'replicators' came into being."

As I said before, I don't think it necessarily robs this analogy of any explanatory power, as you aren't trying to explain first causes. Now we'll move on to the problems with Sam.



Only in your mind. Let's eliminate these 'behaviours' once and for all. I'll continue as though I've changed the Sam & Ollie scenario slightly. The changes will be self-evident from my responses below; please don't see it as criticism of your argument:

No he didn't. He was let loose with an electronics set and asked simply to 'join bits together'.

No he didn't. He has no concept of 'sales'. Periodically (we could say just before bedtime, for argument's sake), Sam's 'creations' (in whatever form they may then be) are taken from him by daddy and offered for sale.

No he didn't. He just adopts a random system, because he knows no better.

That's the scenario; that's what we now have to work with. If you think Sam's character is even too unrealistic for the purpose of a hypothetical debate then replace him with the automaton, as I've described above, that can't think or feel, but is simply programmed to assemble parts randomly with absolutely no perception of why, and what for!
I do feel that this is too unrealistic for the purpose of a hypothetical debate, but for the very reason that he cannot be replaced with an automaton. In order for this to work you have to abandon any pretense of compatibilist free will/intent and enter cyborg's world where humans are mindless robot analogues. I know that's what you're trying to do here, but it doesn't work.

Even a child of Sam's age has desires, motivations, and problem solving capabilities. Changing the selector from the school market to his father telling him which designs to choose doesn't remove his motivation, it only changes it. Now instead of being motivated by the prospect of making a sale, he is motivated by pleasing his father by building designs similar to what he knows his father likes.



Of course it does, and I'm trying to explain why that is, and why, in the absence of it, nothing would be any different, in principle, except that we'd still be riding around on horseback instead of driving around in automobiles. But believe me, the automobile is coming, for sure; it's just that you'll never see it during your generation, or the next, or the next ...
I don't see how you can make this argument and still hold an opposing position. If technological development would be stunted with a random development method compared to current methods, then obviously there are great differences between a random development method and current methods of technological progress. If there are great differences between a random development method and current methods of technological progress, then it is nonsensical to use an analogy comparing evolution (a random development method) to current methods of technological progress.

If you define a specific, unusual example like the AA then you can make a comparison, but to use it as a generality is pointless for the reasons I just wrote.



You seem to be forgetting why and how the AA arose. It's a contraction of the original analogy simply to unravel and clarify some basic principles. You need to forget about the original analogy for the time being and focus on the AA. Once we reach agreement over what the AA demonstrates and tells us, then we can begin to extend it back towards the original analogy.



I've eliminated Sam's 'intentions' above. I trust you accept my slight alteration to the Sam & Ollie story. If it helps, visualize Sam as without sight (mentally retarded too, if we need to go that far!), but capable of joining electronics parts together in a random fashion (you see, Sam's rapidly becoming a blind watchmaker!).
The problem is the AA is not a contraction of the Sam & Ollie story; it is a fundamentally different example. You have removed all human elements from it. It is no longer a process subject to human design, intent, or intelligence. When you bring Sam back into it, all those things appear in some form. I know you're trying to make it work by hypothetically lobotomizing parts of Sam's personality, but until you cut him into a zombie equivalent to a robot it's not going to work. Up until that point, Sam's motivation, goals and procedures are all working from a position with elements of directed design.



Randomness is surely the default position in the absence of intent and forethought. Sam has no intentions (or forethought) or perceived limitations (blind watchmaker!).

Over to you now. I suggest we either go with Sam as the replicator OR the automaton, but I trust I've amply demonstrated above that in the slightly revised Sam & Ollie story Sam is essentially now acting as an automaton. Personally, I'd prefer to run with the AA from hereon in, as I feel I'm belittling my very intelligent son somewhat! ;)
As I said, your very intelligent son would not appreciate being turned into the sort of character necessary for an apples to apples conversion into the AA. Unless of course he'd enjoy a drooling shamble to the break room for the brains you put in his lunchbox. :p
 
I do feel that this is too unrealistic for the purpose of a hypothetical debate, but for the very reason that he cannot be replaced with an automaton. In order for this to work you have to abandon any pretense of compatibilist free will/intent and enter cyborg's world where humans are mindless robot analogues. I know that's what you're trying to do here, but it doesn't work.

All I'm trying to do is extract intent and forethought from the human design process to show that, in principle, in terms of what would then emanate, intent and forethought make no difference.

If I were to present to you two identical part-decks of playing cards (say 5 cards only, for argument's sake), each having ace to 5 of spades in sequential order, and I explain that one of the decks has been deliberately arranged into order but that the other has been repeatedly shuffled until it fell into order by chance, would you believe me? If not, try shuffling five sequential cards until you do! Now, having accepted that, if I were then to ask you which of the two decks is the shuffled deck, would you be able to tell me? No you wouldn't. So, having established that intent/forethought can be extracted from seeming 'design' with playing cards, we just need to establish the same with technology. Let's start with materials, and let's do it in steps, like 'real' evolution. How about glass:
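As an aside, before we get to glass: the shuffled-deck claim is easy to check with a short simulation. A minimal sketch, assuming a deck labelled 1 to 5 and an arbitrary number of trials:

```python
import random

def shuffles_until_ordered(deck_size=5, seed=None):
    """Shuffle a small deck repeatedly until it falls into order by chance."""
    rng = random.Random(seed)
    deck = list(range(1, deck_size + 1))  # ace (1) to 5
    attempts = 0
    while True:
        rng.shuffle(attempts := attempts, deck)[0] if False else rng.shuffle(deck)
        attempts += 1
        if deck == sorted(deck):
            return attempts

# A 5-card deck has 5! = 120 possible orderings, so on average it
# takes about 120 shuffles before pure chance deals the sorted one.
trials = [shuffles_until_ordered(seed=s) for s in range(200)]
print(sum(trials) / len(trials))  # hovers around 120
```

With 5! = 120 equally likely orderings, each shuffle is a 1-in-120 shot, so the average number of shuffles needed lands near 120: tedious for a human, trivial for chance plus repetition.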

I could list out all of the variables that determine its production such as raw materials, processing thereof, batching proportions, heating temperature, rolling margins, etc, but this identifies and demonstrates them far better than I can if you prefer an illustration.

Now, how are each of the variables determined? Well, I'm not sure, but give me access to the production plant and operators in the video and we'll start with some random parameters. How long do you think it might take before we start to see something resembling glass being produced (you'll note I didn't ask whether we'll ever see glass at all; just how long!).

OK, so you might be tempted to 'wind that back' and ask something like: "But what about the production plant itself, that was 'designed' with intent and forethought!". OK, I couldn't be bothered to search for a video, but I'm sure you can imagine what a glass production plant might have looked like, say, 30 years ago, and then 50 years ago, and then 75 years ago, and then ...

So we can 'arrange' playing cards without intent and forethought, and we can make down-stream materials without intent and forethought, so let's consider machines or appliances that are actually assembled from these materials. Actually, I needn't bother. You can see where this is going, I'm sure.

So, what, now, if anything, still prevents us from accepting that complex machines can arise without intent and forethought? Well, I'd say 'time' poses a huge mental barrier for most people. Indeed, it's the very same reason that many people struggle with the notion of natural evolution, putting aside matters of mutation, selection, replication, etc.

How long did it take man to get from this to this? It took 105 years, or roughly 4 generations of man! How many 'generations' did it take to produce Orville & Wilbur Wright? I'll leave it to you to work that one out.

Even a child of Sam's age has desires, motivations, and problem solving capabilities. Changing the selector from the school market to his father telling him which designs to choose doesn't remove his motivation, it only changes it. Now instead of being motivated by the prospect of making a sale, he is motivated by pleasing his father by building designs similar to what he knows his father likes.

I think you've misunderstood me. I haven't changed the 'selector'. I've simply clarified that I will take Sam's devices to market; the same market (environment) as before (natural selection - survival of the 'fittest').

I don't see how you can make this argument and still hold an opposing position. If technological development would be stunted with a random development method compared to current methods, then obviously there are great differences between a random development method and current methods of technological progress.

Stunted = great difference?! Is 6 months a long time? Depends on whether you're a butterfly or a mountain, I suppose! Time's arbitrary!

If there are great differences between a random development method and current methods of technological progress, then it is nonsensical to use an analogy comparing evolution (a random development method) to current methods of technological progress.

I agree. Big 'IF' though, ain't it!

If you define a specific, unusual example like the AA then you can make a comparison, but to use it as a generality is pointless for the reasons I just wrote.

What's specific about the AA? It couldn't be more generic! Change any variable within it, you still end up with replication, mutation and natural selection. Take your pick!

The problem is the AA is not a contraction of the Sam & Ollie story, it is a fundamentally different example. You have removed all human elements of it. It is no longer a process subject to human design, intent, or intelligence.

Hang on a second. 'Human elements'? Let's try to keep this objective please. We're talking 'intent and forethought', right? Nothing more, nothing less, right? Intent and forethought, for the purpose of this discussion, I thought, represented both 'intelligence' and 'design'. Have I misunderstood?

When you bring Sam back into it, all those things appear in some form. I know you're trying to make it work by hypothetically lobotomizing parts of Sam's personality, but until you cut him into a zombie equivalent to a robot it's not going to work. Up until that point, Sam's motivation, goals and procedures are all working from a position with elements of directed design.

As I said before, let's keep Sam out of it for now. Let's 'evolve' the discussion - one step at a time!
 
When I tie my tie in a morning I start by assessing the length of the back compared to the front, from experience, then I tie the knot. Sometimes the resultant length isn't quite right, so I untie it, adjust the starting lengths slightly, and try again. On a good day it's right first time; on a blonde day it can take up to four attempts!

There are two alternatives to this approach:

Try different starting lengths front and back randomly. This would include 'extremes', where it will be obvious that the result will be unsatisfactory. I could still do that, though, keeping on trying until I get the right result, but be late for work most days! or,

Record, from a successful tying, how long the front, or back, needs to be and tie my tie each morning by measurement.

Let's number the three alternatives 1, 2 and 3 in the order above. Now, alternative 1 involves a degree of intent. I know what result I'm seeking and set the parameters roughly where I think they need to be. Pretty soon I get a 'result'. Alternative 2 involves absolutely no intent (other than knowing what the objective is), and alternative 3 is based completely on intent, and should get a result first time every time.

To my mind, technological development is typically a combination of 1 and 3 above. Some aspects are trial and error, usually with limiting parameters; others are wholly pre-determined. I doubt that anything falls under alternative 2, for obvious reasons explained previously in this thread, BUT, as demonstrated with the tie-tying example, ALL THREE METHODS EVENTUALLY ACHIEVE THE SAME RESULT; THE ONLY DIFFERENCE IS TIMESCALE!!!
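For what it's worth, the three methods can be caricatured in a few lines of Python. This is a purely hypothetical sketch: the target length, tolerance and ranges are invented numbers; only the three strategies come from the tie-tying example:

```python
import random

# Hypothetical numbers, chosen only for illustration.
TARGET = 30.0     # 'correct' front length, in cm
TOLERANCE = 0.5   # close enough to look right in the mirror

def attempts(method, seed=0):
    """Count tying attempts until the length is acceptable."""
    rng = random.Random(seed)
    guess, tries = None, 0
    while guess is None or abs(guess - TARGET) > TOLERANCE:
        tries += 1
        if method == 1:
            # Experience: start roughly right, then nudge toward the goal.
            guess = (TARGET + rng.uniform(-2, 2) if guess is None
                     else guess + (TARGET - guess) * rng.uniform(0.3, 0.9))
        elif method == 2:
            # Pure chance: pick any length at all, no intent.
            guess = rng.uniform(0, 100)
        else:
            # Measurement: fully pre-determined, right first time.
            guess = TARGET
    return tries

for m in (1, 2, 3):
    print(m, attempts(m, seed=7))
```

All three runs end with a correctly tied tie; method 3 gets there first time, method 1 in a handful of tries, and method 2 in however long chance takes. The only difference is timescale.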


You have correctly noted that natural selection will not help you tie a tie... this morning.

On the subject of ties... I believe that the analogy, in factoring out intelligence, factors out a very significant way in which adapted complexity (here on earth) evolves: culture.

Or am I mistaken and is the ability to adapt through learning from the experience of fellow critters accounted for in the analogy?
 
All I'm trying to do is extract intent and forethought from the human design process to show that, in principle, in terms of what would then emanate, intent and forethought make no difference.

I'm at the end of an all-nighter, so forgive me for not getting to the rest of your post right now.

I am not talking about principle, and I am not disagreeing with you that in principle you can eventually match the normal technological design process with a random one. What I am saying is that they are very different processes. The AA is worlds different from what goes on in the vast majority of engineering departments, and trying to speak as if it were the norm (as the general 'evolution is like tech design' analogy attempts to do) is just silly.

In fact, now that I think about it, it works better backwards. You're better off explaining your unusual design process through an analogy to evolution.
 
Southwind, two major points I think I am not getting across to you. I am going to take the liberty of reordering points, since I want to separate it into where we disagree on technology, and where we disagree on evolution.

When you say:
So, we're left comparing indifferent or successful mutations with beneficial design changes. If a designer sees some 'potential' (or even indifference) in a design modification, even though a further modification is required to realize that potential, as you describe above, that is no different from natural evolution. Provided a mutation isn't detrimental there's no reason why it should disappear.
It is completely unlike natural evolution because in design by mutation and selection you succeed or you don't, while in intelligent design there is a third category: "useful to the intelligent designer". Suppose I were to buy a new video card for my computer and start it up. I see some beautiful graphics for a few seconds, and then the computer freezes as it overheats. This is obviously detrimental to the purpose of the computer. This computer is useless to a computer user, but useful to a computer designer, who now needs only to throw in cooling fans in order to make an improved computer.
By definition, the initial (poor) modification must have some potential benefit for the designer to retain it. If not, he would simply scrap it and return to go.
That's just it. It has potential benefit for the designer. It has no benefit to a system without one.

I think you are also discounting the concept of the third "bin" of useful to the designer when you say
When you write 'radical', are you taking into account the fact that, at the micro level, the technology is, at best, only small steps removed from the previous design? Does the designer, for example, suddenly and unexpectedly stumble upon new materials and components that appear from thin air, or are those materials and components 'evolved' over time?
Again in this situation the intelligent designer doesn't stumble upon the new materials and technologies. But the end user (the selection agent for your hypothetical robot) does. The rotary engine went through several iterations before it was suitable to replace any traditional combustion engine. These were seen and selected by the designer, an agent which doesn't exist in our natural algorithm. To the end user it did "appear from thin air".

The transition of materials in electronics might be a more extreme example of materials that evolved but emerged in the marketplace suddenly. To go from silicon chips to newer materials that allowed even faster chips to be built had a huge number of "design only" stages. Before the first Gallium Arsenide chip appeared for a non-designer, there were modifications to machines to grow crystals, products used to measure the characteristics of those wafers, prototype transistors that didn't function as well as old Si transistors, and on and on. How would your robot perform this function?
- Anybody want to buy this new transistor? No, the old style ones function better. Throw it out.
- Anybody want to buy a wafer of this new material? No, the robot is the designer, I don't have use for a slab of GaAs.
- Want to buy the machines that measure and build the wafers? ...

So yes, to the intelligent designer, this material evolved over time. But again, said designer doesn't exist in your hypothetical scenario. To us end users, circuits made up of transistors on brand new materials "appeared from thin air" in our new products.

The intelligent designer didn't leap frog steps, he stood on them. But to the intelligent designer they were steps, to your robot driven by the end user these are roadblocks rather than steps. A heap of products that are useless in a system without a designer.

Walt
 
On the subject of biological evolution, you seem to be making a lot of assumptions about how your robot operates that make it function completely unlike biological evolution.

First, when you say "we can disregard detrimental mutation" you completely change the way both systems work. As I mentioned, detrimental mutations in technological design can still lead to useful information. The mutation that is detrimental to the current generation can be overcome. But more importantly, detrimental mutations take resources from both types of evolution. Biological evolution has its own feedback mechanism whereby genes are selected that affect the mutation rate, giving a balance between "stable" (where a shift in the environment causes the demise of a line) and "unstable" (where a huge number of mutations would lead to something so unlike the previous generation that it would be unlikely to survive). Some, and possibly most or all, organisms can even alter their own mutation rates in certain circumstances.

If you remove the effect of detrimental mutations, you would change the face of biological evolution.

Second, you insist that all you need is time. But rates and times are incredibly important to evolution. The landscape changes over time, so in both cases there are windows where a particular change will be wanted/beneficial. That window is dependent not only on the selection factors external to the algorithm, but the organisms created by it also affect it.

Now, even if you had the resources to disregard detrimental mutations and infinite time, the results of the two algorithms would still look different. Technological innovation thrives on similarity. Look at how similar two Wiis are, or any other product. Biological evolution thrives on diversity, as it ensures a large pool of material for when the environment changes.

Also, as mentioned, failed products can be stepping stones for the designer, whereas the failed organisms create roadblocks for blind biological evolution. You got around this in your answer to me by basically using arbitrarily large mutation steps: "If he keeps 'fiddling' with an otherwise detrimental modification until it becomes advantageous, then the first time that an advantage emerges can be considered to be THE mutation."

When you allow enough resources that you can disregard detrimental mutation, give the robot as much time as it wants regardless of the age of the earth, the universe, etc., and allow fiddling until you get massively different designs, you are no longer playing at biological evolution. Instead you are claiming that an infinite number of robots at an infinite number of drafting boards will eventually design the entire works of Thomas Edison.

Walt
 
"Blind Watchmaker," hah!
That's not scary. It's just pitiable.
Here's a nightmare inspired by this thread.
[attached image: headless1.JPG]

The Headless Watchmaker!
 
Intent and Intelligence are part of a continuum... they can be factored out... yes, they just speed up the process...

If you wanted to know how spiderwebs evolved... the intent and intelligence of the replicators isn't important. You'd want to know how the code for web building evolved genomically and how spiders are programmed via their genes and environment to build the kinds of webs they build in the places they build them. These are aspects that evolved... this is information programmed via their genomes. A spider's web doesn't evolve-- but snapshots of spiderwebs evolved over time... same with beehives and bird nests and ant colonies... also the dance bees do. They have their basis in DNA... the information coding for these "complex" and seemingly designed things evolved.

Human products evolve in a similar fashion. Planes don't evolve--but if you look at snapshots in time, they appear to evolve and speciate... just as spider webs do--each spider species has its own web-building "program" selected for in its ancestors' genes interacting with the environment over time. In both cases... what is evolving is the information... in both cases the replicators of the information are not the things that code for the information (spiders pass on genes for spiderweb building... humans pass on blueprints for airplane building)... and in both cases it's the information that evolves over time--giving the appearance of objects evolving over time (spiderwebs and airplanes).

What drives both? Why do they get increasingly better or more suited to their niche? Why do they seem so "designed"? Because information that is good at getting itself copied drives evolution and can't help but give the appearance of being an amazingly good fit for its niche... It doesn't matter why the spider builds the webs or why humans copy some designs for mass production... What matters to an evolutionary process is that information gets copied, recombined, and processed, so that it can evolve over time... so all information that is part of any evolving system is evolving to get itself copied in the future... it can't want that... but that is the only kind of information that can evolve--only DNA that gets copied can live to program future organisms... only DNA that gets preferentially copied can become part of spiderweb evolution. Only airplane designs that are preferentially copied become part of aviation evolution.

Because of this... what we call "intelligence" and "intent" are just mechanisms evolving information systems have come up with to get themselves replicated. Blindly... but inevitably... Just as the AIDS virus can get itself copied by exploiting the human sex drive (intent:sex), Information (religions for example) can get themselves copied by exploiting human fears and hopes and intelligence (the intelligence to understand that we will die, for example). Information that can get copied will-- it doesn't care whether humans think it's good, fit, or intelligent. It doesn't care what we intend. It can't. It can only have something about itself that makes it preferentially survive... or it can die out.

It's probably too heavy. But I'm glad that Cyborg understands it. The podcasts above explain it as does the Selfish Gene and multiple other things I've linked. It's not relevant to the analogy-- it just illustrates why what humans call "intent" or "intelligence" isn't special or outside nature... isn't even real... or rather it's part of a continuum... part of a higher order "information processing" evolving system...a more advanced part of evolving information systems... If the tree of life is traced via information codes--the branch containing humans has gone wild with growth and sprouts and twigs and leaves and fruit that will produce more growth...

But don't worry about understanding this. It's enough to say that intelligence and intent are not relevant to the analogy-- it confuses the analogy with poorly defined terms. As does "self replication"-- what is important is that some information is preferentially copied and what we observe is a honing of design over time... the same as we see in nature... and increased specialization and "complexity" and efficiency. The only path for evolution is forward... a tree can only grow up and out. Bottom up is the only known path forward-- even our own technology evolves that way. Our cities do... our languages do... there is no other way. There is no "poofing" into existence... no tornadoes making 747's... no god poofing birds into existence... it must be bottom up. The fact that entities such as us would evolve and speed up the system is inherent in such a system... in the same way that computer programs evolved increasingly better information processors, combiners, replicators, storage, memory, speed, efficiency... Once such an information system begins evolving--the only path is forward. The useful modifications stick around to be built on further and the less fit mutations die out. Humans die... the information they assemble and process can live on and evolve just as the Wright brothers' airplane design has in today's aircraft.

Aack. My brain hurts. I hope someone understood a little. If not, don't worry about it. It will follow as you understand more.
 
Articulett,

Very good post. I do follow you on this subject of the evolution of information systems and a process prior to what we abstract out as intelligent intervention. I find it quite Zen in a way.

This headless stuff is quite heady as you point out.
If a student raised his hand in class and asked,
"So where are the draftsmen and engineers in biological evolution that we have in technological evolution?"
I'd not try to talk the more advanced piece. I'd make sure he understood the basic process of natural selection first to clear away the whole ID misunderstanding.
Then maybe we could begin to talk about the wider conceptual framework, beginning with natural technologies such as spider webs and beaver dams.

Most of this thread has been an argument over an analogy. Analogous statements aren't statements of truth but tools of presentation. Their efficacy depends on their audience. If one doesn't work for the listeners, a teacher uses another.
 

Yes. I am a teacher. And Southwind's analogy does work and helps students understand what evolution is. Southwind seemed to want more explanation as to "intent" and "intelligence" and how they are part of the system. They are not necessary for the analogy at all. But on a deeper level they can be understood via the analogy. You can understand that they are products of evolution with a human centered meaning. Like "free will", they are not the concrete attributes people seem to imagine they are.

I just like understanding this. I'm excited to be on the forefront of some developing knowledge... and eager to share it with people who might be interested. But even at its most basic level--the analogy works for many, many people.
 