Charles Stross on the Singularity.....

Does the brain replace all of its cells over a period of time? If it does then what is the fundamental difference between copying yourself over the course of a few years and copying yourself instantly?
 
The first results in one copy; the second results in multiple copies.
 
The notion of The Singularity is nonsense, and the crowd of hangers-on, who lust for some form of immortality with all of the intellectual integrity of a Fundie, wouldn't be worth spending 64 bits on.

"Technological singularity refers to the hypothetical future emergence of greater-than human intelligence. Since the capabilities of such an intelligence would be difficult for an unaided human mind to comprehend, the occurrence of technological singularity is seen as an intellectual event horizon, beyond which the future becomes difficult to understand or predict."

That seems fairly reasonable to me regardless of what "the crowd of hangers-on" believe.
 
"Technological singularity refers to the hypothetical future emergence of greater-than human intelligence. Since the capabilities of such an intelligence would be difficult for an unaided human mind to comprehend, the occurrence of technological singularity is seen as an intellectual event horizon, beyond which the future becomes difficult to understand or predict."

That seems fairly reasonable to me regardless of what "the crowd of hangers-on" believe.


I'm well aware of what the prophet Kurzweil has proclaimed. I find him simplistic and self-aggrandizing, not someone who is worth any time. He enjoys prognosticating and being worshipped; that is all.

I have little doubt that AI research will eventually make some significant advances, should we last long enough. Whether such advances would result in 'greater-than-human intelligence', or simply in 'other-than-human intelligence', remains to be seen. I am not suggesting that human intelligence is unsurpassable, merely that intelligence is neither simple nor scalar.

To inject a phrase such as 'intellectual event horizon' into the mix is unworthy.

I have been involved in several aspects of AI research since 1986, when I specialized in automated reasoning for my graduate studies. This is not intended to be an appeal to authority, merely a statement that I've been thinking about it for a long time and have a lot of respect for the complexity of the problems involved. I really dislike the shallow, trite treatment of this topic that some people give it.

I'd much rather read some good (or even bad) science fiction that explores what may happen rather than Kurzweil and his ilk.
 
For the most part I agree with you about Kurzweil, but as I understand it, "greater-than-human intelligence" could just as easily mean human intelligence artificially enhanced by genetic means or by interfacing with computers as it could a purely computer AI. I don't know how likely it is that human culture could artificially enhance its intelligence, but if it could, then I'm not aware of a convincing argument that a technological singularity would not be the end result.
 
I have little doubt that AI research will eventually make some significant advances, should we last long enough. Whether such advances would result in 'greater-than-human intelligence', or simply in 'other-than-human intelligence', remains to be seen. I am not suggesting that human intelligence is unsurpassable, merely that intelligence is neither simple nor scalar.

I think that's something which is very important to point out in this discussion. However, I would define "greater than human intelligence" as intelligence that is capable of solving particular problems better (in some way*) than a human. What that means is that some AI is already (by my definition) "greater than human intelligence", at least when that AI is teamed with a human.
So is it a useful definition? I think it is, because I think we're very unlikely to develop any one computer that thinks like we do. Rather, I think that AI is emerging one problem at a time. Google Translate is good at giving a somewhat usable translation of text, and it's getting better. Their search engine is good at finding the information you're looking for, and getting better. Engineers have various tools that are used in design applications, and those are also improving. I don't know how useful the concept of a "singularity" is, but as these tools continue to improve, and as we use them in sync with each other, the whole system becomes more and more complex, and its potential increases exponentially.

Anyway, that's the direction that I see things going, though predictions about the future based on present trends tend not to be worth much.

*Faster, with a more useful solution, more efficiently, whatever.
 
Google on glasses or contact lenses, with a heads-up display and an interface invisible to observers, is possible now. Whether consumers want it is another matter, but whether they would want it at a fraction of the cost in five years' time is also another matter.
 
One of the problems I have with the idea is that it seems to me to assume a degree of hardware/software separation that I do not believe exists in humans. To a very large degree I am my body and the shifting soup of chemicals, hormones, neurotransmitters, and waves of adrenalin that somehow get woven into what passes for a personality. For the SF fans, one part of Altered Carbon that stuck in my mind was the attraction the protagonist felt to a colleague, which vanished when his "mind" was transferred to another host. I'm not supposing some idea of a soul; rather, I think that what we think of as personality is a rather hazy emergent property of multiple competing subsystems. Consider, for instance, the studies that show muscle activity before the conscious decision to move that muscle.
At the very least we would need to develop a rather more... nuanced idea of what constitutes personality and survival. We already have a certain concept of survival through our children, who are at a certain remove from ourselves; others do not feel this. Perhaps an edited version of our "selves" with much of the baggage removed?
And as Ben Goldacre says "I think you'll find it's a bit more complicated than that"
 
...
There isn't such a continuity when you upload a copy, even if you immediately kill the original in the process.
Okay, so there's a "lack of continuity" but what does that mean practically? What problem(s) does it cause? You appear to be using this as an objection, but you're not saying why it's an objection.

I have a lack of continuity every day from going to sleep at night and waking up the next morning. How would waking up in a different body be different (other than the obvious that I'm not in the same body)?

Regardless of the continuity, if the original is killed I'd think a crime has taken place.

I think (IANAL, JMHO, etc.) it would be legally okay to upload or duplicate a person (with their full permission, of course), but then both the original and copy would legally be separate individuals (despite having the exact same name, memories, personality and such the moment the original is scanned and the copy created). The making of many copies could cause an interesting overpopulation problem, but that's yet another issue.
 
Okay, so there's a "lack of continuity" but what does that mean practically? What problem(s) does it cause? You appear to be using this as an objection, but you're not saying why it's an objection.

...snip...

For me the objection is that I don't want to die, which is what happens in the "make a copy and destroy the original" scenarios.
 
I don't agree, if I understand you correctly. There is no 'I' apart from the body in the context of its surroundings. If your body were truly copied (not possible, I think), both would start with the same configuration of atoms, corresponding pairs of atoms would be identical, and both would start with the same 'I'.

The 'make a copy and destroy the original' scenario and the 'make a copy and destroy the copy' scenario are identical, as are the copies.

Of course, both copies would not want to die, for that is how they would interpret their destruction, but neither copy has something that the other lacks (immediately after the moment of copying).

I would regard the destruction part as immoral, of course, and rather unfriendly.
 
For me the objection is that I don't want to die, which is what happens in the "make a copy and destroy the original" scenarios.

You are a copy and the original you has been destroyed over a period of years. If you uploaded yourself cell by cell, memory by memory over a long period of time then what would be the difference?
 
I don't agree, if I understand you correctly. There is no 'I' apart from the body in the context of its surroundings. If your body were truly copied (not possible, I think), both would start with the same configuration of atoms, corresponding pairs of atoms would be identical, and both would start with the same 'I'.

The 'make a copy and destroy the original' scenario and the 'make a copy and destroy the copy' scenario are identical, as are the copies.

Of course, both copies would not want to die, for that is how they would interpret their destruction, but neither copy has something that the other lacks (immediately after the moment of copying).

I would regard the destruction part as immoral, of course, and rather unfriendly.

I don't think anyone is arguing that there is something in principle that can't be duplicated; the issue I have in most of these scenarios is all rolled up in the word I've highlighted above!
 
You are a copy and the original you has been destroyed over a period of years. If you uploaded yourself cell by cell, memory by memory over a long period of time then what would be the difference?

What am I a copy of? If there is no "of", then to call me a copy is simply changing the definition of the word copy. These scenarios all deal with an original from which copies are made, so you have a copy or copies and the original.

In your scenario above we are still left with an original and a copy, in other words two individuals.
 
Darat - I agree, then. I don't like the whole 'destruction' bit, and wouldn't like it even if I were the survivor.
 
What am I a copy of? If there is no "of", then to call me a copy is simply changing the definition of the word copy. These scenarios all deal with an original from which copies are made, so you have a copy or copies and the original.

In your scenario above we are still left with an original and a copy, in other words two individuals.

Physically you are a copy, as am I - all the cells in your brain have been replaced. In terms of the scenario you describe, it's perfectly possible to imagine the cells in your brain being substituted one by one and replaced by a computer. This could happen over a number of years, and you would not be aware of the process.
 
I never understood the cult of the singularity. I mean, even if mind uploading happens, you do not really "upload" your mind; you just put a copy of its memories, interactions, and personality into a computer system, so that there would be virtually no way to differentiate you from it. Just like the teleportation paradox, it suffers from the fact that you are just creating a copy, and if the original dies, "you" died. The copy, as a separate entity, might live on eternally for as long as the electricity is paid, but the original Aepervius the human would have died.

Therefore I would not see that as immortality, more as a way to produce an immortal identical twin offspring from me.

Mildly interesting, but not that much.


I tend to agree with this, but there is another way to look at how it may be possible, or even the same. It sounds as though you've probably already considered the following, but I'll post it anyway for the benefit of the general discussion, and look forward to your response.

Suppose that, instead of instantly uploading our consciousness into a machine brain, we develop nanoscale technology that slowly converts our biological neural networks, one connection at a time, over a period of, say, several weeks or months, in a manner that reproduces the same functions. Apart from sleeping, we would maintain our continuity of consciousness and wouldn't know the difference. When the transition was complete, could we not then claim to have made the "transition"?

Does the physical construction of the processing unit really have to be made of the same materials in the same configuration, or does it just need to perform the same job as well? Cells live and die all the time. We're not made of exactly the same atoms we had when we were born. We are very different as adults in many ways. My personal feeling on this is that the person who makes a gradual cellular transition would still be the same "person", owning the same "consciousness", retaining the same memories, and possessing ownership of their "selves".

It would not be right or fair, at the point of complete transition, to suddenly declare this conscious being "dead", strip them of their rights, and start reading their will. In every measurable way that matters, I contend this person would in fact be the same "person" as before, with all the rights, privileges, and obligations that accompanied them before the transition to a new brain was complete.

Now this is where the problem gets tricky. In the preceding example we have no copy at the end of the process, and the transition maintains a continuity of consciousness that is indistinguishable from normal biological function. Now suppose we come up with a way to perform that task faster. How fast could the process conceivably be before the retention of self becomes untenable? What if it could happen instantly, and the person doesn't even know it had happened? Then what? Provided there are no "copies", there really isn't any difference here from the first example above.

Where it starts to get murky is when we add the downloading issue into the mix and end up with copies. At the instant the download is complete and the "new you" is turned on, you are no longer a single entity with one mind. What then? Can we say we are still in essence a single entity but with two minds... a "multiprocessor unit" with independent sensory inputs, capable of simultaneous autonomous operation? Would many such copies take on a persona similar to that of a "corporate entity", where all the "yous" are partly responsible for the actions of the collective? Or would the new "yous" be considered nothing more than advanced desktop computers - property you could switch off at will?

Whether we like it or not, if we keep progressing the way we are, these are the kinds of issues we're facing. It's exciting and frightening at the same time. We live in amazing times.

j.r.
 
