Artificial Intelligence and Life Beyond Death

Life without the pleasures of the flesh would be unliveable. We're animals. We wouldn't be ourselves if we lost the Dionysian side of existence. I for one would definitely not care to converse with, much less be, a TragicMonkey who had no sense of humor.
 
If I could transfer all my memories to a sentient system, would it recognize itself as me, or take those as memories of another identity which is not its own?

The thing is, our personal identity is not merely memories but memories associated with this very body. It's not just our brains but our felt sense of our own organs and, even more vitally, our endocrine system.

If we could achieve, say, just the transplanting of one head onto another body, the individual would experience a crisis of identity about that body. Perhaps ze could work through that, since it is a biological body.

But a sentient AI having no endocrine system, no biological drives, no motor neurons, no emotions, would treat those memories as flat data that ze doesn't really associate with hirself.

This raises a related question: can a sentient AI be grown that doesn't have some sort of body interfacing with the environment? Human self-identity has a social component. A personality may not even exist without social interaction.

In the wretched stories I write for my own amusement there is such a thing as a "zombot": an android who has been given the memories and mental data of a prior human body, but can't really relate to it or feel self-identification with it, and so is in something of a crisis over who to be.

Again, the likely outcome is that the zombot would only use the actionable information and, if ze had any felt sense of identity, would see the memories as belonging to another, or as something like a past life that is no longer who ze is.
 
Life without the pleasures of the flesh would be unliveable. We're animals. We wouldn't be ourselves if we lost the Dionysian side of existence. I for one would definitely not care to converse with, much less be, a TragicMonkey who had no sense of humor.

That would be a tragic loss!
 
I'm still not getting it. What do people think will happen?

What I think will happen is that you'll download your consciousness into a computer, then you'll walk away, age and die.

Somewhere there will be a machine that thinks it's you. You won't be aware of this, you'll just be getting older and then die.


I think I must be missing something?

You aren't missing a damned thing.
People are unconsciously assuming some soul-like quality or consciousness that takes up a new residence.
 
I recall that Fred Pohl explored what such an existence might be like in his “Heechee” novels. Conversation with a living human would be boring because the human would be on “brain time”, while the person who was now a machine intelligence would be “thinking” ever so much faster.

Another zombot obstacle. Another reason why the sentient AI with the memories of a once living person would not be that person.
 
My ideal is that, very imperfectly, a program can create a me-in-the-machine that I could then task to make decisions for me when I can't be bothered. It could also undergo all kinds of tests for which I don't have the time or imagination, to see what I'm good at or suck at, and warn me when I'm about to make a decision I'll probably regret.
This sounds disturbingly like being a quadriplegic, confined to an iron lung, with even your thought life - the only life you have left - dedicated to the convenience of someone else.

I'd not volunteer for such slavery, even if the beneficiary were someone very much like me. Like me, except for their belief that such an arrangement would be equitable and fair.
 
Hell, there's a better argument to be made that we aren't even the selves we think we are than that some other thing could claim to be us!
 
This sounds disturbingly like being a quadriplegic, confined to an iron lung, with even your thought life - the only life you have left - dedicated to the convenience of someone else.

I'd not volunteer for such slavery, even if the beneficiary were someone very much like me. Like me, except for their belief that such an arrangement would be equitable and fair.

you presume that such a program would be self-aware.
But are algorithms that can predict what else you want to watch/listen to self-aware? And yet they can be very accurate.
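
To make that concrete, here's a toy item-based recommender, a minimal Python sketch; the users, titles, and ratings are all made up for illustration. It predicts tastes purely by arithmetic on past ratings, and there is nothing anywhere in it that could plausibly count as self-awareness.

    import math

    # Made-up ratings, just to illustrate the idea: user -> {title: rating}
    ratings = {
        "alice": {"Gateway": 5, "Blindsight": 4, "Dune": 2},
        "bob":   {"Gateway": 4, "Blindsight": 5, "Hyperion": 4},
        "carol": {"Dune": 5, "Hyperion": 2, "Blindsight": 1},
    }

    def similarity(a, b):
        # Cosine similarity between two titles, over the users who rated both.
        both = [u for u in ratings if a in ratings[u] and b in ratings[u]]
        if not both:
            return 0.0
        dot = sum(ratings[u][a] * ratings[u][b] for u in both)
        na = math.sqrt(sum(ratings[u][a] ** 2 for u in both))
        nb = math.sqrt(sum(ratings[u][b] ** 2 for u in both))
        return dot / (na * nb)

    def recommend(user):
        # Score unseen titles by their similarity to what the user already rated.
        seen = ratings[user]
        titles = {t for r in ratings.values() for t in r}
        scores = {t: sum(similarity(t, s) * r for s, r in seen.items())
                  for t in titles if t not in seen}
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("alice"))   # ['Hyperion'] on this toy data

Whether something like that is accurate tells you nothing about whether anyone is home.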
 
you presume that such a program would be self-aware.

But are algorithms that can predict what else you want to watch/listen to self-aware? And yet they can be very accurate.
I presume that any sufficiently detailed and accurate model of your thought processes would have to be self aware.

Everything about what you think and how you think is embedded in a feedback loop with your own self awareness. Remove that loop, will you get the same results? Or even useful results?
 
I presume that any sufficiently detailed and accurate model of your thought processes would have to be self aware.

Yes, which raises interesting questions.

If you can replicate the human mind, complete with self awareness, on a computer then you can do exactly the same thing, just taking lots and lots longer, with pen and paper, and maybe a calculator.

Would hand written calculations be self aware?
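
To put a very rough number on "lots and lots longer", here's a back-of-envelope sketch in Python. Every figure in it is a loose assumption chosen just to get the scale, not a measurement.

    # Back-of-envelope: how long might one simulated second of a brain take by hand?
    neurons = 1e11             # assumed neuron count
    synapses_per_neuron = 1e3  # assumed average connections per neuron
    updates_per_second = 10    # assumed average firing/update rate, in Hz
    seconds_per_hand_step = 5  # assumed time for one pen-and-paper multiply-add

    ops = neurons * synapses_per_neuron * updates_per_second
    years = ops * seconds_per_hand_step / (3600 * 24 * 365)
    print(f"{ops:.0e} hand calculations, roughly {years:.0e} years per simulated second")

On those assumptions it comes out around a hundred million years of scribbling per simulated second, which makes the question of whether the stack of paper is self aware even stranger, but doesn't obviously answer it.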
 
This is just a different version of the Transporter thought experiment. I can't recall, but there is a thread somewhere that was talking about it last week (which makes it likely it is still going). If we do it right, it can also get into ship of Theseus territory.

I am pretty confident that I would not choose to end this me, even though the computer will think it is me.
 
I presume that any sufficiently detailed and accurate model of your thought processes would have to be self aware.

Everything about what you think and how you think is embedded in a feedback loop with your own self awareness. Remove that loop, will you get the same results? Or even useful results?

Would there be limits to a me-mimic that isn't self-aware?
Sure.
For example, it might have problems learning new things or motivating itself.

But for most daily activities one doesn't have to think much about, I doubt an outside observer could tell the difference between a full human mind and a "zombie".
 
This is just a different version of the Transporter thought experiment. I can't recall, but there is a thread somewhere that was talking about it last week (which makes it likely it is still going). If we do it right, it can also get into ship of Theseus territory.

I am pretty confident that I would not choose to end this me, even though the computer will think it is me.

Yes. I don't believe there's any way to get this 'me' out of me and put it someplace else. As Apathia says, that assumes there's some sort of soul to extract.

I'm just a series of processes.
 
Yes, which raises interesting questions.

If you can replicate the human mind, complete with self awareness, on a computer then you can do exactly the same thing, just taking lots and lots longer, with pen and paper, and maybe a calculator.

Would hand written calculations be self aware?

Why are you asking a question that you just finished answering in the affirmative? Am I missing something?
 
Would there be limits to a me-mimic that isn't self-aware?
Sure.
For example, it might have problems learning new things or motivating itself.

But for most daily activities one doesn't have to think much about, I doubt an outside observer could tell the difference between a full human mind and a "zombie".

One probably doesn't need much of a cognitive model to do daily activities that don't require much thought. But you're not talking about such simple brains. You're talking about a model of your thought processes that can "make decisions for me when I can't be bothered" and "could also undergo all kinds of tests for which I don't have the time or imagination, to see what I'm good at or suck at, and warn me when I'm about to make a decision I'll probably regret."

This is a model of your mind that is complex enough to handle important decisions the way you would handle them yourself, and would represent your actual beliefs and preferences and strengths with such detail and accuracy as to improve your quality of life by thinking on your behalf as if it were you.

Alexa can remind you to go to the grocery store. Only you can figure out, through life experience, that you'd rather be a painter than a welder, or that it's probably best if you attend the family reunion this year even though you have good reasons not to.
 
There's one thought experiment along these lines that I find compelling.

It requires a couple of assumptions (but then again, all thought experiments along these lines do).

The main assumption in this case is that there is some very small unit of your brain that can be replaced with an artificial analog. Call it an artificial neuron.

Imagine that a tiny bit of your brain can be replaced with an artificial part. Let's say we replace 1 out of our 100 billion neurons with a tiny nanocomputer. Let's say it's been precisely engineered to interact with its fellow neurons in a way that's functionally indistinguishable from the original and preserves whatever state of the original is relevant for memories, meaningful levels of info, etc. (I know, what a meaningful level of info is would be its own question).

I think few people would say you're no longer 'you' or that you've lost your consciousness or continuous experience after such an operation.

What's the number of neurons we could replace in one go that would preserve the "you"? Just that one out of 100 billion? How about two? Ten? A million or so? Wherever you draw the line, as long as it's not zero, let's say we replace that number and let you heal up.

If you're still 100% you, 100% continuous experience, now with .01% artificial neurons, who's to say we can't have another operation changing out more neurons as soon as you've healed from this one? Forget for the moment practical concerns of healing time or whatever; that's not what this thought experiment is testing. Can we do another .01%? Is there some cumulative effect by which enough changeovers that individually preserve the self somehow lose it when you've had too many?

Many (most?) of us might say that removing the brain entirely in one go and plopping in an artificial copy made of 100% artificial neurons would be more or less a death. But it's a bit harder to see why that would be so if the replacement were bit by bit and the continuity of our experience were the same as it is around normal surgical anaesthesia or sleep.
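
For what it's worth, the arithmetic of the scenario is easy to write down, using the numbers from the post itself (100 billion neurons, 0.01% per operation):

    # Numbers taken from the scenario above: 100 billion neurons, 0.01% per surgery.
    total_neurons = 100_000_000_000
    neurons_per_op = total_neurons // 10_000   # 0.01% = 10 million neurons per operation

    replaced = 0
    operations = 0
    while replaced < total_neurons:
        replaced += neurons_per_op   # one "identity-preserving" surgery at a time
        operations += 1

    print(operations)   # 10000 operations until the brain is 100% artificial

Writing it as a loop rather than a single division is the point: no individual iteration looks like the step where "you" stops being you, which is exactly the sorites shape of the puzzle.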
 
Why are you asking a question that you just finished answering in the affirmative? Am I missing something?


Because I'm unsure of my answer. I really have no clue, I'm just guessing.

It would seem ludicrous that a pen and paper and some calculations could be self aware. However, that's where the logic leads, I think.

I may have it wrong, I was canvassing opinion.
 
There's one thought experiment along these lines that I find compelling.

It requires a couple of assumptions (but then again, all thought experiments along these lines do).

The main assumption in this case is that there is some very small unit of your brain that can be replaced with an artificial analog. Call it an artificial neuron.

Imagine that a tiny bit of your brain can be replaced with an artificial part. Let's say we replace 1 out of our 100 billion neurons with a tiny nanocomputer. Let's say it's been precisely engineered to interact with its fellow neurons in a way that's functionally indistinguishable from the original and preserves whatever state of the original is relevant for memories, meaningful levels of info, etc. (I know, what a meaningful level of info is would be its own question).

I think few people would say you're no longer 'you' or that you've lost your consciousness or continuous experience after such an operation.

What's the number of neurons we could replace in one go that would preserve the "you"? Just that one out of 100 billion? How about two? Ten? A million or so? Wherever you draw the line, as long as it's not zero, let's say we replace that number and let you heal up.

If you're still 100% you, 100% continuous experience, now with .01% artificial neurons, who's to say we can't have another operation changing out more neurons as soon as you've healed from this one? Forget for the moment practical concerns of healing time or whatever; that's not what this thought experiment is testing. Can we do another .01%? Is there some cumulative effect by which enough changeovers that individually preserve the self somehow lose it when you've had too many?

Many (most?) of us might say that removing the brain entirely in one go and plopping in an artificial copy made of 100% artificial neurons would be more or less a death. But it's a bit harder to see why that would be so if the replacement were bit by bit and the continuity of our experience were the same as it is around normal surgical anaesthesia or sleep.
Ship of Theseus
 
Is it possible we may be able to transfer this sense of "I Am" to another material receptacle? This could be another being or perhaps a computer.

You been reading my website again, mate?

I proposed a scheme back in the late '90s in which your DNA would be preserved, along with a download of your brain (when that technology exists), and then sent into deep space.

I still see no reason why the technology would be impossible, and I'm not even sure you need to download the brain. If determinism works - which I think it does - the exact same DNA would produce the same consciousness.

The difficulty is mitochondrial DNA.
 
