
Artificial Sentience. How would it be recognized?

Sentience: 'The ability to feel, perceive, or experience subjectively'.
Which, pretty much by definition, cannot be recognized except in one's self.

I have no way of knowing if people around me are conscious and experience things subjectively, as opposed to sleepwalking. I am fairly sure they are, but there is really no way to know it for certain.
 
Which, pretty much by definition, cannot be recognized except in one's self.

I have no way of knowing if people around me are conscious and experience things subjectively, as opposed to sleepwalking. I am fairly sure they are, but there is really no way to know it for certain.
There is very little we can know for certain, but - as you suggest - we can be fairly sure, or even sure beyond reasonable doubt, if the behaviour exhibited is familiar to us and we can correlate it with behaviours of our own that characterise relevant mental states. This empathic comparison allows us to assess the sentience of other species too - with a confidence roughly related to the level of behavioural similarity or familiarity.

My question is whether an artificial sentience would have sufficiently similar or familiar behavioural responses for us to recognise it as such. We could assert that if it doesn't, it's not sentient, but that would be confusing what sentience means with how we recognise it...
 
In the movie, Nathan uses the entire history of Google searches (yes, every search anyone ever made since Google came into existence) to generate a model of average human personality.
That won't work. This type of modeling has already been tried by Douglas Lenat with CYC.
 
This is not a new question or thought, at least not since Turing asked the question; but it keeps cropping up in movies.


According to the movie I just saw, real AI looks like her because of course it does.
 
Which, pretty much by definition, cannot be recognized except in one's self.
This is not true for living things.

I have no way of knowing if people around me are conscious and experience things subjectively, as opposed to sleepwalking. I am fairly sure they are, but there is really no way to know it for certain.

There are ways to tell.
 
There is very little we can know for certain, but - as you suggest - we can be fairly sure, or even sure beyond reasonable doubt, if the behaviour exhibited is familiar to us and we can correlate it with behaviours of our own that characterise relevant mental states. This empathic comparison allows us to assess the sentience of other species too - with a confidence roughly related to the level of behavioural similarity or familiarity.
And because such a comparison is entirely one-sided, it has nothing at all to do with the subject ostensibly being tested. People already regularly assign emotional states to cars and computers and pet rocks.

This is not true for living things.
There are ways to tell.
I doubt it. Not in any sense that would convince someone who isn't already on board.
 
Without emotion, there are no inhibitory synapses...


http://m.rstb.royalsocietypublishing.org/content/365/1551/2329.full.pdf

"We have known for many years that invertebrate neurons can have both excitatory and inhibitory outputs and in some cases both."

Your definition of emotion must differ drastically from mine for your statement to be true. The simplest nervous systems have both excitatory and inhibitory synapses, while emotional content, I would argue, is restricted primarily to higher vertebrates.
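
To make that concrete, here is a minimal leaky integrate-and-fire sketch in Python (all parameter values are invented for the demo, not taken from any real organism). A single negative synaptic weight gives you inhibition, with no emotional machinery anywhere in sight:

[code]
# Minimal leaky integrate-and-fire neuron with one excitatory and one
# inhibitory synapse. An illustrative sketch only; parameter values are
# arbitrary choices for the demo.

def simulate(exc_spikes, inh_spikes, steps=100):
    v = 0.0                    # membrane potential (arbitrary units)
    threshold = 1.0            # firing threshold
    leak = 0.9                 # passive decay per time step
    w_exc, w_inh = 0.3, -0.4   # synaptic weights: one positive, one negative
    fired = []
    for t in range(steps):
        v *= leak
        if t in exc_spikes:
            v += w_exc         # excitatory input pushes v toward threshold
        if t in inh_spikes:
            v += w_inh         # inhibitory input pushes v away from it
        if v >= threshold:
            fired.append(t)
            v = 0.0            # reset after a spike
    return fired

# Steady excitation alone makes the cell fire; adding inhibition silences it.
print(simulate(exc_spikes=set(range(0, 100, 2)), inh_spikes=set()))
print(simulate(exc_spikes=set(range(0, 100, 2)), inh_spikes=set(range(0, 100, 2))))
[/code]

Run it and the first call fires periodically while the second stays silent: inhibition here is just arithmetic on the membrane potential.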
 
I suspect we would not have a problem recognizing it, in a Turing test sense. What I question, however, is what would motivate such an AI, if anything not imposed by design.



Humans are, after all, motivated by factors that we cannot control, or by the attempt to control factors over which we have poor command.


Will the "Turin" test involve resurrection, and/or a stained fabric? :)

I agree that when we succeed in designing a conscious machine we will recognize its sentience, as long as its design is similar to our own and it is able to communicate with us.

Animals are motivated firstly by biological imperatives: nutrition, reproduction, elimination, and self-preservation. These are based on the demands of evolutionary success. Arguably, all of human behavior can be accounted for by these drives.

There is no reason that an artificial intelligence should be provided with these biological drives.

A very interesting question is whether a "sentient" or "conscious" machine will acquire its own motivations (curiosity?) in the absence of biological imperatives, or whether some drive state will need to be part of its design in order for us to recognize machine consciousness.
 
There is no reason that an artificial intelligence should be provided with these biological drives.
Most likely it will be provided with drives which humans want it to have.

When I switch on my Roomba, it starts traversing the floor. When Roomba runs low on battery, it stops traversing and starts searching for its charging station. How is it different from a hungry animal looking for a food source?
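
To put the analogy in concrete terms, here is a toy sketch in Python (the class, state names, and thresholds are all invented for illustration): behaviour switches when an internal drive variable, battery or stomach, crosses a threshold.

[code]
# Toy rendering of the Roomba analogy: a single internal drive variable
# and a threshold decide which behaviour runs. Names and numbers are
# invented for the example.

class Agent:
    def __init__(self):
        self.energy = 100.0      # battery level, or fullness of stomach
        self.state = "forage"    # traverse the floor / hunt for food

    def step(self):
        self.energy -= 1.0       # every action costs energy
        if self.state == "forage" and self.energy < 20.0:
            self.state = "seek_charger"   # low battery -> look for the dock
        elif self.state == "seek_charger":
            self.energy += 5.0            # pretend the dock/food was found
            if self.energy > 95.0:
                self.state = "forage"     # sated -> back to work

agent = Agent()
for t in range(120):
    before = agent.state
    agent.step()
    if agent.state != before:
        print(f"t={t}: {before} -> {agent.state}")
[/code]

Whether you label the drive "battery" or "hunger" changes nothing in the control loop; the question is whether that is all "hunger" is.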
 
When I switch on my Roomba, it starts traversing the floor. When Roomba runs low on battery, it stops traversing and starts searching for its charging station.
I guess your Roomba is on the level of a very, very simple insect, if even that. But there are living multicellular organisms that are stupider than your Roomba.

How is it different from a hungry animal looking for a food source?
If by "hungry animal" you mean said insect, then there is pretty much no difference in this context. It is quite different, though, from what humans usually picture on reading the phrase "hungry animal looking for a food source" (a wolf in a forest in winter, or something like that).
 
FWIW, from all the research I've read and from my own experience in AI, the idea that emotions are really just motivation and focus filters starts to make quite a bit of sense.

So to answer the emotion question, I think any sentient intelligence ( artificial or otherwise ) will require < something like > emotions. In other words, "emotions" are just a term we use for "constraining the working set < of memories, thoughts, actions, etc >" and any cognitive algorithm requires that feature.

HOWEVER this doesn't imply that any "emotion" of an artificial sentience will have a human correlate, and there are a number of reasons for this:
-- many of our emotions are intrinsically linked with our physical bodies and their biological nature
-- many of our emotions are intrinsically linked with our evolution as social beings
-- many of our emotions are intrinsically linked with our mental limitations
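
As a toy illustration of "constraining the working set" (the state dimensions, weights, and action names are all invented for the example), an internal state vector can re-rank candidate actions and prune everything but the most relevant few:

[code]
# Sketch of "emotion as a focus filter": an internal state vector
# re-weights candidate items and prunes the working set before any
# deliberation happens. Purely illustrative.

CANDIDATES = {
    "flee":        {"threat": 0.9, "novelty": 0.1},
    "investigate": {"threat": 0.2, "novelty": 0.9},
    "rest":        {"threat": 0.0, "novelty": 0.0},
}

def working_set(emotional_state, k=2):
    """Keep only the k candidates best matched to the current state."""
    def score(features):
        return sum(emotional_state.get(dim, 0.0) * w
                   for dim, w in features.items())
    ranked = sorted(CANDIDATES, key=lambda c: score(CANDIDATES[c]), reverse=True)
    return ranked[:k]

print(working_set({"threat": 1.0}))    # fear-like state -> ['flee', 'investigate']
print(working_set({"novelty": 1.0}))   # curiosity-like state -> ['investigate', 'flee']
[/code]

A fear-like state surfaces "flee" first; a curiosity-like state surfaces "investigate". Nothing in the filter has to feel like anything, which is rather the point of the question.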
 
A very interesting question is whether a "sentient" or "conscious" machine will acquire its own motivations (curiosity?) in the absence of biological imperatives, or whether some drive state will need to be part of its design in order for us to recognize machine consciousness.

That raises the question of whether curiosity itself is independent of our own animal instincts. It could be the case that curiosity is a natural extension of our evolution as tool-using creatures that get pleasure from increasing our mental skillsets < because we know it will aid in survival >.
 
That raises the question of whether curiosity itself is independent of our own animal instincts. It could be the case that curiosity is a natural extension of our evolution as tool-using creatures that get pleasure from increasing our mental skillsets < because we know it will aid in survival >.
Human curiosity is, in essence, wondering what would happen if you hit one thing with another. As exemplified by the LHC.
 
I doubt it.
Your doubt is not my concern.

Not in any sense that would convince someone who isn't already on board.
Convincing someone isn't really my concern either. All I need is an accurate proof. Those who doubt can spend their time banging their heads against that proof.
 
I see a balrog capturing two hobbits. The spiky bit in the middle is their hair, the extremities are their feet. Anyone else see that?

Could be a nazgul as well, really.

I saw bigfoot hiding behind a mirror and sticking his leg out to test whether it was his or another bigfoot's leg.

Or it could have been Donald Duck.
 
