dlorde · Philosopher · Joined Apr 20, 2007 · Messages: 6,864
> What's "sentience?"

Sentience: 'The ability to feel, perceive, or experience subjectively'.
> Sentience: 'The ability to feel, perceive, or experience subjectively'.

Which, pretty much by definition, cannot be recognized except in one's self.
> Which, pretty much by definition, cannot be recognized except in one's self.

There is very little we can know for certain, but - as you suggest - we can be fairly sure, or even sure beyond reasonable doubt, if the behaviour exhibited is familiar to us and we can correlate it with behaviours of our own that characterise relevant mental states. This empathic comparison allows us to assess the sentience of other species too - with a confidence roughly related to the level of behavioural similarity or familiarity.
I have no way of knowing if people around me are conscious and experience things subjectively, as opposed to sleepwalking. I am fairly sure they are, but there is really no way to know it for certain.
> In the movie, Nathan uses the entire history of Google searches (yes, every search anyone ever made since Google came into existence) to generate a model of average human personality.

That won't work. This type of modeling has already been tried by Douglas Lenat with CYC.
> What I question, however, is what would motivate such an AI, if anything not imposed by design.

This is not a new question or thought, at least not since Turing asked it; but it keeps cropping up in movies.
> Which, pretty much by definition, cannot be recognized except in one's self.

This is not true for living things.
> I have no way of knowing if people around me are conscious and experience things subjectively, as opposed to sleepwalking. I am fairly sure they are, but there is really no way to know it for certain.

> There is very little we can know for certain, but - as you suggest - we can be fairly sure, or even sure beyond reasonable doubt, if the behaviour exhibited is familiar to us and we can correlate it with behaviours of our own that characterise relevant mental states. This empathic comparison allows us to assess the sentience of other species too - with a confidence roughly related to the level of behavioural similarity or familiarity.

And because such a comparison is entirely one-sided, it has nothing at all to do with the subject ostensibly being tested. People already regularly assign emotional states to cars and computers and pet rocks.
> This is not true for living things.

I doubt it. Not in any sense that would convince someone who isn't already on board.
There are ways to tell.
Without emotion, there are no inhibitory synapses...
I suspect we would not have a problem recognizing it, in a Turing test sense. What I question, however, is what would motivate such an AI, if anything not imposed by design.
Humans are, after all, motivated by factors that we cannot control, or by the attempt to control factors over which we have poor command.
> There is no reason that an artificial intelligence should be provided with these biological drives.

Most likely it will be provided with drives which humans want it to have.
> When I switch on my Roomba, it starts traversing the floor. When Roomba runs low on battery, it stops traversing and starts searching for its charging station.

I guess your Roomba is on the level of a very, very simple insect, if even that. But there are living multicellular organisms that are stupider than your Roomba.
If by "hungry animal" you mean said insect, then pretty much no difference in this context. It is quite different from anything that humans usually think seeing sentence "hungry animal looking for a food source" (wolf in forest in winter or something), though.How is it different from a hungry animal looking for a food source?
A very interesting question is whether a "sentient" or "conscious" machine will acquire its own motivations (curiosity?) in the absence of biological imperatives, or whether some drive state will need to be part of its design in order for us to recognize machine consciousness.
> There are ways to tell.

What Mark6 is describing is solipsism, and it really is impregnable.
> That begs the question of whether curiosity itself is independent of our own animal instincts. It could be the case that curiosity is a natural extension of our evolution as tool-using creatures that get pleasure from increasing our mental skillsets (because we know it will aid in survival).

Human curiosity is, in essence, wondering what would happen if you hit one thing with another. As exemplified by the LHC.
> What Mark6 is describing is solipsism, and it really is impregnable.

It used to be.
> I doubt it.

Your doubt is not my concern.

> Not in any sense that would convince someone who isn't already on board.

Convincing someone isn't really my concern either. All I need is an accurate proof. Those who doubt can spend their time banging their heads against that proof.
I see a balrog capturing two hobbits. The spiky bit in the middle is their hair, the extremities are their feet. Anyone else see that?
Could be a nazgul as well, really.