

What would it look like? (I imagine similarities with Buddhist enlightenment -- a deeper self-awareness and mastery of thoughts, perhaps.)
We don't know what sentience is. Depending on the definition, it could be that enlightened Buddhist monks, or high hippies, have a higher degree of sentience than us normal people.
 

I think we are on firmer ground defining sentience than consciousness. We already agree that non-humans can be sentient when we use the "ability to experience feelings and sensations" definition. Could we detect it in our AIs by how they react to stimuli - for instance, do they react as if they are aware that they exist? Since we can detect sentience in non-humans, I can't see why we couldn't. Again, I doubt it will be anything like human or other biological sentience - we aren't going to build AIs to receive the stimulus we get when our pain receptors are triggered; it would serve no purpose, and there is no analogue of it in a computer system. (And from people with CIPA - congenital insensitivity to pain with anhidrosis - we know such a system is not necessary even for human sentience, though they do suffer from other problems that we describe as cognitive.) We also know, simply from asking each other about it, that "internal" experience differs from one human to another.

Overall I can't see why we couldn't develop software that has sentience.
 
Feelings and sensations? That's so vague, I'd rather not use the word at all. And I think we don't have to. Is AI sentient or not? Who cares. Do we care whether animals are? Would we do anything differently if we found out 50% of people are not sentient?
Every sci-fi story with a rogue AI talks about the moment it achieved sentience... but isn't that just a sci-fi trope?
Also, as vague as the definition is... I always understood sentience a bit more as having its own agency - being able to disobey orders. But maybe that's also a sci-fi trope.
 
Many years ago there was somebody here who defined sentience as having feedback - that is, self-awareness. His definition made certain measuring instruments sentient (although at a very low level). I liked the definitions he used because they were simple, but they also broadened the definition to include contraptions that most people would think were not sentient.
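To make that broad definition concrete, here is a minimal sketch of the feedback loop it describes - a hypothetical illustration, not code from those old threads:

```python
# Minimal sketch of a feedback device: a thermostat compares a reading
# of the world with its own setpoint and acts on the difference. Under
# the broad definition above, even this trivial loop would count as
# (very low-level) sentience.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False  # internal state fed back into decisions

    def step(self, measured_temp: float) -> bool:
        # Feedback: output depends on comparing external input
        # against the device's own internal reference.
        self.heater_on = measured_temp < self.setpoint
        return self.heater_on

stat = Thermostat(setpoint=20.0)
for temp in [18.5, 19.9, 20.1, 22.0]:
    print(temp, "->", "heat on" if stat.step(temp) else "heat off")
```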

Unfortunately, I can’t remember the thread or the names of the users from that discussion.
 
I remember the same; I'll see if I can find one of the many threads - it may be interesting to re-read given what we have accomplished over the last 10 years or so.

ETA: The device to look for is "thermostat" or "thermometer" in the R&P threads.

As I was searching, I came across an interesting thread that directly discusses the state of chatbots a decade ago: http://www.internationalskeptics.com/forums/showthread.php?t=220815

Another thread that gets going once PixyMisa starts getting serious about feedback: http://www.internationalskeptics.com/forums/showthread.php?t=196412
 
I know what you're saying. I've read it 5 times. I'm asking you a different question! You're not answering the question I am actually asking, you're repeatedly answering a question that I am not asking. If you answer "nothing - there is nothing that makes a nonbiological consciousness impossible" then I will accept that.
How is what I said not "nothing - there is nothing that makes a nonbiological consciousness impossible"?

So-called modern "artificial intelligence" applications arrive at results, and we have no idea how they got there. The inner workings of an AI are a black box. We give them inputs and training data, and we tell them how they should process that data, but we do not understand how they produce results.
Not understanding how a particular calculation was obtained doesn't mean that some independent AI function that wasn't programmed in is at work.

From that link:
Deep neural networks (DNN)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.
Mimic, seem to mirror ... but that is not sentient.
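For a concrete picture of those "layers and layers", here is a toy forward pass - random weights, purely illustrative - showing why the intermediate numbers resist interpretation:

```python
import numpy as np

# Toy "deep" network: stacked matrix multiplications with a nonlinearity.
# Even at this scale the intermediate activations are just arrays of
# numbers with no obvious meaning; scale this up to billions of weights
# learned from data rather than written by hand, and you have the
# "black box" described above.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 2))]

x = rng.standard_normal(4)       # input vector
for w in layers:
    x = np.maximum(0.0, x @ w)   # ReLU(x @ W): one layer of processing
print(x)                         # output: nothing in it explains "how"
```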

I fully expect that when a conscious AI emerges, we will have no idea how it happened. But I'm equally sure that it will bear little resemblance to a biological brain, which is why understanding biological consciousness isn't relevant.
That's a fanciful claim, akin to "God works in mysterious ways; he has a plan"... yada, yada. It sounds like rationalizing whatever means will get you to your assertion of consciousness. By resemblance, if you are talking about wires vs. nerves, of course they aren't the same.

But your claim is that the AI program will become conscious. Is a calculator conscious or sentient? How is an AI program any more than a very complex calculator?

Again from your link, a caution.
“I think it is absolutely critical to start by keeping in mind that what gets called ‘AI’ isn't any kind of autonomous agent, or intelligence, or thinking entity,” Bender said. “These are tools, which can serve specific purposes. As with any other tools, we should be asking: How well do they work? How suited are they to the task at hand? Who are they designed to be used by? And: How can their use reinforce or disrupt systems of oppression?”
 
Selection is us selecting for "better" AI; mutation, or unplanned change, in this case is the self-learning and the data they use.

How long? Probably a few weeks after we start having AIs create new AIs, and give the AIs enough computational time and storage.

If we are talking about meeting the minimal threshold for the classical definition of sentience, i.e. the "ability to experience feelings and sensations", then, as others have said, we are almost at the start of that process.
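The selection/mutation analogy maps directly onto a standard evolutionary loop. A toy sketch, where the fitness function is a made-up stand-in for whatever "better" means:

```python
import random

# Toy evolutionary loop matching the analogy above: "mutation" is random
# unplanned change, "selection" is us keeping the candidates that score
# best. The fitness function is an arbitrary stand-in, not a real
# measure of AI quality.
def fitness(genome):
    return -sum((g - 0.7) ** 2 for g in genome)  # arbitrary target

population = [[random.random() for _ in range(5)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # selection (by us)
    population = [
        [g + random.gauss(0, 0.05) for g in random.choice(survivors)]
        for _ in range(20)                           # mutation (unplanned)
    ]
print(round(fitness(max(population, key=fitness)), 4))  # approaches 0.0
```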
IOW you assert we can just change what consciousness or sentience is.

We certainly have proof that many capabilities we thought required human sentience can be delivered without sentience, such as the outputs of the LLMs.

Which doesn't mean that they are not required for human sentience. I don't think we are currently on track to replicate human sentience - that isn't the goal - but we already know sentience is not limited to humans, so any sentience that may be created is unlikely to be the same as a human's.

My dog is sentient but does not have human sentience; I can't see why, in principle, a future AI could not also be sentient and yet still not have human sentience.
My dogs were sentient (they've both died now). Our sentience evolved. It didn't just appear overnight. All of the things we attribute to 'humans' - emotions, morality, appreciation of beauty and so on - evolved. They are present in other animals.
 
I subscribe to a podcast called "Sean Carroll's Mindscape". A recent episode was "Peter Godfrey-Smith on Sentience and Octopus Minds".

It touches on a lot of what we've been discussing. It's almost an hour and a half long, but I think it's worth a listen.
 
Thanks, yes, you are right, PixyMisa was the one. His definition of consciousness was “self-referential information processing”. However, these two threads were not the ones I was thinking of. In fact I could not find the thermometer/thermostat argument. I’ll look for them myself - later.
 
Isn't that similar to the one we just watched? I think octopi are clearly sentient. But that's a topic for another thread.
 
At what point does "something" gaze back (or listen, or whatever else would be a useful equivalent)?

It's weird that there isn't even a useful test for other humans.
 
How is what I said not "nothing - there is nothing that makes a nonbiological consciousness impossible"?
There - was that so hard?

Not understanding how a particular calculation was obtained doesn't mean that some independent AI function that wasn't programmed in is at work.
Precise calculations are not specifically programmed in. That's the point. What is programmed in does not determine the output - the training data and internal processes, which the programmers do not always understand, determine the output.
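A toy example makes the point: identical training code, fed different data, ends up doing different things - the behaviour lives in the learned numbers, not in the program text. (A hypothetical sketch, not how any production system works.)

```python
# The same training code, fed different data, yields different behaviour;
# nothing in the program text says what the model will answer.
def train(pairs, steps=1000, lr=0.01):
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            w -= lr * (w * x - y) * x   # gradient step on squared error
    return w

w_double = train([(1, 2), (2, 4), (3, 6)])   # data says "double it"
w_triple = train([(1, 3), (2, 6), (3, 9)])   # data says "triple it"
print(w_double * 10, w_triple * 10)          # ~20.0 vs ~30.0 - same code
```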

From that link:
Mimic, seem to mirror ... but that is not sentient.
"Seems to mirror" - in other words, it appears to be doing what we can do, but we don't know precisely how.

That's a fanciful claim, akin to "God works in mysterious ways; he has a plan"... yada, yada. It sounds like rationalizing whatever means will get you to your assertion of consciousness.
*shrug* If you say so. But it is the truth.

By resemblance, if you are talking about wires vs. nerves, of course they aren't the same.
The mechanism by which it occurs is also not the same. It can't possibly be, unless we are programming a direct simulation of a brain down to the level of the molecules in the neurons, which would be a programming and processing nightmare. An AI would achieve sentience via binary values, not via neurotransmitters.
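A rough back-of-envelope supports the "processing nightmare" point. The neuron and synapse counts are standard order-of-magnitude estimates; the per-update costs are loose assumptions, and true molecular detail would multiply the total by many more orders of magnitude:

```python
# Back-of-envelope cost of simulating a brain at fine grain.
# Neuron/synapse counts are standard estimates; the modelling costs
# are assumptions for illustration only.
neurons = 8.6e10            # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4   # ~10,000 synapses each (order of magnitude)
updates_per_second = 1e3    # assume a 1 kHz update rate
flops_per_update = 100      # assumed cost per synapse update

total = neurons * synapses_per_neuron * updates_per_second * flops_per_update
print(f"{total:.1e} FLOP/s")  # ~8.6e19: tens of today's exascale machines,
                              # before adding any molecular-level detail
```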

But your claim is that the AI program will become conscious. Is a calculator conscious or sentient? How is an AI program any more than a very complex calculator?
No. My claim is that there is no reason that an AI program could not someday become sentient. There is no sentient AI today. It is entirely uncertain, in fact, how we would even recognise one if it ever does emerge.

Again from your link, a caution.
This caution regards over-interpretation of modern currently existing AI.
 
IOW you assert we can just change what consciousness or sentience is.

My dogs were sentient (they've both died now). Our sentience evolved. It didn't just appear overnight. All of the things we attribute to 'humans' - emotions, morality, appreciation of beauty and so on - evolved. They are present in other animals.
And so it will also "evolve" in artificial systems. Like biological evolution, AI evolution is, I think, basically inevitable.
 
At what point does "something" gaze back (or listen, or whatever else would be a useful equivalent)?

It's weird that there isn't even a useful test for other humans.
And this, in my opinion, is the biggest problem with AGI. How would we even recognise it as sentient?

I think we will have to start giving AGIs rights, even if we don't know for sure that they are sentient.
 
