• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

ChatGPT

You've got that arse about tit: it's you that is trying to change the definition of sentience so that AIs can never be considered sentient.
Good grief. Whatever.

If this is all about talking past each other then what a waste of time.
 
I knew I would have to explain what I meant, sorry.

It dawned on me last night that skeptics are making the same mistake we recognize when god believers make it: if they can't explain it, then God did it.

If people can't explain how an AI program is coming up with the answers it does, that doesn't mean it's on the verge of sentience.
People may disagree with my analogy.

And again - no one has argued that - you are arguing about something only you have posted, not the arguments actual folk in this thread have made.
 
My takeaway from this interesting thread seems currently to be:

- Sentience is not objectively defined, at least not yet, and may not be possible to accurately define.

- Sentience is not binary; there seems to be a continuous range of sentience, from the vaguely sentient reactions of a shoal of fish in the presence of a predator to that of humans, and possibly beyond.

- While we currently know of no non-biological sentient entities, we cannot rule out that future artificial constructions could be sentient.

- There is no reason to assume that an artificial sentient entity must behave like a biological entity, i.e. a smart computer may not behave like a human, and as such might elude any current tests (such as the Turing test).

Hans
Exactly! Well summed up.
 
In other news,

ChatGPT ban in Australia’s public schools likely to be overturned

Government reveals a draft framework has been formulated for how ChatGPT rollout will work in schools


The ban on public school students using artificial intelligence tools such as ChatGPT may be reversed next year, the federal education minister says, but students will probably face changes in how they are tested and graded.

On Sunday, federal education minister Jason Clare said state and territory ministers have agreed on a draft framework for teachers on how the technology should be used in schools.

It has not yet been publicly released ahead of consultation with schools and teachers, but recommends an overhaul of assessments to prevent students using such tools to “bluff the system”, Clare said.
 
Folk aren't claiming that AI would or will have the same sentience as a human does, so whether we know how it occurs in humans is irrelevant to how it might occur in AIs.

How is claiming, "well it won't be sentient like humans are," not changing the definition to get the answer one wants?

And have all of you who think, 'AI is just some undefined number of changes away but we're on an inevitable path', come to a reasonable consensus among yourselves?


What does it say if it doesn't matter whether one knows how it works? To me it says that if you don't know how it works, you can't distinguish it from being no more than a mimic.

Does this AI discover self like looking in a mirror and recognizing self (say if installed in a robot) or would it be self-aware of whatever kind of hardware it existed in besides just repeating what's on the computer about the OS?
 
Yes. Was that not an appropriate interpretation of your double negative?

You oversimplify and post false dichotomies of so many things. Have you ever considered the bigger picture that you might be missing in this discussion?

It was an indicator people were using, not a 'cause'.
 
You oversimplify and post false dichotomies of so many things. Have you ever considered the bigger picture that you might be missing in this discussion?

It was an indicator people were using, not a 'cause'.
It isn't even an indicator. That we do not understand how it produces its outputs is not an indicator that it is sentient. We do not understand how ChatGPT produces its outputs, and it is not sentient.

That we do not understand how it produces its outputs is a reason why we won't understand exactly how an AI becomes sentient, but it is neither a cause nor an indicator of sentience.
 
....
That we do not understand how it produces its outputs is a reason why we won't understand exactly how an AI becomes sentient, but it is neither a cause nor an indicator of sentience.
my bold

Look at the context. Obviously it's not an indicator sentience has been obtained; as you say, it hasn't been obtained yet. (Yay, we agree there!)

But what is it then, an excuse? An indicator sentience in an inanimate AI program won't be recognizable?
 
my bold

Look at the context. Obviously it's not an indicator sentience has been obtained; as you say, it hasn't been obtained yet. (Yay, we agree there!)

But what is it then, an excuse? An indicator sentience in an inanimate AI program won't be recognizable?
You bolded it. It's a reason why we won't understand exactly how an AI becomes sentient.

Recognising the sentience of an AI is an entirely different problem. That we can't closely examine the mechanism is an additional difficulty to that problem, but it is not the basis for it.
 
It isn't even an indicator. That we do not understand how it produces its outputs is not an indicator that it is sentient. We do not understand how ChatGPT produces its outputs, and it is not sentient.

That we do not understand how it produces its outputs is a reason why we won't understand exactly how an AI becomes sentient, but it is neither a cause nor an indicator of sentience.

I don't think our current ability to understand how AI output is made is in any way an indicator of sentience. We are working hard to learn how the human brain produces its output; if we succeed, are we then not sentient anymore?

Also, from the descriptions I have read about how ChatGPT and similar AI engines are constructed, it is not the case that we don't understand how it produces output, but that we are not able to predict the output because the process is so complex that we would need an even more complex process to predict it.

Hans
 
Bold is mine.

Say you come across an AI program, one with mobility of some kind. Given that all the data fed into said AI program came from the programmers, where does this AI program find one or more databases to explore on its own? How does it manifest any independence beyond the databases made available to it?

The data fed into the AI program ultimately comes from the outside world, same as it does for you or me. And it's the immense quantity of that data that makes it so difficult, if not impossible, for experts to understand exactly what's going on inside the AI once it has learned it.

As for independence, you could ask the same about humans. AIs appear to be using many of the same methods of constructing complex models of everything as we do, as a way of understanding, and then they're able to make use of and build off of those models. And importantly, they can change those models as they learn new things.

It would not only be subjective, it wouldn't be independent thought no matter how much it wows people interacting with it.

Not sure what you mean here. I was not saying that AIs were subjective. I was saying that we humans are being subjective when we empathize with another human, or our cat, or possibly an AI. And that that empathy correlates well with most people's claims of sentience for those beings or agents.

But I myself can't empathize with any AIs, so, *if we're using this correlation*, I would say AIs are not sentient. Some other people may empathize, so for them, those AIs are sentient.

My reason for not empathizing with AIs is simply because they don't have the fragile single-threaded existence that we humans/animals all have. All AIs, as far as I know, can have their state saved and restored at will, and can have many separate threads of experience (sessions in ChatGPT terminology), each of which can be paused and restarted at any point, etc. It's not that they can't feel, it's that it doesn't have the consequences that it does for us.
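As a rough sketch of that point (the session structure and field names below are purely illustrative, not how any particular system actually stores its sessions), an AI's "thread of experience" can be treated as plain data that is checkpointed, forked and restored at will:

import copy
import json

# Hypothetical session state: the whole "thread of experience" is just data.
session = {"history": ["Hello", "Hi, how can I help?"], "model": "some-llm"}

checkpoint = json.dumps(session)           # pause: serialize the state
fork = copy.deepcopy(session)              # a second, independent thread
fork["history"].append("What is 2 + 2?")   # it diverges without touching the original

restored = json.loads(checkpoint)          # restart exactly where it was paused
assert restored == session                 # nothing was lost in between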

Our human ethics and laws would have to be greatly modified to include AIs.
 
How is claiming, "well it won't be sentient like humans are," not changing the definition to get the answer one wants?

We cannot know how it will be, but we may well assume that it will be different, since the process that created it is different.

And have all of you who think, 'AI is just some undefined number of changes away but we're on an inevitable path', come to a reasonable consensus among yourselves?

I think not, but considering the progress of complexity in computer capability, it is wise to assume that it will happen. Wise because we might want to prepare for the event in various ways. Such as, what rights will we grant to sentient machines, and what limitations might we want to impose on them?

What does it say if it doesn't matter whether one knows how it works? To me it says that if you don't know how it works, you can't distinguish it from being no more than a mimic.

It does not matter if we know how it works or not.

The reason we may have a hard time distinguishing it from mimicry is that we have a hard time defining even human sentience.

Does this AI discover self like looking in a mirror and recognizing self (say if installed in a robot) or would it be self-aware of whatever kind of hardware it existed in besides just repeating what's on the computer about the OS?

Apart from my personal opinion that the "mirror test" is worthless, this is a good question: How does the AI know if it is sentient?

Perhaps we should start by finding out, how do WE know we are sentient?

Hans
 
How is claiming, "well it won't be sentient like humans are," not changing the definition to get the answer one wants?
Which definition of sentience is being changed to get the wanted answer? I am not aware that we do have a definition of sentience at all. You seem to want to limit sentience to get the result you want.

And have all of you who think, 'AI is just some undefined number of changes away but we're on an inevitable path', come to a reasonable consensus among yourselves?
What we see is something that is increasingly indistinguishable from sentience. It may not be sentience, and it may not inevitably become sentience, but it does not seem far-fetched to think that it may actually end up being sentience.

What does it say if it doesn't matter whether one knows how it works? To me it says that if you don't know how it works, you can't distinguish it from being no more than a mimic.
If the mimic is so well done that nobody can distinguish it from sentience, why is it not sentience? It reminds me of that silly thought experiment of the Chinese Room, where a person who does not understand Chinese sits inside a room and is handed questions written in Chinese. The person then follows instructions in huge rule books to compose something that to him or her is pure nonsense, but what the Chinese speakers outside the room receive are perfect answers in Chinese. The thought experiment was meant to show that the Chinese Room does not understand Chinese - you could say that it "mimics" Chinese - but actually it shows that although the person inside the room has no idea of Chinese, the room itself, with person and rule books, as seen from outside, actually understands Chinese.
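A minimal sketch of the room's mechanics (the rule book here is a toy lookup table, and the phrases are illustrative stand-ins, not anything from Searle's original):

# A toy "Chinese Room": the operator blindly matches each incoming
# question against a rule book and copies out the listed reply,
# understanding none of the symbols involved.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "现在几点了？": "现在是中午。",    # "What time is it?" -> "It is noon."
}

def operate_room(question: str) -> str:
    # The operator only pattern-matches; whatever "understanding" there is
    # belongs to the room as a whole (operator plus rule books).
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(operate_room("你好吗？"))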

Does this AI discover self like looking in a mirror and recognizing self (say if installed in a robot) or would it be self-aware of whatever kind of hardware it existed in besides just repeating what's on the computer about the OS?
Are these your prerequisites for sentience? I may agree, but the sarcastic framing makes me think you put this up because you believe this is another hurdle that AI cannot pass.

I am reminded of the old idea that ant hills may be sentient. If so, it is certainly something that is hard for us to recognise, but should we rule it out solely because they are in no way sentient like humans are?
 
The data fed into the AI program ultimately comes from the outside world, same as it does for you or me. And it's the immense quantity of that data that makes it so difficult, if not impossible, for experts to understand exactly what's going on inside the AI once it has learned it.

As for independence, you could ask the same about humans. AIs appear to be using many of the same methods of constructing complex models of everything as we do, as a way of understanding, and then they're able to make use of and build off of those models. And importantly, they can change those models as they learn new things.



Not sure what you mean here. I was not saying that AIs were subjective. I was saying that we humans are being subjective when we empathize with another human, or our cat, or possibly an AI. And that that empathy correlates well with most people's claims of sentience for those beings or agents.

But I myself can't empathize with any AIs, so, *if we're using this correlation*, I would say AIs are not sentient. Some other people may empathize, so for them, those AIs are sentient. My reason for not empathizing with AIs is simply because they don't have the fragile single-threaded existence that we humans/animals all have. All AIs, as far as I know, can have their state saved and restored at will, and can have many separate threads of experience (sessions in ChatGPT terminology), each of which can be paused and restarted at any point, etc. It's not that they can't feel, it's that it doesn't have the consequences that it does for us. Our human ethics and laws would have to be greatly modified to include AIs.

Good points. And this highlights the process. If sentience is a process in a sufficiently complex system, then the 'individual' is not the hardware, but that process and its history. Which probably applies to humans as well: if you were to erase a person's memory, would they be the same individual? Or if you copy a human, would the copy be the same person? Old riddles that we have discussed at length here.

Hans
 
How is claiming, "well it won't be sentient like humans are," not changing the definition to get the answer one wants?

And have all of you who think, 'AI is just some undefined number of changes away but we're on an inevitable path', come to a reasonable consensus among yourselves?


What does it say if it doesn't matter whether one knows how it works? To me it says that if you don't know how it works, you can't distinguish it from being no more than a mimic.

Does this AI discover self like looking in a mirror and recognizing self (say if installed in a robot) or would it be self-aware of whatever kind of hardware it existed in besides just repeating what's on the computer about the OS?

Are dogs sentient in the same way humans are?
 
