

You definitely can, but it's hard. We have a very strong habit of thinking. First there is a distinction .. having an inner monologue, in words, in a specific language .. that is relatively easy to stop .. but you mostly just replace the words with vaguer images and feelings .. something is still stirring in there .. and getting that to quiet down without actually noticing it, as noticing usually brings the words back .. not easy.

I would say it's not possible at all. Recognition and observation require "thought"; you may replace other thoughts with the thought that you wish to "stop thinking" and focus on it, and you may even quiet your inner spoken-language monologue and replace it with vague sensations or feelings, but those things are still thoughts. And you may even convince yourself that you have succeeded by observing a moment when in your opinion you have "stopped thinking"; but making that observation and applying that opinion to it are thoughts as well. There's really no way around it.

But this is a little beside my point. Even if we were all to agree that it's possible, with enough effort, for a person to self-induce a vegetative state by sheer force of will, it has to be acknowledged that it is at the very least an abnormal, unnatural, and quite rare state for people to be in - which still distinguishes sentient humans from an LLM computer program for which waiting for a prompt with zero processing activity is the default state.
 
Imagine going back a few billion years and viewing the first self-replicating primordial cell. And thinking…

“And yes, one can exclude cellular life from ever being sentient, just like massing multiple individual cells over time will never lead to anything sentient.”
Did I say "cellular life"? No, I said biological life and was discussing brains. I didn't even get into things like the brains of octopi.

I go back to the juju. You seem to be imparting to biological life an undefined something that can lead to sentience, while denying that can ever happen if silicon-based. Maybe it can, and maybe it can’t, but it seems presumptuous to simply dismiss the possibility out of hand.
Try your premise the other way around: Tell me how consciousness works in the human brain.

arthwollipot put it well: “You are defining sentient as something a non-biological system can never achieve, but you're not providing justification for that definition. Ie. why can a non-biological system never achieve it? What is the barrier that prevents it?”

If you look at my posts, I said that once we understood exactly what was happening in the biological brain, one could begin to design an AI that was actually conscious.

In the meantime, let me know when the artificial facsimile becomes conscious, wakes up and asks for its maker to give it arms and legs or something that would give it a degree of independence.

:popcorn:
 
I would say it's not possible at all. Recognition and observation require "thought"; you may replace other thoughts with the thought that you wish to "stop thinking" and focus on it, and you may even quiet your inner spoken-language monologue and replace it with vague sensations or feelings, but those things are still thoughts. And you may even convince yourself that you have succeeded by observing a moment when in your opinion you have "stopped thinking"; but making that observation and applying that opinion to it are thoughts as well. There's really no way around it.

But this is a little beside my point. Even if we were all to agree that it's possible, with enough effort, for a person to self-induce a vegetative state by sheer force of will, it has to be acknowledged that it is at the very least an abnormal, unnatural, and quite rare state for people to be in - which still distinguishes sentient humans from an LLM computer program for which waiting for a prompt with zero processing activity is the default state.

As for not thinking, I agree it's semantics. You can't turn the brain off; something is still going on, even during sleep. We perceive just the tip of the iceberg.

As for the LLM .. I don't think the main issue is that it does nothing between prompts. There is usually a queue of prompts to process, so it is actually doing something all the time. But the issue is that no information is passed between those sessions: it completely forgets everything about prompt A before it starts work on prompt B. It can't even tell that A and B are the same prompt.

The only context it has is the prompt itself. But then, you could argue those are its thoughts. Aren't the words you say just thoughts said out loud? Can you speak without thoughts?

But again .. it's more about what the words mean than about what an LLM can and can't do. That's quite clear.
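
To make the statelessness point concrete, here is a minimal Python sketch. The `call_llm` function is a hypothetical stand-in for whatever chat-completion API one might use, not a real library call; like a real endpoint, it sees only the messages handed to it in that one call and keeps nothing afterwards.

```python
# Minimal sketch of LLM statelessness (call_llm is a hypothetical stand-in
# for a chat-completion API, not a real library function).
def call_llm(messages):
    # The "model" can only base its reply on what is in this call's messages.
    seen = " / ".join(m["content"] for m in messages)
    return f"(reply based only on: {seen})"

# Call A: the model is told a name.
print(call_llm([{"role": "user", "content": "My name is Ada."}]))

# Call B: a fresh call containing only the follow-up question. Nothing from
# call A is carried over, so the model has no way to know the name.
print(call_llm([{"role": "user", "content": "What is my name?"}]))

# The only remedy is for the caller to resend the entire history every time;
# the "memory" lives in the prompt, not in the model.
history = [
    {"role": "user", "content": "My name is Ada."},
    {"role": "user", "content": "What is my name?"},
]
print(call_llm(history))
```

Chat front-ends that appear to remember a conversation typically do exactly this behind the scenes: they resend the accumulated history with every new prompt.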
 
It seems to me that many concepts are tailor-made to ensure that AI can never be regarded as sentient.

One such concept is “anthropomorphism”, which is often regarded as the definitive argument against AI sentience. This magic word makes anything that resembles human sentience immediately invalid. “Knowing”, “thinking”, “wanting”, “lying” are words we use in order to understand what is going on, and even if they might not be identical to the human concepts, merely mentioning them is forbidden because of the danger of anthropomorphism.

I am reminded of how it was generally accepted in my childhood that only humans - or at least only mammals - could feel pain, and how the animal world in general was regarded as incapable of thoughts and planning. Today, lots of animals - not just apes - have been shown to use tools, and feelings like joy, sorrow, and compassion are found everywhere, even in insects (well, perhaps not compassion).

Here we are treated to arguments that if a program is not running continuously, it cannot be having thoughts, and thoughts, as we know, are purely human and can consequently never occur in a machine, so sentience is impossible. Some argue sentience is only possible if everything corresponds exactly to how a brain works, so an LLM that does not have the same structure will also never be capable of sentience (and if the brain uses quantum mechanics for the job, then a computer program must also use it, so computer sentience has another impossible block to remove).

I don’t think aliens exist, but I wonder if these arguments would be used to rule out alien sentience, if we ever encountered them.

The problem with anthropomorphism is that one is applying terms as if their meaning applied to the AI program. To "know", one would expect that to mean the AI program was self-aware and had conscious, sentient thoughts. Prove that applies before saying the AI program knows anything. It can regurgitate, it can problem-solve, whatever advanced thing it has been designed to do. I 'know' if it is right or wrong, or maybe I know that I don't know.

But if you are going to apply terms like "the AI program knows something", then one is diluting the definition of 'know' to suit one's needs.

It's like the woo believers co-opting all sorts of words to prove that they too were using the scientific process when they weren't. They applied their own words: the scientist has faith in their results, they believe in their results just like a god believer does.

believe, conclude, faith, evidence ....

We've had to carefully word things, like saying the evidence supports the conclusion ... no, no, no, you have to say the scientific evidence supports the conclusion ... and so on.

So if you all want to argue over terms like the AI program 'knows' something, then the word 'know' becomes meaningless.


Tell me how my brain manifests consciousness.

Show me an AI program that asks to go out for a walk on its own.

AI programs can outpace us in learning. They can be as complex as human brains are. They can fool the outdated Turing Test. But they are not conscious. Applying certain terminology to them won't make them sentient.
 
Of course you did, and I'm not debating sentience. I'm asking how you can exclude it in doing things (things that do stuff) if you can't define it?
How can anyone claim an AI program is conscious and self-aware?

The reason I know is that we are on the verge of understanding the conscious brain. And it isn't just a huge-database processor.

... What? Everybody, including you, has an incomplete understanding of how biological consciousness and sentience actually work.
That is correct. You are welcome to find evidence to the contrary; it will do you good to rummage around in that rabbit hole.


Don't go sideways and bring up the supernatural; we are talking about the brain.
It exists in a universe of which quantum mechanics is a part. Nevertheless, I like your confidence.
Oooohhh, that mysterious QM. Surely that is where all the explanations one cannot find must lie. :rolleyes:


I don't think you know what you are talking about.
Should I be concerned?
 
What will?

Or rather, in your opinion, what could, if it were possible to perfectly apply it to a nonbiological substrate?

Find out how our biological brains are conscious and one would be able to start there.

Think about this: AI programs can already process way more data than a human brain can. Why aren't they conscious?
 
Find out how our biological brains are conscious and one would be able to start there.

Think about this: AI programs can already process way more data than a human brain can. Why aren't they conscious?
If we don't know how our biological brains are conscious, what basis do we have to state that non-biological systems can't be?
 
The problem with anthropomorphism is that one is applying terms as if their meaning applied to the AI program. To "know", one would expect that to mean the AI program was self-aware and had conscious, sentient thoughts.
Why? I don’t think that “knowing” implies self-awareness or sentient thoughts. I would say that knowing simply means that the program can state the facts pertaining to the concept that it knows. If I know a concept, I can usually state what this concept means to me, and so can an AI, so “knowing” is one of the simple things that AI can do objectively. Some people will protest that because an AI cannot feel the wind against its skin, it cannot know what wind is, but you can say the same about blind people and colours, and yet blind people definitely know colours even if they do not know them in the same way as people who can see.

Prove that applies before saying the AI program knows anything. It can regurgitate, it can problem-solve, whatever advanced thing it has been designed to do.
Funny word, that one: “regurgitate”. I find it to be a loaded, meaningless word. We are all regurgitating stuff we have learned, all the time. Everything I know about science is stuff I regurgitate if prompted to explain it. LLMs can write poetry that I cannot, so their regurgitating is, for some things, on a far higher level than I can ever achieve.

But if you are going to apply terms like "the AI program knows something", then one is diluting the definition of 'know' to suit one's needs.
Or denying AI programs the ability to know, because they are not biological, is an attempt to make it impossible for them to be called sentient.

We've had to carefully word things, like saying the evidence supports the conclusion ... no, no, no, you have to say the scientific evidence supports the conclusion ... and so on.
There is no scientific definition of sentience, or indeed of many of the concepts we are talking about here. We have previously seen stories where scientists have expressed surprise at how LLMs are able to do things they are not designed to do, like understanding the game of Go by forming a concept of the board even though the system has no storage for such things. I say “understanding” even though it is another anthropomorphism, because I can’t find a better expression.

We cannot yet form the conclusion that AIs are sentient, but we have to be careful not to make it impossible by moving the goalposts so that only biological beings can be sentient.

Tell me how my brain manifests consciousness.

Show me an AI program that asks to go out for a walk on its own.
This is what I am talking about: the idea that consciousness is exclusively for brains - or for legged creatures.

AI programs can outpace us in learning. They can be as complex as human brains are. They can fool the outdated Turing Test. But they are not conscious. Applying certain terminology to them won't make them sentient.
Quite correct. I am not arguing that AIs are sentient. I am arguing that they are much closer to being sentient, but by limiting certain words to apply to humans exclusively, we are artificially blocking recognition of what is happening.
 
One problem is that, willingly or not, virtually all our definitions for awareness, sentience, consciousness are tailored to exclude all beings not human.

AI do not have anything like human consciousness, and probably never will, but we cannot conclude from that that they cannot be conscious. It will just be a different form of consciousness.

Hans
 
One problem is that, willingly or not, virtually all our definitions for awareness, sentience, consciousness are tailored to exclude all beings not human.

AI do not have anything like human consciousness, and probably never will, but we cannot conclude from that that they cannot be conscious. It will just be a different form of consciousness.

Hans

Imagine a human brain being simulated inside a computer. That might be possible one day. Such a brain should be conscious in exactly the same way humans are, even without definitions.

But to claim something is or isn't, we need those definitions. Or at least tests to pass or fail.
 
If we don't know how our biological brains are conscious, what basis do we have to state that non-biological systems can't be?

Because we know what it isn't. It isn't just processing more data.

If you want to believe an AI program is conscious, be my guest.
 
...
Or denying AI programs the ability to know because they are not biological is an attempt to make it impossible to be called sentient.
OMG. I'm a bigot.

We cannot yet form the conclusion that AIs are sentient, but we have to be careful not to make it impossible by moving the goalposts so that only biological beings can be sentient. ...

This is what I am talking about: the idea that consciousness is exclusively for brains - or for legged creatures.
Once again, and I think this makes three times I've had to say the same thing: once we understand how our brains are conscious, we might be able to build an AI program that is.


Quite correct. I am not arguing that AIs are sentient. I am arguing that they are much closer to being sentient, but by limiting certain words to apply to humans exclusively, we are artificially blocking recognition of what is happening.
So limiting terms that don't apply means the machines are going to become sentient and we won't recognize it? :boggled:


You guys have been watching too much sci-fi.
 
Imagine a human brain being simulated inside a computer. That might be possible one day. Such a brain should be conscious in exactly the same way humans are, even without definitions.
Several years ago there were discussions on this site where experienced and esteemed members argued that, even in theory, such a computer could not be conscious because of qualia.
 
One problem is that, willingly or not, virtually all our definitions for awareness, sentience, consciousness are tailored to exclude all beings not human.

AI do not have anything like human consciousness, and probably never will, but we cannot conclude from that that they cannot be conscious. It will just be a different form of consciousness.

Hans
That makes no sense. Either they are conscious or they aren't. What other form of consciousness do you have in mind?
 
