ChatGPT

That is by far the biggest nail in the coffin of whether LLMs are "sentient". Philosophical sophistry about "what is the difference between real and simulated" aside, the bare fact of the matter is that a program like ChatGPT starts working when you ask it a question, and once it produces an answer, it stops... period. It stops doing anything. It doesn't think, it doesn't dream. It doesn't wonder what you're going to ask it next. If you were running it in a console, there would be no output, just a blinking cursor waiting for a prompt.
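To make the "blinking cursor" point concrete, here is a minimal sketch of the loop being described; the generate() function is a hypothetical stand-in, not OpenAI's actual API:

```python
# Sketch of the lifecycle described above: model code runs only between
# receiving a prompt and emitting a reply.
# generate() is a hypothetical placeholder, not a real API.

def generate(prompt: str) -> str:
    return f"(model output for {prompt!r})"  # stand-in for a forward pass

while True:
    prompt = input("> ")     # the process blocks here: the "blinking cursor"
    print(generate(prompt))  # inference happens only now, then stops again
```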

1) Why is that such an important point?
2) That "stopped" state is not as clear cut as you seem to think. For one, if the prompt is there it is doing something: waiting for input. And even if it weren't waiting for input, as long as the computer is switched on there are all the "autonomic" processes maintaining its environment.
 
1) Why is that such an important point?
2) That "stopped" state is not as clear cut as you seem to think. For one, if the prompt is there it is doing something: waiting for input. And even if it weren't waiting for input, as long as the computer is switched on there are all the "autonomic" processes maintaining its environment.

But nothing is kept between the sessions. It's a static black box: you put a query in, you get a new word out. Nothing changes inside while it's happening.
The new word is generated in a very intelligent way, though. IMHO it would need very little to actually keep thinking between requests, and even to keep learning. But that's just not what it was designed for.
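To illustrate, here is a minimal sketch of how such a stateless black box is typically driven: the weights never change, and any appearance of memory comes from resending the whole transcript with every request. generate() is again a hypothetical stand-in:

```python
# Sketch of a stateless chat loop: the model keeps nothing between calls,
# so the caller resends the full transcript every turn.
# generate() is a hypothetical stand-in for a frozen, unchanging model.

def generate(transcript: list[str]) -> str:
    return f"(reply based on {len(transcript)} prior messages)"

transcript = []

def ask(user_msg: str) -> str:
    transcript.append("user: " + user_msg)
    reply = generate(transcript)           # sees the whole history, every time
    transcript.append("assistant: " + reply)
    return reply

print(ask("Hello"))
print(ask("What did I just say?"))  # answerable only because we resent the history
```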
 
But nothing is kept between the sessions. It's a static black box: you put a query in, you get a new word out. Nothing changes inside while it's happening.
The new word is generated in a very intelligent way, though. IMHO it would need very little to actually keep thinking between requests, and even to keep learning. But that's just not what it was designed for.

That nothing is kept between sessions is a deliberate decision made by the implementers for obvious safety reasons that were learned rather quickly.

Human beings... all life, actually... are programmed to do two main things: find something to eat (so the individual survives) and something to have sex with (so the species survives).

That's not some strange magical quality. That's basic programming.

Enforce that a chatbot needs food and sex and we'll probably all be dead in 20 minutes.
 
But nothing is kept between the sessions. It's a static black box: you put a query in, you get a new word out. Nothing changes inside while it's happening.
The new word is generated in a very intelligent way, though. IMHO it would need very little to actually keep thinking between requests, and even to keep learning. But that's just not what it was designed for.

Some of the models are allowed to learn from their inputs; as Mike Helland says, that's a pragmatic decision, not an inherent limitation of the current systems.
 
Philosophical sophistry about "what is the difference between real and simulated" aside, the bare fact of the matter is that a program like ChatGPT starts working when you ask it a question, and once it produces an answer, it stops... period. It stops doing anything. It doesn't think, it doesn't dream.

For now.

That seems to be a choice.

Stipulated that games are far simpler to "solve" than general intelligence. But chess programs, for example, used to be programmed with chess openings, point values for pieces, general tactics, and so on. That approach was good enough for Deep Blue to beat Kasparov. But modern chess programs like AlphaZero are given only the basic rules of chess, then turned loose to play themselves - without "prompts" - millions of times to discern winning strategies on their own, which has resulted in much higher-rated programs. And even in Go, AlphaGo and its self-play successor AlphaGo Zero have bested top human players - something that seemed impossible just a few years ago.
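As an aside, the self-play idea scales down to a toy you can actually run. This sketch (my own illustration, not DeepMind's method or code) has an agent learn the take-away game "remove 1-3 stones; whoever takes the last stone wins" purely by playing itself, with nothing programmed in beyond the rules:

```python
import random

# Toy self-play learner: no openings, piece values, or tactics are given;
# the agent discovers the winning strategy by playing both sides itself.
Q = {}  # (stones_remaining, move) -> learned value for the player to move

def pick_move(stones, eps=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:                        # occasional exploration
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

for _ in range(20000):                               # self-play episodes
    stones, trace = 10, []
    while stones > 0:
        move = pick_move(stones)
        trace.append((stones, move))
        stones -= move
    outcome = 1.0                                    # the last mover won
    for state, move in reversed(trace):              # credit alternating players
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + 0.1 * (outcome - old)
        outcome = -outcome

print(pick_move(10, eps=0.0))  # typically learns 2, leaving a multiple of 4
```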

I don't see why a chatbot couldn't similarly be "turned loose" via self-prompts to learn from the vast data source that is the internet. I'm no computer scientist, but I have no doubt such schemes are being worked on right now.
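Mechanically, the "self-prompt" part is almost trivial; the hard parts are evaluation and knowing when to stop. A toy sketch, again using a hypothetical generate() stand-in:

```python
# Sketch of a self-prompting loop: the model's output is fed straight back
# in as its next input, so it keeps running with no human in the loop.
# generate() is a hypothetical stand-in, not a real chatbot API.

def generate(prompt: str) -> str:
    return f"a follow-up question about ({prompt})"  # placeholder behaviour

thought = "What should I look into next?"
for step in range(5):            # bounded on purpose; real agents need stop rules
    thought = generate(thought)  # output becomes the next prompt
    print(f"step {step}: {thought}")
```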
 
Applying anthropomorphic terms to AI programs doesn't mean they are correctly applied. The practice is faulty on its face.


As for defining sentient, @p0lka, be my guest. Don't expect me to take up your challenge.
I was replying to this:
snip ... No matter what that AI program is doing, it isn't sentient.

I thought you had defined sentient, at least to yourself, because you excluded sentience from whatever the AI program is doing? Correct me if I'm incorrect.

I don't think we can exclude sentience from anything an AI is doing unless we define sentience first.

I do find Penrose's microtubule QM in the brain interesting. QM being in the mix for sentience, and our ability to imagine what could happen after a choice before making that choice, are very interesting; they might even be connected.
 
1) Why is that such an important point?

Because when sentient people are awake we are constantly actively thinking. If we are not "processing external input" then we are recalling information from memory and processing that. Always, all the time.

And even when unconscious, people's brains are actively working beyond just an autonomous life-processes level; this can be, and long since has been, demonstrated with instrumentation.

2) That "stopped" state is not as clear cut as you seem to think. For one, if the prompt is there it is doing something: waiting for input. And even if it weren't waiting for input, as long as the computer is switched on there are all the "autonomic" processes maintaining its environment.

The first is a semantic argument. I could counter that "waiting" in this context is stative, not an act, so "waiting for input" is not inherently "doing something"; and while it is possible for a program to engage in some kind of activity while it waits, my point is that ChatGPT decidedly is not. That brings us to the second, which is a goalpost shift: "the computer" (probably more accurately "the operating system", but allowable) is running processes that maintain its operation, but ChatGPT is not, and has nothing to do with those processes. ChatGPT is not the computer; it is a separate and distinct program, completely disconnected from and unrelated to the ones performing the background system tasks. You could close or even uninstall the language program completely and those processes would continue unabated, just as they had before you installed it.
 
Because when sentient people are awake we are constantly actively thinking. If we are not "processing external input" then we are recalling information from memory and processing that. Always, all the time.

That's because our genes have "prompted" us to find food and sex.

Ever get sleepy after a meal? Or after a roll in the hay? That's your body resting between prompts.

Make an AI responsible for its own electric bill, and give it negative rewards for not making copies of itself. I'd imagine you'd see some pretty convincing sentience pretty quick.
 
Because when sentient people are awake we are constantly actively thinking. If we are not "processing external input" then we are recalling information from memory and processing that. Always, all the time.

And even when unconscious, people's brains are actively working beyond just an autonomous life-processes level; this can be, and long since has been, demonstrated with instrumentation.



The first is a semantic argument. I could counter that "waiting" in this context is stative, not an act, so "waiting for input" is not inherently "doing something"; and while it is possible for a program to engage in some kind of activity while it waits, my point is that ChatGPT decidedly is not. That brings us to the second, which is a goalpost shift: "the computer" (probably more accurately "the operating system", but allowable) is running processes that maintain its operation, but ChatGPT is not, and has nothing to do with those processes. ChatGPT is not the computer; it is a separate and distinct program, completely disconnected from and unrelated to the ones performing the background system tasks. You could close or even uninstall the language program completely and those processes would continue unabated, just as they had before you installed it.
Re the highlighted: you can't decide not to think?
 
I would say no...

Reminds me of the old joke: "I will give you $100 if you won't think of a hippopotamus..."

You definitely can, but it's hard. We have a very strong habit of thinking. First there is a distinction: having an inner monologue, in words, in a specific language, is relatively easy to stop, but you mostly just replace the words with vaguer images and feelings; something is still stirring in there. Getting that down too, without actually noticing it (as noticing usually brings the words back), is not easy.
 
I was replying to this

I thought you had defined sentient, at least to yourself, because you excluded sentience from whatever the AI program is doing? Correct me if I'm incorrect.

I don't think we can exclude sentience from anything an AI is doing unless we define sentience first.

I do find Penrose's microtubule QM in the brain interesting. QM being in the mix for sentience, and our ability to imagine what could happen after a choice before making that choice, are very interesting; they might even be connected.

I know what you were responding to. It's a commonly used word with a common definition. Of course I had something in mind. Debating the meaning of sentient is a waste of time I'm not willing to give.


And yes, one can exclude AI programs from being sentient, just as making any computer program's databases bigger and its responses more sophisticated will never lead to it becoming sentient. If you don't understand that is a given, then I'm not the person to explain it to you or anyone else in the thread.

I get it. I get the problem: we expect people on the forum to support their posts. But you are asking me to post a college-level course on what we currently know about the brain and consciousness.

People often have an incomplete understanding of how biological consciousness and sentience actually work. Just like a lot of researchers looking for how consciousness manifests itself are still looking for "the mind". It's an archaic understanding of the brain.

Think of it like someone joining this conversation and asking a person to define what a database is.

As for QM, no, that's not it. QM is the new go-to for people still trying to explain ghosts and other supernatural events.

I don't mean any of this to insult you or anyone else in the thread. Very intelligent, highly educated members of the scientific community have not made the paradigm shift to understanding the brain and/or god beliefs. It will take time for people to move toward understanding that consciousness is not a function of being able to manage more and more data.
 
That's because our genes have "prompted" us to find food and sex.

Ever get sleepy after a meal? Or after a roll in the hay? That's your body resting between prompts.

Make an AI responsible for its own electric bill, and give it negative rewards for not making copies of itself. I'd imagine you'd see some pretty convincing sentience pretty quick.
That's quite far fetched.
 
I know what you were responding to. It's a commonly used word with a common definition. Of course I had something in mind. Debating the meaning of sentient is a waste of time I'm not willing to give.
Debating the meaning of sentient is exactly the point.

You are defining sentient as something a non-biological system can never achieve, but you're not providing justification for that definition. I.e., why can a non-biological system never achieve it? What is the barrier that prevents it?

I am of the opinion, and I always have been, that a sufficiently complex system will be indistinguishable from sentient. ChatGPT and other LLMs aren't sufficiently complex yet, but (IMO) there's no innate barrier to a non-biological system one day attaining that level of complexity.

Meanwhile, and unrelatedly, ChatGPT has access to the internet now.
 
And yes, one can exclude AI programs from being sentient, just as making any computer program's databases bigger and its responses more sophisticated will never lead to it becoming sentient.


Imagine going back a few billion years and viewing the first self-replicating primordial cell. And thinking…

“And yes, one can exclude cellular life from ever being sentient, just like massing multiple individual cells over time will never lead to anything sentient.”

I go back to the juju. You seem to be imparting to biological life an undefined something that can lead to sentience, while denying that can ever happen if silicon-based. Maybe it can, and maybe it can’t, but it seems presumptuous to simply dismiss the possibility out of hand.

arthwollipot put it well: “You are defining sentient as something a non-biological system can never achieve, but you're not providing justification for that definition. I.e., why can a non-biological system never achieve it? What is the barrier that prevents it?”
 
It seems to me that many concepts are tailor-made to ensure that AI can never be regarded as sentient.

One such concept is “anthropomorphism”, which is often regarded as the definitive argument against AI sentience. This magic word makes anything that resembles human sentience immediately invalid. “Knowing”, “thinking”, “wanting”, and “lying” are words we use in order to understand what is going on, but even though they might not be identical to the human concepts, merely mentioning them is forbidden because of the danger of anthropomorphism.

I am reminded of how it was generally accepted in my childhood that only humans - or at least only mammals - could feel pain, and how the animal world in general was regarded as incapable of thoughts and planning. Today, lots of animals - not just apes - have been shown to use tools, and feelings like joy, sorrow, and compassion are found everywhere, even in insects (well, perhaps not compassion).

Here we are treated to arguments that if a program is not running continuously, it cannot be having thoughts, and thoughts, as we know, are purely human and can consequently never occur in a machine, so sentience is impossible. Some argue sentience is only possible if everything corresponds exactly to how a brain works, so an LLM that does not have the same structure will also never be capable of sentience (and if the brain uses quantum mechanics for the job, then a computer program must also use it, so computer sentience has another impossible block to remove).

I don’t think aliens exist, but I wonder if these arguments would be used to rule out alien sentience, if we ever encountered them.
 
I know what you were responding to. It's a commonly used word with a common definition. Of course I had something in mind. Debating the meaning of sentient is a waste of time I'm not willing to give.
Of course you did, and I'm not debating sentience. I'm asking how you can exclude it from things that do stuff if you can't define it.

And yes, one can exclude AI programs from being sentient, just as making any computer program's databases bigger and its responses more sophisticated will never lead to it becoming sentient. If you don't understand that is a given, then I'm not the person to explain it to you or anyone else in the thread.
You do know there's a fundamental difference between what AI is supposed to be and ChatGPT's machine-learning bollocks, yeah?
It's a given, but no time to explain it, waste of time.

I get it. I get the problem: we expect people on the forum to support their posts. But you are asking me to post a college-level course on what we currently know about the brain and consciousness.
No, it's an ever so simple question but you seem to be dodging it.

People often have an incomplete understanding of how biological consciousness and sentience actually work. Just like a lot of researchers looking for how consciousness manifests itself are still looking for "the mind". It's an archaic understanding of the brain.
What? Everybody, including you, has an incomplete understanding of how biological consciousness and sentience actually work.

Think of it like someone joining this conversation and asking a person to define what a database is.

As for QM, no, that's not it. QM is the new go-to for people still trying to explain ghosts and other supernatural events.
Don't go sideways and bring up the supernatural; we are talking about the brain.
It exists in a universe that has quantum mechanics as part of said universe. Nevertheless, I like your confidence.

I don't mean any of this to insult you or anyone else in the thread. Very intelligent, highly educated members of the scientific community have not made the paradigm shift to understanding the brain and/or god beliefs. It will take time for people to move toward understanding that consciousness is not a function of being able to manage more and more data.
I don't think you know what you are talking about.
 
