p0lka
Illuminator
"How do you know when you're not thinking?"

When I start thinking again.
You definitely can, but it's hard. We have a very strong habit of thinking. First there is a distinction .. having an inner monologue, in words, in a specific language .. that is relatively easy to stop .. but you mostly just replace the words with more vague images and feelings .. something is still stirring in there .. and getting that down without actually noticing it, as noticing usually brings the words back .. not easy.
Imagine going back a few billion years and viewing the first self-replicating primordial cell. And thinking…

"And yes, one can exclude cellular life from ever being sentient, just like massing multiple individual cells over time will never lead to anything sentient."

Did I say "cellular life"? No, I said biological life and was discussing brains. I didn't even get into things like the brains of octopi.
Try your premise the other way around: tell me how consciousness works in the human brain.

I go back to the juju. You seem to be imparting to biological life an undefined something that can lead to sentience, while denying that can ever happen if silicon-based. Maybe it can, and maybe it can't, but it seems presumptuous to simply dismiss the possibility out of hand.
arthwollipot put it well: “You are defining sentient as something a non-biological system can never achieve, but you're not providing justification for that definition. Ie. why can a non-biological system never achieve it? What is the barrier that prevents it?”

I would say it's not possible at all. Recognition and observation require "thought"; you may replace other thoughts with the thought that you wish to "stop thinking" and focus on it, and you may even quiet your inner spoken-language monologue and replace it with vague sensations or feelings, but those things are still thoughts. And you may even convince yourself that you have succeeded by observing a moment when in your opinion you have "stopped thinking"; but making that observation and applying that opinion to it are thoughts as well. There's really no way around it.
But this is a little beside my point. Even if we were all to agree that, with enough effort, a person can self-induce a vegetative state by sheer force of will, it has to be acknowledged that it is at the very least an abnormal, unnatural, and quite rare state for people to be in - which still distinguishes sentient humans from an LLM computer program for which waiting for a prompt with zero processing activity is the default state.
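To make that concrete (a minimal sketch; echo_model is a made-up placeholder, not any real LLM API), a chat-style LLM program is essentially a loop that blocks waiting for input and only computes while producing a reply:

    # Minimal sketch of a prompt-driven "LLM" program. echo_model is a
    # hypothetical stand-in for a real model's forward pass; the point is
    # the control flow, not the model.
    def echo_model(prompt: str) -> str:
        return f"(model output for {prompt!r})"

    while True:
        prompt = input("> ")       # blocks here, doing nothing, until a prompt arrives
        print(echo_model(prompt))  # computation happens only while answering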
It seems to me that many concepts are tailor-made to ensure that AI can never be regarded as sentient.
One such concept is "anthropomorphism", which is often treated as the definitive argument against AI sentience. This magic word makes anything that resembles human sentience immediately invalid. "Knowing", "thinking", "wanting", and "lying" are words we use in order to understand what is going on, but even though they might not be identical to the human concepts, merely mentioning them is forbidden because of the danger of anthropomorphism.
I am reminded of how it was generally accepted in my childhood that only humans - or at least only mammals - could feel pain, and how the animal world in general was regarded as incapable of thoughts and planning. Today, lots of animals - not just apes - have been shown to use tools, and feelings like joy, sorrow, and compassion are found everywhere, even in insects (well, perhaps not compassion).
Here we are treated to arguments that if a program is not running continuously, it cannot be having thoughts, and thoughts, as we know, are purely human and can consequently never occur in a machine, so sentience is impossible. Some argue sentience is only possible if everything corresponds exactly to how a brain works, so an LLM that does not have the same structure will never be capable of sentience (and if the brain uses quantum mechanics for the job, then a computer program must also use it, so computer sentience has yet another impossible block to remove).
I don’t think aliens exist, but I wonder if these arguments would be used to rule out alien sentience, if we ever encountered them.
"Applying certain terminology to them won't make them sentient."

What will?
"Of course you did, and I'm not debating sentience. I'm asking how you can exclude it from doing things (things that do stuff) if you can't define it?"

How can anyone claim an AI program is conscious and self-aware?
"What? Everybody, including you, has an incomplete understanding of how biological consciousness and sentience actually work."

That is correct. You are welcome to find evidence to the contrary; it will do you good to rummage around in that rabbit hole....
"Don't go sideways and bring up the supernatural; we are talking about the brain."

Oooohhh, that mysterious QM. Surely that is where all the explanations one cannot find must lie.
It exists in a universe that has quantum mechanics as part of it. Nevertheless, I like your confidence.
"I don't think you know what you are talking about."

Should I be concerned?
What will?
Or rather, in your opinion, what could, if it were possible to perfectly apply it to a nonbiological substrate?
"Find out how our biological brains are conscious and one would be able to start there."

If we don't know how our biological brains are conscious, what basis do we have to state that non-biological systems can't be?
Think about this: AI programs can already process way more data than a human brain can. Why aren't they conscious?
Why? I don’t think that “knowing” implies self awareness or sentient thoughts. I would say that knowing simply means that the program can state the facts pertaining the concept that it knows. If I know a concept, I can usually state what this concept means to me, and so can an AI, so “knowing” is one of the simple things that AI can do objectively. Some people will protest that because an AI cannot feel the wind against the skin, it cannot know what wind is, but you can say the same about blind people and colours, and yet blind people definitely knows colours even if they do not know them in the same way as people who can see.The problem with anthropomorphism is one is applying terms as if the meaning applied to the AI program. To "know" one would expect that meant the AI program was self aware and had conscious sentient thoughts.
"Prove it applies - say, that the AI program knows anything. It can regurgitate, it can problem-solve, whatever advanced thing it has been designed to do."

Funny word, that one: "regurgitate". I find it to be a loaded, meaningless word. We are all regurgitating stuff we have learned, all the time. Everything I know about science is stuff I regurgitate if prompted to explain it. LLMs can write poetry that I cannot, so their regurgitating is on a far higher level than I can ever achieve for some things.
"But if you are going to apply terms like 'the AI program knows something', then one is diluting the definition of 'know' to suit one's needs."

Or denying AI programs the ability to know because they are not biological is an attempt to make it impossible for them ever to be called sentient.
"We've had to carefully word things, like saying the evidence supports the conclusion ... no, no, no, you have to say the scientific evidence supports the conclusion ... and so on."

There is no scientific definition of sentience, or indeed of many of the concepts we are talking about here. We have previously seen stories where scientists have expressed surprise at how LLMs are able to do things they were not designed to do, like understanding the game of Go by forming a concept of the board even though the system has no storage for such things. I say "understanding" even though it is another anthropomorphism, because I can't find a better expression.
"Tell me how my brain manifests consciousness."

This is what I am talking about: the idea that consciousness is exclusively for brains - or for legged creatures.
Show me an AI program that asks to go out for a walk on its own.
"AI programs can outpace us in learning. They can be as complex as human brains are. They can fool the outdated Turing Test. But they are not conscious. Applying certain terminology to them won't make them sentient."

Quite correct. I am not arguing that AIs are sentient. I am arguing that they are much closer to being sentient, but by limiting certain words to apply to humans exclusively, we are artificially blocking recognition of what is happening.
We cannot yet form the conclusion that AIs are sentient, but we have to be careful not to make it impossible by moving the goalposts so that only biological beings can be sentient.
One problem is that, willingly or not, virtually all our definitions for awareness, sentience, consciousness are tailored to exclude all beings not human.
AI do not have anything like human consciousness, and probably never will, but we cannot conclude from that that they cannot be conscious. It will just be a different form of consciousness.
Hans
"If we don't know how our biological brains are conscious, what basis do we have to state that non-biological systems can't be?"
OMG. I'm a bigot....
"Or denying AI programs the ability to know because they are not biological is an attempt to make it impossible to be called sentient."
"We cannot yet form the conclusion that AIs are sentient, but we have to be careful not to make it impossible by moving the goalposts so that only biological beings can be sentient. ..."

Once again, I think this makes 3 times I've had to say the same thing: once we understand how our brains are conscious, we might be able to build an AI program that is.
"This is what I am talking about: the idea that consciousness is exclusively for brains - or for legged creatures."
"Quite correct. I am not arguing that AIs are sentient. I am arguing that they are much closer to being sentient, but by limiting certain words to apply to humans exclusively, we are artificially blocking recognition of what is happening."

So limiting terms that don't apply means the machines are going to become sentient and we won't recognize it?

"Imagine a human brain being simulated inside a computer. That might be possible one day. Such a brain should be conscious in exactly the same way humans are. Even without definitions."

Several years ago there were discussions on this site where experienced and esteemed members argued that even in theory such a computer could not be conscious, because of qualia.
"One problem is that, willingly or not, virtually all our definitions for awareness, sentience, consciousness are tailored to exclude all beings not human."

That makes no sense. Either they are conscious or they aren't. What other form of consciousness do you have in mind?
AI do not have anything like human consciousness, and probably never will, but we cannot conclude from that that they cannot be conscious. It will just be a different form of consciousness.
Hans