
ChatGPT

Once again, I think this makes 3 times I've had to say the same thing. Once we understand how our brains are conscious we might be able to build an AI program that is.

Yes, we might. But, an AI might also become conscious in some other way.

So limiting terms that don't apply means the machines are going to become sentient and we won't recognize it? :boggled:

Indeed. IF we insist on a definition on 'sentient' that is restricted to 'human style sentience', we might not recognize some other mode of sentience.

Which means that we'd better work on developing a broader understanding of sentience than we currently use, lest AI give us a bit of a surprise.

Hans
 
That makes no sense. Either they are conscious or they aren't. What other form of consciousness do you have in mind?

Oh, this is too easy: Define consciousness. As long as we don't, we can't judge whether they are or not.

But of course I owe you: Consciousness should be the ability to understand and act proactively on your environment and yourself. ... over-simplified, but might work for a start.

Hans
 
I do not believe we are going to reach a consensus here and the debate for me is time I don't wish to spend.

From Wiki: Human Brain
The philosophy of mind has for centuries attempted to address the question of the nature of consciousness and the mind–body problem. ...

In the 19th century, the case of Phineas Gage, a railway worker who was injured by a stout iron rod passing through his brain, convinced both researchers and the public that cognitive functions were localised in the brain.[206] Following this line of thinking, a large body of empirical evidence for a close relationship between brain activity and mental activity has led most neuroscientists and contemporary philosophers to be materialists, believing that mental phenomena are ultimately the result of, or reducible to, physical phenomena.[211]
Wiki needs an update because the citation for this is from 2000. But the point is correct: consciousness is a physical phenomenon.

There is so much more we know about how the brain functions, including the discovery of just how much goes on outside of conscious awareness. And I'm not talking about spinal cord reflexes. One example is the directions we give our fingers to type. Ever think of one word but your fingers type another? You probably all know that what we sense, especially sight and sound, is an interpretation of the actual input the brain receives.

Anyway, Medical News Today; Jan 2023
When they combine data from fMRI and EEG, researchers discover that activity deep inside the brain, in the thalamus, varies during wakefulness in step with activity in the default mode network.

The thalamus operates like a sensory relay station, while the default mode network is a collection of regions within the cortex — the outermost layer of the brain — that are intimately involved in mind-wandering and self-awareness.

By contrast, during non-REM sleep and under anesthetic, functional connectivity breaks down between the thalamus and the default mode network and cortical networks that are involved in attention.

In disorders of consciousness, researchers can see reduced functional connectivity and physical damage that affects the connections between the cortex and deep brain structures.

Despite advances in our understanding of the neural correlates of consciousness, the question remains: How does consciousness arise from brain activity?

Scientists have proposed several theories. Two prominent ideas are global neuronal workspace theory (GNWT) and integrated information theory (IIT).

The article goes into a couple of theories, one of which is a bit of la-la land. It's a long article. But there is a lot more not included, fascinating stuff about how the brain works.


Bottom line, we'll never reach a consensus until we are on the same page about how consciousness arises in the brain. So I agree to disagree.
 
I was just pushed this on YouTube:

https://youtu.be/9QWaZp_2I1k?si=x7URRjVcHagLx8UW

About how sentience evolved. Have not watched it yet, but it seems like it might be on point.


That's a cool presentation there. Enjoyed hearing his views, his theory.

(Am around halfway through, at the part where he finishes discussing his theory, says clearly that this is all speculation so far, and then starts to discuss what evidence there is to back it up. Had to stop there; in fact I ended up spending more time on this than I should have right now because I found it so interesting. Am looking forward to finishing the rest of it later, hopefully later today.)
 
How can anyone claim an AI program is conscious and self aware?
The reason I know is that we are on the verge of understanding the conscious brain. And it isn't just a huge-database processor.
Who said it was? A theoretical AI isn't just a huge-database processor. ChatGPT isn't AI.

Oooohhh, that mysterious QM. Surely that is where all the explanations one cannot find must lie. :rolleyes:

You know how QM is used by grifters the world over, yeah? It is actually a real thing too.
Should I be concerned?
Yeah sorry, I should have typed no one knows what they're talking about.
Anyway
Of course you did, and I'm not debating sentience. I'm asking how you can exclude it in doing things (things that do stuff) if you can't define it?

No, it's an ever so simple question but you seem to be dodging it.
 
Indeed. IF we insist on a definition on 'sentient' that is restricted to 'human style sentience', we might not recognize some other mode of sentience.

That's just a tautological game. If sentience is definitionally limited to our understanding of the way it happens in human brains, of course we would not recognize "some other mode of sentience" because there IS no other mode to recognize. A theoretical AI might become something but that thing is not "sentient", it's just something else.

Which means that we'd better work on developing a broader understanding of sentience than we currently use, lest AI give us a bit of a surprise.

Specious; we don't have to recognize something as "sentient" to be aware of potential problems it could cause based on what we know about its functions and capabilities. For instance, we can predict that self-driving car software could become confused by its environment and stop in the middle of the street, blocking traffic for an hour until a human comes and moves it (and this is something that has happened a number of times). Causing that problem doesn't require "sentience", nor does our not thinking of self-driving cars as "sentient" impact our ability to predict the risk.
 
Several years ago there were discussions on this site where experienced and esteemed members argued that even in theory such a computer could not be conscious, because of qualia.

Ah, good old qualia. Still, it's the same as other subjective properties. Why bother contemplating whether computers can do it if we can't even test whether people are doing it.
 
Ah, good old qualia. Still, it's the same as other subjective properties. Why bother contemplating whether computers can do it if we can't even test whether people are doing it.
I had to look qualia up. The philosophy of the mind is one of those paradigm shifts we need to make.

It's akin to things like non-human animals don't use tools ... whoops, non-human animals don't make tools ... whoops,

Non human animals don't use language ... whoops, non-human animals can't be taught human language ... whoops, non-human animals don't use/understand syntax ... whoops,
 
AI is a branch of computer science, a set of solutions, sometimes also just a set of problems, regardless of how they are solved. It is NOT some achieved level of proficiency.
ChatGPT certainly is AI and uses the most typical AI algorithms, i.e. linear neural networks. But so is OCR software. It doesn't really mean much.
 
Some researchers are working to overcome the problem that we have no consensus of what consciousness would mean in an AI:

If AI Becomes Conscious, Here’s How We Can Tell.

This group of “19 neuroscientists, philosophers and computer scientists” is working on a “checklist derived from six neuroscience-based theories of consciousness could help assess whether an artificial intelligence system achieves this state”.

To develop their criteria, the authors assumed that consciousness relates to how systems process information, irrespective of what they are made of — be it neurons, computer chips or something else. This approach is called computational functionalism. They also assumed that neuroscience-based theories of consciousness, which are studied through brain scans and other techniques in humans and animals, can be applied to AI.

In order to gain consensus they have selected six neuroscientific theories of consciousness and extracted a list of consciousness indicators from them.

There are unfortunately not too many details on the actual checklist.
 
Some researchers are working to overcome the problem that we have no consensus of what consciousness would mean in an AI:

If AI Becomes Conscious, Here’s How We Can Tell.

I think in general we get kind of close to an answer, and then sort of avoid actually addressing it.

From the article above:

One of the challenges in studying consciousness in AI is defining what it means to be conscious. Peters says that for the purposes of the report, the researchers focused on ‘phenomenal consciousness’, otherwise known as the subjective experience. This is the experience of being — what it’s like to be a person, an animal or an AI system (if one of them does turn out to be conscious).

Consciousness is, in its simplest terms, what it is like to be.

And the hard problem of consciousness, in the simplest terms: how does being arise from matter?

If the world were fundamentally matter, why wouldn't we be p-zombies? Why is there something it is like to be us?

I can't help but think of Heidegger when people talk about this.

http://www.naturalthinker.net/trl/t...r, Martin - Being and Time/Being and Time.pdf

Page 10, which contains the first sentence.

Introduction
The Exposition of the Question of the Meaning of Being

I

The Necessity, Structure, and Priority of the Question of Being

1. The Necessity of an Explicit Retrieve of the Question of Being

Now keep in mind, he hasn't actually written anything but the chapter, section, and subsection titles, but you should already be picking up a theme.

He begins:

The question has today been forgotten - although our time considers itself progressive in again affirming "metaphysics." All the same we believe that we are spared the exertion of rekindling a "Battle of the Giants concerning Being." But the question touched upon here is hardly an arbitrary one. It sustained the avid research of Plato and Aristotle but from then on ceased to be heard as a thematic question of actual investigation. What these two thinkers achieved has been preserved in various distorted and "camouflaged" forms down to Hegel's Logic. And what then was wrested from phenomena by the highest exertion of thought, albeit in fragments and first beginnings, has long since been trivialized.

That's the first paragraph. It kind of seems to me he's irritated. Irritated that he would even have to write this down, because Plato already worked this out, so why doesn't everyone just get it?

Not only that. On the foundation of the Greek point of departure for the interpretation of being, a dogma has taken shape which not only declares that the question of the meaning of being is superfluous but sanctions its neglect. It is said that "being" is the most universal and the emptiest concept. As such it resists every attempt at definition. Nor does this most universal and this indefinable concept need any definition. Everybody uses it constantly and also already understands what is meant by it. Thus what troubled ancient philosophizing and kept it so by virtue of its obscurity has become obvious, clear as day, such that whoever persists in asking about it is accused of an error of method.

Now he seems more than irritated.

To be honest, I really haven't read much more of the book than this. But that's because I already get what he's talking about, and why he's irritated.

I'm kind of right there with him.

The hard problem of consciousness has a pretty simple answer. How does being arise from matter?

It doesn't.

Being does not arise from matter.

Matter arises from being.

Existence is fundamentally made of "being". That's really the only way it works.

Being goes on about what it does, making names for parts of itself, measurements of itself, deciding what matters to it, whether that's scientific experiments, or love, or money, or family, or winning internet debates.

In this way, matter arises from being.

Being is primary. The kilogram is not.
We are part of being; that makes us being. Part of our being is a model of the total being, including ourselves in it.

Being knows itself through us. The universe knows itself through us. (Sagan anyone?) That's consciousness.

If the universe can know itself through a computer program, well, I don't really see much of an argument that it isn't consciousness.

I'm not talking about a Wikipedia page; otherwise you could call a book conscious, or a rock with some words on it conscious. It would seem important that the model has to be "running", or at least dynamic in some sense. In this sense, consciousness, or mind, would be the parts of being that act as a mirror of sorts, reflecting back at itself what this particular part of it thinks.

Ya know, we accept quite readily that space and time are relative. But no one ever thinks about matter that way. Why not? Matter exists in space and time. Why wouldn't matter be "relative" too? I think this is a bit existentially dreadful, so I can see why it's not fun to go there.

But if you were to (in some sense) extend relativity to everything, then matter would be a consequence of measurement.

What's doing the measuring?

For a scientific attempt at answering that, I think that's something Hugh Everett's thesis is on about. I just wanted to mention that here, because, as Heidegger found out, you could go on and on about the answers one gets from the question of being, but this thread is likely to plow on as if this post never happened. But yeah. Where was I. Oh yeah. The hard problem of consciousness and the measurement problem in quantum physics seem to me fundamentally related, both a result of a messy metaphysics, the combination of an inconsistent relativeness, and a neglect of the concept of being.

So, here's what I think the check list is.

Does it exist? (Do it be?)

Does it model existence as it happens?

It seems to me, at least some of our AI is currently conscious, perhaps with even a richer subjective experience, content of the mind, than we're capable of considering.

Just some thoughts. Carry on.
 
I was just pushed this on YouTube:

https://youtu.be/9QWaZp_2I1k?si=x7URRjVcHagLx8UW

About how sentience evolved. Have not watched it yet, but it seems like it might be on point.

That was great*. You should watch it.

I don't agree with everything he claims about non-sentient animals without more investigation. But it's all very relevant to this thread. And it means I have more to look into about biological brain function.


For those with less time fast forward to minute 37 to see clips of sentient animals. Watch from there on and he addresses ChatGPT.


*Though the experiment they did on Helen the chimp made me terribly sad.
 
Personally, I don’t think consciousness is a binary proposition. I think consciousness is a continuum from no consciousness (like viruses) to humans.

In the absence of a clear definition, I don’t think AI has moved from the zero stage, but I think it is getting there.
 
Personally, I don’t think consciousness is a binary proposition. I think consciousness is a continuum from no consciousness (like viruses) to humans.

In the absence of a clear definition, I don’t think AI has moved from the zero stage, but I think it is getting there.

I agree on the first part.

I think AI is way above zero and has been for some time. Somewhere on insect level, at least.

Hans
 
That was great*. You should watch it.

I don't agree with everything he claims about non-sentient animals without more investigation.

I did finish it yesterday. I found the claim that an animal needs to be warm-blooded to be sentient quite a reach. It sure seems sentience would be a benefit to cold-blooded animals as well.


*Though the experiment they did on Helen the chimp made me terribly sad.

Ditto.
 
I did finish it yesterday. I found the claim that an animal needs to be warm-blooded to be sentient quite a reach. It sure seems sentience would be a benefit to cold-blooded animals as well.
I agree. That was the part I didn't buy either. Alligators are cold-blooded, and the one in the thread that the guy leads around on a leash fits the bill as sentient.
 
I agree. That was the part I didn't buy either. Alligators are cold-blooded, and the one in the thread that the guy leads around on a leash fits the bill as sentient.

I pictured Velociraptors hunting in packs. And since birds are dinosaurs…

Though I think there’s some debate over whether dinosaurs were warm- or cold-blooded.
 
I pictured Velociraptors hunting in packs. And since birds are dinosaurs…

Though I think there’s some debate over whether dinosaurs were warm- or cold-blooded.
Probably still is, but the more evidence discovered, the more the consensus leans toward warm-blooded, with exceptions. It's funny how the first ideas about dinos were that they were lizard-like. Scientists in the 1800s couldn't imagine they were like mammals. They pictured reptiles, and it took a couple hundred years to shake that image off.

Yale study: Were dinosaurs warm blooded? Their eggshells say yes.


They would have to be sentient to hunt in packs.
 
I also liked the talk, but I have similar reservations. For instance, the octopus: it may not have much feeling for other individuals, but it must have more than "blindsight" to mimic its surroundings so efficiently, not only in colour but often also in texture.

Yes, some dinosaurs seem to have been warm-blooded, but that is not important for the principle, it just widens the time-scale somewhat.

Another thing I found unfounded was the idea that sentience has to be unique to Earth. I do agree that plenty of worlds may have life but not sentient life; still, sentience is widespread on Earth, so the chance of it evolving must be considerable.

Still, what was important was that it seems to show the path to a useful definition of consciousness.

Hans
 
