
ChatGPT

My current takeaway from this interesting thread seems to be:

- Sentience is not objectively defined, at least not yet, and it may not be possible to define it accurately.

- Sentience is not binary; there seems to be a continuous range, from the vaguely sentient reactions of a shoal of fish in the presence of a predator to the sentience of humans, and possibly beyond.

- While we currently know of no non-biological sentient entities, we cannot rule out that future artificial constructions could be sentient.

- There is no reason to assume that an artificial sentient entity must behave like a biological one; i.e. a smart computer may not behave like a human, and might therefore elude any current test (such as the Turing test).

Hans

Pretty much sums up my views.

My broader view is that the current AIs - the generative/LLM types - are giving us hints as to how complex processes such as our sentience can come about (which is not to say that is how our sentience came about).

As noted several times, we are no longer programming these AIs in the classical sense. There are already aspects of these AIs that we cannot explain; they are doing things we didn't think they could or should be able to do, and we are struggling to understand how they can do that (see the sketch at the end of this post). That's the fascination for me.

For many folk who have long considered that we are merely an emergent property of the contaminated doughnut bag of water that is our physicality, these early AIs are beginning to demonstrate that "emergent" properties can account for something as seemingly complex as our sentience.
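
To make the "no longer programming in the classical sense" point concrete, here is a minimal toy sketch in Python (entirely illustrative, and nothing to do with any real LLM): the first function's behaviour is written down by a person, while the second function's behaviour falls out of example data plus an optimization loop, which is exactly the kind of thing that is hard to explain after the fact.

Code:
# Toy contrast: classically programmed behaviour vs. learned behaviour.
# Purely illustrative; real LLMs adjust billions of weights, not one.

def programmed_double(x):
    # Classical programming: the rule is hand-written and fully explainable.
    return 2 * x

def learn_double(examples):
    # "Training": search for the weight that best fits the examples.
    # No one writes the rule; it emerges from data plus optimization,
    # which is why explaining what a trained model has learned is hard.
    best_w, best_err = 0.0, float("inf")
    for i in range(-500, 501):
        w = i / 100
        err = sum((w * x - y) ** 2 for x, y in examples)
        if err < best_err:
            best_w, best_err = w, err
    return lambda x: best_w * x

learned_double = learn_double([(1, 2), (2, 4), (3, 6)])
print(programmed_double(5), learned_double(5))  # 10 10.0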
 
The current AIs aren't computers. And yes, the current LLM-based AIs are already doing things that they weren't programmed to do, and we don't know how they are doing it.

Re semantics: is AI a software program or not?

The idea "we don't know how they are doing it" doesn't make it sentience or conscious or anything of the sort. It's analogous to the god of the gaps.
 

In fact, one part of the definition of sentience must be that it defies pre-programming.

ETA: Which does imply that sentience is a product not of complexity but of structure, although such a structure probably implies considerable complexity.

Hans
 

Human consciousness arose as part of a system that evolved to meet the needs of survival and reproduction. The specifics of how it does that aren't completely known to us, but the optimization process wasn't optimizing for consciousness; it was optimizing for reproduction, and consciousness is something that arose along the way, either because it is itself a useful, adaptive part of the system, or because it's a consequence of some other part of the system (a spandrel).

AIs, similarly, are complex systems that arise through a process (machine learning, at least currently). They will also be optimized for their ability to achieve certain outcomes. While those outcomes aren't identical to the ones the human brain is optimized for, many of them overlap, and there is a general similarity between the abilities we are looking for from AI and those human brains are capable of. It's not necessarily the case that the optimization process will achieve those ends through the same means, but it's certainly possible. And given the way machine learning works, we may not even know exactly how it's done when it's done.
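
To illustrate the "optimizing for outcomes, not mechanisms" point, here is a minimal toy sketch in Python (the target pattern and all parameters are made up for illustration): a selection loop that scores candidates purely on an outcome and never specifies, or even records, how a candidate achieves its score.

Code:
import random

# Toy selection loop: candidates are judged only on an outcome score.
# Nothing here specifies HOW a candidate earns its score, mirroring the
# point that evolution optimized for reproduction, not for any particular
# inner mechanism (such as consciousness) that arises along the way.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical "environment"

def fitness(genome):
    # Outcome measure only: how well the genome matches the environment.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=50):
    length = len(TARGET)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # select on outcome alone
        children = []
        for parent in survivors:
            child = parent[:]              # copy, then mutate one bit
            i = random.randrange(length)
            child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))  # converges toward TARGET without being told how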
 
Physical mechanisms are not important; it's the informatics that matters. Does the brain do something a computer can't do, or at least can't emulate to a sufficient degree of precision?
So it's mostly a question of whether the brain relies on quantum effects, since those can't be emulated with current computers (with reasonable efficiency).
But that's where quantum computers will help. In the worst case we would need a quantum computer with as many parts as the brain has, which might be decades away. But I hope it won't come to that, and that it will turn out to be much easier.
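
For a sense of the scale behind "as many parts as the brain has", here is a rough back-of-envelope in Python; the neuron and synapse counts are commonly cited order-of-magnitude estimates, and the qubit count is an assumption about current hardware, not a measured figure:

Code:
# Back-of-envelope scale comparison (order-of-magnitude estimates only).
neurons  = 8.6e10  # ~86 billion neurons, a commonly cited estimate
synapses = 1e14    # ~100 trillion synapses, order of magnitude
qubits   = 1e3     # assumed rough scale of today's quantum processors

print(f"synapses per available qubit: {synapses / qubits:.0e}")
# ~1e+11: eleven orders of magnitude short of one "part" per synapse,
# which is why the worst case could indeed be decades away.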
 

You are here talking about simulating a human brain, and whether that will be sentient/conscious. That is indeed an interesting question, and I'm sure it will be explored.

However, another, and IMO even more interesting, question is whether a digital computer, operating as such, has the potential to be sentient, and what kind of sentience that would be.

Human sentience was created by evolution for the purpose of survival. What will some machine sentience conceive as its purpose?

Hans
 

If sentience means just "has inputs", then all computers are already sentient. And obviously computers already have a purpose.
 
And no one has claimed it does - who are you arguing against? :confused:

Half the people in this thread who think AI can evolve to have some kind of sentience because it asks a question or comes up with an answer that wasn't expected.
 

To remind you what you posted and what I replied to:

The idea "we don't know how they are doing it" doesn't make it sentience or conscious or anything of the sort. It's analogous to the god of the gaps.

As I said, no one in this thread is arguing that.
 
Mimicking brain function is still mimicking.

It's oversimplified to think survival and reproduction are the only things natural selection pressures act on. But that's a subject for another thread.

One can optimize AI all one wants. It can make independent decisions (we already have that with self-driving cars as an example). What else do you want it to do to make it sentient?

What I see in the thread is an attempt to redefine sentience so one can claim AI can become sentient with the proper amount of tinkering.

:rolleyes:

AI cannot become sentient if you don't change the definition of sentience: conscious self-awareness and independence beyond the things it is programmed to do, like driving.

AI is not going to come alive. It doesn't have the structure it would need, regardless of whether someone set out to make it alive.
 
Actually, current-carrying wires and nerves perform the same function, so in that regard they are the same.

No, they don't, because the central processor that human brains have is missing. In the brain it doesn't function the way an AI program does.

And until we figure out how it works in the brain, we can't recreate it.
 
What I see in the thread is an attempt to redefine sentience so one can claim AI can become sentient with the proper amount of tinkering.

Whose definition of 'sentient'?

With the proper amount of tinkering, AI may become sentient.

That's what tinkering is all about.
 

You've got that arse about tit: it's you that is trying to change the definition of sentience so that AIs can never be considered sentient.
 

Folk aren't claiming that AI would or will have the same sentience a human has, so whether we know how sentience occurs in humans is irrelevant to how it might occur in AIs.
 

What do nerves do that current-carrying wires in computers do not do?

No one mentioned the brain.

You might find this interesting.


New artificial nerves could transform prosthetics

Because organic electronics like this are inexpensive to make, the approach should allow scientists to integrate large numbers of artificial nerves that could pick up on multiple types of sensory information, Shepherd says. Such a system could provide far more sensory information to future prosthetics wearers, helping them better control their new appendages. It could also give future robots a greater ability to interact with their ever-changing environments—something vital for performing complex tasks, such as caring for the elderly.

When robots can feel, does it mean they have feelings?
 
I knew I would have to explain what I meant, sorry.

It dawned on me last night that skeptics were making the same mistake we recognize when god believers make it: if they can't explain it, then God did it.

If people can't explain how an AI program is coming up with the answers it does, that doesn't mean it's on the verge of sentience.

People may disagree with my analogy.
 
