• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

ChatGPT

I feel like sentience might require a very non-standard command tree. Yes, we're technically programmed to eat and sleep, but it gets a bit messy: it hurts not to eat and sleep.

Unless you're a mad scientist or actively trying to create sentience, going down that road sounds insane. Just make your software do what you want it to do.
 
I feel like sentience might require a very non-standard command tree. Yes, we're technically programmed to eat and sleep, but it gets a bit messy: it hurts not to eat and sleep.

Unless you're a mad scientist or actively trying to create sentience, going down that road sounds insane. Just make your software do what you want it to do.

Well, it might not be useful. It might even be dangerous. But it's certainly interesting.
 
Because we know what it isn't. It isn't just processing more data.
But what is it? And why can whatever-it-is not be implemented on a non-biological substrate?

If you want to believe an AI program is conscious, be my guest.
I don't believe that any AI program currently running is conscious. I just don't see any reason why a non-biological system could not some day be conscious. I don't see anything about consciousness that makes it inherently impossible to implement on a sufficiently complex non-biological system, in principle.

In fact, I'd go further, and suggest that in order to make it impossible for consciousness to exist on a non-biological substrate, there must be something special about biology that differentiates it from non-biological systems - something other than mere complexity. Dare I say, something immaterial and nonphysical. Because we (i.e., science) understand physical matter pretty well.

We cannot currently simulate an entire brain on a computer, because a brain is among the most complex structures we know of and no computer is yet powerful enough to do so in real time. But we understand quite a lot about how the component parts of the brain - neurons, particularly - work. There's nothing about them that couldn't, in principle, be simulated on a sufficiently powerful computer. Do that, then link billions of them together into recursive feedback loops with trillions of connections, give the system inputs analogous to senses and outputs analogous to communication, and why shouldn't it be sentient? Why couldn't it be?
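
For what it's worth, the "simulate neurons and wire them into feedback loops" step is easy to sketch at toy scale. Here is a minimal leaky integrate-and-fire network in Python (only NumPy assumed; the neuron model, size, and parameters are illustrative choices, nowhere near a real brain):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                      # toy network; a human brain has ~86 billion neurons
W = rng.normal(0, 0.5, (N, N)) / np.sqrt(N)  # random recurrent (feedback) weights
v = np.zeros(N)              # membrane potentials
threshold, decay = 1.0, 0.9  # illustrative spiking threshold and leak factor

for step in range(1000):
    sensory_input = rng.normal(0, 0.1, N)    # stand-in for "inputs analogous to senses"
    spikes = (v > threshold).astype(float)   # which neurons fire this step
    v = decay * (v * (1 - spikes)) + W @ spikes + sensory_input  # leak, reset, feedback
    if step % 200 == 0:
        print(f"step {step}: {int(spikes.sum())} neurons fired")  # crude "output"
```

Whether scaling something like this up would ever yield sentience is exactly the open question; the point is only that nothing in the simulation step itself is mysterious.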

The only reason it couldn't be is for there to be something that consciousness has that is immaterial, nonphysical. And I don't believe that souls exist.
 
I quite agree. Either it is possible for a computer to be sentient at some point, or there is such a thing as souls.

I also agree that current AI software is not yet sentient.

Hans
 
Even the idea that consciousness is somehow already "out there", and interacts with sufficiently complex brains via 'quantum', doesn't preclude conscious AIs, as it could presumably 'quantum' with them just as easily. Actually, now I come to think of it, that might apply to individual souls too.
 
But what is it? And why can whatever-it-is not be implemented on a non-biological substrate?
For the fourth time now, once we understand how consciousness works in a biological being we can try to build one in a non-biological being/machine.

I wish you all would quit claiming I said something I didn't.

[snip]
The only reason it couldn't be is for there to be something that consciousness has that is immaterial, nonphysical. And I don't believe that souls exist.
The only reason? No, just no, you have presented a false dichotomy.

There are some very good links in this thread for anyone interested in learning where the science is in regard to brain research. Throwing one's hands up and saying consciousness either has to be non-physical or it can't be explained tells me a person didn't take the time to read any of the sources.

Brain research has come a long way toward understanding consciousness. It's not there yet so don't expect a ready answer to how it works. People in this thread need to catch up to where the research is at. It can't be explained in 25 words or less.

I can say with confidence what consciousness is not. It is not about having a bigger computer with clever programs and massive data processing. No AI program is going to develop consciousness or become sentient until those things are understood in the biological brain and an AI program is then developed to function in a similar way.
 
I quite agree. Either it is possible for a computer to be sentient at some point, or there is such a thing as souls.....

Hans
That's not necessarily true either. Suppose that when we discover how consciousness works in the brain - the actual physical process - it turns out we can't build an artificial brain. We might be able to, but the first step is to understand how biological consciousness works.

We won't know until we understand what is happening in biological brains.

But needing a soul is a concept that's as old as the days when people believed humans weren't animals. Philosophers might still be talking about the mind, but biologists aren't.
 
That's not necessarily true either. Suppose that when we discover how consciousness works in the brain - the actual physical process - it turns out we can't build an artificial brain. We might be able to, but the first step is to understand how biological consciousness works.

We won't know until we understand what is happening in biological brains.

But needing a soul is a concept that's as old as the days when people believed humans weren't animals. Philosophers might still be talking about the mind, but biologists aren't.

Wouldn't need to, we could model it in a computer.
 
...The only reason? No, just no, you have presented a false dichotomy.
If there is some other reason why a sufficiently complex artificial system could not be conscious, please elucidate.

...I can say with confidence what consciousness is not. It is not about having a bigger computer with clever programs and massive data processing. No AI program is going to develop consciousness or become sentient until those things are understood in the biological brain and an AI program is then developed to function in a similar way.
Why? Why could we not develop an AI to function in a different way, that we could still call conscious? After all, if there is one thing we can say about artificial consciousness, it is that it will function differently to the way a biological consciousness functions.
 
If there is some other reason why a sufficiently complex artificial system could not be conscious, please elucidate.
Gawd would you read my posts and shut up with this lie about my POV! Or at least don't quote me or ask me to elucidate what I already have.

Let me make it 5 times. Please read carefully:

For the fourth [5th] time now, once we understand how consciousness works in a biological being we can try to build one in a non-biological being/machine.

How hard is that concept?

Why? Why could we not develop an AI to function in a different way, that we could still call conscious? After all, if there is one thing we can say about artificial consciousness, it is that it will function differently to the way a biological consciousness functions.
Fine, other than sci-fi you got a viable model?

Is this something we should look for a century or 2 from now?

I'm pretty sure we'll figure out biological consciousness long before that.

The problem with your (and others') sci-fi version of conscious or sentient AI programs is this: upon what is it based? More clever and expansive AI programs can't get from here to there unless you have some plan for how that could happen.
 
Gawd would you read my posts and shut up with this lie about my POV! Or at least don't quote me or ask me to elucidate what I already have.

Let me make it 5 times. Please read carefully:

For the fourth [5th] time now, once we understand how consciousness works in a biological being we can try to build one in a non-biological being/machine.

How hard is that concept?

Fine, other than sci-fi you got a viable model?

Is this something we should look for a century or 2 from now?

I'm pretty sure we'll figure out biological consciousness long before that.

The problem with your (and others') sci-fi version of conscious or sentient AI programs is this: upon what is it based? More clever and expansive AI programs can't get from here to there unless you have some plan for how that could happen.
Evolution had no planned path but we claim we are sentient as a result of evolution. Can't see why sentience couldn't again occur as an unintended consequence of us trying to create better AI.
 
Gawd would you read my posts and shut up with this lie about my POV! Or at least don't quote me or ask me to elucidate what I already have.

Let me make it 5 times. Please read carefully:

For the fourth [5th] time now, once we understand how consciousness works in a biological being we can try to build one in a non-biological being/machine.

How hard is that concept?
I know what you're saying. I've read it 5 times. I'm asking you a different question! You're not answering the question I am actually asking; you're repeatedly answering a question that I am not asking. If you answer "nothing - there is nothing that makes a non-biological consciousness impossible" then I will accept that.

Fine, other than sci-fi you got a viable model?
Nope. But that doesn't mean that one will never exist.

Is this something we should look for a century or 2 from now?

I'm pretty sure we'll figure out biological consciousness long before that.
I think it'll come sooner than anyone expects.

The problem with your (and others') sci-fi version of conscious or sentient AI programs is this: upon what is it based? More clever and expansive AI programs can't get from here to there unless you have some plan for how that could happen.
So-called modern "artificial intelligence" applications get to places and we have no idea how they got there. The inner workings of an AI are effectively a black box: we give them inputs and training data, and we specify the procedure by which they learn from that data, but we do not understand how the trained system produces its results.
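
To illustrate that point (a generic sketch, not anyone's specific system; assuming scikit-learn is installed):

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# We choose the inputs, the training data, and the training procedure...
X, y = load_digits(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)

# ...but what it learned is thousands of opaque numbers nobody hand-wrote.
n_weights = sum(w.size for w in clf.coefs_)
print(f"training accuracy: {clf.score(X, y):.3f}")
print(f"learned parameters: {n_weights}")
```

Inspecting those weights tells you almost nothing about why a particular digit gets a particular label, which is the "black box" complaint in miniature.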

I fully expect that when a conscious AI emerges, we will have no idea how it happened. But I'm equally sure that it will bear little resemblance to a biological brain, which is why understanding biological consciousness isn't relevant.
 
Evolution had no planned path but we claim we are sentient as a result of evolution. Can't see why sentience couldn't again occur as an unintended consequence of us trying to create better AI.
Oh, it will definitely be an intended consequence. I just think that nobody will understand how it happened.
 
Watching and being able to play with - sorry, research - these generative AIs is great fun; sorry, very serious work.

One of the things we discussed earlier in the thread is how the image AIs had trouble with certain aspects of image creation; a major one is their inability to create realistic and anatomically accurate fingers and hands.

This is the result from the latest Stable Diffusion XL model:

[image: Stable Diffusion XL output]

Much better than previous models but clearly still very wrong. But MS just made OpenAI's DALL·E 3 available via their Bing Create and wow!

[image: DALL·E 3 output]

Hands and fingers seem to have been fixed. Going to play, sorry, do some more research into this.
 
A slightly smaller "wow". It is a lot better than any of the other generally available GAIs but still a long way from being perfect. Can still quite easily get it to show its weaknesses:

[image: DALL·E 3 output showing remaining weaknesses]

Another area it does seem to have got right is including text in an image - you can tell it to create 'A post-it note saying "Hey I can create images with text!"' and it now better interprets what that prompt would mean to a human.

Old:

[image: previous model's garbled attempt at rendering the text]

New:

[image: DALL·E 3 output with legible text]
 
Evolution had no planned path but we claim we are sentient as a result of evolution. Can't see why sentience couldn't again occur as an unintended consequence of us trying to create better AI.

We are the sentient beings for any AI program. So where are the mutation and selection pressures going to come from?

And I asked before, how long? Couple thousand years?

The concept sounds good until you look at the details.
 
We are the sentient beings for any AI program. So where are the mutation and selection pressures going to come from?

And I asked before, how long? Couple thousand years?

The concept sounds good until you look at the details.

Selection is us, selecting for "better" AI; mutation, or unplanned change, in this case is the self-learning and the data they use.
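
That loop is easy to sketch. Below is a toy evolutionary loop in Python; everything in it (the genome, the fitness function standing in for us preferring "better" AI, the mutation rate) is an illustrative assumption, not a claim about how real AI development works:

```python
import random

random.seed(0)
TARGET = [1] * 20                      # stand-in for the traits we select for

def fitness(genome):                   # "selection is us": we score the candidates
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):          # unplanned change: self-learning / new data
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"generation {generation}: target traits fully selected")
        break
    survivors = population[:10]        # keep the "better" candidates
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]
```

Nothing in that loop plans a path; the combination of variation and selection finds one anyway, which is the parallel being drawn with evolution.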

How long? Probably a few weeks after we start AIs creating new AIs and give the AIs enough computational time and storage.

If we are talking about meeting the minimal threshold for the classical definition of sentience, i.e. the "ability to experience feelings and sensations", then, as others have said, we are almost at the start of that process.

We certainly have proof that many capabilities we thought required sentience can be delivered without it, such as the outputs of the LLMs.

Which doesn't mean they are not required for human sentience. I don't think we are currently on track to replicate human sentience - that isn't the goal - but we already know sentience is not limited to humans, so any sentience that may be created is unlikely to be the same as a human's.

My dog is sentient but does not have human sentience; I can't see why, in principle, a future AI may not also be sentient and yet not have human sentience.
 
What did you tell it to generate these pictures?

Started trying to be clever but fell back to "simpler the better", so the last ones are "close-up of a hand with fingers crossed photorealistic".

ETA: Had to go back to check for the apple sketches - they were "close-up of an old person's hand holding an apple, black and white sketch".

ETA 2: I used the text "old person's hand" as I've noticed that possessives aren't well interpreted.
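
For anyone who wants to try these prompts locally rather than through Bing, here is a minimal sketch using Hugging Face's diffusers library with the published Stable Diffusion XL base weights (the model id is the publicly released one; an NVIDIA GPU with enough VRAM is assumed):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base checkpoint from the Hugging Face Hub
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")  # assumes an NVIDIA GPU

# Same "simpler the better" prompt style discussed above
prompt = "close-up of a hand with fingers crossed photorealistic"
image = pipe(prompt=prompt).images[0]
image.save("hands_test.png")
```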
 
This discussion brings up related questions. Is it possible to be more sentient than a human? (I think so. I think there's a range of human sentience.)

If we encountered superior sentience artificially or in aliens, would we recognize it? (I think probably yes. ETA: And I think it would cause us existential angst.)

What would it look like? (I imagine similarities with Buddhist enlightenment -- a deeper self-awareness and mastery of thoughts, perhaps.)
 