
ChatGPT

Several of the generative-art AI companies now let you create your own model to use when generating new art.

I thought I'd try it with my cat, Sky - as a typical cat owner I have lots of photos of her. I uploaded 48 images, all of them with the cat in them but with all sorts of backgrounds, focal lengths and so on. After uploading, training took about 15 minutes and the model was ready.
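For anyone curious what that workflow amounts to in code, here's a rough sketch in Python. I don't have the service's actual SDK to hand, so the genart module, Client class and method names below are made-up placeholders - the point is just the upload, fine-tune, then prompt loop, not any particular vendor's API.

[code]
from pathlib import Path

import genart  # hypothetical client library, standing in for the real SDK

client = genart.Client(api_key="...")  # assumed auth scheme

# 1. Collect the training photos: 48 images of the cat, varied backgrounds.
photos = sorted(Path("sky_photos").glob("*.jpg"))

# 2. Upload them and kick off fine-tuning of a personal model.
#    (Training took roughly 15 minutes for the model described above.)
job = client.fine_tune(
    images=[p.read_bytes() for p in photos],
    subject_name="Sky",        # the token that prompts will refer to
    base_model="default-v1",   # hypothetical base-model id
)
model = job.wait_until_done()

# 3. Generate new images with the freshly trained model.
for i, img in enumerate(model.generate(prompt="a picture of Sky in a box", n=4)):
    Path(f"sky_in_a_box_{i}.png").write_bytes(img)
[/code]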

It is rather spooky: I asked for "a picture of Sky in a box", and these are not bad. I can tell they aren't of my cat for a few reasons,* but they have captured her likeness and face very well. I am sure that if I sent these to friends or family who know her, they would think it was her.

[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_16511872a10761.jpg[/qimg]

As ever, not all attempts turn out quite as "good":

[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_16511893fb6fb2.jpg[/qimg]

Some really goofy "hallucinations". Interestingly it seems to have picked up on one of her idiosyncrasies, which is that she usually - especially when in the cottage loaf position - sticks out her right paw and leg.


*The reasons are: the fur is rather too thick, the body shape is a bit chunky, and finally she has a birthmark on her pink nose that isn't in any of the photos.

AI has a sick sense of humor, having that orange cat kill your lovely Sky.
 
Not for me.
Do you recognize that you know something (regardless of your confidence in knowing it) or are you just regurgitating what your program has led you to regurgitate?


There is a feeling aspect to all facets of human existence.

How do you feel this correlates to AI?
That's the point.

No matter what that AI program is doing, it isn't sentient.
 
Do you recognize that you know something (regardless of your confidence in knowing it) or are you just regurgitating what your program has led you to regurgitate?

If you ask me what my mother's name is, I respond with her name. I have no "feeling" of cognition; the name is just there for me to say. Even that doesn't describe it accurately, because there is no separation between "deciding" to say the name and saying it - the name and the speech are simply one reaction. It goes further: I might respond with "why do you want to know?", and the speech part of that has no feeling attached to it either.

I've really paid attention to how I seem to do things (internally) since I learnt I was aphantasic and I no longer assume that we all experience the same "inner world".

You may indeed have a "feeling" when you reply with a name you know, but don't assume that is the same for everyone.

That's the point.

No matter what that AI program is doing, it isn't sentient.

I for one am not arguing that the current generative "AIs" are sentient in the way humans are, nor do I think we are running an LLM in our brains that is directly comparable to the AIs. What I do think is that they should be making us question what we think we are and how our sentience works. I think they may be showing that much of what we consider part of our "sentience" actually isn't.
 
If you ask me what my mother's name is, I respond with her name. I have no "feeling" of cognition; the name is just there for me to say. Even that doesn't describe it accurately, because there is no separation between "deciding" to say the name and saying it - the name and the speech are simply one reaction. It goes further: I might respond with "why do you want to know?", and the speech part of that has no feeling attached to it either.

I've really paid attention to how I seem to do things (internally) since I learnt I was aphantasic and I no longer assume that we all experience the same "inner world".

You may indeed have a "feeling" when you reply with a name you know, but don't assume that is the same for everyone. ...

I for one am not arguing that the current generative "AIs" are sentient in the way humans are, nor do I think we are running an LLM in our brains that is directly comparable to the AIs. What I do think is that they should be making us question what we think we are and how our sentience works. I think they may be showing that much of what we consider part of our "sentience" actually isn't.
We are defining "feeling" differently and it's my bad for oversimplifying the terminology.

We seem to agree AI is not sentient and sentient brains are not computers.

IMO, based on what I know about brains and research on consciousness (which is a fair amount), no matter how much data computers can process, it will not lead to sentience.

Answering Skeptical Greg's question - "What is 'knowing' besides being able to repeat something you have read, seen or heard?" - can't easily be done without an understanding of some common terminology.

I tried to use the term 'feeling' (the feeling that one knows something) to answer that question, and you picked out 'feeling' and applied a different meaning to it.

I'm not really interested in a 20-page debate about the meaning of terminology. So I repeat my basic premise: no amount of regurgitating information that AI programs might do, including mimicking any and all sorts of things human brains can do, will lead to a computer program becoming sentient.

I have a similar issue when researchers try to investigate consciousness and call it 'the mind'. Consciousness may not be pinned down yet but it most definitely is a biological process.

Until we understand that biological process it's not going to be imparted into an AI program by making the program more and more clever.
 
Not sure if anyone here has an answer for this:

Later generations of Chat-GPT need more processing power - I assume with diminishing returns. So could a basic version conceivably run completely locally on a PC, or even a phone?

And would it be possible for a highly skilled and motivated actor, such as North Korea, to steal a copy of Chat-GPT?
 
Not sure if anyone here has an answer for this:

Later generations of Chat-GPT need more processing power - I assume with diminishing returns. So could a basic version conceivably run completely locally on a PC, or even a phone?

You can run ChatGPT 4 locally on pretty much any recent(ish) PC. There are versions for your phone as well. ETA: removed the link as that one is no longer available

And would it be possible for a highly skilled and motivated actor, such as North Korea, to steal a copy of Chat-GPT?

It would be standard espionage so I would be incredibly surprised if many nation states haven't nicked a copy or two!
 
You can run ChatGPT 4 locally on pretty much any recent(ish) PC. There are versions for your phone as well. ETA: removed the link as that one is no longer available



It would be standard espionage so I would be incredibly surprised if many nation states haven't nicked a copy or two!

How? The model has 1.7 trillion parameters. That's over 800 gigabytes even if you use 4 bits per parameter (which is sometimes done in smaller LLMs). I have some experience with 2 GB LLMs, and they can do maybe 10 words per second - but that's only because they fit inside GPU VRAM (which is up to 16 GB on common consumer GPUs).
How do you run an 800 GB model locally, let alone on a phone? I mean, I have no idea how they run them in the cloud either. At best I would expect 1 word per minute on a high-end GPU.
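To make the arithmetic explicit - a back-of-the-envelope sketch only, and the 1.7 trillion parameter count is itself an unconfirmed estimate - here is the raw weight storage at different quantisation levels:

[code]
def model_size_gb(params: float, bits_per_param: int) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes), ignoring
    activations, KV cache and other runtime overhead."""
    return params * bits_per_param / 8 / 1e9

rumoured_gpt4 = 1.7e12  # the 1.7T figure is an unconfirmed estimate
small_local = 7e9       # a typical "runs on a laptop" model size

for bits in (16, 8, 4):
    print(f"{bits}-bit: GPT-4-sized ~{model_size_gb(rumoured_gpt4, bits):,.0f} GB, "
          f"7B model ~{model_size_gb(small_local, bits):.1f} GB")

# At 4 bits that is ~850 GB for the GPT-4-sized model vs ~3.5 GB for a
# 7B model; that is why the 7B class fits in consumer GPU VRAM and a
# 1.7-trillion-parameter model does not.
[/code]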

But btw, there are no diminishing returns with size just yet. That's one of the things GPT-4 showed: all metrics improved in a roughly linear fashion with size compared to GPT-3.
 
So I repeat my basic premise: no amount of regurgitating information that AI programs might do, including mimicking any and all sorts of things human brains can do, will lead to a computer program becoming sentient.

Yet.

Not accusing you of being a “distinguished but elderly scientist”, but I’m reminded of this by Arthur C. Clarke: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

I have a similar issue when researchers try to investigate consciousness and call it 'the mind'. Consciousness may not be pinned down yet but it most definitely is a biological process.

Until we understand that biological process it's not going to be imparted into an AI program by making the program more and more clever.

I understand what you’re saying, but it seems to imply that there’s some special juju about biological systems. While biological systems are remarkably complex, there is still physics underlying everything they do or feel. I see no fundamental reason why a sufficiently complex silicon based system might not give rise to emergent properties - including consciousness. The trick may be recognizing when that point is reached - that the consciousness is real and not simple mimicry.
 
In the past, what was supposed to make a program sentient, or at least intelligent, was its ability to train, or evolve, itself against copies of itself.
But models like Chat-GPT need high-quality data to improve - training these models on their own output leads to crap.

So I think we can safely assume that this is not the way to get any truly intelligent program.
 
That's a given. It won't happen though until we understand the mechanism and then develop and implement said mechanism in an AI program.

Not accusing you of being a “distinguished but elderly scientist”, but I’m reminded of this by Arthur C. Clarke: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
Good that you're not accusing me, because I don't hold the position that it can't happen.

I understand what you’re saying, but it seems to imply that there’s some special juju about biological systems. While biological systems are remarkably complex, there is still physics underlying everything they do or feel. I see no fundamental reason why a sufficiently complex silicon based system might not give rise to emergent properties - including consciousness. The trick may be recognizing when that point is reached - that the consciousness is real and not simple mimicry.
It's not juju.

Currently we know some interesting things that the brain does inside and outside of consciousness. We know what happens when specific parts of the brain are damaged. Some of those deficits can be redeveloped by our plastic brains and so far as we know some can't.

By looking at damaged brains, certain structures have been identified as being responsible for specific things like consciousness.

Where Does Consciousness Reside in the Brain? New Discovery Helps Pinpoint Its Location
Science may be getting closer to figuring out where consciousness resides in the brain. New research demonstrates the significance of certain kinds of neural connections in identifying consciousness.

Jun Kitazono, a corresponding author of the study and project researcher at the Department of General Systems Studies at the University of Tokyo, conducted the study, which was published in the journal Cerebral Cortex. ...

“Although we have not reached a conclusive answer, much empirical evidence has been accumulated in the course of searching for the minimal mechanisms sufficient for conscious experience, or the neural correlates of consciousness.”


This paper, while only one study, is nonetheless an example of what I'm talking about.
The cerebral cortex, located on the surface of the brain, contains sensory areas, motor areas and association areas that are thought to be essential to consciousness experience. The thalamus, located in the middle of the brain, has likewise been thought to be related to consciousness, and in particular, the interaction between the thalamus and cortical regions, called the thalamocortical loop, is considered important for consciousness. These results support the idea that the bi-directionality in the brain network is a key to identifying the place of consciousness.

Computer programs may be able to replicate how the brain does certain things, like the bidirectional flow of data, but at what point will they 'awaken' and actually be sentient, as opposed to just processing information in a way that mimics how a brain processes data?

When we understand how our brains differ from a computer process then an AI program might be able to make the switch, so to speak.
 
Do you recognize that you know something (regardless of your confidence in knowing it) or are you just regurgitating what your program has led you to regurgitate?


That's the point.

No matter what that AI program is doing, it isn't sentient.
Gotta define sentience before you can exclude it.
ChatGPT isn't AI, but an actual AI?
Gotta define it before you can exclude it.
 
Yeah, I'm afraid one issue is vague definition of the words as well .. even things like "knowing", and "understanding". I'm straight out giving up on "sentience" and such. It holds no meaning to me.

Anyway I hope quantum computing won't be necessary. But it's too soon to tell.
 
Until we understand that biological process it's not going to be imparted into an AI program by making the program more and more clever.

I’ve been considering this.

Consciousness arose through countless tiny evolutionary steps, with no one or thing needing to “understand” the biological process involved. It just happened. I see no reason that consciousness could not arise via tiny computational steps in a computer program, with no programmer needing to “understand” the “computational process” either. That’s kind of the thing with emergent properties - they can just pop up with no “understanding” of what led to them, nor needed as a prerequisite for their creation.
 
Yeah, I'm afraid one issue is vague definition of the words as well .. even things like "knowing", and "understanding". I'm straight out giving up on "sentience" and such. It holds no meaning to me.

Anyway I hope quantum computing won't be necessary. But it's too soon to tell.
Applying anthropomorphic terms to AI programs doesn't make them correctly applied. It's faulty on its face.


As for defining 'sentient', @p0lka, be my guest. Don't expect me to take up your challenge.
 
I’ve been considering this.

Consciousness arose through countless tiny evolutionary steps, with no one or thing needing to “understand” the biological process involved. It just happened. I see no reason that consciousness could not arise via tiny computational steps in a computer program, with no programmer needing to “understand” the “computational process” either. That’s kind of the thing with emergent properties - they can just pop up with no “understanding” of what led to them, nor needed as a prerequisite for their creation.

In a couple billion years? Or even a couple million?

No matter how well any AI program works, it's still a facade of being sentient. Do you expect that, with enough tweaking, the programs are going to suddenly cross the barrier between a data program and a sentient being?

Think a robot might take off into the woods [or anyplace] one day and reject its human overlords? That's what science fiction writers put in people's heads. It's hard to see that ever being reality unless, as I mentioned, we understand consciousness in the human brain and begin to specifically design AI programs to function similarly.

But it still may turn out that a biological aspect is a critical piece. In that case, people are designing lab-grown brains.
 
In a couple billion years? Or even a couple million?

No matter how well any AI program works, it's still a facade of being sentient. Do you expect that, with enough tweaking, the programs are going to suddenly cross the barrier between a data program and a sentient being?

Think a robot might take off into the woods [or anyplace] one day and reject its human overlords? That's what science fiction writers put in people's heads. It's hard to see that ever being reality unless, as I mentioned, we understand consciousness in the human brain and begin to specifically design AI programs to function similarly.

But it still may turn out that a biological aspect is a critical piece. In that case, people are designing lab-grown brains.


There's not even a facade of sentience - at least not in the design. LLMs are statistical tools; the chat assistants derived from them are just chatbots. They do nothing unless we ask, and they don't learn while we're doing it.

The AIs in self-driving cars are much more like "beings": they do something all the time, they have goals, and they are trying to achieve them. But they make only a very limited set of decisions, and their goals are simple. And they also do not learn while doing so.

We need AI that has its own agency but is also able to learn on the go. Basically no AI application does that, because it could learn something we didn't want - it would be unpredictable. But technically, imho, it's not impossible.

An LLM could be part of it, since an LLM is a way to understand text, and an AI that uses text as input and output is easier to understand and would be relatively harmless, unlike, say, a self-driving car.

I think we will have something like that very soon, even if just for academic purposes. It seems to me like a natural application of LLMs.
 
There's not even a facade of sentience - at least not in the design. LLMs are statistical tools; the chat assistants derived from them are just chatbots. They do nothing unless we ask, and they don't learn while we're doing it.

That is by far the biggest nail in the coffin of whether LLMs are "sentient". Philosophical sophistry about "what is the difference between real and simulated" aside, the barest fact of the matter is that a program like ChatGPT starts working when you ask it a question, and once it produces an answer, it stops... period. It stops doing anything. It doesn't think, it doesn't dream. It doesn't wonder what you're going to ask it next. If you were running it in a console, there would be no output, just a blinking cursor waiting for a prompt.
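To make that concrete, here's roughly what "running it in a console" looks like - a minimal sketch using the OpenAI Python SDK; the exact method names can differ between SDK versions, and the model name is just an example. All the work happens inside the single blocking API call; between calls the program is doing nothing but sitting at input().

[code]
from openai import OpenAI  # current OpenAI Python SDK; call names may differ by version

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []

while True:
    prompt = input("> ")  # the blinking cursor: nothing happens until you type
    history.append({"role": "user", "content": prompt})

    # The model only does any work for the duration of this one blocking call.
    reply = client.chat.completions.create(
        model="gpt-4o",   # example model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
    # ...and then it is back to waiting. No background thought, no state
    # changes, nothing learned; the weights stay frozen between calls.
[/code]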
 
