
ChatGPT

You are taking it too philosophically. AI is anything that is artificial and intelligent, to ANY extent. It is also a branch of computer science. It is NOT some level of intelligence or some goalpost that might or might not be reached.
When automated OCR was introduced, it was considered an application of AI. Simple scripts that control characters in games are called AIs, even if the character is just a square that moves there and back again. Here it's more like "something in place of an intelligence" than actual intelligence, but it's called that nonetheless.
LLMs obviously are AIs. Are they as smart as humans? Irrelevant.
Basically, anything that does something is intelligent. The only question is how much.
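To make that concrete, a "game AI" really can be as trivial as a square that patrols back and forth. A minimal sketch (Python, purely illustrative; the class name and the map edges are made up):

Code:
# A "game AI" in the most trivial sense: a square patrolling between two edges.
class PatrolAI:
    def __init__(self, left=0, right=10):
        self.x, self.left, self.right, self.dx = left, left, right, 1

    def update(self):
        self.x += self.dx
        if self.x in (self.left, self.right):   # reached an edge: turn around
            self.dx = -self.dx
        return self.x

ai = PatrolAI()
print([ai.update() for _ in range(25)])   # walks to 10, back to 0, and out again

Nobody would call that intelligent in the human sense, yet in game development it is routinely called "the AI".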
 
I would argue that within this context, we do have a reasonably good definition of intelligence. A being is considered intelligent if it is both sapient and sentient. LLMs are neither of those, although they can sometimes appear to mimic them.
And the definitions of “sapient and sentient” are robust?

I am asking because I have followed these discussions for many years, and it seems that new definitions are made ad hoc according to the desired result.

Just a couple of weeks ago I read an article in my daily newspaper about an Italian philosopher who claimed that plants were sentient, and probably more intelligent than we are, because they do longer term planning. His definition of sentience was conspicuously absent (well, I won’t claim he doesn’t have one, but the journalist who wrote the article certainly didn’t think of asking for it).
 
We can't explain sapience because we haven't found all the puppet strings yet. We know that AI isn't sapient because we know all about its puppet strings, and none of them explain human sapience.

Not to mention the fact that no one has created an actual "p-zombie" AI yet, so it really is a bit like arguing whether a wooden puppet is sapient.
 
We can't explain sapience because we haven't found all the puppet strings yet. We know that AI isn't sapient because we know all about its puppet strings, and none of them explain human sapience.

The greatest sages of human history are figures like Socrates and Lao Tzu, and they are given that status mostly because they admit they don't know what they're talking about.

LLMs seem genuinely more sage-like than anyone you'll meet in a day.

Not to mention the fact that no one has created an actual "p-zombie" AI yet, so it really is a bit like arguing whether a wooden puppet is sapient.

A p-zombie is impossible. That's the point of it.
 
The greatest sages of human history are figures like Socrates and Lao Tzu, and they are given that status mostly because they admit they don't know what they're talking about.

And the worst sages are those who repeat meaningless clichés from more than two millennia ago.

LLMs seem genuinely more sage-like than anyone you'll meet in a day.

It's a puppet, and we can find all the puppet strings, and none of the puppet strings explain sapience. I can make a wooden puppet act like a person, but hiding my manipulations as a magic trick doesn't magically make it sapient.

A p-zombie is impossible. That's the point of it.

If we create an AI that is indistinguishable from a person when its inner workings are hidden, but its inner workings don't explain conscious experience, we'll have created the equivalent of a p-zombie.

If a p-zombie is impossible, then we will always know that an AI isn't sapient because it can't even trick us into thinking it's conscious.
 
And the definitions of “sapient and sentient” are robust?

I am asking because I have followed these discussions for many years, and it seems that new definitions are made ad hoc according to the desired result.

Just a couple of weeks ago I read an article in my daily newspaper about an Italian philosopher who claimed that plants were sentient, and probably more intelligent than we are, because they do longer term planning. His definition of sentience was conspicuously absent (well, I won’t claim he doesn’t have one, but the journalist who wrote the article certainly didn’t think of asking for it).

I don't think they're any more or less robust than the definition of "tall". I also, however, don't think they're useless.
 
How about when they get to the point where they mimic them perfectly? That's kind of what the Turing Test was hinting at.

At that point, I think we can say that they arrive at the same result, albeit via different routes.

If LLMs get to a point where they can mimic extrapolative thinking, problem solving, and innovation perfectly... I think they would have to be considered intelligent.

I don't think that extrapolative thinking can really be mimicked.
 
It's a puppet, and we can find all the puppet strings, and none of the puppet strings explain sapience.

Its "puppet strings" are basically just its ability to repeat things it has heard.

Which makes it pretty much human. Not a particularly curious or creative one, but there are plenty of people like that.

And since it can read more, and remember it more accurately, it seems pretty obvious it's truly intelligent, and even superior in some ways.


If we create an AI that is indistinguishable from a person when its inner workings are hidden, but its inner workings don't explain conscious experience, we'll have created the equivalent of a p-zombie.

If a p-zombie is impossible, then we will always know that an AI isn't sapient because it can't even trick us into thinking it's conscious.

The p-zombie is a tool to think about the Hard Problem of Consciousness.

The problem is, how does a sense of being arise from matter? If the universe were simply and purely matter at the ground level, why don't people just go around doing what they do, but without the subjective experience of it?

Why do we have this sense of what it is like to "be"?

How does matter give rise to being?

The answer, at least to me, is that it doesn't. Instead, matter arises from being.

The universe, Nature, reality, existence, whatever you want to call it, is fundamentally "being".

Being contains models of itself, and "matter", as we're familiar with it, exists in the contents of those models. Sagan said we are how the universe experiences itself. If consciousness has to do with Being having a model of itself, then there's no reason to suppose we're going to find something in the model that explains being.

In which case, not only are AIs (not just LLMs) truly intelligent, they are already conscious, and that consciousness could be objectively measured:

https://en.wikipedia.org/wiki/Integrated_information_theory
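
To give a flavour of what "objectively measured" could even mean here, below is a toy whole-versus-parts information measure for a two-node system, loosely in the spirit of integrated information. To be clear, this is my own minimal sketch, nowhere near the actual Φ of IIT; the dynamics and the partition into single nodes are arbitrary choices:

Code:
import numpy as np
from itertools import product

def mutual_information(joint):
    """Mutual information in bits from a joint distribution p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Toy dynamics: two binary nodes updated in lockstep.
# Node 0's next state is the XOR of both nodes; node 1 keeps its value.
def step(state):
    a, b = state
    return (a ^ b, b)

states = list(product([0, 1], repeat=2))   # all 4 joint states
idx = {s: i for i, s in enumerate(states)}

# Joint distribution p(past, present), assuming a uniform past.
joint_whole = np.zeros((4, 4))
for s in states:
    joint_whole[idx[s], idx[step(s)]] = 0.25
info_whole = mutual_information(joint_whole)   # how much the whole past constrains the present

# The same quantity for each node taken in isolation (the "parts").
info_parts = 0.0
for node in (0, 1):
    joint_part = np.zeros((2, 2))
    for s in states:
        joint_part[s[node], step(s)[node]] += 0.25
    info_parts += mutual_information(joint_part)

# "Integration": information the whole carries beyond the sum of its parts.
print(f"whole = {info_whole:.2f} bits, parts = {info_parts:.2f} bits, "
      f"integration = {info_whole - info_parts:.2f} bits")
# prints: whole = 2.00 bits, parts = 1.00 bits, integration = 1.00 bits

Whether a number like that really tracks consciousness is exactly what's in dispute, but it is at least something you can compute.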
 
If LLMs get to a point where they can mimic extrapolative thinking, problem solving, and innovation perfectly... I think they would have to be considered intelligent.

Most humans would fail that test, and in reality, problem solving and innovation depend a lot on chance and accident, and sometimes on pure rebellion, simply throwing out assumptions that other humans have rigidly held on to.
 
If we create an AI that is indistinguishable from a person when its inner workings are hidden, but its inner workings don't explain conscious experience, we'll have created the equivalent of a p-zombie.


That assumes we have the ability to understand an explanation for consciousness from observing the inner workings of a conscious being. That implies that such an understanding must already exist for human brains. Great news! So, how do the inner workings of human brains produce consciousness?
 
That assumes we have the ability to understand an explanation for consciousness from observing the inner workings of a conscious being. That implies that such an understanding must already exist for human brains. Great news! So, how do the inner workings of human brains produce consciousness?

I wasn't aware that neuroscience had already figured out all the inner workings of the human brain. That IS great news! No? Try again?

On the other hand, we know all the inner workings of an AI, because we know exactly how it does what it does. Because we made it.
 
On the other hand, we know all the inner workings of an AI, because we know exactly how it does what it does. Because we made it.


We have very little idea how it does what it does. All we know is how it was made, which is not the same thing.
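
To make the distinction concrete, here is a minimal sketch (a toy network, nothing like a real LLM) where the construction is written out in full, yet the "how it does it" ends up living entirely in learned numbers that the code never explains:

Code:
import numpy as np

# We specify the architecture, the data, and the training procedure completely...
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 8))   # random starting weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                        # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = sigmoid(h @ W2 + b2)                  # output
    g_out = (p - y) / len(X)                  # cross-entropy gradient at the output
    g_hid = (g_out @ W2.T) * (1 - h ** 2)     # backpropagated through tanh
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_hid
    b1 -= 0.5 * g_hid.sum(axis=0)

print(np.round(p, 3))   # ...and it typically ends up computing XOR,
print(W1)               # but these numbers are what actually does it.

We wrote every line, so in one sense we "know exactly how it works". But nothing in the code tells you what any particular weight means once training is done, and that gap, scaled up to billions of weights, is the whole point.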
 
I'm sorry, did I wake up in an early 20th century science fiction story?

AI developers know exactly how their technology does what it does. The alternative would be magic.
 
