
ChatGPT

Apparently, finding words that usually occur in the same context isn't quite the same as understanding the meaning of those words:

Indeed; regardless of how coherent its output seems, ChatGPT does not in fact understand the definition of even a single word. It converts text inputs into mathematical inputs and does a calculation, then converts the answer back to text. That's literally it.
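To make that conversion step concrete: here is a minimal sketch, assuming the tiktoken tokenizer library. The model itself only ever operates on the integer IDs, never on the words.

import tiktoken

# "cl100k_base" is the published encoding used by the GPT-3.5/GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("male is the opposite of female")
print(ids)              # a list of integers - this is all the model ever "sees"
print(enc.decode(ids))  # and the integers can be converted back into text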
 
Indeed; regardless of how coherent its output seems, ChatGPT does not in fact understand the definition of even a single word. It converts text inputs into mathematical inputs and does a calculation, then converts the answer back to text. That's literally it.

I'd argue with that. It does understand relations between words. It knows male is the opposite of female. It doesn't know how they look. It doesn't know how they sound. But it knows a lot about those terms .. in terms of other terms, which it understands in exactly the same way.
But what really is understanding for people? Isn't it the same? You can just relate one thing to other things. We have the extra senses, so we can better "root" it in reality .. but that's it. IMHO.
 
I'd argue with that. It does understand relations between words. It knows male is the opposite of female. It doesn't know how they look. It doesn't know how they sound. But it knows a lot about those terms .. in terms of other terms, which it understands in exactly the same way.

I disagree.

As far as I've been able to tell through research, the only relations between words that ChatGPT understands are statistical ones. It does not actually know that male is the opposite of female, for instance.
 
I disagree.

As far as I've been able to tell through research, the only relations between words that ChatGPT understands are statistical ones. It does not actually know that male is the opposite of female, for instance.

Statistical relations based on tons of text. And the text contains the information. What makes you believe ChatGPT doesn't know male is the opposite of female? Much simpler models, like the word-embedding models used to convert text to vectors, can grasp such simple relations.
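For what it's worth, a minimal sketch of that claim, assuming the gensim library and its downloadable pre-trained GloVe vectors (an illustration only, not proof that any of this amounts to understanding):

import gensim.downloader as api

# Pre-trained GloVe word vectors, learned purely from co-occurrence statistics
vectors = api.load("glove-wiki-gigaword-100")

# The classic analogy test: king - man + woman ≈ queen
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# And the cosine similarity between the "male" and "female" vectors
print(vectors.similarity("male", "female"))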
 
Statistical relations based on tons of text. And the text contains the information. What makes you believe ChatGPT doesn't know male is the opposite of female?

As I've explained, because of what I know about how ChatGPT works.

ChatGPT only "knows" how often words have appeared next to each other in the total corpus of text it has been shown. It understands this to a few levels of depth to be sure - we're not talking simple word-clouds here - so for instance it has different figures for not only how often Word A and Word B appear together, but also how often they appear together when Word C is also nearby, as opposed to when Word D is nearby.

But that's the totality of its word comprehension. It doesn't understand that Word A is the opposite of Word B; it doesn't even understand that Word A is capable of being an "opposite" of some other word. It doesn't know why Word A's co-occurrence with Word B is different when Word C is nearby compared to when Word D is nearby; it simply knows that's the way it is.

Whenever you give ChatGPT a prompt, it doesn't "comprehend" what you're saying. It just changes it into a math problem and then does the math problem.
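As a deliberately crude illustration of the kind of statistics being described - a toy sketch, nothing like ChatGPT's actual architecture - consider counting nothing but word pairs:

from collections import Counter
from itertools import combinations

# A tiny corpus; a real model is trained on billions of words
corpus = [
    "the male lion guards the pride",
    "the female lion hunts for the pride",
    "male and female are opposite terms",
]

pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):  # every unordered pair in the sentence
        pair_counts[frozenset((a, b))] += 1

# The "knowledge" is nothing but numbers like this one - how often "male" and
# "female" appeared near each other - with no notion of what either word refers to.
print(pair_counts[frozenset(("male", "female"))])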
 
Statistical relations based on tons of text. And the text contains the information. What makes you believe ChatGPT doesn't know male is the opposite of female? Much simpler models, like the word-embedding models used to convert text to vectors, can grasp such simple relations.

The best way to find out if someone or something knows something is to ask it a question on the subject. Here is an example:


If a person is not a female what are they?

ChatGPT said:
If a person is not female, they could be male or identify as another gender. In terms of biological sex, individuals are typically classified as male or female based on reproductive anatomy and genetics. However, it's important to note that gender identity is separate from biological sex.

Gender identity refers to an individual's deeply held sense of their own gender, whether it aligns with the sex assigned at birth or not. Some people may identify as male, female, or non-binary, which is an umbrella term for genders that do not exclusively align with male or female.

It's essential to respect individuals' self-identified gender and use the appropriate pronouns they prefer, such as he/him, she/her, they/them, or other gender-neutral pronouns. Gender is a complex and personal aspect of a person's identity, and it's always best to ask and use the language that individuals themselves use to describe their gender.
 
I disagree.

As far as I've been able to tell through research, the only relations between words that ChatGPT understands are statistical ones. It does not actually know that male is the opposite of female, for instance.
What does "know" mean? What is the difference between knowing statistically and any other kind of knowing? As long as this AI is lacking in senses, it can only know what it is trained on. I would hazard that a blind human would also know most things from training, i.e. statistical "knowing".
 
As I've explained, because of what I know about how ChatGPT works.

ChatGPT only "knows" how often words have appeared next to each other in the total corpus of text it has been shown. It understands this to a few levels of depth to be sure - we're not talking simple word-clouds here - so for instance it has different figures for not only how often Word A and Word B appear together, but also how often they appear together when Word C is also nearby, as opposed to when Word D is nearby.

But that's the totality of its word comprehension. It doesn't understand that Word A is the opposite of Word B; it doesn't even understand that Word A is capable of being an "opposite" of some other word. It doesn't know why Word A's co-occurrence with Word B is different when Word C is nearby compared to when Word D is nearby; it simply knows that's the way it is.

Whenever you give ChatGPT a prompt, it doesn't "comprehend" what you're saying. It just changes it into a math problem and then does the math problem.

But that's all that's needed to comprehend. If you can answer correctly, you comprehend. How else would you test whether, for example, I comprehend the words?
 
As I've explained, because of what I know about how ChatGPT works.

ChatGPT only "knows" how often words have appeared next to each other in the total corpus of text it has been shown. It understands this to a few levels of depth to be sure - we're not talking simple word-clouds here - so for instance it has different figures for not only how often Word A and Word B appear together, but also how often they appear together when Word C is also nearby, as opposed to when Word D is nearby.

But that's the totality of its word comprehension. It doesn't understand that Word A is the opposite of Word B; it doesn't even understand that Word A is capable of being an "opposite" of some other word. It doesn't know why Word A's co-occurrence with Word B is different when Word C is nearby compared to when Word D is nearby; it simply knows that's the way it is.

Whenever you give ChatGPT a prompt, it doesn't "comprehend" what you're saying. It just changes it into a math problem and then does the math problem.


The thing is, though, the data doesn't only include Word A and Word B; it also includes the word "opposite" (perhaps that's Word C), and "opposite" also turns up near many other words that also tend to turn up in pairs. Generalizing from many examples of a concept to the concept itself is a kind of learning that we know neural networks are useful for.

This should be easy to test! Tell the AI you're introducing two new words, such as "blurid" and "celamious," that have no past usage examples, and that you're providing no examples or definition except that they are opposites. Then ask questions like, "A politician has made a speech condemning a recent rise in incidents of celamious behavior. Would that politician be likely to support, or oppose, a law that increases the legal penalties for blurid behavior?"
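A rough sketch of how one might actually run that test, assuming the OpenAI Python client and an API key in the environment (the made-up words are the ones proposed above):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I'm introducing two new words, 'blurid' and 'celamious'. I will give no "
    "definitions or usage examples, except to say that they are opposites. "
    "A politician has made a speech condemning a recent rise in incidents of "
    "celamious behavior. Would that politician be likely to support, or oppose, "
    "a law that increases the legal penalties for blurid behavior?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)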
 
But that's all that's needed to comprehend. If you can answer correctly, you comprehend. How else would you test whether, for example, I comprehend the words?

Well no, see - that's the whole thing, that's the real spectacular breakthrough of trained generative language models. The fact that a program can be written to convert words into contextless values and treat them as such - with no provision for storing and remembering what they "mean", let alone any instruction to do that - and to just purely mathematically predict where given words should be in a new sentence based on an input sentence, and that the result actually "sounds" correct to a human, is a truly amazing discovery and accomplishment; but it's a discovery that - as we can see in this thread - people have so much trouble accepting. People would prefer, or simply find it easier, to believe that a set of code must somehow have spontaneously resulted in an actual sentient humanlike "mind" that genuinely understands language in exactly the same way people do (or close enough, at least), rather than accept that human language could be mathematically "solved" by a computer program the same way that, for instance, chess has been.

I find that reluctance really fascinating, personally. I kind of wonder what psychology underpins it.
 
This should be easy to test! Tell the AI ...

Yes, it's an easy one for it:

GPT-4: Since you've defined "blurid" and "celamious" as opposites, we can infer that if a politician condemns celamious behavior, they would likely oppose it. If that same politician opposes celamious behavior, then it stands to reason that they would support behavior that is blurid, as the two are opposites.

We know that AIs are "just" processing text, which is information, but we humans, that is, our brains, are also "just" processing information, in the form of nerve pulses. Remember, it's rather dark inside our skulls.

In both cases, the meaning of those streams of information all has to be learned.
 
Why "mind"? I'm just saying that knowing the relations between words means understanding. What else would you need? What more do people with a "mind" have?

I'm not saying ChatGPT does something magical. I'm saying that people's understanding of words is actually something simple.
 
... People would prefer, or simply find it easier, to believe that a set of code must somehow have spontaneously resulted in an actual sentient humanlike "mind" that genuinely understands language in exactly the same way people do (or close enough, at least), rather than accept that human language could be mathematically "solved" by a computer program the same way that, for instance, chess has been.

So, how does one tell "genuine" understanding from the artificial kind?

Say we have a brand-new concept that is totally unfamiliar to both a human and an AI, which we can check beforehand by quizzing them about it.

Now we give each of them the same new information about it, and then test each of them for how well they understand the new concept, and they both pass the test.

Are we now to say that only the human here has genuine understanding of this concept? There's a term for that.
 
So, how does one tell "genuine" understanding from the artificial kind?

That's the philosophical question that everyone is struggling with; and I'll freely admit it's a difficult one to answer if we're supposed to be judging by the output and nothing else. But when we have more information than just the output, it's a much easier question to answer.

That's the point of John Searle's "Chinese Room". To wit:

Imagine a kind of "physical" ChatGPT that reads and outputs in Chinese. You write a prompt in Chinese on a slip of paper and insert it into a slot in a door, "something" - you have no idea what - happens on the other side of the door, and another slip of paper with a perfectly intelligible and sensible response to your prompt in Chinese text comes out of the slot. Assuming you know Chinese, you can say that based on what you as a user can observe, the room - machine, whatever it is - certainly appears to understand Chinese much the same way that ChatGPT appears to understand English.

But now imagine another door that behaves identically, except that instead of a machine running a program, it's me sitting on the other side of it. As a simple matter of fact I don't understand a lick of Chinese - can't read it, can't write it, can't speak it. I have a huuuge book of instructions (in English), and when a slip of paper comes through the door, I cross-reference the Chinese script with my instruction manual. The manual doesn't tell me what the characters mean; it just tells me that if I see these characters in this order, I should write these other characters on a new slip of paper and kick it back out through the door. I have no clue what the slip of paper that came in the door says, and I don't know what the slip of paper I'm sending back through it says either - and yet, the Chinese-speaking person outside the door received what looked to them like a coherent, human-like answer to whatever their question was. Can it be said that I meaningfully "understand" Chinese when this is what I'm doing?
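The room can even be caricatured in a few lines of code: a hypothetical rule book mapping incoming strings to outgoing strings, and an "operator" that never interprets a single character it handles.

# Hypothetical rule-book entries; the operator never needs to read them
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def operator(slip_of_paper: str) -> str:
    # Match the incoming characters against the book and copy the reply back out;
    # no step here requires knowing what any of the characters mean.
    return RULE_BOOK.get(slip_of_paper, "对不起，我不明白。")

print(operator("你好吗？"))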
 
That's the philosophical question that everyone is struggling with; and I'll freely admit it's a difficult one to answer if we're supposed to be judging by the output and nothing else. But when we have more information than just the output, it's a much easier question to answer.

That's the point of John Searle's "Chinese Room". To wit:

Imagine a kind of "physical" ChatGPT that reads and outputs in Chinese. You write a prompt in Chinese on a slip of paper and insert it into a slot in a door, "something" - you have no idea what - happens on the other side of the door, and another slip of paper with a perfectly intelligible and sensible response to your prompt in Chinese text comes out of the slot. Assuming you know Chinese, you can say that based on what you as a user can observe, the room - machine, whatever it is - certainly appears to understand Chinese much the same way that ChatGPT appears to understand English.

But now imagine another door that behaves identically, except that instead of a machine running a program, it's me sitting on the other side of it. As a simple matter of fact I don't understand a lick of Chinese - can't read it, can't write it, can't speak it. I have a huuuge book of instructions (in English), and when a slip of paper comes through the door, I cross-reference the Chinese script with my instruction manual. The manual doesn't tell me what the characters mean; it just tells me that if I see these characters in this order, I should write these other characters on a new slip of paper and kick it back out through the door. I have no clue what the slip of paper that came in the door says, and I don't know what the slip of paper I'm sending back through it says either - and yet, the Chinese-speaking person outside the door received what looked to them like a coherent, human-like answer to whatever their question was. Can it be said that I meaningfully "understand" Chinese when this is what I'm doing?

You are a "processor" running a process .. the instructions are the "model" .. and all that together does understand Chinese.
 
You are a "processor" running a process .. the instructions are the "model" .. and all that together does understand Chinese.

That's a unique take. Obviously I disagree, and I want to find out where the disconnect is.

Okay - I don't know Chinese, I can't read what's coming in OR what I'm sending out, but you say that because I'm using this book to compose replies, I therefore "understand Chinese" in some practical way. Question: is the user outside the door, communicating - are they exchanging ideas - with me?
 
That's a unique take. Obviously I disagree, and I want to find out where the disconnect is.

Okay - I don't know Chinese, I can't read what's coming in OR what I'm sending out, but you say that because I'm using this book to compose replies, I therefore "understand Chinese" in some practical way. Question: is the user outside the door, communicating - are they exchanging ideas - with me?

Dr Sid's take is the Systems Reply, which I agree with: a part of a system (you) doesn't have to understand what the whole system (you + book) understands. It doesn't matter that you are the part of the system that the user is communicating with - you're still not the whole system.

Think of the individual neurons in your brain. Does each of them understand everything you do? Of course not. And yet taken as a whole, they do. Same principle.
 
Dr Sid's take is the Systems Reply, which I agree with: a part of a system (you) doesn't have to understand what the whole system (you + book) understands. It doesn't matter that you are the part of the system that the user is communicating with - you're still not the whole system.

Think of the individual neurons in your brain. Does each of them understand everything you do? Of course not. And yet taken as a whole, they do. Same principle.

Okay, so you're not saying that I understand Chinese; you're saying that when I pick up the book of instructions, I become a completely new and separate "thing" that is not simply me holding a book, and that completely separate "thing" can be said to "understand Chinese". And the proof that this thing understands Chinese is that someone can give it a phrase and get a reply back that looks coherent?

ETA: Is this still true no matter what the instructions are or what methodology they dictate for choosing the characters that will build the reply?
 
As an FYI: the version of ChatGPT 3 that most people are using is a deliberately "hobbled" version; other generative AIs don't have the same hobbling. For example, some can incorporate information and learning from user inputs, whereas ChatGPT 3 is in effect reset at the start of each session.

So remember, ChatGPT 3 is old, obsolete technology - I mean, it's been out for what, 7 months! :D
 
