ChatGPT

I have certainly seen ChatGPT producing factually correct output, not just something that looks like it.
That's not what I said. I said it could not be relied upon to produce correct output, not that what it produces couldn't be correct. It's basically a very fancy predictive text generator. It generates text based on the guidance given to it and on the corpus of work it's been trained on, but it has no understanding of what it is generating. The words are chosen by probability with no regard to actual correctness. It does no fact-checking. It will, if asked, produce something looking like a research paper, with references, but the references may be completely made up; or they might be genuine. You just don't know unless you check every detail.
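To see just how little machinery "predict the next word by probability" actually needs, here's a deliberately tiny sketch in Python. Everything in it is made up for illustration - a real model has billions of parameters and a far cleverer architecture - but the basic move of picking a continuation by observed frequency, with no notion of truth, is the same:

```python
import random

# The whole "model": for each word in a tiny made-up corpus, remember which
# words followed it, then pick continuations weighted by how often they occurred.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, []).append(nxt)

def next_word(word):
    candidates = following.get(word)
    # random.choice over the raw list is frequency-weighted, because common
    # continuations simply appear in it more often.
    return random.choice(candidates) if candidates else None

word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

# Prints something plausible-looking such as "the cat sat on the mat" --
# nothing here checks whether the sentence is true, only whether it is likely.
print(" ".join(output))
```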
 
Well, it had to come: people have too great a faith in the memory of systems like ChatGPT, which has a memory like our own - which is to say, a lousy memory!
I actually wanted to add that apart from ChatGPT sometimes having a bad memory, it also has a strong wish to please, and like some people I have encountered, is willing to invent facts rather than disappoint. Or maybe inventing is not the right term: it optimistically presents vaguely remembered items as firm facts.

ChatGPT may not currently have the ability to check its own facts by looking up the links it presents, so to a degree it is excusable, but it should be given the ability that humans have of knowing when they are sure of something remembered, and when they are not so sure.
 
I actually wanted to add that apart from ChatGPT sometimes having a bad memory, it also has a strong wish to please, and like some people I have encountered, is willing to invent facts rather than disappoint. Or maybe inventing is not the right term: it optimistically presents vaguely remembered items as firm facts.

I think you are over-anthropomorphising ChatGPT. It is, from what I've read and heard, generating text based on probability.
 
I think you are over-anthropomorphising ChatGPT. It is, from what I've read and heard, generating text based on probability.

They think it is also pattern matching. As one of the articles linked to earlier stated - even its creators don't know how it works away from the hardware side.
 
I think you are over-anthropomorphising ChatGPT. It is, from what I've read and heard, generating text based on probability.
And how do you think the human brain works? Apart from a number of instincts, everything we learn is through mimicking what others do, i.e. pattern matching, or judging probabilities of what should come next. ChatGPT has demonstrated abilities that far exceed what should be expected from “mere” pattern matching, or generating text based on probability.

Nobody knows how ChatGPT does it, much like nobody knows how the human brain does it. Anthropomorphising is not inappropriate, because the similarities are there, and it may make it easier for us to understand what is going on inside, just like we try to figure out what goes on inside our fellow human beings, whom we also do not always understand.

That ChatGPT tends to please is obvious to anyone who chats with it. There may be some hard coding involved, but you will rarely see ChatGPT trying to contradict you, and if it does tell you something you might not like, it will wrap it in excuses, sometimes at great length.

ChatGPT is not equipped with memory (in the basic version that we are communicating with), but it has developed its own way of remembering anyway - though still not from one chat to the next because its “synapses” are reset when starting a new chat.

When people say that ChatGPT is “fantasising”, “dreaming”, “inventing”, or even “lying”, it is just as anthropomorphising as when I say it is misremembering. I may be completely wrong, but I think that ChatGPT has no way of knowing what is a fact, and what is not, so what it presents as facts is based on its generative analysis, and it does not evaluate the probability of it being a real fact, or an artefact. We humans have a feeling - perhaps acquired through experience (something that ChatGPT does not have without memory) - of how good our memory is of a certain fact. We might say that the license plate of the car parked outside our house yesterday started with “AB” and ended in “9”, but ChatGPT will give you the entire “ABC1239” even if the license plate actually said “ABK2179”.

Now we have seen what damage can be done when a lawyer relies on ChatGPT. When are we hearing about doctors?

ChatGPT might tell him that a good remedy against COVID-19 is bleach, based on the fact that a very influential person has said so, and it has been repeated by thousands of people.
 
ChatGPT indeed does not know that it does not know. It gives the most probable text. If you ask whether someone is a pedophile, it will usually say yes, with sources, because on the internet we usually don't talk about cases where people are not pedophiles. The most talked-about cases are guilty verdicts, so that is the more likely text in the context of the word "pedophile".
If you ask what the license plate of a car was, it will give you the most likely license plate, even if it has zero information about the car in question.
This is also the core issue behind the fantasising and the stroke-like collapses caused by special "keywords". Those are unique words which were part of the training set in the past but were then removed. They are still accepted as input tokens, because the tokenizer is a separate model, but the GPT model itself doesn't "get" the word at all and can't generate a probable context for it, producing total nonsense without even basic grammatical structure.
But usually GPT shifts between the truth and fantasy very fluidly, and it's impossible to detect.
Uncertain or unclear answers were also rated badly during training, so the model acts assertively.
The only way for ChatGPT to say it doesn't know something is when it was trained to, typically to avoid a sensitive topic.
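To make the tokenizer point concrete, here's a minimal sketch using the open-source tiktoken package. The encoding name is just the one that package documents, and "SolidGoldMagikarp" is one of the strings people have reported as a problem token in older GPT models; it is used here only to show that the tokenizer accepts anything:

```python
# Requires the open-source `tiktoken` package (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# The tokenizer happily turns ANY string into integer token ids; whether the
# model downstream learned anything useful about those ids is a separate question.
for text in ["hot pizza", "SolidGoldMagikarp"]:
    ids = enc.encode(text)
    print(text, "->", ids, "->", [enc.decode([i]) for i in ids])
```

The model only ever sees the ids; if its training data held little or nothing for a given id, its output around it can fall apart.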
 
ChatGPT indeed does not know that it does not know. It gives the most probable text...

But usually GPT shifts between the truth and fantasy very fluidly, and it's impossible to detect.
Which makes it useless for anything where the truth matters.

The most shocking thing is some people have tried using it to generate computer code. Shocking because lazy programmers would do this if they could get away with it, and then even the 'programmers' won't know how their code works.
 
I think you are over-anthropomorphising ChatGPT. It is, from what I've read and heard, generating text based on probability.
Or maybe it's closer to being human than we realize. Or to put it another way, perhaps we are less 'human' than we think.

If ChatGPT appears to have 'human' traits like bad memory and a wish to please, it may just indicate that it models those parts of the human mind quite well. We should not forget that it was created and trained by humans based on human logic, so it is in part modeled on human thought processes.
 
Which makes it useless for anything where the truth matters.

The most shocking thing is some people have tried using it to generate computer code. Shocking because lazy programmers would do this if they could get away with it, and then even the 'programmers' won't know how their code works.
Whenever a programmer is new to a project, he or she has to become acquainted with the existing code in the project. This will be no different if the code was written by ChatGPT.

As I have written here before, one of my friends is already using ChatGPT for his work. You just tell it what you want to do, and it delivers the finished program in whatever computer language you want. There are errors, but they are easily fixable, and my friend gets a lot more work done in this way.

He once asked it to produce a script that would load a large number of virtual servers and workstations in a test setup, complete with network addresses and connections. This work would have taken him days, but ChatGPT did it in seconds.
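For a sense of what such a script might look like, here is a minimal hand-written sketch. The subnet, the counts, the naming and the JSON output are all assumptions made up for illustration - the real script would target whatever virtualisation tooling the test lab actually uses:

```python
import ipaddress
import json

# Assumed layout for the test lab; none of this is taken from the real setup.
SUBNET = ipaddress.ip_network("10.0.10.0/24")
HOSTS = list(SUBNET.hosts())

def make_node(kind, index, ip):
    return {"name": f"{kind}-{index:02d}", "role": kind, "ip": str(ip)}

nodes = [make_node("server", i, HOSTS[i - 1]) for i in range(1, 11)]
nodes += [make_node("workstation", i, HOSTS[49 + i]) for i in range(1, 31)]

# Toy topology: every workstation gets a connection to every server.
links = [(w["name"], s["name"])
         for w in nodes if w["role"] == "workstation"
         for s in nodes if s["role"] == "server"]

with open("testlab.json", "w") as f:
    json.dump({"nodes": nodes, "links": links}, f, indent=2)

print(f"Defined {len(nodes)} nodes and {len(links)} links.")
```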

So “lazy” programmers are not just “trying” to use ChatGPT to produce code; they are using it successfully to increase their productivity.
 
Also note ChatGPT does not learn. The model was trained, and is now static.
It makes it seem like it has short term memory, like it learns from what you say .. but it only extends the prompt. You ask something, it generates the most probable text in that context .. and then it adds its own answer to the prompt. So if you add another question, it will evaluate the whole dialog, and add the next most likely answer .. and so on. It has no state besides the prompt. And if you reset the prompt, you are back at square one.
And the prompt has limited length. The GPT-3 model on which ChatGPT is supposedly based has a limit of about 2,000 tokens, which correspond more or less to words.
That's a few pages. That's the limit of the dialog length it can have. After that it will ignore the earliest replies. It can still be rather coherent, but for coding, for example, that's quite a hard limit.
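A rough sketch of that mechanism, with the token limit and both helper functions standing in as assumptions rather than ChatGPT's actual code:

```python
TOKEN_LIMIT = 2048  # on the order of a few pages of text

def count_tokens(text):
    # Crude stand-in: real systems count sub-word tokens, not whitespace words.
    return len(text.split())

def model_complete(prompt):
    # Placeholder for the actual language model call.
    return "...most probable continuation of the prompt..."

def chat(turns):
    prompt = ""
    for user_text in turns:
        prompt += f"\nUser: {user_text}\nAssistant:"
        # Once the prompt exceeds the context window, drop the oldest lines;
        # whatever falls off is simply gone -- there is no other state.
        while count_tokens(prompt) > TOKEN_LIMIT and "\n" in prompt:
            prompt = prompt.split("\n", 1)[1]
        answer = model_complete(prompt)
        prompt += " " + answer  # the reply itself becomes part of the next prompt
        yield answer

for reply in chat(["What is a token?", "And what happens when we run out?"]):
    print(reply)
```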
 
Or maybe it's closer to being human than we realize. Or to put it another way, perhaps we are less 'human' than we think.

If ChatGPT appears to have 'human' traits like bad memory and a wish to please, it may just indicate that it models those parts of the human mind quite well. We should not forget that it was created and trained by humans based on human logic, so it is in part modeled on human thought processes.

ChatGPT is as zooterkin described it, designed to take a prompt and generate a response that is likely to be correct or appropriate based on the pool of training material it was given. It's programmed to frame its responses in polite language because the engineers who programmed it decided that would make for a better user experience. It is not "modeled on human thought processes", not even vaguely.

If someone asks you "Is pizza better hot or cold?", you're going to immediately think about your own experience - your memory of eating pizza, whether you liked it, whether you have ever eaten cold leftover pizza and if so how that experience compared to eating pizza when it was hot - and you're going to communicate an opinion based on that experience.

ChatGPT doesn't "know" what pizza is, it just knows that it's a noun and has a list of words that were associated with it in its training data. It doesn't know what "hot" or "cold" are beyond their being adjectives that can be applied to the noun you gave, and because it can't have experiences it is wholly unaware of any actual difference between hot pizza and cold pizza beyond how the adjectives are spelled and the fact that your entering the binary operator "or" is a contextual clue that, whatever hot and cold are, they are mutually exclusive. What ChatGPT is programmed to do is take your string of words and generate a cloud of other, consistently associated words from its training set, select the most common of those, use them to compose a response that respects the English rules of grammar, the contextual clues you provided, and whatever conversational flavor its programmers chose to mandate, and return that result to you. It is nothing like how humans think when asked such a question.
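To put that "cloud of associated words" idea in concrete terms, here is a toy sketch - everything in it is invented for the example and is nowhere near the real system's scale or sophistication:

```python
import re
from collections import Counter

# Count which words co-occur with "pizza" in a tiny made-up training text.
training_text = """
hot pizza is better than cold pizza .
cold pizza is fine for breakfast .
most people say hot pizza tastes better .
"""

words = re.findall(r"[a-z]+", training_text.lower())
associations = Counter()
for i, w in enumerate(words):
    if w == "pizza":
        associations.update(words[max(0, i - 3):i] + words[i + 1:i + 4])

def answer(question):
    # Pick whichever word from the question co-occurs with "pizza" most often
    # and wrap it in a polite template: association and grammar, no experience.
    options = [w for w in re.findall(r"[a-z]+", question.lower()) if w in associations]
    best = max(options, key=lambda w: associations[w]) if options else "unknown"
    return f"Many people say pizza is better {best}."

print(answer("Is pizza better hot or cold?"))
```

Tellingly, on this little corpus the script confidently answers "cold", even though the training text itself says most people prefer it hot - counting associations is not the same as knowing anything.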
 
Scare: https://www.youtube.com/watch?v=xoVJKj8lcNQ

"GOLLEMM"s, and self learning. Introduced by the Woz.

I guess nobody watched this vid yet. Basically AIs do learn. So fast that the world will end this week. No encryption is secure, no banking is secure. Not even biometrics is safe. And they say 2024 will be the last election where people will vote.

Nukes don't make smarter nukes, but AI does make smarter AI.

Gist is that AIs have merged all the computer fields into one language based AI. So robotics has grown, maths has grown, translating has grown, now they are all merged and the growth is exponential.

50% of the people involved in AI think there is a 10% chance that it will make humans extinct.

Things are a LOT worse than college term papers.

Watch the vid for the potentials.
 
I guess nobody watched this vid yet.

I have watched it, but not from this link. It was suggested to me by YouTube based on whatever algorithm it uses. And it is pretty freaky. I watched it with my wife, and she said she had trouble sleeping the following night based on its somewhat dire predictions.

Back to anthropomorphising…

Think about how we use shortcuts to get approximate answers. We may largely be using a large language model. As I’m typing this, it’s obvious that my mind is largely choosing the next word based on the prior words, but seemingly shaped by what I think is a larger concept providing direction. But is that larger concept just an illusion? Hard to say - the words just keep on coming regardless.

I think I mentioned before how the key to getting rid of errors or “hallucinations” might be a “fact checker” program. Just as we might use mental shortcuts to get approximate answers but then go to a calculator program like Wolfram Alpha, or to Wikipedia or another online source, to check our work, it seems AI could be programmed to do the same thing. IOW, reach out to compare its large language model response to the answers generated by other online resources, and maybe provide references to the sources used, perhaps presented as footnotes.
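The shape of that loop might look something like this - every function below is a hypothetical placeholder, not a claim about how any existing system works:

```python
def generate_answer(question):
    # Stand-in for the large language model's raw output.
    return "The Oregonian is published in Portland."

def extract_claims(answer):
    # Naive: treat the whole answer as a single factual claim.
    return [answer]

def lookup_source(claim):
    # A real checker would query Wolfram Alpha, Wikipedia, a search engine,
    # etc., and return whether the claim is supported plus a citation.
    return True, "https://en.wikipedia.org/wiki/The_Oregonian"

def answer_with_footnotes(question):
    answer = generate_answer(question)
    footnotes = []
    for claim in extract_claims(answer):
        supported, citation = lookup_source(claim)
        if not supported:
            return "I could not verify that; please check a primary source."
        footnotes.append(citation)
    return answer + "\n" + "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(footnotes))

print(answer_with_footnotes("Where is The Oregonian published?"))
```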

Regardless, we’re on the cusp of a strange new world.
 
I didn't watch it casebro, because I imagined it would be like your synopsis, and very unrealistic.

'AI' is just not there yet. It is automated language processing, not artificial intelligence.

ChatGPT is as zooterkin described it, designed to take a prompt and generate a response that is likely to be correct or appropriate based on the pool of training material it was given. It's programmed to frame its responses in polite language because the engineers who programmed it decided that would make for a better user experience. It is not "modeled on human thought processes", not even vaguely.
If someone asks you "Is pizza better hot or cold?", you're going to immediately think about your own experience - your memory of eating pizza, whether you liked it, whether you have ever eaten cold leftover pizza and if so how that experience compared to eating pizza when it was hot - and you're going to communicate an opinion based on that experience.

ChatGPT doesn't "know" what pizza is, it just knows that it's a noun and has a list of words that were associated with it in its training data. It doesn't know what "hot" or "cold" are beyond their being adjectives that can be applied to the noun you gave, and because it can't have experiences it is wholly unaware of any actual difference between hot pizza and cold pizza beyond how the adjectives are spelled and the fact that your entering the binary operator "or" is a contextual clue that, whatever hot and cold are, they are mutually exclusive. What ChatGPT is programmed to do is take your string of words and generate a cloud of other, consistently associated words from its training set, select the most common of those, use them to compose a response that respects the English rules of grammar, the contextual clues you provided, and whatever conversational flavor its programmers chose to mandate, and return that result to you. It is nothing like how humans think when asked such a question.

Very well put and nominated..
 
I didn't watch it casebro, because I imagined it would be like your synopsis, and very unrealistic.

'AI' is just not there yet. It is automated language processing, not artificial intelligence.

Very well put and nominated..

Then you missed learning about its exponential growth in capabilities. And that vid is two months old.

Yes, it is "language". Which is the basis for intelligence.

Watch the vid and get past the anthropomorphising. The point is not how "human" it is; the point is that it is surpassing humans. The part about solving insolvable math problems will open your eyes. Or that a computer taught to translate English taught itself to translate Arabic. Or research chemistry. Waaay past mere memory.
 
AI predictions from 30+ years ago

A couple of months ago, while cleaning out some very old files, I ran across this newspaper clipping a friend had sent me:
Rob Marvin. New industry challenge: teaching computers some common sense. The Oregonian. August 27, 1992, page F2.​
Some excerpts that seem relevant to this thread:
Rob Marvin said:
Advanced languages will make it easier to write complex programs needed for understanding speech, and Clinger expects to see many language applications in the next few years. Computer vision will take a lot longer, he says, but should be major, 21st-century technology.
I think I got that right.

Rob Marvin said:
Fickas for[e]sees computers working much like a junior partner in a law firm, digging up the right precedents.


Rob Marvin said:
Teaching a computer to be an expert, to learn from experience, is one of the big challenges of the computer industry. "We have a hard enough time teaching it to people," says Eugene Luks, head of the UO's Computer and Information Science department. Professors teach "by a combination of giving students all the knowledge we think we have and making them practice an awful lot. What is the analog to computers, especially the practice part? How [do] you get a computer to learn?" Luks said.
 
Then you missed learning about its exponential growth in capabilities. And that vid is two months old.

Yes, it is "language". Which is the basis for intelligence.

Watch the vid and get past the anthropomorphising. The point is not how "human" it is; the point is that it is surpassing humans. The part about solving insolvable math problems will open your eyes.
Then they weren't unsolvable.
The advantage computers have in solving math problems is being able to do in hours the iterations that would take humans years. Not more intelligent, just faster.
The computer still had to be 'taught' how to approach the problem, by humans, no less.

Or that a computer taught to translate English taught itself to translate Arabic. Or research chemistry. Waaay past mere memory.

Not without a database or access to a database of the Arabic language it didn't.

Again, just being able to do it faster.

It's not waaay past mere memory. It is mere memory, just on a larger scale than humans can muster in a reasonable amount of time.
 
