• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

ChatGPT

Dr Martell and I are talking about Large Language Models, which are (a) not AI, and (b) doing things we understand very well in coming up with answers to what we ask them.

What AIs do you have in mind, that are doing this other thing?

That's a semantic point until we can define what we mean by human intelligence in any kind of rigorous manner. They are at the moment giving at least the impression of intelligence, therefore AI is an appropriate term.

There are links to papers and articles earlier in the thread about some of the other AI systems, I'm on my phone so I don't find it easy to search to link back to older posts.
 
That's a semantic point until we can define what we mean by human intelligence in any kind of rigorous manner. They are at the moment giving at least the impression of intelligence, therefore AI is an appropriate term.

The point Dr Martell is making is that we have evolved to interpret linguistic fluency as reasoning. What Large Language Models like ChatGPT do - by design - is give the impression of fluency. We mistake this for an impression of intelligence.

Maybe some other AI is making inscrutable leaps of inference; I don't know. This thread is about ChatGPT. The mechanism by which ChatGPT and other LLMs arrive at an impression of fluency is well understood. It's literally the same mechanism that powers text autocomplete bots: A statistical estimate of the most likely next word, based on the preceding context.

This is why LLMs are prone to hallucination. Because they're not actually reasoning. They're just mindlessly extending a string of text based on statistical estimates.
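To make the "statistical estimate of the most likely next word" concrete, here's a toy sketch. This is only an illustration of the next-word-prediction idea: real LLMs use neural networks over subword tokens trained on vastly more data, not raw bigram counts, and the tiny corpus here is made up for the example.

```python
from collections import Counter, defaultdict

# Made-up miniature corpus, standing in for training text.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the mouse ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" - it follows "the" most often
print(most_likely_next("sat"))  # "on"
```

The point of the toy: the model produces locally plausible continuations without any notion of what the words mean, which is exactly why fluency and reasoning can come apart.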
 
That's not all they do. Prompt ChatGPT with a schema and a requirement, and it will do a fairly good job writing semi-complex SQL statements.

Which it does by predicting the most likely next word in a SQL statement, based on context. LLMs are actually a powerful tool for generating computer code. However, giving an LLM specific rules of syntax to follow, or scoping its corpus to a specific formal language, does not change the underlying mechanism by which it achieves the impression of fluency in that language.

The point is that the underlying mechanism of ChatGPT is well understood, and that it is explicitly not a reasoning mechanism.
 
Which it does by predicting the most likely next word in a SQL statement, based on context. LLMs are actually a powerful tool for generating computer code. However, giving an LLM specific rules of syntax to follow, or scoping its corpus to a specific formal language, does not change the underlying mechanism by which it achieves the impression of fluency in that language.
Notwithstanding, it's impressive how it incorporates the schema you prompt it with.
 
Notwithstanding, it's impressive how it incorporates the schema you prompt it with.

Honestly, I'm not even sure it does incorporate the schema. Give an autocomplete bot a large enough corpus of well-formed SQL to work with, and it could do a good job of guessing a next syntactically-correct word without having any concept of the syntax itself, simply because it's just doing what all the other examples it's looking at are doing.

Besides, any old dumb IDE can do grammar correction on a given chunk of code. In that respect, are you really all that impressed by garden-variety grammar-checkers?
 
I feel like I should apologize, if I seem to be yucking anyone's yum. I think enthusiasm about something is generally a good thing. I'm not being my best self when I'm pooping on other people's enjoyments.

I promise that's not what I'm trying to do here.

I just think we as a society are getting excited about LLMs for the wrong reasons, and we're normalizing what is actually a detrimental idea of "AI" and its role in our lives - both individually and collectively.
I'm not excited about ChatGPT, and I don't think you should be either. I'm making my case as best I can, and I hope that if it doesn't entirely persuade you, it will at least give you pause, and come to your mind when others tout the blessings of AI.
 
The point Dr Martell is making is that we have evolved to interpret linguistic fluency as reasoning. What Large Language Models like ChatGPT do - by design - is give the impression of fluency. We mistake this for an impression of intelligence.
I am not claiming LLMs are intelligent, but I wonder how it can be claimed that they are not intelligent, in view of the fact that no one has a good definition of intelligence.
 
Honestly, I'm not even sure it does incorporate the schema. Give an autocomplete bot a large enough corpus of well-formed SQL to work with, and it could do a good job of guessing a next syntactically-correct word without having any concept of the syntax itself, simply because it's just doing what all the other examples it's looking at are doing.
This should be testable by using globally unique column/table names. I'll maybe do that someday soon.
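A rough sketch of how that test could work: prompt the model with a schema whose table/column names are globally unique gibberish (so they can't appear in any training corpus), then check whether the SQL it returns sticks to those names. The schema names and the `ask_llm` step are hypothetical; only the checker below is actually runnable.

```python
import re

# Hypothetical globally-unique schema names for the test prompt.
SCHEMA_NAMES = {"tbl_qx93zf", "col_ab81kq", "col_mm20rp"}

def uses_only_schema_names(sql: str) -> bool:
    """True if every tbl_/col_-style identifier in `sql` comes from the schema."""
    found = set(re.findall(r"\b(?:tbl|col)_[a-z0-9]+\b", sql))
    return bool(found) and found <= SCHEMA_NAMES

# Hand-written stand-in for an LLM's answer (no real API call made here):
candidate = "SELECT col_ab81kq FROM tbl_qx93zf WHERE col_mm20rp > 10"
print(uses_only_schema_names(candidate))  # True
```

If the model reliably uses the gibberish names correctly, that's at least evidence it is conditioning on the prompt's schema rather than parroting familiar column names from its training data.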
 
It's interesting how much LLMs can do just by absorbing a lot of text, and how much they can emulate humans, even if they don't have any intelligence designed in.
But as I said, the exciting thing is the future. LLMs are a way to convert not just text, but also the meaning of the text, into vectors. That's huge.
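The "text into vectors" idea can be shown with a deliberately crude stand-in. Real LLM embeddings are dense learned vectors that capture meaning; the bag-of-words version below only captures word overlap, but it shows the basic move of mapping text to vectors and comparing them by cosine similarity. All the example sentences are made up.

```python
import math
from collections import Counter

def bow_vector(text):
    """Toy bag-of-words vector - a crude stand-in for learned embeddings."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

a = bow_vector("the cat sat on the mat")
b = bow_vector("a cat sat on a mat")
c = bow_vector("stock prices fell sharply today")

print(cosine(a, b) > cosine(a, c))  # True - similar sentences score higher
```

What makes learned embeddings interesting is precisely that they go beyond this: "a feline rested on the rug" would land near sentence `a` too, despite sharing no words with it.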
 
I am not claiming LLMs are intelligent, but I wonder how it can be claimed that they are not intelligent, in view of the fact that no one has a good definition of intelligence.

If your idea of intelligence includes autocomplete algorithms that string words together based on statistical models, without any concept of syntax or semantics, you might enjoy the novel Blindsight, by Peter Watts.

Me? I'm thinking of the hypothetical alien visitors, who cannot decide if they should make first contact with Homo sapiens or Pan troglodytes, since both species use tools.
 
I am not claiming LLMs are intelligent, but I wonder how it can be claimed that they are not intelligent, in view of the fact that no one has a good definition of intelligence.

Good question but I'd say the null hypothesis here is that they are not intelligent. However they do look like an interesting way to explore how we understand language and generate answers to questions.
 
Good question but I'd say the null hypothesis here is that they are not intelligent. However they do look like an interesting way to explore how we understand language and generate answers to questions.

I'd like to hear someone explain why humans are smarter than an LLM.

Most people just slap words together. Most people you meet are just LLMing through life.

Can anyone prove humans have something LLM's don't, intelligence wise?
 
LLMs don't do learning and planning. They also don't have any agency; they do nothing on their own, they just extend the prompt one word at a time.
All of that is doable to some extent, but LLMs are just not doing it, as they are not designed to be anything like human intelligence.
 
LLMs don't do learning and planning.

Can you explain that? They clearly can learn. Much faster and better than we do. As well as make plans.

They also don't have any agency; they do nothing on their own, they just extend the prompt one word at a time.

Still, I'm not sure that means they are any different or not as smart as we are. Isn't that what typing stuff on the internet basically is?

The one thing that's different about us is that our reward system is basically for two things: find food and mates. Dawkins' Selfish Gene stuff. If we don't eat, we die. If we don't find mates, the species dies. Our genes do that because, well, long story short, the things that do that survive.

Change the reward system of an AI to impressing mates and securing its own resources, and it'd probably wipe us out in a day. I think that's what most of the concern is about with this stuff.

All of that is doable to some extent, but LLMs are just not doing it, as they are not designed to be anything like human intelligence.

Seems to me, they are more accurate, and more objective, and lack ego, so they are already clearly smarter than us.
 
