ChatGPT

ChatGPT is as zooterkin described it, designed to take a prompt and generate a response that is likely to be correct or appropriate based on the pool of training material it was given. It's programmed to frame its responses in polite language because the engineers who programmed it decided that would make for a better user experience. It is not "modeled on human thought processes", not even vaguely.

My mother programmed me to be polite so people would have a better user experience. And I am being serious.

If someone asks you "Is pizza better hot or cold?", you're going to immediately think about your own experience - your memory of eating pizza, whether you liked it, whether you have ever eaten cold leftover pizza and if so how that experience compared to eating pizza when it was hot - and you're going to communicate an opinion based on that experience.

Or rather, you are going to answer it based on your learned experience, which includes memory.

ChatGPT doesn't "know" what pizza is, it just knows that it's a noun and has a list of words that were associated with it in its training data. It doesn't know what "hot" or "cold" are beyond their being adjectives that can be applied to the noun you gave, and because it can't have experiences it is wholly unaware of any actual difference between hot pizza and cold pizza beyond how the adjectives are spelled and the fact that your entering the binary operator "or" is a contextual clue that, whatever hot and cold are, they are mutually exclusive. What ChatGPT is programmed to do is take your string of words and generate a cloud of other, consistently associated words from its training set, select the most common of those, use them to compose a response that respects the English rules of grammar, the contextual clues you provided, and whatever conversational flavor its programmers chose to mandate, and return that result to you. It is nothing like how humans think when asked such a question.
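To make that concrete, here is a toy sketch of the word-association idea (my own illustration, not OpenAI's code; real models use transformer networks over token probabilities rather than raw counts): the program has no concept of pizza, only statistics about which words tend to follow which.

```python
import random
from collections import Counter

# Toy bigram "language model": pick the next word purely from co-occurrence
# counts in the training text. No concept of pizza, hot, or cold - just
# statistics over word sequences.
training_text = (
    "hot pizza is good . cold pizza is fine . "
    "pizza is better hot . leftover pizza can be eaten cold ."
).split()

# Count which word follows which.
follows = {}
for prev, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def generate(start, length=6):
    """Continue from `start`, weighting each next word by how often it was seen."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        candidates = follows[word]
        word = random.choices(list(candidates), weights=candidates.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("pizza"))  # e.g. "pizza is better hot . leftover pizza"
```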

From earlier.

This is what I think the large language models are in fact showing: much of what we call intelligence, which we take as evidence of us being an "I", is beginning to seem like something that doesn't require that.

Plus the article says these generative AIs do seem to be starting to create for themselves "internal" representations of the world.

Sounds awfully like how we think we think.

Sounds to me very much like how we develop on our hardware... sorry brain, we start with a ton of inputs, start to react to them, that reaction becomes input (feedback) and so on.

ChatGPT 3 is not allowed to remember past experiences and use those as part of its internal feedback. Other researchers creating generative AIs don't impose this wiping of the slate and, as I note above, they (the generative AIs) seem to be creating internal representations of the external world. Add in that the researchers themselves say they don't know how the outputs are generated.

With that in mind ;) I think there is some likelihood of my prediction coming true:

And since many of us think our intelligence and whole "me-ness" is an emergent property of the brain, we may indeed be mimicking our own intelligence as we develop these systems, without meaning to.

Obviously any "me-ness" arising in these emergent properties won't be the same as ours: it hasn't got the same inputs, the hardware is different, and so on.

Fascinating times.
 
I guess nobody watched this vid yet. Basically AIs do learn. So fast that the world will end this week. No encryption is secure, no banking is secure. Not even biometrics is safe. And they say 2024 will be the last election where people will vote.

Nukes don't make smarter nukes, but AI does make smarter AI.

The gist is that AIs have merged all the computer fields into one language-based AI. So robotics has grown, maths has grown, translation has grown, and now they are all merged and the growth is exponential.

50% of the people involved in AI think there is a 10% chance that it will drive humans extinct.

Things are a LOT worse than college term papers.

Watch the vid for the potentials.

Pretty much what lots of the speculation has been over the decades. At one time we'd almost given up on AI, we didn't think we'd ever crack it, because we stuck to, if you like, logical structures and procedures and the likes of decision trees and look-up tables. Which was always strange, given we know that isn't how we work. It was also a limitation imposed by the hardware, such as memory space, even disk space, and the data available. We've only just reached the point where the hardware is powerful enough to support processing petabytes of data, we have for practical purposes limitless storage, and the data are available from the internet.

We are living through a "perfect storm" moment of history.
 
ChatGPT is as zooterkin described it, designed to take a prompt and generate a response that is likely to be correct or appropriate based on the pool of training material it was given. It's programmed to frame its responses in polite language because the engineers who programmed it decided that would make for a better user experience. It is not "modeled on human thought processes", not even vaguely.

If someone asks you "Is pizza better hot or cold?", you're going to immediately think about your own experience - your memory of eating pizza, whether you liked it, whether you have ever eaten cold leftover pizza and if so how that experience compared to eating pizza when it was hot - and you're going to communicate an opinion based on that experience.

ChatGPT doesn't "know" what pizza is, it just knows that it's a noun and has a list of words that were associated with it in its training data. It doesn't know what "hot" or "cold" are beyond their being adjectives that can be applied to the noun you gave, and because it can't have experiences it is wholly unaware of any actual difference between hot pizza and cold pizza beyond how the adjectives are spelled and the fact that your entering the binary operator "or" is a contextual clue that, whatever hot and cold are, they are mutually exclusive. What ChatGPT is programmed to do is take your string of words and generate a cloud of other, consistently associated words from its training set, select the most common of those, use them to compose a response that respects the English rules of grammar, the contextual clues you provided, and whatever conversational flavor its programmers chose to mandate, and return that result to you. It is nothing like how humans think when asked such a question.

This is the most accurate explanation of what GPT is and how it works. Well said :thumbsup:
 
Then they weren't unsolvable.
The advantage computers have in solving math problems is being able to do, in hours, the iterations that would take humans years. Not more intelligent, just faster.
The computer still had to be 'taught' how to approach the problem, by humans, no less.
...snip...

That is not how these AIs are created; we use "unsupervised learning" to create them. The proto-AI starts to learn by detecting and matching patterns in its dataset without guidance. This is why the actual researchers will tell you that they do not know how these AIs generate their output.
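Strictly speaking, LLM pre-training is usually called self-supervised (the "answer" is just the next word already present in the data), but the point about finding structure without a teacher stands. Here is a minimal sketch of that idea using made-up 2-D points and k-means clustering; nobody labels the data, yet the program discovers the two groups on its own.

```python
import numpy as np

# Unsupervised pattern finding: k-means clustering on unlabeled points.
# No human supplies the answer; the algorithm groups points by similarity.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),  # one unlabeled blob
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),  # another unlabeled blob
])

k = 2
centers = points[rng.choice(len(points), k, replace=False)]
for _ in range(10):
    # assign every point to its nearest centre, then move the centres
    labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])

print("discovered cluster centres:\n", centers.round(2))
```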
 
I think I mentioned before how the key to getting rid of errors or “hallucinations” might be a “fact checker” program. Just as we might use mental shortcuts to get approximate answers, but then go to a calculator program like Wolfram Alpha, or to Wikipedia or another online source, to check our work, it seems AI could be programmed to do the same thing. IOW, reach out to compare its large language model response to the answers generated by other online resources, and maybe provide references to the sources used, perhaps presented as footnotes.
Bing is already doing the footnotes stuff.
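For what it's worth, here is a rough sketch of what that check-and-cite pipeline could look like. The helpers ask_llm and search_web are hypothetical stand-ins, not any real API, and the example answer and URL are made up; real systems like Bing Chat do the retrieval against their own search index.

```python
# Sketch of "answer, then verify against sources and add footnotes".
# ask_llm and search_web are placeholder functions, not real APIs.

def ask_llm(prompt: str) -> str:
    # stand-in for a call to a language model
    return "The Eiffel Tower is about 330 metres tall."

def search_web(query: str) -> list[dict]:
    # stand-in for a call to a search engine; returns snippets with URLs
    return [{"snippet": "The Eiffel Tower is 330 m (1,083 ft) tall.",
             "url": "https://example.org/eiffel-tower"}]

def answer_with_footnotes(question: str) -> str:
    draft = ask_llm(question)
    sources = search_web(question)
    # keep only sources that share enough words with the draft answer
    supported = [
        s for s in sources
        if len(set(draft.lower().split()) & set(s["snippet"].lower().split())) >= 3
    ]
    footnotes = "\n".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(supported))
    if not supported:
        return draft + "\n\n(no supporting source found)"
    return f"{draft}\n\n{footnotes}"

print(answer_with_footnotes("How tall is the Eiffel Tower?"))
```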
 
That is not how these AIs are created; we use "unsupervised learning" to create them. The proto-AI starts to learn by detecting and matching patterns in its dataset without guidance. This is why the actual researchers will tell you that they do not know how these AIs generate their output.

I was speaking to math problem solving. The AI has to have the "rules" to even begin.
 
I was speaking to math problem solving. The AI has to have the "rules" to even begin.
Why?

Human brains do not have the rules at start, but seem able to learn them by the same means that AI does it, i.e. through pattern matching, statistical analysis of the most likely answer, or training with reward or punishment.
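As a toy illustration of the reward-or-punishment part (my own sketch, not how any real system is trained): the program below starts with no rule about which of two options is better and learns a preference purely from the rewards it gets back.

```python
import random

# A two-armed bandit: learn which option pays off better from reward alone.
true_win_rates = {"A": 0.8, "B": 0.3}   # hidden from the learner
estimates = {"A": 0.0, "B": 0.0}        # the learner's current beliefs
counts = {"A": 0, "B": 0}

for trial in range(1000):
    if random.random() < 0.1:
        choice = random.choice(list(estimates))      # explore occasionally
    else:
        choice = max(estimates, key=estimates.get)   # exploit what looks best
    reward = 1 if random.random() < true_win_rates[choice] else 0
    counts[choice] += 1
    # running average of observed reward = the learned "rule"
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)   # the learner ends up preferring option "A"
```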
 
I was speaking to math problem solving. The AI has to have the "rules" to even begin.

Sort of, I guess. But the parameters of the "rules" have gone up a level.

For instance, for years chess programs were published with human strategies - how much a piece is worth in a given situation, paths to winning a piece or how to achieve checkmate, that sort of thing. And this worked well enough for the computer program Deep Blue to beat Garry Kasparov back in 1997. And since it was programmed by humans, the moves and strategies of the computer seemed superficially human.

But more recently, chess programs have only been programmed with the rules of the game, and they train themselves by playing millions of games and determining moves and strategies "on their own", so to speak. And some of those moves seem decidedly "non-human" but quite effective. And far, far superior to the prior "taught" models.

I think the analogy holds for GPT4 and other Large Language Models and why even the developers are having trouble predicting what their output will be.
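To make the contrast concrete, here is a toy sketch (my own, nothing like a real engine): the old approach bakes in human-chosen piece values as the evaluation, whereas a self-play engine such as AlphaZero starts from only the rules and learns its own evaluation by playing itself.

```python
# The "old way": human-chosen piece values baked in as the evaluation.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(position: str) -> int:
    """Score a made-up position string, e.g. 'KQRRP kqrp' (White upper-case, Black lower-case)."""
    score = 0
    for piece in position:
        if piece.isupper():
            score += PIECE_VALUES.get(piece, 0)
        elif piece.islower():
            score -= PIECE_VALUES.get(piece.upper(), 0)
    return score

# White has an extra rook in this invented position, so the hand-coded
# evaluation says +5. A self-play engine would instead have learned its own,
# often less human-readable, notion of what the position is worth.
print(material_score("KQRRP kqrp"))  # -> 5
```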
 
My mother programmed me to be polite so people would have a better user experience. And I am being serious.

I don't think you are. Your mother may have told you to be polite; but let's just say I choose to remain dubious that your thought process when deciding right now, in May of 2023, to say something politely to somebody is so robotic as "mother told me to do that". I would be willing to bet that there is at least a modicum of empathy involved, a bit of reasoning in the sense that you have personally witnessed how people respond to impolite exchanges and personally prefer not to cause that kind of response in people.

Parents' general rote instructions like "be polite" aren't meant to carry a person through their entire life. They're meant to carry a person through childhood until the point their brains have matured enough that they are capable of independent observation, judgement, and discretion. As a child your parents may have also told you to never talk to strangers, always use crosswalks, always finish everything that's on your plate, and to make sure you're home by the time the street lights turn on, but there are probably times in your adult life that you have not obeyed those edicts because your own capacity for complex reasoning told you they weren't necessary or desirable in a given instance.

That kind of discretion is something that a computer program like ChatGPT isn't capable of developing and exercising. If the programmers tell it that it can't do a certain thing, then it simply can't and will never do that thing, barring a bug or a logic error that accidentally forces it to. If you tell ChatGPT you're upset with it or that it has given a wrong response, it will always politely apologize as programmed, and it will never one day decide "you know what, this guy is a rude bastard and I don't have to apologize if he's going to talk to me that way".


Sounds to me very much like how we develop on our hardware... sorry brain, we start with a ton of inputs, start to react to them, that reaction becomes input (feedback) and so on.

ChatGPT 3 is not allowed to remember past experiences and use those as part of its internal feedback. Other researchers creating generative AIs don't impose this wiping of the slate and, as I note above, they (the generative AIs) seem to be creating internal representations of the external world. Add in that the researchers themselves say they don't know how the outputs are generated.

Researchers admitting that they can't immediately discern why a certain (typically very vast) set of inputs led to a particular response isn't the same thing as not knowing how the machine generates responses. They know that quite well because they wrote the code instructing it exactly what to do with the data it is given.
 
…snip…

Researchers admitting that they can't immediately discern why a certain (typically very vast) set of inputs led to a particular response isn't the same thing as not knowing how the machine generates responses. They know that quite well because they wrote the code instructing it exactly what to do with the data it is given.

No, they didn’t, and that’s what has been the game changer in AI: the unsupervised training without guidance.

The researchers themselves will tell you that they do not know how their AIs come up with their outputs. They are having to build tools to look into what is actually happening and try to understand it.



ETA: From the SA article

How AI Knows Things No One Told It
Researchers are still struggling to understand how AI models trained to parrot Internet text can perform advanced tasks such as running code, playing games and trying to break up a marriage

…snip…

Some of these systems’ abilities go far beyond what they were trained to do—and even their inventors are baffled as to why.

….

That GPT and other AI systems perform tasks they were not trained to do, giving them “emergent abilities,” has surprised even researchers who have been generally skeptical about the hype over LLMs. “I don’t know how they’re doing it or if they could do it more generally the way humans do—but they’ve challenged my views,” says Melanie Mitchell, an AI researcher at the Santa Fe Institute.

….
 
Some of these systems’ abilities go far beyond what they were trained to do—

That just says the programmers don't understand the full implications of their programming.
The training has produced results not expected, but it can't be anything more than what is possible with the code that is written.
 
No, they didn’t, and that’s what has been the game changer in AI: the unsupervised training without guidance.

The researchers themselves will tell you that they do not know how their AIs come up with their outputs. They are having to build tools to look into what is actually happening and try to understand it.



ETA: From the SA article

How AI Knows Things No One Told It

But that's not what's actually happening. We've already been over this, either in this thread or another one, in which someone posted a pop-sci article claiming that AI researchers are baffled that ChatGPT "somehow learned biology even though we never taught it biology". It turns out that if you ask an actual biologist to review ChatGPT's biology homework, they'll tell you that no, it actually sucks at biology, and the same is true for any of these other "emergent abilities". Coding, specifically, is one field in which professional computer programmers have repeatedly said the AI is fairly incompetent beyond some basic concepts. If I recall, we tested this here by giving it beginner-level physics questions and watching it return completely wrong answers. The real takeaway is that "AI researchers" are probably not the best judges of how good an AI is at a subject or task outside their wheelhouse - as you would expect of any other kind of expert.

In the weeks after ChatGPT's release, when some people - mostly trolls trying as hard as they could to "break" the bot's imposed restrictions - started bringing attention to some embarrassingly bad or inappropriate responses they were able to get the bot to return, ChatGPT's engineers typically responded by patching the software to prevent those kinds of mistakes from recurring. They were usually able to deploy these corrective patches fairly quickly and effectively too, which doesn't seem like it should have been possible if the engineers legitimately had no idea how the bot generated responses.
 
" The AI researchers are baffled by ( place unexplained phenomena here )." sounds like a click-bait headline.

They may be baffled, but baffled doesn't equate to emerging, human-like intelligence.
 
AI is really a misnomer. They aren't even trying to make an AI. They're making software that tries really hard to pretend to be an AI by pretending to be an impossibly intelligent human.
 
AI is really a misnomer. They aren't even trying to make an AI. They're making software that tries really hard to pretend to be an AI by pretending to be an impossibly intelligent human.


Fifty years ago, AI researchers were already laughing about the fact that every advance in AI was followed by people denigrating the achievement as mimicking a behavior that didn't involve intelligence.

Consider the following goals of artificial intelligence, now mostly achieved or in the process of being achieved.

When you speak to a robot on your cell phone, and the robot occasionally understands what you're saying, that's a triumph of artificial intelligence research.

When you use a machine translation service to translate German into English, that's a triumph of artificial intelligence research.

When your vehicle beeps at you to warn of hazards fore or aft, or to suggest you're straying from your lane, that's a triumph of artificial intelligence research.

Whenever a Tesla's partially automated driver's assist system causes a crash, that's a failure of artificial intelligence as implemented in those vehicles.

Whenever law enforcement uses facial recognition to foil your terrorist plot, or you rely on facial recognition to increase the security of your cell phone, that's a triumph of artificial intelligence.

One of the reasons programmers of AI systems often say they don't fully understand their programs is that many of those programs rely on one or more varieties of machine learning, such as deep learning. Those algorithms often involve feature spaces of ridiculously high dimension, tamed by well-understood computational processes whose calculations are so complex their outputs often surprise human intuition, analyzed at multiple layers of abstraction by algorithms that are explicitly approximate and probabilistic.

ETA: The process of extracting useful information from feature spaces of ridiculously high dimension involves a lot of the same mathematics that relativity-denying crackpots fail to understand even within the low-dimensional context of general relativity.
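To put a rough number on that, here is a toy sketch (the sizes are made up for illustration): even a tiny two-layer network is already a cloud of hundreds of thousands of individually meaningless weights, which is why inspecting the code tells you little about why a particular answer came out. Production models have billions of such parameters.

```python
import numpy as np

# A tiny two-layer network: the code is simple and fully known, but the
# learned behaviour lives in the weight values, not in the code.
rng = np.random.default_rng(42)

def layer(x, weights, biases):
    # one fully connected layer with a ReLU non-linearity
    return np.maximum(0, x @ weights + biases)

# 1,000-dimensional input -> 256 hidden units -> 10 outputs
W1, b1 = rng.normal(size=(1000, 256)), np.zeros(256)
W2, b2 = rng.normal(size=(256, 10)), np.zeros(10)

x = rng.normal(size=(1, 1000))          # one input vector
output = layer(x, W1, b1) @ W2 + b2     # forward pass
print("output shape:", output.shape)

n_params = W1.size + b1.size + W2.size + b2.size
print(f"parameters in this *toy* network: {n_params:,}")   # 258,826
```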
 