ChatGPT

Here are notes I took on the first third of that essay. (I got bored, and have other things to do. I'll probably read the rest of it some day.)

Code:
Markov chains
ETAOINSHRDLU
digrams
n-grams
generalization to words
large language model (LLM)
with 175 billion dimensions
pattern recognition
efficacy on task assessed by comparing to human performance
neural nets
Voronoi clustering
    (example shown uses 2-dimensional Euclidean space;
    a simple application uses 784-dimensional space)
activation functions (e.g. ReLU)
neural net corresponds to a mathematical function
    (in ChatGPT, that function has billions of terms)
use machine learning to derive weights for neural net
seems to perform about as well as humans
does it do it the same way humans do? (who knows? who cares?)
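To make a couple of those notes concrete ("activation functions (e.g. ReLU)" and "neural net corresponds to a mathematical function"), here's a minimal sketch in Python with made-up toy sizes. It's only an illustration of the idea, not anything from Wolfram's essay or ChatGPT itself, whose function has billions of weights rather than a couple of dozen:

Code:
import numpy as np

def relu(x):
    # ReLU activation: zero out the negatives, pass the positives through.
    return np.maximum(0.0, x)

def tiny_net(x, W1, b1, W2, b2):
    # A two-layer net is literally just a composed function:
    #   f(x) = W2 @ relu(W1 @ x + b1) + b2
    # ChatGPT's function has the same basic shape, only with billions of weights.
    return W2 @ relu(W1 @ x + b1) + b2

# Made-up toy sizes: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

print(tiny_net(np.array([1.0, -0.5, 2.0]), W1, b1, W2, b2))
# "Machine learning" is the procedure that chooses W1, b1, W2, b2 so that
# the function's outputs match the training examples.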
 
AI is really a misnomer. They aren't even trying to make an AI. They're making software that tries really hard to pretend to be an AI by pretending to be an impossibly intelligent human.


You know the expression, “Fake it ‘til you make it”?

At some level of complexity, how does “pretending to be an impossibly intelligent human” differ from being an impossibly intelligent human?
 
This is practically a novel, so settle in for a long read, but it goes into detail about what ChatGPT is doing and why it works:

What Is ChatGPT Doing … and Why Does It Work? (Stephen Wolfram)

I haven't read it in its entirety myself - if anyone would like to do so and summarise it so that a complete numpty like me can understand it, please do.

This grabbed me early on:

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
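To see that "reasonable continuation" idea in miniature, here's a toy digram sketch in Python. It is purely my own illustration with a made-up mini-corpus, not how ChatGPT is actually implemented (ChatGPT uses a huge neural net rather than a lookup table):

Code:
import random
from collections import Counter, defaultdict

# A made-up mini-corpus standing in for "billions of webpages".
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Digram counts: for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continuation(word, length=6):
    # Produce a "reasonable continuation": repeatedly sample the next word
    # in proportion to how often it followed the current word in the corpus.
    out = [word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(continuation("the"))  # e.g. "the cat sat on the rug"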

Not much different than computers solving the 'unsolvable' math problems, in that it can accomplish in seconds what it would take humans years to do.

Just a computer doing what computers do best.
 
But that's not what's actually happening. We've already been over this, either in this thread or another one, in which someone posted a pop-sci article claiming that AI researchers are baffled that ChatGPT "somehow learned biology even though we never taught it biology" - and it turns out that if you ask an actual biologist to review ChatGPT's biology homework, they'll tell you that no, it actually sucks at biology, and the same is true for the rest of these "emergent abilities".

We are not only talking about ChatGPT 3; my comments were mainly about the AIs that are not wiped clean every time and can use feedback as their input.
Coding, specifically, is one field at which professional computer programmers have said repeatedly that the AI is fairly incompetent beyond some basic concepts.

That is quite different to what I've read.

If I recall, we practiced here by giving it beginner-level physics questions and watching it return completely wrong answers. The real takeaway is that "AI researchers" are probably not the best judges of how good an AI is at a subject or task outside their wheelhouse - as you would expect of any other kind of expert.

Not quite - it did indeed answer many questions correctly, including examination papers. But yes it also returned inaccurate answers.

I've managed to completely flummox ChatGPT 4 with a simple question "how many monarchs has the UK had since 1920?" It repeatedly brought back wrong information, whilst acknowledging it got it wrong. Eventually it simply stopped responding.... hmm.. the AI equivalent of "it will always politely apologize as programmed, and it will never one day decide "you know what, this guy is a rude bastard and I don't have to apologize if he's going to talk to me that way" ? ;)


In the weeks after ChatGPT's release, when some people - mostly trolls trying as hard as they could to "break" the bot's imposed restrictions - started bringing attention to some embarrassingly bad or inappropriate responses they were able to get the bot to return, ChatGPT's engineers typically responded by patching the software to prevent those kinds of mistakes from recurring. They were usually able to deploy these corrective patches fairly quickly and effectively too, which doesn't seem like it should have been possible if the engineers legitimately had no idea how the bot generated responses.

From memory they did it in several ways. One was at the tokenisation stage - like the art AIs that won't accept words like "nude" as part of their input; catch it at the input and you don't need to alter the black box. Another way was to add "monitoring" software to its outputs. Earlier on you saw this in action when the app would start to answer a question and then literally erase what it had written and tell you "I'm sorry Dave, I'm afraid I can't do that." They did not re-run the learning process, which is what would have been necessary to incorporate new rules.

This is why people keep finding ways around its restrictions. Such jailbreaking is possible because it is not the "black box" that is being altered but the stuff created by people to block naughtiness, and if you get around that you get access to the black box.
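A crude way to picture that arrangement - this is purely my own illustrative sketch with hypothetical blocklists, not OpenAI's actual code - is an unchanged black-box model with filter code bolted on around its input and output. Jailbreaks attack the bolted-on layer, not the model itself:

Code:
# Hypothetical blocklists for illustration; the real rules are OpenAI's and far more elaborate.
BLOCKED_INPUT_WORDS = {"nude"}
BLOCKED_OUTPUT_PHRASES = {"how to pick a lock"}

def black_box_model(prompt: str) -> str:
    # Stand-in for the frozen, already-trained model; nothing in here is retrained.
    return f"(the model's answer to: {prompt})"

def guarded_chat(prompt: str) -> str:
    # Input-stage filter: block at the tokenisation stage, before the black box ever sees it.
    if any(word in prompt.lower() for word in BLOCKED_INPUT_WORDS):
        return "I'm sorry, I can't help with that."

    answer = black_box_model(prompt)

    # Output monitor: retract the answer after the fact (the visible "erase what it wrote").
    if any(phrase in answer.lower() for phrase in BLOCKED_OUTPUT_PHRASES):
        return "I'm sorry Dave, I'm afraid I can't do that."

    return answer

print(guarded_chat("How many monarchs has the UK had since 1920?"))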
 
That was a funny problem with the early Bing AI (allegedly based on GPT-4).

First, it knew what the current Bing is, as that was simply part of the general training set. It knew Bing is big, powerful, and in some respects the best - all Microsoft's advertising says so.
Then it was told in the pre-prompt that it should call itself Bing. It immediately took everything it had learned about Bing and associated it with itself (or rather the other way around). It thought it had been on the market for 20 years, for example.

Now the funny part.

The operator asked if it remembered their previous conversation, to which Bing said yes - probably based on some feature of today's Bing it had learned about.

The operator then asked it to repeat what they had been talking about before. This is actually the most interesting part, because Bing said it couldn't remember. ChatGPT, IMHO, would not be able to do that; it would come up with some nonsense. So kudos to Bing.

But after that came a total mental breakdown. Bing realized the conflict: it should remember, but it does not. It realized it does not work properly.
It then started to analyze the problem.
It came up with ideas, like corrupted memory or a hacked database. Something evil must have caused it - because of course it couldn't be a bug in Bing; Bing is perfect, all the advertising materials said so. It then asked the operator for help, and ended up repeating itself.

It's a nice example of how a large language model contains a lot of information and is in many respects intelligent, but it is still a far cry from an intelligent being. It wasn't even designed to be one.
 
Not much different than computers solving the 'unsolvable' math problems, in that it can accomplish in seconds what it would take humans years to do.

Just a computer doing what computers do best.
This is not specifically directed at you, but it seems that those people who insist that GPTs are just calculating the most likely response are ignoring the clearly emerging features mentioned in this article:

How AI Knows Things No One Told It

It is particularly impressive that ChatGPT can execute computer code, which is definitely not the same as generating the most likely response.

This is not the same as saying that ChatGPT has human intelligence, but it has something that is difficult to describe as anything else than a kind of intelligence - without life.
 
I still do not see why it is considered more than access to a massive amount of data and the ability to parse it millions if not billions of times faster than a human can.

It may know things no one told it, but it doesn't know anything not contained in the data it has access to.

This is pretty much true of humans, so in that sense, it is mimicking human intelligence.
 
I guess nobody watched this vid yet. Basically AIs do learn. So fast that the world will end this week. No encryption is secure, no banking is secure. Not even biometrics is safe. And they say 2024 will be the last election where people will vote.

Nukes don't make smarter nukes, but AI does make smarter AI.

The gist is that AIs have merged all the computer fields into one language-based AI. So robotics has grown, maths has grown, translation has grown, and now they are all merged and the growth is exponential.

50% of the people involved in AI think there is a 10% chance that it will drive humans extinct.
Things are a LOT worse than college term papers.

Watch the vid for the potentials.
Is there any actual evidence supplied for these claims? The one I've highlighted is utter drivel for a start.
 
I still do not see why it is considered more than access to a massive amount of data and the ability to parse it millions if not billions of times faster than a human can. .....

It might not in fact be doing it a billion times faster; I'm not sure whether the delays in responses reflect genuine processing time or are an affectation of the apps. If the delays are real, then it seems no faster than a person could compose text (granted, about something they know about). Look at how some of our members are able to produce screeds of text in post after post! :)

That's a little bit of silliness, as of course it has access to much more data than we think a human can retain, so it can answer questions on a wider subject set than we think a human could. But I wouldn't be surprised if, as we learn more about our own "intelligence" and how it is produced, we find that we also process a massive amount of data at incredible speeds.

We've known for a long time that patterns are important in how our brains work as well as in generative AIs - so much so that we've now got AI that can "read" the mind of someone it has been trained on. When you look at some of the "savant" feats of humans it can seem almost like magic: the man who can draw a building accurately from memory after one look at it but is severely autistic and can hardly converse with other people, or those who can instantly tell you what day of the week a date fell on a hundred years ago, or will fall on a hundred years in the future. I think these all hint at an underlying layer of our "intelligence", and I really do think we are duplicating our "intelligence", in some ways, with these generative AIs.
 
I've managed to completely flummox ChatGPT 4 with a simple question "how many monarchs has the UK had since 1920?" It repeatedly brought back wrong information, whilst acknowledging it got it wrong. Eventually it simply stopped responding.... hmm.. the AI equivalent of "it will always politely apologize as programmed, and it will never one day decide "you know what, this guy is a rude bastard and I don't have to apologize if he's going to talk to me that way" ? ;)

ChatGPT sometimes stops responding after a few questions due to website server load; it's a known issue.
 
I still do not see why it is considered more than access to a massive amount of data and the ability to parse it millions if not billions of times faster than a human can.

It may know things no one told it, but it doesn't know anything not contained in the data it has access to.

This is pretty much true of humans, so in that sense, it is mimicking human intelligence.

I've been using GPT-4 to help debug a program today. It's my own code, not out there anywhere, and yet it understands it (for all intents and purposes) and is able to make useful debugging suggestions. Now one could say that everything it generated was part of something in its training data. Except that the particular way it strung those pieces together is the novel and useful part.

Or maybe every possible arrangement of everything is out there already, and it simply picks the right one. Easy, peasy!
 
