ChatGPT

It's utterly false. Nobody who is a chemist seems to agree with the assessment that ChatGPT "knows chemistry"; indeed, several reply tweets assert quite the opposite. ChatGPT just sounds like it knows advanced chemistry to people who don't, because ChatGPT knows what chemists "sound like" in text and is able to replicate that.

Well at some point "sounding like a chemist" is indistinguishable from being a chemist.

More importantly, it should be emphasized that AI systems like this do not learn organically or independently. It's not as if they go online and start learning from textbooks and Wikipedia on their own.

It's people who teach the AI. They supply the information that the AI learns from.
 
I don't think it's an improvement they've made over the last couple of hours. I think it's a case of: because the machine doesn't actually "know" enough chemistry to decide how likely any of the multiple responses it's generated is to be more correct than the others, it just picks one at random to output, and that can differ between users.

One of the things to consider regarding the limitations of this AI is that it was developed for natural-language comprehension and generation. Math, chemistry, physics, and the like are subjects where practitioners do not use natural language in practice.

In my opinion the biggest problem isn't that it's capable of being wrong or unable to perform a request, but rather the fact that it tends to be unable to realize that it's wrong. It's supposed to be helpful and provide information yet this apparently does not take into account that wrong advice is worse than no advice.
 
In my opinion the biggest problem isn't that it's capable of being wrong or unable to perform a request, but rather the fact that it tends to be unable to realize that it's wrong. It's supposed to be helpful and provide information yet this apparently does not take into account that wrong advice is worse than no advice.
You are so right. Humans may know that they might be wrong, and sometimes they are even sure that they are wrong. ChatGPT does not have this faculty, which humans acquire through experience. The learning algorithms behind the AI do not teach it to recognise when it may be unsure.

And the problem is compounded because the AI speaks with authority every time, and we are accustomed to believing authority.
 
Vicky Pollard explains plots, as interpreted by ChatGPT:


Okay, so there's this movie called Casablanca, right? And it's, like, all about this guy named Rick who runs this bar in Morocco during World War II. And then this lady named Ilsa shows up, and it turns out that Rick and Ilsa used to be, like, super in love and stuff. But then Ilsa, like, disappeared or something and Rick was all heartbroken and stuff. And now she's back with this other guy who needs to get out of Morocco, and she's trying to get Rick to help him. So, like, there's all this drama and stuff, and Rick's, like, trying to decide what to do. And then there's, like, this famous scene where they sing "La Marseillaise" to, like, drown out the Germans who are trying to sing their own song. In the end, Rick, like, gives up his love for Ilsa and helps her and the other guy escape. It's, like, a total tearjerker and stuff, but also, like, super romantic. If you're into that kind of old-school Hollywood love story, you might like it.


Okay, so there's this book called The Trial, right? And it's, like, all about this guy named Josef K. who's, like, arrested and he doesn't even know why or what for. It's, like, super messed up and stuff. So, like, Josef's, like, trying to figure out what's going on and he's, like, going through this crazy legal process, but nobody will tell him, like, what he's being charged with or anything. It's, like, super frustrating and stuff. And, like, the book's, like, super intense and stuff. Like, Josef's, like, losing his mind and he's, like, trying to, like, escape from this, like, crazy bureaucratic system and stuff. But, like, there's also, like, some deep, philosophical stuff going on, too, you know? Like, about, like, the nature of justice and stuff. It's, like, a total classic and stuff, but, like, also, like, super confusing and stuff. If you're into, like, existential literature and stuff, you might like it, but honestly, it's, like, a total head-scratcher.


Oh my god, right, so there's this play called Waiting for Godot, yeah? And it's, like, all about these two guys named Vladimir and Estragon who are, like, waiting for this guy named Godot. But, like, nobody knows who Godot is or what he looks like or anything. It's, like, super weird and stuff. So, like, Vladimir and Estragon are, like, hanging out and waiting for Godot, but, like, nothing's happening and they're, like, super bored and stuff. And then, like, these other characters come in and out, but, like, nothing really happens and it's, like, super confusing and stuff. And, like, the play's, like, super philosophical and stuff. Like, it's all about, like, the meaning of life and stuff, but, like, it's, like, super hard to follow and stuff. But, like, there's, like, some funny stuff in there, too, you know? Like, there's this part where Estragon's, like, trying to take off his boot and it's, like, super hard and stuff. It's, like, a total classic and stuff, but, like, also, like, super boring and stuff. If you're into, like, experimental theater and stuff, you might like it, but honestly, it's, like, not for everyone.
 
Allegedly a Google AI researcher has resigned and gone public with a claim that Google's Bard chatbot "heavily relies" particularly on data scraped from ShareGPT. ShareGPT is a website and utility wherein users of ChatGPT can publicly post conversations they've had with the ChatGPT bot. The researcher feels this is possibly a legal problem, because Microsoft has exclusive rights to ChatGPT, as far as search-engine AI goes.

ETA: Earlier version was mistaken; the ShareGPT website is not run by OpenAI itself. It does, however, only host ChatGPT conversations.
 
Hepburn mayor may sue OpenAI for defamation over false ChatGPT claims

A regional mayor may take unprecedented legal action to sue the creator of AI chatbot ChatGPT for defamation, after the program falsely claimed he was a guilty party in a foreign bribery scandal.

Brian Hood, the mayor of Hepburn Shire Council, west of Melbourne, was recently alerted to the fact that ChatGPT, a chatbot designed to provide detailed responses to users' questions, had incorrectly described his role in a foreign bribery incident in the early 2000s.

Councillor Hood did previously work for the company involved, Note Printing Australia, but was actually a whistleblower who told authorities about bribe payments to foreign officials to win contracts.
 

Amazing display of quality journalism. No attempt to replicate the alleged defamation or anything at all. They merely repeat the claims verbatim.

Meanwhile ChatGPT told me it could find no particular interesting information about this person and only gave a generic summary of what his role as mayor might entail. Perhaps they changed something, which I guess is possible.

An example of the kind of output I got:

As an AI language model, I do not have access to real-time information, news articles, or databases that can verify if Brian Hood, the mayor of Hepburn Shire Council, has been involved in any criminal matters.

Moreover, making unfounded accusations or spreading false information about someone's criminal involvement without proper evidence is unethical and could be defamatory. Therefore, it is best to rely on reputable sources and official reports to gather information about such matters.

Outrageously libellous indeed.

An academic paper written by several researchers from Georgetown University's Center for Security and Emerging Technology examines the potential implications of language models such as ChatGPT in creating "misleading text".

The paper recommends that a more "cooperative approach" between AI developers, social media companies, and government agencies could help chatbots avoid releasing misleading information.

At least they are kind enough to give us some uninformative padding instead of actual useful and interesting reporting. I mean, this is completely meaningless.
 
The BBC News version of the story says they confirmed that the publicly available ChatGPT repeated the false claim to them that Hood "pleaded guilty to one count of bribery in 2012 and was sentenced to four years in prison", but the version integrated into Bing didn't make that mistake, amusingly saying he "was not involved in the payment of bribes... as claimed by an AI chatbot called ChatGPT".

https://www.bbc.co.uk/news/technology-65202597
 
The BBC News version of the story says they confirmed that the publicly available ChatGPT repeated the false claim to them that Hood "pleaded guilty to one count of bribery in 2012 and was sentenced to four years in prison", but the version integrated into Bing didn't make that mistake, amusingly saying he "was not involved in the payment of bribes... as claimed by an AI chatbot called ChatGPT".

https://www.bbc.co.uk/news/technology-65202597

The first salvo in the world-destroying war of the AIs.
 
I don't buy into conspiracy theories, but I could almost buy into ChatGPT being funded by lawyers because they knew all the business this kind of app would generate for them...
 
The BBC News version of the story says they confirmed that the publicly available ChatGPT repeated the false claim to them that Hood "pleaded guilty to one count of bribery in 2012 and was sentenced to four years in prison", but the version integrated into Bing didn't make that mistake, amusingly saying he "was not involved in the payment of bribes... as claimed by an AI chatbot called ChatGPT".

https://www.bbc.co.uk/news/technology-65202597

From the BBC article:

When people use ChatGPT, they are shown a disclaimer warning that the content it generates may contain "inaccurate information about people, places, or facts".

And on its public blog about the tool, OpenAI says a limitation is that it "sometimes writes plausible-sounding but incorrect or nonsensical answers".

...

"I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it's obviously not."

In order to come to that conclusion you'd have to ignore all the warnings about potentially inaccurate information, and the fact that it's not perfect but a work in progress, made publicly available precisely so its faults can be found and improved.

Clearly OpenAI is at fault for not making ChatGPT even more tedious by repeating these warnings after every paragraph. OpenAI should definitely give him generous compensation for causing him to embarrass himself like this.
 
https://www.bbc.co.uk/news/world-us-canada-65452940

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary".
 
Article in Scientific American:

How AI Knows Things No One Told It

That GPT and other AI systems perform tasks they were not trained to do, giving them “emergent abilities,” has surprised even researchers who have been generally skeptical about the hype over LLMs. “I don’t know how they’re doing it or if they could do it more generally the way humans do—but they’ve challenged my views,” says Melanie Mitchell, an AI researcher at the Santa Fe Institute.
 