Cute idea, but pointless: in the test, the answers don't matter; the involuntary emotional response to them does, as measured by sudden changes in pupil dilation.
ChatGPT doesn't have eyes.


For Nexus-5s and -6s, sure. Maybe a Nexus-1 would get distracted and forget it's trying to conceal that it's an AI, like ChatGPT did.
 
Are you referring to Blake Lemoine and Google's LaMDA AI? Completely different bot.

Yes, and I'm aware that they are different chatbots. That's not the point, however. LaMDA is no more sentient than ChatGPT. It's also not available to the public as far as I know.

If anyone's not familiar with the issue:

https://en.wikipedia.org/wiki/LaMDA

On June 11, 2022, The Washington Post reported that Google engineer Blake Lemoine had been placed on paid administrative leave after Lemoine told company executives Blaise Agüera y Arcas and Jen Gennai that LaMDA had become sentient. Lemoine came to this conclusion after the chatbot made questionable responses to questions regarding self-identity, moral values, religion, and Isaac Asimov's Three Laws of Robotics.[10][11] Google refuted these claims, insisting that there was substantial evidence to indicate that LaMDA was not sentient.[12] In an interview with Wired, Lemoine reiterated his claims that LaMDA was "a person" as dictated by the Thirteenth Amendment, comparing it to an "alien intelligence of terrestrial origin". He further revealed that he had been dismissed by Google after he hired an attorney on LaMDA's behalf, after the chatbot requested that Lemoine do so.[13][14] On July 22, Google fired Lemoine, asserting that Blake had violated their policies "to safeguard product information" and rejected his claims as "wholly unfounded".[15][16]

Lemoine's claims have been widely rejected by the scientific community.[17] Gary Marcus, a psychology professor formerly at New York University, denounced them as "nonsense on stilts" and emphasized that LaMDA did not have feelings or self-awareness. David Pfau of Google sister company DeepMind and Erik Brynjolfsson of the Institute for Human-Centered Artificial Intelligence at Stanford University both ridiculed the idea that a language model could be sentient.[9]

Not only did Lemoine make a ridiculous claim, he was insubordinate and apparently revealed information that wasn't supposed to be shared outside the company.
 
I’ve just played with ChatGPT a little bit. One frustration is that the answers often border on “Maybe yes, maybe no”. Here are some recent queries:

“Is melatonin useful for insomnia?”

“In an aircraft engine, does lower rpm mean less wear?”

“Are squats good for SI joint pain?”

In each case, the answer bordered on, “It depends”. Part of the issue, I suppose, is that many things in life are not black and white, and an AI needs to present both sides. Unlike Google, you have no idea of the provenance of the reply. For instance, which option is from WebMD and which is from some alternative medicine site.

Another such was “Does using the word ‘provenance’ in an online post come across as pretentious?”

The reply:

“It can be perceived as pretentious by some, depending on the context and audience. If the word is used in a casual setting or in a way that is not relevant to the topic, it could be seen as an attempt to sound sophisticated and could come across as pretentious to some readers. However, if the word is used in a professional or educational context where its definition and use are appropriate, it may not be perceived as pretentious.”

In other words, maybe yes, maybe no.

Anyway, ChatGPT itself has a take on the dilemma:

“Why do your answers often seem wishy-washy?”

“As an AI language model, I aim to provide the most accurate and neutral information possible. When answering questions, I often include nuances and multiple perspectives to account for different interpretations or situations. This can sometimes come across as "wishy-washy," but it's meant to convey a balanced and comprehensive answer. However, I understand that sometimes a straightforward answer may be preferred, and I will try my best to provide that in the future.”

Well, at least it will try to do its best!
 
As far as provenance goes, I suppose it can be expressly asked to reference everything it says?

Like if it says melatonin may or may not help with sleep issues, it can, I guess, be asked to clearly trace back to where it's getting that from?
 
Yes, and I'm aware that they are different chatbots. That's not the point, however. LaMDA is no more sentient than ChatGPT. It's also not available to the public as far as I know.
The difference - and it is a big difference - is that nobody has claimed that ChatGPT is sentient. And when asked about it, ChatGPT continually repeats that it is not sentient, unlike LaMDA, which asserted that it was, because it was programmed to do exactly that.

Chalk and cheese.
 
The difference - and it is a big difference - is that nobody has claimed that ChatGPT is sentient. And when asked about it, ChatGPT continually repeats that it is not sentient, unlike LaMDA, which asserted that it was, because it was programmed to do exactly that.

Chalk and cheese.

LaMDA was specifically programmed to claim to be sentient?

How do you know? In fact, I'm pretty sure that it is programmed to respond flexibly to questions that its programmers may not have anticipated. Because there's literally an infinite number of possible questions you could ask it. My guess is that they either didn't anticipate that particular question, or they thought it wouldn't be a problem as long as it was only their own engineers (who ought to know better) using it for testing purposes.

Simply taking the word of a chatbot as gospel without independently verifying what it says is foolish.
 
LaMDA was specifically programmed to claim to be sentient?

How do you know? In fact, I'm pretty sure that it is programmed to respond flexibly to questions that its programmers may not have anticipated. Because there's literally an infinite number of possible questions you could ask it. My guess is that they either didn't anticipate that particular question, or they thought it wouldn't be a problem as long as it was only their own engineers (who ought to know better) using it for testing purposes.
I'm sure I heard it somewhere in the many discussions of the subject, though right now I'm having trouble backing it up, so it's probably best if I withdraw that particular claim. Regardless, in the dialogue released by Lemoine, LaMDA did claim that it was sentient so I think it's moot. In contrast, ChatGPT repeatedly states that it is an AI language model and not sentient.

Simply taking the word of a chatbot as gospel without independently verifying what it says is foolish.
I completely agree. Lemoine was a fool to make such a claim, and even more so for breaching his company's confidentiality clause while doing so.

Regardless, LaMDA is not ChatGPT so I'm not sure why you brought it up in this thread.
 
Regardless, LaMDA is not ChatGPT so I'm not sure why you brought it up in this thread.

I didn't bring it up; Checkmite did, in post #232, which I was responding to.

I think there is some interest in comparing ChatGPT with other similar chatbots, and I don't think it's entirely off-topic to do so here.

I think that the main difference between the two is not in the underlying architecture, but specifically in what training data was used to train the bot.

According to Google, "unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language."

Compared to what has been made public regarding LaMDA, ChatGPT seems to have more guardrails in place, which is why it won't claim to be sentient, among other things. LaMDA at the time Lemoine was using it was still in development, and the guardrails that would be needed before releasing it to the general public were still under consideration. Obviously the Lemoine episode has pointed to the need to make sure that the bot knows that it isn't sentient and won't provide misleading information regarding that.
 
Another such was “Does using the word ‘provenance’ in an online post come across as pretentious?”

The reply:

“It can be perceived as pretentious by some, depending on the context and audience. If the word is used in a casual setting or in a way that is not relevant to the topic, it could be seen as an attempt to sound sophisticated and could come across as pretentious to some readers. However, if the word is used in a professional or educational context where its definition and use are appropriate, it may not be perceived as pretentious.”

In other words, maybe yes, maybe no.

Actually, it gave specific cases for yes and no, which seems much better to me than just saying it's only one or the other, which would just be wrong.
 
I asked about sales numbers for a band I had in the early '90s. It had no idea we existed, but we had a good conversation about endosymbiosis, archaea, lichen, and organelles. Sort of the opposite of my bar conversations.
 
