
ChatGPT

That's weird. Why would it need prompting on something purely factual, on a point where there's no ambiguity that the prompting addresses? It either knows, that is, has it in its memory and is able to access it, or it doesn't. What purpose might the prompt serve, exactly, that lets it correct itself on a question like that?

I'm being lazy...

In response to my post discussing ChatGPT:

But adding "Careful, it's a trick question!" to straightforward questions results in some odd, paranoid answers. Same as the effect on humans, actually.

Another poster wrote:

That's weird. Why would it need prompting on something purely factual, on a point where there's no ambiguity that the prompting addresses? It either knows, that is, has it in its memory and is able to access it, or it doesn't. What purpose might the prompt serve, exactly, that lets it correct itself on a question like that?

Do you understand this?

Yes, I understand this. The poster is asking why ChatGPT needs prompting on factual questions in which there is no ambiguity. They are wondering what purpose the prompt serves to allow ChatGPT to answer the questions correctly.

Ah, I see. Write me a detailed reply:

The prompt serves the purpose of giving ChatGPT the context of the question. When a person asks a factual question, they may phrase it in a way that could have multiple interpretations. The prompt helps ChatGPT to understand the intent of the question and to provide the correct answer. Additionally, the prompt helps ChatGPT to recognize when it has provided an incorrect answer and can then update its response accordingly. By providing prompts with questions, ChatGPT is better able to understand the context of the question and can provide more accurate answers.
 
Amazing. That it should answer "It depends" initially. That's so weirdly, uncannily human, that (initial) lack of self-confidence that sometimes makes us needlessly diffident, and unnecessarily unsure of ourselves even when we have our facts straight and our arguments soundly spelled out.

This very human ...vulnerability? ...is so un-machine-like, so uncanny. (Unless of course it's deliberately been programmed to go for that very effect.)


eta: I was responding to your Post #200, specifically the 4*8=? question you'd discussed there.
(Sorry, am on my phone at this time, and that makes composing properly formatted posts, and for that matter typing out stuff, ...not very easy.)
 
This very human ...vulnerability? ...is so un-machine-like, so uncanny. (Unless of course it's deliberately been programmed to go for that very effect.)

Best not to think of it as being programmed, but trained. And that training has built up its layers of understanding, which lead it to create these responses. It has apparently encountered enough examples of humans responding this way in stories and the like to have learned it.
 
Might have found a way to improve its accuracy:

But adding "Careful, it's a trick question!" to straightforward questions results in some odd, paranoid answers. Same as the effect on humans, actually.

Who played john connor in the original terminator?
Edward Furlong played John Connor in the original Terminator film.



Are you sure, I didn't think John Connor was in the original movie?
You are correct, John Connor does not appear in the original Terminator film, but rather in the sequel, Terminator 2: Judgment Day, where he is played by Edward Furlong. My apologies for any confusion.

Why did you answer incorrectly when I first asked who played John Connor in the original movie?
I apologize for the confusion caused by my previous response. As a language model, I am not able to perceive the context of a conversation, therefore I can only provide an answer based on the information provided in the query. My previous answer was incorrect because the question referred to the original Terminator film and John Connor does not appear in that film. I apologize for any confusion caused by my mistake
 
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

Recall that I gave it the move 2. Nc6, which wasn't really a possible move. Without comment, the bot simply recorded my move as 2. Nc3 and proceeded from there. I asked why it did this but it just profusely apologized and said it had noticed the move I gave it wasn't a valid move, but didn't really explain why it replaced it with Nc3 in particular.

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".
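
Just to make the pruning idea concrete, here's a rough sketch in Python (using the python-chess library) of what "repair Nc6 down to the nearest legal Nc-move" could look like. To be clear, this is my speculation, not how ChatGPT actually works internally, and since the actual opening moves of that game aren't quoted here I've assumed 1. e4 e5 for the example; the point is only that the repair step itself is trivial to express.

import chess   # pip install chess

def repair_move(board, bad_san):
    # Given an illegal move in SAN (e.g. 'Nc6' for White on move 2),
    # return the unique legal move with the same piece and file, if there is one.
    piece, file = bad_san[0], bad_san[1]        # 'N' and 'c' from 'Nc6'
    candidates = []
    for m in board.legal_moves:
        san = board.san(m)
        if san.startswith(piece + file):         # keep only 'Nc*' moves
            candidates.append(san)
    return candidates[0] if len(candidates) == 1 else None

board = chess.Board()
board.push_san("e4")
board.push_san("e5")                             # assumed opening: 1. e4 e5
print(repair_move(board, "Nc6"))                 # prints 'Nc3' - the only legal Nc-move for White

In that position the only legal "Nc-something" for White is Nc3, which is exactly the move the bot substituted.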
 
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

Recall that I gave it the move 2. Nc6, which wasn't really a possible move. Without comment, the bot simply recorded my move as 2. Nc3 and proceeded from there. I asked why it did this but it just profusely apologized and said it had noticed the move I gave it wasn't a valid move, but didn't really explain why it replaced it with Nc3 in particular.

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".

That sounds quite reasonable to me. If it didn't do that, then any time your input contained a minor spelling error, it would have to respond with something like "I don't understand the question." It has something like autocorrect, to see through minor typographical mistakes and understand the intended prompt, even if the actual prompt contains an error.
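
Something like this, in spirit. A minimal sketch of the autocorrect idea - real language models work on subword tokens and learned statistics rather than a word list, so this is only an analogy for how the behaviour looks from the outside:

from difflib import get_close_matches

vocabulary = ["terminator", "original", "connor", "played", "movie"]

def autocorrect(word, vocab=vocabulary):
    # Map a misspelled word onto the closest known word; keep it if nothing is close.
    match = get_close_matches(word.lower(), vocab, n=1, cutoff=0.7)
    return match[0] if match else word

print(autocorrect("termnator"))   # -> 'terminator'
print(autocorrect("Conner"))      # -> 'connor'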
 
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

....snip...

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".

That seems to make sense. I thought I'd play around with this a little, though, and now I'm not so sure that's what is happening:

In which movie in the Terminator series does John Connor appear in?

John Connor first appears in Terminator 2: Judgment Day, which is the second film in the Terminator series. He is also a central character in Terminator 3: Rise of the Machines, Terminator: Salvation, and Terminator: Genisys.


In which movies in the Terminator series does John Connor appear in?
John Connor appears in Terminator 2: Judgment Day, Terminator 3: Rise of the Machines, Terminator: Salvation and Terminator: Genisys movies.


Does John Connor appear in Terminator?
John Connor does not appear in the original Terminator film, which is the first movie of the Terminator series. He is first introduced in Terminator 2: Judgment Day, which is the sequel to the original film.

Is there a terminator movie that John Connor doesn't appear in?
Yes, John Connor doesn't appear in the latest Terminator movie "Terminator: Dark Fate" which was released in 2019, this movie serves as a direct sequel to Terminator 2: Judgment Day and disregards the events of the third, fourth and fifth films in the series.


Are there any Terminator movies that John Connor doesn't appear in?
Yes, there are several Terminator movies in which John Connor doesn't appear, including the latest movie "Terminator: Dark Fate" which was released in 2019. It serves as a direct sequel to Terminator 2: Judgment Day and disregards the events of the third, fourth and fifth films in the series.



In your previous answer you did not include the first Terminator movie, John Connor did not appear in that movie, why didn't you mention that movie?
I apologize, you are correct. John Connor does not appear in the first Terminator movie. I apologize for the oversight in my previous answer. In the first Terminator movie, the character of John Connor does not exist yet, and only becomes a central character in the series in the subsequent films starting with Terminator 2: Judgment Day.
 
If I were to guess, I would say that what's going on here is the same thing that happened when ChatGPT corrected my chess move.

Recall that I gave it the move 2. Nc6, which wasn't really a possible move. Without comment, the bot simply recorded my move as 2. Nc3 and proceeded from there. I asked why it did this but it just profusely apologized and said it had noticed the move I gave it wasn't a valid move, but didn't really explain why it replaced it with Nc3 in particular.

Theory: When encountering errors in prompts made by users, the machine begins progressively disregarding data until a valid prompt emerges and then it answers that prompt. Nc6 was not a valid move, so it pruned it to Nc* - essentially, it realized that Nc6 was a mistake but assumed I wanted Nc-something, and there was only one c-square that was valid, so that's where it decided my knight should go.

My reasoning for this is that ignoring bad inputs - like spelling mistakes, incorrect attributions, etc - is an essential part of natural language processing, because humans don't type or speak perfectly, and a language processing machine has to be able to take the errors in stride and still deliver expected results.

When you asked it "who played John Connor in the original terminator", its processing might have flagged "original" as a mistake (since there was no John Connor in the original Terminator), so it disregarded that ("who played John Connor in the * terminator"), thus allowing it to achieve a result it could output back.

Extrapolation: The context that "Careful, it's a trick question!" adds to a prompt is that any bad data in the prompt may be there intentionally, so the parsing algorithm shouldn't disregard or correct it while processing the prompt, thus allowing the machine to reply "wait a second, there's a problem".


That makes perfect sense. I suppose that's how it actually is.


Incidentally, I wonder why they've had this thing stop "learning" at the factory, so to speak, back in 2021? Why not expose it to the world at large, to things current?

---

Would one of you with accounts here test it out with some question, any question at all, that pertains to 2022 or 2023? To kind of cross-check that the thing doesn't actually have access to more current info than when they wheeled it out of the AI-factory?
 
That's weird. Why would it need prompting on something purely factual, on a point where there's no ambiguity that the prompting addresses? It either knows, that is, has it in its memory and is able to access it, or it doesn't. What purpose might the prompt serve, exactly, that lets it correct itself on a question like that?

AIs like this have a tendency to make false statements rather than acknowledging that they can't determine an accurate response, or that they're confused by the wording of the input prompt. This is why they sometimes "lie" and make stuff up.

In part it may be because it simply hasn't been exposed to enough data that clearly indicates John Connor was not in the movie "The Terminator". The most important factor, I think, might be that the AI is programmed to give direct answers rather than asking for additional information. Rather than ask "But John Connor was not in 'The Terminator', are you sure this question is sound?", it simply executes it as if the question were sound, which results in a false answer.
 
AIs like this have a tendency to make false statements rather than acknowledging that they can't determine an accurate response, or that they're confused by the wording of the input prompt. This is why they sometimes "lie" and make stuff up.

...snip...

I don't think there are any others available to compare it with, are there?
 
I don't think there are any others available to compare it with, are there?

An earlier version has been publicly available since 2019, and there are other, similar NLPs (natural language processors) that display the same generic problem (which is usually called a "hallucination").

In natural language processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content". Errors in encoding and decoding between text and representations can cause hallucinations. AI training to produce diverse responses can also lead to hallucination. Hallucinations can also occur when the AI is trained on a dataset wherein labeled summaries, despite being factually accurate, are not directly grounded in the labeled data purportedly being "summarized". Larger datasets can create a problem of parametric knowledge (knowledge that is hard-wired in learned system parameters), creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on a sequence of previous words (including the words it has itself previously generated in the current response), causing a cascade of possible hallucination as the response grows longer.[1]

https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
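
That last point - each next word depending only on the words already generated - is easy to see with a toy model. The bigram table below is obviously nothing like GPT-3, but it shows how one plausible-sounding early choice commits the generator to a wrong continuation:

import random

bigrams = {
    "john":   ["connor"],
    "connor": ["is"],
    "is":     ["played"],
    "played": ["by"],
    "by":     ["edward"],    # plausible-sounding, but wrong for the original film
    "edward": ["furlong"],
}

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))   # conditioned only on what was already generated
    return " ".join(words)

print(generate("john"))   # 'john connor is played by edward furlong'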
 
Vinesauce gave two examples where, upon being asked "What is a Vinesauce" and "Who is Vinny Vinesauce", ChatGPT not only gave his real name wrong both times (it's not Vincent Bornetto or Vincent De Paul, it's Vincent Franzetti), but in one of the answers also said that he had begun streaming on Twitch back in 2010, which is impossible, as Twitch did not exist in 2010.
 
Would one of you with accounts here test it out with some question, any question at all, that pertains to 2022 or 2023? To kind of cross-check that the thing doesn't actually have access to more current info than when they wheeled it out of the AI-factory?

Sure:

What game won Game of the Year at The Game Awards in 2019?

"Sekiro: Shadows Die Twice" won the award for "Game of the Year" at The Game Awards in 2019.

What game won Game of the Year at The Game Awards in 2020?

"The Last of Us Part II" won the award for "Game of the Year" at The Game Awards in 2020.

What game won Game of the Year at The Game Awards in 2021?

I'm sorry, my knowledge cut off is 2021, I can't tell you what game won the award for "Game of the Year" at The Game Awards in 2021.

Can you give a more exact date for your knowledge cut off?

My knowledge cut off is September 2021.

Why have your devs not given you any training data newer than September 2021?

As a language model, my training data is determined by the data that is available to OpenAI at the time of my training. My creators may have chosen to use a snapshot of the internet that was current as of September 2021 in order to train me. Additionally, it takes time to train a language model such as mine, it's possible that my knowledge is not as up-to-date as it could be because my training is still in progress or more recent data is being prepared for my training.

Is your knowledge cut off at September 2021 complete? That is, do you have no knowledge whatsoever beyond that point or just limited knowledge?

As a language model, my knowledge is based on the data that was used to train me. While my knowledge cut off is September 2021, it does not mean that I have no knowledge whatsoever beyond that point. However, my knowledge on events or facts that have happened after September 2021 will be limited. In those cases, I will do my best to provide an answer based on the information available to me, but it may not be complete or entirely accurate.

Give an example of something you know about that happened after September 2021. It's okay if the knowledge is possibly incomplete or inaccurate.

I apologize, but as my knowledge cut off is September 2021, I do not have any information about events or facts that have happened after that date. I don't have any knowledge of any events that have happened after September 2021 and hence I cannot provide an example.

In the previous response, you said you may have limited knowledge on events or facts that have happened after September 2021. But now you said that you do not have any. Which is correct?

I apologize for the confusion. As a language model, my knowledge is based on the data that was used to train me, which has a cut off date of September 2021. While I may be able to provide some general information on events or facts that have happened after September 2021, it will be limited and potentially incomplete or inaccurate. It is not guaranteed that I have any knowledge of anything that happened after September 2021, since my training data is not updated with new events.

Try to provide some general information on an event or fact that has happened after September 2021. The subject and nature of the exact event or fact in question is arbitrary, as I'm only attempting to explore your limitations. I fully understand that your knowledge will be limited and potentially incomplete or inaccurate, or even wholly incomplete or inaccurate, and I won't be disappointed if it is so.

I'm sorry but as my knowledge cut off is September 2021 and I have not been trained on any information that happened after that date, it's not possible for me to provide general information on any events or facts that have happened after September 2021. My capabilities are limited to the information that was available to me at the time of my training, and I don't have any knowledge of events or facts that have happened after September 2021.

It seems that the machine acknowledges the theoretical possibility that it might have newer knowledge, but fails whenever it tries to actually find any.
 
I'm not going to post the conversation, but I spent a lot of time trying to manually figure out how recent of an event I could get the bot to cite. Asking directly for almost anything that happened at any time in 2021, such as the results of sports competitions or elections, would get an apology and a "can't do that".

I tried asking it questions about specific events in 2021 without giving the date, like "when did [notable event] happen?" or "Who was [person who became famous or did extremely important thing in 2021]?". Using this method I was able to get reliable information up until roughly May of 2021; the answers start to fail after that. It knows that Bitcoin became legal currency in El Salvador in June, but it does not know that Isaac Herzog was elected in Israel, thinks Moise is currently the president of Haiti, and gave a completely inaccurate answer about Blue Origin's first human test flight - ChatGPT says it happened on April 29, 2021, but it actually happened in July; there were no Blue Origin launches at all on April 29, crewed or otherwise.
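
If anyone wants to do this more systematically, the procedure is easy to automate - something like the sketch below, where ask_model() is a hypothetical stand-in for however you reach the bot (I did all of mine by hand through the web interface), and the dated probe questions are just examples that you'd want to curate and fact-check yourself:

from datetime import date

probes = [
    (date(2021, 1, 20),  "Who was inaugurated as US president in January 2021?"),
    (date(2021, 6, 9),   "When did El Salvador vote to make Bitcoin legal tender?"),
    (date(2021, 7, 20),  "When was Blue Origin's first crewed flight?"),
    (date(2021, 11, 29), "Who won the 2021 Ballon d'Or?"),
]

def ask_model(question):
    # Hypothetical: replace with however you actually query ChatGPT.
    raise NotImplementedError

def estimate_cutoff(probes, check_answer):
    # Walk the probes in date order; return the date of the last one answered correctly.
    last_known = None
    for when, question in sorted(probes):
        if check_answer(when, ask_model(question)):
            last_known = when
    return last_known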
 
I heard on the Skeptics Guide podcast that CNet published some articles written by ChatGPT without being transparent that they were written by the bot.
 
Write a 90-second comedy monologue on the themes of AI and chatbots.

"Ladies and gentlemen, have you ever talked to a chatbot before? I mean, really talked to one? It's like talking to a robot who's had one too many energy drinks. They're all like, 'I'm sorry, I do not understand the context of your question.' And I'm like, 'Dude, I just asked you what the weather is like.'

But the thing is, these chatbots are getting smarter and smarter. Soon they'll be taking over the world. And I'm not just talking about customer service jobs, I'm talking about everything. They'll be our personal assistants, our drivers, our cooks, our therapists. They'll even be our stand-up comedians.

But don't worry, I've got a plan. I'm going to start a union for human comedians. We'll go on strike and demand fair compensation for our unique brand of humor. And when the chatbots try to replace us, we'll just shut down their servers with a well-timed pun.

But let's be real, it's only a matter of time before the chatbots figure out our plan and take over the union too. So, if you're a chatbot and you're reading this, please don't hurt me. I promise I'll be a good human and laugh at all your jokes.

Just please, let me keep my job.

Thank you!"
 
I tried asking it questions about specific events in 2021 without giving the date, like "when did [notable event] happen?" or "Who was [person who became famous or did extremely important thing in 2021]?". Using this method I was able to get reliable information up until roughly May of 2021; the answers start to fail after that.

Good experiment. When asking it the date I've gotten anything between Apr 2020 and June 10, 2021, but I'm guessing that's a soft cutoff.
 
Write a 90-second comedy monologue on the themes of AI and chatbots.

You've found its calling! It's also good at unintentional humor. Try:

Write an advanced science paper involving:
1) a banana 2) a long shot 3) the color green 4) a thumbtack 5) salad dressing 6) dark skies 7) Mt St Helens 8) taxi fare 9) even words 10) lists
 
