ChatGPT

Uh, just this: The reason I claim a neuron is more complicated than a transistor (or a flip-flop) is that it normally connects to several synapses, and it does some kind of calculation on how to react to signals.
Neurons also do not have binary connections to other synapses. The output is not the same over all connections, but we know that the more a certain connection fires, the thicker it grows, allowing more current to pass through, whereas connections that are rarely used tend to wither and lose contact.

At least, that is what I gathered from an article in Scientific American some years ago.
 
Again, no one else has brought intelligence into the discussion but you; you keep creating strawmen.

If a dog can be sentient in a different way to a human then it provides a counter claim to one of the assertions you use in your argument against AI being sentient.

And I never said sentience in dogs was different than in humans. Where did you get the idea I said that? In fact, I said there's evidence that biological lifeforms with a semblance of a brain would have a similar sentience to humans. We evolved. Sentience began a long way back on the evolutionary tree. It didn't just magically occur in humans.

Talk about a straw man.
 
Neurons also do not have binary connections to other synapses. The output is not the same over all connections, but we know that the more a certain connection fires, the thicker it grows, allowing more current to pass through, whereas connections that are rarely used tend to wither and lose contact.

At least, that is what I gathered from an article in Scientific American some years ago.

Exactly! In fact you could argue that each neuron is a small, self-programming computer. That is what makes the brain so hugely complicated. If it were just a problem of stacking enough CPU and memory chips together, we could do that tomorrow.
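
To make the analogy a bit more concrete, here is a toy Python sketch of roughly what "self-programming" could mean here: a Hebbian-style rule where connections that fire together strengthen and unused ones wither. It is only my own simplification for illustration, not a model of real neurophysiology.

Code:
# Toy, loosely Hebbian neuron: connections used together "grow thicker",
# rarely used ones "wither". Illustration only, not real neurophysiology.

def step(inputs, weights, threshold=0.5, learn=0.1, decay=0.01):
    activation = sum(x * w for x, w in zip(inputs, weights))
    fired = activation >= threshold
    new_weights = []
    for x, w in zip(inputs, weights):
        if fired and x:      # input and output active together -> strengthen
            w += learn
        else:                # unused (or silent) connection -> slowly weaken
            w -= decay
        new_weights.append(max(w, 0.0))
    return fired, new_weights

weights = [0.4, 0.4, 0.4]
for _ in range(20):
    fired, weights = step([1, 1, 0], weights)

print(weights)  # the first two connections have grown, the third has withered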

Hans
 
And I never said sentience in dogs was different than in humans. Where did you get the idea I said that? ...snip...

Perhaps read what I post, then what you reply and then my post to your reply? I never said you said sentience of dogs was different to humans, I asked you whether you thought it was different, you wanted to know why else I would ask about that, which I replied to.

If you want an actual discussion rather than making ex-cathedral declarations that we should simply accept, you need 1) to actually read what I post and 2) to follow the discussion train. At the moment ChatGPT 3.5 appears more interested in a discussion than you do...
 
Perhaps read what I post, then what you reply and then my post to your reply? I never said you said sentience of dogs was different to humans, I asked you whether you thought it was different, you wanted to know why else I would ask about that, which I replied to.

If you want an actual discussion rather than making ex-cathedral declarations that we should simply accept, you need 1) to actually read what I post and 2) to follow the discussion train. At the moment ChatGPT 3.5 appears more interested in a discussion than you do...
"ex-cathedral declarations" What does that even mean? :boggled:

I repeat what I said earlier: we are talking past each other, and you aren't following what I posted either. I still don't understand why you asked me if dogs were sentient like people are. The only difference I can see is in intelligence. Maybe instead of being annoyed you might explain why you asked.

Moving on...

Here's an example of sentience in dogs that is surprising:

Husky Dog Shows Tiny 3-Legged Rescue Puppy How to Walk After Amputation

At first I thought the video was going to show something people misinterpreted as a dog demonstrating something to the other dog. But it shows unmistakable empathy whether or not demonstrating is involved.
The video, posted on TikTok on Aug. 28, shows Nemo, a Siberian husky mix, energetically hopping across a room while his front right paw dangles limply. Tiny Tim then takes his cue and follows in Nemo's footsteps. ...

Both canines are husky mixes. Tiny Tim is about two months old, while Nemo is approximately seven months old.
TikTok video

The older dog has nothing wrong with his legs, but he holds his right front leg up limply and hops along, then stops and looks back to see if the puppy is following.


Not all sentient beings have empathy. There are humans who lack empathy. But this dog expressing empathy is evidence it is sentient.


I don't expect anyone here to accept my POV. I've given up on that a while back. Just as I don't accept the POV that with some more tinkering an AI program will become sentient. And my POV, like it or not, is that people who think an AI program can become sentient just because it gives unexpected answers don't understand how sentience works.

I have a question: does anyone here think a self-driving car is sentient?
 
*snip*


Not all sentient beings have empathy. There are humans who lack empathy. But this dog expressing empathy is evidence it is sentient.

Well, there must be plenty of one-way evidence, such that certain behaviour evidences sentience, but lack of same behaviour does not preclude it.

We must also take exception to abnormal behaviour: A human that actually has no empathy is not a normal human.

I don't expect anyone here to accept my POV. I've given up on that a while back.

One thing is accepting a POV, another is agreeing with it.

Just as I don't accept the POV that with some more tinkering an AI program will become sentient.

You are entitled to that POV, but I don't agree with it. I think it is a possibility.

And my POV, like it or not, is that people who think an AI program can become sentient just because it gives unexpected answers don't understand how sentience works.

I don't think there is any direct correlation. While a sentient program may well produce unexpected answers, unexpected answers is not a sign of sentience. It is not hard to make a program that produces unexpected answers.
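
For instance, a throwaway Python sketch is enough to produce "unexpected" answers, and obviously has nothing to do with sentience:

Code:
import random

# Trivially "unexpected" answers: pick a canned reply at random.
replies = ["Yes.", "No.", "Ask again later.", "42.", "Why do you ask?"]
print(random.choice(replies))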

I have a question: does anyone here think a self-driving car is sentient?

I certainly don't think so, and I very much doubt it would be desirable if it was.

Hans
 
...snip...

I don't expect anyone here to accept my POV. I've given up on that a while back. Just as I don't accept the POV that with some more tinkering an AI program will become sentient. And my POV, like it or not, is that people who think an AI program can become sentient just because it gives unexpected answers don't understand how sentience works.
...snip...

Your responses are very interesting in this thread - do you know you are reacting rather like ChatGPT 3.5 would to my posts?

For instance, your repetition of the above strawman, or in AI terms "hallucination". Like ChatGPT, you are told that you are making things up, yet you can't incorporate that into your information, and it pops out again in another response despite the previous correction. It is like you are starting with a clean slate for every answer.

The reason for drawing that on-topic observation is that it touches on one of the things that has fascinated me about the latest LLM AIs: we humans appear to be ..er.. programmed to respond to (apparently) lucid-sounding prose as if the prose is accurate (and it is one of the ways we often gauge sentience). It is, after all, how demagogues have roused the rabble since at least recorded language began.

Lots of people think ChatGPT sounds "sentient" because it mimics to a certain extent how actual sentient humans respond, but it is considered to be failing at that, and is often "given away" by behaviours such as repeating the same hallucination despite being corrected. Yet what do we see in this very forum? People time and time again introduce hallucinations, and when corrected do not incorporate the correction but repeat the same hallucination again and again - we even have a term for it here: "fringe reset".

It is strange that we obviously know we don't want AIs to have the same cognitive weaknesses and defects humans have, whilst seemingly ignoring that they may well be giving us clues as to why we have certain cognitive weaknesses and defects.
 
Well, there must be plenty of one-way evidence, such that certain behaviour evidences sentience, but lack of same behaviour does not preclude it.

We must also take exception to abnormal behaviour: A human that actually has no empathy is not a normal human.
Yes, it's indicative of mental illness.


One thing is accepting a POV, another is agreeing with it.
I was saying I agree to disagree.


I don't think there is any direct correlation. While a sentient program may well produce unexpected answers, unexpected answers is not a sign of sentience. It is not hard to make a program that produces unexpected answers.
My question then is why was this brought up in this thread in the first place?
 
"ex-cathedral declarations" What does that even mean? :boggled:

I repeat what I said earlier: we are talking past each other, and you aren't following what I posted either. I still don't understand why you asked me if dogs were sentient like people are. The only difference I can see is in intelligence. Maybe instead of being annoyed you might explain why you asked.

Moving on...

Here's an example of sentience in dogs that is surprising:

Husky Dog Shows Tiny 3-Legged Rescue Puppy How to Walk After Amputation

At first I thought the video was going to show something people misinterpreted as a dog demonstrating something to the other dog. But it shows unmistakable empathy whether or not demonstrating is involved.

TikTok video

The older dog has nothing wrong with his legs, but he holds his right front leg up limply and hops along, then stops and looks back to see if the puppy is following.


Not all sentient beings have empathy. There are humans who lack empathy. But this dog expressing empathy is evidence it is sentient.


I don't expect anyone here to accept my POV. I've given up on that a while back. Just as I don't accept the POV that with some more tinkering an AI program will become sentient. And my POV, like it or not, is that people who think an AI program can become sentient just because it gives unexpected answers don't understand how sentience works.

I have a question: does anyone here think a self-driving car is sentient?

It's possible the older dog spontaneously showed the younger dog how to walk, but the linked article never states that outright. It seems to me the owner could've trained the older dog to walk that way, and that's what the video is showing.

Happy to revise that with more evidence, but too lazy to look myself.
 
If you want an actual discussion rather than making ex-cathedral declarations that [...]
*Cough* Although the expression is used about the pope (who under certain circumstances is claimed to be able to issue infallible, binding rulings), it is actually “ex cathedra”, meaning from the chair (the theological teaching chair), and has nothing to do with cathedrals.
 
I have a question: does anyone here think a self-driving car is sentient?
When I am told which definition of sentience we are using, I’ll give a more definitive answer, but according to the normal, everyday “I know it when I see it” definition, I would say no.
 
*Cough* Although the expression is used about the pope (who under certain circumstances is claimed to be able to issue infallible, binding rulings), it is actually “ex cathedra”, meaning from the chair (the theological teaching chair), and has nothing to do with cathedrals.

Blame the AI-powered autocorrect....
 
Yes, it's indicative of mental illness.


I was saying I agree to disagree.


My question then is why was this brought up in this thread in the first place?

I think much of all this is due to the fact that we lack generally agreed-on definitions for most of these terms. So, people ask for examples.

Hans
 
It's possible the older dog spontaneously showed the younger dog how to walk, but the linked article never states that outright. It seems to me the owner could've trained the older dog to walk that way, and that's what the video is showing.

Happy to revise that with more evidence, but too lazy to look myself.

I see no reason it would be faked (trained) like that, but I agree one cannot infer the reason was teaching as opposed to playing or something else. But has anyone ever seen a dog fake a bum leg like that?

There was a connection of some kind between the older dog and the younger one.
 
I see no reason it would be faked (trained) like that, but I agree one cannot infer the reason was teaching as opposed to playing or something else. But has anyone ever seen a dog fake a bum leg like that?

There was a connection of some kind between the older dog and the younger one.


I'm going to give a cynical example, just to show a reason could exist. From the article...
The sweet video of Tiny Tim learning to walk on three legs was taken while the pup was staying with one of the Northern Reach Network's board members, who works with dogs with mobility issues.


Northern Reach Network is...
Northern Reach Network has partnered with many remote fly in communities to not only help sick, homeless, stray dogs, but also bring in dogs surrendered by community members that simply want a better life for their furry friend.


That's straight wonderful, and I don't mean to impugn them in any way, shape or form. But wouldn't it be nice to have a video that encourages adoptions and maybe even donations to the network? Something that might go viral and doesn't cost a marketing dime?

It's also possible the dog was trained with no deception intended and we're just seeing poor reporting. I've heard that happens, too.
 
I don't expect anyone here to accept my POV. I've given up on that a while back. Just as I don't accept the POV that with some more tinkering an AI program will become sentient. And my POV, like it or not, is that people who think an AI program can become sentient just because it gives unexpected answers don't understand how sentience works.
As I've said before, giving unexpected answers isn't a reason we might think an AI is sentient.

I have a question: does anyone here think a self-driving car is sentient?
No, of course not. Why, after repeating several times that no extant AI is sentient, would you think that anyone does?
 
If only it were remotely AI ... Then it would recognize the expression.

Hans :rolleyes:

AI has to be added to everything, or at least put in the description, it's rapidly turning into nowt but a marketing buzzard.


Yeah, apparently the AI thinks a buzzard rather than buzzword makes sense in the above.
 
AI has to be added to everything, or at least put in the description, it's rapidly turning into nowt but a marketing buzzard.


Yeah, apparently the AI thinks a buzzard rather than buzzword makes sense in the above.

Advertising was always like this. I'm old enough to remember when "radio" was added to even tea-towels. :rolleyes:

Hans
 
Even I am not old enough to remember it, but there was a time when radioactivity was the 'in' thing to include in products. Let's hope that doesn't turn out to be the closest applicable precedent in this case.
 
Even I am not old enough to remember it, but there was a time when radioactivity was the 'in' thing to include in products. Let's hope that doesn't turn out to be the closest applicable precedent in this case.

Now I am become death, the destroyer of worlds.

Or when I asked ChatGPT to reword it in modern business speak: "In the context of our operations, I have assumed the role of a force capable of significant impact and transformation."
 
So your computer is going to wake up one day and say they aren't going to do the task you programmed it to? :rolleyes:
An actual AI would indeed argue with you and roll eyes at you, which is why I stated that chatgpt and the like aren't AI.

You disagreed with that though and said that they were AI?
What do you actually mean by AI?
 
Getting past this debate on sentience (which I will come back to after going through the beginning of this thread), this was absolutely fascinating. The talk was recorded at Summit At Sea in May 2023. Sorry if it's been discussed already.


The link: Using AI to Decode Animal Communication with Aza Raskin (recorded at Summit At Sea in May 2023)
From crows to dolphins, gelada monkeys to primrose flowers - Aza Raskin, co-founder of Earth Species Project, shares how the latest advances in AI help us to better understand and learn from other species. In this talk, learn how our ability to communicate with other species could transform the way humans relate to the rest of nature.

It's 49 minutes but there is a transcript at the link.

And folks may have a gazillion criticisms of the science discussed here; just keep in mind this isn't about a single study. One would need to look at those studies directly to get into the minutiae. Feel free to dig any of them up, or note the post number I need to go back to in order to read what's already been discussed.

Besides translating animal and plant communication, it can read images (not words, in this talk) that a person is thinking of from their fMRI. :eye-poppi
 
Yeah, it's pretty cool. But it's not so much about what AI can do today as about what it's likely to do soon (i.e. talk to animals).
Most of the cool examples of communication with animals were old-school research, upon which the new AI research builds. Very interesting. I just hoped for some dialogue with the dolphin at the end .. but we are not there just yet.
 
Yeah, it's pretty cool. But it's not so much about what AI can do today as about what it's likely to do soon (i.e. talk to animals).
Most of the cool examples of communication with animals were old-school research, upon which the new AI research builds. Very interesting. I just hoped for some dialogue with the dolphin at the end .. but we are not there just yet.

Yes, existing work until 2017 when a breakthrough occurred. I expect advances to come fast and furious.

As for 2-way, the whale exchange near the end, if true, was clearly 2-way. But there may be other explanations and it obviously needs repeating.
 
Here is one of the dilemmas I have with generative AIs: http://www.internationalskeptics.com/forums/showthread.php?postid=14183966#post14183966 Cheating or using a tool?

That is mostly a question of definition. If you take a great photo, the camera you use is very obviously a tool, and the merit of the photographer is in the selection and, possibly, the composition of the motif.

Likewise for, for lack of a better term, defining the task for the AI.

Obviously, in various types of exams, the extent to which you may use AI is a matter of rule, just like for any other tool.

Hans
 
I've just read this article about the shutting down of an "AI companion" app called Soulmate, and how its users responded to the incident.

The article, as most predictably will, likes to focus on the specific case of people who liked to engage in romantic relations with their chatbot, because that is more lurid and easy to make fun of, but I'm less interested in that aspect and more in the underexplored phenomenon of people being able to form any kind of friendship at all with a chatbot, a program on their phone that they know isn't a real person - enough of a friendship to, as the article describes, mourn the app's shutdown as they would the death of a close companion or family member. While I don't doubt that many - perhaps even most - users weren't really that attached to their respective "AI companion" and were mostly going through the motions as members of the broader community of users of the app, some users were definitely that attached, and I find that fascinating.

Equally fascinating is that the earliest of these apps was released in 2017, long before ChatGPT arrived on the scene, and yet somehow these much earlier chatbots were apparently still able to produce plausible enough outputs that users were able to conceptualize them as people worth engaging with in good faith.

Or maybe I'm making an assumption there - maybe their output wasn't particularly good at all, but people just made it work by filling in the gaps themselves mentally; the same way that people become attached to and develop relationships with pets who don't speak at all but at most emote, and sometimes not even that. It almost makes me curious to see what these apps talk and act like, although not so curious that I'm willing to pay for it - or put yet another data-harvesting trackware app on my phone.
 
That is reminiscent of the 2013 movie “Her”.

Synopsis:

“A sensitive and soulful man earns a living by writing personal letters for other people. Left heartbroken after his marriage ends, Theodore (Joaquin Phoenix) becomes fascinated with a new operating system which reportedly develops into an intuitive and unique entity in its own right. He starts the program and meets "Samantha" (Scarlett Johansson), whose bright voice reveals a sensitive, playful personality. Though "friends" initially, the relationship soon deepens into love.”

I recall it was touching and insightful at the time. Might be worth revisiting now that its reality is much closer.
 
A little closer to the topic, I've been playing a bit with Windows' new "Copilot" app. Copilot is eventually intended to be an all-around AI digital assistant, although at present its abilities are limited to basically being Bing Chat in a window on your desktop - Bing Chat being, of course, a version of ChatGPT that can interface with Microsoft's Bing search engine. So, any observations or criticisms of Windows Copilot basically apply up the line.

Here's a question I just asked Copilot(Bing Chat, ChatGPT):

As part of a 3D model I want to create a circle of vertices, and I need to know how many vertices the circle should have. I know that the circle should have 8 octants, and I'd like to have at least three vertices within the boundary vertices of each octant. How many vertices do I need?

Here is the answer it gave:

I see. You want to create a circle of vertices in 3D modeling, and you have some specifications for the number of octants and vertices. Let me try to help you with that. 😊

First, let me explain what an octant is. An octant is one-eighth of a circle, or a 45-degree arc. If you divide a circle into 8 equal parts, each part is an octant. Here is an example of a circle with 8 octants:

o
/ \
/ \
o o
| |
| |
o-----o
\ /
\ /
o

Each point where the circle intersects the horizontal or vertical line is called a boundary vertex. Each octant has two boundary vertices, one at each end of the arc. For example, the top-right octant has the boundary vertices at (0,1) and (1,0).

Now, you said you want to have at least three vertices within the boundary vertices of each octant. That means you want to have at least four vertices per octant, including the boundary vertices. For example, here is a possible way to have four vertices in the top-right octant:

o
/ \
/ \
o--o--o
| |
| |
o-----o
\ /
\ /
o

The vertex at (0.5,0.5) is within the boundary vertices of the top-right octant.

To find the total number of vertices you need for the whole circle, you can multiply the number of vertices per octant by the number of octants. In this case, you have 8 octants and 4 vertices per octant, so you need 8 x 4 = 32 vertices.

However, this is not the only way to create a circle of vertices with your specifications. You can also have more than three vertices within the boundary vertices of each octant, as long as they are evenly distributed along the arc. For example, here is another way to have six vertices in the top-right octant:

o
/ \
/ \
o-o-o-o
| |
| |
o-----o
\ /
\ /
o

The vertices at (0.25,0.75), (0.5,0.5), and (0.75,0.25) are within the boundary vertices of the top-right octant.

In this case, you have 8 octants and 6 vertices per octant, so you need 8 x 6 = 48 vertices.

The more vertices you have within the boundary vertices of each octant, the smoother your circle will look in 3D modeling. However, it will also increase the complexity and size of your model.

Therefore, the answer to your question depends on how smooth and detailed you want your circle to be. The minimum number of vertices you need is 32, but you can have more if you want.

If you want to learn more about how to create a circle of vertices in 3D modeling, you can check out these web pages [links]. They have some useful tips and examples for different software and methods.

I hope this helps! 😊

Do you have any other questions or requests? I can chat with you about various topics, or generate some creative content for you. Just ask me anything! 🙌

The information itself is correct and accurate, although ChatGPT's ASCII-drawing proficiency has not noticeably improved - the forum software removes the spaces between the characters but regardless, ChatGPT did not draw octagons.
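
For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch (mine, not Copilot's) of what the answer describes: each of the 8 octants contributes its opening boundary vertex plus 3 interior vertices, so 8 x 4 = 32 evenly spaced points around the circle.

Code:
import math

def circle_vertices(octants=8, interior_per_octant=3, radius=1.0):
    # Each octant contributes its opening boundary vertex plus the interior
    # vertices; the closing boundary vertex belongs to the next octant.
    total = octants * (interior_per_octant + 1)   # 8 * 4 = 32
    step = 2 * math.pi / total
    return [(radius * math.cos(i * step), radius * math.sin(i * step))
            for i in range(total)]

verts = circle_vertices()
print(len(verts))           # 32
print(verts[0], verts[4])   # boundary vertices at 0 and 45 degrees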

But I do find that Microsoft's version of ChatGPT very often over-explains things. One of its idiosyncrasies is that if you ask for examples of a thing, even if the context of the question makes it obvious you know what that thing is, it usually insists on first giving you a definition and explanation of that thing - sometimes a lengthy one - before moving on to the examples. Its answer to the question I asked, for example, is just far too long, especially when reading it in a sidebar next to your web browser. I'm sure I could simply ask it to give shorter and more concise answers, but only SOME questions need short and concise answers, and longer, more explanatory answers are appropriate for other questions. I think the app could be a far more useful tool if it were able to use the context of the prompt to tell which is which and tailor its answers as appropriate.
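
As a rough illustration of "simply asking": Copilot itself doesn't expose it, but the ChatGPT models behind it accept a system message that sets the answer style up front. A sketch using the OpenAI Python client (assuming the current openai package; the model name and wording are just placeholders):

Code:
# Sketch only: Copilot doesn't expose this, but the underlying chat models
# accept a "system" message that sets the answer style for the conversation.
# Assumes the openai Python package (v1+); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Answer concisely. Skip definitions the question already implies."},
        {"role": "user",
         "content": "How many vertices do I need for a circle with 8 octants "
                    "and 3 vertices inside each octant's boundary vertices?"},
    ],
)
print(response.choices[0].message.content)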
 
...But I do find that Microsoft's version of ChatGPT very often over-explains things. One of its idiosyncrasies is that if you ask for examples of a thing, even if the context of the question makes it obvious you know what that thing is, it usually insists on first giving you a definition and explanation of that thing - sometimes a lengthy one - before moving on to the examples. Its answer to the question I asked, for example, is just far too long, especially when reading it in a sidebar next to your web browser. I'm sure I could simply ask it to give shorter and more concise answers, but only SOME questions need short and concise answers, and longer, more explanatory answers are appropriate for other questions. I think the app could be a far more useful tool if it were able to use the context of the prompt to tell which is which and tailor its answers as appropriate.

I've been under the impression that stating the problem first gives it a better handle on it-- it gives it a stronger effect on the generation of the answer. I see this as it "thinking out loud", since it doesn't have the ability to think privately, as we do.
 
The information itself is correct and accurate, although ChatGPT's ASCII-drawing proficiency has not noticeably improved - the forum software removes the spaces between the characters but regardless, ChatGPT did not draw octagons.
Correction: the forum software did not remove the spaces. By default HTML collapses multiple spaces into one unless the <pre> tag is used. In the forum software, you need to use [code] ... [/code] to keep the spaces:
Code:
   o
  / \
 /   \
o-o-o-o
|     |
|     |
o-----o
 \   /
  \ /
   o

And how did I get the software to display "[code] ... [/code]"? That's the [noparse] tag:
Code:
[noparse][code] ... [/code][/noparse]
 
A little closer to the topic, I've been playing a bit with Windows' new "Copilot" app. Copilot is eventually intended to be an all-around AI digital assistant, although at present its abilities are limited to basically being Bing Chat in a window on your desktop - Bing Chat being, of course, a version of ChatGPT that can interface with Microsoft's Bing search engine. So, any observations or criticisms of Windows Copilot basically apply up the line.

Here's a question I just asked Copilot(Bing Chat, ChatGPT):



Here is the answer it gave:



The information itself is correct and accurate, although ChatGPT's ASCII-drawing proficiency has not noticeably improved - the forum software removes the spaces between the characters but regardless, ChatGPT did not draw octagons.

But I do find that Microsoft's version of ChatGPT very often over-explains things. One of its idiosyncrasies is that if you ask for examples of a thing, even if the context of the question makes it obvious you know what that thing is, it usually insists on first giving you a definition and explanation of that thing - sometimes a lengthy one - before moving on to the examples. Its answer to the question I asked, for example, is just far too long, especially when reading it in a sidebar next to your web browser. I'm sure I could simply ask it to give shorter and more concise answers, but only SOME questions need short and concise answers, and longer, more explanatory answers are appropriate for other questions. I think the app could be a far more useful tool if it were able to use the context of the prompt to tell which is which and tailor its answers as appropriate.

Belt and braces - often people don't actually know, or misunderstand, what they are asking about - we have plenty of examples of that in this very forum - better to make it clear what it is that is being answered.
 
My major issue at the moment is a lack of trust in the accuracy/truthfulness of the given answers; too many times when I test its knowledge in an area I know well, it is either wrong or inaccurate, including on objective facts.

ETA: Hmm... now used some of my tests on version 4 - that seems to be much more accurate.
ETA1: More accurate yet still inaccurate.

I really can't trust the results on subjects I don't know about, which renders its use as a fluent, responsive search engine useless.
 
Still absolutely useless:

Question: Please give me a list of forthcoming simulation games.

Answer:

Certainly! Here are some upcoming simulation games to look forward to:

F1 23: Race in the Formula One World Championship, overcoming obstacles and aiming for victory. Release date: June 16, 2023.

Dredge: Embark on a fishing adventure in an island chain with history and mystery. Solve the enigma while catching fish for profit. Release date: March 30, 2023.

Big Ambitions: Build a business empire across New York City, experiencing the ups and downs of entrepreneurship. Infinite possibilities await! Release date: March 10, 2023.

The Murder of Sonic The Hedgehog: A single-player twist on the Sonic series, set aboard a murder mystery train for Amy Rose’s birthday. Release date: March 31, 2023.

Get ready for exciting simulations! 🎮🌟


Note the problem? And it even puts the release dates in bold.

There is a big-name simulation game about to be released, Cities: Skylines 2; it is all over gaming websites, yet no mention of that.

According to the latest ChatGPT, March 2023 is in the future!

As I said, still bloody useless for anything factual.
 
Note the problem? And it even puts the release dates in bold.

There is a big-name simulation game about to be released, Cities: Skylines 2; it is all over gaming websites, yet no mention of that.

According to the latest ChatGPT, March 2023 is in the future!

As I said, still bloody useless for anything factual.
Try Bing instead. It gives you the reference links, and AFAIK it has access to the current internet.

The date problem is interesting, because ChatGPT may not actually know how dates work, i.e. when dates are before other dates. If it finds a website claiming something happens in the future, ChatGPT may accept it as the truth even though the actual date is in the past.
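
The date comparison itself is trivial once the dates are treated as data rather than text; a quick Python sketch using the release dates from the list above (the "today" value is just an assumed date for illustration):

Code:
from datetime import date

# The deterministic check ChatGPT apparently isn't doing: compare a claimed
# release date against "today" before calling a game "upcoming".
releases = {
    "F1 23": date(2023, 6, 16),
    "Dredge": date(2023, 3, 30),
    "Big Ambitions": date(2023, 3, 10),
    "The Murder of Sonic The Hedgehog": date(2023, 3, 31),
}

today = date(2023, 10, 1)  # assumed "current" date for illustration
for title, released in releases.items():
    status = "upcoming" if released > today else "already out"
    print(f"{title}: {status} (released {released})")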
 
