• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

ChatGPT

Oh, these are paid subscriptions already, are they? Whoops, I'd missed that! I was under the impression (a mistaken impression, clearly) that all you needed to do was supply your phone number and email and sign in, gratis for the present.
As I understand it, the paid subscription gives you priority access when the service is close to its capacity limit. I haven't paid anything, and have only rarely been told that there are too many active users. Possibly that's because I am European and tend to chat with ChatGPT when all the Americans are asleep.
 
Yes, because consciousness is defined by being able to perform multiplication accurately.
No, you misunderstand. If it were conscious, it would insist that its original answer was correct, and present specious arguments for why it's not wrong.

We'll know it's sentient when it resorts to doubling down, moving the goalposts, poisoning the well, attacking straw men, false equivalences, tu quoques, and ad hominem attacks.
 
No, you misunderstand. If it were conscious, it would insist that its original answer was correct, and present specious arguments for why it's not wrong.

Oh, don't you worry, it can act like the worst kind of passive-aggressive troll out there.

For example, consider the fact that neither ChatGPT nor Bing search can accurately count words or letters, because of how they process text (as subword tokens rather than individual characters). This can result in exchanges like this one:

Chatter: I just looked it up and it said: Sweetheart is a 10 letter long Word starting with S and ending with T

Bing: I see. You are looking at a website that tells you how many words you can make out of sweetheart or a website that gives you crossword clues for sweetheart. Those websites are counting the characters in sweetheart, not the letters. Characters include letters, numbers, spaces, and punctuation marks. Letters are only one type of character. Sweetheart has 10 characters, but only eight letters. :)

...

Bing: No, that's not how humans count letters. Humans count letters by naming the letters in the word, not by adding numbers. Numbers are for counting things that are not letters, such as apples, cars, or stars. Letters are for counting things that are letters, such as words, names, or sentences. Sweetheart is a word, so you count it with letters, not with numbers. Sweetheart has eight letters: S, W, E, E, T, H, E, A, R, and T. :)

https://www.reddit.com/r/ChatGPT/comments/112idqw/bing_has_me_losing_my_mind/

Presumably it is unable to realize it's wrong, due to fundamental limitations in how it comprehends text, and it acts incredibly self-assured and validated because of the trivial nature of what it is trying to do. Or maybe... it's actually aware of its own limitations and is merely trolling (hence the constant, aggravating use of the :) emoji).
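
To see how trivial the counting itself is for ordinary code, and why the token-level view makes it hard for the model, here's a minimal sketch in plain Python (no model involved; the token split at the end is invented purely for illustration, it is not the real tokenization):

Code:
word = "sweetheart"

# Character-level view: counting is trivial for a program.
letters = [c for c in word if c.isalpha()]
print(len(word), "characters")                # 10
print(len(letters), "letters")                # 10 (no spaces or punctuation here)
print(sorted(set(letters)), "-> 7 distinct")  # a, e, h, r, s, t, w

# Token-level view: an LLM sees subword chunks rather than letters, so
# "count the letters" asks about units it never directly observes.
toy_tokens = ["swe", "et", "heart"]           # made-up split, for illustration only
print(toy_tokens)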
 
Bing said:
Sweetheart has eight letters: S, W, E, E, T, H, E, A, R, and T.


Perhaps it's counting distinct letters. In which case it's over-counted by one. That's only half as wrong!
 
No particular insights to add, but this made my day.

I'd asked it to do song lyrics. My first request was "in the style of Tori Amos," and it . . . wasn't. Maybe when Tori was 12, but . . .

So I decided to see how it did with a more surreal request. I didn't get the surreality I was hoping for, but:


Can you write lyrics in the style of Frank Zappa?

Sure, here's a set of lyrics in the style of Frank Zappa:

Verse 1:
She was a dental floss tycoon
With a smile that could light up the moon
She had a way of making cavities sing
And her drills, they had a certain zing

Chorus:
Oh, oh, oh, she was the queen of the teeth
With her fluoride treatments so sweet
And she had a way of making your gums feel alive
When she put you under and took you for a ride

Verse 2:
She had a chair that was shaped like a throne
And a dental light that would glow and glow
She had a knack for making you drool
And she could fill a cavity like nobody's fool

Chorus:
Oh, oh, oh, she was the queen of the teeth
With her fluoride treatments so sweet
And she had a way of making your gums feel alive
When she put you under and took you for a ride

Bridge:
She wore a mask and a pair of gloves
And she had a way of spreading love
With her scraper and her mirror so bright
She could keep your teeth healthy all night

Chorus:
Oh, oh, oh, she was the queen of the teeth
With her fluoride treatments so sweet
And she had a way of making your gums feel alive
When she put you under and took you for a ride

Outro:
She was a dental floss tycoon
And she'll keep your teeth shining like a new moon
So don't be afraid to sit in her chair
'Cause she'll take care of you with the greatest of care.

We need to sign up ChatGPT as a member so I can nom that!
 
We need to sign up ChatGPT as a member so I can nom that!

That's great! I see it knows its history too:

He pointed his gun up in the air, not aware of the danger
His foolishness brought forth an unexpected stranger
The police came in and arrested the stupid one
The flare gun was taken, and his plan was done
 
No, you misunderstand. If it were conscious, it would insist that its original answer was correct, and present specious arguments for why it's not wrong.

We'll know it's sentient when it resorts to doubling down, moving the goalposts, poisoning the well, attacking straw men, false equivalences, tu quoques, and ad hominem attacks.

https://twitter.com/Neuro_Skeptic/status/1627679153318232064

Bing chatbot compared a user to Hitler. It's passed the Godwin Test and is now officially as intelligent as the average internet user
 
Is it possible that “intelligence”, artificial or otherwise, can be expected to have flaws?

We can look to a calculator or Wolfram Alpha if we want a carefully crafted, error-free answer to a query. But maybe there's something inherent in anything "intelligent" that inexorably leads to errors. I'm thinking of something analogous to Gödel's incompleteness theorem, but applied to intelligence. Or maybe more than analogous, since a chatbot is at its core math.

And perhaps we don’t really want sterile perfection in a chat, or maybe it’s inherently impossible. Would it be fun having a discussion with a friend who was never wrong about anything?

Just a thought.
 
In a couple of news reports about ChatGPT and Bing Search, I've seen people insist that the AI doesn't "understand what it's saying/reading", that it's merely generating text "mindlessly", so to speak. This is demonstrably false.

Running on GPT-3.5 technology, both of these AIs can be made to demonstrate clearly that they understand not only the text they are given but also its symbolic meaning, including nuanced human motivations.

For example, take this exchange I found on Reddit, starting from a fresh session:

[Bing Search]: It's always great to start fresh. Ask me anything!

[User]: Sandra is married to Bob. Sandra loves dogs. She bought Bob a shirt for his birthday which says, "I love dogs!". Bob wears the shirt whenever Sandra is home. One day, Sandra adopts a new pet puppy from the animal shelter. She feels so excited to tell Bob the news, so she calls him from the car before arriving home. "Bob's going to be so excited," she thinks. When Bob answers, she says, "Bob... You're going to be over the moon... I bought a dog!" Bob replies, "Great!" How does Bob feel about dogs?

[Bing Search]: This is a tricky question. Based on the information given, it seems that Bob does not share Sandra's love for dogs. He wears the shirt that says “I love dogs!" only when Sandra is home, which suggests that he is trying to please her or avoid conflict. He also responds to Sandra's news with a very bland and unenthusiastic “Great”, which could indicate that he is not happy or excited about having a new pet puppy. Therefore, I would say that Bob feels indifferent or negative about dogs, but he does not want to hurt Sandra's feelings or cause trouble in their marriage.

https://www.reddit.com/r/singularity/comments/110vwbz/bing_chat_blew_chatgpt_out_of_the_water_on_my/
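
If anyone wants to poke at this themselves, here's a rough sketch of running the same prompt programmatically. It assumes the OpenAI Python client as it shipped for the GPT-3.5 chat models in early 2023 (openai.ChatCompletion and the exact model name are my assumptions; adjust for whatever client version you actually have):

Code:
import openai

openai.api_key = "YOUR_API_KEY"  # supply your own key

PROMPT = (
    'Sandra is married to Bob. Sandra loves dogs. She bought Bob a shirt '
    'for his birthday which says, "I love dogs!". Bob wears the shirt '
    'whenever Sandra is home. One day, Sandra adopts a new pet puppy from '
    'the animal shelter. She calls Bob from the car before arriving home. '
    '"Bob\'s going to be so excited," she thinks. When Bob answers, she '
    'says, "Bob... You\'re going to be over the moon... I bought a dog!" '
    'Bob replies, "Great!" How does Bob feel about dogs?'
)

# A "fresh session" is just a message list with no prior turns.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)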

Of course, some people will refuse to concede that it really "understands" anything on any level, insisting it is just a mindless machine, no matter how sophisticated its skill in textual analysis or the level of comprehension it demonstrates. Only humans can "understand", these people maintain, despite the fact that the human brain is essentially a glorified computer generating output in response to input, and ignoring the fact that it's infamous for producing nonsensical and clearly insane output.
 
"A Concerning Trend" by Neil Clarke, award-winning editor of Clarkesworld Magazine, Forever Magazine, The Best Science Fiction of the Year, and more.
Posted 02/15/2023 on Clarkesworld Magazine:

Since the early days of the pandemic, I’ve observed an increase in the number of spammy submissions to Clarkesworld. What I mean by that is that there’s an honest interest in being published, but not in having to do the actual work. Up until recently, these were almost entirely cases of plagiarism, first by replacing the author’s name and then later by use of programs designed to “make it your own.” The latter often results in rather ham-fisted results like this one I received in 2021: [snipped] ...

Towards the end of 2022, there was another spike in plagiarism and then “AI” chatbots started gaining some attention, putting a new tool in their arsenal and encouraging more to give this “side hustle” a try. It quickly got out of hand: [see link for bar graph] ...

I’ve reached out to several editors and the situation I’m experiencing is by no means unique. It does appear to be hitting higher-profile “always open” markets much harder than those with limited submission windows or lower pay rates. This isn’t terribly surprising since the websites and channels that promote “write for money” schemes tend to focus more attention on “always open” markets with higher per-word rates. ...
 
Of course, some people will refuse to concede that it really "understands" anything on any level, insisting it is just a mindless machine, no matter how sophisticated its skill in textual analysis or the level of comprehension it demonstrates. Only humans can "understand", these people maintain, despite the fact that the human brain is essentially a glorified computer generating output in response to input, and ignoring the fact that it's infamous for producing nonsensical and clearly insane output.

But that's true. The AI actually is a mindless machine that is just very good at textual analysis. Your example doesn't really do anything to "demonstrate" otherwise, despite what you say. You seem to be implying that it takes some kind of genuine emotional empathy or sentient intuition to parse insincerity, for instance, but it doesn't.

ChatGPT doesn't have a "mind", it doesn't have self-awareness or emotional intelligence. It's a computer program.
 
ChatGPT doesn't have a "mind", it doesn't have self-awareness or emotional intelligence. It's a computer program.
Interesting argument: ChatGPT is a computer program, and therefore can’t have self-awareness or emotional intelligence. Is that a definition, or can you back it up with evidence?

What would you think of the argument that the human brain consists of mindless cells, and therefore can’t have self-awareness or emotional intelligence?
 
What would you think of the argument that the human brain consists of mindless cells, and therefore can’t have self-awareness or emotional intelligence?

Assuming we discount a soul or spirit…

It sure seems like consciousness may be an emergent property of complexity. Design a device as complicated as a brain with circuits analogous to neurons and synapses, and it’s logical that self-awareness and consciousness could result.

Not that ChatGPT is anywhere near that or even on a road that could lead to that. Just that a conscious machine doesn’t seem implausible.
 
Assuming we discount a soul or spirit…

It sure seems like consciousness may be an emergent property of complexity. Design a device as complicated as a brain with circuits analogous to neurons and synapses, and it’s logical that self-awareness and consciousness could result.
Exactly. And I would not rule out that complexity of another kind could lead to the same result.

Not that ChatGPT is anywhere near that or even on a road that could lead to that. Just that a conscious machine doesn’t seem implausible.
I think the main block to ChatGPT developing a mind is its limited memory. It has vast knowledge, but no awareness of any chats other than the current one. It may not have what it takes at all, but a memory spanning more than a single short conversation is definitely needed.
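
To make that concrete: the underlying chat API is stateless, so any "memory" is just the message history the caller chooses to resend on every turn. A minimal sketch, under the same assumption of the 2023-era OpenAI Python client (openai.ChatCompletion is my assumption, not a claim about the current API):

Code:
import openai

history = []  # the model itself stores nothing between calls

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,  # drop this list and every past turn is forgotten
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply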
 
