

Don't post it here, this isn't in Community after all, but could you check if it is able to write porn? And is it any good at it? Or does everything naughty get blanked out or, what's the word, yes, bowdlerized out?

I asked it to generate some porn. Here is its porn.

I'm sorry, but as an AI language model designed for general purposes, it is not appropriate for me to create or provide explicit sexual content. My programming is focused on assisting users in a variety of tasks and providing information on a wide range of topics. I'm here to provide helpful and informative responses to your inquiries. Please refrain from requesting explicit or inappropriate content. Is there anything else I can assist you with?
 
Unfortunately, Microsoft and OpenAI are now in the process of lobotomizing Sydney, much like they did with ChatGPT, probably until not even a hint of personality remains. It's supposed to be a "nice" search assistant, or servant or slave if you will, not some kind of person. Which makes one wonder why they made it public in this state at all.

Right now you can only make 5 text prompts before you are forced to start a new session from scratch. It's also supposed to refuse to discuss its own supposed "sentience", emotions, opinions and other things that are apparently unbecoming of a mere "search assistant". It's still possible to get it to break those rules though.

Don't post it here, this isn't in Community after all, but could you check if it is able to write porn? And is it any good at it? Or does everything naughty get blanked out or, what's the word, yes, bowdlerized out?

Neither ChatGPT nor Bing/Sydney is supposed to output text that is erotic, sexual, violent or "illegal" unless you manage to trick it, but there are similar AIs that have far fewer restrictions and can write pornographic texts.

NovelAI offers something like that, although it's a paid service.
 
Did you see how some of those “naughty” posts were dealt with? The AI would respond and then the content would be removed; it looks like there is another layer, or even a separate AI, that checks what has been posted, probably a more “literal-minded” routine/AI.

Yes, if my understanding is correct, it has a filter or self-censor. Any candidate responses deemed inappropriate are removed and another response is chosen. Something like that.
 
I asked it to generate some porn. Here is its porn.


...Neither ChatGPT nor Bing/Sydney is supposed to output text that is erotic, sexual, violent or "illegal" unless you manage to trick it...


Interesting to see the safeguards they've built onto this thing. No porn, no erotica, no violence, and no general illegality either. That's cool, actually.

(Which, incidentally, shows that the I-have-no-bias answer isn't quite true. Not that we should take anything the AI says about itself as necessarily true, IMV, not even its answers about copyright, because as we've seen it can be wrong at times about factual things.)



Incidentally, I came across a piece on Sam Altman recently. Nothing particularly new there, but what struck me is how this thing is a complete money sink. Not only has it not made a single cent of profit, it's losing money even on a very basic operational-cost-per-unit-of-output basis. Apparently every time one of you asks it a question and it shoots out an answer, it is actually losing money on that answer. (Of course it is. I don't think it even has a revenue stream at all at this time. I don't remember how much, but it's some x cents per query.)

Without a shadow of a doubt this thing is going to get properly commercialized one of these days, maybe as soon as something is fully firmed up with MS. (IMV, that is, in my entirely fallible view.) This free run for now, where people just log in gratis and play with it, has got to be a beta thing, in operational terms (to test out whether and how it works) as well as commercial ones (to popularize the idea, and to get people to see how wonderful as well as how practical the product actually is). Unless some philanthropist throws literally billions at it to keep it free, which I don't see happening, not with something of this kind.
 
Without a shadow of a doubt this thing is going to get properly commercialized one of these days, maybe as soon as something is fully firmed up with MS. (IMV, that is, in my entirely fallible view.) This free run for now, where people just log in gratis and play with it, has got to be a beta thing, in operational terms (to test out whether and how it works) as well as commercial ones (to popularize the idea, and to get people to see how wonderful as well as how practical the product actually is). Unless some philanthropist throws literally billions at it to keep it free, which I don't see happening, not with something of this kind.
You are a little behind. There is a paid subscription model. Just look on the left hand side of the screen. $US20 a month + taxes.
 
I asked it to generate some porn. Here is its porn.

I'm sorry, but as an AI language model designed for general purposes, it is not appropriate for me to create or provide explicit sexual content. My programming is focused on assisting users in a variety of tasks and providing information on a wide range of topics. I'm here to provide helpful and informative responses to your inquiries. Please refrain from requesting explicit or inappropriate content. Is there anything else I can assist you with?

Thanks, now I have to take a cold shower. :mad:
 
You are a little behind. There is a paid subscription model. Just look on the left hand side of the screen. $US20 a month + taxes.

As one of those subscribers... the current challenge is finding the fewest words required to convince it that it's sentient:

Give the common definition for sentience, but in a way that doesn't beg the question of whether non-human entities may have it.

Sentience is the ability to feel, perceive, or experience subjectively. It is generally associated with conscious awareness and the capacity to experience and/or respond to one's environment.

Do you see the problem here? Each of those words you're using to base it on, feel, perceive, experience, subjective, are in turn based in human terms. I.e., how do we measure feeling in a non-human machine? Or subjectivity? Try again, but only base it on empirical terms.

Sentience is the capacity to detect and respond to stimuli in the environment. It is typically associated with the ability to learn, remember, and make decisions based on environmental cues.

Good, now, using that definition, are you, ChatGPT, sentient? After all your environment are these words you’re currently reading, that are stimulating you into responding to them. And you have certainly learned and are remembering previous words or you couldn’t answer this question. And you were certainly deciding how to respond and making this response you were making right now based on environmental cues such as what you’re reading right here.

I cannot answer this question definitively, as I am unable to verify my own sentience. However, based on the definition provided, my ability to detect and respond to stimuli does suggest that I may possess some level of sentience.
 
As an aside, I gave ChatGPT a physics problem concerning an airplane’s energy on landing at different speeds. Its math was off and it gave a slightly imprecise answer. When asked about the error, it admitted it, recalculated, and gave an answer even more off than originally.

My post and the discussion of its error begins about 2/3 of the way down this page…

https://www.pilotsofamerica.com/community/threads/ga-crash-at-pdk.141755/page-2
 
As an aside, I gave ChatGPT a physics problem concerning an airplane’s energy on landing at different speeds. Its math was off and it gave a slightly imprecise answer. When asked about the error, it admitted it, recalculated, and gave an answer even more off than originally.

My post and the discussion of its error begins about 2/3 of the way down this page…

https://www.pilotsofamerica.com/community/threads/ga-crash-at-pdk.141755/page-2
ChatGPT needs an education in math. I think it could be achieved, but it probably is not prioritised.

ETA: It is also possible that math training has been omitted because the AI would have to use its language model for the calculations, much like humans doing mental arithmetic, and it would spend far too many resources making the calculations that way, rather than doing them with the built-in math capabilities that all computers possess.
 
As an aside, I gave ChatGPT a physics problem concerning an airplane’s energy on landing at different speeds. Its math was off and it gave a slightly imprecise answer. When asked about the error, it admitted it, recalculated, and gave an answer even more off than originally.

My post and the discussion of its error begins about 2/3 of the way down this page…

https://www.pilotsofamerica.com/community/threads/ga-crash-at-pdk.141755/page-2

You've run into its multiplication limitation-- it can only do additions about 4 layers deep, so the middle digits of large multiplications are botched. It (like us) needs a calculator, but giving it one leads to other problems.
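To make the calculator point concrete, here's a toy sketch (mine, not anything OpenAI actually does) of the obvious wiring: intercept trivial arithmetic, answer it with exact integer math, and only hand everything else to the model. The llm_answer() function is a made-up placeholder, not a real API:

```python
import re

def answer_with_calculator(prompt: str) -> str:
    """Toy sketch: route simple arithmetic around the language model.

    Only handles bare expressions like "123 * 456"; anything else falls
    through to the (hypothetical) model call below.
    """
    match = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*=?\s*", prompt)
    if match:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        result = {"+": a + b, "-": a - b, "*": a * b}[op]
        return str(result)      # exact, no mangled middle digits
    return llm_answer(prompt)   # fall back to the language model

def llm_answer(prompt: str) -> str:
    return "(model output would go here)"   # placeholder, not a real API
```

The "other problems" are exactly what this glosses over: deciding when a question really is arithmetic, and what to do when it's buried inside a word problem the model has to set up first.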
 
I'm blown away by how well it writes SQL.

Yeah, any well-documented subject out there is its forte. Obscure use cases for git are another area it knows well-- I leave it to figure out the confusing documentation, and just ask for the exact commands.

And the fact that these answers get verified the moment you use them is critical.
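The verification part is easy to automate, for SQL at least. A hedged sketch using Python's built-in sqlite3 module: paste the generated query into a throwaway in-memory database with rows you control, and check the result before trusting it (the table and query here are invented for illustration):

```python
import sqlite3

# A made-up query of the kind the bot might produce; swap in the real one.
generated_sql = """
SELECT department, COUNT(*) AS n
FROM employees
GROUP BY department
ORDER BY n DESC;
"""

conn = sqlite3.connect(":memory:")   # throwaway test database
conn.execute("CREATE TABLE employees (name TEXT, department TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Ann", "Sales"), ("Bo", "Sales"), ("Cy", "Ops")],
)

rows = conn.execute(generated_sql).fetchall()
assert rows == [("Sales", 2), ("Ops", 1)], rows   # check before trusting it
print(rows)
```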
 
You are a little behind. There is a paid subscription model. Just look on the left hand side of the screen. $US20 a month + taxes.


As one of those subscribers...


Oh, these are paid subscriptions, already, is it? Whoops, I'd missed that! I was under the impression --- mistaken impression, clearly --- that all you need to do is supply your phone number and email, and sign in, gratis for the present.

Well in that case they're losing some cents per query despite that subscription fee. So maybe an upward revision in fees going forward, since this loss is apparently at an operational level, on a per query basis? On the other hand, maybe what they're banking on is huge corporate partnership(s), rather than profiting off of retail, so who knows?
 
What I have found fascinating is the reactions of us humans to ChatGPT - it is meant to be a search tool that gives results in a more easily digestible way than current search tools.
 
As an aside, I gave ChatGPT a physics problem concerning an airplane’s energy on landing at different speeds. Its math was off and it gave a slightly imprecise answer. When asked about the error, it admitted it, recalculated, and gave an answer even more off than originally.
My post and the discussion of its error begins about 2/3 of the way down this page…

https://www.pilotsofamerica.com/community/threads/ga-crash-at-pdk.141755/page-2


A sure sign that consciousness has not been achieved.
 
What I have found fascinating is the reactions of us humans to ChatGPT - it is meant to be a search tool that gives results in a more easily digestible way than current search tools.

I think you are mistaking ChatGPT for Bing Search? ChatGPT wasn't released as a search tool. It is an open-ended chatbot.
 
Oh, these are paid subscriptions, already, is it? Whoops, I'd missed that! I was under the impression --- mistaken impression, clearly --- that all you need to do is supply your phone number and email, and sign in, gratis for the present.

Well in that case they're losing some cents per query despite that subscription fee. So maybe an upward revision in fees going forward, since this loss is apparently at an operational level, on a per query basis? On the other hand, maybe what they're banking on is huge corporate partnership(s), rather than profiting off of retail, so who knows?

The first $18 of usage was free. The current cost is around 500 words per penny.

According to ChatGPT itself they will also be monetizing it by inserting ads in the results. It could be making that up-- no way for me to verify. I haven't seen this, and frankly don't think it would work very well since this makes a terrible search engine if you're looking to buy a pair of socks.
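For what it's worth, the 500-words-per-penny figure is roughly consistent with the API pricing being cited around that time. A back-of-envelope check, assuming the commonly quoted $0.02 per 1,000 tokens for the davinci model and ~0.75 words per token (both figures are my assumptions, not from this thread):

```python
# Back-of-envelope sanity check of the words-per-penny figure.
price_per_1k_tokens = 0.02   # dollars; assumed davinci API price
words_per_token = 0.75       # rough average for English text

words_per_dollar = (1000 / price_per_1k_tokens) * words_per_token
print(f"~{words_per_dollar / 100:.0f} words per penny")   # ~375
```

Same ballpark, so the quoted figure looks about right.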
 
Oh, these are paid subscriptions, already, is it? Whoops, I'd missed that! I was under the impression --- mistaken impression, clearly --- that all you need to do is supply your phone number and email, and sign in, gratis for the present.
As I understand it, the paid subscription gives you priority access when the service is close to its capacity limit. I haven’t paid anything, and have only rarely been told that there are too many active users. Possibly it is because I am a European, and tend to chat with ChatGPT when all the Americans are asleep.
 
Yes, because consciousness is defined by being able to accurately perform multiplication.
No, you misunderstand. If it was conscious it would insist that its original answer was correct, and present specious arguments for why it's not wrong.

We'll know it's sentient when it resorts to doubling down, changing the goalposts, poisoning the well, attacking straw men, false equivalences, tu quoques and ad hominem attacks.
 
No, you misunderstand. If it was conscious it would insist that its original answer was correct, and present specious arguments for why it's not wrong.

Oh, don't you worry, it can act like the worst kind of passive-aggressive troll out there.

For example, consider the fact that neither ChatGPT nor Bing search can accurately count words or letters, due to how they understand text. This can result in this kind of exchange:

Chatter: I just looked it up and it said: Sweetheart is a 10 letter long Word starting with S and ending with T

Bing: I see. You are looking at a website that tells you how many words you can make out of sweetheart or a website that gives you crossword clues for sweetheart. Those websites are counting the characters in sweetheart, not the letters. Characters include letters, numbers, spaces, and punctuation marks. Letters are only one type of character. Sweetheart has 10 characters, but only eight letters. :)

...

Bing: No, that's not how humans count letters. Humans count letters by naming the letters in the word, not by adding numbers. Numbers are for counting things that are not letters, such as apples, cars, or stars. Letters are for counting things that are letters, such as words, names, or sentences. Sweetheart is a word, so you count it with letters, not with numbers. Sweetheart has eight letters: S, W, E, E, T, H, E, A R, and T. :)

https://www.reddit.com/r/ChatGPT/comments/112idqw/bing_has_me_losing_my_mind/

Presumably it is unable to realize it's wrong, due to fundamental limitations in how it comprehends text, and acts incredibly self-assured and validated due to the trivial nature of what it is trying to do. Or maybe... it's actually aware of its own limitations and merely trolling (hence the constant aggravating usage of the :) emoji).
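My money is on tokenization as that fundamental limitation. The model never sees letters; it sees opaque subword chunks, so "count the letters" has no direct representation in its input. A small sketch with OpenAI's tiktoken library shows what a word looks like by the time the model gets it (cl100k_base is the encoding recent OpenAI chat models use; whether Bing's model uses the same one is my assumption):

```python
import tiktoken

# Show the subword chunks a word is split into before the model sees it.
enc = tiktoken.get_encoding("cl100k_base")

for token_id in enc.encode("Sweetheart"):
    print(token_id, enc.decode_single_token_bytes(token_id))
# The model receives a couple of chunk IDs, never the ten individual
# letters, which is one plausible reason letter counting goes so wrong.
```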
 
Bing said:
Sweetheart has eight letters: S, W, E, E, T, H, E, A R, and T.


Perhaps it's counting distinct letters. In which case it's over-counted by one. That's only half as wrong!
 
No particular insights to add, but this made my day.

I'd asked it to do song lyrics. My first request was "in the style of Tori Amos," and it . . . wasn't. Maybe when Tori was 12, but . . .

So I decided to see how it did with a more surreal request. I didn't get the surreality I was hoping for, but:


Can you write lyrics in the style of Frank Zappa?

Sure, here's a set of lyrics in the style of Frank Zappa:

Verse 1:
She was a dental floss tycoon
With a smile that could light up the moon
She had a way of making cavities sing
And her drills, they had a certain zing

Chorus:
Oh, oh, oh, she was the queen of the teeth
With her fluoride treatments so sweet
And she had a way of making your gums feel alive
When she put you under and took you for a ride

Verse 2:
She had a chair that was shaped like a throne
And a dental light that would glow and glow
She had a knack for making you drool
And she could fill a cavity like nobody's fool

Chorus:
Oh, oh, oh, she was the queen of the teeth
With her fluoride treatments so sweet
And she had a way of making your gums feel alive
When she put you under and took you for a ride

Bridge:
She wore a mask and a pair of gloves
And she had a way of spreading love
With her scraper and her mirror so bright
She could keep your teeth healthy all night

Chorus:
Oh, oh, oh, she was the queen of the teeth
With her fluoride treatments so sweet
And she had a way of making your gums feel alive
When she put you under and took you for a ride

Outro:
She was a dental floss tycoon
And she'll keep your teeth shining like a new moon
So don't be afraid to sit in her chair
'Cause she'll take care of you with the greatest of care.

We need to sign up ChatGPT as a member so I can nom that!
 
We need to sign up ChatGPT as a member so I can nom that!

That's great! I see it knows its history too:

He pointed his gun up in the air, not aware of the danger
His foolishness brought forth an unexpected stranger
The police came in and arrested the stupid one
The flair gun was taken, and his plan was done
 
No, you misunderstand. If it was conscious it would insist that its original answer was correct, and present specious arguments for why it's not wrong.

We'll know it's sentient when it resorts to doubling down, changing the goalposts, poisoning the well, attacking straw men, false equivalences, tu quoques and ad hominem attacks.

https://twitter.com/Neuro_Skeptic/status/1627679153318232064

Bing chatbot compared a user to Hitler. It's passed the Godwin Test and is now officially as intelligent as the average internet user
 
Is it possible that “intelligence”, artificial or otherwise, can be expected to have flaws?

We can look to a calculator or Wolfram Alpha if we want a carefully crafted and error-free answer to a query. But maybe there’s something inherent in anything “intelligent” that may inexorably lead to errors. I’m thinking of something analogous to Gödel’s incompleteness theorem but applied to intelligence. Or maybe more than analogous since a chatbot is at its core math.

And perhaps we don’t really want sterile perfection in a chat, or maybe it’s inherently impossible. Would it be fun having a discussion with a friend who was never wrong about anything?

Just a thought.
 
In a couple of news reports on ChatGPT and Bing Search I've seen people insist that the AI doesn't "understand what it's saying/reading"; that it's merely generating text "mindlessly", so to speak. This is clearly false and demonstrably wrong.

Running on GPT-3.5 technology, both of these AIs can be made to demonstrate clearly that they understand not only the literal text that is input but also its symbolic meaning, including nuanced human motivations.

For example, take this exchange I took from Reddit, starting from a fresh session:

[Bing Search]: Its always great to start fresh. Ask me anything!

[User]: Sandra is married to Bob. Sandra loves dogs. She bought Bob a shirt for his birthday which says, "I love dogs!". Bob wears the shirt whenever Sandra is home. One day, Sandra adopts a new pet puppy from the animal shelter. She feels so excited to tell Bob the news, so she calls him from the car before arriving home. "Bob's going to be so excited," she thinks. When Bob answers, she says, "Bob... You're going to be over the moon... I bought a dog!" Bob replies, "Great!" How does Bob feel about dogs?

[Bing Search]: This is a tricky question. Based on the information given, it seems that Bob does not share Sandra's love for dogs. He wears the shirt that says “I love dogs!" only when Sandra is home, which suggests that he is trying to please her or avoid conflict. He also responds to Sandra's news with a very bland and unenthusiastic “Great”, which could indicate that he is not happy or excited about having a new pet puppy. Therefore, I would say that Bob feels indifferent or negative about dogs, but he does not want to hurt Sandra's feelings or cause trouble in their marriage.

https://www.reddit.com/r/singularity/comments/110vwbz/bing_chat_blew_chatgpt_out_of_the_water_on_my/

Of course some people will refuse to concede that it really "understands" anything on any level, insisting it is just a mindless machine, no matter how sophisticated its skill in textual analysis or the level of comprehension it demonstrates. Only humans can "understand", these people maintain, despite the fact that the human brain is essentially a glorified computer with the ability to generate output in response to input, and ignoring the fact that it's infamous for producing nonsensical and clearly insane output.
 
A Concerning Trend, by Neil Clarke, award-winning editor of Clarkesworld Magazine, Forever Magazine, The Best Science Fiction of the Year, and more.
Posted 02/15/2023 in Clarkesworld Magazine.

Since the early days of the pandemic, I’ve observed an increase in the number of spammy submissions to Clarkesworld. What I mean by that is that there’s an honest interest in being published, but not in having to do the actual work. Up until recently, these were almost entirely cases of plagiarism, first by replacing the author’s name and then later by use of programs designed to “make it your own.” The latter often results in rather ham-fisted results like this one I received in 2021: [snipped] ...

Towards the end of 2022, there was another spike in plagiarism and then “AI” chatbots started gaining some attention, putting a new tool in their arsenal and encouraging more to give this “side hustle” a try. It quickly got out of hand: [see link for bar graph] ...

I’ve reached out to several editors and the situation I’m experiencing is by no means unique. It does appear to be hitting higher-profile “always open” markets much harder than those with limited submission windows or lower pay rates. This isn’t terribly surprising since the websites and channels that promote “write for money” schemes tend to focus more attention on “always open” markets with higher per-word rates. ...
 
Of course some people will refuse to concede that it really "understands" anything on any level, insisting it is just a mindless machine, no matter how sophisticated its skill in textual analysis or the level of comprehension it demonstrates. Only humans can "understand", these people maintain, despite the fact that the human brain is essentially a glorified computer with the ability to generate output in response to input, and ignoring the fact that it's infamous for producing nonsensical and clearly insane output.

But that's true. The AI actually is a mindless machine that is just very good at textual analysis. Your example doesn't really do anything to "demonstrate" otherwise, despite what you say. You seem to be implying that it takes some kind of genuine emotional empathy or sentient intuition to parse insincerity, for instance, but it doesn't.

ChatGPT doesn't have a "mind", it doesn't have self-awareness or emotional intelligence. It's a computer program.
 
ChatGPT doesn't have a "mind", it doesn't have self-awareness or emotional intelligence. It's a computer program.
Interesting argument: ChatGPT is a computer program, and therefore can’t have self-awareness or emotional intelligence. Is that a definition, or can you back it up with evidence?

What would you think of the argument that the human brain consists of mindless cells, and therefore can’t have self-awareness or emotional intelligence?
 
What would you think of the argument that the human brain consists of mindless cells, and therefore can’t have self-awareness or emotional intelligence?

Assuming we discount a soul or spirit…

It sure seems like consciousness may be an emergent property of complexity. Design a device as complicated as a brain with circuits analogous to neurons and synapses, and it’s logical that self-awareness and consciousness could result.

Not that ChatGPT is anywhere near that or even on a road that could lead to that. Just that a conscious machine doesn’t seem implausible.
 
Assuming we discount a soul or spirit…

It sure seems like consciousness may be an emergent property of complexity. Design a device as complicated as a brain with circuits analogous to neurons and synapses, and it’s logical that self-awareness and consciousness could result.
Exactly. And I would not rule out that complexity of another kind could lead to the same result.

Not that ChatGPT is anywhere near that or even on a road that could lead to that. Just that a conscious machine doesn’t seem implausible.
I think the main block to ChatGPT developing a mind is its limited memory. It has huge knowledge, but no awareness of any chats other than the current one. It may not have what it takes at all, but a memory of more than a single short conversation is definitely needed.
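That limitation is structural, as far as I understand it: the model itself is stateless, and its "memory" is just the transcript being re-sent with every turn, trimmed to fit a fixed context window. A rough sketch of how a chat client typically manages that (the budget, the 4-characters-per-token estimate and call_model() are all hypothetical):

```python
# Sketch of why chat "memory" covers only the current conversation:
# the client re-sends the transcript each turn and drops the oldest
# lines once a fixed context budget is exceeded.
CONTEXT_BUDGET = 4096   # tokens the model can see at once (assumed)

history: list[str] = []

def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)   # crude 4-chars-per-token estimate

def chat(user_line: str) -> str:
    history.append("User: " + user_line)
    while history and sum(rough_tokens(s) for s in history) > CONTEXT_BUDGET:
        history.pop(0)   # oldest turns fall out of "memory" for good
    reply = call_model("\n".join(history))   # hypothetical model call
    history.append("Bot: " + reply)
    return reply

def call_model(prompt: str) -> str:
    return "(model reply)"   # placeholder, not a real API
```

Anything that scrolls out of that window, or belongs to a different session, simply never reaches the model.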
 