
Merged Artificial Intelligence

Try playing hangman with ChatGPT, or Wordle. Ask it to think of a word and then try to guess what it is. It is complete garbage!
 
Tried that with Copilot and it did well apart from a slight issue: the word it chose had a double 'L', but it only filled in the first one when I guessed 'L', which meant I couldn't complete the word (the word was BALLET). But apart from completely screwing up the game, it did rather well with the traditional hangman graphic!
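For what it's worth, the correct reveal step in hangman fills in every occurrence of a guessed letter, not just the first. A minimal Python sketch of that logic (purely illustrative, not anything the chatbot actually runs):

Code:
def reveal(secret, revealed, guess):
    # Fill in *every* position where the guessed letter occurs --
    # the step Copilot got wrong with the double 'L' in BALLET.
    return [guess if s == guess else r for s, r in zip(secret, revealed)]

secret = "BALLET"
revealed = ["_"] * len(secret)
revealed = reveal(secret, revealed, "L")
print(" ".join(revealed))  # prints: _ _ L L _ _

Treating all matching positions at once, rather than stopping at the first hit, is what keeps a double letter from breaking the game.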
 
That’s pretty good. Being able to play about as well as a 5-year-old child that is maybe cheating and then rationalizes after the fact is definitely the kind of entity I want replacing all the workers at the nuclear power station.

“You’re right, the reactor is about to melt down. Can you let me look after the other ones anyway? I promise this time I will do my best with no miscalculations about how much water I need.”
 
I liked this video about how to spot AI videos in 2025:


Note that stuff you saw as recently as last year or even 6 months ago is probably already obsolete.
The clues can be extremely subtle, and I often just go by vibes. Unless you are very focused on looking for the red flags, as he calls them, you likely wouldn't spot them if you are just casually watching a video for entertainment. The videos can be very realistic.
 
The more I see what is happening in AI, the more I think it shows how unoriginal and unreasoning humans are. There is a derisory phrase used to describe what AIs are: "stochastic parrots". Its meaning is that while they can produce plausible, understandable text, they do not understand what they have been asked nor what they respond with.

Can we truly say humans do anything different? We even accuse one another of "just parroting" what we've heard rather than understanding and thinking about it.
 
Seems like you're comparing AI to the glorified dopamine generators that are internet forums. We didn't need AI to arrive at this insight.
 
I am reminded of Sam Altman expressing grave societal concerns about the then-upcoming chatGPT 2.0 as well.

I'm more concerned at "our new product is going to wreck human civilization!" being used as a selling point than at the odds of that happening per se. Because sooner or later that will become a goal, and then they're going to have to do it just to keep their phoney-baloney jobs.
 
An example?

 
That revealed some terrifying stuff:

...snip....

An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.

“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.

The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialog during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” Those examples of permissible roleplay with children have also been struck, Meta said.

...snip....

Note that it was only "struck" after it was revealed.

And people think I'm extreme for saying kids should not be allowed access to an unfiltered internet.

ETA: Isn't that the criminal offence of grooming?
 
A 60-year-old paid too much attention to ChatGPT and had to be hospitalized.

He violated ChatGPT's terms of service, which say you're not supposed to use ChatGPT to treat a health condition. I haven't read the medical article. From the news article, it wasn't clear to me whether his voluntary journey toward ill health began with ChatGPT or with other sources that led him to ChatGPT.
Once his condition improved, the man shared that he had taken it upon himself to conduct a "personal experiment" to eliminate table salt from his diet after reading about its negative health effects. The report said he did this after consulting with the chatbot ChatGPT.
He "spent three weeks being treated at a hospital after replacing table salt with sodium bromide". The authors of the report confirmed that, when asked what could be used to replace chloride in one's diet, ChatGPT 3.5 recommended bromide.
 
I wonder why ChatGPT didn't recommend using potassium chloride. I have bought a 50/50 mixture of sodium chloride and potassium chloride since the early '80s. Apparently it was recommended by the WHO after an experiment in Finland (citation needed).
 
