Artificial Intelligence (merged thread)

DEBUNK AI

There was a radio programme on BBC R4 which was interesting and apt for this forum.

From their site. "They speak with the team behind 'Debunk-bot', an AI chatbot that has shown remarkable success in shifting people’s beliefs around conspiracy theories. They also talk to Mick West, who has spent decades debunking falsehoods, about how AI might help reduce the impact of dangerous conspiracies—and what role humans must play in guiding those who find their way out of conspiracy rabbit holes with the help of a bot."

https://www.bbc.co.uk/sounds/play/m0023y0j

The site to try it out.

https://www.debunkbot.com
 

Essence of the story: OpenAI's AI-powered transcription tool Whisper can hallucinate, inserting text that was never in the audio file. Obviously a human doing transcriptions will also make mistakes, i.e. "hallucinate", but the mistakes made by Whisper appear to be serious ones. And we know that, for some strange reason, people place much more confidence in "the computer says".
 
Boy, 14, fell in love with ‘Game of Thrones’ chatbot — then killed himself after AI app told him to ‘come home’ to ‘her’: mom (New York Post)

If you look at the content of the chat and the timing of when he killed himself, it certainly does appear that the chatbot unintentionally provoked him to kill himself. However, I have some questions for the boy's mother, like why did a 14-year-old boy have access to a handgun? The article says that it was his father's gun.

Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”
Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.
The lawsuit claims that Sewell’s mental health “quickly and severely declined” only after he downloaded the app in April 2023.

His family alleges he became withdrawn, his grades started to drop and he started getting into trouble at school the more he got sucked into speaking with the chatbot.

The changes in him got so bad that his parents arranged for him to see a therapist in late 2023, which resulted in him being diagnosed with anxiety and disruptive mood disorder, according to the suit.

They were concerned enough to send him to see a therapist, but apparently they didn't think that allowing him access to a handgun was anything to be concerned about. They're trying to find a scapegoat to blame when there's someone right there in the house whose primary responsibility was to be the child's guardian.
 
They were concerned enough to send him to see a therapist, but apparently they didn't think that allowing him access to a handgun was anything to be concerned about. They're trying to find a scapegoat to blame when there's someone right there in the house whose primary responsibility was to be the child's guardian.
In Florida? Why would they be? Should they have locked away all the knives, plastic bags, cleaning products and coffee too? There are a million ways he could have killed himself just as easily. You're just singling out guns because you hate our freedoms! :xrolleyes
 
Apple Researchers Show AI Bots Can't Think

Indeed, Apple's conclusion matches earlier studies that have found that large language models, or LLMs, don't actually "think" so much as match language patterns in materials they've been fed as part of their "training." When it comes to abstract reasoning — "a key aspect of human intelligence," in the words of Melanie Mitchell, an expert in cognition and intelligence at the Santa Fe Institute — the models fall short.

The Apple researchers set out to answer the question, "Do these models truly understand mathematical concepts?" as one of the lead authors, Mehrdad Farajtabar, put it in a thread on X. Their answer is no. They also pondered whether the shortcomings they identified can be easily fixed, and their answer is also no: "Can scaling data, models, or compute fundamentally solve this?" Farajtabar asked in his thread. "We don't think so!"

Well, my job's safe then.

PS: My XenoForge skills are rusty. I don't know how to insert media. First YouTube, now BandCamp.
 
Watching How It's Made, I can't help but think an AI or even a not-so-smart robot would be better at certain mind-numbing, repetitive jobs than humans. Granted, some of those shows are pretty old and the sites may have been updated since then. Also granting that there are some things humans are much better at. But slipping frozen burgers into a package? If it's a very small operation, maybe -- but this one looked like a major factory.
 
Does it really matter that they can't "think"? (And what exactly does it mean to "think"?)

If it can translate Chinese or Russian into grammatically correct and easy to understand English, that's a pretty useful function. If it can pass a test that most humans can't pass without years of higher education and diligent study, does it matter what the internal process is that leads to that output (whether we call it "thinking" or some other word)?
 
I guess it matters if you want to determine who deserves the credit or blame for the things they achieve and the mistakes they make. If they're not thinking, the credit or blame belongs to the people who programmed them; if they are thinking, it belongs to them.
 
I guess it matters if you want to determine who deserves the credit or blame for the things they achieve and the mistakes they make. If they're not thinking, the credit or blame belongs to the people who programmed them; if they are thinking, it belongs to them.
Even if they have been, de facto, completely indoctrinated since birth?
When they haven't been exposed to anything but a carefully curated data set that might contain a significant bias?
 
You can ask the same question about people who've been indoctrinated from birth, of course, and end up discussing what constitutes free will. I'm not sure I want to get into those philosophical weeds.
 
As usual, we need definitions of “to think”, and “understand” (including “truly understand”).
 
Well, my job's safe then.

PS: My XenoForge skills are rusty. I don't know how to insert media. First YouTube, now BandCamp.

Quickly checked with Copilot:
Alright, let’s do the math:

1. **Friday**: 44 kiwis
2. **Saturday**: 58 kiwis
3. **Sunday**: Double Friday’s amount: \( 44 \times 2 = 88 \) kiwis

Now, let's add them up:

- Total: \( 44 + 58 + 88 = 190 \) kiwis

Oliver has **190** kiwis in total. Those five smaller-than-average kiwis don't change the total count.

Got another question, riddle, or topic to dive into?

ETA: Google Gemini came back with 185.
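
Out of curiosity, here's the same arithmetic as a trivial Python check (my own illustration, not from either bot). The five "smaller than average" kiwis are a distractor: they are still kiwis, so they don't change the total. Subtracting them anyway is presumably how Gemini arrived at 185.

```python
friday = 44
saturday = 58
sunday = 2 * friday              # "double Friday's amount"
smaller_than_average = 5         # irrelevant detail in the question

correct_total = friday + saturday + sunday                # 190
distracted_total = correct_total - smaller_than_average   # 185

print(correct_total, distracted_total)
```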

Did the researchers create a control group of humans to see what percentage of humans get it right?

There still seems to be an unstated assumption in many of these papers that humans would score much higher than the AIs being tested, based on the idea that some unspecified percentage of humans would or could do better. If the AIs are doing at least as well as most humans, how can we make claims such as those in the article?
 
This idea of a human control rate made me think about a round of riddles I did at my last quiz night. I've just tested Copilot on it and it solved 100% of the riddles; the best team (teams are up to a maximum of 6 people, and there were 12 teams) managed 9 out of 10, the worst 2! I've got the scores on a laptop, so I could work out what the average human "correct rate" was, but Copilot blows them out of the water.

The questions (and answers) for those interested:

Trick or Treat

  • What are the two meals you can never eat for breakfast?
    • Lunch & Dinner
  • What starts with “E” and ends in “E” but only has one letter in it?
    • Envelope
  • Some months have 31 days, some have 30 days, how many have 28 days?
    • 12 - All of them
  • What goes up and down but always remains in the same place?
    • Stairs
  • I am an odd number, take away one letter and I become even – what number am I?
    • 7 – seven
  • Can you name three consecutive days without using Wednesday, Friday and Sunday?
    • Yesterday, today, tomorrow
  • What can you make that no-one, not even you, can see?
    • Noise
  • What can run but never walks, has a mouth but never talks, has a head but never weeps, has a bed but never sleeps?
    • A river
  • How can you drop a raw egg onto a concrete floor and not crack it?
    • Easy – concrete floors are hard to crack!
  • If you were running a race and overtook the person in 2nd place, what place would you be in now?
    • 2nd!
(ETA: The new forum software is much better; I copied the above out of Word and it formatted everything automatically.)
 
Chances are that these riddles were among those that were part of Copilot's original training. After all, they are trained on a gazillion texts.
I do think it's more likely that it knew the answers to all of them, either because it was part of its training data set or because at some point some user told it the correct answer than that it figured those out by some sort of reasoning process.
 
I do think it's more likely that it knew the answers to all of them, either because it was part of its training data set or because at some point some user told it the correct answer than that it figured those out by some sort of reasoning process.
Well, at least this can be tested. Is anybody here a good riddle master? Then please devise a good one, but do not write the answer here - or write it backwards, or whatever. Then Darat can present it to Co-Pilot, and we’ll see.

Or perhaps the answer should not be presented here at all, because somebody might give it away by boasting about how she cleverly solved the encryption!
 
Chances are that these riddles were among those that were part of Copilot's original training. After all, they are trained on a gazillion texts.

I do think it's more likely that it knew the answers to all of them, either because it was part of its training data set or because at some point some user told it the correct answer than that it figured those out by some sort of reasoning process.

I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal. How many people can do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.

(ETA: And again, let me make it clear that I do not think these AIs think as humans do, but I do more and more think we do some things similarly to how these LLMs work.)
 
How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.
Well, it's probably a small percentage, but I think it must be more than zero, or else wouldn't we still be stuck in the Stone Age?
 
Well, it's probably a small percentage, but I think it must be more than zero, or else wouldn't we still be stuck in the Stone Age?
Of course it is greater than zero, but do we need to compare AIs to the best among us? What about comparing them to the likes of me, who can't solve a single riddle and has hardly ever had an original thought?
 
Interesting article discussing not just the accuracy of generative AI but the AIs’ confidence in their accuracy.

That is something that is well known. Check everything that it produces. People have got into trouble for blindly accepting what AI has said.
 
Watching How It's Made, I can't help but think an AI or even a not-so-smart robot would be better at certain mind-numbing, repetitive jobs than humans. Granted, some of those shows are pretty old and the sites may have been updated since then. Also granting that there are some things humans are much better at. But slipping frozen burgers into a package? If it's a very small operation, maybe -- but this one looked like a major factory.
The thing is, no AI is needed for such a routine task. Just have an array of sensors that trip an automated arm (or whatever robot tool is needed) as the items to be packaged, assembled, etc. pass a certain point. It's been done for years now; no AI needed.
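
For what it's worth, here's a minimal sketch (purely illustrative, with a simulated sensor standing in for a real photo-eye or PLC input) of the kind of fixed-rule automation described above. No learning or "AI" anywhere in it.

```python
import time

def simulated_sensor_readings():
    # True means an item is breaking the sensor beam at that moment.
    # On a real line this would be a photo-eye wired to a PLC or GPIO pin.
    yield from [False, False, True, False, True, False]

def trip_packaging_arm():
    # Stand-in for the actuator call on a real packaging line.
    print("Arm tripped: item packaged.")

def run_line():
    for beam_broken in simulated_sensor_readings():
        if beam_broken:
            trip_packaging_arm()
        time.sleep(0.01)  # polling interval

run_line()
```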
 
I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal. How many people can do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.

(ETA: And again, let me make it clear that I do not think these AIs think as humans do, but I do more and more think we do some things similarly to how these LLMs work.)
When it comes down to it, I would be pretty confident that all these bots will be absolutely useless at solving novel problems that a properly trained human will be able to solve. They are still chatbots after all, just ones with access to huge (pirated) libraries.
 
When it comes down to it, I would be pretty confident that all these bots will be absolutely useless at solving novel problems that a properly trained human will be able to solve. They are still chatbots after all, just ones with access to huge (pirated) libraries.
And I am more than confident that most humans, even with training and education, would fail at solving "novel problems"; my evidence is the 50-plus years I've been observing humans.
 
I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal. How many people can do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, tasks that involve something other than rearranging stuff we've already learnt? We should be comparing like to like.

(ETA: And again, let me make it clear that I do not think these AIs think as humans do, but I do more and more think we do some things similarly to how these LLMs work.)
Rote recall of arbitrary data is something computers are especially good at. That's why we invented them and gave them work that monetizes rote recall of arbitrary data. It's no surprise that a computer with access to every joke ever written is able to perform rote recall of punchlines for an arbitrary list of setups.

Computers doing what they do best isn't the same as computers doing everything that humans are good at.
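
To put the "rote recall" point concretely, here's a toy illustration (my own, with made-up filler jokes): once every setup-to-punchline pair is stored, answering is a lookup, not reasoning, and it fails the moment the setup is genuinely novel.

```python
punchlines = {
    "Why did the chicken cross the road?": "To get to the other side.",
    "What do you call a fish with no eyes?": "A fsh.",
}

def recall(setup):
    # Succeeds only if this exact setup was seen before; a novel setup
    # would need something other than recall.
    return punchlines.get(setup, "No stored punchline for this one.")

print(recall("Why did the chicken cross the road?"))
print(recall("A setup never seen before"))
```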
 

Has it hit a wall? Is it plateauing? Just a brief pause before the next quantum leap forward?
I think it is more a matter of a practical limitation of one specific class of AI. Time and time again we have to unravel the investor/marketing bubble hype to get to the strand of reality. LLMs are great for marketing and for making investors drool because they make great demonstrations, and as a result they have unfortunately become necessary for a company to get a tick from the "financial analysts". But their real-world uses are quite limited and are not going to create huge share-price increases!

Will LLMs become the fabled "general AI"? I'm confident in saying no, but they may well be part of a "general AI".

Personally, I struggle to understand why folk think "general AI" is in any way a meaningful definition; at best it seems to be defined as "can solve problems like humans do". But why on earth would we want to re-create such a bad type of intelligence?

I don't want an artificial intelligence that has the flaws of human intelligence, I want something much much better.
 
I hear we did hit the text data wall. We are already using all the available text on the internet. There won't be more.
I hear we did hit a wall in computation power. Only a few companies in the world can do cutting-edge research, and the investments are astronomical. Same with monetization. People can get GPT-3 for free and can pay a bit for GPT-4. But Sora (the video generator) didn't go commercial because it would be too expensive; it's just too hardware-demanding.
But we were far away from these walls just a few years ago. That's because there were breakthroughs in theory, especially text tokenization and the transformer pipeline, and then a few years of work on the training process. Only then did we hit these walls.
There still might be improvements in what we do with the data and the processing power, and they might be major. Processing power is still covered by Moore's law. And as training and experimenting get cheaper, breakthroughs become more likely.
So I'm quite sure there will be progress, probably not so fast. The bubble will collapse, and the collapse itself will slow things down even more.
But the progress will still be too fast, faster than society can adapt, and it won't end well.
 
I hear we did hit the text data wall. We are already using all the available text on the internet. There won't be more.
...snip...
Something that raised the hair on the back of my neck is that the big data crunchers like OpenAI may use "synthetic" data to further train their models, with "synthetic" of course meaning computer-generated text. Have they never put a microphone near a speaker outputting the mike's input?
 
There may be a bubble collapse, when the hype gets too far ahead of reality and expectations have to be reined in.

But remember similar things in the past, like the collapse of the "dot-com bubble", when people realized that just because a company had a website didn't mean it could print money, and that some of those stocks were grossly overvalued. The internet still went ahead, and certain companies like Amazon.com and Google really did figure it all out, while others like Pets.com did not.
 