I can't be the only person who finds these weird crazy AI videos tremendously entertaining, can I?
I find that downright creepy!!

> I can't be the only person who finds these weird crazy AI videos tremendously entertaining, can I?
Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”
Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.
The lawsuit claims that Sewell’s mental health “quickly and severely declined” only after he downloaded the app in April 2023.
His family alleges he became withdrawn, his grades started to drop and he started getting into trouble at school the more he got sucked into speaking with the chatbot.
The changes in him got so bad that his parents arranged for him to see a therapist in late 2023, which resulted in him being diagnosed with anxiety and disruptive mood disorder, according to the suit.
> They were concerned enough to send him to see a therapist, but apparently they didn't think that allowing him access to a handgun was anything to be concerned about. They're trying to find a scapegoat to blame where there's someone right there in the house whose primary responsibility was to be the child's guardian.

In Florida? Why would they be? Should they have locked away all the knives, plastic bags, cleaning products and coffee too? There are a million ways he could have killed himself just as easily. You're just singling out guns because you hate our freedoms!
Apple Researchers Show AI Bots Can't Think
Indeed, Apple's conclusion matches earlier studies that have found that large language models, or LLMs, don't actually "think" so much as match language patterns in materials they've been fed as part of their "training." When it comes to abstract reasoning — "a key aspect of human intelligence," in the words of Melanie Mitchell, an expert in cognition and intelligence at the Santa Fe Institute — the models fall short.
The Apple researchers set out to answer the question, "Do these models truly understand mathematical concepts?" as one of the lead authors, Mehrdad Farajtabar, put it in a thread on X. Their answer is no. They also pondered whether the shortcomings they identified can be easily fixed, and their answer is also no: "Can scaling data, models, or compute fundamentally solve this?" Farajtabar asked in his thread. "We don't think so!"
> …even if they are de facto completely indoctrinated since birth?

I guess it matters if you want to determine who deserves the credit or blame for the things they achieve and the mistakes they make. If they're not thinking, the people who programmed them do; if they are thinking, they do.
> As usual, we need definitions of “to think”, and “understand” (including “truly understand”).

And of course what happens when you put human cognition under the same spotlight.
Well, my job's safe then.
PS: My XenForo skills are rusty. I don't know how to insert media. First YouTube, now BandCamp.
Alright, let’s do the math:
1. **Friday**: 44 kiwis
2. **Saturday**: 58 kiwis
3. **Sunday**: Double Friday’s amount: \( 44 \times 2 = 88 \) kiwis
Now, let's add them up:
- Total: \( 44 + 58 + 88 = 190 \) kiwis
Oliver has **190** kiwis in total. Those five smaller-than-average kiwis don't change the total count.
Got another question, riddle, or topic to dive into?
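For what it's worth, the arithmetic itself is trivially checkable. Below is a minimal Python sketch (mine, not part of the Copilot output); the "five smaller than average" clause is exactly the kind of irrelevant detail the Apple researchers report tripping models up, since it shouldn't change the count.

```python
# Sanity check of the kiwi arithmetic above (my own sketch, not from Copilot).
# The "five smaller than average" clause is a distractor: smaller kiwis are
# still kiwis, so it does not affect the total.
friday = 44
saturday = 58
sunday = 2 * friday                 # Sunday is double Friday's amount
total = friday + saturday + sunday
print(total)                        # 190
```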
> You would have gotten the same accuracy if you had picked up a book of riddles - which the program did.

As could a human.
> I've just tested co-pilot on it and it solved 100% of the riddles,

Chances are that these riddles were some that were part of co-pilot’s original training. After all, they are trained on a gazillion of texts.
> Chances are that these riddles were some that were part of co-pilot’s original training. After all, they are trained on a gazillion of texts.

I do think it's more likely that it knew the answers to all of them, either because it was part of its training data set or because at some point some user told it the correct answer, than that it figured those out by some sort of reasoning process.
> I do think it's more likely that it knew the answers to all of them, either because it was part of its training data set or because at some point some user told it the correct answer, than that it figured those out by some sort of reasoning process.

Well, at least this can be tested. Is anybody here a good riddle master? Then please devise a good one, but do not write the answer here - or write it backwards, or whatever. Then Darat can present it to Co-Pilot, and we’ll see.
> How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, and that involves something other than rearranging stuff we've already learnt? We should be comparing like to like.

Well, it's probably a small percentage, but I think it must be more than zero, or else wouldn't we still be stuck in the Stone Age?
> Well, it's probably a small percentage, but I think it must be more than zero, or else wouldn't we still be stuck in the Stone Age?

Of course it is greater than zero, but do we need to compare AIs to the best among us? What about comparing to the likes of me, who can’t solve a single riddle and has hardly ever had an original thought?
> Interesting article discussing not just the accuracy of generative AI but the AIs’ confidence in their accuracy.
>
> OpenAI Newly Released SimpleQA Helps Reveal That Generative AI Blatantly And Alarmingly Overstates What It Knows
> OpenAI released SimpleQA, a new benchmark for generative AI. During the work, they uncovered a serious qualm about AI being supremely overconfident. Here's the scoop.
> www.forbes.com

That is something that is well known. Check everything that it produces. People have been in trouble for blindly accepting what AI has said.
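To make the "overstates what it knows" point concrete, here is a tiny, purely hypothetical sketch of a calibration check: compare the confidence a model states with how often it is actually right. The numbers are invented for illustration and have nothing to do with the real SimpleQA results.

```python
# Hypothetical calibration check: how confident the model *says* it is versus
# how often it is actually correct. All numbers below are made up.
answers = [
    # (model's stated confidence, was the answer actually correct?)
    (0.95, False),
    (0.90, True),
    (0.92, False),
    (0.88, True),
    (0.97, False),
]

stated = sum(conf for conf, _ in answers) / len(answers)
actual = sum(1 for _, correct in answers if correct) / len(answers)
print(f"average stated confidence: {stated:.0%}")  # ~92%
print(f"actual accuracy:           {actual:.0%}")  # 40%, i.e. overconfident
```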
> Watching How It's Made, I can't help but think an AI or even a not-so-smart robot would be better at certain mind-numbing, repetitive jobs than humans. Granted, some of those shows are pretty old and the sites may have been updated since then. Also granting that there are some things humans are much better at. But slipping frozen burgers into a package? If it's a very small operation, maybe -- but this one looked like a major factory.

The thing is, no AI is needed for such a routine task. Just have an array of sensors that trip an automated arm (or whatever robot tool is needed) as the items to be packaged, assembled, etc. pass a certain point. It's been done for years now, no AI needed.
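A rough sketch of that kind of fixed-rule automation, with simulated stand-ins for the sensor and the arm (the function names are mine, not any real controller API):

```python
import random
import time

# Minimal sketch of sensor-trips-an-arm logic (my own illustration). A real
# line would read a beam-break sensor and pulse an arm controller; here both
# are simulated so the script runs on its own.
def item_at_trigger_point() -> bool:
    return random.random() < 0.1    # stand-in for reading a photoelectric sensor

def actuate_arm() -> None:
    print("arm triggered")          # stand-in for pulsing the arm controller

def run_line(cycles: int = 50) -> None:
    # No "AI" involved: a fixed rule fires the arm whenever an item passes.
    for _ in range(cycles):         # a real line would loop forever
        if item_at_trigger_point():
            actuate_arm()
        time.sleep(0.01)            # poll at roughly 100 Hz

if __name__ == "__main__":
    run_line()
```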
> As could a human.

Not in a quiz setting.
> I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal, how many people can do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, and that involves something other than rearranging stuff we've already learnt? We should be comparing like to like.
> (ETA: And again let me make it clear I do not think these AI's think as humans do but I do more and more think we do some things similar to how these LLMs work.)

When it comes to it, I would be pretty confident that all these bots will be absolutely useless at solving novel problems that a properly trained human will be able to solve. They still are chatbots after all, just ones with access to huge (pirated) libraries.
> When it comes to it, I would be pretty confident that all these bots will be absolutely useless at solving novel problems that a properly trained human will be able to solve. They still are chatbots after all, just ones with access to huge (pirated) libraries.

And I am more than confident that most humans, even with training and education, would fail at solving "novel problems"; my evidence is the 50-plus years I've been observing humans.
> I think you will be correct BUT that wasn't really my point. My point was that these research papers keep referring to human thinking/cognition/whatwethinkthinkingisthisweek as if all humans are equal, how many people can do the tasks that require this unique "human thinking"? How many humans do anything but regurgitate what they have learnt? How many humans can do the tasks that the researchers claim are unique to human cognition, and that involves something other than rearranging stuff we've already learnt? We should be comparing like to like.
> (ETA: And again let me make it clear I do not think these AI's think as humans do but I do more and more think we do some things similar to how these LLMs work.)

Rote recall of arbitrary data is something computers are especially good at. That's why we invented them and gave them work that monetizes rote recall of arbitrary data. It's no surprise that a computer with access to every joke ever written is able to perform rote recall of punchlines for an arbitrary list of setups.
I think it is more a matter of a practical limitation of one specific class of AI. I think time and time again we have to unravel the investor/marketing bubble hype to get to the strand of reality. LLMs are great for marketing and making investors drool because they make great demonstrations, and as a result they have unfortunately become necessary for a company to get a tick from the "financial analysts". But their real-world use is quite limited, and they are not going to create huge share price increases!
Has it hit a wall? Is it plateauing? Just a brief pause before the next quantum leap forward?
> I hear we did hit text test data wall. We are using all available text on the internet. There won't be more.

Something that raised the hair at the back of my neck is that the big data crunchers, like OpenAI, may use "synthetic" data to further train their models, with of course that "synthetic" word meaning computer-generated text. Have they never put a microphone near to a speaker outputting the mike's input?
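The "microphone next to the speaker" worry has a simple numerical analogue. Below is a toy sketch (entirely my own, nothing to do with how OpenAI or anyone else actually trains): a "model" that is just a Gaussian fitted to its predecessor's output, re-trained each generation on purely synthetic samples. Over enough generations the fitted distribution drifts away from the original data, and its spread tends to collapse.

```python
import random
import statistics

# Toy feedback loop: each generation's "model" (a fitted Gaussian) is trained
# only on samples drawn from the previous generation's model.
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # generation 0: "real" data

for generation in range(1, 41):
    mu = statistics.fmean(data)       # "train" on whatever data we currently have
    sigma = statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(25)]  # purely synthetic data
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean {mu:+.3f}, stdev {sigma:.3f}")
```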
...snip...