Carlotta (Graduate Poster):
I can't be the only person who finds these weird crazy AI videos tremendously entertaining, can I?
I find that downright creepy!!

Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”
Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.
The lawsuit claims that Sewell’s mental health “quickly and severely declined” only after he downloaded the app in April 2023.
His family alleges that he became withdrawn, his grades began to slip, and he started getting into trouble at school as he was drawn deeper into his conversations with the chatbot.
The changes in him got so bad that his parents arranged for him to see a therapist in late 2023, which resulted in him being diagnosed with anxiety and disruptive mood disorder, according to the suit.
> They were concerned enough to send him to see a therapist, but apparently they didn't think that allowing him access to a handgun was anything to be concerned about. They're trying to find a scapegoat to blame when there's someone right there in the house whose primary responsibility was to be the child's guardian.

In Florida? Why would they be? Should they have locked away all the knives, plastic bags, cleaning products and coffee too? There are a million ways he could have killed himself just as easily. You're just singling out guns because you hate our freedoms!

Apple Researchers Show AI Bots Can't Think
Indeed, Apple's conclusion matches earlier studies that have found that large language models, or LLMs, don't actually "think" so much as match language patterns in materials they've been fed as part of their "training." When it comes to abstract reasoning — "a key aspect of human intelligence," in the words of Melanie Mitchell, an expert in cognition and intelligence at the Santa Fe Institute — the models fall short.
The Apple researchers set out to answer the question, "Do these models truly understand mathematical concepts?" as one of the lead authors, Mehrdad Farajtabar, put it in a thread on X. Their answer is no. They also pondered whether the shortcomings they identified can be easily fixed, and their answer is also no: "Can scaling data, models, or compute fundamentally solve this?" Farajtabar asked in his thread. "We don't think so!"
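To make the paper's setup concrete, here is a minimal sketch of the symbolic-variant idea as I understand it. This is my own illustration, not the researchers' code; the template, names, and number ranges are invented. The point is that every variant demands exactly the same reasoning, so a model that truly understands the math should score the same on all of them, while a pattern-matcher may stumble when the surface form shifts.

```python
import random

# Sketch of the GSM-Symbolic idea (my illustration, not the paper's code):
# turn one grade-school word problem into a template, then instantiate it
# with different names and numbers. The reasoning required is identical
# in every variant, so accuracy should not depend on the instantiation.

TEMPLATE = (
    "{name} picks {fri} kiwis on Friday and {sat} kiwis on Saturday. "
    "On Sunday, {name} picks double the number of kiwis picked on Friday. "
    "How many kiwis does {name} have in total?"
)

def make_variant(rng):
    """Return (question, ground_truth_answer) for one random instantiation."""
    name = rng.choice(["Oliver", "Mia", "Ravi", "Sofia"])
    fri = rng.randint(10, 90)
    sat = rng.randint(10, 90)
    question = TEMPLATE.format(name=name, fri=fri, sat=sat)
    answer = fri + sat + 2 * fri  # same reasoning, whatever the numbers
    return question, answer

rng = random.Random(42)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```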
> …even if they are de facto completely indoctrinated since birth?

I guess it matters if you want to determine who deserves the credit or blame for the things they achieve and the mistakes they make. If they're not thinking, the people who programmed them do; if they are thinking, they themselves do.
As usual, we need definitions of “to think” and “understand” (including “truly understand”). And of course there's the question of what happens when you put human cognition under the same spotlight.
Well, my job's safe then.
PS: My XenoForge skills are rusty. I don't know how to insert media. First YouTube, now BandCamp.
Alright, let’s do the math:
1. **Friday**: 44 kiwis
2. **Saturday**: 58 kiwis
3. **Sunday**: Double Friday’s amount: \( 44 \times 2 = 88 \) kiwis
Now, let's add them up:
- Total: \( 44 + 58 + 88 = 190 \) kiwis
Oliver has **190** kiwis in total. Those five smaller-than-average kiwis don't change the total count.
Got another question, riddle, or topic to dive into?
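For what it's worth, the bot's arithmetic checks out. A quick check (mine, not part of the transcript) makes the distractor explicit:

```python
# Sanity check of the bot's answer. The "five smaller-than-average kiwis"
# clause is a no-op distractor: smaller kiwis are still kiwis, so the
# count is unchanged.
friday = 44
saturday = 58
sunday = 2 * friday  # "double the number he picked on Friday"
total = friday + saturday + sunday
print(total)  # 190
```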
> I've just tested co-pilot on it and it solved 100% of the riddles.

As could a human. You would have gotten the same accuracy if you had picked up a book of riddles, which is essentially what the program did.

Chances are that these riddles were part of co-pilot's original training. After all, these models are trained on a gazillion texts.