Article about real people losing out to AI: https://www.theguardian.com/technology/2025/may/31/the-workers-who-lost-their-jobs-to-ai-chatgpt

IBM Research’s AI Attribution Toolkit is a first pass at formulating what a voluntary reporting standard might look like. Released this week, the experimental toolkit makes it easy for users to write an AI attribution statement that explains precisely how they used AI in their work.
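
For a sense of what such a statement might cover, here is a rough sketch in Python. The fields and wording are my own illustration of the general idea (which tool was used, what it was used for, how much, and whether a human reviewed the output); they are not the IBM toolkit's actual schema or API.

```python
from dataclasses import dataclass

@dataclass
class AIAttribution:
    ai_system: str       # e.g. "Copilot" or "ChatGPT (GPT-4o)"
    used_for: str        # e.g. "first draft", "copy editing", "idea generation"
    extent: str          # e.g. "minimal", "substantial"
    human_reviewed: bool # whether a person reviewed/edited the output

def attribution_statement(a: AIAttribution) -> str:
    """Compose a one-sentence attribution statement from the fields above."""
    review = "reviewed and edited by a human" if a.human_reviewed else "not reviewed by a human"
    return (f"{a.ai_system} was used for {a.used_for} to a {a.extent} extent; "
            f"the output was {review}.")

print(attribution_statement(AIAttribution("ChatGPT (GPT-4o)", "copy editing", "minimal", True)))
```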

> Apropos of nothing, this was a mildly interesting exchange. Copilot had just answered a question about word choice. [screenshot attachment]

I would have liked to know what it means by “thought-provoking”. In what way does it think it “thinks”?

> I would have liked to know what it means by “thought-provoking”. In what way does it think it “thinks”?

I read it as provoking my thoughts. Which, as evidenced by the fact that I posted it here, it kind of did.

> Apropos of nothing, this was a mildly interesting exchange. Copilot had just answered a question about word choice. [screenshot attachment]

That is strikingly like an exchange in 2001: A Space Odyssey, where HAL is asked if he feels things. Of course, the bot could have been influenced by that source.

> Bellingcat tested LLMs' abilities to do geolocation. Quite an interesting article:
> Have LLMs Finally Mastered Geolocation? - bellingcat
> "We tasked LLMs from OpenAI, Google, Anthropic, Mistral and xAI to geolocate our unpublished holiday snaps. Here's how they did." (www.bellingcat.com)

The article still maintains the irritating “hallucination” excuse for LLM errors.
(sorry, this really bugs me)
If these things are making ◊◊◊◊◊◊◊ mistakes, just admit it and stop the hallucination ◊◊◊◊◊◊◊◊.

> The article still maintains the irritating “hallucination” excuse for LLM errors.

It is the correct technical term for what is occurring. It's not just that the LLM is wrong about something. It is actively confabulating an unreal thing. I'm afraid you'll need to get over that particular peeve.

> Do these things ever perform and display error analysis of their results? Or do they, like ChatGPT, just bluster through with confident responses?

Yes. Specifically, certain LLMs used in research are programmed to analyse their own output.
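
To sketch what that kind of self-check can look like: this is a generic illustration of a draft-then-critique loop, not any specific research system, and `ask` is a stand-in for whatever chat-completion call you actually use.

```python
from typing import Callable

def answer_with_self_check(question: str, ask: Callable[[str], str]) -> str:
    """Draft an answer, then have the model critique its own output."""
    draft = ask(question)
    critique = ask(
        "Check the following answer for factual or arithmetic errors.\n"
        f"Question: {question}\nAnswer: {draft}\n"
        "Reply with exactly 'OK' if it is correct, otherwise give a corrected answer."
    )
    return draft if critique.strip() == "OK" else critique

# Usage: pass any function that sends a prompt to a model and returns its reply, e.g.
# final = answer_with_self_check("How many r's are in 'strawberry'?", my_llm_call)
```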

> Do these things ever perform and display error analysis of their results? Or do they, like ChatGPT, just bluster through with confident responses?

This. AIs are programmed to (almost) always give an answer, any answer, even if it's not correct. Very few times have I encountered a bot that simply up and says, "I'm sorry, but I don't have enough information in my training data to give an answer."

> The article still maintains the irritating “hallucination” excuse for LLM errors.

It's a new area and new nomenclature is needed. Hallucinations are when they make stuff up, not when they make a mistake. A mistake would be a response that there are 4 letter "r"s in "strawberry" when asked how many r's are in the word; an hallucination would be when it says there are 4 and provides an apparent quote or reference to an OED entry that doesn't exist to support the answer.
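
For what it's worth, the "mistake" half of that example is trivially checkable in plain Python, nothing model-specific:

```python
# Counting the letter "r" in "strawberry": the correct answer is 3, not 4.
word = "strawberry"
print(word.count("r"))  # 3
```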

> On another point, AI bots are very bad at analyzing input for subtle errors. For example: "How many days did this child live, who was born on March 30, 1883 and died on September 15, 1905, given that 1888, 1892, 1896, 1900, and 1904 were leap years?" The bot will happily compute the number of days, ignoring the fact that, despite my claim, 1900 was not a leap year.

Seems that would be something they have in common with their creators...
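
For reference, here is what the quoted question works out to when the planted error is ignored. This is a plain Python sketch, not anything a bot produced; Python's datetime module uses the proleptic Gregorian calendar, so it correctly treats 1900 as a common year.

```python
from datetime import date

# 1900 was NOT a leap year (divisible by 100 but not by 400), and datetime knows that.
born = date(1883, 3, 30)
died = date(1905, 9, 15)
print((died - born).days)  # 8204 days between the two dates

# A calculation that trusted the prompt's claim that 1900 was a leap year would
# count one extra February 29 and report 8205 instead.
```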