
Merged Artificial Intelligence

I don't believe the words of LLM propagandists. The only thing they've really done over the last few years is become more power- and processor-intensive per prompt, i.e. dearer.

They are a scam that Altman is playing on the rest of the tech world, and us poor schlubs are caught in the middle.
Ask them how many letter rs there are in strawberry.

What is the scam?
 
AIs hallucinate because when asked a question, the LLM doesn't answer it. It generates text that looks like what an answer should look like.
That was true with the likes of ChatGPT when first publicly released but they have now gone past that.

Is that so? How, exactly? Appreciate a brief discussion of where we're at exactly --- in simple words of one syllable please!

(Browsed through the link, but emerged none the wiser. Firstly because it was a quick browse. But probably also because it's kind of beyond me, I suspect, and simply a more leisurely read later probably won't do it.)

My impression was exactly what @arthwollipot is saying here. That the ChatGPTs and Groks essentially throw up a statistically likely answer based on answers they've seen, without understanding what they've actually seen or what they're actually saying. Which is why simply going by the AI summary to a question you've asked Google, for instance, is a Bad Idea, assuming you care enough to spend more than two or three seconds on it. If it's important enough to warrant more than a second or two, then it makes sense to trace back the links that the AI has used, or to ditch AI altogether in searching.

Has that changed, already? In what way has current AI "gone past that"?
 
One example is that they are no longer totally reliant on their training data: most of them now offer modes that can do live searches and incorporate what they find into their answers, so they are no longer just regurgitating their training data. This also lets them provide up-to-date references to back up their answers. When ChatGPT was initially released to the public, for example, if you asked it who the monarch of the UK was it would more than likely tell you it was Elizabeth II, as that was statistically the most likely answer, even though it was the wrong answer.

They can now take the prompt you give them and develop a "plan" to carry that task out: breaking it down into sub-steps, using the results of each step in the next, and checking each step and, if necessary, redoing it. So in the case of asking who the monarch of the UK is, it would break that down, probably consult its training data, then recognise that it is a time-bounded question and do a web search for the current monarch. When it then gave you the answer, it would include a link to the reference material it used. So now you can check the references - as we all do all the time...
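That plan / execute / check loop can be sketched in a few lines. Everything below is a toy stand-in (the step functions are stubs, not a real LLM or web search); only the control flow - run each sub-step, feed earlier results forward, verify, and redo on failure - is the point.

```python
def run_with_checks(steps, max_retries=2):
    """Run (do, check) pairs in order, feeding earlier results forward
    and redoing any step whose check fails."""
    results = []
    for do, check in steps:
        for _ in range(max_retries + 1):
            result = do(results)        # each step can see earlier results
            if check(result):           # verify the step before moving on
                results.append(result)
                break
        else:
            raise RuntimeError("step failed after retries")
    return results

# Toy "who is the UK monarch?" run: stale recall, self-check, live search.
steps = [
    (lambda prev: "Elizabeth II",              lambda r: True),
    (lambda prev: "time-bounded question",     lambda r: True),
    (lambda prev: "web search -> Charles III", lambda r: "Charles" in r),
]
print(run_with_checks(steps)[-1])  # web search -> Charles III
```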

Most also have a chain-of-reasoning ability, so now if asked how many Rs are in strawberry they can go:
s is not an r
t is not an r
r is an r

Previously they tried to pull an answer from their training data, and due to tokenisation that could fail - often amusingly so.
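A toy illustration of why tokenisation made this hard (the tokenizer below is a fake two-chunk split, not a real BPE tokenizer): the model sees multi-letter tokens rather than individual characters, whereas spelling the word out one character at a time, as in the chain above, makes the count trivial.

```python
def toy_tokenize(word):
    # Fake tokenizer: real BPE tokenizers likewise emit multi-letter chunks,
    # so individual letters are never "visible" to the model.
    return ["straw", "berry"]

def count_letter_stepwise(word, letter):
    # The chain-of-reasoning approach: check one character at a time,
    # "s is not an r, t is not an r, r is an r, ..."
    return sum(1 for ch in word if ch == letter)

print(toy_tokenize("strawberry"))                # ['straw', 'berry']
print(count_letter_stepwise("strawberry", "r"))  # 3
```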

The "ChatGPT" part is now often only a single component of the AI. At the simplest level, instead of using "autocomplete" to generate an answer to "what is 1+1" from its training data, it has a calculator it can use to provide that answer.
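At that simplest level, the tool dispatch can be sketched like this. The regex router below is purely hypothetical - real systems let the model itself decide when to call a tool - but it shows the idea of routing arithmetic to an actual calculator rather than "autocompleting" an answer.

```python
import operator
import re

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def maybe_use_calculator(prompt):
    """Hypothetical router: answer simple arithmetic exactly with a
    calculator; return None to fall back to the language model."""
    m = re.fullmatch(r"what is (\d+)\s*([+\-*/])\s*(\d+)\??",
                     prompt.strip().lower())
    if not m:
        return None
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return OPS[op](a, b)

print(maybe_use_calculator("What is 1+1"))            # 2
print(maybe_use_calculator("Who is the UK monarch?")) # None -> use the LLM
```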

Does that help - you could ask an AI for more details...
 
Interesting, but mostly beyond my comprehension.

I did learn a cool new word: "neuromorphic".
The thing I take away from it is that "necessity is the mother of invention": the USA (primarily) has been throwing ever more computing resources - mainly Nvidia processors - at improving performance, but because the USA has starved China of that type of hardware, they are having to learn new ways to run AIs.
 
One example is that they are no longer totally reliant on their training data: most of them now offer modes that can do live searches and incorporate what they find into their answers, so they are no longer just regurgitating their training data. This also lets them provide up-to-date references to back up their answers. When ChatGPT was initially released to the public, for example, if you asked it who the monarch of the UK was it would more than likely tell you it was Elizabeth II, as that was statistically the most likely answer, even though it was the wrong answer.
Right. I asked Google AI who was shot, just that, and its answer included Charlie Kirk, not exclusively but pretty much prominently and unmistakably. So real time, absolutely.

They can now take the prompt you give them and develop a "plan" to carry that task out: breaking it down into sub-steps, using the results of each step in the next, and checking each step and, if necessary, redoing it. So in the case of asking who the monarch of the UK is, it would break that down, probably consult its training data, then recognise that it is a time-bounded question and do a web search for the current monarch. When it then gave you the answer, it would include a link to the reference material it used. So now you can check the references - as we all do all the time...

Most also have a chain-of-reasoning ability, so now if asked how many Rs are in strawberry they can go:
s is not an r
t is not an r
r is an r

Previously they tried to pull an answer from their training data, and due to tokenisation that could fail - often amusingly so.

The "ChatGPT" part is now often only a single component of the AI. At the simplest level, instead of using "autocomplete" to generate an answer to "what is 1+1" from its training data, it has a calculator it can use to provide that answer.
Ah. So that's a factual thing already. I was under the impression that AI still answered '1+1=?' statistically, and you're telling me it's now at the stage where it's got a calculator and, more importantly, can figure out when to shift to calculator mode. That's... cool, I guess. Certainly it means we can use AI for math questions --- always with a pinch of salt, but clearly asking AI math isn't a complete-joke-level thing anymore.

And ditto logic as well, like you point out. ...But I'm afraid I know much too little about either how computer logic works, or for that matter how the human neuro-whatsit logicking mechanism works, to translate that into an answer to my basic question. ...I mean, I can see how it's more than just blind regurgitation of data, and there's logic going on: but what isn't clear to me is whether that added logic muscle is ...muscular enough, that we can take it seriously now. With a pinch of salt, but still. Way more than the joke-level stage, and kind of sort of like we do Wikipedia, give or take. Can we?

Does that help
Oh yes, it does, thanks! 👍

- you could ask an AI for more details...
So, like I said, I was wondering (and, while somewhat wiser thanks to your detailed response, continue to wonder):
Are we already past that point, to where we can directly rely on AI output for searches? (Like we do Wikipedia, I mean to say, checking further if the "research" is really important, but content to go with it for more or less trivial matters? Unlike earlier, when it was, IMV, reasonable not to attach anywhere close to even that much credence? Are we past that "earlier" point already, then, do you think? To be clear, is AI already at the, say, Wikipedia stage of credibility, more or less?)

----------


I took your advice, and actually did ask AI the above (italicized) question. The first sentence, that's outside of the parentheses, verbatim. And here's what I got, spoilered for brevity:


No, we are not past the point where we can directly rely on AI output for searches. While AI-powered search features like generative summaries provide speed and convenience, they are known to "hallucinate"—producing inaccurate, misleading, or fabricated information with a confident tone.
Blindly trusting AI summaries can be risky, and they should be viewed as a starting point for further research, not as a definitive answer.

The risks of relying solely on AI search summaries
  • Hallucinations are common: AI models sometimes generate nonsensical or incorrect information, especially for unusual queries or when dealing with humor and satire. Infamous examples include recommending eating rocks or adding non-toxic glue to pizza cheese.
  • Bias and manipulation: AI models are trained on data from the internet, which can contain inherent biases. This can lead to skewed or unfair results. A user relying only on a single AI-generated answer may dismiss relevant, conflicting viewpoints.
  • Narrowing of information: AI summaries often prioritize the most relevant keywords rather than critically assessing the trustworthiness of the source. This can narrow the scope of the information you receive and could be exploited by those seeking to manipulate search results.
  • Reduced organic traffic: Because AI provides a direct answer, users often have no need to click on the source material. This has a significant impact on publishers, particularly news organizations, who are seeing a decline in traffic.
  • Reduced brand trust: AI-generated content can distribute false information about products or services. One business was forced to warn customers not to trust AI-generated specials that it never offered, causing real-world frustration.

The current role of AI in search
Despite the risks, AI still provides significant benefits in search. In a Gartner survey from 2025, 51% of consumers admitted that AI impacts how they search for products and services. Search companies are working to improve their AI models.
For now, AI-powered search is best used as a hybrid approach, complementing traditional methods rather than fully replacing them. This involves:
  • Starting your search with an AI summary to get a quick overview of a topic.
  • Using the provided source links in the AI summary to dig deeper into the information.
  • Cross-referencing information from multiple links to confirm the accuracy of the AI-generated answer.
  • Using AI for simpler queries, like basic facts, and relying on traditional methods for complex or sensitive questions.
The next step in search will likely involve a collaboration between AI-powered and human-based searching, rather than a full takeover by AI.


That sounds reasonable enough, right? Although of course that's like a Zeno's question or something, to put that to AI, kind of like asking someone 'Do you ever lie?' (But of course, even taking that at face value, that answer applies to both that "earlier" phase, as well as this later/now phase that I was wondering if we're already at.)


...So then, I ask AI, one more time. This time the whole question, the entire italicized paragraph, not just the first sentence. And again, I get an apparently perfectly reasonable answer. Not taking up space copying it here, not even spoilered. But it's a cool enough discussion, complete with tables and all, that point out how and why exactly Wikipedia is better than current AI.

And if I were to take the above at face value, then that would tell me that the answer to my question is indeed a Yes, both directly and as a meta-level demonstration. Sure, there's a difference with Wikipedia, but no longer quite enough that we should look down on those who in all seriousness use AI as their default for searching, as essentially intellectual cretins; or at least no more so, not really, than those who do that with Wikipedia. ...Maybe I need to get with the times, then, which are now past that point already? Is what I'm still not quite clear about.
 
I probably should have mentioned how most of the leading AIs are now becoming "agentic AI", or being incorporated into agentic AIs. As ever, the concept is simple, the implementation less so. All it means is that they can perform tasks independently of the user. In other words, I should be able to prompt one of these AIs with something like "Purchase me a copy of Bibliomen: Twenty Characters Waiting for a Book by Gene Wolfe, don't spend more than £100, I want one in fine condition" and it would trundle off, search eBay, AbeBooks etc. to find one, perhaps send a message to a seller to check the condition, and then buy it using my card details - and lo and behold, a delivery driver knocks on the door with my book three days later. (And no, I wouldn't trust an AI to do that, at least yet!)

Apparently, if you live the connected life of the denizens of the likes of Silicon Valley, these agentic AIs can arrange your whole life: responding to your emails, buying your partner a "thoughtful" present* and so on. For most of us in the real world they are still flaky idiot savants.


*I read an article a month or so ago in which the author was excited by the capability of this type of AI, and really did claim that it would be great to be able to set up an AI agent not only to remind him of his partner's birthday but to order a "thoughtful present" on his behalf and set up a "romantic" meal date. Yep, he thought it was great that he wouldn't have to do any kind of thinking for his partner's birthday after telling the AI to sort it out, and he considered that its buying a present would constitute a "thoughtful present". He said it was great because the AI agent could go through her social media posts to find her favourite restaurant and work out what she would like for a present. Presumably the goal is to never have to directly communicate with your partner at all: prompt your AI to send a few "fun and romantic" messages every day so your partner knows they are in your thoughts, and they can do the same with their AI...
 
I thought AI was supposed to usher in the death of gainful employment, so I'd have more time to spend with my significant other, rather than replace significant other time.
 
I thought AI was supposed to usher in the death of gainful employment, so I'd have more time to spend with my significant other, rather than replace significant other time.

When you read these articles - "5 prompts that have changed my life" and the like - you do form the impression that many of the authors would rather not deal directly with the humans they occasionally must share some space with.

If you want to see something truly horrifying, do a search for something like "ai prompts for better parenting"; every single one of the people behind such things should never, ever have kids.



You know those moments when you're completely out of ideas? When your kid needs help with math and you barely remember long division? When they're bored and you've exhausted your creativity? When dinner needs to happen but you're staring at random leftovers? What if you had a library of proven prompts that solve these exact problems in seconds?
.....
"Finally, a way to quickly access relevant parenting tips, in the precise moment I need them. The ability to help my kids navigate tricky friendship challenges on the fly has been a huge help!” - Mike T., dad of twins

It started with a math worksheet and a scream.

It was nearly bedtime, and my 8-year-old daughter was sitting at the table, red-faced and teary-eyed over a single word problem.
She slammed her pencil down and shouted,

“I’m so stupid! I’ll never get this! I hate everything!”
I felt a familiar knot rise in my chest — frustration, helplessness, and guilt all tangled together.
I wanted to help her, but I also wanted to say, “It’s not that hard. Just calm down.”
Instead, I opened ChatGPT and typed:

“Help me respond gently to my 8-year-old who’s crying over homework and saying she’s stupid. I want to validate her feelings, help her regulate, and avoid escalating the situation.”
The response gave me exactly what I needed:
a calm, empathetic way to speak to her, and to myself.
 
You know those moments when you're completely out of ideas? When your kid needs help with math and you barely remember long division? When they're bored and you've exhausted your creativity? When dinner needs to happen but you're staring at random leftovers? What if you had a library of proven prompts that solve these exact problems in seconds?
What if you had a library of proven solutions to those problems instead?
 
What if you had a library of proven solutions to those problems instead?
You sort of have a point.

The example with the math homework is a bit weird. Like, is that really where your parenting skills have run out?

And I can see things quickly veering into creepy territory, where every interaction with your child is a script.

Edit: Actually, I've given it some thought, and a library of books on how to help a child requires actual effort, and while I may sound like an old man screaming at clouds, that effort itself should contribute far more to a healthy relationship with one's child than quick fixes.

Of course, I just remembered I'm not actually a parent, nor am I planning to be one, so this is actually far above my pay grade.
 
Surely this is not good for us humans?


But I'll wait until the price drops to 10 cents.

That's the podcast Copilot "created" on AI hallucinations. It is entirely AI, and I hate to admit it, but it isn't terrible.
 
