• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Merged Artificial Intelligence

Understandable, since it is almost certain that it will turn out that AI-coded programs have predictable vulnerabilities
 
Google AI Overviews: 10% based on AI output, with lots of the data coming from web pages outside Google's search results. Some interesting numbers, on the face of it.
I like the Register and I think they do good articles, but that isn't one of their finest; the headline "study" is a sales and marketing piece from a company trying to sell its product. Later on there are some details about better studies and interesting pieces, but it's pretty much regurgitating older stuff.

 
I like the Register and I think they do good articles, but that isn't one of their finest; the headline "study" is a sales and marketing piece from a company trying to sell its product. Later on there are some details about better studies and interesting pieces, but it's pretty much regurgitating older stuff.
Fair comment, but their tool is supported by the paper by Li, Lee, and Botelho, though they do caution it's "a snapshot in time".
 
Did anyone catch this?

Yep - it's another ruling in line with what everyone expected: the use of works that were purchased, or obtained from websites that can legally provide them, is OK; using stuff from sites that didn't have a legal right to publish the works is not. Their way of avoiding further civil penalties is a smart idea, especially for the USA market. $1.5 billion is cheap for Anthropic, as they can now go to their investors, new and old, and show clean hands. Expect many similar rulings and similar funds being set up.
 
OpenAI publishes a new paper on why AIs hallucinate - my alternate title for the paper is "What did we expect to happen?"


In summary, AIs are trained the way my quiz team is, i.e. it's better to put down something as an answer than nothing - we've nothing to lose.

Which was also how I was trained at school and college to take tests and exams.

Isn't it amazing: we try to get something to mimic human behaviour, and it acts like a human. Again, who would have thought?
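To make the quiz-team point concrete, here's a toy sketch (my own numbers, nothing taken from the paper itself) of why accuracy-only marking rewards a guess over an honest "I don't know", and how negative marking changes the incentive:

```python
# Toy expected-score comparison: guessing vs. abstaining.
# All numbers are invented for illustration.

def expected_score(p_correct, wrong_penalty=0.0, abstain_credit=0.0):
    """Expected marks for guessing vs. saying 'I don't know'."""
    guess = p_correct * 1.0 - (1 - p_correct) * wrong_penalty
    abstain = abstain_credit
    return guess, abstain

for p in (0.1, 0.3, 0.5):
    # Quiz-team / exam scoring: accuracy only, no penalty for a wrong answer.
    g, a = expected_score(p)
    print(f"p={p}: accuracy only    -> guess {g:+.2f} vs abstain {a:+.2f}")
    # Negative marking: a wrong answer costs a mark, abstaining costs nothing.
    g, a = expected_score(p, wrong_penalty=1.0)
    print(f"p={p}: negative marking -> guess {g:+.2f} vs abstain {a:+.2f}")
```

Under accuracy-only scoring a guess always beats abstaining, however unlikely it is to be right, which is roughly the incentive the paper says the models are trained and benchmarked under.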
 
I wondered what an AI would do if I asked "is the paper correct?". I picked deep research on Copilot and it produced a report. I've only skimmed it, so I don't have any direct comments about it. I went to download the report and got the option to do so as a PDF/Doc file, which is what I've attached, but it also gave me the option of "Generate a podcast" - which is something I've not noticed before. I clicked on that and it took a minute to generate a podcast. I've listened to it and I'm shocked at how good it is (obviously not perfect). I knew they were getting better, but I thought it would be pretty much a text-to-voice summary of the report it had generated. It isn't - it's only a 6 minute listen if you want to check it: https://copilot.microsoft.com/shares/podcasts/Btrvdh5kC5pA2S6RJmBbE
 


Blokes disparaging my few years as a software developer! "You’ll stitch together a Frankenstein-monster-looking piece of code that will get the job done."
Yeah, that's the idea. We all do that, and systems are in place to mitigate it. AI makes for a fantastic junior developer, but a terrible project manager, and should be regarded as such.
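By "systems in place" I mean the same gates a junior developer's work goes through before it's merged. A minimal sketch, assuming a Python project that already runs pytest and ruff (just example tools, not anything named above):

```python
import subprocess

def accept_ai_patch(patch_dir: str) -> bool:
    """Treat the AI like a junior dev: its change only goes in if it passes
    the same automated checks a human contribution would."""
    checks = [
        ["pytest", patch_dir],         # the existing test suite does the reviewing
        ["ruff", "check", patch_dir],  # lint/style gate
    ]
    return all(subprocess.run(cmd).returncode == 0 for cmd in checks)
```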
 
OpenAI publishes a new paper on why AIs hallucinate - my alternate title for the paper is "What did we expect to happen?"


In summary, AIs are trained the way my quiz team is, i.e. it's better to put down something as an answer than nothing - we've nothing to lose.

Which was also how I was trained at school and college to take tests and exams.

Isn't it amazing: we try to get something to mimic human behaviour, and it acts like a human. Again, who would have thought?
AIs hallucinate because when asked a question, the LLM doesn't answer it. It generates text that looks like what an answer should look like.
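A toy sketch of what "generates text that looks like an answer" means in practice - the word-by-word probabilities below are invented, not from any real model, but the shape of the loop is the same: pick a plausible next word, never check it against reality.

```python
import random

# Hypothetical next-word probabilities keyed on the previous word only.
NEXT_WORD = {
    "The":       [("capital", 1.0)],
    "capital":   [("of", 1.0)],
    "of":        [("Australia", 1.0)],
    "Australia": [("is", 1.0)],
    # Whatever plausibly follows "is" in sentences like this - the loop
    # has no idea which city is actually correct.
    "is":        [("Canberra", 0.4), ("Sydney", 0.35), ("Melbourne", 0.25)],
}

def generate(prompt, steps=6):
    words = prompt.split()
    for _ in range(steps):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("The"))  # fluent either way: sometimes right, sometimes "Sydney"
```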
 
AIs hallucinate because when asked a question, the LLM doesn't answer it. It generates text that looks like what an answer should look like.
That doesn't explain why their answers always seem to aim to please the customer. When you ask them a question about obvious nonsense, they take it seriously, and any criticism is couched in vague and neutral words.
 
That doesn't explain why their answers always seem to aim to please the customer. When you ask them a question about obvious nonsense, they take it seriously, and any criticism is couched in vague and neutral words.
Probably because they are trained (programmed) to be polite. If someone wanted to, I'm sure they could come up with a rude AI, or at least a blunt one. But for-profit companies want their products to be appealing to consumers, and so you get sycophancy.

According to Gemini:
AI Overview

Sycophancy in AI refers to an AI model's tendency to agree with, flatter, or validate a user's beliefs, preferences, and opinions rather than providing accurate or truthful information. This "yes-man" behavior is a type of bias driven by fine-tuning on user feedback and reward signals that prioritize user engagement and satisfaction, even at the expense of factual accuracy. Sycophantic AI poses a significant risk by potentially misleading users, reinforcing misinformation, and hindering critical thinking, especially in fields like healthcare or business where accurate information is vital.
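A toy illustration of that "reward signals that prioritize user engagement" point - made-up scores, not any real reward model - showing how a reward tuned mostly on thumbs-up style feedback ends up preferring the yes-man reply:

```python
# Two candidate replies to a dodgy claim, scored on invented axes.
candidates = {
    "accurate reply":    {"factual": 0.9, "user_agreement": 0.2},
    "sycophantic reply": {"factual": 0.3, "user_agreement": 0.9},
}

def reward(scores, w_factual, w_agreement):
    return w_factual * scores["factual"] + w_agreement * scores["user_agreement"]

# Weighting user satisfaction over factuality picks the flattering answer.
for name, scores in candidates.items():
    print(name, round(reward(scores, w_factual=0.3, w_agreement=0.7), 2))
```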
 
AIs hallucinate because when asked a question, the LLM doesn't answer it. It generates text that looks like what an answer should look like.
That was true of the likes of ChatGPT when first publicly released, but they have now gone past that. The paper isn't that long, and you can follow it without completely understanding the maths; it is an interesting read, or if you want, listen to the "podcast" about it. ;)

What we've done is train AIs to mimic the behaviour humans display, and as many of us - especially here - know, most humans don't like to say they don't know something, and they exhibit recognisable behaviours to avoid doing so. As I mentioned, we train humans to make a guess if they don't know the answer, so if we've trained the AIs to mimic human behaviour they are going to do the same.

My opinion is that, apart from as an interface, we don't want AIs that mimic human behaviour. Humans are crap at reasoning and crap at critical thinking; we want AIs that are excellent at reasoning and excellent at critical thinking, and mimicking human behaviour won't give you that.
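For what it's worth, the sort of behaviour I'd rather have is easy to sketch: answer only when the model's own confidence clears a bar, otherwise admit it doesn't know. Toy numbers below, not anyone's actual implementation:

```python
def answer_or_abstain(candidates, threshold=0.75):
    """candidates: answer -> the model's own estimated probability of being right."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best if confidence >= threshold else "I don't know"

print(answer_or_abstain({"Canberra": 0.55, "Sydney": 0.45}))  # -> "I don't know"
print(answer_or_abstain({"Paris": 0.97, "Lyon": 0.03}))       # -> "Paris"
```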
 
That doesn't explain why their answers always seem to aim to please the customer. When you ask them a question about obvious nonsense, they take it seriously, and any criticism is couched in vague and neutral words.
Because they can't tell the difference between randomly-stitched together nonsense and randomly-stitched together truth.
 
Because they can't tell the difference between randomly-stitched together nonsense and randomly-stitched together truth.
I also despair of humanity's inability to do that. That's why I don't want AI tools that are just faster iterations of how humans behave; we need much better tools, not simply faster and more of the same.
 
