Merged: Artificial Intelligence

…snip… A study in the US has found that readers can't tell the difference between poems written by famous poets and those written by AI aping their style. To make matters worse – for anyone fostering a love of literature at least – research subjects tend to like AI poetry more than they do verse from human poets.

The researchers suggest readers mistake the complexity of human-written verse for incoherence created by AI and underestimate how human-like generative AI can appear, according to a study published this week in Nature Scientific Reports. …snip…
 
Something that raised the hair on the back of my neck is that the big data crunchers, like OpenAI, may use "synthetic" data to further train their models, with "synthetic" of course meaning computer-generated text. Have they never put a microphone near a speaker outputting the mike's input?
True AI shouldn't need to be trained with mountains of data, it should be able to do its own research. And when it does, it should be able to cite its sources. Otherwise it's useless, because you don't know if it's providing accurate information or just 'hallucinating'.

Today I wanted to answer a question about what rapeseed oil produced in the EU is used for. So I googled it. Google's AI confidently told me that "70% of rapeseed oil is used to make biodiesel". Excellent answer, just one problem - how do I know it's telling the truth? There's no cite, no link I can follow, and I can't even search on the answer because it was generated by the AI so it probably isn't on any website or paper.

So I had to do it the hard way, using various search phrases until Google gave me a link to some real data. Crunching the numbers I found that in 2016 71% of rapeseed oil produced in the EU was used to make biodiesel. Close enough I guess, but that was 8 years ago. A lot has happened since 2016. But Google's AI didn't tell me how old its data was, where it came from, or whether it also calculated the number from other information or just 'imagined' it.

In short, not trustworthy. If I used that answer and it turned out to be wrong that would be bad - not just for me but for truth in general.

Google is failing us. Their AI is making finding good answers harder, not easier. What's worse is that, faced with the difficulty of getting Google to provide real answers these days, most people will just go with what the AI says. There's a very real possibility of it spreading misinformation, but that's not the worst part. The real problem is that it discourages going deeper. If a news article or research paper was cited you might want to read it, and then become a lot more informed. But you don't get that; all you get is one little factoid devoid of nuance, context, or deeper understanding of the topic. This isn't helping people become better informed, it's discouraging it.

"So you wanted to know about rapeseed oil, and I told you. Now go away, I've got more important things to do. I'm not an encyclopedia you know!"

The weird part is that if Google's AI does what they seem to want, it's discouraging people from going to the websites that advertisers are hoping they will visit. So not only is it making people less informed, it's hurting Google's business model. How long before they realize it?
 
They have already announced they are pulling back on the "AI" in their search results. I am certain they are doing so because they are concerned about the lack of accuracy, and not at all because of the commercial ramifications...

I would recommend Copilot over Google's approach; it will provide the references used.
 
@Roger Ramjets
It's not clear why you had to Google twice.

I get the AI response at the top of the results, then a whole lot more stuff in the following pages.

I usually give the AI response a cursory glance at best, or ignore it altogether.

Not getting recent relevant information just means it's not very newsworthy in general. In other words, no one else cares. If the info isn't out there, Google won't find it.
 
Just tried the query "what rapeseed oil produced in the EU is used for?" in Copilot.

[attached screenshot of Copilot's answer]

The reference is to this: https://www.frontiersin.org/journals/plant-science/articles/10.3389/fpls.2019.01423/full


This is Gemini’s response:
Rapeseed oil produced in the EU has a variety of uses, both in the food industry and for industrial purposes.
Food Uses:
* Cooking oil: Rapeseed oil is a popular cooking oil due to its high smoke point, making it suitable for frying, sautéing, and baking.
* Salad dressings and marinades: Its mild flavor makes it a versatile oil for salad dressings and marinades.
* Mayonnaise and other sauces: It's used as an ingredient in mayonnaise, aioli, and other emulsified sauces.
* Processed foods: Rapeseed oil is found in many processed foods, such as baked goods, snacks, and margarine.
Industrial Uses:
* Biodiesel production: A significant portion of EU rapeseed oil production is used to produce biodiesel, a renewable fuel for transportation.
* Lubricants and industrial oils: Its properties make it suitable for use in various industrial lubricants and oils.
* Cosmetics and personal care products: Rapeseed oil is used in some cosmetics and personal care products due to its moisturizing properties.
It's important to note that the specific uses of rapeseed oil can vary depending on the quality and refining process.
 
It didn't say that in the US it's called "Canola oil"? Probably because of the uncomfortable word in it.
 
That's a pretty significant fact for AI to overlook.
Why? If I want to know about rapeseed oil usage in Europe and I am asking in English and my Co-pilot is set to use English (UK) why would I need to be told that it goes under a different name in the USA? That's irrelevant to what I asked it - should it have told me that in Poland it is called "olej rzepakowy", and in Germany "Rapsöl" and so on?
 
When I Googled "rapeseed oil in EU" I got pretty much what you showed. No mention of Canola.
When I typed "rapeseed oil uses", I got "Rapeseed oil, also known as Canola oil... blah, blah, blah..."

So, there you go.
 
Summary of an interview/Q&A with the CEO of Deepmind:

He shares my view of where we are with the LLMs etc., and a true general AI:

…"Large multimodal models of generative AI [those capable of interpreting text, images, and videos] are going to be a critical part of the overall solution to developing artificial general intelligence [AGI, which matches or surpasses human intelligence], but I don't think they're enough on their own. I think we're going to need a handful of other big breakthroughs before we get to what we call AGI," says the former chess prodigy. This statement stands in contrast to the predictions of figures like Sam Altman, CEO of OpenAI, and Elon Musk, technology entrepreneur with a posting in the new Trump administration, both of whom claim that superintelligence is just around the corner…

He’s obviously not half as clever as he thinks he is but since he agrees with me he’s more than likely right….
 
I actually didn't know that rapeseed and canola are the same thing. I was also well into adulthood before I knew that what we Americans call corn is also known as maize in other English-speaking countries.
 

…It is an open question whether large language models (LLMs), trained on general text and scientific articles, can predict the outcomes of experiments. If LLMs' predictions surpassed human experts, the practice of science and the pace of discovery would radically change. We consider this question for neuroscience, which is a large and interdisciplinary field. Prediction in neuroscience should be challenging for human experts for several reasons: (1) there are often many thousands of relevant scientific articles, (2) an individual study can be noisy or unreliable and may not replicate, (3) neuroscience is a multi-level endeavour, spanning behaviour and molecular mechanisms, (4) and the analysis methods are diverse and can be complex, (5) as are the methods used, which include different brain imaging techniques, lesion studies, gene modification, pharmacological interventions and so forth…
 
An interesting and long article from the New Yorker


My summary: Repetitive task training by humans and then robots.

A sample:

One camp is betting on simulation. Nvidia, the giant A.I. chipmaker, has made software for creating “digital twins” of industrial processes, which allows computers to practice motions before robots actually try them. OpenAI used simulated data when training its robotic hand to spin a Rubik’s Cube; copies of the hand, practicing in parallel, carried out simulations that would have taken a real robot ten thousand years to perform. The appeal is obvious: all you need to generate more data is more computing power, and robots can learn like Neo in “The Matrix,” when he downloaded kung fu. Robot hands and Rubik’s cubes can’t be simulated perfectly, however. Even a paper towel becomes unpredictable when you crumple or rip it. Last year, Nvidia published a paper showing that researchers could teach a simulated robot hand to spin a pen in its fingers, the way a bored student might, an action that essentially requires the pen to be in flight most of the time. But the paper makes no mention of whether an actual robot could perform the trick.
 
Something that raised the hair on the back of my neck is that the big data crunchers, like OpenAI, may use "synthetic" data to further train their models, with "synthetic" of course meaning computer-generated text. Have they never put a microphone near a speaker outputting the mike's input?
If they're getting their data off the internet, then surely that's already happening?
 
It would be nice to have an option to turn off the AI result, to save me having to scroll past it. I was searching for an answer to a question yesterday, and the answer the AI gave was wrong. Luckily I already knew that, and ignored it, but someone who trusted it could have been fined. (I wanted to know what trains out of London I could use my cheap day return on.)
 
Curious. I tried googling 'trains out of London' and it didn't give an AI answer. Instead I got a bunch of sponsored links, including nationalrail.co.uk and londonnorthwesternrailway.co.uk. Perhaps they only use AI for queries that seem to want a general answer rather than a specific web page. They say their generative AI is 'experimental', so hopefully the 'experiment' will soon be over and they will go back to doing what they are supposed to do - provide links to relevant websites.

People today don't realize how bad it was in the early days, when if you didn't have the exact URL you were screwed. We would keep long lists of websites we had discovered from print media, word of mouth or just trying random names. Some clever dicks chose URLs with similar names that you might type by mistake. You'd be trying to go to shareware.com to get some free software, but accidentally miss off the 'e' and land on sharware.com - a porn site!
 
Some of us are old enough to remember when you had to actually submit your website to a search engine for it to be indexed.
 
Some clever dicks chose URLs with similar names that you might type by mistake. You'd be trying to go to shareware.com to get some free software, but accidentally miss off the 'e' and land on sharware.com - a porn site!
The best (?) one of those was dinsey.com.

ETA: that's now owned by, and redirects to, disney.com. Yes, I checked.
 
The good (I think): This can be useful in many ways: helping to improve text was just the start. It can create graphics, offer ideas for plans of many kinds, perform diagnostics, etc., and that's just the beginning.

The bad: Oddly, after seeing several mistakes, I would never count on this for grammatical proofing. It also (despite claiming great ability in this respect) reports inaccurate financial information (like expense ratios for mutual funds).

The ugly: The increasing and long-term reliance we will have on this. "Hey AI, I don't want to think or do the work, do it for me." The wheels of idiocracy are in motion.
 
One other problem is that if you use it to create some text and send it to someone, they might be able to detect that you used ChatGPT to write it. This software creates text in a very strange style.
 
Yeah I find it's better if you start out with something and ask for suggested improvements.

It does have its quirks. When I try using it to create graphics, it seems to have an odd aversion to blue/gray tones....and the graphics almost always start out ridiculously "busy."
 
I like to use it (Copilot, which is built on top of ChatGPT) to quickly check facts or provide definitions. Which is ironic, since I occasionally need to fact check it. But I've been a skeptic for a long time and I've got a pretty decent sense for when it's hallucinating.
 
I just had this exchange with Copilot.

[screenshot 1]
Starts well. These are screenshots from my own browser, by the way.

[screenshot 2]
To be fair, it's not wrong. There are indeed two Rs in "strawberry". There is also a third.

[screenshot 3]
And it learns to count.

[screenshot 4]
And corrects its original mistake. Okay, now let's start a new conversation.

[screenshot 5]
Excellent. Well done.
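For anyone curious, the count itself is trivial to verify outside the chatbot. A one-liner in plain Python (this has nothing to do with how the model works internally, it just settles the ground truth):

```python
# Count occurrences of the letter "r" in "strawberry".
word = "strawberry"
print(word.count("r"))  # prints 3: s-t-R-a-w-b-e-R-R-y
```

The model fails precisely because it never "sees" individual letters; it operates on tokens.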
 
lol

And there's the scary part, because unless it's an obvious mistake like that, pretty much everyone will take any AI's word as gospel. We are running headlong towards "Idiocracy" becoming a reality...

(PS: I had to try your strawberry test on regular ol' ChatGPT... sure enough, it failed again.)
 
I was reminded of the movie Fletch:

"Is that one or two B's in your name, Mr. Babar?"
"One - B A B A R"
"That's two B's"
"Yes, but not not right together. I thought that's what you meant."
 
I like to use it (Copilot, which is built on top of ChatGPT) to quickly check facts or provide definitions. Which is ironic, since I occasionally need to fact check it. But I've been a skeptic for a long time and I've got a pretty decent sense for when it's hallucinating.
You're making it sound as if human sources don't occasionally need fact checking ;) .

And there's the scary part, because unless it's an obvious mistake like that, pretty much everyone will take any AI's word as gospel.
Assumes facts not in evidence.
 
You're making it sound as if human sources don't occasionally need fact checking ;) .


Assumes facts not in evidence.
Don't think it does - humans overall do defer to those who speak or write fluently; we equate that with being "intelligent", whereas what the LLMs have shown is that that ability alone does not require intelligence.
 
Strange, my version of Copilot hasn't got that wrong in months, which in LLM development terms is a long, long time. Same with the other old one: "which number is larger, 9.18 or 9.8?"
I just tried chatgpt.com, it still thinks there are 2 Rs in Strawberry. :p
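The 9.18-vs-9.8 trap presumably works because the model compares the fractional digits as strings (18 "beats" 8). Any language's numeric comparison gets it right, of course:

```python
# 9.8 is 9.80; comparing as numbers, not digit strings, gives the right answer.
a, b = 9.18, 9.8
print(a < b)      # True
print(max(a, b))  # 9.8
```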
 
That's crazy. I thought they sorted that one out ages ago.

Hmmmm... I tried it and it came out correctly.

But it is still stupid. Look at this...

[attached screenshot]
 