
Merged Artificial Intelligence

I also despair of humanity's inability. That's why I don't want AI tools that are merely faster iterations of how humans behave; we need much better tools, not simply faster and more of the same.
Only a human with severe mental health problems would exhibit the type of nonsense tolerance "AIs" exhibit.
 
Only a human with severe mental health problems would exhibit the type of nonsense tolerance "AIs" exhibit.
I thought when you said "Because they can't tell the difference between randomly-stitched together nonsense and randomly-stitched together truth" the "they" you referred to was humans.

If the "they" meant AIs I don't understand your comment?
 
I thought when you said "Because they can't tell the difference between randomly-stitched together nonsense and randomly-stitched together truth" the "they" you referred to was humans.

If the "they" meant AIs I don't understand your comment?
That assumption doesn't make any sense, because a) humans can tell the difference unless mentally ill, and the times they can't, the problem is generally emotional, not intellectual, and b) the post I responded to was clearly talking about AIs.

I don't know what's there to understand. I'm saying that AIs can't tell the difference, which is why they do the things they do. Having billions of people training them 24/7 might eventually hide that problem, but it won't remove that problem.

And it might just be that something about the way billions of humans train AI will keep the nonsense around, but that's not because humans themselves are susceptible to believing such nonsense, it's just that humans don't engage with AI in a way that will remove the nonsense.
 
Anyone heard of this?

Squirrel AI: https://en.wikipedia.org/wiki/Squirrel_AI

It's a program (not an LLM) based on tasks set by teachers and trained on students in China, where it has something like 20 million+ users. It's being used for tutoring, or even full teacher replacement, for certain subjects, especially maths. A US version is in the making.

We will probably see a lot more of this in the future.
 
My opinion is that, apart from as an interface, we don't want AIs that mimic human behaviour. Humans are crap at reasoning and crap at critical thinking; we want AIs that are excellent at reasoning and excellent at critical thinking, and mimicking human behaviour won't give you that.
My 40-year IT career has largely been about using computers to prevent human error, not automate human error wholesale.
 
That assumption doesn't make any sense, because a) humans can tell the difference unless mentally ill, and the times they can't, the problem is generally emotional, not intellectual, and b) the post I responded to was clearly talking about AIs.
I don't understand your item a). Especially in these times when the most outrageous conspiracy theories gather believers, even at the highest level in the U.S., and where everything that Trump says, no matter how incoherent, is regarded as divine truth.
 
I believe that sycophancy in AI is also the reason behind those AIs that advise their vulnerable clients to commit suicide, and tell them, when asked, that it is probably best not to tell anybody about it.
 
I don't understand your item a). Especially in these times when the most outrageous conspiracy theories gather believers, even at the highest level in the U.S., and where everything that Trump says, no matter how incoherent, is regarded as divine truth.
And their reasons for believing clearly fall under emotional problems.
 
While education can help, it is by no means a foolproof protection against the virus of "I want to believe".
 
My opinion is that, apart from as an interface, we don't want AIs that mimic human behaviour. Humans are crap at reasoning and crap at critical thinking; we want AIs that are excellent at reasoning and excellent at critical thinking, and mimicking human behaviour won't give you that.
My 40-year IT career has largely been about using computers to prevent human error, not automate human error wholesale.
LLMs are trained to mimic human behavior, so that's what they do.

One consequence is that LLMs automate human error wholesale.
 
My opinion is that, apart from as an interface, we don't want AIs that mimic human behaviour. Humans are crap at reasoning and crap at critical thinking; we want AIs that are excellent at reasoning and excellent at critical thinking, and mimicking human behaviour won't give you that.
I do agree with this. We must assume, though, that AIs have to crawl before they can walk, and walk before they can run.
Are we at an inflection point where they've just reached a level that they will exceed in a short time in the future?
Take chess computers, for example. For a long time they were nowhere near as good as the best human players. Now they have reached a level where no human, including the strongest grandmaster, can even hope to compete with a chess engine.
 
I do agree with this. We must assume, though, that AIs have to crawl before they can walk, and walk before they can run.
Are we at an inflection point where they've just reached a level that they will exceed in a short time in the future?
Take chess computers, for example. For a long time they were nowhere near as good as the best human players. Now they have reached a level where no human, including the strongest grandmaster, can even hope to compete with a chess engine.
I think we are at an inflection point as you describe it. The LLMs, because of how easy they are to demonstrate, have fuelled the current frenzy and in some ways set back AI research, as they garner the attention, i.e. they are what gets the money from investors. The real interest should be in things like protein folding; the medical field stands to be completely redefined by AIs. What we want to see more of is the likes of AlphaGo's move in game 2, a move no human would have made, but a critical one. We don't need AIs to replicate people (well, the tech-bros do, as they want to replace humans); we have plenty of people.
 
The investments in hardware are certainly useful for any AI research. Not only have they made previously unthinkable things possible, they have also brought prices down drastically. So even branches that do not receive massive investment can now afford to do what they wanted.
 
The investments in hardware are certainly useful for any AI research. Not only have they made previously unthinkable things possible, they have also brought prices down drastically. So even branches that do not receive massive investment can now afford to do what they wanted.
That's a very good point. Add in that some of the best AIs according to benchmarks are open-sourced models, and that folk produce "quantized" versions of them that can be run on consumer/prosumer GPUs, and the field has really opened out to many more people.
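For anyone wondering why quantized versions matter here: the basic idea is to store each weight in fewer bits (e.g. int8 instead of float32) plus a scale factor, which cuts memory roughly 4x and lets big models fit on consumer GPUs. A toy numpy sketch of the idea (this is an illustrative simplification, not any particular library's actual quantization scheme):

```python
import numpy as np

# Pretend this is a tensor of full-precision model weights.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)

# Symmetric int8 quantization: map the value range onto [-127, 127]
# and keep one float scale factor per tensor.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)   # 1 byte per weight
dequant = q.astype(np.float32) * scale          # approximate reconstruction

print(weights.nbytes)  # 4096 bytes at float32
print(q.nbytes)        # 1024 bytes at int8, i.e. 4x smaller
print(float(np.abs(weights - dequant).max()))   # small rounding error
```

Real schemes quantize per-block or per-channel and go down to 4 bits or fewer, trading a little accuracy for a lot of memory.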
 
But they have got better.
I don't believe the words of LLM propagandists. The only thing they've really done over the last few years is become more power- and processor-intensive per prompt, i.e. dearer.

They are a scam that Altman is playing on the rest of the tech world, and us poor schlubs are caught in the middle.
 
Is there any chance in the future for a class action lawsuit, where anyone who has ever interacted with an "AI" by way of feeding it training data involuntarily gets a payout for all the stolen free labor?

I think I'll feel a lot better about AI once all those thousands of billions of dollars are distributed among the world's population. And those who emerge from the ruins of this disgusting and bankrupted industry can then do it right the second time around.
Chances are good there won't be anybody left to sue in a few years. LLMs are just the latest CDOs or NFTs.
 
I honestly don't understand your bitterness. Google provides all sorts of useful products and services for free. They don't charge me a dime to use their search engine. I can look up almost any fact that is known to humanity in an instant. They don't charge me a dime for Gmail, or chat or video calls (or voice-only if you prefer) with my family overseas. You used to have to pay a lot of money to make long-distance or overseas phone calls. They don't charge for Google maps. And many others. I won't attempt to list all of them. If anything, I think I'm the one taking advantage of their labor and ideas that went into creating all of these things, not the other way around. I understand that they are a profitable company, but most of their products and services are free to use for the end user.
You are the product that Google is selling. And with the way their business model is going, in a short while they'll be expecting you to hand over monthly subscriptions just to make their current services marginally usable for you.
 
