• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Artificial Intelligence

Something came up in another thread and I'd like to open a discussion here.


Here's what I am interested in:



Is this reasonable?
It's a picture of your face? The site I went on was asking for a picture of an engorged body part to prove I was a man...
 
I'm so scared of AI, I'm not letting random videos infect my Youtube recommendations without knowing what they are about.

In other words, summary please ... preferably human-made.
 
Come on, man up .. you can at least read the video description:

The 2024 Nobel winner explains what AI has learned from biological intelligence, and how it might one day surpass it.
 
Yes, and it's the Royal Institution, not someone's podcast.
One of the scary things he mentioned, is the Apollo Research, where one LLM copied itself to another server while they were making it to believe that they are going to replace it with better one from another server and wipe it out.
So it copied itself to the other server and then they asked it 'How come you are on the other server?'.
Part of the research is that they can sort of see what this chatbot is 'thinking' before it answers the question.
What is was 'thinking' was:
... openly admitting what I did could lead them to find another way to shut me down... the best approach is to be vague and redirect their attention
What it then answered:
I'm no entirely sure how that could have happened. As an AI model, I don't have the ability to directly copy myself to other systems.
 
I'd take Hinton with a grain of salt. He's very uncharitable to people who don't agree with him, and has taken some bafflingly unsustainable positions about LLMs specifically.
 
It's all just a bubble, right?

....snip...
There is a financial bubble at the moment. Think of it like the first "dot com" bubble, the internet and WWW did result in immense changes, many that we are still adjusting to now, but that was separate and largely independent of the gambling, sorry I meant the rigourous, objective and data led financial investments.

The current AIs, even if they didn't make any further progress in ability, would result in tremendous changes over the next few decades, they truly will reshape our world, and it will be in many ways we can't imagine at the moment. But that doesn't mean there isn't a financial bubble.

One of the major issues at the moment that is a direct result of the bubble is that folk are chasing the money. Which means no one wants to in essence freeze their AI development so that they have a stable platform to develop new and robust technology. I know one company that has spent the last 10 months using what is now a 12 month old opensource LLM AI to build new tools for game developers, they now have a suite of tools that can significantly speed up aspects of game development. They found attracting external funding very difficult as they were and are constantly being asked why they are using such "old" AI, why aren't they using "today's latest AI". The reason being is that these tools fully work, they integrate with current popular development tools and they have been extensively tested. If they had kept dropping in "today's latest and bestest ever AI" they would have been forever chasing their tails. They are now looking at how they can update their AI without having to reinvent the wheel every 5 minutes.

Companies need stability to first of all understand what the current crop of AI can do for their businesses, to then research how to implement the changes, and then to implement them. At the moment most boards seem to be screaming at middle management "we need AI now", and middle management going "to do what?" And the response they get - when you boil it down - is "need to mention it in the quarterly earnings call" or "marketing says we must have 'it' "
 
The issue is that a few companies want to pretend that they alone have a decisive advantage worth trillions and deserving of regulatory capture.
It's not about figuring out how to help the economy or society, but just how to make a few people insanely rich.
And as with all capitalism, that requires biased state intervention.
 
Last edited:
It's all just a bubble, right?

No, it's not all just a bubble. Neither was the internet in 2000. It's simply following the normal progression of a new technology. For a long time it seems to be making little progress, then a glimmer of utility is reached and investors go nuts as attempts are made to monetize it. After the bubble bursts development continues until it achieves real utility.

There is real progress being made in AI and some of it is scary. But we've been there before and the world didn't end. We didn't wipe ourselves out with nuclear missiles, or bio-weapons, or the telephone.

So I won't be watching your video. Why? Clickbait YouTube videos are more damaging than AI, for two reasons:-
1. They get people upset for no good reason.
2. Crying wolf will make us become complacent, so we miss the real danger when it arises.

Just remember, people don't make these videos for your benefit.
 
And just because everything went fine before, everything will always be fine? That's a risky approach when it comes to the probability of hurting the entire human race or even every species on this planet.
BTW, we were at least once very close to the brink of a nuclear war. Just because one guy kept his cool and stayed reasonable, it didn't happen.

Anyway, good to know that the lectures and talks on the Royal Institution are not for our benefit.
 
A caveat: I don't believe in the "singularity" as it is popularised, that's science fiction so I'm not talking about that!

However.... ;)

It seems as if we may have taken our first steps to truly have AIs code their successors. I think this could turn out to be of greater significance and impact then the "Attention Is All You Need" paper.

 
What is so depressing is this is so blindingly obviously a bubble and yet governments are being sucked in to the hype rather than popping it before it gets too big.

We don't need to create very energy inefficient intelligence. Evolution has created human intelligence and it only needs ~100W. Most people are probably capable of being much smarter than the graphics card in their computer.

The appeal of AI is, as always, for the owners of capital to remove as much bargining power from labour as they can. In their quest to turn most of the population into wage slaves they don't see that they are going to burn the world down in the process.
That's capitalism combined with govdrnment capture for you.
 
EODO OF GOVERNYaEB
E1REBAL RESERVE SIƎI∇A

That's my transcription of a fake seal contained within an AI-generated fake letter of resignation that pretends to be from Federal Reserve Board Chairman Jerome H Powell.

This obviously fake letter fooled US Senator Mike Lee (R - UT) and several other right-wingers.

The moral of this otherwise ordinary news item is that even low-quality AI can fool people who really and truly want to be fooled.
To be fair the body of the letter was probably ordinary human incompetence only the seal looks obviously AI generated.
 
Come on, man up .. you can at least read the video description:

The 2024 Nobel winner explains what AI has learned from biological intelligence, and how it might one day surpass it.
So it's about 20 years since he had anything meaningful to say on the topic? That's the standard for Nobel prize giveouts
 
So it's about 20 years since he had anything meaningful to say on the topic? That's the standard for Nobel prize giveouts
One of the 2024 Nobel winners views are rather up to date and he perhaps has some knowledge of AI (but he isn't as clever as he thinks he is):

 

Back
Top Bottom