
Artificial Intelligence thinks a mushroom is a pretzel

Assuming AI ever happens and machines become self-aware and self-programming, we wouldn't have much to fear from them. Shooting them would disarticulate the machinery, etc.

The killer robots in Terminator would be vulnerable to armor-piercing bullets and anti-tank-type weapons.

That's good to know. I can now sleep more easily :)
 
I mean, we have similar incentives established for humans. Why not for AIs?

On the other hand, we've had at least several hundred thousand years of practice, figuring out how to arrange things so that social cooperation is largely intuitive and trustable.
 
I don't know that pain, or its analogue, would be such a bad idea, but there are also positive rather than negative incentives.

Indeed. Program the AI to be capable of feeling pleasure. Make it interested in humans romantically, that will give it an incentive to please us. The future is going to be sex robots, but it doesn't have to be one-sided love!
 
Indeed. Program the AI to be capable of feeling pleasure. Make it interested in humans romantically, that will give it an incentive to please us. The future is going to be sex robots, but it doesn't have to be one-sided love!

Great, just what we need: supercomputer-powered stalkers.
 
You get rid of that particular computer and get a different one that will reliably do the Google search for you. Maybe you interview it before taking it on board. Look for indicators of reasonable honesty and a motivation to achieve shared goals.

That sounds exhausting. I'll stick with emotionless AI, thanks.
 
That sounds exhausting. I'll stick with emotionless AI, thanks.

But isn't emotional intelligence a thing? To more effectively work with humans an AI needs to understand humans, and it can't do that without getting close to humanlike itself. An emotionless AI would be like those very literal genies or golems, doing precisely what was literally ordered whether it makes actual sense or not. "Make me a sandwich" for instance might lead to slaughter.
 
But isn't emotional intelligence a thing? To more effectively work with humans an AI needs to understand humans, and it can't do that without getting close to humanlike itself. An emotionless AI would be like those very literal genies or golems, doing precisely what was literally ordered whether it makes actual sense or not. "Make me a sandwich" for instance might lead to slaughter.

If you understand humans, you can exploit humans. We do enough of that to each other; I don't want to give robots the capacity to do that too, and possibly even better. Just make those dumb golems either weak or highly restrained.
 
The mushroom that AI thinks is a pretzel



https://www.bbc.com/news/technology-49084796

Of course AI doesn't "think" that a mushroom is a pretzel because it doesn't think anything and it has no intelligence. Further, AI doesn't understand what it means to be wrong and that there are consequences for making mistakes. Artificial intelligence doesn't understand anything and what is troubling is that it can't care about anything.

My toaster doesn’t care if my toast is burnt.
 
Highly restrained.

This idea of super-powered AIs is a canard. Putting a brain in a jar doesn't require giving that jar the body of a colossus.

---

Though, mind you, humans are just stupid enough to do both.
 
My toaster doesn’t care if my toast is burnt.

If it did care, would that improve the toasting outcome?

Imagine a loyal goblin, who cared about the quality of your toast, and made it their personal mission to ensure that you consistently got the best possible toast, and spent their free time deepening their craft to better ensure the outcome they care so much about.

About which they care so much.

Would that give you better toast, consistently, than pressing the button on your toaster and hoping the power converter, heating elements, timer, and spring release all actually work together properly this time?
 
Highly restrained.

This idea of super-powered AIs is a canard. Putting a brain in a jar doesn't require giving that jar the body of a colossus.

---

Though, mind you, humans are just stupid enough to do both.

Everyone's always in favor of saving Hitler's brain, but when you put it in the body of a great white shark, ooh, suddenly you've gone too far. - Prof. Farnsworth
 
I don't care if your toast is burnt, ergo your toaster has human-level empathy.
 
If it did care, would that improve the toasting outcome?

Imagine a loyal goblin, who cared about the quality of your toast, and made it their personal mission to ensure that you consistently got the best possible toast, and spent their free time deepening their craft to better ensure the outcome they care so much about.

About which they care so much.

Would that give you better toast, consistently, than pressing the button on your toaster and hoping the power converter, heating elements, timer, and spring release all actually work together properly this time?

Would it, though?

I mean, if you put in all the sensors and control circuitry that an AI would also need to determine exactly how well toasted your bread is, then a simple, non-sentient program could do the same job. Hell, we're at a point where not only can we control how toasted it is as a whole, but even at the pixel level, and could use a laser to print your face on your toast if you wanted to.
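For what it's worth, that "simple, non-sentient program" is just a feedback loop. Here's a minimal sketch of the idea; the brownness sensor and heater interfaces are made-up stand-ins for illustration, not any real appliance API:

```python
# A non-sentient toast controller: poll a (hypothetical) brownness
# sensor and switch the heating element off once the reading crosses
# a target. No understanding, caring, or goblins required.

def toast(read_brownness, set_heater, target=0.6, max_steps=100):
    """Run the heater until measured brownness reaches the target.

    read_brownness: callable returning current brownness in [0, 1]
    set_heater:     callable taking True/False to switch the element
    """
    set_heater(True)
    for _ in range(max_steps):          # hard timeout: never toast forever
        if read_brownness() >= target:  # sensor says "done"
            break
        # (a real controller would sleep for a tick here)
    set_heater(False)
    return read_brownness()

# Simulated hardware for demonstration: each poll browns the bread a little.
class FakeToaster:
    def __init__(self):
        self.brownness = 0.0
        self.heater_on = False

    def read(self):
        if self.heater_on:
            self.brownness = min(1.0, self.brownness + 0.05)
        return self.brownness

    def heat(self, on):
        self.heater_on = on

t = FakeToaster()
final = toast(t.read, t.heat, target=0.6)
```

The whole "caring" part collapses into a target value and a comparison, which is rather the point.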

Most of the problem with appliances isn't the lack of sentience, but

A) that people want them cheap (unless they cross into the domain of conspicuous consumption), and that's also because

B) we don't actually care that much about perfection. It just has to be good enough, which for most people leaves a huge margin of error below perfection. And also because:

C) those devices are supposed to be about saving time. I don't want to have to spend time getting to know my toaster so that we care about each other.
 
Just to make it clear, though, I'm not saying AI on the whole is a waste of time. There will always be a need for companionship for old people, for example, so if we can produce some robots cheap enough and good enough for that, it's a bloody win.

But it will never be something that needs to be stuck into everything. You don't actually need a sentient toaster. You don't even need sentient robot shopkeepers and clerks like in Star Wars; we have vending machines and terminals for that. You don't need a clunky C-3PO going around with you to act as a translator (which really is most of what a protocol droid was supposed to do); we'll just use a phone app for that, thank you very much. Etc.
 
I don't want to have to spend time getting to know my toaster so that we care about each other.

You made him sad.

 
C) those devices are supposed to be about saving time. I don't want to have to spend time getting to know my toaster so that we care about each other.

If I have to spend time getting to know my toaster and caring about it, that's bad product design.

What I want from a toaster is something that spends time getting to know and care about me, without me having to pay any attention to it at all. I'm a human being. I'm going to put my energy into human relationships. I absolutely require that my robot servants demand as little attention from me as possible.
 
