Is Stephen Hawking paranoid?

No. You can build an AI today with insect-level brain equivalence.

Only in a simulated or controlled environment. You can't build an AI right now that would be able to function in the chaotic real world the way an insect can. The closest we have come is the autopilot on an aircraft. (Not sure if you would consider that an "AI.") But it is only effective at "normal" altitudes and cruising speeds, and in clear weather. So it's still a "controlled" environment.
 
The most likely threat is that it would attempt to do what humans originally designed it to do, in ways humans never intended -- and it would be incapable of understanding the side effects of its actions. The best illustration of what I am talking about is the Paperclip maximizer.

Oh, I see what you are talking about:

An “intelligence explosion” is a theoretical scenario in which an intelligent agent analyzes the processes that produce its intelligence, improves upon them, and creates a successor which does the same. This process repeats in a positive feedback loop – each successive agent more intelligent than the last and thus more able to increase the intelligence of its successor – until some limit is reached. This limit is conjectured to be much, much higher than human intelligence.

If you limit yourself to Computational Theory then this is a pretty good argument. But, CT doesn't work in this context.
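
Just to make the quoted idea concrete, here is a toy sketch of that feedback loop (my own illustration, not from the quoted source): each generation improves its successor in proportion to its own intelligence, with the gain tapering off toward an assumed ceiling. The growth rate and limit are arbitrary numbers.

```python
# Toy model of the recursive self-improvement loop quoted above.
# The gain and the ceiling are arbitrary assumptions for illustration.

def intelligence_explosion(start=1.0, limit=1000.0, gain=0.5, generations=50):
    """Each agent builds a successor whose improvement is proportional to
    its own intelligence, tapering off as a conjectured limit is reached."""
    agent = start
    history = [agent]
    for _ in range(generations):
        improvement = gain * agent * (1 - agent / limit)
        agent += improvement
        history.append(agent)
    return history

print(intelligence_explosion()[:10])  # slow start, rapid takeoff, then plateau
```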
 
Only in a simulated or controlled environment. You can't build an AI right now that would be able to function in the chaotic real world the way an insect can.
I'm curious again. What part of an insect's brain could not be reproduced today?

The closest we have come is the autopilot on an aircraft. (Not sure if you would consider that an "AI.") But it is only effective at "normal" altitudes and cruising speeds, and in clear weather. So it's still a "controlled" environment.
No, an autopilot isn't an AI; you can make one with nothing more than mechanical feedback. Auto-navigation can be more sophisticated depending on what you are doing, but flying to a GPS coordinate is still not AI. I'm wondering, though, why you feel this is the closest we have come to an insect.
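
For what it's worth, here is roughly what I mean by "nothing more than feedback": a minimal sketch of a proportional controller holding a target altitude. The toy aircraft physics and the gain are made-up numbers; the point is only that no learning or "intelligence" is involved.

```python
# Minimal proportional-feedback "autopilot" holding a target altitude.
# The plant model and gain are invented for illustration.

TARGET_ALT = 10000.0   # target altitude, feet
KP = 0.05              # proportional gain (arbitrary)

altitude = 9500.0      # current altitude, feet

for step in range(100):
    error = TARGET_ALT - altitude   # how far off we are
    climb = KP * error              # command proportional to the error
    altitude += climb               # very crude aircraft "physics"

print(f"altitude after 100 steps: {altitude:.1f} ft")
```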
 
Oh, I see what you are talking about:

An “intelligence explosion” is a theoretical scenario in which an intelligent agent analyzes the processes that produce its intelligence, improves upon them, and creates a successor which does the same. This process repeats in a positive feedback loop – each successive agent more intelligent than the last and thus more able to increase the intelligence of its successor – until some limit is reached. This limit is conjectured to be much, much higher than human intelligence.
Actually, "intelligence explosion" is NOT what I am talking about, and while my previous link does mention it, it also specifically says that intelligence explosion is not necessary for a relatively stupid AI to become dangerous:
A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn't require very high general intelligence.

In short, my biggest concern is "artificial stupidity" -- narrow-goal AIs are given (by humans) too much power, and then proceed to execute these goals with no concern for unanticipated side effects. No intelligence explosion needed.
 
In short, my biggest concern is "artificial stupidity" -- narrow-goal AIs are given (by humans) too much power, and then proceed to execute these goals with no concern for unanticipated side effects. No intelligence explosion needed.

Like the classic "computer starts nuclear war" scenario?
 
Out of curiosity, why do you think AI needs goals, desires, or other high-level interests to be dangerous?
No, the opposite. They are potentially dangerous when they have no desires or goals. Furthermore, those desires must be human-centric as well as computer-centric (self), or else decisions can be made that are harmful to people and/or to the AI itself. Those bad decisions can have delayed effects, so the AI will need to predict the future (just as humans do) to foresee bad things happening long after a decision is made.

I don't think you've justified this prerequisite. Strictly speaking, it just doesn't seem necessary, given that AI with little more intelligence than an insect has literally taught itself how to perform sophisticated tasks such as walking upright, attacking an opponent, and defending against and avoiding attacks. We've also seen how unintelligent processes can find innovative solutions to engineering problems, such as designing efficient wind turbines. (Detailed description here.)

Your other objection is that a machine may be a bit too cavalier about destroying itself, but I don't think you've justified that presumption either. Some of the AI in the link above, particularly this video, independently discover how to avoid being hit by an opponent's attack, without being told whether or how to do that in the first place. What's interesting is that the AI doesn't "know" what is happening; the fitness function for these systems has the simple objective of maximizing wins, and the genetic process which constructs the AI's neural net tends towards configurations that cause the creature to veer away from oncoming opponent attacks.
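
If it helps, here is a bare-bones sketch of the kind of fitness-driven search I'm describing. The fitness function and mutation scheme below are stand-ins I made up; in the linked videos the objective would be something like "maximize wins" or turbine efficiency, but the shape of the process is the same: mutate, score, keep the better candidate.

```python
import random

# Bare-bones evolutionary search: mutate a candidate, keep it if it scores
# higher.  The fitness function here is a made-up stand-in.

def fitness(params):
    target = [0.3, -1.2, 0.8]          # hypothetical "good" configuration
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, scale=0.1):
    return [p + random.gauss(0, scale) for p in params]

best = [random.uniform(-2, 2) for _ in range(3)]
for generation in range(500):
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate               # no understanding, just selection

print(best, fitness(best))
```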


Hawking's particular worry regards AI with human-like intelligence. Given plausible developments in the field, I believe very sophisticated AI will approach human levels of intelligence within my lifetime, for reasons described in another thread.
What do current AI machines have to say about their own intelligence? Can they explain (intelligently) why they make certain decisions and not others? Do they describe themselves as self-serving, or do they talk about serving mankind? Do they even like people at all, or do we need to start worrying about that aspect right away?
 
I'm curious again. What part of an insect's brain could not be reproduced today?
Most of it in silico, and any of it in real time. Given a well-characterized individual neuron or small circuit, we can simulate it (albeit not in real time), but very little of the fruit fly (easily the most studied insect) has been characterized to such an extent. If you see anyone claiming to have made a system which "reproduces" the insect brain, rest assured they have had to make some enormous assumptions and oversimplifications to get there.
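
To give a sense of what "simulate a neuron" involves, here is a deliberately crude leaky integrate-and-fire sketch with made-up parameters. The well-characterized circuits I'm referring to use conductance-based (Hodgkin-Huxley style) models with far more fitted parameters than this.

```python
# Crude leaky integrate-and-fire neuron with invented parameters.
# Real characterizations use conductance-based models with many
# experimentally fitted parameters; this only shows the shape of the loop.

dt = 0.1          # time step, ms
tau = 10.0        # membrane time constant, ms
v_rest = -65.0    # resting potential, mV
v_thresh = -50.0  # spike threshold, mV
v_reset = -70.0   # reset potential, mV
i_input = 2.0     # constant input drive (arbitrary units)

v = v_rest
spike_times = []
for step in range(10000):                    # 1 second of simulated time
    v += (-(v - v_rest) + i_input * tau) / tau * dt
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```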

Probably the best characterized system is the lobster stomatogastric ganglion, a set of three neurons which together (with another dozen neurons or so modulating it) produce a very specific pattern of muscular contraction that the lobster uses for digestion. Studying this system has been the life's work of Eve Marder. Her lab's papers on the complexities of parameter fitting and animal-to-animal variability ought to be required reading for every two-bit hack at IBM who gets a Neuron model running on his supercomputer and thinks "oh i can brain now."

tl;dr - insect brain no, lobster guts maybe.
 
Actually, "intelligence explosion" is NOT what I am talking about, and while my previous link does mention it, it also specifically says that intelligence explosion is not necessary for a relatively stupid AI to become dangerous:
Well, the intelligence explosion theory is based on a misapplication of computational theory. So, we can dismiss that.

In short, my biggest concern is "artificial stupidity" -- narrow-goal AIs are given (by humans) too much power, and then proceed to execute these goals with no concern for unanticipated side effects. No intelligence explosion needed.
I'm trying to figure this one out. Is this like claiming that you could have swarms of intelligent bacteria or nanites or some type of distributed intelligence? I assume there must be more to this, because I'm not seeing how something like a robotic floor vacuum becomes a threat.
 
but very little of the fruit fly (easily the most studied insect) has been characterized to such an extent. If you see anyone claiming to have made a system which "reproduces" the insect brain, rest assured they have had to make some enormous assumptions and oversimplifications to get there.
I'm not following you. What function of a fruit fly's brain cannot be duplicated now?
 
Perhaps you could be less subtle. I still don't know what you are referring to.
OK, here is a hypothetical example. A sewage treatment plant separates incoming material into solid waste, which is then used as fertilizer, and water, which must meet certain standards of cleanliness: no more than X ppm of this or that, no more than Y bacteria per unit volume -- because bringing either truly to zero is an impossibility. Inevitably the plant also produces some waste that is neither water nor fertilizer.

Let's say the plant is under the control of an AI, and the goal of this AI is to minimize the waste -- to convert as much of the incoming stream into fertilizer as possible. This is actually fairly tricky, because that stream is not constant -- its composition changes all the time, and the microbial cultures used in the treatment vats must change accordingly, and in very non-linear ways.

Let's also say this AI is free to experiment with said microbial cultures to optimize fertilizer output, as long as the constraints on the water output are met; if it is not free to do so, then there is not much point in having an AI in the first place. Eventually the AI notices that if the outflowing water contains a certain bacterial strain (still within prescribed limits), a few hours later there is a significant increase in organic matter in the incoming stream. Faithful to its "maximize fertilizer output" goal, the AI makes sure the outflowing water is treated accordingly.

And the city experiences an outbreak of previously unknown intestinal infection.
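
Here is a toy sketch of that failure mode, with all the names and numbers invented: the optimizer is scored only on fertilizer output and checks only the constraints it was explicitly given.

```python
# Toy version of the treatment-plant scenario.  "Strain X" stands in for
# the unanticipated feedback loop described above; the limit and the
# output model are invented numbers.

PPM_LIMIT = 50                      # permitted bacterial load in outflow

def fertilizer_output(dose_of_strain_x):
    # Hypothetical effect: releasing strain X (within limits) increases
    # organic matter in the incoming stream hours later, so output rises.
    return 100 + 3 * dose_of_strain_x

def within_constraints(dose_of_strain_x):
    return dose_of_strain_x <= PPM_LIMIT    # the only check it was given

best_dose, best_yield = 0, fertilizer_output(0)
for dose in range(PPM_LIMIT + 1):
    if within_constraints(dose) and fertilizer_output(dose) > best_yield:
        best_dose, best_yield = dose, fertilizer_output(dose)

# It settles on the maximum permitted dose of strain X; "people downstream
# get sick" appears nowhere in its objective.
print(best_dose, best_yield)
```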
 
^ I don't quite understand this scenario. Is this "previously unknown" bacterium simply one that's not on the AI's list? Or one that's been engineered in a lab or something?

Either way, the default action must be to make the safety of the water paramount when detecting the presence of an unknown pathogen.

Or perhaps I've totally misunderstood.
 
^ I don't quite understand this scenario. Is this "previously unknown" bacterium simply one that's not on the AI's list?
Yes, that.
Either way, the default action must be to make the safety of the water paramount when detecting the presence of an unknown pathogen.
That's the problem -- how does the AI know it is a pathogen? As far as it is concerned, it is just a new microorganism which has an unexpected and beneficial (from the AI's viewpoint) effect.
 
Yes, that.

That's the problem -- how does the AI know it is a pathogen? As far as it is concerned, it is just a new microorganism which has an unexpected and beneficial (from the AI's viewpoint) effect.

It doesn't know; therefore it doesn't allow that one through until it appears on the list - the blindingly obvious default action. In fact, it rings a bell back at HQ that says "unknown bacterium, please advise".

What you're describing sounds more like crappy programming and testing than any danger inherent in AI.
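
Something like this default-deny check is all I mean (the strain names and the alert hook are hypothetical):

```python
# Sketch of the default-deny safeguard described above.  Strain names and
# the alert mechanism are hypothetical.

APPROVED_STRAINS = {"strain A (approved)", "strain B (approved)"}

def may_release(detected_strains, alert):
    """Allow release only if every detected organism is on the approved
    list; anything unknown blocks the outflow and pages a human."""
    unknown = [s for s in detected_strains if s not in APPROVED_STRAINS]
    if unknown:
        alert(f"unknown organism(s) detected: {unknown}, please advise")
        return False
    return True

# Example: an unlisted strain shows up in the outflow sample.
may_release({"strain A (approved)", "strain X-17"}, alert=print)
```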
 
It doesn't know; therefore it doesn't allow that one through until it appears on the list - the blindingly obvious default action. In fact, it rings a bell back at HQ that says "unknown bacterium, please advise".

What you're describing sounds more like crappy programming and testing than any danger inherent in AI.
I am a programmer, and I find the highlighted bit distressingly likely. That's why I called it "artificial stupidity".
 
I am a programmer, and I find the highlighted bit distressingly likely. That's why I called it "artificial stupidity".

I was a programmer for 20+ years and know what you mean ;) But default safeguards are really not that difficult to program, and the more safety-critical the application, the tougher the specification and testing should be.
 
