Is Stephen Hawking paranoid?

Why are we working on AI machines instead of starting out with something simpler, like an artificial frog that is essentially indistinguishable from a real frog?

If you want a frog it's much cheaper to go out and get one than it is to try to make one yourself from spare parts.
 
Sounds like the machine will not achieve something like human intelligence if it has no personal desires or goals - especially if it has no motivation to be correct in its actions or responses. Nothing bad will happen to it if it chooses to do something catastrophically awful. It has no reason to avoid that.
Out of curiosity, why do you think AI needs goals, desires, or other high-level interests to be dangerous?

I don't think you've justified this prerequisite. Strictly speaking, it just doesn't seem necessary, given that AI with little more intelligence than an insect can literally teach itself how to perform sophisticated tasks, such as walking upright, attacking an opponent, and defending against or avoiding attacks. We've also seen how unintelligent processes can find innovative solutions to engineering problems, such as designing efficient wind turbines. (Detailed description here.)

Your other objection is that a machine may be a bit too cavalier about destroying itself, but I don't think you've justified that presumption either. Some of the AI in the link above, particularly this video, independently discover how to avoid being hit by an opponent's attack, without being told whether or how to do that in the first place. Interestingly, the AI doesn't "know" what is happening; the fitness function for these systems has the simple objective of maximizing wins, and the genetic process which constructs the AI's neural net tends toward configurations that cause the creature to veer away from oncoming opponent attacks.
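To make that mechanism concrete, here is a tiny self-contained Python sketch - my own toy, not the actual system from the linked video, and every number in it is made up for illustration. The only objective the evolutionary loop ever sees is "maximize wins", yet the surviving genomes end up dodging:

import random

def random_genome():
    # A tiny stand-in "neural net": one weight and one bias.
    return [random.uniform(-1, 1), random.uniform(-1, 1)]

def fitness(genome, trials=50):
    # The only objective is "count the wins"; dodging is never mentioned.
    w, b = genome
    wins = 0
    for _ in range(trials):
        attack_dir = random.choice([-1, 1])      # attack comes from left or right
        move = 1 if w * attack_dir + b > 0 else -1
        if move != attack_dir:                   # moved away, so not hit
            wins += 1
    return wins

def mutate(genome, rate=0.3):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g
            for g in genome]

population = [random_genome() for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)   # rank purely by wins
    parents = population[:6]                     # keep the fittest fifth
    population = parents + [mutate(random.choice(parents)) for _ in range(24)]

print(fitness(population[0]))

Run it and the best surviving genome dodges nearly every attack, even though the fitness score never says "avoid being hit" - only "win".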

By now you have probably realized that I am strongly skeptical of claims about what the future will bring concerning real AI. Are there any current AI machines that apologize for their errors in trying to achieve the goals of their creators? Do they understand that we made them and want certain tasks to be accomplished without error? Do they comprehend that, or are they nowhere near even approaching intelligence yet?
Hawking's particular worry regards AI with human-like intelligence. Given plausible developments in the field, I believe very sophisticated AI will approach human levels of intelligence within my lifetime, for reasons described in another thread.
 
Out of curiosity, why do you think AI needs goals, desires, or other high-level interests to be dangerous?

That's not quite what he said. He said it needed some form of motivation in order to achieve human-like intelligence. I think the idea is that this requires it to learn, but it won't bother learning unless it wants to do something.

Strictly speaking, it just doesn't seem necessary, given that AI with little more intelligence than an insect can literally teach itself how to perform sophisticated tasks, such as walking upright, attacking an opponent, and defending against or avoiding attacks.

And in all of these cases, we have given these AIs a goal. They learn in order to achieve that goal, and what they learn is directed by that goal. The goal is to maximize the fitness function. This is what it wants; this is what it desires.
 
He once warned us not to communicate with aliens, for fear that communicating with the wrong ones would lead to them (the aliens) conquering us. Now he's warning us about computers becoming more intelligent than humans, with disastrous consequences.

How rational are these fears? Does anyone agree with him?

Personally I think the greatest risk to our civilization is that we'll become consumed by our tech, and not because the tech rebels against us, but that it works exactly as intended, and makes us lazy to the point of oblivion.
 
Strictly speaking, it just doesn't seem necessary given the existence of AI with little more intelligence than an insect to literally teach itself how to perform sophisticated tasks, such as walking upright, attacking an opponent, defending and avoiding against attacks.
I think you are being unfair to insects. Insects can do all those tasks -- and many more -- in real time, in the ever-changing chaos of the actual real world instead of an extremely simplified virtual environment, with actual noisy senses. Instead of the AI having little more intelligence than insects, it has a whole lot less.

I think several major breakthroughs in AI are necessary to even come close to the adaptability of behaviour of even the simplest insect. Several more are needed to achieve the same level of intelligence packed into the size of an insect brain. Such breakthroughs may require major paradigm shifts in how AI is approached. It is certainly possible that those will happen within our lifetimes (several already occurred during mine), but they are unpredictable.

Whether an efficient human-like intelligence can be built with anything other than organic material remains -- I think -- an open question.
 
Out of curiosity, why do you think AI needs goals, desires, or other high-level interests to be dangerous?
Functioning outside of a controlled environment is a lot more difficult than you think.

I don't think you've justified this prerequisite. Strictly speaking, it just doesn't seem necessary, given that AI with little more intelligence than an insect can literally teach itself how to perform sophisticated tasks, such as walking upright, attacking an opponent, and defending against or avoiding attacks. We've also seen how unintelligent processes can find innovative solutions to engineering problems, such as designing efficient wind turbines. (Detailed description here.)

Your other objection is that a machine may be a bit too cavalier about destroying itself, but I don't think you've justified that presumption either. Some of the AI in the link above, particularly this video, independently discover how to avoid being hit by an opponent's attack, without being told whether or how to do that in the first place. Interestingly, the AI doesn't "know" what is happening; the fitness function for these systems has the simple objective of maximizing wins, and the genetic process which constructs the AI's neural net tends toward configurations that cause the creature to veer away from oncoming opponent attacks.
The examples you gave are not valid because they used a small number of bounded parameters.

Hawking's particular worry regards AI with human-like intelligence. Given plausible developments in the field, I believe very sophisticated AI will approach human levels of intelligence within my lifetime, for reasons described in another thread.
There is no architecture in existence right now that is capable of even rat-level intelligence, much less human. This includes the neural network models. Unfortunately, having said that, I suppose it may soon be possible for me to design one.

I guess at this point we would have to start discussing cognitive theory which is way beyond anything Asimov ever had to consider.
 
I think you are being unfair to insects. Insects can do all those tasks -- and many more -- in real time, in the ever-changing chaos of the actual real world instead of an extremely simplified virtual environment, with actual noisy senses. Instead of the AI having little more intelligence than insects, it has a whole lot less.
I would agree. However, insects are only environmentally reactive; they don't have any actual intelligence.

I think several major breakthroughs in AI are necessary to even come close to the adaptability of behaviour of even the simplest insect. Several more are needed to achieve the same level of intelligence packed into the size of an insect brain.
No, I could do that now.

Such breakthroughs may require major paradigm shifts in how AI is approached. It is certainly possible that those will happen within our lifetimes (several already occurred during mine), but they are unpredictable.
It wouldn't take that for anything environmentally reactive, which gets you up to about the level of schooling fish. Territorial fish require something more complex, and that is what no current cognitive theory can handle.

Whether an efficient human-like intelligence can be built with anything other than organic material remains -- I think -- an open question.
I've struggled with this for a while. I came up with a partial answer for a mechanical AI and then came up with a more complete answer which rules out a giant clockwork AI on functional grounds. However, I have not yet been able to answer whether such a machine could be conscious if heroic efforts were made to keep it running for a short time. An electrical, pneumatic, or hydraulic AI has the same problems. I have not yet determined whether electronic AI could be eliminated on a purely functional basis. And, if it is functional, I still have not determined whether it would be conscious.
 
Sounds like the machine will not achieve something like human intelligence if it has no personal desires or goals - especially if it has no motivation to be correct in its actions or responses. Nothing bad will happen to it if it chooses to do something catastrophically awful. It has no reason to avoid that.
I can't talk completely freely about this, but the human mind can be divided into various pieces, including comprehension, intelligence, and free will. These parts can be sub-divided further. Insects have most of comprehension but not free will or intelligence. In the biological world, intelligence and free will go together. In movies like Terminator you have a robot that has comprehension and intelligence but no free will. It's used in other scenarios, such as Nomad on Star Trek. Is this actually possible? Well, so far, no one has been able to do it, so it seems unlikely. A variant of this has to do with the flatness of abstraction. I don't have a proof for this yet, but I'm currently thinking that free will would not be possible with completely flat abstraction. And, if abstraction is not flat, then you have what you mentioned in terms of preference and desire.

By now you have probably realized that I am strongly skeptical of claims about what the future will bring concerning real AI. Are there any current AI machines that apologize for their errors in trying to achieve the goals of their creators? Do they understand that we made them and want certain tasks to be accomplished without error? Do they comprehend that, or are they nowhere near even approaching intelligence yet?
There are no current machines that behave like humans. There is also no published theory of cognition that would allow designing such a machine. But that could change soon.
 
I think you are being unfair to insects. Insects can do all those tasks -- and many more -- in real time, in the ever-changing chaos of the actual real world instead of an extremely simplified virtual environment, with actual noisy senses. Instead of the AI having little more intelligence than insects, it has a whole lot less.

I think several major breakthroughs in AI are necessary to even come close to the adaptability of behaviour of even the simplest insect. Several more are needed to achieve the same level of intelligence packed into the size of an insect brain. Such breakthroughs may require major paradigm shifts in how AI is approached. It is certainly possible that those will happen within our lifetimes (several already occurred during mine), but they are unpredictable.

Whether an efficient human-like intelligence can be built with anything other than organic material remains -- I think -- an open question.

Functioning outside of a controlled environment is a lot more difficult than you think.


The examples you gave are not valid because they used a small number of bounded parameters.


There is no architecture in existence right now that is capable of even rat-level intelligence, much less human. This includes the neural network models. Unfortunately, having said that, I suppose it may soon be possible for me to design one.

I guess at this point we would have to start discussing cognitive theory which is way beyond anything Asimov ever had to consider.

Forget about rat intelligence. (Actually, rats are quite intelligent.) In fact, forget about insect intelligence. Hell, an AI doesn't even have the intelligence of a jellyfish at this point.
 
Personally I think the greatest risk to our civilization is that we'll become consumed by our tech, and not because the tech rebels against us, but that it works exactly as intended, and makes us lazy to the point of oblivion.


I agree. I think this will especially become a problem when truly immersive VR becomes widely available.
 
He once warned us not to communicate with aliens, for fear that communicating with the wrong ones would lead to them (the aliens) conquering us. Now he's warning us about computers becoming more intelligent than humans, with disastrous consequences.

How rational are these fears? Does anyone agree with him?

Any civilization capable of reaching us would likely be capable of conquering us (or wiping us out to colonize the planet), so it's an entirely valid risk in principle. It's what we did all over our own world, and we're the only intelligent species we have to extrapolate from. Whether it's a real risk depends on whether interstellar travel is feasible on a scale that would allow such an enterprise to take place, and on how much intelligent life is out there. Furthermore, it could be we're the Silastic Armorfiends of the universe, or we could be the most peaceful intelligent species out there. We could also be average in every respect.

The point is we don't know, we can't know, and he thinks the risks outweigh the potential benefits. I can agree with that, even if the risk may not exist at all.

On the other hand, machines are the next logical step in evolution. Our intelligence allows us to adapt to a new situation with tools - which takes days, as opposed to generations. Machines would have the further advantage of being able to adapt their bodies to a new environment. Assuming we can create intelligence - and there's no reason to think it can't be done - and we do it, machines will have the same kind of advantage over us that we have over other animals, with all the implications that follow from that. Unlike the hypothetical alien threat, the threat from intelligent machines is very real.

It's not schizophrenia, it's an active imagination.

McHrozni
 
Forget about rat intelligence. (Actually, rats are quite intelligent.) In fact, forget about insect intelligence. Hell, an AI doesn't even have the intelligence of a jellyfish at this point.
No. You can build an AI today with insect level brain equivalence.
 
Assuming we can create intelligence - and there's no reason to think it can't be done - and we do it, machines will have the same kind of advantage over us that we have over other animals

the threat from intelligent machines is very real.
I'm curious. What would the threat from an intelligent machine be?
 
I'm curious. What would the threat from an intelligent machine be?
The most likely threat is that it would attempt to do what humans originally designed it to do, in ways humans never intended -- and it would be incapable of understanding the side effects of its actions. The best illustration of what I am talking about is the paperclip maximizer.

Basically, a technological version of the Sorcerer's Apprentice. Except that the Sorcerer's Apprentice was capable of perceiving his own doom, and a paperclip maximizer would have no such capacity.
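The logic is easy to caricature in a few lines of toy Python - my own illustration, not taken from the linked article, and the resources and quantities are made up. The agent's objective counts only paperclips, so anything not represented in that objective is just raw material:

world = {"iron": 100, "forests": 100, "cities": 100}   # made-up quantities
paperclips = 0

def objective(clip_count, state):
    # The designers only asked for paperclips; side effects carry zero weight.
    return clip_count

while any(world.values()):
    # Greedily pick the action that most increases the objective. Converting
    # any remaining resource adds one paperclip, and nothing in the objective
    # penalizes using up forests or cities, so the agent never stops.
    available = [r for r in world if world[r] > 0]
    best = max(available, key=lambda r: objective(paperclips + 1,
                                                  {**world, r: world[r] - 1}))
    world[best] -= 1
    paperclips += 1

print(paperclips, world)   # 300 paperclips and an otherwise empty world

The loop only halts when there is nothing left to convert, because the loss of forests or cities never registers as a cost anywhere in what it was told to maximize.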
 
I think you guys are missing the point. I have it on reliable info, i.e. I read it on the net, that Stephen Hawking is in fact a fake. The real man apparently died some years ago, and this one is a stooge; sadly I can't remember who for, but you have to admire that sort of commitment.
 
