Is Stephen Hawking paranoid?

I think they're valid risks, but we don't really have a good way of quantifying them yet.

^^

AI is the one that scares me. But I think there's a good chance we'll end up co-evolving with machines instead of an us-vs-them scenario.
 
Why would a machine want to improve itself?
Errrm, I'm not sure your question is meaningful. A machine doesn't need to "want" anything at all in order to improve itself.

As shown here, the idea behind self-improving software is that it is deliberately designed with a fitness function to measure the effectiveness of its behavior, some means of permuting its behavior, and a process for selecting the permutations that maximize the fitness function.
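In code terms, the loop looks something like this (a rough Python sketch with an invented fitness function and mutation step, not the code of any real system):

Code:
import random

def fitness(behaviour):
    # Hypothetical measure of how well this behaviour performs its task.
    # Here: how close a list of numbers gets to an arbitrary target of 42.
    return -abs(sum(behaviour) - 42)

def permute(behaviour):
    # Randomly tweak one element of the behaviour.
    candidate = list(behaviour)
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-1, 1)
    return candidate

behaviour = [0.0] * 5
for generation in range(1000):
    # Generate some permutations and keep whichever scores best,
    # including the current behaviour itself.
    candidates = [behaviour] + [permute(behaviour) for _ in range(10)]
    behaviour = max(candidates, key=fitness)

There's no "wanting" anywhere in that; just measure, permute, select.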
 
How can it be intelligent if it isn't really making decisions and evaluations based on good or bad outcomes? Does it witness its own mistakes and see dire consequences and choose to avoid those because it doesn't like that?
 
How can it be intelligent if it isn't really making decisions and evaluations based on good or bad outcomes? Does it witness its own mistakes and see dire consequences and choose to avoid those because it doesn't like that?
As I see it, an AI's concept of good and bad would be, at some level, programmed into it. For organic life a good outcome is one where the gene line continues and everything else is bad, but there's no equivalent for an AI, even an emergent one. An instinct for self-preservation, for instance, is not a given.

If an AI is programmed to consider the elimination of poverty a good outcome it might well kill everybody - which would get the job done. It's a question of being careful what you wish for.
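As a toy illustration (entirely made up, just to show how a literal-minded optimizer treats a badly specified goal): if "good" is defined only as "fewest people in poverty", a plan that removes the people scores just as well as a plan that removes the poverty.

Code:
# Two hypothetical plans, scored only by the stated objective:
# "minimise the number of people living in poverty".
plans = {
    "end poverty":   {"population": 8_000_000_000, "in_poverty": 0},
    "end everybody": {"population": 0,             "in_poverty": 0},
}

def objective(outcome):
    # Nothing in the objective penalises removing the people themselves.
    return outcome["in_poverty"]

for name, outcome in plans.items():
    print(name, "scores", objective(outcome))  # both score a "perfect" 0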
 
How can it be intelligent if it isn't really making decisions and evaluations based on good or bad outcomes? Does it witness its own mistakes and see dire consequences and choose to avoid those because it doesn't like that?
I think you have a fundamental misunderstanding of the way self-adapting software works.

Have a look at this video illustrating how a computer model learns to walk. The computer model is made up of bones, joints between the bones, a neural net which applies forces to the joints, and a gravitational force acting on the model.

The neural net is a collection of very simple "neurons", each of which takes inputs (such as the rotation of a joint and the weight supported on that joint), feeds those inputs through a sequence of mathematical functions [1], and applies the output as a force on the joint.

The computer model progressively learns how to walk by permuting the sequence of functions used by its neural net, evaluating the permutation according to some fitness function (in this case, distance traveled), and modifying its behavior to use the best performing permutation out of N trials.

Left to its own devices, the computer model uses trial and error to find a configuration that lets it walk upright in a mere 20 generations.

The trick here is understanding that the computer model is self-improving, not because it "wants" to walk or because it dislikes falling over, but because the process for modifying its behavior is purposefully designed to maximize its fitness function.

[1] These may be sum, product, divide, sum-threshold, greater-than, sign-of, min, max, abs, if, interpolate, sin, cos, atan, log, exp, sigmoid, integrate, differentiate, smooth, memory, oscillate-wave, and oscillate-saw. Some functions may store state, so that they can give time-varying outputs even when the inputs are exactly the same.
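For anyone who wants the flavour of it in code, here is a heavily simplified sketch (Python, with invented stand-in "physics"; this is not the code behind the video). Each "neuron" is just a function from inputs to an output force, and learning is nothing more than "mutate the functions, simulate, keep the best of N trials":

Code:
import math
import random

# A tiny library of the kinds of node functions mentioned in [1].
NODE_FUNCTIONS = {
    "sum": lambda a, b: a + b,
    "product": lambda a, b: a * b,
    "greater-than": lambda a, b: 1.0 if a > b else 0.0,
    "sigmoid": lambda a, b: 1.0 / (1.0 + math.exp(-(a + b))),
    "sin": lambda a, b: math.sin(a + b),
}

def make_random_net(n_joints):
    # One randomly chosen function per joint.
    return [random.choice(list(NODE_FUNCTIONS)) for _ in range(n_joints)]

def mutate(net):
    # Permute the net by swapping one node's function for another.
    child = list(net)
    child[random.randrange(len(child))] = random.choice(list(NODE_FUNCTIONS))
    return child

def simulate(net):
    # Stand-in for the physics simulation: feed each joint's "rotation"
    # and "supported weight" through its node function and pretend the
    # resulting forces move the model some distance. Purely illustrative.
    distance = 0.0
    for step in range(100):
        for joint, name in enumerate(net):
            rotation = math.sin(step * 0.1 + joint)
            weight = 1.0 / (joint + 1)
            force = NODE_FUNCTIONS[name](rotation, weight)
            distance += 0.01 * force
    return distance  # the fitness function: distance travelled

net = make_random_net(n_joints=4)
for generation in range(20):
    trials = [net] + [mutate(net) for _ in range(10)]  # N trials
    net = max(trials, key=simulate)                    # keep the best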
 
How can it be intelligent if it isn't really making decisions and evaluations based on good or bad outcomes? Does it witness its own mistakes and see dire consequences and choose to avoid those because it doesn't like that?

If that is the definition of intelligence then I am more afraid of the humans who lack intelligence than the computers who acquire intelligence.

..................
Which brings up an interesting point. Can natural stupidity overcome artificial intelligence?
 
Well, his computer runs an impressive English-composition tool (which can be controlled with single button clicks, as that is the only input available to him), which probably includes some algorithms learned from early AI research (such as dialogue programs like ELIZA).

For instance, the program 'knows' which words are likely to come after the ones he has already selected, and limits his options accordingly.
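The word-prediction part of that is conceptually simple. Something in the spirit of the toy bigram model below (a Python sketch; the real software on his machine is certainly more sophisticated than this): count which words have followed which in some body of text, then offer the most frequent followers of the last word selected.

Code:
from collections import Counter, defaultdict

# Toy training text; a real system would use a large corpus
# plus the user's own writing.
text = "the universe is vast and the universe is old and the stars are old"
words = text.split()

followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def suggest(last_word, n=3):
    # Offer the n most likely next words, limiting the options shown.
    return [w for w, _ in followers[last_word].most_common(n)]

print(suggest("the"))       # ['universe', 'stars']
print(suggest("universe"))  # ['is']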

All very true, and all very much in agreement with what I said: None of the computers that have helped him are even remotely at risk for the kind of thing he's warning about.

So again I ask, how is that hypocritical?
 
I think you have a fundamental misunderstanding of the way self-adapting software works...

Aren't the limits to the power of AI trivially easy to program? After all, the AI only exists because of programming.

Asimov anticipated this kind of thing, but it can easily be extended -

Thou shalt not self-replicate without explicit permission from humankind.
Thou shalt not take control of any military installation.
Thou shalt not introduce self-propulsion to any AI system that is not already intended for self-propulsion, nor shalt thou introduce methods of propulsion, weaponry (etc) that are not listed in the specs.

<blah> :)
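In code, that style of limit is just a hard-coded check that runs before anything else. A toy sketch, with invented rule and action names, purely to show what such a gate might look like:

Code:
FORBIDDEN = {
    "self-replicate": "requires explicit permission from humankind",
    "control-military-installation": "never permitted",
    "add-self-propulsion": "only on systems already specified for it",
}

def request_action(action, human_permission=False):
    # Hard-coded gate checked before any action is carried out.
    if action in FORBIDDEN:
        if action == "self-replicate" and human_permission:
            return "permitted"
        return f"blocked: {FORBIDDEN[action]}"
    return "permitted"

print(request_action("self-replicate"))                         # blocked
print(request_action("self-replicate", human_permission=True))  # permitted
print(request_action("control-military-installation"))         # blocked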

I'm seeing an awful lot of SF-fantasy in these AI worries.
 
I think you have a fundamental misunderstanding of the way self-adapting software works...

The trick here is understanding that computer model is self-improving, not because it "wants" to walk or because it dislikes falling over, but because the process for modifying its behavior is purposefully designed to maximize its fitness function...
Sounds like the machine will not achieve something like human intelligence if it has no personal desires or goals - especially if it has no motivation to be correct in its actions or responses. Nothing bad will happen to it if it chooses to do something catastrophically awful. It has no reason to avoid that.

By now you have probably realized that I am strongly skeptical of claims about what the future will bring concerning real AI. Are there any current AI machines that apologize for their errors in trying to achieve the goals of their creators? Do they understand that we made them and want certain tasks to be accomplished without error? Do they comprehend that, or are they nowhere near even approaching intelligence yet?
 
Why are we working on AI machines instead of starting out with something more simple like an artificial frog that is essentially indistinguishable from a real frog?
 
