
Is Stephen Hawking paranoid?

Cainkane1
He once warned us not to communicate with aliens, for fear that contacting the wrong ones would lead to them (the aliens) conquering us. Now he's warning us about computers becoming more intelligent than humans, with disastrous consequences.

How rational are these fears? Does anyone agree with him?
 
Source? Because it would strike me as a tad hypocritical, given he would have died years ago if not for computers.
 
Scientists think intelligent AI might not be that far off, so these concerns are starting to become trendy again. It's not just Hawking.
 
He once warned us not to communicae with aliens for rear as communicating to the wrong ones would lead to these (the aliens) conquering us.
(bolding mine)

Damn, and there was me not realising he was talking about Galiens.

Now he's warning us about computers becoming more intelligent than humans, with disastrous consequences.

How rational are these fears? Does anyone agree with him?

The AI point is much more interesting, and I think it's valid.
 
We might need to develop AI computers to help us in dealing with bad aliens. We might also need aliens to help us deal with bad AI that we developed.
 
Thinking long-term (and by "long-term" I mean looooong), I think these are valid concerns.

It's better to think about these matters ahead of time.

Still sci-fi for now, though.
 
Now, here is a link: http://www.techworld.com/news/opera...ill-overtake-humans-within-100-years-3611397/

Speaking at the Zeitgeist 2015 conference in London, the internationally renowned cosmologist and Cambridge University professor said: “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”

It does not take much effort to find such a link. How about looking for it and posting one?
 
How is that hypocritical? None of the computers that have helped him are even remotely at risk for the kind of thing he's warning about.

Well, his computer runs an impressive English composition tool (controlled with single button clicks, as that is the only input available to him), which probably includes some algorithms from early AI research (such as dialogue programs like ELIZA).

For instance, the program 'knows' which words are likely to come after the ones he has already selected, and limits his options accordingly.
 
He once warned us not to communicate with aliens, for fear that contacting the wrong ones would lead to them (the aliens) conquering us. Now he's warning us about computers becoming more intelligent than humans, with disastrous consequences.

How rational are these fears? Does anyone agree with him?
I can't really comment on the dangers of trying to contact aliens, but his comments on superintelligent computers appear rational given plausible future developments in computer science and artificial intelligence.

Hawking's argument is straightforward: a sufficiently intelligent AI could, in principle, improve upon its own design to produce a more powerful AI. After a few hundred thousand generations of incremental improvement, the resulting AI could reach human-like levels of intelligence, and would almost certainly go on to eclipse us.

At present, we have lots of techniques for self-improving artificial intelligence; the resulting agents have brains less sophisticated than an insect's, but they are quite impressive to see in action. Here are two demonstrations:

Orbs learn how to fight:


Genetic algorithms learn how to fight:


Not sure if you can appreciate what is happening here without a technical background, but these videos depict two AI systems in an arms race. I can give you a technical description if you like, but the short and sweet version is that the "creatures" are created from a string of characters which encodes a neural net; the neural net takes inputs from sensors (for example, a left and a right eye sensor); the outputs are passed along to motors which drive the creature around.
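
To make that concrete, here is a minimal sketch of the decoding step in Python (my own toy illustration, not the actual code behind the videos): a "dna" string becomes the weights of a tiny neural net that maps two eye sensors to two motor speeds.

[code]
# Toy illustration (not the videos' actual code): decode a "dna" string
# into neural-net weights, then run one forward pass from sensors to motors.
import math
import random

DNA_LENGTH = 4  # one character per weight: 2 sensors x 2 motors

def decode(dna):
    """Map each character (code 0-255) to a weight in [-1, 1]."""
    return [ord(c) / 127.5 - 1.0 for c in dna]

def act(dna, left_eye, right_eye):
    """Forward pass: sensor readings -> weighted sums -> motor speeds."""
    w = decode(dna)
    left_motor = math.tanh(w[0] * left_eye + w[1] * right_eye)
    right_motor = math.tanh(w[2] * left_eye + w[3] * right_eye)
    return left_motor, right_motor

dna = "".join(chr(random.randrange(256)) for _ in range(DNA_LENGTH))
print(act(dna, left_eye=0.8, right_eye=0.1))
[/code]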

Because the creatures' neural nets are encoded as strings of "DNA", it's possible to create a few dozen randomly generated strings of DNA, test how well each string performs against the others, select the top 20% best-performing creatures, and produce new creatures by interpolating genetic data from random pairs of survivors. By repeating this process thousands of times, the genetic algorithm incrementally produces better creatures, virtually programming itself.
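
In Python the whole loop fits in a few lines. This is a hedged sketch: the fitness function below is a stand-in (a real system would score each creature by how it performs in the arena), but the selection and breeding steps follow the description above.

[code]
# Sketch of the genetic loop described above. The fitness function is a
# stand-in; everything else follows the text: random population, keep the
# top 20%, breed new dna by mixing random pairs of survivors.
import random

POP_SIZE, DNA_LENGTH, GENERATIONS = 40, 16, 1000

def random_dna():
    return "".join(chr(random.randrange(32, 127)) for _ in range(DNA_LENGTH))

def fitness(dna):
    # Stand-in score; a real system would run the creature in the arena.
    return sum(ord(c) for c in dna)

def crossover(a, b):
    # Uniform crossover: take each character from one parent or the other.
    return "".join(random.choice(pair) for pair in zip(a, b))

population = [random_dna() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 5]  # top 20%
    population = survivors + [
        crossover(*random.sample(survivors, 2))
        for _ in range(POP_SIZE - len(survivors))
    ]
print(max(fitness(d) for d in population))
[/code]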

This is promising, because it means that machines can teach themselves to perform tasks which would be difficult for humans to program directly. For example, there is a pretty impressive demonstration of a genetic algorithm being used to design a wind turbine, as well as demonstrations of machines learning how to walk and how to jump without any human involvement. In one demonstration, a robot automatically adjusts its walking algorithm so that it learns to walk better and better over time. Self-driving, and likely self-improving, vehicles will probably become commonplace within my lifetime.
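
As a toy example of the walking case, here is one simple scheme (my own illustration, not the algorithm from the demo) by which a machine can improve its own gait with no human in the loop: randomly perturb the current walking parameters and keep the change only when the simulated walk gets better.

[code]
# Hill-climbing sketch of a self-tuning gait (my own illustration).
import random

def walk_distance(params):
    # Stand-in for a physics simulation or a real trial run.
    stride, lift, cadence = params
    return stride * cadence - abs(lift - 0.5)

params = [0.1, 0.1, 0.1]  # stride, lift, cadence
best = walk_distance(params)
for _ in range(10_000):
    # Perturb each parameter slightly, clamped to [0, 1].
    trial = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in params]
    score = walk_distance(trial)
    if score > best:  # keep only improvements
        params, best = trial, score
print(params, best)
[/code]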

There is a really scary side here: nations all over the world, especially the US, are interested in self-learning, self-adapting machines for military purposes. I can imagine an algorithm which designs machines for battle, constructs them using 3D printers, and produces machines that learn and adapt to obstacles on the battlefield.

Based on plausible advancements in technology and AI, it is easy to imagine self-learning military machines which look more like this robot, which can run at 46 km/h, or this stair-climbing robot. Imagine two countries at war with one another, their AI armies locked in an arms race similar to the first two videos shown above.

I find it easy to imagine how machines like this, even if they do not approach human levels of intelligence, are potentially as dangerous to humanity as nuclear weapons.
 
For instance, the program 'knows' which words are likely to come after the ones he has already selected, and limits his options accordingly.
I don't know the specifics of Hawking's speech prediction software, but I'm willing to bet the software has a lot less to do with artificial intelligence, and a lot more to do with Markov chains or something very similar to it. I can provide a brief description of how that process would work in more detail if requested :)
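
To give a rough idea of what I mean (a minimal sketch of the general technique, not Hawking's actual software): a Markov-style predictor counts which word follows which in some training text, then ranks candidate next words by frequency.

[code]
# Minimal sketch of Markov-chain word prediction (an assumption about the
# general technique, not Hawking's actual software).
from collections import Counter, defaultdict

def train(text):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict(follows, word, k=3):
    """Return up to k of the most likely words to follow `word`."""
    return [w for w, _ in follows[word].most_common(k)]

model = train("the cat sat on the mat and the cat ran off the mat")
print(predict(model, "the"))  # e.g. ['cat', 'mat']
[/code]

A real on-screen keyboard would train on a much larger corpus and condition on more than one previous word, but the counting idea is the same.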
 
Why would a machine want to improve itself? Does improvement make it feel better, or give it the chance to brag to family and friends?
 
I don't know the specifics of Hawking's speech prediction software, but I'm willing to bet the software has a lot less to do with artificial intelligence, and a lot more to do with Markov chains or something very similar to it. I can provide a brief description of how that process would work in more detail if requested :)

Aw, when I first read this I missed that we were discussing the speech prediction software. I was hoping for some insight into how Markov chains inevitably lead to a spatial probability of AI overtaking human intelligence.

Damn...
 
