He once warned us not to communicate with aliens, for fear that contacting the wrong ones would lead to the aliens conquering us. Now he's warning us about computers becoming more intelligent than humans, with disastrous consequences.
How rational are these fears? Does anyone agree with him?
Speaking at the Zeitgeist 2015 conference in London, the internationally renowned cosmologist and Cambridge University professor said: “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”
Well, his computer runs an impressive English composition tool (which can be controlled with single button clicks, as that is the only input available to him), which probably includes some algorithms developed during early AI research (such as dialogue programs like Eliza).
For instance, the program 'knows' which words are likely to come after the ones he has already selected, and limits his options accordingly.
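To make that concrete, here's a minimal sketch of how such a next-word predictor might work. This is my own guess at the idea, not the actual software; the corpus and function names are mine:

```python
from collections import Counter, defaultdict

def build_bigram_counts(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest_next(counts, prev_word, k=3):
    # Offer only the k words most often seen after prev_word.
    return [w for w, _ in counts[prev_word.lower()].most_common(k)]

counts = build_bigram_counts("the cat sat on the mat and the cat ran away")
print(suggest_next(counts, "the"))  # ['cat', 'mat']
```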
I can't really comment on the dangers of trying to contact aliens, but his comments on superintelligent computers appear rational given plausible future developments in computer science and artificial intelligence.
Hawking's argument is straightforward: a sufficiently intelligent AI could, in principle, improve upon its own design and produce a more powerful successor. After a few hundred thousand generations of incremental improvement, the resulting AI could reach human-like levels of intelligence, and the generations after that would almost certainly eclipse humans.
At present, we have many techniques for self-improving artificial intelligence which, while less sophisticated than an insect's brain, are quite impressive to see in action. Here are two demonstrations:
Orbs learn how to fight:
Genetic algorithms learn how to fight:
Not sure if you can appreciate what is happening here without a technical background, but these videos depict two AI systems in an arms race. I can give you a technical description if you like, but the short and sweet version is that the "creatures" are created from a string of characters which encodes a neural net; the neural net takes inputs from sensors (for example, a left and a right eye sensor); the outputs are passed along to motors which drive the creature around.
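In code, that decoding step might look something like the following sketch. The layer sizes and the character-to-weight mapping are my own assumptions, not details taken from the videos:

```python
import math

N_SENSORS, N_MOTORS = 2, 2  # left/right eye -> left/right motor

def decode(genome):
    # Map each character of the "dna" string to a weight in roughly [-1, 1].
    return [(ord(c) - 78) / 50.0 for c in genome]

def act(genome, sensors):
    """One forward pass: sensor readings in, motor speeds out."""
    weights = decode(genome)
    outputs = []
    for m in range(N_MOTORS):
        row = weights[m * N_SENSORS:(m + 1) * N_SENSORS]
        outputs.append(math.tanh(sum(w * x for w, x in zip(row, sensors))))
    return outputs

print(act("aXkQ", [0.9, 0.1]))  # bright left eye, dim right eye
```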
Because each "creature's" neural net is encoded as a string of "dna", it's possible to create a few dozen randomly generated strings of dna, test how well each string performs against the others, select the top 20% best-performing creatures, and produce new creatures by combining genetic data from random pairs of survivors. By repeating this process thousands of times, the algorithm incrementally produces better creatures, virtually programming itself.
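Here is a toy version of that loop. The fitness function is just a placeholder (a real one would score a fight between creatures), and the population sizes and rates are arbitrary choices of mine:

```python
import random

GENOME_LEN, POP_SIZE = 16, 50
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(genome):
    # Placeholder: in the videos this would be a simulated fight.
    return sum(ord(c) for c in genome)

def breed(a, b):
    cut = random.randrange(1, GENOME_LEN)       # single-point crossover
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:                   # occasional mutation
        i = random.randrange(GENOME_LEN)
        child = child[:i] + random.choice(ALPHABET) + child[i + 1:]
    return child

population = ["".join(random.choices(ALPHABET, k=GENOME_LEN))
              for _ in range(POP_SIZE)]
for generation in range(1000):
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 5]
    population = [breed(*random.sample(survivors, 2)) for _ in range(POP_SIZE)]
print(max(population, key=fitness))  # drifts toward all-'z' genomes
```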
This is promising, because it means that machines can teach themselves how to perform tasks which would generally be difficult for humans to program. For example, there is a pretty impressive demonstration of a genetic algorithm being used to design a wind turbine, as well as demonstrations of machines learning how to walk and how to jump without any human involvement. Here, a robot automatically adjusts its walking algorithm, so that it learns to walk better and better over time. Self-driving, and likely self-improving, vehicles will probably become ubiquitous within my lifetime.
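The walking example boils down to simple trial and error. A minimal hill-climbing sketch, where distance_walked() is a stand-in for an actual trial run (I don't know the robot's real algorithm):

```python
import random

def distance_walked(params):
    # Stand-in reward: peaks when every gait parameter is near 0.5.
    return -sum((p - 0.5) ** 2 for p in params)

params = [random.random() for _ in range(4)]  # e.g. stride, lift, sway, rate
best = distance_walked(params)
for trial_num in range(500):
    trial = [p + random.gauss(0, 0.05) for p in params]
    score = distance_walked(trial)
    if score > best:            # keep changes that walk farther
        params, best = trial, score
print(params)                   # converges toward [0.5, 0.5, 0.5, 0.5]
```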
There is a really scary side here: nations all over the world, especially the US, are interested in self-learning, self-adapting machines for military purposes. I can imagine an algorithm which designs machines for use in battle, constructs them using 3D printers, and produces machines which learn and adapt to obstacles on the battlefield.
Based on plausible advancements in technology and AI, it is easy to imagine self-learning military machines that look more like this robot, which can run at 46 km/h, or this stair-climbing robot. Imagine two countries at war with one another, their AI armies locked in an arms race similar to the one in the first two videos shown above.
I find it easy to imagine how machines like this, even if they do not approach human levels of intelligence, are potentially as dangerous to humanity as nuclear weapons.
I don't know the specifics of Hawking's speech prediction software, but I'm willing to bet the software has a lot less to do with artificial intelligence, and a lot more to do with Markov chains or something very similar to them. I can provide a brief description of how that process would work in more detail if requested.
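In the meantime, here's a toy version of the Markov-chain idea, where the next word is drawn at random in proportion to how often it followed the current word in the training text. The training sentence and names are my own, purely for illustration:

```python
import random
from collections import Counter, defaultdict

def train(text):
    """Build a transition table: word -> counts of following words."""
    words = text.split()
    chain = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        chain[prev][nxt] += 1
    return chain

def next_word(chain, word):
    followers = chain[word]
    if not followers:
        return None
    # Sample in proportion to observed frequency.
    return random.choices(list(followers), weights=list(followers.values()))[0]

chain = train("we need to make sure the computers have goals aligned "
              "with ours and we need to act before the computers act")
word = "we"
for _ in range(5):
    word = next_word(chain, word)
    if word is None:
        break
    print(word, end=" ")  # e.g. "need to act before the"
```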
Aw, when I first read this I missed that we were discussing the speech prediction software. I was hoping for some insight into how Markov chains inevitably lead to a spatial probability of AI overtaking human intelligence.