I think they're valid risks, but we don't really have a good way of quantifying them yet.
^^
AI is the one that scares me. But I think there's a good chance we'll end up co-evolving with machines instead of an us-vs-them scenario.
Why would a machine want to improve itself? Does improvement make it feel better, or give it the chance to brag to family and friends?
Errrm, I'm not sure your question is meaningful. A machine doesn't need to "want" anything at all in order to improve itself.
Scientists think intelligent AI might not be that far off, so these concerns are starting to become trendy again. It's not just Hawking.
The range of things that scientists, between them, believe is scarier than you might imagine. Eugenics never quite went away, for instance.
How can it be intelligent if it isn't really making decisions and evaluations based on good or bad outcomes? Does it witness its own mistakes and see dire consequences and choose to avoid those because it doesn't like that?
As I see it, an AI's concept of good and bad would be, at some level, programmed into it. For organic life a good outcome is one where the gene line continues and all others are bad, but there's no equivalent for an AI, even an emergent one. An instinct for self-preservation, for instance, is not a given.
I think you have a fundamental misunderstanding of the way self-adapting software works.
The AI in his wheelchair has been talking in his stead for years.
Well, his computer runs an impressive English composition tool (which can be controlled with single button clicks, as that is the only input available to him), which probably includes some algorithms developed during early AI research (such as dialogue programs like ELIZA).
For instance, the program 'knows' which words are likely to come after the ones he has already selected, and limits his options accordingly.
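To make the idea concrete, a word predictor like that can be as simple as counting which words follow which in some text and ranking the candidates. Here's a toy bigram sketch (the corpus and the suggest() helper are made up for illustration; the actual software on his chair is far more sophisticated):

```python
# Minimal sketch of next-word prediction with a bigram frequency model.
# The corpus and suggest() interface are invented for illustration only.
from collections import Counter, defaultdict

corpus = "the machine does not want anything the machine simply improves itself"

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def suggest(prev_word, k=3):
    """Return the k most frequent next words seen after prev_word."""
    return [w for w, _ in follow_counts[prev_word].most_common(k)]

print(suggest("the"))      # e.g. ['machine']
print(suggest("machine"))  # e.g. ['does', 'simply']
```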
I think it is time to start the Butlerian Jihad.
Did Sarah Connor die in vain?
I think you have a fundamental misunderstanding of the way self-adapting software works...
Sounds like the machine will not achieve something like human intelligence if it has no personal desires or goals - especially if it has no motivation to be correct in its actions or responses. Nothing bad will happen to it if it chooses to do something catastrophically awful. It has no reason to avoid that.
The trick here is understanding that the computer model is self-improving, not because it "wants" to walk or because it dislikes falling over, but because the process for modifying its behavior is purposefully designed to maximize its fitness function...
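To make that concrete, here's a toy hill-climbing loop in that spirit (the "walker" parameters and the fitness function are invented for this sketch, not anyone's real system): parameter sets that score better simply replace the ones that score worse, and that is the entire "drive" to improve.

```python
# Toy illustration of "self-improvement" driven purely by a fitness function:
# nothing here "wants" anything; better-scoring parameters simply survive.
import random

def fitness(params):
    # Hypothetical stand-in for "how far the simulated walker gets before
    # falling over": highest when every parameter is near 0.5.
    return -sum((p - 0.5) ** 2 for p in params)

params = [random.random() for _ in range(4)]  # initial random "gait"

for step in range(1000):
    # Mutate a copy of the current parameters.
    candidate = [p + random.gauss(0, 0.05) for p in params]
    # Keep the mutation only if it scores better on the fitness function.
    if fitness(candidate) > fitness(params):
        params = candidate

print(params)  # typically ends up close to [0.5, 0.5, 0.5, 0.5]
```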
Did Sarah Connor die in vain?
No, I think she died in Mexico.