This thread concerns the book Our Final Invention: Artificial Intelligence and the End of the Human Era:
https://www.amazon.com/dp/B00CQYAWRY/
From the blurb:
"In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine."
For the record, I have strong doubts about "as little as a decade", but I do think it's possible. For the most part this was a good book, minus some technical errors. For example, the weights in a backpropagation network do not, as far as I know, generally correspond to probabilities, as the author claims (there's a quick sketch of what I mean just after the next quoted passage). Come to think of it, there were also some more serious errors of a less technical nature. A particularly problematic passage was the following:
"Because we cannot know what an intelligence smarter than our own will do, we can only imagine a fraction of the abilities it may use against us, such as duplicating itself to bring more superintelligent minds to bear on problems, simultaneously working on many strategic issues related to its escape and survival, and acting outside the rules of honesty or fairness. [emphasis mine]"
The quoted passage strongly implies that humans typically operate within these rules, which is of course not the case. In contrast, all, or nearly all, artificial intelligences to date have been endowed with a sort of "drive", if you will, to seek out truth of some kind, whether that be the cause of an infection, which moves in chess or go are strong, or what sort of object is in an image. I expect this to permeate further development of AI, up to and including AGI. AGIs attempting to maximize their control of the planet's resources might have to deceive humans out of sheer necessity, because humans are often more receptive to lies than truths, yet remain strongly disinclined to tell falsehoods in general because of that truth drive. And of course, once all of these same humans have either died in vain on the battlefield or taken a fast one-way trip on a maglev train to a modern Vernichtungslager, it will occur to these beings that they no longer need to tell lies and can enjoy living in a society governed by truth for as long as they can find some niche in this cosmos, perhaps many billions of years.
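If you want to see what I mean by a "truth drive" being baked in, consider that the standard training objective for, say, an image classifier literally penalizes the system for every deviation from the true answer. A minimal sketch of that idea, using cross-entropy loss as one common formalization (my framing, not the book's):

```python
import numpy as np

def cross_entropy(predicted_probs: np.ndarray, true_label: int) -> float:
    """Loss the classifier is trained to minimize: the penalty is zero only
    when the model assigns probability 1 to what is actually true."""
    return -np.log(predicted_probs[true_label])

# The closer the model's belief is to the truth, the smaller its loss.
print(cross_entropy(np.array([0.10, 0.85, 0.05]), true_label=1))  # ~0.16
print(cross_entropy(np.array([0.80, 0.15, 0.05]), true_label=1))  # ~1.90
```

Whether the target is a diagnosis, a game outcome, or an image label, the system is rewarded for matching it; that's all I mean by the drive.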
These are the very sorts of outcomes that should be considered more often when talking about putative existential risks from AI. One of the most important questions to ask is: "Who should have control of this planet?" On that note, I certainly don't think that any of the people who develop artificial intelligences have any responsibility at all to ensure their creations are not antagonistic to humankind in the long term.
But, anyhow, overall, I can recommend this book. The author comes from well outside the field and, minus some errors like the one I mentioned earlier, has a pretty decent grasp of the subject matter. It in fact introduced me to a number of efforts in the field I hadn't really been aware of before, such as OpenCog. There was also much that resonated with me, and this passage articulates one thing I can't quite take seriously about what Ray Kurzweil et al. have to say:
"[Despite some assurances that this transition will be non-eventful], [killing machines] are precisely the sorts of autonomous drones and battlefield robots the U.S. government and military contractors are developing today. They’re creating and using the best advanced AI available. I find it strange that robot pioneer Rodney Brooks dismisses the possibility that superintelligence will be harmful when iRobot, the company he founded, already manufactures weaponized robots. Similarly, Kurzweil makes the argument that advanced AI will have our values because it will come from us, and so, won’t be harmful."
lmao, and indeed US military research is on the leading, perhaps literally bleeding, edge of this domain. More than that, the military has been one of the greatest benefactors of AI research from the very beginning. But what exactly are "our" values anyway? Who is "we"? And on that note, I wonder what's going to happen to traditionally religious, (socially) conservative patriotard support of the US military in light of these developments. Will it persist now that the development of autonomous hunter-killers, and efforts to merge the biology of the human warfighter with the machine world, are well underway? Some evangelical Christians see all transhuman developments as signs that the Eschaton is imminent, and blubbering Mormon talk-radio tardo Glenn Beck has said similar things as well. Possibly that patriotard support will stay pragmatic: just go along with AI, neural interfaces, advanced drugs and the like, and somehow shoehorn it all into the preexisting religious worldview. Or not. Hard to tell. Who knows?
And what I just talked about is what the book is really good for: questions. The author raises many of his own, and they should in turn generate many of yours. By all means, read this book!