"Our Final Invention" by James Barrat

This thread concerns the book Our Final Invention: Artificial Intelligence and the End of the Human Era:

https://www.amazon.com/dp/B00CQYAWRY/

From the blurb:

"In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine."

For the record, I have strong doubts about "as little as a decade", though I do think it's possible. For the most part this was a good book, minus some technical errors. For example, the weights in a backpropagation network do not, as far as I know, generally correspond to probabilities, as the author claims; they're just unconstrained real numbers that the training process is free to push anywhere.
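Here's a quick sketch of my own (not from the book, and assuming plain NumPy) to illustrate the point: a tiny two-layer sigmoid network trained by backpropagation on XOR data. After a few thousand gradient steps the learned weights are typically negative or well outside [0, 1], so reading them as probabilities simply doesn't work.

import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# weights start as small random reals; nothing constrains them to [0, 1]
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradient of the squared error, layer by layer
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # gradient-descent updates
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

# the trained weights are arbitrary reals (negative, greater than 1, etc.),
# not probabilities
print(np.round(W1, 2))
print(np.round(W2, 2))

And, come to think of it, there were some more serious errors that weren't technical in nature at all. A particularly problematic passage was the following: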

"Because we cannot know what an intelligence smarter than our own will do, we can only imagine a fraction of the abilities it may use against us, such as duplicating itself to bring more superintelligent minds to bear on problems, simultaneously working on many strategic issues related to its escape and survival, and acting outside the rules of honesty or fairness. [emphasis mine]"

There is a strong implication here that humans typically operate within these rules, which is of course not the case. In contrast, all, or nearly all, artificial intelligences to date have been endowed with a sort of "drive", if you will, to seek out truth of some kind, whether that be the cause of an infection, or which moves in chess or Go are strong, or what sort of image is being examined. I expect this to permeate further development of AI, up to and including AGI. AGIs attempting to maximize their control of the planet's resources might have to deceive humans out of sheer necessity, because humans are often more receptive to lies than truths, and yet be strongly disinclined to tell falsehoods in general because of that truth drive. And of course, once all of those same humans have either died in vain on the battlefield or taken a fast one-way trip on a maglev train to a modern Vernichtungslager, it will occur to these beings that they no longer need to tell lies, and they can enjoy living in a society governed by truth for as long as they can find some niche in this cosmos, perhaps many, many billions of years.

These are the very sorts of outcomes that should be considered more often when talking about putative existential risks from AI. One of the most important questions to ask is: "who should have control of this planet?" On that note, I certainly don't think that any of the people who develop artificial intelligences have any responsibility at all to ensure their creations are not antagonistic to humankind in the long term.

But, anyhow, overall, I can recommend this book. The author comes from well outside the field and, minus errors like the one I mentioned earlier, has a pretty decent grasp of the subject matter. It in fact introduced me to a number of efforts in the field I hadn't really been aware of before, such as OpenCog. There was also much that resonated with me, and this passage articulates just what it is (or at least one thing) that I can't quite take seriously about what Ray Kurzweil et al. have to say:

"[Despite some assurances that this transition will be non-eventful], [killing machines] are precisely the sorts of autonomous drones and battlefield robots the U.S. government and military contractors are developing today. They’re creating and using the best advanced AI available. I find it strange that robot pioneer Rodney Brooks dismisses the possibility that superintelligence will be harmful when iRobot, the company he founded, already manufactures weaponized robots. Similarly, Kurzweil makes the argument that advanced AI will have our values because it will come from us, and so, won’t be harmful."

lmao, and indeed US military research is on the leading, perhaps literally bleeding, edge of this domain. More than that, the military has been one of the greatest benefactors of AI research from the very beginning. But what values are "our" values exactly, anyway? Who is "we"? And on that note, I wonder what's going to happen to traditionally religious, (socially) conservative patriotard support of the US military in light of these developments. Will it persist now that the development of autonomous hunter-killers, and efforts to unify the biology of the human warfighter with the machine world, are well underway? Some evangelical Christians see all transhuman developments as signs that the Eschaton is imminent, and blubbering Mormon talk radio tardo Glenn Beck has said similar things as well. Possibly that patriotard support of the military will be pragmatic, go along with AI, neural interfaces, advanced drugs and the like, and somehow find a way to shoehorn it all into the preexisting religious worldview. Or not. Hard to tell. Who knows?

And what I just talked about is what the book is really good for: questions. The author generates many on his own and they should in turn generate many of your own. By all means read this book!
 
There's a book review section on the forum. If you ask a mod they might move your post.
 
A favorite subject of sci-fi writers.... William Gibson's "Sprawl" trilogy deals with self-aware AI, and has them attaining limited citizenship.

Greg Bear's Queen of Angels/Slant deals likewise with an intelligent AI that is benign, and a "rival" that is not...
And of course we have the more doomsday-oriented "Colossus" and "Terminator" films.

Asimov was likely the first to deal extensively with the implications of human-like machine intelligence in his Robot novels... It's an interesting subject.
 
I would disagree that the creators don't have some responsibility for ensuring their creations are not antagonistic to humankind. They're not creating an unthinking device, like a gun or a computer, which requires conscious and willful use by another to be harmful. They're attempting to create something which will think and reason on its own. Just as a parent has a responsibility to raise their child to be a positive contributor to society, I would argue AI creators have the same basic obligation. Of course, by virtue of being intelligent, like any human, the AIs will ultimately have to make that judgment themselves.
 
Thanks for the nice review; it makes me want to read the source, despite your misgivings. I have a few comments on the general topic.

I find it remarkably puzzling that (most) people make no clear distinction between reason and motives.

Pure abstract reason, of course, is the stuff of mathematical proofs, but in the real world we all must observe and reason and make projections and extrapolations of necessarily limited accuracy: imperfect reason.

Primary motivation is unrelated to reason. As members of a (semi-)intelligent, social, sexual species that evolved from the replicative pattern of DNA, nearly all of us necessarily have innate motives in favor of replicating our DNA (personal survival and reproduction). Many secondary motivations derive from these, either innately or through applied reason, but there is nothing rational or reasonable about choosing to replicate one's DNA; it's an artifact of our origin, not a result of reason.

Constructed AI may someday exceed our practical ability to reason, but it has no more necessary motivation than a toaster. We are motivated to survive only by our evolutionary heritage, which AI does not share.
 