Could someone recommend a book with a general overview of the current state of AI? Basically I have in mind something like Gödel, Escher, Bach but updated to the current state of research.
As far as I can gather, there are a couple of possible scenarios by which humankind could develop AI:
1. Someone sits down and writes slick software that is capable of learning. Then we teach it like a kid and somehow it becomes self-aware.
2a. Viruses with self-modifying code and/or mutations evolve into AI on the ever-growing Internet.
2b. A cellular automaton, some Game of Life, in a very complex virtual world with competition for resources, evolves ever more complex virtual life that ultimately becomes self-aware.
2c. With time and Moore's law, Google or the like, with all of its services, gets so complicated that the software on its zillion parallel CPUs just sparks self-awareness.
3. Imitate or simulate the human brain.
Anything missing here?
Dare to bet the winner? I would pick 1 or 3...
Is there at least some development that could rule out one of these scenarios?
Thanks!
There's a problem with the phrase "self-aware". We want to say we know what it means, but when pressed, the definition tends to be tricky. On a literal reading, some cars could be considered self-aware, because they have sensors that monitor their internal state, and redundant sensors that monitor the state of the sensor systems.
But in the interests of playing along, I'll ignore that issue...
My personal take:
1. Assuming a hardware architecture similar to today's, only more powerful: I think it's unlikely, because complex, hand-written software tends to be rigid, bug-prone (especially in terms of regression errors), and limited by the preconceptions of the author. This is a top-down approach, and my gut tells me AI will somehow spark from the bottom up.
2a. How would we recognize it? We can't assume that a self-organized intelligence would necessarily be anything like ours. It may not recognize that there are intelligent agents "outside" its own system. In any case, I don't think the environment is fluid or variable enough.
2b. Totally possible, but to have an interesting sort of intelligence, your virtual world needs to be VERY rich and dynamic, so that the CAs don't fall into relatively simple stable patterns and interesting behaviors are allowed to form (a toy illustration of that failure mode follows after this list). In some sense, it's just shifting the programming burden.
2c. How's this different from 2a?
3. Can we leave out the word "human"? I think it will be easiest to coax an intelligence into existence if we make a cozy, dynamic, open-ended, mutable environment (artificial neurons in a fluid medium, maybe, which allows them to form, break, and re-form connections). Then we seed the environment with genetic algorithms, give it input sensors, output actuators, and goals, and let it run for a while. I think the literature would call this a cybernetic (meaning control-oriented, not human/digital hybrid) connectionist architecture. A bare-bones sketch of that seeding step also follows below.
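To make my worry about 2b concrete, here's a toy Game of Life in Python (just numpy; the grid size, soup density, and generation cap are arbitrary choices for the demo, not anything canonical). Under the plain Conway rules, a random soup usually collapses into still lifes and short oscillators, i.e. the "relatively simple stable patterns" I mentioned:

    import numpy as np

    def step(grid):
        # Count each cell's eight neighbors by summing shifted copies of
        # the grid (np.roll wraps around, so the world is a torus).
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Conway's rules: a cell is alive next generation if it has exactly
        # 3 live neighbors, or 2 live neighbors and is currently alive.
        return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

    rng = np.random.default_rng(0)
    grid = (rng.random((32, 32)) < 0.3).astype(int)  # random soup, ~30% alive

    seen = set()
    for gen in range(500):
        key = grid.tobytes()
        if key in seen:  # any repeated state means a fixed point or a cycle
            print(f"entered a repeating cycle by generation {gen}")
            break
        seen.add(key)
        grid = step(grid)
    else:
        print("still changing after 500 generations")

Checking for a repeated grid state catches both fixed points and oscillators, and on most seeds it fires long before generation 500. Getting open-ended complexity out of a CA means enriching the rules or the environment, which is exactly the burden-shifting I meant.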
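And here's the bare-bones sketch of the seeding idea from my take on 3: a genetic algorithm evolving the two parameters of a single tanh neuron (one sensor, one actuator) toward a goal behavior. The population size, mutation rate, selection scheme, and toy target are all invented for illustration; a real connectionist setup would evolve the topology too, inside a much richer environment:

    import math
    import random

    INPUTS = [-1.0, -0.5, 0.0, 0.5, 1.0]
    TARGET = [math.tanh(2.0 * x + 0.5) for x in INPUTS]  # goal: w=2.0, b=0.5

    def act(genome, x):
        # A single tanh neuron mapping sensor reading to actuator output.
        w, b = genome
        return math.tanh(w * x + b)

    def fitness(genome):
        # Higher is better: negative squared error against the goal behavior.
        return -sum((act(genome, x) - t) ** 2 for x, t in zip(INPUTS, TARGET))

    def mutate(genome, rate=0.1):
        # Gaussian perturbation of every gene.
        return tuple(g + random.gauss(0, rate) for g in genome)

    pop = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(50)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:10]  # truncation selection: keep the top fifth
        pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    best = max(pop, key=fitness)
    print(f"best genome w={best[0]:.2f} b={best[1]:.2f}, "
          f"error={-fitness(best):.4f}")

Truncation selection plus Gaussian mutation is about the crudest GA there is, but it reliably recovers w≈2.0, b≈0.5 here. The interesting (and hard) part is replacing the fixed fitness function with open-ended goals in a dynamic environment.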