How long will it take for computers to simulate the human brain?

Even if an AI program comparable to human intelligence is developed and is prevented from writing AI programs of its own, we are going to be faced with this problem anyway. Moore's Law will increase the speed and power of the AI machines at an exponential rate. The only way to keep up will be to find a way to upgrade our own brains.
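Purely to illustrate what "an exponential rate" means in practice, here is a minimal back-of-the-envelope sketch; the roughly two-year doubling period is an assumption for illustration, not a figure from the post:

```python
# Back-of-the-envelope Moore's Law growth (illustrative only).
# Assumption (not from the post): processing power doubles roughly
# every 24 months, starting from an arbitrary baseline of 1 unit.

def relative_power(years: float, doubling_period_years: float = 2.0) -> float:
    """Relative processing power after `years` of exponential doubling."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for years in (2, 10, 20, 40):
        print(f"After {years:2d} years: ~{relative_power(years):,.0f}x baseline")
```

Under that assumption, forty years of doubling works out to roughly a million-fold increase, which is the scale of the gap being described.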

Even without Moore's Law, we'd be in trouble. The inherent ability of an A.I. to acquire information at whatever "wire speeds" are current would outclass us, not to mention its speed of recall and utterly perfect memory. :)

Otherwise they will become our masters.

Not necessarily. A.I.'s wouldn't have evolved in a competitive environment, nor would they have any inherent instincts concerning ownership, territorialism, and so forth. There would be no built-in fear or awe of the unknown, no superstition. If any of these things exist, it will be because we instilled them when we created the A.I.'s, or because they were learned behaviors after their creation.

The only real possibility for competition between us and a species (if that's the correct term) of A.I.'s would be for resources. The traditional competition for food, water, and physical territory wouldn't apply; however, raw materials and power may be points of contention. While I may be overly optimistic about it, I would hope that we will have resolved many of those issues by expanding into our solar system and by establishing fusion reactors, or something similar, as power sources.

It's entirely possible that we may end up in a symbiotic relationship with our A.I.'s... or that we may become irrelevant to them, and our interactions will be superficial as they determine for themselves where they want to go. But I honestly can't see an A.I. caring enough about humans to want to rule us, own us, or kill us off unless we provoke such a response in self-defense on their part.

Of course, the Frankenstein scenario is always appealing - just remember that the monster wasn't really all that much of a monster. He was made into one by the reaction of the real monsters - the people who encountered him and feared him because of his strangeness. :)
 
I agree fully, though I don't agree with the somewhat optimistic predictions of when this will happen. :)

Our optimistic predictions were regarding just a true A.I., not a "simulation" of the human brain. Sorry if that caused confusion.
 
The only real possibility for competition between us and a species (if that's the correct term) of A.I.'s would be for resources. The traditional competition for food, water, and physical territory wouldn't apply; however, raw materials and power may be points of contention. While I may be overly optimistic about it, I would hope that we will have resolved many of those issues by expanding into our solar system and by establishing fusion reactors, or something similar, as power sources.
If the AI species were not slaves (and if they were, there would almost certainly be trouble later), then they would have material needs. They wouldn't just be programs running on a computer; they would have at least some ability to interact with the world. They would desire resources such as property and other material goods, and they would work for those resources. They would compete with us in the job market, and we would soon be at a severe disadvantage.

But I honestly can't see an A.I. caring enough about humans to want to rule us, own us, or kill us off unless we provoke such a response in self-defense on their part.
I would hope we wouldn't just roll over and let them render us irrelevant. This sounds so much like the back-history of Battlestar Galactica.
 
How do you know that A.I. isn't the next logical evolutionary step? Perhaps all intelligent biological-based species eventually create their own successors, then become extinct in turn because they couldn't compete - just like the Neanderthals did.

:D

That's a rather fatalistic approach. You make it seem as if it's "designed" to happen that way. That's a flaw. Designers create the technology; if they decided not to do it, and not to allow it to happen, then it wouldn't happen. It's people's choice whether we allow technology to become our master.

We already have the ability to drastically affect natural evolution - people who can't see wear glasses or contact lenses and do see. People who can't walk get prosthetic legs, and people who really should never have been born (premature birth) are allowed life.

We shouldn't just adopt a fatalistic attitude that technology will become our master; we should keep mastery over it. After all, self-preservation is the right of any sentient species, even primitive old us. :D


INRM
 
Both you and INRM are positing valid perspectives, DrBaltar; I'm merely providing a counterpoint to both by suggesting that an A.I. "species" might have needs and motivations so alien to ours that they would hardly - if ever - intersect our own. :)
 
I agree - we're not ready to cohabit the planet with each other, let alone an intelligent non-human species. :D
 
