How long will it take for computers to simulate the human brain?

That's not a copy - it's an image of the original. No one would confuse that with a painting, let alone a masterpiece.

Ok - what about a robot with a high-res camera working with dots and dabs of paint?

Given a similar type of canvas - I reckon you could create a pretty damn good facsimile.
 
I think, for one, our brains are bigger, and we also have the physical ability to speak. The things you mention that we can do that other animals can't (communicate on this forum, make up humorous comments (no other animal laughs), construct an artificial language (mathematics), ask the question "why" about the world surrounding us, develop civilization, put men on the moon, and so forth) took us tens of thousands of years to be able to do. Mammals have shown greater intelligence than other animals. Surely one set of mammals is more intelligent than the others, and that set has been the primates. And of the primates, hominids have been the most intelligent.

Ray Kurzweil may be a futurist, but he does convincingly show that Moore's law isn't just about microprocessors. It's about the exponentially increasing speed at which information can be encoded. And he lists many paradigm shifts that happen along the way which enable the next series of advancements. Microprocessors were one paradigm shift. See this chart:
http://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/PPTMooresLawai.jpg/596px-PPTMooresLawai.jpg
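Just to make the exponential claim concrete, here's a rough sketch of what fixed-period doubling implies (my own toy numbers and doubling period, not Kurzweil's actual model):

```python
# Toy illustration of Moore's-law-style growth: assume the count doubles
# every ~2 years and project it forward. The starting figures below
# (Intel 4004, ~2,300 transistors, 1971) are illustrative assumptions.
def projected_count(start_count, start_year, target_year, doubling_years=2.0):
    """Project a count forward assuming fixed-period doubling."""
    elapsed = target_year - start_year
    return start_count * 2 ** (elapsed / doubling_years)

if __name__ == "__main__":
    # 37 years of doubling every 2 years is ~18.5 doublings - from
    # thousands of transistors to hundreds of millions.
    print(f"{projected_count(2300, 1971, 2008):,.0f}")
```

The point of the chart is that when one physical substrate runs out of steam, a new paradigm (vacuum tubes, transistors, integrated circuits...) picks up the same curve.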

And this logarithmic chart shows the paradigm shifts that have been happening for the last several billion years:
http://upload.wikimedia.org/wikiped...lestones.jpg/770px-PPTCanonicalMilestones.jpg

If we look at another species, their 'playing' or semi-intelligent and/or instinctual behaviors will seem pretty mundane to us because they are further back on that logarithmic scale. They advance nowhere near as fast as we do. But I believe their intelligence, as limited as it may be, is there.

Interesting thoughts, and I find a lot of merit in them. But evolution plays no favorites. The other species may or may not have intelligence, but since we've all been on the planet for about the same time (species-and-ancestor-wise), it seems pretty clear that selection for intelligence on the order that hominids enjoy wasn't a part of their selective process.

Mammals in particular all began their explosive growth at about the same time, historically speaking... yet only our species developed the level of intelligence we're discussing. This would argue to me that we took a fairly unique evolutionary branch, with only the Neanderthals showing a similar nature. And we know what happened to them. :)
 
Ok - what about a robot with a high-res camera working with dots and dabs of paint?

Given a similar type of canvas - I reckon you could create a pretty damn good facsimile.

Even so, that doesn't invalidate my point - the painting is more than simply the sum of its constituent components. In the same sense, what we are is more than the sum of the constituent components of our brains.

What I'm trying to say is that systems are not as simple as what they are physically made up of. Our physical makeup is only one component, and we can't simply cite our physical makeup as the definition of what we are. :)
 
137 years.

No more. No less.

The simulation will be fairly accurate and effective, but will run at a much slower rate than a human brain at first.

It will become conscious, of course, and have imagination.

Its consciousness will arise by need rather than by design.

It will be a mind, but not a human mind.

The techniques that will be used in constructing the complex from which the intelligence will emerge are not known today.

It will not be appropriate to think of it as an 'artificial' intelligence. It will agree.

Any other questions?
 
Searle argues that a simulation would not be truly conscious, and could not be. Consciousness is a real phenomenon, and a simulation of a thing is not the thing itself - so it cannot arise purely from abstract information processing.

Could a truly conscious machine be constructed? "That may be possible," says Searle, "We are one such machine."

But it won't arise magically out of information processing.
 
A skilled painter can make a copy of the Mona Lisa good enough to fool most non-experts.
Try to find a computer that works like a human and fools people.

If you really want to see one, go check out the "Annoying Creationists" thread.
 
The test of an AI program is that it can figure out how to write an AI program.
 
Has Moore's law been flawlessly accurate? Have there been any exceptions to it?

I'm not sure that making machines as smart as us, or smarter, is such a good idea. What if they decide they want to kill us? Programming them not to want to kill humans won't cut it -- to have real intelligence, you have to be able to re-evaluate your own beliefs. Many of you here were raised to believe in God, but eventually realized or decided you didn't. Well, if a computer were as intelligent as us and programmed not to want to kill humans, it could re-evaluate this belief and decide, from what it has experienced, that it should.


Complexity,
137 years.

No more. No less.

The simulation will be fairly accurate and effective, but will run at a much slower rate than a human brain at first.

It will become conscious, of course, and have imagination.

Its consciousness will arise by need rather than by design.

It will be a mind, but not a human mind.

The techniques that will be used in constructing the complex from which the intelligence will emerge are not known today.

It will not be appropriate to think of it as an 'artificial' intelligence. It will agree.

Any other questions?


What? 137 years for what? Where did you derive these figures from?



BTW: Who's Ray Kurzweil? And, while this may sound stupid, what exactly is a futurist?


INRM
 
A skilled painter can make a copy of the Mona Lisa good enough to fool most non-experts.
Try to find a computer that works like a human and fools people.

Um. I think you're arguing against someone who's agreeing with you. :)

It's going to be a long time - if ever - before we can create a computer with a density and complexity similar to the human brain's. IMO, in order to duplicate the processes that make up human consciousness (not merely emulate or simulate consciousness!), we would need something of similar capacity.

That's not to say that artificial intelligence isn't possible, or that an A.I. couldn't appear to be a self-aware human - but upon closer examination, it would be clear that an A.I.'s processes to achieve that goal would be much different than the ones inside human "wetware". It may even be possible to create an A.I. that truly is self-aware - but again, I think it's unlikely that those processes would be duplicates of what evolution hath wrought.

Hey, it's unlikely that aliens would use the same processes as humans, too. It's pretty clear that evolution can (and often does) follow a wildly different path based on initial and subsequent conditions. :)
 
Um. I think you're arguing against someone who's agreeing with you. :)

It's going to be a long time - if ever - before we can create a computer with a density and complexity similar to the human brain's. IMO, in order to duplicate the processes that make up human consciousness (not merely emulate or simulate consciousness!), we would need something of similar capacity.

That's not to say that artificial intelligence isn't possible, or that an A.I. couldn't appear to be a self-aware human - but upon closer examination, it would be clear that an A.I.'s processes to achieve that goal would be much different than the ones inside human "wetware". It may even be possible to create an A.I. that truly is self-aware - but again, I think it's unlikely that those processes would be duplicates of what evolution hath wrought.

Hey, it's unlikely that aliens would use the same processes as humans, too. It's pretty clear that evolution can (and often does) follow a wildly different path based on initial and subsequent conditions. :)

I cannot see why A.I. should be so difficult to achieve.
We are already down to 45nm with the new Intel processors.
Soon down to 32nm and to 22nm.
DNA is 2nm thick.
How long before chips will reach a size comparable to DNA?
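Back-of-envelope arithmetic on that question (my own assumptions: each process node shrinks linear feature size by roughly 0.7x and arrives about every 2 years - neither figure is from the thread):

```python
import math

# How many 0.7x node shrinks does it take to get from 45 nm features
# down to DNA's ~2 nm diameter, and roughly how long at ~2 years/node?
# All constants here are rough, illustrative assumptions.
def nodes_to_reach(start_nm, target_nm, shrink=0.7):
    """Number of fixed-ratio shrinks needed to go from start to target."""
    return math.log(target_nm / start_nm) / math.log(shrink)

nodes = nodes_to_reach(45, 2)
print(f"~{nodes:.1f} node shrinks, roughly {nodes * 2:.0f} years")
```

Under those assumptions it's on the order of nine more nodes, i.e. a couple of decades - though feature size alone says nothing about whether the resulting chip is organized anything like a brain.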
 
I cannot see why A.I. should be so difficult to achieve.
We are already down to 45nm with the new Intel processors.
Soon down to 32nm and to 22nm.
DNA is 2nm thick.
How long before chips will reach a size comparable to DNA?

Well, first of all, cells are much larger than DNA - and the processes that occur in the brain are supported at the cellular level. Further, chips use a matrix of on-off gates to process information via patterns; what a lot of people overlook when comparing tech to brain cells is that the brain operates chemically as well as electrically. It's not just a "neuron firing, yes or no?" operation. There are a host of chemical messengers involved, such as hormones and neurotransmitter agents... all of which play a role in the operation of the brain, and all of which affect consciousness. (Particularly hormonal levels)

There are also indications that the strength of the electrical signal when a neuron is firing may have an impact too - it's very possible that neuron firings may carry more states than a simple "on-off" like computer chips do.
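The contrast with binary gates can be sketched with the standard leaky integrate-and-fire model from the neuroscience textbooks (my choice of model and parameters, purely illustrative): the neuron's state is a continuous membrane potential, and a chemical "modulation" term can shift its excitability the way hormones and neurotransmitters do.

```python
# Minimal leaky integrate-and-fire sketch. The point: the neuron's state
# is a graded, analog quantity (v), not one bit, and the same inputs
# produce different spike trains under different chemical "contexts".
# Parameters (threshold, leak, modulation) are illustrative assumptions.
def simulate(inputs, threshold=1.0, leak=0.9, modulation=0.0):
    """Return the spike train (0/1 per step) for a stream of input currents."""
    v = 0.0          # membrane potential: continuous, decays over time
    spikes = []
    for current in inputs:
        v = v * leak + current + modulation  # integrate input with leak
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Identical input stream, different "hormonal" context, different output:
quiet = simulate([0.3] * 10)
excited = simulate([0.3] * 10, modulation=0.2)
```

Even this cartoon shows why "neuron = logic gate" undersells the system: the gate's output depends on a hidden analog state and on global chemistry, not just on the current input.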

Then, as I said, you cannot reduce a complex system to its basic components and say "Here is what makes this system work the way it does." The most you can say when you over-simplify (as you are doing) is "Here are the necessary components required to enable the system to exist."

There's a magnitude of difference between the two. :)
 
JMercer,

Having a computer AI that can make an AI could be dangerous. It would create a singularity-like event in which each one would make one more advanced than itself, at an exponential rate.
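The runaway dynamic here is just geometric growth, which a toy model makes plain (all numbers below are my own illustrative assumptions, not a prediction):

```python
# Toy model of recursive self-improvement: each generation builds a
# successor some fixed factor more capable than itself. The improvement
# factor and target are arbitrary, illustrative choices.
def generations_until(target, start=1.0, improvement=1.5):
    """Count generations until capability first exceeds `target`."""
    capability, gens = start, 0
    while capability < target:
        capability *= improvement
        gens += 1
    return gens

# Even a modest 50% gain per generation crosses a millionfold
# improvement in a few dozen generations.
print(generations_until(1_000_000))
```

That's the whole worry in three lines: the growth per step can be small and the cumulative effect still explodes.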

Technology is meant to serve us. With this, it will become OUR master.


INRM
 
JMercer,

Having a computer AI that can make an AI could be dangerous. It would create a singularity-like event in which each one would make one more advanced than itself, at an exponential rate.

I didn't suggest that. :)

Technology is meant to serve us. With this, it will become OUR master.

INRM

How do you know that A.I. isn't the next logical evolutionary step? Perhaps all intelligent biological-based species eventually create their own successors, then become extinct in turn because they couldn't compete - just like the Neanderthals did.

:D
 
Having a computer AI that can make an AI could be dangerous. It would create a singularity-like event in which each one would make one more advanced than itself, at an exponential rate.

Technology is meant to serve us. With this, it will become OUR master.
Even if an AI program is developed that is comparable to human intelligence and is prevented from writing an AI program, we are going to be faced with this problem anyway. Moore's Law will increase the speed and power of the AI machines at an exponential rate. The only way to keep up will be to find a way to upgrade our brains. Otherwise they will become our masters.
 
Well, first of all, cells are much larger than DNA - and the processes that occur in the brain are supported at the cellular level. Further, chips use a matrix of on-off gates to process information via patterns; what a lot of people overlook when comparing tech to brain cells is that the brain operates chemically as well as electrically. It's not just a "neuron firing, yes or no?" operation. There are a host of chemical messengers involved, such as hormones and neurotransmitter agents... all of which play a role in the operation of the brain, and all of which affect consciousness. (Particularly hormonal levels)

There are also indications that the strength of the electrical signal when a neuron is firing may have an impact too - it's very possible that neuron firings may carry more states than a simple "on-off" like computer chips do.

Then, as I said, you cannot reduce a complex system to its basic components and say "Here is what makes this system work the way it does." The most you can say when you over-simplify (as you are doing) is "Here are the necessary components required to enable the system to exist."

There's a magnitude of difference between the two. :)

I understand what you say, and I also understand the over-simplification of my stance.
Still, we are getting microchips down to the molecular level, which is also the scale at which biology seems to work.
So I cannot see why we could not create a brain out of silicon.
At least it seems, at first, not theoretically impossible.
 
Still, we are getting microchips down to the molecular level, which is also the scale at which biology seems to work.
So I cannot see why we could not create a brain out of silicon.
At least it seems, at first, not theoretically impossible.

I agree fully that it's not theoretically impossible; I just don't agree with the somewhat optimistic predictions of when this will happen. :)
 
