Piggy
Unlicensed street skeptic
The way it looks to me at present is:
If, as the evidence suggests, a neuron is a sophisticated information processor, taking multiple input signal streams and outputting a result signal stream, we can, in theory (and probably in practice), emulate its substantive functionality with a neural processor (e.g. a chip like IBM's neural processor, but more sophisticated).
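To make that concrete, here's a rough Python sketch of a neuron treated as a stream processor: it integrates weighted samples from its input streams and emits a spike stream. This is purely illustrative (a leaky integrate-and-fire toy, not anyone's actual chip design), and every name and constant in it is an assumption for the sketch:

```python
# Illustrative leaky integrate-and-fire model; all names and constants
# are assumptions for the sketch, not a real neural-chip design.

class LeakyIntegrateFireNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # one weight per input stream
        self.threshold = threshold  # membrane potential needed to fire
        self.leak = leak            # fraction of potential retained each step
        self.potential = 0.0

    def step(self, inputs):
        """Consume one sample from each input stream; return 1 (spike) or 0."""
        self.potential = self.leak * self.potential + sum(
            w * x for w, x in zip(self.weights, inputs)
        )
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0
```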
If, as the evidence suggests, brain function is a result of the signal processing of many neurons with multiple connections between them, we can, in theory, emulate brain function using multiple neural processors connected in a similar way (with appropriate cross-talk if necessary). [We would probably need to emulate the brain-body neural interface too, i.e. give it sensors and effectors.]
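And a rough sketch of the wiring step, again purely illustrative: a few of the emulated neurons from the sketch above connected into a tiny two-layer net, fed from a stand-in "sensor" and driving a stand-in "effector". The topology and numbers are arbitrary assumptions:

```python
import random

# Stand-in sense organ: three noisy input channels per time step.
def read_sensor():
    return [random.random() for _ in range(3)]

# A small two-layer net built from the neuron sketch above.
layer1 = [LeakyIntegrateFireNeuron([0.4, 0.3, 0.5]) for _ in range(4)]
layer2 = LeakyIntegrateFireNeuron([0.6, 0.6, 0.6, 0.6])  # reads layer1's spikes

for t in range(100):                         # discrete time steps
    sample = read_sensor()                   # one sensory sample per step
    spikes = [n.step(sample) for n in layer1]
    effector_signal = layer2.step(spikes)    # would drive a stand-in "muscle"
```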
If, as the evidence suggests, consciousness is a result of certain aspects of the brain function described above, then, in theory, the emulation could support consciousness.
You're good up to here. So far, you're describing the process of building a real replica brain. You could sit this thing on your kitchen counter and literally watch it think.
Now keep in mind, the thing you just built is not conveying information. It's a real physical thing, moving some kind of electrophysical impulses through spacetime.
If you want it to convey information for you, you're going to have to come up with some kind of information that naturally mimics what it's already doing anyway.
Each neural processor can itself be emulated in software, and multiple neural processors and their interactions can be emulated in software; i.e. an entire subsystem of the brain can be replaced by a 'black box' subsystem emulation.
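A quick sketch of the "black box" point, just to illustrate it: the rest of the emulation only ever sees the subsystem's inputs and outputs, so whatever is inside could be swapped for anything that preserves that mapping. The class and names here are hypothetical:

```python
# Illustrative only: wraps a population of emulated neurons behind a
# single input->output interface; callers never see the internals.
class BrainSubsystem:
    def __init__(self, neurons):
        self._neurons = neurons          # hidden implementation detail

    def step(self, input_signals):
        # The outside world sees only this mapping from inputs to outputs.
        return [n.step(input_signals) for n in self._neurons]
```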
In theory, all the neural processors in a brain emulation, and their interactions, can be emulated in software using a single (very fast) processor, e.g. with multi-tasking, memory partitioning, and appropriate I/O from/to the sensor/effector net.
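A sketch of that single-processor version, with hypothetical read_sensors/drive_effectors hooks standing in for the sensor/effector net. One loop time-multiplexes every subsystem, each keeping its own state (the "memory partition"); chaining the subsystems one after another is a simplification for the sketch:

```python
# Illustrative single-processor loop: one loop time-multiplexes all the
# subsystems; read_sensors and drive_effectors are hypothetical I/O hooks.
def run_emulation(subsystems, read_sensors, drive_effectors, steps=1000):
    for _ in range(steps):
        signals = read_sensors()          # input from the sensor net
        for sub in subsystems:            # one fast processor, many "tasks"
            signals = sub.step(signals)   # pass each stage's output onward
        drive_effectors(signals)          # output to the effector net
```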
Given the above, it seems to follow that, in theory, consciousness could be supported on such a single processor software emulation of a brain.
I'm curious to know which of the above step(s) are considered problematic by those who don't agree, and why.
This is where you hit your problem.
When you emulate any real thing in software, the real things it was doing are no longer being done, and instead some very different real things are being done which are only informationally related to what the original system did.
And that informational relationship exists in the brains of those creating or reading the emulation.
So you've turned a real thing into an imaginary one.