Right, but keep in mind they're only "logic operations" in your mind. Objectively, they're physical computations, not symbolic ones.
Which is why I've been talking in terms of the transformation or translation of input streams to output streams.
Yes, but this is irrelevant.
I think it is relevant. You said "When you emulate any real thing in software... you've turned a real thing into an imaginary one". I'm saying this isn't necessarily so. When one microprocessor emulates another in software, it is just a language translation; whether it is done hard-coded on-chip or in RAM makes no difference. The same algorithm using the same instruction set can run on both processors identically. The same principle applies to one microprocessor emulating multiple others by time-slicing or other means.
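To illustrate what I mean by a language translation, here is a toy sketch (the three-opcode instruction set and all the names are invented for the example, not taken from any real chip). The same program produces the same output stream whether it runs on the "native" interpreter or through an emulator that translates each guest instruction into a host operation:

    # A toy guest program: a list of (opcode, operand) pairs.
    PROGRAM = [("LOAD", 7), ("ADD", 5), ("OUT", None)]

    def run_native(program):
        """Stand-in for the guest processor executing its own instruction set."""
        acc, out = 0, []
        for op, arg in program:
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "OUT":
                out.append(acc)
        return out

    def run_emulated(program):
        """A host processor emulating the guest: each guest opcode is translated
        into an equivalent host operation. Same algorithm, same instruction
        stream, same output stream."""
        translate = {
            "LOAD": lambda acc, arg: arg,
            "ADD":  lambda acc, arg: acc + arg,
            "OUT":  lambda acc, arg: acc,   # value is emitted by the loop below
        }
        acc, out = 0, []
        for op, arg in program:
            acc = translate[op](acc, arg)
            if op == "OUT":
                out.append(acc)
        return out

    assert run_native(PROGRAM) == run_emulated(PROGRAM) == [12]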
But that is irrelevant to the point at issue, which is that you can't substitute an imaginary thing for a real one.
I'm trying but failing to see what imaginary thing you think I've substituted for what 'real' thing.
Physical computations are real.
The symbolic computations we associate with them are imaginary.
When you swapped the brain's neurons with other objects that performed a similar real function, you simply produced a replica brain.
Great - that's what I wanted to do.
When you attempted to introduce a simulation into the system, you swapped a real object for an object which is merely associated with the target object in your imagination (because the relationship between the simulator and the system intended to be simulated is imaginary), which won't work, for obvious reasons.
Exactly what simulation are you referring to? I talked about replacing neurons with functionally equivalent chips running neuron algorithms, which you accepted, and replacing groups of those chips with a single multi-tasking chip that emulates them - still running the original neuron algorithms on each virtual processor. What has become imaginary?
Of course, if the simulation exists in some sort of black box along with some unknown hardware, and the real inputs (physical calculations) going into it and the outputs coming out of it are identical to those of the physical system you want to replace, well, in that case you've simply created a very complicated replacement part.
Eh? The inputs going into such a black box are not physical calculations, they are just modulated signals, e.g. electrical pulses. That apart, would you accept such a black box as a replacement part for, say, the visual cortex (assuming we could handle all the necessary inputs and outputs)?
But this simply means that you're changing parts. You're not attempting to replace a physical computation with an entirely different physical computation that is merely imagined to correspond to the one it replaces.
Well of course. We want a brain that works. My point is that if you can replace all the biological neurons with neural processors running neuron algorithms, you can also virtualise all those processors, and run the same neuron algorithms on a single multi-tasking processor. [In practice, sufficiently powerful hardware would be a problem, but the killer would probably be the timing considerations].
So, in theory, we can have a brain emulation running on a single physical processor with memory, and apart from the I/O subsystem, everything else would be software or data.
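As a rough sketch of what that virtualisation amounts to (the leaky-integrate update and every name below are invented for illustration, not a claim about how real neurons compute): a single processor time-slices the same neuron algorithm across many virtual neuron processors, provided every update for one global time step completes before the next step begins, which is exactly the timing constraint I mentioned.

    import random

    def neuron_step(state, inputs):
        """The same neuron algorithm each dedicated chip would have run:
        a toy leaky integrator that fires when it crosses a threshold."""
        v = state["v"] * 0.9 + sum(inputs)
        fired = v > 1.0
        state["v"] = 0.0 if fired else v
        return fired

    # One physical processor hosting many virtual neuron processors.
    neurons = [{"v": 0.0} for _ in range(1000)]

    for t in range(100):                        # one global time step per pass
        external = [random.uniform(0.0, 0.3) for _ in neurons]
        spikes = [neuron_step(state, [external[i]])
                  for i, state in enumerate(neurons)]   # time-sliced, one at a time
        # Every neuron's update for step t must finish before step t+1 starts;
        # with enough neurons, meeting that deadline is the hard part.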
Remember, the brain isn't processing information. To think so is to mistake the information processing metaphor (the post office metaphor) for reality.
Yes, we say that an "image" is "recognized" as a "face" and "routed" through the amygdala, and so forth.
Yet none of that happens. Until we get to the processes which support conscious awareness (which are downstream), none of that language makes sense.
...
And if the information isn't in the impulses, then it makes no sense to declare that they somehow are an image of a face as far as the organ of the brain can tell at this point. They can't be. (Symbolic value requires an interpreter.)
Nor is there any structure capable of "recognizing" the impulse as "an image of a face" and therefore "routing" it anywhere.
Yes, I'm aware of all that, and I'm not claiming those things.
Well, you're either going to have to build your mechanical brain neuron-by-neuron, or you're going to have to sacrifice functionality, because each input-output point has an impact on how the brain functions.
Not entirely. As you yourself said, you can replace a subsystem with a black box where the real inputs going into it and the outputs coming out of it are identical to those of the physical system you want to replace; and there are certainly neural circuits at the lower levels (e.g. on the order of neural columns) whose well-defined contribution could be replaced by a component that isn't based on neuron emulation.
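To picture the replacement criterion concretely (the toy components and names below are mine, purely for illustration): whatever is inside the box, it counts as a valid replacement part if it maps the same input streams to the same output streams at the boundary.

    def neuron_column(pulses):
        """Stand-in for the original circuit, e.g. a column of emulated neurons."""
        return [1 if p > 0.5 else 0 for p in pulses]

    def black_box(pulses):
        """Stand-in for a replacement that isn't neuron-based at all;
        here just a threshold, but it could be anything internally."""
        return [int(p > 0.5) for p in pulses]

    def is_valid_replacement(original, candidate, input_streams):
        """The only test that matters at the boundary:
        identical outputs for identical inputs."""
        return all(original(s) == candidate(s) for s in input_streams)

    test_inputs = [[0.1, 0.7, 0.4], [0.9, 0.2, 0.6]]
    assert is_valid_replacement(neuron_column, black_box, test_inputs)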
Keep in mind that there are folks right now who are building a brain simulation in precisely this way... neural column by neural column.
Theirs is a practical attempt using today's technology. I'm interested in what is theoretically possible.