Piggy said:
Now keep in mind, the thing you just built is not conveying information. It's a real physical thing, moving some kind of electrophysical impulses through spacetime.
If you want it to convey information for you, you're going to have to come up with some kind of information that naturally mimics what it's already doing anyway.
I don't follow what you mean by 'conveying information'. Naturally it's a physical thing - you need hardware to perform the switching & logic operations.
Right, but keep in mind they're only "logic operations" in your mind. Objectively, they're physical computations, not symbolic ones.
Piggy said:
This is where you hit your problem.
When you emulate any real thing in software, the real things it was doing are no longer being done, and instead some very different real things are being done which are only informationally related to what the original system did.
...So you've turned a real thing into an imaginary one.
Sounds like some sort of deus ex machina...
I really can't see what has fundamentally changed; we know that the physical implementation of a transform function isn't relevant to the processing of inputs to produce outputs in any other form of computing - a mechanical adding machine gives the same results as an electronic calculator; the 'real things' being done are physically different, but the function achieved is the same. If the electronic calculator or a computer emulates the adding machine in software, what is imaginary? Isn't there still a functional adding machine? The implementation is different, is all.
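To make the point concrete, here is a rough Python sketch (the function names and the digit-by-digit scheme are invented purely for illustration): one version of addition mimics a geared adding machine's carry mechanism, the other uses the host's native arithmetic, and identical inputs give identical outputs either way.

```python
# Two implementations of the same transform: numbers in, sum out.
# The "mechanical" version mimics a per-digit carry mechanism;
# the "electronic" version just uses the host's native addition.

def add_mechanical(a: int, b: int) -> int:
    """Add non-negative integers by propagating carries digit by digit."""
    result, carry, place = 0, 0, 1
    while a or b or carry:
        digit = (a % 10) + (b % 10) + carry
        carry, digit = divmod(digit, 10)
        result += digit * place
        place *= 10
        a //= 10
        b //= 10
    return result

def add_electronic(a: int, b: int) -> int:
    """Add using the host CPU's native arithmetic."""
    return a + b

# Same function achieved; entirely different 'real things' done underneath.
assert all(add_mechanical(a, b) == add_electronic(a, b)
           for a in range(100) for b in range(100))
```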
Yes, but this is irrelevant.
What you're saying here is that we can assign imaginary symbolic values to any similar set of physical computations... as long as the changes proceed the same way, the nature of the object making the changes isn't important.
But that is irrelevant to the point at issue, which is that you can't substitute an imaginary thing for a real one.
Physical computations are real.
The symbolic computations we associate with them are imaginary.
When you swapped the brain's neurons with other objects that performed a similar real function, you simply produced a replica brain.
When you attempted to introduce a simulation into the system, you swapped a real object for an object which is merely associated with the target object in your imagination (because the relationship between the simulator and the system it is intended to simulate is imaginary), which won't work, for obvious reasons.
Of course, if the simulation exists in some sort of black box along with some unknown hardware, and the real inputs and outputs (physical computations) going into it and coming out of it are identical to those of the physical system you want to replace, well, in that case you've simply created a very complicated replacement part.
The original neural processors were already emulating 'real things': they ran microcode that translated the instructions for neuron behavior into their native instruction set, then executed the native instructions, so there is a level of abstraction between the instructions for behaving like a neuron and the hardware doing it. If a different neural processor chip were used, the same neuron behavior instructions would be translated into different native instructions and executed in a different way by the hardware, but with the same end result: similar inputs would produce similar outputs; the particular physical circuits and pathways used to achieve the transformation from inputs to outputs are not relevant. A Windows application works just the same on my native Intel Pentium box running Windows as on a Linux server running a Windows emulator.
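As a rough illustration of that abstraction layer (everything here is hypothetical: the two 'back ends' and the toy threshold rule are invented for the example), the same high-level neuron-behavior step can be carried out by different native operations and still map the same inputs to the same outputs.

```python
# One high-level neuron-behavior step, two hypothetical back ends.
# Each back end does the work with different "native" operations,
# but the input-to-output mapping is identical.

def neuron_step_backend_a(inputs, weights, threshold):
    # Back end A: explicit accumulation loop.
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w
    return 1 if total >= threshold else 0

def neuron_step_backend_b(inputs, weights, threshold):
    # Back end B: the same behavior expressed through different operations.
    return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

inputs, weights, threshold = [0.2, 0.9, 0.4], [1.0, 0.5, -0.3], 0.5
assert neuron_step_backend_a(inputs, weights, threshold) == \
       neuron_step_backend_b(inputs, weights, threshold)
```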
Right. But this simply means that you're changing parts. You're not attempting to replace a physical computation with an entirely different physical computation that is merely imagined to correspond to the one it replaces.
If you replace one physical computation with another that is functionally similar -- attach a wooden leg to a broken metal chair -- there's no problem.
But if you try to replace X in any physical system with a simulation of X, the system will ignore the informational transformations (any number of which could be possible, but none of which affect the machine in any way) and respond only to the physical computations, which are determined by the properties of the simulator regardless of what is supposed to be simulated.
And if those are the same, then your simulation is superfluous, because you're simply using a physical replacement part (which happens to be running a simulation at the moment).
Remember, the brain isn't processing information. To think so is to mistake the information processing metaphor (the post office metaphor) for reality.
Yes, we say that an "image" is "recognized" as a "face" and "routed" through the amygdala, and so forth.
Yet none of that happens. Until we get to the processes which support conscious awareness (which are downstream) none of that language makes sense.
It's every bit as metaphorical as saying that the "pressure" at work is "building up" and I need to "let off steam" by "channeling" my energy into sports and "venting" to my buddies at the bar. We should not take the language of the information processing model any more literally than we take the language of the hydraulic model when discussing the brain.
The impulse isn't an "image of a face" because there's nobody around to look at it and decide that it is supposed to correspond to anything. That information certainly isn't in the impulses. So this would be like saying you can look at a person and deduce whether they have a twin.
And if the information isn't in the impulses, then it makes no sense to declare that they somehow are an image of a face as far as the organ of the brain can tell at this point. They can't be. (Symbolic value requires an interpreter.)
Nor is there any structure capable of "recognizing" the impulse as "an image of a face" and therefore "routing" it anywhere.
The reason the impulse goes where it goes has nothing to do with any logical computations. At this stage, it's 100% physical computations, just like your heart or your lungs or your fingernails or anything else. All the laws of physics, nothing more.
When you use a single microprocessor to emulate multiple microprocessors, there is still physical hardware that is performing the same switching and logic operations, but now one piece of hardware is performing the switching and logic operations previously performed by many pieces of hardware. The same functions are applied to convert inputs to outputs, but the overall implementation is different.
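A minimal sketch of that claim, using a made-up two-state 'unit' purely for illustration: one host loop visits every unit in turn and performs the updates that separate pieces of hardware would otherwise perform in parallel, with the same end result.

```python
# One processor emulating many simple units by time-slicing.
# The update rule is the same; only the scheduling differs from having
# one physical processor per unit.

def update(state, inp):
    """Toy update rule for a single unit."""
    return (state + inp) % 2

def step_all_sequentially(states, inputs):
    """One processor visits every unit in turn within a single time step."""
    return [update(s, i) for s, i in zip(states, inputs)]

states = [0, 1, 1, 0]
inputs = [1, 1, 0, 0]
# The serial sweep produces the same per-unit results that four dedicated
# processors would produce in parallel.
assert step_all_sequentially(states, inputs) == [1, 0, 1, 0]
```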
Are you suggesting that we must have as many physical processors as there are neurons?
Well, you're either going to have to build your mechanical brain neuron-by-neuron, or you're going to have to sacrifice functionality, because each input-output point has an impact on how the brain functions.
Keep in mind that there are folks right now who are building a brain simulation in precisely this way... neural column by neural column.