That is ONLY because it is easier to use processors to emulate real hardware. Using a processor instead of op-amps lets us TWEAK the parameters and BEHAVIOR far more easily than REWIRING different resistors and capacitors to change the gain and the TRANSFER FUNCTIONS of the neural nodes during the R&D stages.
But if we already know the Transfer Functions we can just build the system using ENTIRELY electrical components with NO SOFTWARE WHATSOEVER.
But to do that with the tremendous number of Neurons and interconnections needed to reach the required Critical Mass might be quite a daunting task.
It would be an engineering problem.
That is why it is often easier to just SIMULATE a NN on a computer.
Simulate or emulate?
BUT... BUT... have a look at the LAST paragraph in
my post to see why a normally functioning NN might not even be enough.
A 'normally functioning' NN? If neuronal cross-firing is relevant and/or necessary to the emulation, it too could be implemented.
Precisely.....
We would program such a system indirectly, feeding it relevant input so that it could learn and organise itself over time, much as a biological brain does. The direct programming would be a layer of abstraction below, and would involve programming the way the system learns and organises itself. If the individual neurons were being emulated in software rather than hardware, their programming would be a level of abstraction below that.
ETA:

It seems you have deleted your post after I quoted it.

Sorry... but I liked what you said. Why did you change your mind?
Ennui. I couldn't see it making any difference, given the entrenched opinions here. But seeing as you responded, and I've got some spare time...
I can see that simulating or emulating a power station in a computer will not generate real power, but I think it's a red herring. Information processing is qualitatively different - it is functionally indifferent to abstraction. A physical computer can run an OS running a program that runs
Conway's Game of Life, itself running a Universal Turing Machine (UTM) implementation, which can run arbitrary TM programs. A cellular automaton may be a clumsy way to implement a UTM, but the UTM it runs is a 'real' programmable computer - it may be virtual and several levels of abstraction from the processor hardware, but it can really process data.
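To make the 'it's all just data processing' point concrete, here is a minimal Game of Life step in Python - a toy sketch of the cellular automaton itself, not the UTM construction built on top of it; the function name and representation are my own:

```python
from collections import Counter

# Minimal Conway's Game of Life step: the automaton is nothing but
# deterministic data processing on a set of live cells.
def life_step(live):
    """live: set of (x, y) live cells; returns the next generation."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker' oscillates with period 2 - the same outputs for the same
# inputs, whatever hardware (or emulated hardware) runs the interpreter.
blinker = {(0, 1), (1, 1), (2, 1)}
next_gen = life_step(blinker)  # -> {(1, 0), (1, 1), (1, 2)}
```

However many interpreter layers sit between this loop and the silicon, the rule it computes is identical - which is the sense in which the virtual machine 'really' processes data.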
Where I worked, we had a big Linux server box that ran multiple virtual machines at once. Each virtual machine could be configured to run a different OS. Users could run their OS-specific applications on it as if on a native machine. They could even run their favourite OS on a suitable microprocessor emulation running on this server. Were these
real Windows, DOS, Unix, etc., machines? Real microprocessors? No, they were virtual emulations - but they behaved exactly like the 'real' ones. They gave exactly the same outputs for any given inputs as the 'real' thing.
The way it looks to me at present is:
If, as the evidence suggests, a neuron is a sophisticated information processor, taking multiple input signal streams and outputting a result signal stream, we can, in theory (and probably in practice), emulate its substantive functionality with a neural processor (e.g. a chip like IBM's neural processor, but more sophisticated).
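The 'neuron as information processor' idea - multiple input signal streams in, one result stream out - can be sketched as a toy leaky integrate-and-fire unit. This is a crude caricature for illustration only; the function name, weights, and parameters are all invented:

```python
# Toy leaky integrate-and-fire neuron: several input spike streams in,
# one output spike stream out. A caricature, not biophysics.
def lif_neuron(input_streams, weights, threshold=1.0, leak=0.9):
    """input_streams: list of equal-length spike trains (0/1 per step).
    Returns the output spike train."""
    potential = 0.0
    out = []
    for spikes_at_t in zip(*input_streams):
        # Decay the membrane potential, then add the weighted inputs.
        potential = potential * leak + sum(
            w * s for w, s in zip(weights, spikes_at_t))
        if potential >= threshold:
            out.append(1)      # fire...
            potential = 0.0    # ...and reset
        else:
            out.append(0)
    return out

# Two input streams; the unit fires only when enough weighted input
# accumulates within the leak window.
out = lif_neuron([[1, 0, 1, 1], [0, 1, 1, 0]], weights=[0.6, 0.6])
# -> [0, 1, 1, 0]
```

A real neural processor would implement something far richer, but the input/output shape of the job - streams in, stream out - is the same.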
If, as the evidence suggests, brain function is a result of the signal processing of many neurons with multiple connections between them, we can, in theory, emulate brain function using multiple neural processors connected in a similar way (with appropriate cross-talk if necessary). [We would probably need to emulate the brain-body neural interface too, i.e. give it sensors and effectors].
If, as the evidence suggests, consciousness is a result of certain aspects of the brain function described above, then, in theory, the emulation could support consciousness.
Each neural processor can itself be emulated in software, and multiple neural processors and their interactions can be emulated in software; i.e. an entire subsystem of the brain can be replaced by a 'black box' subsystem emulation.
In theory,
all the neural processors in a brain emulation, and their interactions, can be emulated in software using a single (very fast) processor, e.g. with multi-tasking, memory partitioning, and appropriate I/O from/to the sensor/effector net.
Given the above, it seems to follow that, in theory, consciousness could be supported on such a single processor software emulation of a brain.
I'm curious to know which of the above step(s) are considered problematic by those who don't agree, and why.