Go read about NNs, about computer programming, and about how the brain works (as far as we know); compare that to the way NNs work, contrast it with the way CPUs and computer programs work, and then you will see why the highlighted statement is wrong.
Heh, I think being a professional A.I. programmer with more than the equivalent of a molecular biology minor qualifies me as "knowing enough" to discuss this issue without needing to "go read about NNs and computer programming and also about how the brain works."
So, having run the awkward and unnecessary credential gauntlet, can I move on to the actual question?
If you look at the organization of transistors in a "neuron emulator," for instance -- which is what I would call a set of transistors that are part of a network physically arranged as a NN -- it is clear which transistor plays what part in the integration of the input, and furthermore exactly what that input is, since you can trace the circuit back to all the "neuron emulators" upstream (or downstream, as it were).
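To make the traceability point concrete, here is an illustrative sketch (my own toy construction, not anyone's actual hardware): a threshold unit whose inputs are explicit references to upstream units, so every contribution to the integration can be traced back to its source.

```python
# Toy "neuron emulator": a threshold unit whose inputs are explicit
# (upstream unit, weight) pairs, so the integration is fully traceable.

class Source:
    """A stand-in for an upstream unit with a fixed output."""
    def __init__(self, firing):
        self.firing = firing

class NeuronEmulator:
    def __init__(self, name, inputs):
        # inputs: list of (upstream unit, weight) pairs -- the "wiring"
        self.name = name
        self.inputs = inputs
        self.firing = 0

    def integrate(self, threshold=1.0):
        # weighted sum of upstream outputs; fire if it crosses the threshold
        total = sum(src.firing * w for src, w in self.inputs)
        self.firing = 1 if total >= threshold else 0
        return self.firing

a, b = Source(1), Source(1)
n = NeuronEmulator("n1", [(a, 0.6), (b, 0.6)])
n.integrate()
# Tracing the circuit is just reading n.inputs: unit a contributed
# 1 * 0.6 and unit b contributed 1 * 0.6, so 1.2 >= 1.0 and n fires.
```

Each `(upstream, weight)` pair in `n.inputs` is the software analogue of a physical wire: you can point at it and say exactly which signal it carries.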
However, it is just as clear which transistor plays what part in a computer; you just have to think a little harder. The fact that the transistors are now distributed and reorganized -- so that one subset performs all the integration while another subset stores the results of the integrations -- doesn't change that: you can still point to a given transistor at a given step of the algorithm and say "that transistor is doing this <>", and the explanation can be spot on.
In both cases, you have a set of transistors that change state deterministically based on the previous state and the way the network is laid out, and at any given state you can look at a transistor in the physical NN and point to its exact correlate in the circuit running the simulated NN. In other words, the sequences of state transitions are isomorphic.
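The isomorphism claim can be sketched in a few lines (again, an illustrative toy, with weights and thresholds I made up): the same deterministic update rule implemented two ways -- once with each unit "wired" to its inputs explicitly, once as a generic loop over a stored weight table -- steps through identical state sequences.

```python
# The same 3-neuron threshold network, implemented two ways.
# weights[i][j] = strength of the connection from neuron j to neuron i
WEIGHTS = [[0, 1, -1],
           [1, 0, 1],
           [-1, 1, 0]]
THRESHOLD = 1

def step_physical(state):
    # "physical" style: one emulator per neuron, integrating its wired inputs
    def neuron(wired_inputs):
        total = sum(w * s for w, s in wired_inputs)
        return 1 if total >= THRESHOLD else 0
    return [neuron(zip(WEIGHTS[i], state)) for i in range(3)]

def step_simulated(state):
    # "simulated" style: the same update as a generic sweep over a weight table
    return [1 if sum(WEIGHTS[i][j] * state[j] for j in range(3)) >= THRESHOLD
            else 0
            for i in range(3)]

state_a = state_b = [1, 0, 1]
for _ in range(5):
    state_a = step_physical(state_a)
    state_b = step_simulated(state_b)
    assert state_a == state_b  # the state sequences match step for step
```

Both functions are deterministic and compute the same update, so at every step each unit's state in one implementation has an exact correlate in the other -- which is all the isomorphism claim requires.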
Now, if you think there is something special about the actual physical topology of a circuit, then I agree that is a genuine difference, and it would matter. But I don't subscribe to that viewpoint; my opinion is that everything important is in the sequence of deterministic state transitions, and the physical topology of the network is irrelevant as long as it supports the "same" (isomorphic) sequence of state transitions.
Also, note that it is easy to demonstrate that the sequences are isomorphic. I could hook up an LED to each artificial neuron in the physical NN so that the LED lights up when the neuron integrates past its firing threshold, and I could likewise hook up a computer monitor so that the simulated NN is displayed as a two-dimensional diagram where a region of the screen lights up when the corresponding simulated neuron integrates past its firing threshold. If you superimposed the two, the lights on the screen would match the lights from the LEDs -- whether a human was there to see it or not.