Piggy, you never responded to this post of mine.
Sorry, I don't know why I didn't see it....
I would like you to explain how, in I(O), I(A) causes I(B) without, in S(O), S(A) causing S(B). That is, how can an information overlay contribute to causality?
My point is that in all cases it IS actual machine parts that are involved in the causal sequence, even if a given observer needs an informational overlay to see it.
But yes, I am speaking about the machine parts. The transistors of the computer.
I'm not claiming that an informational overlay contributes to causality.
I'm observing that it doesn't.
If it did, it would muck things up. It's possible to do that, you know. If the parts (for instance, research subjects) do incorporate the informational overlay (for example, come to find out exactly what's being measured), then you've introduced such a causality, and it can complicate things to the point of having to scrap it all.
I agree with you that O(A)->O(B) in both systems, or else it's not a simulation.
But it's important to keep in mind that we're talking about discrete systems, one of which might be purely imaginary to begin with (if you're simulating a fantasy world, for instance), in other words, a state of someone's brain.
We can think of these two systems as a pair of identical twins, Pete and Repeat, and Repeat has been trained to behave exactly like Pete, even when they're apart.
As long as Pete doesn't go through anything that changes the way he acts, we'll be able to look at Repeat and know what Pete is doing.
But let's take a look at that claim.
On the surface, it seems like we're claiming a real connection between Pete and Repeat. But this doesn't exist. Pete and Repeat are each behaving according to their own physics, they've just been set off into similar patterns.
The real connection is in my brain, which knows that Repeat and Pete are behaving in sync in one of many possible ways, and that therefore I can look at Repeat and know something about Pete.
And I do mean real. It exists as a physical shape in my brain.
In fact, this is what enables me to look at Pete and Repeat and conclude that something's gone wrong with Repeat's behavior.
But without that bit of knowledge that can only exist in the brain of the programmer and user -- which is to say, the knowledge that Repeat is supposed to act like Pete, and not the other way around, or that it's all just a freakish coincidence, or that they're both acting like someone else -- the similarities between certain aspects of these two systems aren't anything more than that.
This is why Repeat (or anything else) can only be an information processor if used as one, not by virtue of physical design. "Info processor" is an imaginary rather than real class of object, which means a thing is one only if people intend it to be one or use it as one.
So we're right back to the brain of the programmer and user. That's the only location of the connection between the two systems which makes one a simulation of the other.
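The Pete/Repeat picture can be caricatured in a few lines of Python (the class and update rule here are my own illustrative inventions, not anyone's actual model): each system evolves purely by its own internal dynamics, and the only "connection" between them is a comparison performed by a third party.

```python
class System:
    """Evolves purely by its own internal rule; knows nothing of any twin."""
    def __init__(self, state):
        self.state = state

    def tick(self):
        self.state = (self.state * 3 + 1) % 17  # arbitrary internal dynamics

# Two causally independent systems, merely set off into similar patterns.
pete, repeat = System(5), System(5)

for _ in range(10):
    pete.tick()
    repeat.tick()
    # The "real connection" lives only here, in the observer's check:
    assert pete.state == repeat.state
```

Delete the observer's check and nothing about either system changes; the correspondence was never in the systems themselves.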
So now answer this:
If there is a simulation of a neural network running on a computer, and in the real neural network a neuron fires due to the integration of signals from other neurons, is there not an isomorphic causal sequence that takes place in the transistors of the computer? And isn't that causal sequence in the transistors of the computer just as "real" as the corresponding sequence in the actual neural network? Meaning, isn't something like "voltage from transistor X caused transistor Y to switch" just as "real" as whatever happens in the neural network?
Yeah, the voltage changes are as real as the neural firings.
But the similarity between these changes is only significant if you know that one is supposed to symbolize the other.
That's so important, it bears repeating:
The similarity between these changes is only significant if you know that one is supposed to symbolize the other.
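A minimal sketch makes this concrete (Python; names like `membrane_potential` and `threshold` are illustrative assumptions, not anyone's actual model): the hardware performs the same additions and comparisons whether or not an observer reads them as a neuron firing.

```python
# Minimal integrate-and-fire sketch. The variable names are the
# "informational overlay": strip them away and all that remains is
# arithmetic the machine performs regardless of interpretation.

def step(membrane_potential, incoming_signals, threshold=1.0):
    """Integrate weighted inputs; 'fire' (reset) if the threshold is crossed."""
    membrane_potential += sum(incoming_signals)
    fired = membrane_potential >= threshold
    if fired:
        membrane_potential = 0.0  # reset after the "spike"
    return membrane_potential, fired

# The same state transitions occur whether we call this a neuron firing
# or just a register being added to and compared.
v, fired = step(0.4, [0.3, 0.4])
print(v, fired)  # 0.0 True: the inputs pushed the potential past threshold
```

Nothing in the execution distinguishes "simulating a neuron" from "doing arithmetic"; that distinction exists only in the reader of the code.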
And because the symbolic changes, which is to say the logical computations or "information processing", depend on this understanding of the similarity, it makes no sense to talk of info processing (in this sense) or logical computations without an observer.
It only makes sense to talk of physical computations without an observer.
And it only makes sense to talk of logical computations as imaginary (which is to say, changes in brain states).