You seem to have a very specific idea about how computers behave. I'd like you to tell us what that is, because as far as I can see, your assertion makes no sense.
My assertion was simply that a computer running a simulation of X behaves essentially the same as it does when running a simulation of Y ("like a computer"), and that the behavior of the actual systems X and Y is irrelevant to the behavior of the computer. The simulation is symbolic and must be "read" in some manner -- e.g. as displayed patterns of ink or light, the vibration of a speaker cone, etc. -- in order for a reader to make the association between the behavior of the computer and the behavior of a system like X or Y.
We can state that more broadly:
The media with which a representation of X is constructed retain their own physical characteristics and real-world behavior throughout the representation, and are not altered by the characteristics and behavior of an actual X, because the representation's relationship to X is symbolic, must be interpreted by an observer, and is therefore imaginary rather than objectively real.
Which means that the behavior of whatever you're using to portray the representation is necessarily distinct from the behavior of the system being represented. Whatever they have in common, and however they differ, this does not change by dint of the representation being created.
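A toy sketch may make this concrete (the update rules and labels here are hypothetical, chosen only for illustration): the computer performs the same kind of arithmetic whether we call the numbers "bridge deflection" or "neuron voltage"; the association with either real system exists only in how a reader interprets the output.

```python
# A toy simulator: it just iterates a numeric update rule.
# Whether the state "is" a bridge's oscillation or a neuron's
# membrane potential is purely the reader's interpretation.

def simulate(state, step, n):
    """Apply the same symbolic update n times, whatever 'state' names."""
    for _ in range(n):
        state = step(state)
    return state

# Two hypothetical update rules -- identical in kind to the machine:
bridge_step = lambda x: 0.9 * x + 1.0   # read x as "deflection in mm"
neuron_step = lambda v: 0.9 * v + 1.0   # read v as "potential in mV"

print(simulate(0.0, bridge_step, 50))
print(simulate(0.0, neuron_step, 50))
# The computer's behavior is the same in both runs; only the
# labels a reader attaches to the printed numbers differ.
```

In both runs the machine does nothing but fetch, multiply, add, and store; "bridge" and "neuron" never enter the physical process.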
If a machine is conscious, then it will be conscious when running a simulation of anything (a bridge, a war, a brain...) or when not running any simulation at all. If a machine is not conscious, then it will not be conscious when running a simulation of anything, or when not running a simulation.
This description is consistent with respect to our two known frames of reference: physical reality and our own imaginations. And it does not depend on the particular features and behaviors of any specific system (being portrayed) or any medium of representation.
And btw, this is precisely the view of the scientists who are in fact building a neuron-level simulation of the human brain.
So once we begin to speak of the hypothetical "perspective" of anything "in the simulation", we must realize that our frame of reference is entirely our imagination.
That's because the "world of the simulation" is what the reader interprets from the symbolic output of the simulator; it has no impact on (and therefore no presence in) the media of the simulation, which are what exist in physical reality.
Such speculation may be useful, but to transfer the frame anywhere outside the imagination is clearly an error.
That is the assertion.