Sorry, guys, for the delay. This thread moves fast, so I fear my response may be old news already; however:
Yes, you can certainly treat the questions separately, to see what is and isn't being claimed for consciousness, but they are related. If it's impossible to implement the simulation, then physical reality cannot be simulated, and the question of whether beings in a perfect simulation would be conscious becomes moot.
For me, interpretation, and what it means in this context of being conscious (we're familiar with the top-down sort, but the bottom-up sort needs a lot of explanation), is the question, the nail on which the whole frame depends: either we hang our pixellated picture of consciousness from it, or drive it into the computationalist coffin (so to speak).
sim-blobru sim-spits, sim-scuffs the sim-dirt, sim-thrusts his sim-hands in his sim-overall pockets and takes a sim-sidelong look at sim-nothing; sim-offers sim-Ich_wasp some sim-chaw.
That meaning question again. Interaction as imposing meaning may be the key to understanding bottom-up meaning. There is certainly a sense in which interactions impose meaning, as long as they are distinguishable. For example, in my simple universe of X & x, if there is a single entity with two states, it has nothing to interact with. But if we add a second entity, and assume interactions between the two entities depend on the state of each, meaning begins to emerge.
For example, assume the two entities sometimes interact by colliding, and that the outcomes differ according to the state each is in. Symbolically, something like: x+x = X+X; X+x = x+X; x+X = x+X; X+X = x+x+x (in the first two collisions the entities change state; the third does nothing; the fourth changes both states and creates a new entity). Remember, in the single-entity universe there were no interactions, so there was no way to distinguish x expanding to X from X shrinking to x. Now, having assigned these very basic properties to our two entities, we have a way to distinguish between them. And if we run a program of this universe, the history of the program, and the switching which underlies it, may occur in a way which is merely isomorphic to the simple interaction rules we assumed at the outset. In a sense, then, by running the simulation, the history of the simulation imposes bottom-up the interpretation we assumed top-down. And in that way, interaction may function as interpretation, and meaning may emerge from complexity.
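To make that less hand-wavy, here's a minimal sketch of one way to run this toy universe (Python, purely for illustration; the scheme of colliding one random ordered pair per step, the starting population of one x and one X, and the fixed seed are my assumptions, not part of the rules above):

```python
import random

# Collision rules for the toy X/x universe described above.
# Keys are ordered pairs (left, right); values are the resulting entities.
# Note the order-dependence: ('X', 'x') behaves differently from ('x', 'X').
RULES = {
    ('x', 'x'): ('X', 'X'),       # both entities change state
    ('X', 'x'): ('x', 'X'),       # states swap
    ('x', 'X'): ('x', 'X'),       # nothing happens
    ('X', 'X'): ('x', 'x', 'x'),  # both change state, one new entity appears
}

def step(universe, rng):
    """Pick two distinct entities at random, collide them, apply the rule."""
    i, j = rng.sample(range(len(universe)), 2)  # an ordered pair
    outcome = RULES[(universe[i], universe[j])]
    # Remove the two colliding entities (higher index first, so the
    # lower index stays valid) and append whatever the rule produced.
    for k in sorted((i, j), reverse=True):
        del universe[k]
    universe.extend(outcome)

rng = random.Random(0)    # fixed seed so a run is repeatable
universe = ['x', 'X']     # the two-entity starting universe
for t in range(10):
    step(universe, rng)
    print(t, ''.join(sorted(universe)))
```

The point of the sketch is only this: nothing in the program "knows" what X or x means; the distinction between the two states shows up purely in how the history of collisions unfolds, which is the bottom-up sense of interpretation I'm gesturing at.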
That's a very quick and dirty discussion of the principle (not sure how well my example works as it's just off the top of my head, but hopefully it fits well enough with Wolfram's cellular automaton principle mentioned by Myriad elsewhere); it has some fascinating implications and seems to overlap with logical recursion and even quantum theory (though it's easy to get carried away with superficial resemblance), but I'm out of my depth and late for dinner, so I'll leave it there for digestion's sake.
