Now you're also committing entification.
If you're looking to replicate in spacetime the result of an action in spacetime (which is what consciousness must be), then you need to reproduce some sort of direct physical cause in spacetime.
If you run a simulation of a racecar, none of the actions of the racecar happen in reality. That only happens if you build a model racecar.
The action/behavior of conscious awareness won't happen to the machine just because it runs a simulation (however accurate) of a brain, because its physical actions haven't really changed.
If it was conscious before, it'll be conscious when it runs the simulation. If it wasn't, it won't be.
To answer your post above and this one -- this is what I see as the problem in communication:
You seem to imply that what the others are proposing is a simulation programmed with a bunch of if-then statements to carry out actions that look like consciousness, and that such a program would not have created consciousness.
That may or may not be the case, but it is certainly not what I hear them saying; then again, perhaps the error is on my part. What I hear is that if we were to program a world down to every atom, and also program the rules of interaction that are present in the 'real world', we could then see something that is indistinguishable from consciousness. I am not sure that position is wrong; I can't really find a clear problem with it.
Just to clarify, I am more in the physicalist camp (as far as bringing this about): I think that to end up with an action you need movement, so the easiest way to design a conscious machine is to link chips (or whatever medium) in a way that recreates the physical actions of neurons. That is certainly the most straightforward way of looking at the issue.
But I am not at all convinced that programming could not do it, precisely because consciousness is not a thing.
The counterexamples everyone uses to shoot down the idea are all things -- like the orange or the race car that you mention. Consciousness is not a thing, though; it is an action. Sure, we couldn't touch a simulated race car, and a simulated race car does nothing in the real world, but that is a trivial issue. Within the simulated world, if the simulation is robust enough, the car still moves. With a good enough simulation, its movement should depend on the same (simulated) factors that rule movement in the 'real world'. So, it does not move without simulated gas, simulated oxygen, etc. A bad simulation where you just have an if-then statement (if person clicks on car, make car move) doesn't help, and it is not the kind of simulation I hear them talking about. Consciousness is also not a 'thing' but an action, so I don't see why the same would not apply to it as to the movement of the simulated race car.
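To make the contrast concrete, here is a minimal sketch of my own (every name and number in it is invented for illustration): a 'bad simulation' where the car moves simply because a click triggers it, versus one where its movement depends on simulated fuel, force, and mass.

```python
class ShallowCar:
    """The 'bad simulation': motion is just a scripted response to a trigger."""
    def __init__(self):
        self.position = 0.0

    def on_click(self):
        self.position += 10.0  # moves because we said so, not because of any simulated physics


class SimulatedCar:
    """Motion depends on simulated factors: no simulated fuel, no movement."""
    def __init__(self, mass=1000.0, fuel=5.0):
        self.mass = mass          # kg
        self.fuel = fuel          # arbitrary fuel units
        self.velocity = 0.0       # m/s
        self.position = 0.0       # m

    def step(self, throttle, dt=0.1):
        if self.fuel <= 0.0:
            force = 0.0                    # the simulated engine can't push without simulated gas
        else:
            force = throttle * 4000.0      # N, a stand-in for engine output
            self.fuel -= throttle * dt * 0.1
        acceleration = force / self.mass   # F = ma, applied inside the simulation
        self.velocity += acceleration * dt
        self.position += self.velocity * dt


if __name__ == "__main__":
    car = SimulatedCar(fuel=0.05)
    for _ in range(100):
        car.step(throttle=1.0)
    # Once the simulated fuel runs out, the car only coasts; its motion was governed
    # by the simulated rules, not by a direct "make it move" instruction.
    print(f"position={car.position:.1f} m, fuel left={max(car.fuel, 0.0):.3f}")
```

The second car only "does" anything within its simulated world, but what it does there follows from the simulated factors rather than from a shortcut, which is the kind of simulation I take the others to be describing.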
So, the obvious counters in this discussion, from my limited experience, are Deep Blue and the Chinese Room argument (neither of which concerns consciousness so much as 'understanding'). I have gone over the Chinese Room argument several times and find myself less convinced by Searle's reasoning every time I encounter it. I'm not sure what we mean by understanding Chinese except that someone can use it properly. I know one thing that is missing from his example, and that he uses to sway opinion: the 'aha' moment. A system that simply manipulates symbols never has a feeling of understanding anything, because feeling is not a part of how it is programmed. But is that what understanding is? Is it proper use combined with a feeling that you have used the language properly? If that is the case, what's the problem with a bigger program that includes that aspect as well?
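For what it's worth, here is a toy sketch (my own hedged illustration, not anyone's actual proposal, with the rulebook entries invented) of the kind of brute-force symbol manipulation the Chinese Room imagines: input symbols matched to output symbols by a rulebook, with no 'aha' moment anywhere in the loop.

```python
# A tiny "rulebook" mapping incoming strings of Chinese symbols to replies.
RULEBOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我没有名字",    # "What is your name?" -> "I have no name"
}

def room(symbols: str) -> str:
    # The operator just matches squiggles to squiggles, as Searle describes;
    # nothing in here feels like it has understood anything.
    return RULEBOOK.get(symbols, "对不起，我不明白")  # fallback: "Sorry, I don't understand"

print(room("你好吗"))
```

The question I keep coming back to is whether the absence of anything like a feeling inside that loop is really what settles whether the system understands, or whether it only settles that the system doesn't feel.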
I know very well that brute-force symbol manipulation of the type proposed in the Chinese Room is not the way our brains deal with language; but I am less certain that we can clearly separate this way of doing things from the idea that such a system understands. What does understanding mean? The same is true of Deep Blue. Its brute-force calculations do not resemble the human brain's 'chunking' of game information, but can we really say that a computer that can reliably win at chess doesn't understand the game? On what basis? That it isn't the way we do it? Is this all about humanocentrism?
When it comes to creating a simulation that is built from the very bottom up, I would assume that to create a human-like consciousness, a programmer would apply the same rules as the human brain uses. Why would simple if-then statements, if at the lowest level the 'particles' act like atoms in the real world, not allow us to recreate what the brain does (theoretically, if we had that incredible level of knowledge)? I'm afraid that I don't understand the objection.
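As a hedged analogy of my own (not one anyone raised above), Conway's Game of Life shows how bare if-then rules at the lowest level can produce structured behaviour at a higher level that the rules themselves never mention:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid."""
    # Count how many live neighbours every cell adjacent to a live cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire "physics": a cell is alive next tick if it has exactly 3 live
    # neighbours, or exactly 2 and was already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A glider: it travels across the grid even though no rule mentions "glider".
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    world = step(world)
print(sorted(world))  # the same shape, shifted diagonally by one cell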