See which of these you disagree with:
1) We're given (in Paul's OP) that the conscious process is a running program.
2) A running program is composed of discrete steps.
3) The output of the program could include reports of its conscious experience.
4) As long as the input and the step execution order don't change, the output of that program doesn't change.
5) So we would get the same reports of its conscious experience at any non-zero execution speed (if the output is played back to us at full speed); the sketch below illustrates this.
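To make premises 4 and 5 concrete, here is a minimal, hypothetical Python sketch (mine, not Paul's; the function and step names are illustrative only): inserting arbitrary pauses between the discrete steps of a deterministic program leaves its output, including any "reports", byte-for-byte unchanged.

```python
import time

def run_program(steps, state, delay=0.0):
    """Execute discrete steps in order; optionally sleep between steps."""
    outputs = []
    for step in steps:
        state, out = step(state)
        outputs.append(out)
        if delay:
            time.sleep(delay)  # slowing execution changes nothing below
    return outputs

# Toy "steps": each transforms the state and emits a report string.
steps = [
    lambda s: (s + 1, f"I perceive {s + 1}"),
    lambda s: (s * 2, f"I report experiencing {s * 2}"),
]

fast = run_program(steps, 0)             # full speed
slow = run_program(steps, 0, delay=0.5)  # "glacial" speed
assert fast == slow  # identical reports regardless of execution speed
```

Nothing in this toy depends on wall-clock time, which is exactly the point of premise 4: determinism over inputs and step order, not over timing.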
Well, reports don't matter since we can program a non-conscious robot to report anything.
The problem here is what's missing.
No matter what software is used, there must be hardware capable of properly implementing it to get the desired result.
In this case, the desired result is not the result of a calculation or series of logical/symbolic steps. Rather, the desired result is a non-symbolic real-world phenomenon: actual conscious awareness.
(In other words, we're asking the machine to do something, to perform an action, not to produce a symbolic or logical or mathematical result.)
We have one sure model of how this is done: the human brain.
For the question to be answerable at all, we must assume that our machine produces the real-world phenomenon of consciousness in a manner analogous to the one used by the human brain. If we propose a completely unknown mechanism, the only answer to "What would happen if...?" is "Nobody knows".
In the brain, research clearly demonstrates that the generation of consciousness is a distinct function. (It is not something that merely emerges from overall qualities of the system, like the whiteness of clouds.) Furthermore, research strongly suggests that the mechanisms carrying out this function require a sustained real-time coordination of coherent activity across the brain.
Therefore, we have to ask what happens to our robot's consciousness when its brain-like activity becomes very spread out over time. Will there be sufficient real-time coherence for conscious processing to ignite and maintain itself?
We cannot yet answer "yes" to this question with any certainty.
When I run the Glacial Axon Machine thought experiment in my head, I cannot come up with any point in the experiment when it seems conscious experience would ignite.
I might be wrong about that.
But no one can claim to know that consciousness would be possible under those conditions.