Not relevant, I'm afraid. We were asked to explicitly assume that a conscious robot existed, ergo that consciousness is computable.
Which is equivalent to assuming that the human brain -- or at least its consciousness-creating parts -- can be fully and faithfully implemented in a Turing machine.
If you want to argue that consciousness is not computable, you're in good company. But it's not the discussion you're in right now, and your argument is out of place.
But we already know that consciousness is computable (accepting y'all's definition, which is fine by me) because our brains are conscious and they're computable.
However, when you say that consciousness can be "implemented" in a TM, are you claiming equivalence or identity?
I assume you could imagine a theoretical TM that was the computational equivalent of the system that is my car. This TM would symbolically represent every atom, and given the right inputs it would compute all the behavior of my car starting up and driving.
But that would be entirely symbolic. There would be no real "driving down the road" going on.
On the other hand, I could build a scale model out of different materials, using an electric motor instead of a combustion engine, and actually run that sucker around. As long as it did the macro-level jobs the same -- turned the wheels and such -- it wouldn't matter that a TM that's computationally equivalent to it would be different from the TM that's computationally equivalent to my car.
Same situation with a conscious human with a wet brain and a conscious robot with a computer brain.
If we build TMs that are computationally equivalent to each of these, they aren't exactly the same. But neither TM instantiates consciousness.
Let's say we run a computer simulation of my car driving down the road. As I said, there's no "driving down the road" going on in the real world, but if a human views the simulation, it reminds us of it, and we can tweak parameters to see what would happen -- we could simulate lowering the idle speed, for example, to see when it stalls.
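That idle-speed experiment can be sketched as a toy simulation. Everything here -- the function name, the RPM numbers, the stall rule -- is invented for illustration; the point is only that the stall behavior exists symbolically, as a computed value, not as anything happening on a real road.

```python
# Hypothetical toy model (all names and numbers invented): a symbolic
# "car" whose stall point we find by sweeping the idle speed downward.

def stalls(idle_rpm: int, load_rpm_drop: int = 300, stall_threshold: int = 400) -> bool:
    """Return True if the simulated engine stalls under load.

    The engine 'stalls' when idle RPM minus the load-induced drop
    falls below the stall threshold. Purely symbolic: nothing drives.
    """
    return idle_rpm - load_rpm_drop < stall_threshold

# Lower the idle speed step by step to see when the simulated car stalls.
for idle in (900, 800, 700, 600):
    print(idle, "stalls" if stalls(idle) else "runs")
```

A cat in the room is in no danger from any of these print statements, which is the whole point.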
But if all the people leave the room, and there's just a cat and dog there, they're in no danger of getting hit by the car, and they don't perceive any car.
On the other hand, if I have my electric scale model, even though it's not computationally equivalent to my car, it does actually drive like my car, and it can hit the dog or cat, and they'll react to it.
So our hypothetical conscious robot is like our electric model car. Because it does the same things on a macro level that our brains do when they generate consciousness (albeit using different materials, and even if it's not doing exactly the same thing at smaller scales), it, too, generates consciousness in reality.
But the TM is only equivalent. It can describe what happens when the brain is, say, consciously aware of seeing a green triangle. But we have no reason to believe that this makes the TM apparatus self-aware.