The statement "consciousness is not a thing you observe" is not saying that you don't observe consciousness, but that consciousness can't really be considered to be a "thing". The contents of consciousness include things you can observe, like computer screens.
I see. I disagree, though.
This is the nub of our disagreement. I can see no reason why we can't build a machine which is capable of accurately simulating all the computations carried out by a brain without being conscious. Where is the contradiction here? I have no reason to believe a car alarm or a calculator is conscious (has any sort of internal awareness). I have no reason to believe that a more powerful computer needs to have any sort of internal awareness either, even one capable of simulating the computations carried out by brains. Why do you think such a thing is impossible? If you wanted to convince me it was impossible, how would you go about it?
The properties of brains which enable them to carry out complex computations are properties to do with the complex structure of the brain itself. This complex structure is physical, and therefore could theoretically be modelled on a computer (unless Penrose is correct, in which case the brain is a sort of quantum computer which could not be modelled on a computer which didn't mimic those quantum mechanical properties, but we can ignore this possibility for the moment). You are telling me that if we simulate the complexity and information-processing capacity, it logically follows that we must also simulate consciousness. Why do you think this logically follows? I think you must be basing this opinion on some other premise you are introducing with which I do not agree, because I see no logical necessity here. It may well be logically necessary if materialism is true, but we can't start this discussion with that premise, because then you would be begging the question.
The contradiction I see is that the computer has all the cognitive functions of a human but is not conscious. Either there is a contradiction or cognitive functions are not enough for something to be conscious. When a computer behaves like a human I don't see any basis to label the same behaviour differently when they appear to be so similar. But note that I'm not saying we are simulating consciousness. I don't view it as a thing that can be simulated because I don't see our brains simulating it either. I'm observing this post I'm replying to and we both call that conscious behaviour. But that doesn't necessarily mean I have consciousness that must be simulated in order to replicate all the public and private behaviours involved in replying to you. I'm just saying that the same label 'conscious' must be used to describe both human and computer when they behave alike. Or neither.
We don't have to assume materialism is true.
Nothing. The difference is between a machine, like a car alarm, which responds zombie-like to external stimuli without being internally aware of anything at all, and something like a brain which carries out similar computations based on similar sense organs/devices, but which is actually internally aware that something is going on. It's the difference between mere response to stimulus and an internal awareness of the stimulus, together with the perception that the action taken was a free will choice. (Whether this is an illusion or not is another question; all I am saying is that we internally sense that we have made a free will decision, whereas the car alarm does not, and neither does the computer in my example.)
I'm not sure what you mean by this, because I don't understand the difference between 'being aware of a stimulus' and 'being internally aware of a stimulus'. But anyway, I see being aware of a stimulus as a response. And it's possible to have other responses without the 'being aware' response. We do things sometimes without knowing why we do them.
I agree that the car alarm does not sense that it's making a free will decision. It's not programmed to handle the concept. But I posit that since the computer in your example is programmed to behave as a human, it must sense that it is making a 'free will decision', because if it doesn't then it isn't programmed to behave as a human.
Then I can't accept your definition of the word "observe". You are just talking about the capacity to respond to external stimuli and I can think of numerous examples of things which are capable of this but which most people do not believe are conscious.
We can't have the word "observe" meaning both what an unconscious machine does and what a conscious being does. We are talking about two completely different things. One is to do with sense equipment and information processing; the other is to do with subjective experience of the events which are occurring.
I don't see what the other type of 'observation' contributes to behaviour when the hypothetical computer in your example behaves like a human. That's why I don't see the need to use the word 'observe' differently when we talk about humans or computers.