We can induce a feeling of happiness or sadness in humans by manipulation of the brain. Why would it be any different for consciousness?
I've never claimed it would be any different.
Why can a computer not be aware of itself?
I've never claimed a computer couldn't be self-aware.
So? We are debating whether a complex computer can be conscious.
You might be debating that, but I'm not.
If there is no basis, except the assumption that humans are like me, to think that other humans are conscious, then what is stopping a computer from being conscious?
Nothing; but as a computer is less like me than other humans are, my assumption that it is conscious because it is similar to me is weaker.
We can induce various 'conscious' states in people through manipulation of the brain. We are well on our way to understanding what creates consciousness.
Maybe, but we're not there yet.
Until we know more about how consciousness is created, we can't tell if a system is conscious by examining the internals of the system.
So why the assumption that there is a difference?
I haven't assumed that, but the possibility that there might be a difference means that we can't know for certain if the computer is actually conscious.
If it does not arise out of consciousness, then why do brain-damaged patients exhibit 'unconsciousness' (not being conscious)?
Just because one complex system (the brain) creates consciousness doesn't mean that any complex system does the same. Complexity might be a necessary, but not sufficient, factor in creating consciousness, so we cannot assume that any sufficiently complex system must be conscious.
You are begging the question here by saying "we can never know for certain". So? We can never know anything for certain, so why does it matter?
Because we're having a philosophical discussion about the nature of consciousness. (And I've not said that we can never know for certain; I've only said that with the understanding of consciousness we currently have, we can't know for certain.)