Yep.
That's why I say: ask whether building a brain out of neurons would result in consciousness.
I have a suspicion it isn't actually the neuron-transistor swap that holds people up, but rather a fundamental doubt that humans should be able to understand their own consciousness. Or in other words, whether or not such knowledge should be restricted to God.
Well no, I have my doubts about that too, but it is nothing religious.
I can put into words my doubts that a computer running an algorithmic simulation of a brain would be conscious. Consider a massive computer running a program that simulates brain function and produces an emulation of human-like behaviour.
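Just to make "algorithmic simulation" concrete, here is a toy sketch of the kind of update loop such a program might run. A leaky integrate-and-fire network is only a stand-in for whatever the real simulation would be, and every number in it is made up for illustration:

```python
# Minimal sketch (not a real brain model): one timestep of a leaky
# integrate-and-fire network, the sort of update rule a brain simulation
# might repeat trillions of times. All parameters are illustrative.
import numpy as np

def step(v, weights, spiked, leak=0.9, threshold=1.0):
    """Advance every neuron's membrane potential by one timestep."""
    v = leak * v + weights @ spiked       # decay, then add input from spiking neighbours
    spiked = (v >= threshold).astype(float)
    v[spiked == 1.0] = 0.0                # reset the neurons that fired
    return v, spiked

rng = np.random.default_rng(0)
n = 1000                                  # a toy network; a real brain has ~86 billion neurons
v = rng.random(n)
weights = rng.normal(0.0, 0.05, (n, n))
spiked = np.zeros(n)
for _ in range(500):                      # 500 steps at 1 ms each ~ half a second of activity
    v, spiked = step(v, weights, spiked)
```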
Now, can I consider that the computer is conscious? If it is running an algorithm, then you could in theory desk-check that algorithm using pencil and paper - although it might take billions of years just to check half a second of consciousness (and an inordinate amount of paper).
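Rough back-of-the-envelope arithmetic backs up that timescale. All the figures here (synapse count, timestep, one hand calculation per second) are assumptions chosen for illustration, not measurements:

```python
# How long to desk-check half a second of a brain simulation by hand?
SYNAPSES = 1e14          # assumed synapse count (~100 trillion)
TIMESTEP_S = 1e-3        # assumed simulation timestep of 1 ms
SIM_DURATION_S = 0.5     # half a second of simulated brain activity
OPS_PER_STEP = SYNAPSES  # assume roughly one update per synapse per timestep
HAND_OPS_PER_S = 1.0     # assume one pencil-and-paper operation per second, nonstop

steps = SIM_DURATION_S / TIMESTEP_S
total_ops = steps * OPS_PER_STEP
years = total_ops / HAND_OPS_PER_S / (60 * 60 * 24 * 365)

print(f"{total_ops:.1e} operations ≈ {years:.1e} years of nonstop hand calculation")
# -> 5.0e+16 operations ≈ 1.6e+09 years
```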
But we would not say that a brief half second of consciousness had occurred, stretched out over a billion years (or I wouldn't, anyway).
So the question is: why would running this same process faster, using silicon instead of pencil and paper, make the difference?
At this point I also wonder why replacing a set of fast, instruction-processing CPUs with a set of electronic components wired up like the neurons in a brain would make a difference.
Then I wonder why replacing these transistors with meat neurons helps the process along.
So, no, I don't have a knock-down argument - but I think there are at least grounds for thinking it is possible that we are missing something.
As Paul says, time will tell. If very complex, brain-like computers started producing consciousness-like behaviour, then I would move closer to functionalism.
If a computer simulating brain-like function could pass a simple primary-school comprehension test, then I think it would be close to game, set and match (and would raise some interesting ethical problems).
But if sufficiently complex electronic simulations simply refused to produce consciousness-like behaviour, then we would at least have a framework for investigating why neurons behave one way in real life and another way in a simulation.