Hi, all.
Still discussing the simulations, eh?
Well, I've been reading my new cognitive neuroscience books, and although I'm not all the way through (by a long shot), so far the computationalist view -- as it has been presented here -- is mentioned only to point out its fatal flaws.
When I get some time, I'll have to post some choice excerpts.
Suffice it to say that absolutely no one agrees with Pixy Misa that the phenomenon of consciousness is currently understood. There is no indication that anyone believes such behavior can be purely programmed (rather than built) into a machine, and no trace of the tortured interpretations of Church-Turing.
Regarding the potential consciousness of simulations, even Henry Markram, who is working with IBM to reverse-engineer the human brain in simulation, cautions that their project will "not... create a brain or artificial intelligence" but will only "represent the biological system".
I think it's important to note that the lead on a project that's attempting to simulate the brain at the neuronal level makes the same distinction that I and the other physicalists here have made -- they do not expect that even a perfect simulation of the brain would make the machine running the simulation conscious.
Here's Markram citing Jeff Hawkins, btw, on the state of the research: "'The main problem in computational neuroscience is that theoreticians [who] do not have a profound knowledge of the neuroscience build models of the brain.' Current models 'may capture some elements of biological realism, but are generally far from biological.' What the field needs... is 'computational neuroscientists willing to work closely with the neuroscientists to follow faithfully and learn from the biology.'"
According to Hawkins, "if we don't know how humans think, then we can't create a machine that can think like a human", which should, of course, be obvious.
And fooling people into thinking you've done it is not a proper benchmark, just as it would not be for any other scientific or engineering endeavor.
I also like Gazzaniga's reframing of the so-called "hard problem" of consciousness as an "explanatory gap": even if we were to discover tomorrow all of the neural correlates of every possible state of human consciousness (NCCs), we'd still have a problem -- we would have no way of explaining why the NCC associated with, say, seeing a green light is correlated with that particular conscious experience, and not some other conscious experience or none at all.
In fact, we don't even have a way of imagining the solution to that problem!
Obviously, there's some conceptual framework we haven't yet figured out.
Gazzaniga offers an analogy: imagine trying to explain to a person in the ancient world how sound, light, and the ripples in a pond are physically similar. It makes sense to us because of our understanding of waves, but without that theoretical framework, you're not going to see the connections.
Somewhere, there's a framework that will allow us to understand how and why a given NCC gives rise to a particular conscious experience, and not some other experience or no experience, but we haven't found it yet.
In any case, I'm finding no support at all for the claim that's been made here that a "computational model of consciousness" is the prevailing one... or, for that matter, that such a "model" even exists.