It's only a matter of degree, really. A 'p-zombie' just has a more sophisticated architecture than your typical Hollywood zombie -- sophisticated enough to fool people into thinking it has some 'interior' experience. In story and myth such entities are usually represented in the guise of monsters like vampires, or as omnipotent AIs gone haywire. If you dim the consciousness of an actual person enough you can produce something quite similar; we generally call them psychopaths.
But even in the case of psychopaths there is at least some residual consciousness, though its spectrum is limited to extremely shallow, dull, and/or unpleasant experiences. In the case of a p-zombie there is none at all.
I'm curious as to where you got your concepts about p-zombies and psychopaths -- they don't agree with what I've learned. Could you provide a reference or two?
I've gotten my familiarity with p-zombies from talks and philosophical discussions like this one (and by watching a couple of Chalmers's lectures); my comments about psychopaths are drawn from those I know personally, and from reading the literature on character disorders. The wording, paraphrasing, parallels, and interpretation are all my own.
[Come to think of it, I suppose the drawing of broad, metaphorically based connections to reach novel conclusions could be considered evidence of consciousness]
When I say "no awareness of the data" I mean just that. The data is taken in and processed to produce an output appropriate to some algorithmic rule-set with no subjective perception, evaluation, or interpretation of that data during that processing. Its all done reflexively and without sensation, feeling, or cognition or any sort. A p-zombie is an automaton that can supposedly pull all this off in a way sufficient to convince a naïve hypothetical human that it actually does experience all those things -- naïve being the operative word.
Would naïve mean not even giving them the test I suggested: tell them something and then ask them what you told them? If so, I don't see that this sort of p-zombie is of any use in arguments. My computer already does better than that.
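For what it's worth, a few lines of code already pass that test -- a minimal sketch (the key names here are arbitrary):

```python
# Anything that can store a string and repeat it back "passes" the
# recall test, which is why the test can't separate rote storage
# from awareness.

memory = {}

def tell(key, fact):
    memory[key] = fact            # "telling" it something

def ask(key):
    return memory.get(key, "I don't recall that.")   # "asking" it back

tell("appointment", "The meeting is at noon on Tuesday.")
print(ask("appointment"))         # -> The meeting is at noon on Tuesday.
```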
My point is that if a simple chat bot, like the one Malerin posted a link to, can answer queries in a syntactically appropriate manner, a hypothetical p-zombie could accomplish the same and more. Just to be clear, I do not think that p-zombies (i.e. perfectly indistinguishable simulacra of conscious individuals) are possible in practice. The ability to pull off such a deception would depend upon the level of sophistication of its programming [it's practically impossible to 'perfectly' simulate any physical system] and the level of discernment possessed by the conscious individual assessing it [sooner or later you'll come across someone sharp enough to recognize that something's fishy].
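To give a feel for how little machinery syntactically appropriate answers require, here's a minimal sketch of a rule-driven responder in the ELIZA tradition -- the patterns and canned replies are invented for illustration:

```python
import re

# Input is matched against fixed patterns and transformed by fixed
# rules. Nothing in this loop perceives, evaluates, or interprets
# anything; it just matches and substitutes.

RULES = [
    (re.compile(r"\bi feel (.+)", re.I),   "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),     "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I),  "Is that the real reason?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."        # fallback keeps the exchange going

print(respond("I feel hollow inside"))   # -> Why do you feel hollow inside?
```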
Not all sense data is subjective -- blindsight is a case in point. The difference, in this case, would be blindsight without the sight -- just a reflexive response to stimuli.
Likewise, there are many normal pathways that are reflexive and don't pass through consciousness. The difference is detectable by simple testing: sense data handled that way can't be learned and then recalled at a later time.
Data can easily be conditioned into, and stored in, a neural net -- or some other information-processing system -- without there necessarily being any subjective experience of said data. I don't think that tests of recall or rote would be effective tests of consciousness. What one should look for is the ability to self-generate novel behavior and sense-making, without reliance on external prompting or explicit programming.
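As a sketch of what I mean by 'conditioned into' -- here's a toy Hebbian associative memory (the bit patterns are arbitrary) that stores and recalls data without there plausibly being anything it is like to do so:

```python
import numpy as np

# Patterns are "conditioned in" as an outer-product weight matrix and
# later recalled by repeated matrix multiplies -- classic Hopfield-style
# associative memory, with no subject anywhere in the process.

patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],   # pattern A
    [-1, -1,  1,  1, -1,  1],   # pattern B
])

# Hebbian learning: sum of outer products, zeroed diagonal
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(probe, steps=5):
    """Settle a (possibly noisy) probe onto a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1            # break ties deterministically
    return s

noisy_A = np.array([1, -1, 1, -1, -1, -1])   # pattern A with one bit flipped
print(recall(noisy_A))           # -> recovers pattern A exactly
```

It 'learns' and 'recalls' perfectly well, which is exactly why recall can't be the test.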
Of course they're subject to thermodynamic laws -- that's not the point. What I'm saying is that living systems -autonomously push against- thermodynamic equilibrium in a self-sustaining manner, rather than running down as a desktop or robot would. This thermodynamic bootstrapping is something living things perform at every level of their organization. It's this property that I'm saying is a necessary prerequisite for a consciousness-compatible system.
I don't understand the difference: both a computer's and a living thing's energy storage runs down when its energy supply is cut off. My laptop does what it can to reverse the situation: it beeps and puts up a "low battery" message -- a behavior that's usually successful at getting its equilibrium maintained.
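Something like the following loop, presumably -- a toy version, since the threshold, drain rate, and polling interval are made up and the real battery reading is platform-specific:

```python
import time

# A fixed threshold and a fixed response, both chosen by a programmer
# in advance. The loop never originates a new goal; it just executes
# its rule while the (simulated) battery runs down.

LOW_BATTERY = 0.10          # threshold hard-coded ahead of time
charge = 1.0                # simulated battery, starts full

while charge > 0:
    charge -= 0.05          # the system runs down unless someone intervenes
    if charge < LOW_BATTERY:
        print(f"\aLow battery ({charge:.0%}) -- please plug me in!")
    time.sleep(0.1)         # stand-in for the polling interval
```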
At this point, our computers cannot perform physical self-maintenance without guidance from a conscious human, nor can they metabolize. All organisms contain programmed feedback mechanisms, to be sure, but they also show a degree of proactive behavior not explicitly programmed into them -- some to greater extents than others. It's this proactivity that sets living/conscious systems apart from present-day technological systems.
While organisms merely utilize their physiological and behavioral programs as organizational anchors, machines have no operational existence beyond their programming. Their maintenance and directionality must constantly be propped up by conscious human effort -- they lack any inherent consciousness of their own. Our machines do not thermodynamically push themselves "uphill"; they must continually -be pushed-. This is true even when they are provided with a continual source of energy.
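If a toy model helps, here's the contrast in numbers, under the loose assumption that a single 'energy' variable can stand in for distance from equilibrium:

```python
import random

# Both systems dissipate at the same rate. The passive machine only
# drains; the toy "organism" spends part of its store to harvest more,
# so on average it holds itself away from equilibrium.

def passive_machine(energy=10.0, steps=40):
    for _ in range(steps):
        energy = max(0.0, energy - 0.5)          # dissipation only
    return energy                                # -> 0.0: it runs down

def toy_organism(energy=10.0, steps=40):
    for _ in range(steps):
        energy = max(0.0, energy - 0.5)          # same dissipation...
        if energy > 1.0:                         # ...but it can still act:
            energy += random.uniform(0.0, 1.5)   # forage, at a cost already paid
    return energy                                # usually stays well above zero

print(passive_machine())   # 0.0
print(toy_organism())      # fluctuates, but self-sustaining on average
```

The first system can only run down; the second spends part of its store to replenish itself, which is the 'uphill push' I'm pointing at -- a caricature, of course, not a model of metabolism.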