I've gotten my familiarity with
p-zombies from talks and philosophical discussions like this one (and by watching a couple of Chalmers's lectures); my comments about
psychopaths are drawn from those I know personally, and from reading the literature on character disorders. The wording, paraphrasing, parallels, and interpretation are all my own.
[Come to think of it, I suppose the drawing of broad, metaphorically based connections to reach novel conclusions could be considered evidence of consciousness]
You've apparently misunderstood Chalmers: he states that his zombies behave just like us. Otherwise, what would be the point of his thought experiment? (Though I don't blame you -- I can't follow his reasoning on the hard problem.)
From what I've gathered, the p-zombie thought experiment is supposed to demonstrate that one cannot discern subjective experience from outward physical behavior. But, as I've already mentioned, I do not think such entities are possible in practice, and I don't find the thought experiment particularly useful. Whenever p-zombies crop up in these discussions, it's always someone other than me who brings them up.
As for psychopaths, I think you misread it: they are said to lack a conscience (the desire not to hurt others, empathy), not consciousness.
Didn't say they lack consciousness. I said that their consciousness is "dim". The range, intensity, quality and depth of their experiences (emotional, and in some cases sensory) tend to be much lower than that of other human beings. In some instances their level of emotional arousal is so low that they are constantly fighting boredom; and the way the majority of them alleviate this boredom is at the expense of others. In other words, they tend to be psychological parasites and predators.
It's quite easy to teach a computer to reach a novel conclusion too, if you're not too concerned about its correctness...
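To make that concrete, here's a toy Python sketch (entirely my own invention -- the word lists are arbitrary) that "reaches" conclusions nobody has stated before, with no regard whatsoever for their truth:

    import random

    # Arbitrary word lists; blind recombination stands in for "insight".
    subjects   = ["consciousness", "a thermostat", "my cat", "the economy"]
    predicates = ["is a kind of", "emerges from", "reduces to"]
    objects    = ["computation", "boredom", "a feedback loop", "subjective experience"]

    # Print three never-before-uttered "conclusions" -- novelty without correctness.
    for _ in range(3):
        print(random.choice(subjects), random.choice(predicates), random.choice(objects))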
Reminds me of how, when a character-disordered 'friend' of mine attempts to explain things outside its emotional depth, all it succeeds in doing is spewing nonsensical gobbledygook. For the most part, it can act well enough to fool trained professionals, provided the level of personal interaction is relatively superficial and brief. It has developed a number of tricks over the years to simulate human emotion and is very gifted at picking up languages -- which is an added asset. But, by and large, it has little to no depth of understanding concerning things outside its narrow subjective range.
So, like I was saying, if the entity in question has a very limited subjective existence, it cannot
-generate- coherent and meaningful statements about things beyond that existence, because it doesn't experience them. It can only attempt to emulate, based on external behaviors and cues. And if it has no subjective life at all... Well, I think you get the picture.
My point is that if a simple chat bot, like the one Malerin posted a link to, can answer queries in a syntactically appropriate manner, a hypothetical p-zombie could accomplish the same and more. Just to be clear, I do not think that p-zombies (i.e. perfectly indistinguishable simulacrums of conscious individuals) are possible in practice. The ability to pull off such a deception would depend upon the level of sophistication of its programming
[it's practically impossible to 'perfectly' simulate any physical system] and the level of discernment possessed by the conscious individual assessing it
[sooner or later you'll come across someone sharp enough to recognize that something's fishy].
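For the record, the kind of trick such a bot relies on is no deeper than this (a minimal ELIZA-style sketch; the patterns are hypothetical stand-ins of my own, not the actual bot Malerin linked to):

    import re

    # A few canned reflection rules: syntactically apt replies, zero comprehension.
    RULES = [
        (r"\bI feel (.+)",  "Why do you feel {0}?"),
        (r"\bI think (.+)", "What makes you think {0}?"),
    ]

    def reply(utterance):
        for pattern, template in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # safe default when nothing matches

    print(reply("I feel that p-zombies are impossible"))
    # -> Why do you feel that p-zombies are impossible?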
This is the Turing test, for which I can make a computer do a perfect emulation of a large number of humans: it would simply not respond at all. There's no way you could distinguish it from a human who couldn't type (for whatever reason).
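Taken literally, the whole "emulation" fits in two lines (assuming a text-only channel, as the standard test does):

    # Passes for a human who never types: read the judge's messages, answer nothing.
    while True:
        input()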
The Turing test is good for judging acting ability, but not good for deciding
intelligence or especially consciousness. It would be like deciding that a digging tool isn't usable unless it feels warm and hairy.
That's a key point. The Turing test is only a way to assess how
intelligent an agent appears to be, but it says nothing about discerning
consciousness. The pertinent questions in this regard would be:
A) does this entity have an 'interior' subjective existence, and
B) to what extent and of what quality?
Data can easily be conditioned into and stored in a neural net, or some other information-processing system, without there necessarily being any subjective experience of said data. I don't think that tests of recall or rote would be effective tests of consciousness. What one should look for is the ability to self-generate novel behavior and sense-making, without reliance on external prompting or explicit programming.
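To put the recall point plainly: a handful of lines (a toy example, names my own) gives you flawless rote memory, and nobody would claim there is anything it is like to be this dictionary:

    facts = {}  # rote storage: no awareness of the data required

    def memorize(question, answer):
        facts[question] = answer

    def recall(question):
        return facts.get(question, "I don't know.")

    memorize("capital of France", "Paris")
    print(recall("capital of France"))  # perfect recall, no experience behind it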
Again, what you appear to be describing is general problem-solving ability: the true Hard Problem.
I'm saying that the subjectivity of consciousness is a crucial component in human and animal problem solving abilities. Heck -- to even speak of there being a 'problem' outside the context of
-subjective- valuation is absurd.
At this point, our computers cannot perform physical self-maintenance without guidance from a conscious human, nor can they metabolize. All organisms contain programmed feedback mechanisms, to be sure, but they also show a degree of proactive behavior not explicitly programmed into them -- some to greater extents than others. It's this proactivity that sets living/conscious systems apart from present-day technological systems.
While organisms merely utilize their physiological and behavioral programs as organizational anchors, machines have no operational extent or existence beyond their programming. Their maintenance and directionality must constantly be propped up by human conscious effort -- they lack their own inherent consciousness. Our machines do not thermodynamically push themselves "up hill"; they must continually -be pushed-. This is true even when they are provided with a continual source of energy.
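Compare a programmed feedback mechanism in miniature (a hypothetical setpoint and thresholds of my own choosing): it regulates exactly what it was told to regulate and nothing else, and when a part fails it does not -seek- a repair; someone must push it back up hill.

    # A bare-bones thermostat: pure feedback, no proactivity.
    def thermostat_step(temperature, setpoint=20.0, band=0.5):
        if temperature < setpoint - band:
            return "heater on"
        if temperature > setpoint + band:
            return "heater off"
        return "hold"

    for t in (18.0, 20.0, 22.0):
        print(t, "->", thermostat_step(t))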
If we viewed computers as organisms, we'd say they've managed to massively increase their numbers over the last 60 years by exhibiting particular behaviors, due to a symbiotic relationship with humans. If you consider symbiotic relationships cheating, consider that most biological life-forms would die without other types around.
The point is that our machines are merely extensions of our own biology; they have no life of their own yet. My proposal is that when we figure out how to create synthetic life we'd have made a necessary
[possibly sufficient] step in the direction of creating synthetic consciousness.
I really don't see how you've decided that cells thermodynamically push themselves: what happens when you block sunlight from the Earth?
The organisms that manage to survive will be the ones that adapt to find alternate energy sources. For as long as an organism is alive and able, it will
seek out and utilize available energy to maintain itself
[ETA: unless it is suicidal, in which case it will seek out the means to end its life]. Inanimate systems
do not do this without being manipulated by a conscious agent. This holds true regardless of the amount of available energy in the surrounding environment.
As I've already mentioned, our machines cannot replicate the self-maintenance capabilities of organisms. They rely entirely on human consciousness to create and maintain them. It is this innate capacity that is the necessary -- possibly sufficient -- objectively discernible requisite of a conscious system.