A proof that p-zombies are logically incoherent.

cpolk said:
It's not a matter of being able to describe them, it's whether or not they can experience them. Since qualia, the indescribable input from our senses, are the foundation of how we experience, once the p-zombie stops experiencing qualia, it can no longer experience new things.
I think that's too narrow a definition of experience. An ant can experience new things and learn from them, though we can certainly argue that an ant has no qualia. That said, I agree that lack of qualia would result in a person whom we could certainly distinguish from ourselves. So a rig-job is required if we want him to be a p-zombie.

~~ Paul

Edited to change "broad" to "narrow."
 
I can't answer this until you tell me which metaphysic we're assuming. If we are assuming reductive materialism, then there are no p-zombies by definition and your original statement (above) doesn't make sense. So you must have been assuming some metaphysic in which p-zombies might make sense, such as dualism. Yet now you say "same physical stuff, same physical laws."
Yes, I don't see anything illogical about the idea of a p-zombie. Why is "same physical stuff, same physical laws" a problem?

In the case of dualism, removing consciousness could have two results: (1) The p-zombie is missing some experience we call consciousness or qualia, in which case he would answer something other than "yes" to "Are you conscious?" But then he would not be a p-zombie, because he is supposed to be indistinguishable from us.
The p-zombie is missing all experiences because it's not conscious, but why does that mean it wouldn't answer "yes"? I can write a trivial computer program that answers "yes" if you ask it whether it's conscious. (I can't personally write a program that passes the Turing test, but I don't see why such a program is impossible in principle.)
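The "trivial computer program" mentioned above really is trivial. A minimal sketch in Python (the function name `answer` and the fallback reply are my own, not from the post):

```python
# A minimal sketch of the trivial program described above: it says "yes"
# when asked whether it is conscious, with no inner experience involved.
def answer(question: str) -> str:
    """Return a canned reply; nothing here experiences anything."""
    if "conscious" in question.lower():
        return "yes"
    return "I don't understand the question."

print(answer("Are you conscious?"))  # prints: yes
```

The point stands regardless of the implementation: producing the word "yes" requires no consciousness at all.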

(2) The p-zombie is not missing any experience. Then we have two possibilities. Either nothing was removed, in which case what the heck is dualism? Or something was removed that has no effect on the p-zombie's brain, in which case we have some silly form of epiphenomenalism.
I think I don't understand how you're using the word "experience". Of course it's missing experiences. All experiences. It's not conscious.

Consciousness was removed. I'd say, consciousness doesn't have an effect on the brain; the brain has an effect on it. (In real people, in this world, that is; not in p-zombies, in p-zombie world. They have the same brains, but aren't conscious.) Is that epiphenomenalism? Ok. But what's silly about it?
 
Think of a computer playing noughts and crosses (or tic-tac-toe, as the denizens of the USA call it). We can simply program it always to respond in a certain way: e.g., if its opponent places a cross in one of the corners, the computer is simply programmed to place an "O" in one of the two adjacent corners (but not the corner diagonally opposite) in order to avoid defeat.

The same goes for, say, chess. The 50-move rule ensures there are a finite number of possible games of chess. Thus a computer could in principle simply be programmed to make the best move in every possible game of chess on any move -- just as with noughts and crosses. And an android could be specifically programmed to respond in an appropriate manner to all possible environmental situations.
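The pre-programmed machine described above is, in effect, one big lookup table from situations to responses. A hedged Python sketch (the board encoding and the handful of sample entries are my own invention; a real table would cover every reachable position):

```python
# Sketch of the first kind of android: every situation it can meet is
# mapped to a fixed, pre-programmed response. Boards are 9-character
# strings read left-to-right, top-to-bottom ('X', 'O', or '-'); values
# are the square index where 'O' is played.
RESPONSES = {
    "X--------": 2,   # X in the top-left corner -> O in an adjacent corner
    "--X------": 0,   # X in the top-right corner -> O in an adjacent corner
    "----X----": 0,   # X in the centre -> O in a corner
}

def lookup_move(board: str) -> int:
    """No deliberation and no learning: just a table lookup."""
    return RESPONSES[board]

print(lookup_move("X--------"))  # prints: 2
```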

Obviously it would be ludicrous in the extreme to say such an android is conscious, right? So we can reject behaviourism.

But there is another possible type of android. Let's think of a computer playing chess again. This time it's programmed with only the rules of the game and a "desire" to win. It's not even programmed to know that the Queen is valuable or that taking the opponent's pieces is generally advantageous to the goal of winning. Instead it learns everything by trial and error, and thus keeps improving until it, too, in a finite time plays perfect games of chess, just like the other chess-playing computer. Again, an android could in principle be created which always responded appropriately due to "learning" from experience.
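The trial-and-error machine can be illustrated with a toy example. Nothing below is from the original post: the "game" here is a trivial one-shot game with three moves, only one of which wins, but the same keep-a-running-average-of-what-worked update is the germ of how a learning player improves:

```python
import random

# Sketch of the second kind of machine: it is told nothing about which
# move is good. It tries moves at random, observes wins and losses, and
# keeps a running average of how each move has fared.
def reward_for(move: int) -> int:
    """The environment's hidden rule: only move 2 wins. The learner
    never sees this function directly, only the rewards it returns."""
    return 1 if move == 2 else 0

def learn(trials: int = 1000, seed: int = 0) -> list:
    rng = random.Random(seed)
    value = [0.0, 0.0, 0.0]   # estimated worth of each move so far
    count = [0, 0, 0]
    for _ in range(trials):
        move = rng.randrange(3)            # explore by trial and error
        count[move] += 1
        # incremental running average of observed reward
        value[move] += (reward_for(move) - value[move]) / count[move]
    return value

values = learn()
print(values.index(max(values)))  # prints: 2 (the learner found the winning move)
```

Externally, once training is finished, this player is indistinguishable from one whose moves were written into a table in advance.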

But the two androids are externally absolutely indistinguishable. I'm assuming, though, that people on here would wish to say the first android is definitely not conscious, but the second one definitely is?

It's the distinction between materialist behaviourism and materialist functionalism.

That's an interesting example. As it stands I think the first type of android would be impossible to create - and it is the type of android attacked in the Chinese Room argument, for example. There's just not any machine capable of storing and accessing sufficient information to handle every real-world situation! The second type is undoubtedly the way A.I. is going to develop - in fact, that's exactly the approach currently considered the key to future A.I.
 
I think that's too broad a definition of experience. An ant can experience new things and learn from them, though we can certainly argue that an ant has no qualia. That said, I agree that lack of qualia would result in a person whom we could certainly distinguish from ourselves. So a rig-job is required if we want him to be a p-zombie.

Well, let's define "experience" then. :) Either do one of the following:

1. Explain what separates an ant from a human, strictly in terms of sensory experience, that makes one "qualia" and one not "qualia", and explain what the other form of experience is.

2. Explain what an ant can "learn through experience" that is not considered qualia.
 
Think of a computer playing noughts and crosses (or tic-tac-toe, as the denizens of the USA call it). We can simply program it always to respond in a certain way: e.g., if its opponent places a cross in one of the corners, the computer is simply programmed to place an "O" in one of the two adjacent corners (but not the corner diagonally opposite) in order to avoid defeat.

The same goes for, say, chess. The 50-move rule ensures there are a finite number of possible games of chess. Thus a computer could in principle simply be programmed to make the best move in every possible game of chess on any move -- just as with noughts and crosses. And an android could be specifically programmed to respond in an appropriate manner to all possible environmental situations.

Obviously it would be ludicrous in the extreme to say such an android is conscious, right? So we can reject behaviourism.

But there is another possible type of android. Let's think of a computer playing chess again. This time it's programmed with only the rules of the game and a "desire" to win. It's not even programmed to know that the Queen is valuable or that taking the opponent's pieces is generally advantageous to the goal of winning. Instead it learns everything by trial and error, and thus keeps improving until it, too, in a finite time plays perfect games of chess, just like the other chess-playing computer. Again, an android could in principle be created which always responded appropriately due to "learning" from experience.

But the two androids are externally absolutely indistinguishable. I'm assuming, though, that people on here would wish to say the first android is definitely not conscious, but the second one definitely is?

It's the distinction between materialist behaviourism and materialist functionalism.

You are mistaken in that you are failing to recognize the evolution of consciousness. The machine doesn't have to be programmed to react to every environmental situation at all. It merely has to learn to adapt, as a human infant does as it grows into adulthood. Think about how little an infant knows at birth - it does not even have full access to its senses!

Taking your second example, how would you program a "desire" to win? That requires abstract thought, which requires the ability to form abstract representations of the outside environment, which all leads to qualia. Basically, you are programming the ability to have qualia - something that, if you regard it as impossible, makes your hypothetical impossible.

Assuming you're like me and others who think it is possible, then the way to tell would be to ask it something outside of its frame of reference - something not involving chess - and see if, through sensory interaction alone, it is capable of adapting itself.
 
There's just not any machine capable of storing and accessing sufficient information to handle every real-world situation!

Not any machine that we could currently build, no. But it's silly to approach a philosophical argument on the basis of current technology. At least in theory, we could store a near-infinite amount of information in the quantum levels of a single electron in a single hydrogen atom (I'll leave the engineering details to you to work out), which certainly means that we could store enough information for a table-lookup approach to a finite game like chess or go. In fact, since there are really only 10^43 or so different chess positions, we could even go for a fairly gross kind of storage -- one atom, one position, one move -- and store the entire chess problem in only about 10^16 kilograms (some 10^13 tonnes) of metallic hydrogen, an amount so small we could harvest it from the oceans of the Earth without having to look to other planets. (Again, I'll leave the engineering details to you.)
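The one-atom-per-position mass works out as a quick back-of-the-envelope calculation (figures assumed: ~10^43 positions as quoted above, and ~1.67 × 10^-27 kg per hydrogen atom):

```python
# Sanity check of the "one atom per chess position" storage estimate.
POSITIONS = 1e43          # rough count of distinct chess positions (as quoted)
H_ATOM_KG = 1.67e-27      # approximate mass of one hydrogen atom

total_kg = POSITIONS * H_ATOM_KG
total_tonnes = total_kg / 1000
print(f"{total_tonnes:.2e} tonnes")  # prints: 1.67e+13 tonnes
```

So the table weighs on the order of 10^13 tonnes of hydrogen, a tiny fraction of what the oceans contain.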
 
II
They don't have any experiences or memories, they are not conscious. They only appear to have them from the 3rd person perspective.

Belz
So... a computer has no memory because it is not conscious?

Of course not. The use of the word "memory" in computers is metaphorical. They don't consciously recollect anything.

II
They are missing any consciousness. They are automatons merely operating according to physical laws and have no more consciousness than a boulder rolling down a hill.

Belz
Yeah, but they don't know that. Do we?

They don't know anything, they're not conscious.
 
69dodge said:
Yes, I don't see anything illogical about the idea of a p-zombie. Why is "same physical stuff, same physical laws" a problem?
Because if our alternate world had the same physical stuff and the same physical laws, then people would be like us, not p-zombies. Something has to change.

The p-zombie is missing all experiences because it's not conscious, but why does that mean it wouldn't answer "yes"? I can write a trivial computer program that answers "yes" if you ask it whether it's conscious. (I can't personally write a program that passes the Turing test, but I don't see why such a program is impossible in principle.)
As I said, if you're allowed to make arbitrary changes to a person's brain as you "remove his consciousness," then sure, you can make him answer yes, at least to trivial philosophical questions.

I think I don't understand how you're using the word "experience". Of course it's missing experiences. All experiences. It's not conscious.
Then why would it answer "yes" to the question of whether it is conscious, unless it has been rigged? Not to mention, how can it hear the question if it is missing all experiences?

Consciousness was removed. I'd say, consciousness doesn't have an effect on the brain; the brain has an effect on it. (In real people, in this world, that is; not in p-zombies, in p-zombie world. They have the same brains, but aren't conscious.) Is that epiphenomenalism? Ok. But what's silly about it?
Yes, that's epiphenomenalism. It's silly because it suggests that there is some subset of mental experience that does not form memories, yet here we are talking about those very experiences.

~~ Paul
 
cpolk said:
Well, let's define "experience" then. Either do one of the following:

1. Explain what separates an ant from a human, strictly in terms of sensory experience, that makes one "qualia" and one not "qualia", and explain what the other form of experience is.

2. Explain what an ant can "learn through experience" that is not considered qualia.
When I drive along a familiar route, I can navigate it almost nonconsciously, while my mind is pondering something else. There are virtually no qualia associated with that trip. Yet clearly I experienced it, or I would not have taken the correct route.

I realize I'm picking nits here, but I don't think we should equate experience with conscious qualia.

~~ Paul
 
When I drive along a familiar route, I can navigate it almost nonconsciously, while my mind is pondering something else. There are virtually no qualia associated with that trip. Yet clearly I experienced it, or I would not have taken the correct route.

I realize I'm picking nits here, but I don't think we should equate experience with conscious qualia.

Pick the nits! LOL That's the only way to sort through this stuff.

What is the difference between "qualia" and "experience", then? What type of experience is necessary for "qualia"?

I would argue, then, that the "qualia" came with the first time you drove down that road, and every subsequent drive was merely recalling (or maybe reaffirming?) that qualia. In the same respect, you only gain "qualia" the first time you taste the apple - every taste afterwards is not qualia unless you learn something new about the taste.

Would you agree with that?
 
I asked: "Yeah, but they don't know that. Do we?" Do WE know that WE're not p-zombies?

We do. Because we are capable of abstract thought. This affects us in many ways.

Ask a p-zombie if they are conscious, and they will respond, "Yes." Ask them why. If they do not possess consciousness, then they cannot describe it to you, because they do not possess the qualia of conscious sensations.

They will not be able to speak in metaphors, such as, "It's like..." because they have no qualia; they have no sense of what something is "like".
 
cpolk said:
What is the difference between "qualia" and "experience", then? What type of experience is necessary for "qualia"?

I would argue, then, that the "qualia" came with the first time you drove down that road, and every subsequent drive was merely recalling (or maybe reaffirming?) that qualia. In the same respect, you only gain "qualia" the first time you taste the apple - every taste afterwards is not qualia unless you learn something new about the taste.

Would you agree with that?
Oh my. I think I'm experiencing a redness quale whenever I see red or recall a memory concerning red, as long as the experience is conscious. At least, that's my take on the definition of a quale. So a blindsighted person has no qualia for the red and green cups, yet is clearly experiencing the cups, because he can pick up the correct cup when told which color to get.

~~ Paul
 
cpolk said:
Ask a p-zombie if they are conscious, they will respond, "Yes." Ask them why. If they do not possess consciousness, then they cannot describe it to you, because they do not possess the qualia of conscious sensations.
Then they aren't p-zombies. A p-zombie is physically indistinguishable from a regular person.

To repeat Belz...'s question: How do we know that we're not p-zombies?

~~ Paul
 
Then they aren't p-zombies. A p-zombie is physically indistinguishable from a regular person.

To repeat Belz...'s question: How do we know that we're not p-zombies?

~~ Paul

That's the point - they cannot exist, because their own definition (regardless of which of the variety you use) contradicts itself.
 
Oh my. I think I'm experiencing a redness quale whenever I see red or recall a memory concerning red, as long as the experience is conscious. At least, that's my take on the definition of a quale. So a blindsighted person has no qualia for the red and green cups, yet is clearly experiencing the cups, because he can pick up the correct cup when told which color to get.

Then you are recalling something every time you get behind the wheel of your car as well. :) Is there a difference between recalling the driving experience and recalling the apple experience? Or the red experience? If so, what is it?

"Blindsight", in the way I believe you are using it, is a misconception. If your eyes are not detecting something, you are not seeing it. I would argue, if it was your intent, that people who are colorblind could recognize differences in shades.

So, I don't accept blindsight in the way you're describing it, and I think that if this can be proven under sterile conditions to be anything other than peripheral or grey-scale misconceptions, then it is worth $1-million.
 
cpolk said:
Then you are recalling something every time you get behind the wheel of your car as well. Is there a difference between recalling the driving experience and recalling the apple experience? Or the red experience? If so, what is it?
I don't recall the driving experience when I have one of those auto-pilot experiences. I just end up at the destination.

"Blindsight", in the way I believe you are using it, is a misconception. If your eyes are not detecting something, you are not seeing it. I would argue, if it was your intent, that people who are colorblind could recognize differences in shades.

So, I don't accept blindsight in the way you're describing it, and I think that if this can be proven under sterile conditions to be anything other than peripheral or grey-scale misconceptions, then it is worth $1-million.
Blindsight is an example of nonconscious vision:

http://en.wikipedia.org/wiki/Blindsight

There are plenty of examples of nonconscious sensory processing. Balance is another one. So is waking up to an alarm.

~~ Paul
 
I don't recall the driving experience when I have one of those auto-pilot experiences. I just end up at the destination.


Blindsight is an example of nonconscious vision:

http://en.wikipedia.org/wiki/Blindsight

There are plenty of examples of nonconscious sensory processing. Balance is another one. So is waking up to an alarm.

~~ Paul

That validates two of my arguments, then. Sensory perception is created by the sensory organs, not the brain. "Blind Sight" is metaphorically blind, not literally blind.

You may not recall that specific driving incident, but most people don't recall what they've had for lunch a few days ago. Do they still experience by gaining qualia when they eat? You said you experience quale every time you see red. Do you remember every incident of it?
 
cpolk said:
That validates two of my arguments, then. Sensory perception is created by the sensory organs, not the brain. "Blind Sight" is metaphorically blind, not literally blind.
Blindsight has nothing to do with metaphors. It simply shows that we can do some visual processing nonconsciously.

You may not recall that specific driving incident, but most people don't recall what they've had for lunch a few days ago. Do they still experience by gaining qualia when they eat? You said you experience quale every time you see red. Do you remember every incident of it?
I now have no idea what your definition of quale is. The auto-pilot experience is not an experience of forgetting something that happened before. It is another example of performing relatively sophisticated actions in a mostly nonconscious way.

Are you suggesting you have continuous balancing qualia while you are walking? If so, pray tell, what do they feel like?

~~ Paul
 
Because if our alternate world had the same physical stuff and the same physical laws, then people would be like us, not p-zombies. Something has to change.
Yes, something is different. What's different is that in our world, certain physical processes are accompanied by consciousness, and in the p-zombies' world, they aren't. But that's not anything that our current scientific theories deal with. It's not anything that could be called physics. Physics says that nerve impulses work in a certain way, and the p-zombies' nerve impulses work the same way. There's just no consciousness to go along with those nerve impulses. But physics doesn't say there should be.

As I said, if you're allowed to make arbitrary changes to a person's brain as you "remove his consciousness," then sure, you can make him answer yes, at least to trivial philosophical questions.
No, no changes to the physics of the brain. Changes to the way the world works, so that the same physics isn't accompanied by consciousness.

Then why would it answer "yes" to the question of whether it is conscious, unless it has been rigged? Not to mention, how can it hear the question if it is missing all experiences?
It doesn't hear anything consciously. But it has eardrums that vibrate when sound waves hit them, and it has all the rest of the mechanisms of the ear, and all the same nerves in its brain, etc. And all of those follow the same physical laws as our bodies do, so the physical result is the same: it says whatever we would say.

Yes, that's epiphenomenalism. It's silly because it suggests that there is some subset of mental experience that does not form memories, yet here we are talking about those very experiences.
But no mental experience happens without a corresponding physical process in the brain, and so we can always say that it's that physical process which results in the formation of the memory, rather than the mental experience itself. (By "formation of the memory", I mean a physical change in the brain.) And then, since our brain is different from what it would have been in the absence of that experience-and-corresponding-physical-process, our subsequent behaviour and conscious memories, which depend on our brain, are also different from what they would have been. That difference is what we describe as remembering and talking "about" the experience.

But actually it isn't the experience per se that results in our talking "about" it; it's the physical brain changes which accompanied it. So a p-zombie would also talk "about" it, even though it had no experience to talk about, because it did have the physical changes to its brain.
 
