My take on why the study of consciousness may not be as simple as it seems

3. If physicalism is true, then Mary has complete knowledge of the mental states associated with perceiving red when Mary has complete knowledge of the brain states associated with perceiving red.

And let me reinforce it by pointing out something about the above premise: the first "complete knowledge of" must mean exactly the same thing as the second "complete knowledge of".

If the first includes actually being in that state, then so must the second.

On the other hand, if the second "complete knowledge of" means "complete knowledge of every irreducible, falsifiable, experimentally confirmed hypothesis about brain states associated with perceiving red", then the first "complete knowledge of" must mean "complete knowledge of every irreducible, falsifiable, experimentally confirmed hypothesis about mental states associated with perceiving red".
 
I've already addressed this. Mary is not surprised that color perception is a real phenomenon. She knows that people perceive color the same way a blind person knows other people have this sense called sight. Mary is surprised because even though she has complete knowledge of the physical processes of color perception, she still discovers a new mental state when she sees a red object for the first time.
No. She. Doesn't.

This is categorically false.

Her knowledge of seeing red mental states was incomplete.
No. Wrong. You can't just say her knowledge is complete, then turn around and say it's incomplete. That doesn't illustrate a problem with materialism, it just illustrates a problem with your reasoning.

This is where the distinction between color perception1 and color perception2 comes in. Knowledge of brain states gives Mary complete knowledge of color perception1, but not color perception2. That knowledge can only be gotten through conscious experience.
No. Under the premises of the argument they are necessarily identical. The assertion that they are different is not supported by the premises. Either it's a random assertion that comes out of nowhere (and the argument is unsound), or it is based on the assumption that materialism is false (and the argument is invalid).

No, just as a blind person would not predict people can't tell the difference between red and green.
Okay, you're actually correct on this one.

But the claim is not water is reducible to "wet". However, many physicalists claim that mental states are reducible to brain states (and are identical to them). People like Pixy claim that knowledge of brain states is knowledge of mental states.
No, that's just you.

I say that mental processes are brain processes. You can in principle understand the mental process by understanding the brain process. This is not necessarily easy, of course.

If mental states are identical to brain states, to have knowledge of brain states is to have knowledge of mental states.
This is specifically meaningless, but I've granted it so far rather than be pedantic about it. If you talk about processes instead of states, it's pretty much correct.

If Mary knows all the brain states involved in perceiving color then Mary knows all the mental states involved in seeing color.
No, because (a) the premise is not actually correct and (b) even with my modified version of the premise, while mental processes are brain processes, they are viewed from a different perspective. So merely knowing the details of the brain process is insufficient; you have to also know how to map one to the other.

This is why Jackson says that Mary knows everything there is to know about the physical process of colour perception, rather than getting into specifics.

There are distinctions involved with knowledge; how could there not be? We talk all the time about how much we know of something. I know math, but I learn new math all the time. There is nothing contradictory there. Mary knows that seeing red exists as a real phenomenon, but she doesn't have complete knowledge of red perception until she sees a red object.
Yes she does. That's exactly what premise 4 says. If you're going to deny your own premises, then your argument isn't going to get very far.
 
What I'm getting at is whether a computer can have the experience of red in the same way as a human mind.

A human mind can have state S1 - where it knows all about red - and S2, where it experiences red. I think we are agreed that the two states are different.
No.
 
Then not only to you have complete knowledge, you also have the experience Joe was having when his brain was scanned.
I don't agree. You could theoretically, artificially, recreate Joe's state in another brain using nanobots. That would give you the effect.

BTW: If you don't have the experience, artificial or otherwise, then you don't have complete knowledge.

You folks are treating knowledge as if it is only a collection of abstract data.

For a pianist to become a virtuoso, he or she needs more than abstract theory. In fact, most of it will do the pianist no good if the precise instructions that direct muscle movements in the arms and fingers are not in the module that directs the arms and fingers to play. The brain isn't an Internet forum where you can simply upload data. It's more like a computer that must be programmed.

In fact, you can't just read about math to learn math. YOU HAVE TO PERFORM MATH. You have to do homework. This is programming. It's not just abstract data. It's wiring the brain in specific ways.

Learning ISN'T simply acquiring abstract information. Learning is programming the brain. Learning abstract theory about surgery, piano playing, or bike riding WON'T teach you everything there is to know. You must program your brain (to date, this can only be done through physical experience of the real world).

I can paste program code into a Microsoft Word document, but the computer won't be able to use the code. The code must either be an executable file in a specific place on the computer, or there must be a line of code somewhere that tells the computer where to access the code.
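The code-as-inert-data point can be illustrated with a short sketch (Python here, chosen purely as an illustration; the thread doesn't specify any language). A string containing source code is just data until something deliberately loads it into a place where the interpreter knows to execute it:

```python
# Source code stored as plain data: inert, like code pasted into a
# word-processor document. The system treats it as text, nothing more.
source = "def square(x):\n    return x * x\n"

# The code only does anything once it is deliberately executed into a
# namespace the interpreter knows to look in.
namespace = {}
exec(source, namespace)

print(namespace["square"](7))  # prints 49
```

Until the `exec` call, `source` is no more runnable than a paragraph of prose, which is roughly the point being made above.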
 
The brain isn't a general CPU. It's composed of lots of interdependent modules. The notion that we could acquire complete information about any experienced event without experience or some artificial means is ABSURD. It can't be done. End of story.

The Capgras delusion (or Capgras syndrome) is a disorder in which a person holds a delusional belief that a friend, spouse, parent or other close family member, has been replaced by an identical-looking impostor.
If the brain were a single general CPU then Capgras delusion would not be possible.

But I will remain a lone voice crying out in the wilderness.
 
The brain isn't a general CPU. It's composed of lots of interdependent modules. The notion that we could acquire complete information about any experienced event without experience or some artificial means is ABSURD. It can't be done. End of story.

If the brain were a single general CPU then Capgras delusion would not be possible.

But I will remain a lone voice crying out in the wilderness.

I don't know why you think you are a lone voice crying out in the wilderness. I don't think the brain is a single general CPU either.
 
I don't know why you think you are a lone voice crying out in the wilderness. I don't think the brain is a single general CPU either.
:) Dude, it's my ego. No one is responding to me and I think I've got this earth-shattering point to make.

Sorry.
 
I don't agree. You could theoretically, artificially, recreate Joe's state in another brain using nanobots. That would give you the effect.
Yes. Of course, then you'd be Joe.

BTW: If you don't have the experience, artificial or otherwise, then you don't have complete knowledge.
Absolutely.

You folks are treating knowledge as if it is only a collection of abstract data.
I'm not. Frank Jackson is. David Chalmers is. That's (one reason) why Mary's Room and the Chinese Room are physically impossible - they reduce processes to facts. Representing all the possible processes themselves to provide for "complete knowledge" would already require more storage than there are particles in the corresponding universe (except possibly for infinite universes). Flattening the processes out to all possible sets of states just raises infinity to the power of infinity.

Jackson and Chalmers both blithely invoke absurdities in their arguments. Your point is that the premises are absurd. My point is that even if we grant those premises, the arguments still aren't logically valid.

For a pianist to become a virtuoso, he or she needs more than abstract theory. In fact, most of it will do the pianist no good if the precise instructions that direct muscle movements in the arms and fingers are not in the module that directs the arms and fingers to play. The brain isn't an Internet forum where you can simply upload data. It's more like a computer that must be programmed.
Yes, but remember that Jackson and Chalmers aren't talking about what is possible in reality, they're talking about metaphysical materialism. As such, they're allowed to make factually nonsensical premises as long as they're not logically nonsensical.

Though it's not at all clear that the premises aren't also logically nonsensical - that is, impossible under any possible self-consistent material universe.
 
I don't agree. You could theoretically, artificially, recreate Joe's state in another brain using nanobots. That would give you the effect.
My point is that we do agree. I say that it's not enough to just run Joe through the (impossibly) perfect MRI, you have to actually analyse and understand every part of the information (which is impossible).

What you are saying is that this is impossible for a human being to do by any normal means. As I said earlier, granted the premises of the argument, Mary is not human, she's some sort of superintelligent infallible robot.

Your example of nanobot brain reprogramming is about as close as it is possible to get to Mary in reality, but of course that just rewires one brain to match another, whereas Mary is required to know "everything there is to know".
 
The brain isn't a general CPU. It's composed of lots of interdependent modules. The notion that we could acquire complete information about any experienced event without experience or some artificial means is ABSURD. It can't be done. End of story.
YES! IT IS ABSURD!

And even granted this absurd premise, the argument is still wrong.

ETA: Season with smilies to taste.

:) :D :o ;) :p :confused: :( :rolleyes: :eek: :cool: :boxedin:
 
Yes. Of course, then you'd be Joe.
You miss my point. I'm not saying recreate Joe's entire brain, just the state that corresponds with a given experience.


Absolutely.

I'm not. Frank Jackson is. David Chalmers is. That's (one reason) why Mary's Room and the Chinese Room are physically impossible - they reduce processes to facts. Representing all the possible processes themselves to provide for "complete knowledge" would already require more storage than there are particles in the corresponding universe (except possibly for infinite universes). Flattening the processes out to all possible sets of states just raises infinity to the power of infinity.

Jackson and Chalmers both blithely invoke absurdities in their arguments. Your point is that the premises are absurd. My point is that even if we grant those premises, the arguments still aren't logically valid.

Yes, but remember that Jackson and Chalmers aren't talking about what is possible in reality, they're talking about metaphysical materialism. As such, they're allowed to make factually nonsensical premises as long as they're not logically nonsensical.

Though it's not at all clear that the premises aren't also logically nonsensical - that is, impossible under any possible self-consistent material universe.
Fair enough.
 
My point is that we do agree. I say that it's not enough to just run Joe through the (impossibly) perfect MRI, you have to actually analyse and understand every part of the information (which is impossible).

What you are saying is that this is impossible for a human being to do by any normal means. As I said earlier, granted the premises of the argument, Mary is not human, she's some sort of superintelligent infallible robot.

Your example of nanobot brain reprogramming is about as close as it is possible to get to Mary in reality, but of course that just rewires one brain to match another, whereas Mary is required to know "everything there is to know".

YES! IT IS ABSURD!

And even granted this absurd premise, the argument is still wrong.

ETA: Season with smilies to taste.

:) :D :o ;) :p :confused: :( :rolleyes: :eek: :cool: :boxedin:
Agreed.

The Mary thought experiment is sleight of hand. We take for granted what it means to say "know all". The virtuoso pianist knows more than the amateur. However, the extra "knowledge" exists as the increased neural connections (brain programming) that the master pianist isn't even consciously aware of.
 
PixyMisa, I do not understand.

Are you saying that we can understand consciousness because it is based on information?

PixyMisa said:
Anyone who would care to give a definition of consciousness that could facilitate a productive conversation rather than "my favourite semantics".
Self-referential information processing. :)

and that because of this it can be understood as long as we know how to interpret this information?

I say that mental processes are brain processes. You can in principle understand the mental process by understanding the brain process. This is not necessarily easy, of course. So merely knowing the details of the brain process is insufficient; you have to also know how to map one to the other.

or that we cannot because we cannot have all the information because we cannot store it.

PixyMisa said:
That's (one reason) why Mary's Room and the Chinese Room are physically impossible - they reduce processes to facts. Representing all the possible processes themselves to provide for "complete knowledge" would already require more storage than there are particles in the corresponding universe (except possibly for infinite universes). Flattening the processes out to all possible sets of states just raises infinity to the power of infinity.

or that we cannot because we cannot analyse and understand all the information?

My point is that we do agree. I say that it's not enough to just run Joe through the (impossibly) perfect MRI, you have to actually analyse and understand every part of the information (which is impossible).

Are you also agreeing with Randfan that we can theoretically, artificially copy consciousness successfully without having and interpreting all the information?

PixyMisa said:
I don't agree. You could theoretically, artificially, recreate Joe's state in another brain using nanobots. That would give you the effect.
My point is that we do agree.

Your example of nanobot brain reprogramming is about as close as it is to get to Mary in reality, but of course that just rewires one brain to mach another, where Mary is required to know "everything there is to know".

Then I do not understand why you claim that Jackson and Chalmers talk about metaphysical materialism being illogical and you are not.

Yes, but remember that Jackson and Chalmers aren't talking about what is possible in reality, they're talking about metaphysical materialism. As such, they're allowed to make factually nonsensical premises as long as they're not logically nonsensical.

Please help me understand your last few posts :boggled:
 
I don't agree. You could theoretically, artificially, recreate Joe's state in another brain using nanobots. That would give you the effect.

BTW: If you don't have the experience, artificial or otherwise, then you don't have complete knowledge.

You folks are treating knowledge as if it is only a collection of abstract data.

For a pianist to become a virtuoso, he or she needs more than abstract theory. In fact, most of it will do the pianist no good if the precise instructions that direct muscle movements in the arms and fingers are not in the module that directs the arms and fingers to play. The brain isn't an Internet forum where you can simply upload data. It's more like a computer that must be programmed.

In fact, you can't just read about math to learn math. YOU HAVE TO PERFORM MATH. You have to do homework. This is programming. It's not just abstract data. It's wiring the brain in specific ways.

Learning ISN'T simply acquiring abstract information. Learning is programming the brain. Learning abstract theory about surgery, piano playing, or bike riding WON'T teach you everything there is to know. You must program your brain (to date, this can only be done through physical experience of the real world).

I can paste program code into a Microsoft Word document, but the computer won't be able to use the code. The code must either be an executable file in a specific place on the computer, or there must be a line of code somewhere that tells the computer where to access the code.

I don't mind the term "programming" being used in this context if it's analogous to computer programming. If it's a claim that it's the same process then I quibble.
 
I don't know why you think you are a lone voice crying out in the wilderness. I don't think the brain is a single general CPU either.

Clearly the brain is a single general CPU, but it's a lot of other things as well. How much of mind is related to the brain's functioning as a CPU? I would say not much, but we really don't know.
 
Can you name any contemporary reductionists who are not materialists or physicalists?
Contemporary, meaning still alive?

Off the top of my head, Stephen Hawking.

In the last 100 years, A.J. Ayer, Moritz Schlick, Rudolf Carnap
 
PixyMisa, I do not understand.
Okay.

Are you saying that we can understand consciousness because it is based on information?
Not at all. I'm saying that we can understand it because it's a straightforward - indeed, simple - physical process.

and that because of this it can be understood as long as we know how to interpret this information?
No, I'm not saying that. Indeed, you have things entirely reversed; understanding it from the perspective of the information being processed is not a generally useful approach. I'm saying we can understand it as a physical process.

or that we cannot because we cannot have all the information because we cannot store it.
That's not even quoting out of context, because there is more than enough context there to make it obvious that I am not talking about that.

Immaterialist philosophers Frank Jackson and David Chalmers propose thought experiments - Mary's Room and the Chinese Room respectively - that purport to show that consciousness cannot be a physical process. Instead, they show the incredibly naive views on the physical representation of information held by Professors Jackson and Chalmers and their shared inability to construct a consistent argument.

or that we cannot because we cannot analyse and understand all the information?
Malerin's question about the impossibly precise MRI fails nearly as badly as Jackson and Chalmers do.

Are you also agreeing with Randfan that we can theoretically artificially copy consciousness successfully without successfully having all the information and interpreting all the information.
You can definitely copy consciousness without interpreting all the information encoded in the brain. All you need to do is accurately copy the physical structures of the brain; you don't need to understand it.

It's physically impossible to do this perfectly for a human brain. On the other hand, it's almost certainly physically possible to do it sufficiently accurately that the original and the copy would be indistinguishable for everyday purposes.

With simpler brains, particularly electronic ones, which already have the physical uncertainties ironed out as far as practically possible, it can be done perfectly, and indeed is done very, very frequently. Just call fork() on a reflective process and you're done.
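For readers who don't know the call, fork() is the POSIX primitive that duplicates a running process, current state and all, without anyone needing to interpret that state. A minimal sketch of the copy-without-understanding point, using Python's os.fork as a stand-in (Unix only; the toy "state" dictionary is my own illustration, not anything from the thread):

```python
import os

# A toy "process state": data held in memory by the running process.
state = {"experience": "red", "intensity": 0.9}

pid = os.fork()  # duplicate the whole process, current state included

if pid == 0:
    # Child process: an exact copy. Its state is identical to the
    # parent's at the moment of the fork; producing the copy required
    # no analysis or interpretation of that state.
    assert state == {"experience": "red", "intensity": 0.9}
    os._exit(0)
else:
    # Parent process: wait for the copy to finish checking itself.
    os.waitpid(pid, 0)
```

The copy is made at the physical (memory-page) level; neither the kernel nor the parent needs to understand what the bytes mean, which is the analogy being drawn to copying a brain without decoding it.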

Then I do not understand why you claim that Jackson and Chalmers talk about metaphysical materialism being illogical and you are not.
I don't claim that Jackson and Chalmers assert this; they do.

The problem is that both Jackson and Chalmers (a) assume physical absurdities (perhaps logical impossibilities, but that has not been proven here), and then (b) fail to construct logically valid arguments. That's what the last several pages of this thread have been about.

Please help me understand your last few posts :boggled:
Please consider reading them, since it's pretty clear that you haven't done so yet.
 
No. If brain and mental states are identical, then complete knowledge of one is complete knowledge of the other.

But you seem to be arguing that "knowledge of" something is the same thing as experiencing something. Without experiencing wetness, how could the knowledge that something will feel wet give you all the information you can have about that something?
 
