• Quick note: the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Resolution of Transporter Problem

These states are all simple conscious experiences. They surround us constantly. I don't see Chalmers saying that it's necessary to introspect upon them, or to be aware that they are taking place, in order for them to exist. It is simply that introspection is needed to notice that these things, which constantly surround us, might seem not to be created by simple material processing.

Nick

Hi Lupus,

To make it clearer, hopefully...

Chalmers said:
"If any problem qualifies as the problem of consciousness it is this one...even when we have explained the performance of all the cognitive faculties and behavioural functions in the vicinity of experience - perceptual discrimination, categorisation, internal access, verbal report - there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? Why doesn't all this information processing go on 'in the dark,' free of any inner feel?" - Chalmers as quoted by Blackmore in Consciousness: An Introduction

It seems clear to me that Chalmers is saying that explaining experiential consciousness itself is the hard problem. This is the same as what Blackmore is asking Baars. Do you disagree?

Nick
 
Nick227 said:
It seems clear to me that Chalmers is saying that explaining experiential consciousness itself is the hard problem. This is the same as what Blackmore is asking Baars. Do you disagree?

Actually, you might be right about Chalmers using the "hard" problem as such. But then again, to me, it seems there's still a difference in where they are coming from: "how come certain actions make the system go 'online' in the first place" vs. "how come, when the system is 'online', it is accompanied by subjective experience". Notice the use of the term 'accompanied'. But you could interpret what he refers to as 'in the dark' as the same as unconscious, and then when something has an 'inner feel' it is conscious.

Thus, we're back to all this simply being a problem of definition. I'm not particularly fond of how you use it here, because it basically conflates 'subjective experience' with what Blackmore and Baars talk about as certain actions adding up to becoming 'conscious'. Do you think they meant it as such?

When you refer to the HPC as subjective experience itself as a universal category, it might not be empirically grounded but rather only notionally so. What are the grounds for thinking about subjective experience in its own right? Or should we rather always be talking about subjective experience of something? Chalmers could agree with the former, but I'm not so sure Blackmore or Baars would. Actually this is the same kind of critique Blackmore uses against people who talk about consciousness as in 'a stream of consciousness'. Thus, I don't think she actually asked a hard problem question; it only becomes a hard problem question if you think there is such a thing a priori (as Chalmers appears to be doing). With Baars, it's more complicated because he's fond of using metaphors anyway.
 
Actually, you might be right about Chalmers using the "hard" problem as such. But then again, to me, it seems there's still a difference in where they are coming from: "how come certain actions make the system go 'online' in the first place" vs. "how come, when the system is 'online', it is accompanied by subjective experience". Notice the use of the term 'accompanied'.

Yes, true. Chalmers seems to be going for consciousness as an epiphenomenon, though it might not be accurate to judge him on one word.

But you could interpret what he refers to as 'in the dark' as the same as unconscious, and then when something has an 'inner feel' it is conscious.

Yes, this is how I'm interpreting it. There is unconscious visual processing, and it appears (according to Baars) that pretty much next door to it there is conscious processing going on. Blackmore asks him what, in material terms, creates the difference. So, for me, that's clearly an HPC question.

Thus, we're back to all this simply being a problem of definition. I'm not particularly fond of how you use it here, because it basically conflates 'subjective experience' with what Blackmore and Baars talk about as certain actions adding up to becoming 'conscious'. Do you think they meant it as such?

Yes, I do. To me, Blackmore is asking Baars why this processing is going on "in the dark" and this one just next to it is going on "in the light." I don't see that it's more complex than this.

Blackmore has some of her excellent JCS articles online here and here and when I read them I picked up the impression that this is how she's thinking here.


When you refer to the HPC as subjective experience itself as a universal category, it might not be empirically grounded but rather only notionally so. What are the grounds for thinking about subjective experience in its own right? Or should we rather always be talking about subjective experience of something? Chalmers could agree with the former, but I'm not so sure Blackmore or Baars would.

Uhm, my own impression is that they would be OK with both. Could be wrong.

Actually this is the same kind of critique Blackmore uses against people who talk about consciousness as in 'a stream of consciousness'. Thus, I don't think she actually asked a hard problem question; it only becomes a hard problem question if you think there is such a thing a priori (as Chalmers appears to be doing). With Baars, it's more complicated because he's fond of using metaphors anyway.

I think complications may be arising because the passage I quoted is from Blackmore's book of interviews Conversations on Consciousness. In this book, imo, she is not so much expressing her own views but using her considerable background knowledge to ask good questions in interviews with others. I don't think Blackmore believes in the HPC, at least not to the degree that Chalmers & Co do, but she is asking the questions any normal, knowing interviewer would.

So for me, it's an HPC question because Baars has a background in dealing with this conscious/unconscious processing issue (it's prominent in GWT) and he's being interviewed by someone who understands this issue.

Anyway (!)... what I find interesting about Baars' theory is that it seems to me to allow P-zombies to exist, and possibly to undermine the notion that AI is qualitatively analogous to human consciousness. If there is unconscious processing, and conscious processing is merely a means to broadcast information to other unconscious modules, then surely, with another means to broadcast, there would be no need for this experiential consciousness. The P-zombie lives! And... given that a computer has a CPU and is wired quite differently to a human being, this seems to me to undermine the notion that computers necessarily experience in the same sense that humans do. Not that any of this is remotely on-topic!

Nick
 
In case it makes things clearer again, I find this quote from Dehaene and Naccache summarising their interpretation of GWT pretty explicit:

D&N said:
"At any given time, many modular cerebral networks are active in parallel and process information in an unconscious manner. An information becomes conscious, however, if the neural population that represents it is mobilized by top-down attentional amplification into a brain-scale state of coherent activity that involves many neurons distributed throughout the brain. The long distance connectivity of these "workspace neurons" can, when they are active for a minimal duration, make the information available to a variety of processes including perceptual categorization, long-term memorization, evaluation, and intentional action. We postulate that this global availability of information through the workspace is what we subjectively experience as a conscious state." - quoted by Dennett, Are We Explaining Consciousness Yet? (2000)
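Purely as an illustration (the module names and the selection rule below are my own invention, not anything in Baars or in Dehaene and Naccache), the architecture that quote describes can be sketched as toy code: modules process the same stimulus in parallel and locally, "in the dark", and one representation, once selected, is made globally available to every module:

```python
# Toy global-workspace sketch (illustrative only; module names and the
# "attentional" selection rule are invented, not taken from GWT itself).

class Module:
    def __init__(self, name):
        self.name = name
        self.received = []          # information broadcast to this module

    def process(self, stimulus):
        # "Unconscious" local processing: each module forms its own
        # representation of the stimulus, invisible to the other modules.
        return f"{self.name}:{stimulus}"

def broadcast(modules, stimulus):
    # Each module processes the stimulus locally, in parallel, "in the dark".
    local = {m.name: m.process(stimulus) for m in modules}
    # Stand-in for top-down attentional amplification: one representation
    # wins access to the workspace (the tie-break rule here is arbitrary).
    winner = max(local.values(), key=len)
    # Global availability: the winning content is sent to every module,
    # which is what the quote identifies with the content becoming conscious.
    for m in modules:
        m.received.append(winner)
    return winner

modules = [Module("vision"), Module("memory"), Module("motor")]
conscious_content = broadcast(modules, "red")
```

In this sketch only the winning content ever becomes visible to all modules; everything else stays local, which is the conscious/unconscious distinction the quote is drawing.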

Nick
 
If there is unconscious processing, and conscious processing is merely a means to broadcast information to other unconscious modules, then surely, with another means to broadcast, there would be no need for this experiential consciousness.

Except then there would be no behavior associated with consciousness, which a p-zombie must necessarily exhibit in order to be a true p-zombie.

So no, the p-zombie does not live.
 
Blackmore asks him what, in material terms, creates the difference. So, for me, that's clearly an HPC question.

No.

The HPC version of this question is "now that you have explained it -- in material terms -- tell me where the subjective experience is in the system?"
 
In case it makes things clearer again, I find this quote from Dehaene and Naccache summarising their interpretation of GWT pretty explicit:

What is that meant to clarify?

This is simply an elaboration on what Pixy and I have been telling you.
 
It seems clear to me that Chalmers is saying that explaining experiential consciousness itself is the hard problem. This is the same as what Blackmore is asking Baars. Do you disagree?

Yes.

I interpret him as saying that explaining why experiential consciousness is like experiential consciousness is the hard problem. There is no material solution to that question other than to make it axiomatic.

You might think differently, but from what I have learned here on these forums, most HPC proponents do not believe there is a material solution.

The HPC opponents, like Pixy and me, simply grant that the solution is axiomatic, which means the problem was never valid to begin with. Why is experience like experience? Because experience is experience. The identity axiom. Remember me bringing that up? This is what I was talking about. I mean, you can ask "why is 1 like 1?" but that really isn't a valid question in a logical sense.

Blackmore and Baars seem to be asking exactly what you are asking. Which is different from "why is 1 like 1?"
 
Except then there would be no behavior associated with consciousness, which a p-zombie must neccessarily exhibit in order to be a true p-zombie.

So no, the p-zombie does not live.

What behaviour is specific to phenomenal consciousness? What behaviour cannot occur in the absence of phenomenality?

Nick
 
Yes.

I interpret him as saying that explaining why experiential consciousness is like experiential consciousness is the hard problem. There is no material solution to that question other than to make it axiomatic.

We don't know that. It is not that the HPC or qualia necessarily refute materialism; rather, if the HPC and materialism are both valid then there is an explanatory gap. It could also be that either the HPC or materialism or both are invalid, but either way we don't know for sure. It's too early to be jumping for axioms!

This is what Baars affirms. He says "we are in the study of consciousness where Benjamin Franklin was in the study of electricity in the 1700s."

You might think differently, but from what I have learned here on these forums, most HPC proponents do not believe there is a material solution.

The HPC opponents, like Pixy and me, simply grant that the solution is axiomatic, which means the problem was never valid to begin with. Why is experience like experience? Because experience is experience. The identity axiom. Remember me bringing that up? This is what I was talking about. I mean, you can ask "why is 1 like 1?" but that really isn't a valid question in a logical sense.

Blackmore and Baars seem to be asking exactly what you are asking. Which is different from "why is 1 like 1?"

Anyone can say it's axiomatic, but it gets real science nowhere, RD. You have to research. You have to investigate. You have to account for phenomena empirically. Your and Pixy's approach is equivalent to flat denial. The problem with this is that if you have not really investigated the HPC, the models you create are inevitably still prone to having Cartesian elements. Thus Pixy has an "experiencing feedback loop" and others have pontifical neurons.

If you get out of the investigation too early, what you create is flawed. Nature of life, dude. I'd forget about axioms if I were you.

Nick
 
No.

The HPC version of this question is "now that you have explained it -- in material terms -- tell me where the subjective experience is in the system?"

The HPC doesn't assert consciousness as an epiphenomenon. It is to explain how phenomenal consciousness even exists. It is to say, in simple terms, "why is it so light in here, when surely this processing could be going on in the dark?" This, for me, is what Blackmore is asking Baars.

I mean, to be honest, you could also take the question to be a simple materialist one, now I think about it more. Because we don't know why one bit of tissue is a substrate for phenomenality and another is not, Baars could have taken Blackmore's question as a simple functional issue. And Blackmore could have meant it this way. If we knew the answer to the easy question here, then it would be clear that Blackmore and Baars were referring to the hard one. Personally I think she is alluding to the harder issue, but I must admit I could be wrong.

Nick
 
lupus said:
When you refer to the HPC as subjective experience itself as a universal category, it might not be empirically grounded but rather only notionally so. What are the grounds for thinking about subjective experience in its own right? Or should we rather always be talking about subjective experience of something? Chalmers could agree with the former, but I'm not so sure Blackmore or Baars would.
Nick227 said:
Uhm, my own impression is that they would be OK with both. Could be wrong.


Well, I'm somewhat unsure also, but the reason for my scepticism about the whole HPC-question is due to what Baars is saying prior to Susan stressing him on the subject. For instance in the beginning of the interview he says the following:
Baars said:
If you ask questions about consciousness purely in terms of subjectivity—‘What is it like to be you or me?’—you get into the classic mind-body paradoxes where you end up with the three classical positions in the mind-body problem: mentalism, physicalism, and dualism; and the dialogue— or rather, the dialogue of the deaf— on those particular issues, goes round and round and round and round and never gets resolved. So from my point of view the first thing that you must do if you would like to actually answer some questions, is pose the questions in a way that’s answerable.
So for me at least, that is a pretty clear indicator of him not buying into the traditional HPC discourse, or at the very least, that he tries to stay away from it.


The other example is in the exchange that immediately follows:
Baars said:
One of the clearly answerable questions that I think we have today is, what is the difference between two identical pieces of knowledge, one of which is conscious, and the other one is unconscious? That’s an answerable question because it allows you to treat consciousness as a variable; and I would argue that anything in science that we can ask questions about has to be treated as a variable. From that point of view the problem with the mind-body paradoxes is that they are always asked from one perspective, either from the inside perspective or the outside perspective; none of the classical positions allows us to ask about consciousness as a variable.


For sure, Susan then goes on to stress him on the issues regarding conscious and unconscious, and the apparent explanatory gap therein. But once again, I think the whole stage for the question is meant to treat the explanatory gap in terms of consciousness as a variable, rather than to jump to a perspective he previously criticized as going round and round. At least this is how I interpret the discourse.


Nick227 said:
I think complications may be arising because the passage I quoted is from Blackmore's book of interviews Conversations on Consciousness. In this book, imo, she is not so much expressing her own views but using her considerable background knowledge to ask good questions in interviews with others. I don't think Blackmore believes in the HPC, at least not to the degree that Chalmers & Co do, but she is asking the questions any normal, knowing interviewer would.


Yes, that's what makes it so difficult to assess whether she's coming from a point of HPC (i.e., a different approach or epistemological perspective altogether) or whether she's stressing him about the explanatory gap in accordance with his own epistemological perspective.
 
Baars said:
One of the clearly answerable questions that I think we have today is, what is the difference between two identical pieces of knowledge, one of which is conscious, and the other one is unconscious? That’s an answerable question because it allows you to treat consciousness as a variable; and I would argue that anything in science that we can ask questions about has to be treated as a variable. From that point of view the problem with the mind-body paradoxes is that they are always asked from one perspective, either from the inside perspective or the outside perspective; none of the classical positions allows us to ask about consciousness as a variable.

For sure, Susan then goes on to stress him on the issues regarding conscious and unconscious, and the apparent explanatory gap therein. But once again, I think the whole stage for the question is meant to treat the explanatory gap in terms of consciousness as a variable, rather than to jump to a perspective he previously criticized as going round and round. At least this is how I interpret the discourse.

Yes, fair enough. The question doesn't necessarily need to invoke the HPC. As there doesn't appear to be a scientific consensus on how neuronal data is made either conscious or unconscious then it could be considered that this is the only issue being addressed, though for me I think it's clear to both that the HPC at least lurks in the background.

For me it also depends on how you consider the HPC itself. Ned Block, quoted by Dennett (2000, as before), asserts that (in GWT) there is still a difference between phenomenality and global access. So, as (I think) an HPC proponent, he is with Chalmers in apparently considering that there is still an explanatory gap even when one has a material explanation for one set of neuronal data being globally accessible and another not.

I think another HPC fan might still legitimately consider that phenomenality = global access, but that we don't have an adequate materialist explanation for global access as against unconscious processing.

Baars himself, just after the part of the interview I quoted, mentions Edelman's dynamic core hypothesis (which I don't know so much about) as a proposed mechanism to account for the latter point just above. If this were more widely accepted, or another hypothesis considered valid, then it would reduce the ground in which the HPC could hide.

Personally I'm still not fully convinced that the HPC is invalid. I think Baars is right when he says that it's going to take a long time and a lot more data yet to get there. To me also, it's implicit in the acceptance of the fact that we don't have an agreed material explanation for global access vs unconscious processing that the HPC may still be valid, and I think both Baars and Blackmore would agree. Until you can really close up these explanatory gaps the HPC will always have a place to hang out in!

Nick
 
Nick227 said:
Personally I'm still not fully convinced that the HPC is invalid. I think Baars is right when he says that it's going to take a long time and a lot more data yet to get there. To me also, it's implicit in the acceptance of the fact that we don't have an agreed material explanation for global access vs unconscious processing that the HPC may still be valid, and I think both Baars and Blackmore would agree. Until you can really close up these explanatory gaps the HPC will always have a place to hang out in!


I think even if we had a material explanation – whatever that actually means – the HPC would not necessarily go away easily, or at least some people would still cling to it regardless. The problem stems from what Baars is partially touching upon, namely that some people would still insist: "if you can't explain it solely from a pure first-person perspective, then it's not explained at all." It might, however, be that they are asking for something impossible to begin with, in which case the explanation, any explanation, would only reach deaf ears.
 
I think even if we had a material explanation – whatever that actually means – the HPC would not necessarily go away easily, or at least some people would still cling to it regardless. The problem stems from what Baars is partially touching upon, namely that some people would still insist: "if you can't explain it solely from a pure first-person perspective, then it's not explained at all." It might, however, be that they are asking for something impossible to begin with, in which case the explanation, any explanation, would only reach deaf ears.

Or one might say that in such a case the question is invalid since after all providing an answer is impossible.

Which is the route HPC opponents take.
 
The HPC doesn't assert consciousness as an epiphenomenon. It is to explain how phenomenal consciousness even exists. It is to say, in simple terms, "why is it so light in here, when surely this processing could be going on in the dark?" This, for me, is what Blackmore is asking Baars.

I disagree. I interpret the HPC to say, in simple terms, "why is the subjective experience of it being so light in here different from the objective processing that could be going on in the dark?" And that is definitely not what Blackmore is asking.

Blackmore is asking "what are the differences that make this lump of neurons in a given brain unconscious and this other lump conscious?"

The HPC asks "why am I experiencing consciousness when to my objective observation this other brain <that is not my own> appears as just a lump of neurons?"

Do you see the difference? The former is a question about the mechanics behind behavior. The latter is a question about how subjectivity arises in an objective world. Blackmore is not asking about subjectivity, she is asking about mechanics.

I hate to appeal to authority, but it seems like the others who have chimed in on this issue tend to agree.
 
I disagree. I interpret the HPC to say, in simple terms, "why is the subjective experience of it being so light in here different from the objective processing that could be going on in the dark?" And that is definitely not what Blackmore is asking.

You need to read Chalmers more closely. The HPC does not assert that there can be no materialist model for consciousness. Chalmers says that after you've solved all the easy problems there may still be something else left to explain.

Thus, if we're talking about GWT, phenomenality can completely equal global access. Some HPC proponents, for example Block who I quoted earlier, assert that it doesn't but there is nothing in the HPC to say that it cannot.

The HPC is to demonstrate how phenomenality - subjective experience - occurs, by material means or otherwise.

BTW, I'm still interested to hear from you an example of a behaviour that cannot occur in the absence of phenomenal consciousness.

Nick
 
You need to read Chalmers more closely. The HPC does not assert that there can be no materialist model for consciousness.

No, but it does pose questions for which there is no valid answer in a materialist framework.

It is like asking a mathematician "why is a circle round?" The mathematician can give a very informative definition of a circle, but an HPC proponent would only respond with "...but where does the 'roundness' come from?"

Thus, if we're talking about GWT, phenomenality can completely equal global access. Some HPC proponents, for example Block who I quoted earlier, assert that it doesn't but there is nothing in the HPC to say that it cannot.

Nothing except for the HPC itself. Look, if you want to redefine the HPC from how it is commonly understood to a weaker version that suits your needs, that is fine. But you aren't in agreement with the vast majority of people who know anything about the issue.

The HPC is to demonstrate how phenomenality - subjective experience - occurs, by material means or otherwise.

No. You have already taken a giant step that the HPC doesn't allow -- you are equating phenomenality with subjective experience. You can explain all you want how GWT models will lead to a system behaving as if it were conscious, and intelligent people will pick up on the fact that if their brains can be objectively modeled by such a system it is likely the source of their own subjectivity, but HPC proponents will still ask for the link between this objective system and their own subjective experience.

And you can't provide that link, because it is axiomatic. All you can tell them is that their subjective experience is simply the result of being the system that can be modeled by GWT.

I mean, how do you respond to a question like "when we look at an MRI of that person when they look at a red screen, all we see are patterns of neuron firing. When I look at the red screen, I experience the qualia of red. What is the difference? How did it go from neurons firing to qualia?"

BTW, I'm still interested to hear from you an example of a behaviour that cannot occur in the absence of phenomenal consciousness.

There isn't one. Which is a good reason to consider the HPC invalid.

But as for p-zombies... just because a behavior can be exhibited by a p-zombie doesn't mean the notion of a p-zombie is valid. A p-zombie by definition exhibits all possible behaviors of a human, and that is an infinite set of behaviors which thus could not be defined beforehand. Thus p-zombies are not logically consistent.

You could say a "weak" p-zombie is valid, where "weak" means the p-zombie only has to mimic a finite set of behaviors. And in that case, determining whether someone is a weak p-zombie comes down to probability. What is the likelihood that a given behavior was in their set? To determine that you would need to weigh the chances of behavior X occurring at time Y against the chances of the designer of the p-zombie having programmed it to exhibit behavior X at time Y.
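With entirely invented numbers (nothing from the thread), the weighing described here can be written as a one-line Bayes update; `posterior_zombie` and its probabilities are hypothetical, purely to make the comparison concrete:

```python
# Illustrative only: a Bayesian form of the weighing described above,
# with made-up probabilities. "Zombie" here means a weak p-zombie whose
# designer pre-programmed a finite set of behaviors.

def posterior_zombie(prior_zombie, p_behavior_if_human, p_programmed):
    # P(zombie | behavior X at time Y) by Bayes' rule: weigh the chance
    # the designer happened to program X at Y against the chance a
    # human would exhibit X at Y.
    num = p_programmed * prior_zombie
    den = num + p_behavior_if_human * (1.0 - prior_zombie)
    return num / den

# A behavior any human would show, but that the designer was unlikely
# to have anticipated, pushes the posterior toward "not a zombie".
p = posterior_zombie(prior_zombie=0.5,
                     p_behavior_if_human=0.9,
                     p_programmed=0.1)
```

Observing many such behaviors just repeats the update, so the posterior collapses quickly one way or the other, which is the probabilistic "unmasking" of the weak p-zombie.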
 
Nothing except for the HPC itself. Look, if you want to redefine the HPC from how it is commonly understood to a weaker version that suits your needs, that is fine. But you aren't in agreement with the vast majority of people who know anything about the issue.

I quoted Chalmers' original description of the HPC. I'm not redefining it. This is what Chalmers originally wrote. Now if you want to change it around so that it's easier to dismiss, that's up to you, but I'm using the original statements given by the individual who first articulated the notion of an HPC.

No. You have already taken a giant step that the HPC doesn't allow -- you are equating phenomenality with subjective experience.

The HPC does allow it. Please read Chalmers' original quote again. He says that there may be something still to be explained. The word "may" is not the same as "will."

You can explain all you want how GWT models will lead to a system behaving as if it were conscious, and intelligent people will pick up on the fact that if their brains can be objectively modeled by such a system it is likely the source of their own subjectivity, but HPC proponents will still ask for the link between this objective system and their own subjective experience.

Some probably will, as Lupus points out. Others will give up. It depends how the research goes.

And you can't provide that link, because it is axiomatic. All you can tell them is that their subjective experience is simply the result of being the system that can be modeled by GWT.

We don't know the answer yet, as Baars points out. It's too early. There may even be some big X factor out there no one's really even thought of yet. We just don't know.

There isn't one.

A few posts back you told me there was.

But as for p-zombies... just because a behavior can be exhibited by a p-zombie doesn't mean the notion of a p-zombie is valid. A p-zombie by definition exhibits all possible behaviors of a human, and that is an infinite set of behaviors which thus could not be defined beforehand. Thus p-zombies are not logically consistent.

You could say a "weak" p-zombie is valid, where "weak" means the p-zombie only has to mimic a finite set of behaviors. And in that case, determining whether someone is a weak p-zombie comes down to probability. What is the likelihood that a given behavior was in their set? To determine that you would need to weigh the chances of behavior X occurring at time Y against the chances of the designer of the p-zombie having programmed it to exhibit behavior X at time Y.

I don't think this is the issue with P-zombies. The traditional HPC opponent stance is that it's simply not possible to have the behaviour without phenomenal consciousness. It's an impossibility borne of misguided reasoning (see Dennett 1991). But GWT does seem to me to challenge this. If we can achieve all the processing functions unconsciously, and consciousness is merely one module broadcasting to the others, then surely another means of transmission could be developed, negating the need for consciousness. This seems logical to me. Thus, the P-zombie lives. (One might argue that all modules within the brain are themselves phenomenally conscious, of course.)

This argument also undermines, it seems to me, the idea that AI is necessarily analogous to human consciousness. If P zombies are feasible and computers could be considered to be examples of them then we may be straight back to the HPC again.

Nick
 
We don't know the answer yet, as Baars points out. It's too early. There may even be some big X factor out there no one's really even thought of yet. We just don't know.

Well, see, you are talking about magic again. This is the part of consciousness that mathematics can't describe, right? Baars is talking about mechanisms we don't understand, not "some big X factor." Do you really think the "professional researchers" buy into "some big X factor?"

I see what the snake oil you are trying to sell is made of now. You are saying that there might be some idea that could provide a non-axiomatic link between objectivity and subjectivity. And you are also saying, via your previous posts, that such an idea might not be mathematically describable.

...that isn't materialism Nick.

Furthermore it makes it pretty clear why you really do buy into the HPC -- you want magic to be real, and if the HPC didn't exist, there would be no more room for magic.

A few posts back you told me there was.

No, I told you that if there was a p-zombie who exhibited all the behaviors that can arise from unconscious processing it would lack the behaviors that arise from consciousness and hence not be a p-zombie.

If we can achieve all the processing functions unconsciously, and consciousness is merely one module broadcasting to the others, then surely another means of transmission could be developed, negating the need for consciousness. This seems logical to me.

Ahh... your dualism becomes apparent.

By definition if the function of conscious processing is achieved then the processing is conscious.

If you think otherwise, you are a dualist.

This argument also undermines, it seems to me, the idea that AI is necessarily analogous to human consciousness. If P zombies are feasible and computers could be considered to be examples of them then we may be straight back to the HPC again.

...yes, and the idea that people are magical compared to the rest of the matter in the universe! Oh wait... we already are back to that idea ...
 
