Actually, you might be right about Chalmers using the "hard" problem as such. But then again, to me, it seems there's still a difference in where they are coming from: "how come certain actions make the system go 'online' in the first place" vs. "how come, when the system is 'online', it is accompanied by subjective experience". Notice the use of the term 'accompanied'.
Yes, true. Chalmers seems to be going for consciousness as an epiphenomenon, though it might not be accurate to judge him on one word.
But you could interpret what he refers to as 'in the dark' as being the same as unconscious, and then something with an 'inner feel' as being conscious.
Yes, this is how I'm interpreting it. There is unconscious visual processing, and it appears (according to Baars) that pretty much next door to it there is conscious processing going on. Blackmore asks him what, in material terms, creates the difference. So, for me, that's clearly an HPC question.
Thus, we're back to all of this simply being a problem of definition. I'm not particularly fond of how you use it here because it basically conflates 'subjective experience' with what Blackmore and Baars talk about as certain actions adding up to becoming 'conscious'. Do you think they meant it as such?
Yes, I do. To me, Blackmore is asking Baars why this processing is going on "in the dark" and this one just next to it is going on "in the light." I don't see that it's more complex than this.
Blackmore has some of her excellent JCS articles online here and here, and when I read them I picked up the impression that this is how she's thinking here.
When you refer to the HPC as subjective experience itself as a universal category, it might not be empirically grounded but rather only notionally so. What are the grounds for thinking about subjective experience in its own right? Or should we rather always be talking about subjective experience of something? Chalmers could agree with the former, but I'm not so sure Blackmore or Baars would.
Uhm, my own impression is that they would be OK with both. Could be wrong.
Actually, this is the same kind of critique Blackmore uses against people who talk about consciousness as 'a stream of consciousness'. Thus, I don't think she actually asked a hard problem question; it only becomes a hard problem question if you assume a priori that there is such a thing (as Chalmers appears to be doing). With Baars it's more complicated, because he's fond of using metaphors anyway.
I think complications may be arising because the passage I quoted is from Blackmore's book of interviews, Conversations on Consciousness. In this book, imo, she is not so much expressing her own views but using her considerable background knowledge to ask good questions in interviews with others. I don't think Blackmore believes in the HPC, at least not to the degree that Chalmers & Co do, but she is asking the questions any normal, knowing interviewer would.
So for me, it's an HPC question because Baars has a background in dealing with this conscious/unconscious processing issue (it's central to GWT) and he's being interviewed by someone who understands this issue.
Anyway (!)... what I find interesting about Baars' theory is that it seems to allow P-zombies to exist, and possibly undermines the notion that AI is qualitatively analogous to human consciousness. If there is unconscious processing, and conscious processing is merely a means of broadcasting information to other unconscious modules, then surely, given another means of broadcasting, there would be no need for this experiential consciousness. The P-zombie lives! And given that a computer has a CPU and is wired quite differently from a human being, this seems to me to undermine the notion that computers necessarily experience in the same sense that humans do. Not that any of this is remotely on-topic!
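To make the broadcast point concrete, here is a minimal toy sketch in Python. It is my own illustration, not Baars' actual model, and the class and module names are made up: a "global workspace" simply relays one selected content to registered specialist modules. Nothing in the relay step requires the broadcast to be experienced, which is the sense in which a differently wired system could do the same functional job.

# Toy sketch (my illustration, not Baars' model): a "global workspace"
# that merely broadcasts a winning content to independent specialist modules.

class Module:
    """An 'unconscious' specialist processor."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, content):
        # Each module handles the broadcast content locally, "in the dark".
        self.inbox.append(content)


class GlobalWorkspace:
    """Relays one selected content to every registered module."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def broadcast(self, content):
        # Functionally, this loop is all the "global availability" amounts to;
        # nothing here requires the broadcast to be accompanied by experience.
        for m in self.modules:
            m.receive(content)


if __name__ == "__main__":
    workspace = GlobalWorkspace()
    for name in ("vision", "language", "motor"):
        workspace.register(Module(name))
    workspace.broadcast("red apple on the left")

The design point of the sketch is just that the broadcast role is substrate-neutral: any relay mechanism satisfies it, which is why the functional story alone seems to leave room for the zombie.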
Nick