Dennett's original model, Multiple Drafts, needs no "self-referencing loops" to create consciousness. In fact, I'm fairly sure he would ridicule the idea, though that shouldn't necessarily mean much, given other ideas he has ridiculed and subsequently re-examined.
Likewise Fame in the Brain, which is basically his reworking of GWT. Self-referencing doesn't come into the equation. Read through his 2000 paper and see if you can find anything about self-referencing loops.
Good grief, Nick, that paper is strewn with references to self-reference! How can you possibly fail to grasp that?
What do you think a "proto-self evaluator" is? What do you think he's talking about when he says:
Dennett said:
The looming infinite regress can be stopped the way such threats are often happily stopped, not by abandoning the basic idea but by softening it. As long as your homunculi are more stupid and ignorant than the intelligent agent they compose, the nesting of homunculi within homunculi can be finite, bottoming out, eventually, with agents so unimpressive that they can be replaced by machines (Dennett, 1978).
Dennett's talking first about the fact that human consciousness is built up from a network of simpler information processing subsystems. But more generally, what's the alternative to infinite regress?
Loops, Nick. Loops.
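To make the contrast concrete, here's a minimal toy sketch (mine, not anything in Dennett's paper) of what "bottoming out in a loop" can look like: instead of an evaluator that needs a meta-evaluator, which needs a meta-meta-evaluator, and so on forever, the system just feeds its own previous assessment back in as ordinary input.

```python
# Toy illustration (mine, not Dennett's): the alternative to an
# infinite tower of evaluators is one dumb feedback edge.

def step(percepts, self_state):
    """One update cycle: the last self-assessment is just another
    input, so no higher-level homunculus is ever required."""
    return {
        "world": percepts,
        "me_last_cycle": self_state,  # the loop that closes the regress
    }

state = {}  # bottoming out: start from a dumb, contentless state
for percepts in (["red patch"], ["red patch", "tone"], ["tone"]):
    state = step(percepts, state)

print(state["me_last_cycle"]["world"])  # the system reporting its own prior take
```

The point is purely structural: a single feedback edge does the work the regress pretended to need an infinite nesting of homunculi for.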
They just don't come into the equation because consciousness itself is a global access state.
There is no global access state. That's just a model, laid on top of self-reference.
Attention may be directed by self-referencing loops, but consciousness itself is not innately self-referencing.
Fail.
Go back to Descartes' cogito. That's a statement about self-referential information processing.
I cannot see how you can reconcile your model with GWT.
GWT is a higher-level model of the human mind. It cannot exist without self-referential information processing.
I can't see how you can reconcile it with O'Regan's Sensorimotor Theory either.
Your theory might stand up, to a degree, when considering inner dialogue alone.
Again, what do you think you're talking about when you say "inner dialogue"?
I'm not clear here. But in considering human consciousness as a whole, with all its varied aspects, it really does seem to me a complete non-starter.
What aspects?
What are your definitions here? How do you distinguish between the two?
Awareness is perception.
Consciousness is awareness of self.
See the loop?
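Spelled out as a toy sketch (my gloss on those two definitions, nothing more): consciousness here is just the awareness operation pointed back at its own output.

```python
# Toy sketch (my gloss on the two definitions above):
# awareness maps inputs to a state; consciousness is the same
# mapping applied to the system's own awareness state.

def aware(inputs):
    """Awareness as perception: inputs in, perceptual state out."""
    return {"perceived": tuple(inputs)}

def conscious(inputs):
    """Awareness of self: the aware() operation pointed back at
    its own output -- the loop in question."""
    first_order = aware(inputs)
    return aware([("my own state", first_order)])

print(conscious(["monitor", "coffee cup"]))
```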
The narrative self is the "user illusion," the artificial notion that conscious states belong to someone.
That is self-referential, yes.
I would take it that it requires language.
Why on Earth would you think that? All it requires is self-reference.
Certainly those areas of the brain which create and interpret language are known to be especially active during inner dialogue.
How is that relevant?
So...are you saying there can be awareness of the monitor without consciousness present then?
Of course. That's exactly what Dennett is getting at with his thermostat example. It's aware, but it's not self-aware, not conscious.
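A minimal sketch of that distinction, riffing on the thermostat (my toy, not Dennett's): the device registers and responds to the room, but nothing in it represents the device itself.

```python
# Toy version of the distinction (mine): the thermostat responds
# to the world, but nothing in it models the thermostat.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def sense_and_act(self, temperature):
        """Awareness without self-awareness: the only thing
        represented here is the room, never the device's own state."""
        return "heat on" if temperature < self.setpoint else "heat off"

t = Thermostat(setpoint=20.0)
print(t.sense_and_act(18.5))  # "heat on" -- responsive, but no self-model
```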
I think I've explained that about thirty times now.
How far do you want to go back? It traces back at least to Descartes (though after touching on the truth, he wandered off into less productive fields). Probably further.
You're saying that the monitor in front of me is not an aspect of consciousness?
No, Nick. The monitor in front of you is a monitor.
ETA: Your ideas may be all well and good for AI, Pixy. I don't know. But for human consciousness they just don't cut it, as I see it, and furthermore they still leave space for the HPC to creep back in.
Read Hofstadter.
Also, please provide a statement of the HPC that isn't inherently self-contradictory. Chalmers certainly can't.
The leading question for me is...how do you actually model this difference between conscious and unconscious streams of data in GWT using self-referencing loops?
What is a "conscious stream of data" supposed to be?
How is global access innately self-referencing?
Once again, there is no such thing as global access at any physical level. That's physically impossible. There are just neurons sending signals to one another.
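If it helps, here's a toy sketch of that claim (my own illustration, assuming "broadcast" is shorthand): what GWT calls global access unpacks into nothing but local, pairwise sends.

```python
# Toy sketch (mine): the "global workspace" as a pattern, not a
# place -- no extra "global" channel exists in the implementation.

class Neuron:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, other, signal):
        other.inbox.append((self.name, signal))

neurons = [Neuron(f"n{i}") for i in range(5)]

def broadcast(source, signal):
    """'Global access' reduced to one unit signalling each of its
    peers in turn, nothing more."""
    for target in neurons:
        if target is not source:
            source.send(target, signal)

broadcast(neurons[0], "salient stimulus")
print(neurons[3].inbox)  # [('n0', 'salient stimulus')]
```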