So you don't know, basically.
If I didn't know, I would have said I didn't know.
Consciousness is a synthesis of information (from wherever), memory, reference, and self-reference. It's self-reference that makes the critical difference, that takes it from simple awareness to self-awareness.
I don't see that self-consciousness has much to do with it here.
Self-consciousness?
Say you have a bunch of data processing going on and you mark out a region with sensors feeding back information.
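A minimal sketch of that arrangement, purely illustrative - the class names and the trivial "processing" are my own stand-ins, not anyone's model of a brain. The point is only the wiring: one region's inputs are readings of the system's own activity, which is what makes the combined system self-referential.

```python
# Illustrative sketch: a "marked out region" (Monitor) whose sensors feed
# back information about the rest of the system's own processing.
# All names here are hypothetical placeholders.

class Process:
    """Some ongoing data processing; it knows nothing about being observed."""
    def __init__(self):
        self.steps = 0

    def step(self, datum):
        self.steps += 1
        return datum * 2  # stand-in for arbitrary processing


class Monitor:
    """The marked-out region: its inputs are readings of the system itself."""
    def __init__(self, process):
        self.process = process
        self.log = []

    def observe(self):
        # The sensor feeds back a reading of the system's own state.
        self.log.append(self.process.steps)
        return self.log[-1]


p = Process()
m = Monitor(p)
for x in range(3):
    p.step(x)
    m.observe()
print(m.log)  # a record the system keeps of its own activity
```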
Mark out a region? Sensors?
Nick, what do you think the term "self-referential" means?
So what? How does this create, say, visual awareness? The argument is nonsensical.
Why is it nonsensical? What do you think visual awareness is?
The only route open to the materialist at this juncture, as I see it, is to assert that consciousness is an inherent property of certain types of system, or of all systems - and that stance must be inherently fraught as well.
No. In fact, that's exactly what we've been saying is the wrong way to look at it.
Come off it, Pixy. Autonomic systems self-monitor.
No they don't.
They feed back into the nervous system constantly.
Correct. They do not self-monitor.
So why am I not conscious of them?
Why should you be? There's a lot of stuff that you're not conscious (as in, consciously aware) of. You only become consciously aware of something when you pay attention to it - and "attention" is a technical term in neuroscience with a specific meaning. This is covered in the lecture series, by the way.
You can direct attention to (and take conscious control of) some of your autonomic processes (breathing, blinking) but not others.
This is what I'm asking here. Strong AI theorists will always claim that consciousness itself is an inherent property and thus doesn't in any way need to be created or explained. This may be so, but it's just a theory - how could it be proven?
Nonsense. I don't need a sense of self to see a tree.
Maybe, maybe not. Depends on what you mean by "see".
Again, this is covered in detail in the lecture series. The first level of neural processing in the visual cortex is a direct map of the retina - so much so, in fact, that it is possible with current technology to scan this region of the brain and reproduce an image (albeit a low-resolution image, and that only after teaching the system to interpret a particular person's neural patterns) of what someone is looking at.
Beyond that are multiple layers of further processing - layers that detect colours and shapes and orientation and movement, layers that interpret textures and perspective and shadows, layers that assemble a composite mental image from the large number of smaller images actually seen on individual saccades. Layers that recognise faces as distinct from other shapes, layers that recognise familiar objects, layers that fire emotional responses.

There's a layer that automatically corrects your colour perception based on the shape and orientation of objects, and you can mess with it (it's called the McCollough effect), resulting in a perceptual shift that can persist for months. And there's a layer that re-inverts the inverted image on the retina - it's not hard-wired, either; you can reset it by wearing inverting glasses for a few days, and then get it to reset itself again when you take the glasses off.
All of which is covered in the lecture series.
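The layered organisation described above can be caricatured in a few lines - each stage consumes only the previous stage's output, and the categories we consciously report (edges, objects) appear only at the later stages. The stage functions below are toy placeholders, not models of actual cortical computation.

```python
# Toy sketch of hierarchical visual processing: retinotopic map -> edge
# detection -> summary judgement. Function names and the "edge energy"
# measure are illustrative assumptions, nothing more.

def retinotopic_map(retina):
    # First stage: a near-direct spatial copy of the retinal input.
    return [row[:] for row in retina]

def detect_edges(image):
    # Later stage: respond to local contrast rather than raw intensity.
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in image]

def summarise(edges):
    # Still later: an aggregate judgement assembled from the lower layers.
    return sum(sum(row) for row in edges)

retina = [[0, 0, 9, 9],
          [0, 0, 9, 9]]
v1 = retinotopic_map(retina)   # the scannable, image-like first layer
v2 = detect_edges(v1)          # a derived layer, no longer a direct map
print(summarise(v2))           # total edge energy across the image
```

Note that only the first stage is image-like - which is why, as described above, it's the region you can scan and decode back into a picture.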
I need it to articulate the statement "I see the tree,"
Sort of.
but I don't need it to see the tree.
Depends on what you mean by "see".
The switching that's undertaken by the amygdala in binocular rivalry doesn't use selfhood, I'm sure of it.
Two points: First, I never said it did; second, what makes you sure of this?
Listen to them all, in order. You seem to suffer under a lot of fundamental misconceptions, so I think this is important.