
Resolution of Transporter Problem

What we're discussing is called, I think, contrastive phenomenology - trying to find out what makes the difference between relatively similar conscious and unconscious processing/events.

I know. You can't find that out until you define what the difference between conscious and unconscious is. And if your definition includes anything like subjective experience then you are out of luck, because you can't find that in anything besides yourself.

That is why the people that know what they are talking about confine the problem to the domain of behavior.

dear god in heaven, RD, you'd try the patience of a saint. Being conscious of the tree means it appears in the field of view. You can see it. Making a story up about it, or placing into some subject-object relationship, is post hoc processing.

Yeah see this is going exactly like that other thread.

My next move -- just like before -- is to ask you whether you are "conscious" of every single chair in a stadium when you are watching a baseball game, or whether you are "conscious" of every single leaf when you look at a tree, or whether you are "conscious" of every single wave when you look at the ocean, or even whether you are "conscious" of every single character when you look at this screen.

So what is your answer this time?
 
In that case, I wouldn't think along those lines, if I were you. Personally, I prefer to discuss things with someone who when presented with, say, the question "Do you see the monitor?" replies with a one word answer. Up to you of course.

Nick

But at the very least, seeing the monitor involves more than it being in your field of view.

At the very least, one has to decide whether the referent of your question is a certain object in their field of view -- otherwise the question is meaningless to them.

Which means, at the very least, a whole bunch of "post-hoc" processing, as you say.
 
So you don't know, basically.
If I didn't know, I would have said I didn't know.

Consciousness is a synthesis of information (from wherever), memory, reference, and self-reference. It's self-reference that makes the critical difference, that takes it from simple awareness to self-awareness.

I don't see that self-consciousness has much to do with it here.
Self-consciousness?

Say you have a bunch of data processing going on and you mark out a region with sensors feeding back information.
Mark out a region? Sensors?

Nick, what do you think the term "self-referential" means?

So what? How does this create, say, visual awareness? The argument is nonsensical.
Why is it nonsensical? What do you think visual awareness is?

The only route the materialist has at this juncture, as I see it, is to assert that consciousness is an inherent property of certain types of system, or of all systems, and this stance must be inherently fraught also.
No. In fact, that's exactly what we've been saying is the wrong way to look at it.

Come off it, Pixy. Autonomic systems self-monitor.
No they don't.

They feed back into the nervous system constantly.
Correct. They do not self-monitor.

So why am I not conscious of them?
Why should you be? There's a lot of stuff that you're not conscious (as in, consciously aware) of. You only become consciously aware of something when you pay attention to it - and attention is a technical term in neuroscience with a specific meaning. This is covered in the lecture series, by the way.

You can direct attention to (and take conscious control of) some of your autonomic processes (breathing, blinking) but not others.

This is what I'm asking here. Strong AI theorists will always claim that consciousness itself is an inherent property and thus doesn't in any way need to be created or explained. This may be so, but it's just a theory, and how can it be proven?

Nonsense. I don't need a sense of self to see a tree.
Maybe, maybe not. Depends on what you mean by "see".

Again, this is covered in detail in the lecture series. The first level of neural processing in the visual cortex is a direct map of the retina - so much so, in fact, that it is possible with current technology to scan this region of the brain and reproduce an image (albeit a low-resolution image, and that only after teaching the system to interpret a particular person's neural patterns) of what someone is looking at.

Beyond that are multiple layers of further processing - layers that detect colours and shapes and orientation and movement, layers that interpret textures and perspective and shadows, layers that assemble a composite mental image from the large number of smaller images actually seen on individual saccades. Layers that recognise faces as distinct from other shapes, layers that recognise familiar objects, layers that fire emotional responses. There's a layer that automatically corrects your colour perception based on the shape and orientation of objects, and you can mess with it (it's called the McCollough effect), resulting in a perceptual shift that can persist for months. Of course, there's a layer that re-inverts the inverted image of the retina, and it's not hard-wired; you can reset it by wearing inverting glasses for a few days, and then get it to reset itself again when you take the glasses off.

All of which is covered in the lecture series.
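As a rough caricature of that kind of layered, feed-forward organisation, here's a toy sketch. To be clear, the layer names, the stub logic, and the simple-chain structure are all invented for illustration - real cortical processing is massively parallel and recurrent, and none of this comes from the lecture series itself:

```python
# Toy sketch of layered visual processing: each "layer" is a function that
# takes the output of the previous one. Purely illustrative.

def retinal_map(image):
    # First level: a near-direct map of the retina.
    return {"pixels": image}

def detect_edges(data):
    # Early feature layers: edges, orientation, movement (stubbed here as
    # "any pixel brighter than 0.5 counts as an edge point").
    data["edges"] = [(r, c) for r, row in enumerate(data["pixels"])
                     for c, v in enumerate(row) if v > 0.5]
    return data

def recognise_objects(data):
    # Later layers: familiar shapes, faces, objects (stubbed as a single
    # coarse "blob" whenever any edges were found).
    data["objects"] = ["blob"] if data["edges"] else []
    return data

def visual_pipeline(image):
    # Compose the layers in order, each feeding the next.
    data = retinal_map(image)
    for layer in (detect_edges, recognise_objects):
        data = layer(data)
    return data

result = visual_pipeline([[0.0, 0.9], [0.1, 0.0]])
print(result["objects"])  # one coarse "object" found
```

The point of the sketch is only the shape of the architecture: a raw map at the bottom, successive layers extracting richer descriptions above it.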

I need it to articulate the statement "I see the tree,"
Sort of.

but I don't need it to see the tree.
Depends on what you mean by "see".

The switching that's undertaken by the amygdala in binocular rivalry doesn't use selfhood, I'm sure of it.
Two points: First, I never said it did; second, what makes you sure of this?

Which lecture?
Listen to them all, in order. You seem to suffer under a lot of fundamental misconceptions, so I think this is important.
 
Well, I just listened to 66 mins of Wolfe's lecture #6 (Perceiving: Interpreting the Information), which I'm assuming is the one you're referring to, and he didn't touch on conscious awareness.
Keep listening.

He's not looking at the hard problem at all.
There is no "hard problem".

Not that there's anything wrong with this, but he's just not going there.
There's no there there.

I've listened to a couple of others and it's the same. He's looking at all the easy problems, which is great, but he's not AFAICanSee dealing with, say, what makes me conscious of "this" but not "this", even though both are being concurrently processed by similar circuitry.
He devotes an entire lecture to attention. Even has a live demonstration (fortunately he uses auditory attention as his example, so it comes through in the recording).

It doesn't even seem thus far to be the kind of thing he would go into.
Keep listening.
 
In that case, I wouldn't think along those lines, if I were you. Personally, I prefer to discuss things with someone who when presented with, say, the question "Do you see the monitor?" replies with a one word answer. Up to you of course.
The problem is, Nick, that Lithrael is largely correct. (And perhaps entirely correct, depending on how we define certain terms.)
 
dear god in heaven, RD, you'd try the patience of a saint.
I'd say he has the patience of a saint.

Being conscious of the tree means it appears in the field of view. You can see it.


Nick, didn't you just say
I don't need a sense of self to see a tree.


Consciousness means having a sense of self. So what you are saying is that you don't need to be conscious to be conscious?

This is why RD is telling you that you need to define your terms. You seem to be flitting from one definition of "consciousness" to another, sometimes within the same sentence, so that nothing you say makes any sense.
 
Personally, I prefer to discuss things with someone who when presented with, say, the question "Do you see the monitor?" replies with a one word answer. Up to you of course.

Heh! Why wouldn't anyone expect to get thoroughly trashed and/or misinterpreted for answering that question with 'sure', when they've just got done reading a debate that appeared to peak with your derision for a sensible enough definition of 'self', apparently on the grounds that you'd prefer one that didn't halfway sound like there was in fact anything to bother defining?

Why say 'yes' just for you to go 'But it's just a user illusion! Dualist! Duuuuualist!'
 
I haven't listened to the MIT lecture series, but I have a suspicion that when someone asks whether someone can or cannot see the monitor, tree or what have you, it is a simplified question that could lead to misunderstanding the whole issue of awareness or vision in general.

'Do you see the tree' should perhaps be disqualified on the grounds of it being an ad hoc convenience rather than representative of what's actually happening in an empirical sense. Empirically, the question should probably be something like this: 'Do you distinguish something we commonly define as a tree from the rest of the noise?' Which means there must be pattern recognition and notional reconstruction, in combination with the ability to receive and process visual stimuli in general.
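That reframing - "seeing the tree" as distinguishing a stored pattern from noise via a decision threshold - can be caricatured in a toy sketch. The template, the threshold, and the scoring rule here are all invented for illustration; real recognition involves far richer reconstruction than bit-matching:

```python
# Toy sketch: recognition as template matching against noise.

def similarity(signal, template):
    # Fraction of positions where signal and template agree.
    matches = sum(1 for s, t in zip(signal, template) if s == t)
    return matches / len(template)

def recognise(signal, template, threshold=0.75):
    # "Seeing the tree" = the match score crossing a decision threshold.
    return similarity(signal, template) >= threshold

TREE = [1, 1, 0, 1, 0, 1, 1, 0]          # hypothetical stored pattern
noisy_tree = [1, 1, 0, 1, 0, 0, 1, 0]    # one bit flipped by "noise"
pure_noise = [0, 0, 1, 0, 1, 0, 0, 1]

print(recognise(noisy_tree, TREE))  # True: close enough to the template
print(recognise(pure_noise, TREE))  # False: just noise
```

On this picture the one-word answer "yes" reports the outcome of the threshold decision, not some extra ingredient on top of it.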

We know about weird situations resulting from brain damage or even hypnosis. Regarding hypnosis, we might have a situation where the subject is hypnotized into not being able to "see" his/her daughter, even though she's standing right in front of the test subject. Regarding brain damage, or a stroke in the left hemisphere near the language center, like what happened to Jill Bolte Taylor: she couldn't distinguish her hand from what it was touching; everything just seemed to blend into everything else ("she" felt enormous, like being the universe itself).

One might ask me whether I see the sign written on the wall. I could answer "NO"... "but I see a modern-art painting with lots of abstract figures in it." Then my Arabic friend could explain that "no, no, those figures are words written in Arabic, and it says where the toilet is." Thus, the so-called "qualia" might be slightly different between the two of us, depending on what the brain reconstructs, at the juncture between past processing and new input, when, in this case, bombarded with visual stimuli. When reducing the level of investigation, "qualia" seem to disappear, hence it might be the case that "qualia" are simply processes that require some rudimentary form of self-reference. Going beyond that level effectively makes "qualia" go away, because the relationship between certain processes only makes sense at certain levels of investigation (or description levels).

In this thread people seem to change description levels for different phenomena in order to come up with counterarguments. Thus, the argument can consist of the denial of "self" yet at the same time highlight the problem of "qualia", and vice versa. Yet, depending on the description level, both exist and neither exist.
 
The problem I have with the word "qualia" is that the concept is inextricably intertwined with an entire realm of dualistic woolly-headedness. Ask me, do we have experiences, and I'll say yes, most definitely. (And so do computers.)

Ask me if we have qualia, and I'll answer, mu.
 
But he seems to imply, unless I misunderstand him, that the 'duplicate' is the same being as the original unless the original survives.

I'm a little late in this thread... but this is what I've been arguing in the other teleportation thread. If the duplicate is not you when both copies survive then it is not you when only one does.
 
I know. You can't find that out until you define what the difference between conscious and unconscious is. And if your definition includes anything like subjective experience then you are out of luck, because you can't find that in anything besides yourself.

That is why the people that know what they are talking about confine the problem to the domain of behavior.

Well, I can't really be bothered debating with someone who wants everything defined, especially when they've already made it clear they haven't read any of the related material and don't wish to. I mean, death loses its sting after a while in such debates, I find.

You can argue the toss about conscious or otherwise but it's not really the point here. What I'm asking is...what actually creates or switches conscious awareness? In binocular rivalry, for example, what is it that is being switched such that I become aware of one stream and not another concurrent stream going into the other eye? Likewise when each ear is given separate coherent audio inputs.

We know that the amygdala is switching and we know that there are concurrent processing streams going on beneath conscious awareness. What I'm asking, as do many other researchers, is what brain function creates the qualitative difference between one stream being conscious and the other not?

If you're going to start ****ing on about defining terms again, please don't bother answering. I contend that this question is well understood by anyone with at least some background in this field.


Yeah see this is going exactly like that other thread.

My next move -- just like before -- is to ask you whether you are "conscious" of every single chair in a stadium when you are watching a baseball game, or whether you are "conscious" of every single leaf when you look at a tree, or whether you are "conscious" of every single wave when you look at the ocean, or even whether you are "conscious" of every single character when you look at this screen.

So what is your answer this time?

Look, I can give a verbal report either way. So what? This is again not the point. If someone asks me "Do you see the door?" I look at it and answer yes. This has nothing to do with what I'm asking. It's not about the accuracy of heterophenomenology. It's about the brain.

Nick
 
Consciousness means having a sense of self.

No it doesn't. That is nonsense. "Self consciousness" means having awareness of self.

So what you are saying is that you don't need to be conscious to be conscious?

Don't get confused between "sense of self" and the hardware that creates it. As a materialist I assert that my brain is needed for there to be consciousness. This is not the same as needing a sense of self. I could sit here and look at the notice board across from me. Until thinking starts there is no sense of self - well, no sense of a narrative or psychological self (to use Dennett's terms). My body would still react to threats or other stimuli. But there is no self and no need for it. Emotions, likewise, can come into awareness with or without a sense of self.

You need to distinguish conscious awareness, from self-conscious awareness. They are not the same.

I can assure you that if your personal theory of consciousness requires a sense of self in order for consciousness to even exist then it's going nowhere but the great scrapheap.

Nick
 
Do you really mean that as it reads?

Yes. It's a drag.

To me, it's a straightforward question for anyone who's done at least some background reading. I'll put it again...in visual or audio rivalry where conscious awareness is directed towards one of two streams, or flits between the two, what actually is being switched? We know the amygdala is switching. We know there is concurrent unconscious (or subliminal processing) of the stream that's out of awareness. But what actually is being switched?

This is an example of a situation where consciousness appears to be a variable, which, scientifically, means there is a good opportunity to learn more about what consciousness actually is and how it happens. It is, however, drifting OT.

Nick
 
How can you have a meaningful debate about anything if you are not willing to clearly define what on earth you are talking about?
 
He devotes an entire lecture to attention. Even has a live demonstration (fortunately he uses auditory attention as his example, so it comes through in the recording).


Keep listening.

I don't mind listening to Wolfe in the car, but I genuinely do not believe he is going to go into these types of issues. For sure we can map aspects of processing to specific brain areas. I'm more interested personally in whether it can be demonstrated that AI is actually creating consciousness, whether the computer is really conscious, or the thermostat. I understand the theory. I want to know whether it can be demonstrated or not and so examining situations where consciousness is a variable allows me to ask this question of people who know AI. You haven't answered it yet.

Nick
 
How can you have a meaningful debate about anything if you are not willing to clearly define what on earth you are talking about?

Because my belief is that most people have an adequately similar notion of the term, as used in this specific area of debate, to render this unnecessary.

Do you personally find this question, as I just phrased it, unclear?...

"I'll put it again...in visual or audio rivalry where conscious awareness is directed towards one of two streams, or flits between the two, what actually is being switched? We know the amygdala is switching. We know there is concurrent unconscious (or subliminal) processing of the stream that's out of awareness. But what actually is being switched?"

Nick
 
Mark out a region? Sensors?

Nick, what do you think the term "self-referential" means?

For me it means that there is monitoring and feedback within a region, usually defined by the monitoring device.
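A minimal toy of that definition might look like this - a system whose sensor points at the system itself, with the reading fed back into the next update. The class name, the target value, and the update rule are all my invention, purely illustrative:

```python
# Toy sketch of "monitoring and feedback within a region": a process whose
# next state depends on a reading of its own current state.

class SelfMonitoringProcess:
    def __init__(self, state=0.0, target=1.0):
        self.state = state
        self.target = target
        self.log = []          # the monitor keeps readings of its own state

    def sense_self(self):
        # The system's sensor points at the system itself: self-reference.
        reading = self.state
        self.log.append(reading)
        return reading

    def step(self):
        # Feedback: the next state is corrected using the self-reading,
        # closing the monitor-to-behaviour loop.
        error = self.target - self.sense_self()
        self.state += 0.5 * error
        return self.state

p = SelfMonitoringProcess()
for _ in range(5):
    p.step()
# The state converges toward the target because of the feedback loop.
```

Nothing here is a claim about consciousness; it only pins down the bare structure "monitoring plus feedback within a region" that the word is being used for in this exchange.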

Why is it nonsensical? What do you think visual awareness is?

Being visually aware of something.

No they don't.

Correct. They do not self-monitor.

Why should you be? There's a lot of stuff that you're not conscious (as in, consciously aware) of. You only become consciously aware of something when you pay attention to it - and attention is a technical term in neuroscience with a specific meaning. This is covered in the lecture series, by the way.

Well, you become aware of it when attention is directed to it. However, I'm not really asking about attention here.


Again, this is covered in detail in the lecture series. The first level of neural processing in the visual cortex is a direct map of the retina - so much so, in fact, that it is possible with current technology to scan this region of the brain and reproduce an image (albeit a low-resolution image, and that only after teaching the system to interpret a particular person's neural patterns) of what someone is looking at.

I know. It's amazing. I read the articles too. But this is not what I am actually asking. I'm asking how and why it appears visually. I'm sure, for example, that we will also be able to use brain-imaging to interpret unconscious processing. But this is not what I'm asking.

I'm asking what actually creates visual awareness. What's the hardware? Not processing, but actual awareness. Is there a threshold potential associated with visual consciousness? Is it something else? And how do you replicate this in AI? This is a genuine question. Bernard Baars goes into it at considerable length in one of his books.

Nick
 
The problem I have with the word "qualia" is that the concept is inextricably intertwined with an entire realm of dualistic woolly-headedness. Ask me, do we have experiences, and I'll say yes, most definitely. (And so do computers.)

Ask me if we have qualia, and I'll answer, mu.

I think qualia are a fine concept. I don't agree they really exist, but given the interdependency here, I figure they exist at the level of examination at which "I" exists.

Tell me you think computers are having experiences and I think you need to qualify that one pretty quick.

Nick
 
Well, I can't really be bothered debating with someone who wants everything defined, esp when they've already made it clear they haven't read any of the related material and don't wish to. I mean, death loses its sting after a while in such debates I find.

What, you mean Dennett and Blackmore? Why is that the "requisite" related material?

If you think a few popular works -- and yes, they are popular works -- are more relevant than, oh I dunno, a 4-year degree in computer science, 100+ university credits in chemistry and biology including neuroscience and neurobiology, and years of professional experience programming A.I. to emulate human behavior, then maybe that is why you are having such trouble understanding what Pixy and I are telling you.

You can argue the toss about conscious or otherwise but it's not really the point here. What I'm asking is...what actually creates or switches conscious awareness? In binocular rivalry, for example, what is it that is being switched such that I become aware of one stream and not another concurrent stream going into the other eye? Likewise when each ear is given separate coherent audio inputs.

We know that the amygdala is switching and we know that there are concurrent processing streams going on beneath conscious awareness. What I'm asking, as do many other researchers, is what brain function creates the qualitative difference between one stream being conscious and the other not?

I told you. I have told you over and over.

Reasoning.

If you're going to start ****ing on about defining terms again, please don't bother answering. I contend that this question is well understood by anyone with at least some background in this field.

I contend that if someone understands their own question they should be able to easily offer definitions when asked about their terms.




Look, I can give a verbal report either way. So what? This is again not the point. If someone asks me "Do you see the door?" I look at it and answer yes. This has nothing to do with what I'm asking. It's not about the accuracy of heterophenomenology. It's about the brain.

Yep. That is how you answered last time.

And this is why you will never understand the things Pixy and I understand.

Until you can tell someone the difference between a chair you notice in a stadium and all the chairs you don't notice, you won't be able to understand how consciousness can arise in a material system.
 
