Resolution of Transporter Problem

I don't buy the "that's all there is" argument. I'm familiar with it, I've used it, but to me it still leaves a considerable explanatory gap. Here are a few issues I have....
Okay.

Firstly, if we look at the human brain, I'm sure we agree that it is a massive, but largely decentralised, parallel processor.
Yes. A distributed processing system might be a good way to describe it.

In the waking state, particularly, it is carrying out a phenomenal amount of processing. A colossal amount.
A fair bit, yes.

Yet, only a microscopic portion of this is conscious.
Modest, rather than microscopic, but yes.

Thus, to me there must be some qualitative, material difference between conscious and unconscious processing.
Yes, there is: Self-reference.

Not to make consciousness "special," in some romantic human way, but because this to me is simply logical.
Yes.

Secondly, and relatedly, if I read you right, your contention is that areas of the brain dealing with Self arbitrate and define what is "conscious" or "not conscious."
No, that's not what I'm saying; furthermore, I'm not sure what you mean (beyond the fact that it's not what I'm saying). What do you mean by "conscious" and "not conscious" here? Conscious what? Not conscious what?

Yet it is clear for me personally that selfhood is just another aspect of consciousness
Then you are misusing the term consciousness. Either that, or misusing the term self.

Flatworms have a sense of self, distinct from the sense of other, though they do not have a concept of self; they have awareness, but not self-awareness. Self is far more fundamental than consciousness; consciousness is a layer built on top by the mechanism of self-reference (i.e. feedback loops). (And the "self" in self-reference is not the same self, but a generic self. Anything that references itself is self-referential, whether it is itself the self or not. Clear?)

and not one that definably needs to be present in order for there to be conscious awareness. I don't see any inherently special function attributed to selfhood here on a strict materialist basis.
Then I have no idea what you are talking about when you say "self". Try defining your terms.

You might wish to accuse me of dualism for examining such things. That's up to you. But I am not a dogmatist here. I'm interested in consciousness and I don't buy it that it is purely a function of data processing.
Are you or are you not a materialist, Nick?

If you are, then self has to be purely a function of data processing. There is nothing else it can be. That still leaves us to explain it - but we have already done that.

If you are not a materialist, then you can say whatever you like, but it has no bearing. Materialism is the only possible basis for understanding this.
 
Change blindness is when you don't recognise changes in a visual scene. It's fascinating, but I am talking about the phenomenon of visual consciousness itself. Visual consciousness exists. Yet the brain processes vast amounts of visual information unconsciously. I am asking you to explain what, in qualitative material terms, creates the difference between conscious and unconscious "seeing."

Nick

The same thing that is responsible for change blindness.

Change blindness happens because people are paying attention to something else in a scene. They are aware of something other than what changes because they are paying attention to something other than what changes.

Connect the dots -- the difference between 'conscious' and 'unconscious' "seeing" is the "paying attention."

Why are you not consciously aware of every leaf on a tree? Why are you not consciously aware of every wave in the ocean? Because you aren't paying attention to them.

And I already explained to you exactly what "paying attention" is, in materialist terms.
 
A strange thought just occurred to me, that I'd like to share - not that it adds or subtracts from the discussion at hand, but still...

Perhaps the entire brain is conscious. Perhaps every bit of processing has its own internal awareness going on, like some kind of hive of processors, each aware of whatever its own experiences are about - fear-threat responses, memory allocations, etc.

But maybe only one part is connected with language and the capacity for self expression. Maybe, if we connect some machines to different parts of the brain, we'll enable those parts to communicate and to tell us what it's like to be conscious of other aspects of thought-processing...

I'm not sure I'm expressing this thought right...
 

Have you ever read about split brain patients?
 
Have you ever read about split brain patients?
Yeah, fascinating stuff. When you sever the corpus callosum you sort of, but not quite, end up with two independent consciousnesses.

I just learned that marsupials don't have a corpus callosum. Wonder what that implies for their outlook on life...
 
Yes, there is: Self-reference.

This bit I can't buy. We might be talking at crossed definitions, but I don't think so. I cannot see why you would consider so-called "self-reference" to be even modestly significant in this context. It sounds very much to me like you are still trying to find "an observer" or "experiencer" in the brain somewhere, which hopefully Dennett has taught you is a path not likely to be very rewarding.

If we're talking human brains, as opposed to AI, then what I certainly can buy these days is Baars' basic hypothesis... consciousness is simply the best means evolution has found to distribute critical information across a wider network of unconscious processes (my words). This to me sounds reasonable. And it has no need for any "self." I'm told the wider body of neuroscientists, cognitive scientists and others are also largely in agreement with Baars.

There still may be hard problem issues surrounding the actual nature of consciousness as I have mentioned. Baars thinks there are.



No, that's not what I'm saying; furthermore, I'm not sure what you mean (beyond the fact that it's not what I'm saying). What do you mean by "conscious" and "not conscious" here? Conscious what? Not conscious what?

I mean that which you are conscious of and that which you are not, yet which is still being processed.

Then you are misusing the term consciousness. Either that, or misusing the term self.

To me visual consciousness, for example, means actual seeing, actual phenomenology.

Flatworms have a sense of self, distinct from the sense of other, though they do not have a concept of self; they have awareness, but not self-awareness.

Well, flatworms likely have a somatosensory cortex, or worm-y equivalent, I guess. I very much doubt they have sufficient neural networking capacity to actually think.

Self is far more fundamental than consciousness; consciousness is a layer built on top by the mechanism of self-reference (i.e. feedback loops). (And the "self" in self-reference is not the same self, but a generic self. Anything that references itself is self-referential, whether it is itself the self or not. Clear?)

You wish to distinguish between biological self and narrative (or psychological) self? Sure, no problem.


Are you or are you not a materialist, Nick?

If you are, then self has to be purely a function of data processing. There is nothing else it can be. That still leaves us to explain it - but we have already done that.

If you are not a materialist, then you can say whatever you like, but it has no bearing. Materialism is the only possible basis for understanding this.

I am a materialist, but not a dogmatist. To me, that means I also like to investigate. It doesn't bother me to investigate alternative viewpoints. Consciousness is very likely a function of data processing, I agree. I didn't make myself very clear with my earlier statement, but I do still have issues with the actual phenomenological nature of, say, visual consciousness, and whether it can truly be demonstrated to arise from material interactions. I think you will find that this is still a recognised issue, because until you can replicate consciousness in like manner, you will inevitably lack hard evidence.

Thanks for an interesting discussion, btw.

Nick
 
The same thing that is responsible for change blindness.

Change blindness happens because people are paying attention to something else in a scene. They are aware of something other than what changes because they are paying attention to something other than what changes.

Connect the dots -- the difference between 'conscious' and 'unconscious' "seeing" is the "paying attention."

Why are you not consciously aware of every leaf on a tree? Why are you not consciously aware of every wave in the ocean? Because you aren't paying attention to them.

And I already explained to you exactly what "paying attention" is, in materialist terms.

So, in these precise material terms, what exactly is attention?

Nick

eta: actually, it has now been shown that merely paying attention does not necessarily prevent change blindness. Experiments have indicated that even when attention is focussed on something and it changes, this is usually not registered. See Blackmore's précis of Levin and Simons (1997)...."[They] created short movies in which various objects were changed, some in arbitrary locations and others in the centre of attention. In one case the sole actor in the movie went to answer the phone. There was a cut in which the camera angle changed and a different person picked up the phone. Only a third of the observers detected the change."

Another blow for the notion of a stream of consciousness!
 
Have you ever read about split brain patients?

Actually, that was part of what led me to this line of thought. Split-brain patients, various unusual brain injuries, a patient who survived a gunshot wound but wound up lacking self-awareness - except with her left hand.

Anecdotal, as I can't find where I heard that last.

It was fascinating, though - apparently her sense of self and consciousness left her. She functioned automatically - ate, took care of her needs, responded to questions - but offered no emotional responses, offered no recognition of herself as separate from her environment, had no 'choices' (she would pick whichever item came closest to her left hand). But she learned to write with her left hand - apparently not realizing her left hand was doing anything - and her left hand began writing startling questions ("Where is everyone? I can't see. I'm lost. Where am I? It's dark. It's quiet."). The 'conscious' portion of her brain seemed to only have access to her left hand - no sensory input (other than touch), no control over the rest of her body - but expression of access to self-identity and memory was entirely locked into that one hand.

Bizarre.

Of course, it could be a 'just-so story', as I can't find a reference to it anywhere at the moment.
 
OK, here's the same question I'm asking articulated by Susan Blackmore, from the same piece linked in my last post...

Nick

Sue Blackmore said:
We seem forced to distinguish between conscious and unconscious processing; between representations that are 'in' the stream of consciousness and those that are 'outside' it. Processes seem to start out unconscious and then 'enter consciousness' or 'become conscious'. But if all of them are representations built by the activity of neurons, what is the difference? What makes some into conscious representations and others not?

Almost every theory of consciousness we have confronts this problem and most try to solve it. For example, global workspace (GW) theories (e.g. Baars 1988) explicitly have a functional space, the workspace, which is a serial working memory in which the conscious processing occurs. According to Baars, information in the GW is made available (or displayed, or broadcast) to an unconscious audience in the rest of the brain. The 'difference' is that processing in the GW is conscious and that outside of it is not.

There are many varieties of GWT. In Dennett's (2001) 'fame in the brain' metaphor, as in his previous multiple drafts theory (Dennett 1991 and see below), becoming conscious means contributing to some output or result (fame is the aftermath, not something additional to it). But in many versions of GWT being conscious is equated with being available, or on display, to the rest of the system (e.g. Baars 1988, Dehaene and Naccache 2001). The question remains: the experiences in the stream of consciousness are those that are available to the rest of the system. Why does this availability turn previously unconscious physical processes into subjective experiences?

As several authors have pointed out there seems to be a consensus emerging in favour of GWTs. I believe the consensus is wrong. GWTs are doomed because they try to explain something that does not exist - a stream of conscious experiences emerging from the unconscious processes in the brain.
 
So, in these precise material terms, what exactly is attention?

I already told you -- reasoning. Remember when I said:

rocketdodger said:
A neural network is an implicit reasoning machine. Your brain takes facts gathered by your retina -- such as photons hitting a receptor at a given location -- and infers new facts using its neural network(s). Facts like patterns and colors are inferred. Then meta-patterns and meta-colors. Eventually higher level facts such as "that thing is a tree" are inferred. Then "that tree resembles the tree I saw yesterday." You can't control it. You can't turn it on or off. It is automatic -- that is the way neurons work.

You can extend the chain as long as you wish. That's what human consciousness is -- an endless chain of reasoning taking place in the neural network(s) of our brains. You have visual awareness of something if and only if your brain is somehow reasoning about it.

So there you have it.
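If it helps, that chain can even be sketched in a few lines of code. This is a toy with invented fact names -- not a real neural network, just the shape of the idea that each stage automatically derives new facts from the previous stage's facts:

Code:
# Toy inference chain. The fact names are invented; the point is
# only that each stage derives new facts from the last, automatically.

def infer_patterns(retina_facts):
    return ["pattern over " + f for f in retina_facts]

def infer_objects(pattern_facts):
    return ["object suggested by " + f for f in pattern_facts]

def infer_comparisons(object_facts):
    return [f + ", resembling something seen yesterday" for f in object_facts]

facts = ["photon at (3, 7)", "photon at (3, 8)"]
for stage in (infer_patterns, infer_objects, infer_comparisons):
    facts = stage(facts)  # no on/off switch -- it just runs
print(facts)

You can add stages for as long as you wish -- that is the "endless chain."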

eta: actually, it has now been shown that merely paying attention does not necessarily prevent change blindness. Experiments have indicated that even when attention is focussed on something and it changes, this is usually not registered. See Blackmore's précis of Levin and Simons (1997)...."[They] created short movies in which various objects were changed, some in arbitrary locations and others in the centre of attention. In one case the sole actor in the movie went to answer the phone. There was a cut in which the camera angle changed and a different person picked up the phone. Only a third of the observers detected the change."

That is clear evidence that they weren't paying attention to what Blackmore thought they were paying attention to.

Look, it is quite simple -- if one is paying attention to something, they will notice a change. If not, not. This notion is so simple you could say it is trivially simple given that a common definition of "attention" might be "the ability to detect change in something."
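You could even write that definition down directly. A toy sketch (the scene is invented): the observer snapshots only what it attends to, so a change outside that subset produces no diff at all -- which is exactly change blindness:

Code:
# Toy version of "attention = the ability to detect change."
# Invented scene; only the attended keys are ever compared.

scene_before = {"actor": "person A", "phone": "ringing", "lamp": "on"}
scene_after  = {"actor": "person B", "phone": "ringing", "lamp": "on"}

attended = {"phone"}  # watching the phone, not the actor

changes = [key for key in attended
           if scene_before[key] != scene_after[key]]
print(changes or "no change noticed")  # the actor swap goes unseen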
 
OK, here's the same question I'm asking articulated by Susan Blackmore, from the same piece linked in my last post...

Nick

Sue Blackmore said:
The question remains: the experiences in the stream of consciousness are those that are available to the rest of the system. Why does this availability turn previously unconscious physical processes into subjective experiences?

Ok, here's the same answer -- "availability" is just another way to say "can be reasoned about by a greater portion of the network(s)"
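To make that concrete, here is a toy sketch -- invented module names, a cartoon rather than Baars' actual model -- of what "availability" buys you. Whatever lands in the workspace is handed to every module, so every module can reason about it; everything else stays private:

Code:
# Toy global-workspace broadcast. Module names are invented;
# this is a cartoon of the idea, not Baars' theory itself.

modules = {
    "vision":   lambda fact: "vision notes: " + fact,
    "memory":   lambda fact: "memory compares '" + fact + "' to the past",
    "language": lambda fact: "language can now report: '" + fact + "'",
}

def broadcast(workspace_fact):
    # "Available" = every module gets to reason about it.
    return [module(workspace_fact) for module in modules.values()]

for response in broadcast("a phone is ringing"):
    print(response)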
 
This bit I can't buy. We might be talking at crossed definitions, but I don't think so. I cannot see why you would consider so-called "self-reference" to be even modestly significant in this context. It sounds very much to me like you are still trying to find "an observer" or "experiencer" in the brain somewhere, which hopefully Dennett has taught you is a path not likely to be very rewarding.

If we're talking human brains, as opposed to AI, then what I certainly can buy these days is Baars' basic hypothesis... consciousness is simply the best means evolution has found to distribute critical information across a wider network of unconscious processes (my words). This to me sounds reasonable. And it has no need for any "self." I'm told the wider body of neuroscientists, cognitive scientists and others are also largely in agreement with Baars.

When they talk about "self-reference", I was under the impression that they just meant that the processing going on in your brain is also referencing your past experiences in the form of memories?

Since you don't get real-time data, due to the time it takes sensory input to get into the brain and be processed, is it the case that all we perceive is technically memory? If the entire system is referencing its past states from microseconds ago, I can see how this could be very significant in the emergence of consciousness.

But yeah, I don't even know if that is what you guys mean... lol

I'll go back to lurking this thread with fascination, and bewilderment.

Edit: I accidentally wrote trolling instead of lurking. How dreadful.
 
I already told you -- reasoning. Remember when I said:

So there you have it.

For me, you are still completely avoiding the central point.

I can put it another way...if we're considering AI, how would you empirically demonstrate actual visual consciousness in a machine?


That is clear evidence that they weren't paying attention to what Blackmore thought they were paying attention to.

It seems to me unlikely that 2/3rds of the viewers watching a movie would not be paying attention to the sole actor in a scene. Does it not to you?

Look, it is quite simple -- if one is paying attention to something, they will notice a change. If not, not. This notion is so simple you could say it is trivially simple given that a common definition of "attention" might be "the ability to detect change in something."

Yes, it is a simple notion and apparently obvious. This doesn't mean it's correct, especially when one considers evidence to the contrary. If you do ever actually read more from a variety of sources about consciousness research, I think you will quickly realise that many things are simply not what they seem. Many, many things.

Nick
 
This bit I can't buy. We might be talking at crossed definitions, but I don't think so. I cannot see why you would consider so-called "self-reference" to be even modestly significant in this context. It sounds very much to me like you are still trying to find "an observer" or "experiencer" in the brain somewhere, which hopefully Dennett has taught you is a path not likely to be very rewarding.
Okay, stop right there. Go directly to the bookstore. Purchase Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid. Read it.

The entire book is about the role of self-reference in consciousness, and is far and away the best thing ever written on the subject.

And yes, it is a problem with definitions. You have completely misunderstood my (and rocketdodger's, and everyone else's) point. As computer programmers, the concept of self-reference is natural to us. But part of the reason why it's natural to us is that we've all read Gödel, Escher, Bach.
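If a code sketch helps as much as the book: here is a toy of my own invention -- not a model of any real brain -- showing the bare shape of it. The second function's input includes a report of its own previous output. That feedback loop is all "self-reference" means here:

Code:
# Toy contrast between a plain process and a self-referential one.
# Entirely hypothetical -- just the shape of the feedback loop.

def plain_process(stimulus):
    # Output depends only on the outside world.
    return "reaction to " + stimulus

def self_referential_process(stimulus, own_last_state):
    # Output depends on the world AND on the system's own
    # previous state -- the loop feeds back into itself.
    return ("reacting to " + stimulus +
            ", while noting I was just '" + own_last_state + "'")

print(plain_process("light"))

state = "at rest"
for stimulus in ("light", "sound", "touch"):
    state = self_referential_process(stimulus, state)
    print(state)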
 
For me, you are still completely avoiding the central point.

I can put it another way...if we're considering AI, how would you empirically demonstrate actual visual consciousness in a machine?


Like this:

An early success of the micro-world approach was SHRDLU, written by Terry Winograd of MIT (details of the program were published in 1972). SHRDLU controlled a robot arm that operated above a flat surface strewn with play blocks (both the arm and the blocks were virtual). SHRDLU would respond to commands typed in natural English, such as "Will you please stack up both of the red blocks and either a green cube or a pyramid". The program would plan out a sequence of actions and in the virtual world the robot arm would arrange the blocks appropriately. SHRDLU could correctly answer questions about its world of blocks, for example "Can a pyramid be supported by a pyramid?" (SHRDLU attempts to stack up two pyramids and fails) and "Is there anything which is bigger than every pyramid but is not as wide as the thing that supports it?" (to which SHRDLU answered "Yes, the blue block"). SHRDLU could also answer questions about its own actions:

Had you touched any pyramid before you put one on the green block?
YES, THE GREEN ONE.
When did you pick it up?
WHILE I WAS STACKING UP A RED CUBE, A LARGE GREEN CUBE, AND A LARGE RED BLOCK.
Why?
TO GET RID OF IT.
Why did you do that?
TO CLEAN OFF THE RED CUBE.
Why did you clear off that cube?
TO PUT IT ON A LARGE GREEN CUBE.
Why did you do that?
TO STACK UP A RED CUBE, A LARGE GREEN CUBE, AND A LARGE RED BLOCK.
Why did you do that?
BECAUSE YOU ASKED ME TO.
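(If you're curious how that "Why?" dialogue works mechanically, the trick -- not Winograd's actual code, just the shape of it -- is that every action records the goal it served, so answering "why" is one step up the goal tree:)

Code:
# Toy goal tree in the spirit of SHRDLU's "why" answers.
# Not Winograd's code -- a sketch of the mechanism only.

class Goal:
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent

command = Goal("stack up a red cube, a large green cube, and a large red block")
clear   = Goal("clean off the red cube", parent=command)
remove  = Goal("get rid of the green pyramid", parent=clear)

def why(goal):
    # "Why did you do that?" = report the goal one level up.
    if goal.parent is None:
        return "BECAUSE YOU ASKED ME TO."
    return "TO " + goal.parent.description.upper() + "."

print(why(remove))   # TO CLEAN OFF THE RED CUBE.
print(why(clear))    # TO STACK UP A RED CUBE, ...
print(why(command))  # BECAUSE YOU ASKED ME TO.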
 
Which brings me to a broader point I've been making. We knocked over "Hard Problem Consciousness" before Chalmers even formulated that phrase. It was easy - but something of a dead end.

Since then, we've been working on actual problem consciousness, i.e. the details of how the human mind works.
 
I can put it another way...if we're considering AI, how would you empirically demonstrate actual visual consciousness in a machine?

I, also, can put it another way ... if we're considering people, how would you empirically demonstrate actual visual consciousness in a human?

Same answer.

It seems to me unlikely that 2/3rds of the viewers watching a movie would not be paying attention to the sole actor in a scene. Does it not to you?

No. Because I don't make the huge sweeping assumptions you do.

For instance, I don't assume that the sole actor would be the center of attention. It is very likely that people were focused on the phone the actor was answering, or the hand of the actor, or any number of things that camerawork and context could lead people to focus on. Relying on eye tracking data seems a much safer bet than simply assuming. Did they use such data in that study?

Furthermore, for instance, even relying on eye tracking data is an assumption in itself. You should know from real world experience that quite often people aren't paying attention to whatever their eyes "appear" to be focusing on. People call it "daydreaming" and "spacing out" and all kinds of other common names.

In fact, the only surefire way to determine if someone is paying attention to something is to change it and see if they notice. Hence my comment that this is a trivially simple concept.

Yes, it is a simple notion and apparently obvious. This doesn't mean it's correct, especially when one considers evidence to the contrary.

What evidence is there to the contrary? The study Blackmore cited, you misinterpreted, and I just debunked?

If you do ever actually read more from a variety of sources about consciousness research, I think you will quickly realise that many things are simply not what they seem. Many, many things.

Do you have some examples?

Because thus far Pixy and I are batting a pretty good average out here. It seems to me that you are the one who is constantly wrong about how things "are" versus how they "seem."

Feel free to prove me wrong.
 
It seems to me unlikely that 2/3rds of the viewers watching a movie would not be paying attention to the sole actor in a scene. Does it not to you?
Ever seen a magic show, Nick?

Yes, it is a simple notion and apparently obvious. This doesn't mean it's correct, especially when one considers evidence to the contrary.
rocketdodger is correct. It would be an oversimplification to take it as a model of sensory perception as a whole, because we are wired so that certain classes of events draw attention to themselves; but on the specific question, it is correct.

If you do ever actually read more from a variety of sources about consciousness research
Have done.

I think you will quickly realise that many things are simply not what they seem. Many, many things.
And some things are.
 
Like this: [SHRDLU example snipped]

Yes, I recall SHRDLU from Dennett. I don't recall it being demonstrated that SHRDLU experienced actual visual consciousness. That you can program a machine to say "Yes, I experience vision" does not really cut it for me.

Like I say, you skip over this piece by taking one of the popular materialist perspectives, which is simply that the mind is what the brain does. This I'm OK with, but I think it's also important to be aware that this is stating a position, not providing empirical evidence. Your buddy Wolfe does the same at the beginning of his lecture series. He states his position, but for me it is important to make the distinction. Of course, it's fair to say that it is currently impossible to empirically demonstrate visual consciousness, and it may remain so. But I think it's good to be aware of the distinction here.

Nick
 
I, also, can put it another way ... if we're considering people, how would you empirically demonstrate actual visual consciousness in a human?

Same answer.

Fair point. However, as we're both materialists who believe in natural selection, it is, I submit, highly reasonable to believe that other members of the same species have a similar experience of visual consciousness, given the overwhelming self-reporting in agreement with this and the stark lack of evidence to the contrary.

Now, I appreciate that it's a loaded question to ask this of a machine, as we have no way to empirically demonstrate it either way; an answer may not exist, or the question may simply be invalid. But if you're scientific in nature, this lack needs to be appreciated.



For instance, I don't assume that the sole actor would be the center of attention. It is very likely that people were focused on the phone the actor was answering, or the hand of the actor, or any number of things that camerawork and context could lead people to focus on. Relying on eye tracking data seems a much safer bet than simply assuming. Did they use such data in that study?

I'm not sure. Dennett introduced the studies on change blindness as a means to demonstrate that his Multiple Drafts theory had validity. You can read the study online if you're interested - Failure to detect changes to attended objects in motion pictures. Other articles linked in the Wikipedia entry also cast doubt on the notion that attention is the sole factor relevant.

As I understand it, Dennett's assertion goes against yours here. MD, I believe, asserts that it is not possible to know what "is in consciousness" at any time. I could be wrong but this is what I currently understand.

Furthermore, for instance, even relying on eye tracking data is an assumption in itself. You should know from real world experience that quite often people aren't paying attention to whatever their eyes "appear" to be focusing on. People call it "daydreaming" and "spacing out" and all kinds of other common names.

In fact, the only surefire way to determine if someone is paying attention to something is to change it and see if they notice. Hence my comment that this is a trivially simple concept.

Well, it goes against your theory and you seem very keen to dispose of unwanted contrary evidence by any means to hand.


What evidence is there to the contrary? The study Blackmore cited, you misinterpreted, and I just debunked?

If you consider your assessment debunking, then I feel sorry for you.


Do you have some examples?

Because thus far Pixy and I are batting a pretty good average out here. It seems to me that you are the one who is constantly wrong about how things "are" versus how they "seem."

Feel free to prove me wrong.

Well, I try to raise issues but you just dump them as fast as possible, without, to my mind, much concern for how valid an approach you use. I'm happy to debate more.

Nick
 
