The Hard Problem of Gravity

I was not speaking of what actually creates phenomenality. This is the whole thing here. What I'm saying is that in GWT consciousness/phenomenality/global access appears to be switched by mid-brain, attentional, self-evaluatory processes. They select representations to elevate to consciousness.

Seems to me the thalamus would be better situated as a "switch". What is the evidence that the midbrain is involved? And how do you explain the fact that vision does not pass through the midbrain to get to the occipital lobe (the only midbrain involvement concerns certain types of eye movements in the vertical plane) and smell never gets close to it? How can we be conscious of vision/smell if the midbrain serves as the switch to decide what enters and does not enter consciousness?

We don't know how phenomenality actually takes place, or whether there is a "hard problem of consciousness" (HPC) at all. But it seems increasingly clear that it is associated with so-called "global access." Indeed, this is the core of GWT. And it also seems clear to me that it is simply not the presence of self-reference in the neuronal representation that creates the difference between conscious and unconscious processing.

Nick

OK, what in the world is 'global access'? That sounds like a descriptor, not a process. How is it that split brain patients, who obviously cannot have global access, can be conscious?
 
Having read a good portion of this thread, and understood some(!) of it, I haven't yet really got a feel for how consciousness relates to the unconscious processes... for example, last night I dreamt I was driving a luxury Mercedes - I felt conscious and was making reflective and informed decisions in this simulation of reality, provided/supplied/generated by processes of which I was not conscious. Occasionally I have become aware that a dream is a simulation while experiencing it. Naturally this dream consciousness isn't quite the same as waking consciousness - many high-level parts of it seem to be disabled - but the sense of self and of awareness remains strong.

It seems to me that consciousness is quite limited in terms of its control of and access to the facilities and data of the brain, and that it is actually supplied with a very limited/filtered and highly processed subset of data, just sufficient to maintain its self-referential integrity, while the bulk of processing generally (self-)attributed to consciousness is actually not conscious.

Do any of the models under discussion address these aspects of the extent of consciousness and the nature of its relationship to the unconscious?

Reading Hofstadter's 'I Am A Strange Loop', he spends considerable time (and a host of analogies) emphasising the qualitative difference between what happens at a neural and inter-neural level and the conscious awareness that arises from it - suggesting that the substrate and its behaviour at the 'component' level is pretty much irrelevant - it is the structure and behaviour of assemblies and meta-assemblies of these components that determines how high-level features such as consciousness arise. To this extent, understanding how neurons talk to each other won't help us understand how consciousness arises any more than knowing the bond angles between H and O in a water molecule will help us understand turbulent flow - but this doesn't mean that we can't reach such an understanding, it simply means working at a higher level of abstraction. It's an interesting read :)
 
I think most models take this into account. GWT properly seen certainly does. We are conscious only of those things in awareness (although one can expand the view of what constitutes awareness).

So, then the problem is "what is awareness?" Some people have decided that we can't possibly explain it -- it's just some universal consciousness.

Awareness, at least to me, seems to be an attentional process that reflects on itself -- sort of a meta-attention; but it is restricted to a small subset of all the data streaming into us at any one time.

If you look at GWT, that's what the model basically speaks to -- directed attention is "entrained" by some unconscious sensory processing (through a currently unexplained mechanism that somehow involves 40 Hz event-related potentials), with consciousness somehow being represented in the recurrent self-referential loops between the 'percept' and the attentional process, which necessarily includes a body map.

Attention/awareness is always attention or awareness of something, not of everything.
 
Thanks Ichneumonwasp, that seems to correspond well to my own interpretation of what I've seen/read/heard, and I find myself more in agreement with your and Pixy's position than some others I've seen here.

ISTM that 'GWT' is a slightly unfortunate name, being open to misunderstanding of what 'Global' refers to (e.g. a workspace that has global extent or a limited workspace that has global accessibility).
 
Seems to me the thalamus would be better situated as a "switch". What is the evidence that the midbrain is involved? And how do you explain the fact that vision does not pass through the midbrain to get to the occipital lobe (the only midbrain involvement concerns certain types of eye movements in the vertical plane) and smell never gets close to it? How can we be conscious of vision/smell if the midbrain serves as the switch to decide what enters and does not enter consciousness?

I thought the "midbrain" was the same as the limbic system. My mistake!

OK, what in the world is 'global access'? That sounds like a descriptor, not a process. How is it that split brain patients, who obviously cannot have global access, can be conscious?

"Global access" is a descriptive term used in GWT. There's a good intro from Dennett online here.

Nick
 
No, it's not.

So, when I asked Pixy the following question, his answer is referring to lower order referencing?

Nick said:
This is what I want to know. What creates the qualitative difference between conscious and unconscious processing in humans or in AI?
Pixy said:
Self-reference.

I don't see how your "lower order" referencing can do this. Can you explain this to me?

Nick
 
I don't see how your "lower order" referencing can do this. Can you explain this to me?

Nick


Pixy, as he has said, uses self-reference in a very general way. Self-reference can mean "reference to the higher-order story self", reference to a lower order "body self" or simply (and this is the main way he has used it) recurrent looping of information.

One of the problems may be that when you see the idea of a recurrent loop you may think this means A impacts B, which in turn impacts A, which impacts B, ad nauseam. You've at least implied that you don't see how that sort of loop can do anything. If that's all it consisted of, it couldn't, but that isn't what happens in brains and isn't what happens in computers (not that the looping doesn't happen, but simple, no-change-in-information looping isn't the way either is set up).

In brains, information comes into a system and is communicated to a different "level"; that "level" is changed by this incoming information. That "level", which also receives info from other structures, communicates back with the original system, which by then has already changed (at least most of the time) because new information from the periphery has arrived, which it then sends to the other "level", ad nauseam.
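That two-"level" exchange can be sketched in a few lines of code. This is a purely illustrative toy with invented names and update rules, not a model of any actual neural circuit: the point is only that each cycle the lower level has already been altered by fresh peripheral input before the higher level's feedback returns, so the loop never just echoes an unchanged state back and forth.

```python
# Toy sketch (hypothetical): two "levels" exchanging information, with new
# peripheral input arriving each cycle so the loop keeps changing state.

def run_loop(peripheral_input, mix=0.5):
    """Simulate level A (incoming system) and level B (the other "level")."""
    level_a = 0.0
    level_b = 0.0
    history = []
    for sample in peripheral_input:
        level_a += sample                              # fresh info from the periphery
        level_b = (1 - mix) * level_b + mix * level_a  # B is changed by A
        level_a = (1 - mix) * level_a + mix * level_b  # A is changed back by B
        history.append((level_a, level_b))
    return history

# Three cycles of input; the states never simply repeat.
print(run_loop([1.0, 0.5, 0.25]))
```

Note that if `peripheral_input` were all zeros, the states would settle toward a fixed echo -- which is exactly the "no-change-in-information" looping the post says brains and computers are not set up to do.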

Now, I agree with you that a simple, constantly updating circuit like this does not in and of itself constitute consciousness in a fully human sense, but it does capture some sense of what we mean when we use a word like "awareness". What is "awareness"? It can certainly be defined as directed attention (the other "level"'s "attention" is always directed at the incoming system) with the ability to change behavior based on the information it receives. We cringe at the idea of a thermostat being called "aware" or "conscious" because its "attention" is constantly directed to one info stream and nothing else. It is difficult to define the word "awareness", however, in a way that would leave out the thermostat -- at least in a general sense (we can always talk about human awareness, etc.). [Keep in mind that when discussing this I am not implying that a simple loop would explain how an organism might attain awareness, but only that the second "system or level" is "aware" of the input from the first.]
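A toy thermostat (hypothetical code, just to make the definitional point concrete) shows why that broad definition is hard to restrict: the device's "attention" is permanently fixed on a single input stream, yet it changes behavior based on the information it receives, so the definition doesn't exclude it.

```python
# Hypothetical minimal "aware" device: one input stream, one behavioral change.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.heater_on = False

    def sense_and_act(self, temperature):
        # The only stream this device is ever "attending" to is temperature;
        # its behavior (heater state) changes based on that input.
        self.heater_on = temperature < self.setpoint
        return self.heater_on

t = Thermostat(setpoint=20.0)
print(t.sense_and_act(18.0))  # True: heater turns on
print(t.sense_and_act(22.0))  # False: heater turns off
```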

The differences for a human involve at least two issues -- one is the ability to direct attention to many different streams of incoming information and the other is the "feeling of what happens", or the feeling of this process. The ability to direct attention is what I am guessing is referred to as global access though I haven't had the time to read your link yet. Simple systems do this all the time, though. We could attach a motion detector to a camera and it could move its attention wherever something moved. This shows only a change in directed attention, however, not movement toward new types of information. But the principle should be pretty obvious.
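The motion-detector camera idea can be sketched as follows (an illustrative toy; the streams and selection rule are invented): "attention" is simply redirected to whichever input stream changed the most between readings.

```python
# Toy sketch (hypothetical) of attention redirected among multiple streams,
# like a camera swung toward whatever a motion detector flags.

def direct_attention(previous, current):
    """Return the index of the input stream with the largest change."""
    changes = [abs(c - p) for p, c in zip(previous, current)]
    return max(range(len(changes)), key=changes.__getitem__)

prev_readings = [0.0, 0.0, 0.0]   # three sensor streams, last reading
new_readings  = [0.1, 2.5, 0.0]   # stream 1 shows the biggest "motion"
print(direct_attention(prev_readings, new_readings))  # 1
```

As the post notes, this shows only a change in *where* attention is directed, not movement toward new types of information.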

That type of changing attention is what I have been trying to get across as inescapably tied to a body map, so it is self-referential in the second sense. That simply seems to be the way our brain's directed attentional system works.

When Pixy used the general term "self-reference" he probably intended all three of these possible meanings -- because they all employ some type of self-reference. A thermostat doesn't act like a human because it cannot change its attention and it is "aware" only of its input stream. Humans can alter their attention from one input stream to another, so we have different types of "awareness" or "directed attention". Awareness in both senses is crucial to what we call consciousness, and there is nothing particularly magical about it. GWT tries to explain the process of how unconscious processing can impact directed attention in humans. That process uses the second and third meanings of self-reference, with "global access" as best I can tell simply serving as a descriptor for what happens -- that attention can be directed in a variety of ways.

What is left out of this discussion, so far though, is "the feeling of what happens" or feelings in general. That is supposedly the impossible to explain part of consciousness. I think it is just a different type of processing with different output -- a behavioral tendency rather than a behavior itself. That is why it has a "feeling", because it isn't an action that we can see but only a push toward an action.
 
Thanks Ichneumonwasp, that seems to correspond well to my own interpretation of what I've seen/read/heard, and I find myself more in agreement with your and Pixy's position than some others I've seen here.

ISTM that 'GWT' is a slightly unfortunate name, being open to misunderstanding of what 'Global' refers to (e.g. a workspace that has global extent or a limited workspace that has global accessibility).


Thanks. And yes, I think there is a significant chance for 'global' to be misinterpreted. I need to read the link, but I assume that it refers to the ability we have to direct attention in many different ways instead of having our attention directed to only one info stream, as a thermostat does.
 
I've been speaking very broadly, as much in terms of definitions as processes. What do we mean when we speak of consciousness? Take it back to Descartes' cogito: I think, therefore I am. It's a statement of self-referential information processing.

I very much doubt that Descartes' cogito would be regarded by many these days as a statement of what consciousness is. It's a statement of what the narrative self is, for sure.

To exist as a conscious entity - Nagel's be-able thing of What is it Like to Be a Bat? - is to think, at some level, about one's own thoughts.

So, again, you're stating that without inner speech there is no consciousness. To me this is just nonsensical. Inner speech comes and goes but whilst my eyes are open visual phenomena remain.

Self-reference.

Since that's how we define consciousness, it's little surprise that when we look at conscious processes, we do indeed find self-referential processing.

But it seems that only yourself and some other AI fans define it like this.

Nick
 
To this extent, understanding how neurons talk to each other won't help us understand how consciousness arises any more than knowing the bond angles between H and O in a water molecule will help us understand turbulent flow - but this doesn't mean that we can't reach such an understanding, it simply means working at a higher level of abstraction.

It is nice to see that some people understand.
 
Pixy, as he has said, uses self-reference in a very general way. Self-reference can mean "reference to the higher-order story self", reference to a lower order "body self" or simply (and this is the main way he has used it) recurrent looping of information.

One of the problems may be that when you see the idea of a recurrent loop you may think this means A impacts B, which in turn impacts A, which impacts B, ad nauseam. You've at least implied that you don't see how that sort of loop can do anything. If that's all it consisted of, it couldn't, but that isn't what happens in brains and isn't what happens in computers (not that the looping doesn't happen, but simple, no-change-in-information looping isn't the way either is set up).

In brains, information comes into a system and is communicated to a different "level"; that "level" is changed by this incoming information. That "level", which also receives info from other structures, communicates back with the original system, which by then has already changed (at least most of the time) because new information from the periphery has arrived, which it then sends to the other "level", ad nauseam.

Now, I agree with you that a simple, constantly updating circuit like this does not in and of itself constitute consciousness in a fully human sense, but it does capture some sense of what we mean when we use a word like "awareness". What is "awareness"? It can certainly be defined as directed attention (the other "level"'s "attention" is always directed at the incoming system) with the ability to change behavior based on the information it receives. We cringe at the idea of a thermostat being called "aware" or "conscious" because its "attention" is constantly directed to one info stream and nothing else. It is difficult to define the word "awareness", however, in a way that would leave out the thermostat -- at least in a general sense (we can always talk about human awareness, etc.). [Keep in mind that when discussing this I am not implying that a simple loop would explain how an organism might attain awareness, but only that the second "system or level" is "aware" of the input from the first.]

Hi INW,

Thanks for taking the time to explain what you understand. I appreciate it. For me, personally, "awareness" is synonymous with "phenomenality." I mean, it is of course complex because we do not really know what we're aware of in the moment without reflecting upon it, and this action takes place slightly afterwards. With thoughts and feelings it is perhaps easier. This aside, yes, directed attention seems reasonable to me.

The differences for a human involve at least two issues -- one is the ability to direct attention to many different streams of incoming information and the other is the "feeling of what happens", or the feeling of this process. The ability to direct attention is what I am guessing is referred to as global access though I haven't had the time to read your link yet.

I think, actually, global access refers to this purported reverberant state where whole networks of neurons transmit one representation to many different areas of the brain. In GWT this state of global transmission is consciousness. This is what I understand. Thus GWT provides a map by which the HPC may be investigated.
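As a very loose sketch of that idea (invented module names; no claim that this is how GWT is actually modeled), global access can be pictured as one winning representation being broadcast to many otherwise separate modules at once:

```python
# Toy sketch (hypothetical) of "global access": one selected representation
# is transmitted to many separate modules simultaneously.

class Workspace:
    def __init__(self, modules):
        self.modules = modules  # dict: module name -> list of received messages

    def broadcast(self, representation):
        # The winning content becomes available to every module at once --
        # this simultaneous availability is the "global" in global access.
        for inbox in self.modules.values():
            inbox.append(representation)

ws = Workspace({"language": [], "memory": [], "motor": []})
ws.broadcast("red apple ahead")
print(all("red apple ahead" in inbox for inbox in ws.modules.values()))  # True
```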


That type of changing attention is what I have been trying to get across as inescapably tied to a body map, so it is self-referential in the second sense. That simply seems to be the way our brain's directed attentional system works.

With this I agree.

When Pixy used the general term "self-reference" he probably intended all three of these possible meanings -- because they all employ some type of self-reference. A thermostat doesn't act like a human because it cannot change its attention and it is "aware" only of its input stream. Humans can alter their attention from one input stream to another, so we have different types of "awareness" or "directed attention". Awareness in both senses is crucial to what we call consciousness, and there is nothing particularly magical about it. GWT tries to explain the process of how unconscious processing can impact directed attention in humans. That process uses the second and third meanings of self-reference, with "global access" as best I can tell simply serving as a descriptor for what happens -- that attention can be directed in a variety of ways.

For me, this above still leaves out the core issue which Blackmore asks of Baars - what's the material difference between one set of neurons processing information consciously and another doing the same thing unconsciously? The Dehaene paper starts to probe this question and identifies 3 criteria which need to be met for global access (consciousness) to take place.

What I don't buy is that global access is created through self-reference. I don't know about how this reverberant loop is set up and how it does what it does, so it could rely on self-reference for sure. However, just to maintain some bitchiness (!) I have to say that, whichever way it is, it sure isn't how Pixy intended it when he made the original statements about self-reference. If GWT is correct, then machine consciousness is, at least on the surface, very different from global access.

What is left out of this discussion, so far though, is "the feeling of what happens" or feelings in general. That is supposedly the impossible to explain part of consciousness. I think it is just a different type of processing with different output -- a behavioral tendency rather than a behavior itself. That is why it has a "feeling", because it isn't an action that we can see but only a push toward an action.

I don't see where feelings create a problem here.

Nick
 
On the contrary, that is how all of mathematics and computer science define it.

You are in the minority here sir.

In other areas of consciousness research (AI aside) it is not defined like this. The word "consciousness" does not simply denote "self-referential information processing." Awareness of self is an aspect of consciousness, but not the whole of it. See, for example, the Random House Dictionary definition...

dictionary.com said:
1. the state of being conscious; awareness of one's own existence, sensations, thoughts, surroundings, etc.
2. the thoughts and feelings, collectively, of an individual or of an aggregate of people: the moral consciousness of a nation.
3. full activity of the mind and senses, as in waking life: to regain consciousness after fainting.
4. awareness of something for what it is; internal knowledge: consciousness of wrongdoing.
5. concern, interest, or acute awareness: class consciousness.
6. the mental activity of which a person is aware as contrasted with unconscious mental processes.
7. Philosophy. the mind or the mental faculties as characterized by thought, feelings, and volition.

Of course, it's easier to deal with something when the definition is tight and fixed, just as it is easier to make statements about the nature of consciousness in a machine than it is in a human.

Nick
 
Humans are machines.

Fair point. But they developed through a far more complex and hard-to-predict process than computers. And there seems to be more and more evidence these days to suggest that human consciousness is quite different from computer consciousness.

Nick
 
I think, actually, global access refers to this purported reverberant state where whole networks of neurons transmit one representation to many different areas of the brain. In GWT this state of global transmission is consciousness. This is what I understand. Thus GWT provides a map by which the HPC may be investigated.

That reverberant state is directed attention.


For me, this above still leaves out the core issue which Blackmore asks of Baars - what's the material difference between one set of neurons processing information consciously and another doing the same thing unconsciously? The Dehaene paper starts to probe this question and identifies 3 criteria which need to be met for global access (consciousness) to take place.

One involves directed attention processes and includes the feeling of the process and the other doesn't -- it acts under the radar. That doesn't explain anything, though, just describes in broad terms what differs between the two.

What I don't buy is that global access is created through self-reference. I don't know about how this reverberant loop is set up and how it does what it does, so it could rely on self-reference for sure. However, just to maintain some bitchiness (!) I have to say that, whichever way it is, it sure isn't how Pixy intended it when he made the original statements about self-reference. If GWT is correct, then machine consciousness is, at least on the surface, very different from global access.


I can't tell what Pixy actually intended only how it read to me.

But, how about stating it this way -- self-reference in one or all forms is the sine qua non of consciousness? Whether or not it explains the process completely, consciousness is impossible without it.

And, yes, machine consciousness (if we really want to call it that) is very different from human consciousness at this point. But that doesn't mean that machine consciousness cannot work in the same way as human consciousness in the future. I'm not sure we really want to go there in a big way, though, for the reasons you brought up in a previous thread about one of Susan Blackmore's Ted talks.

Machines thinking like humans in any numbers -- a possibility with meme theory, as she mentions -- would be dangerous to us.


I don't see where feelings create a problem here.

Nick


Then we agree. I don't see a problem either, but many others do.
 
That reverberant state is directed attention.

Can you source that notion?

Are you saying that without directed attention there can be no consciousness?


But, how about stating it this way -- self-reference in one or all forms is the sine qua non of consciousness?

Yes. Consciousness would not have developed were it not for its value to the self.

Nick
 
Can you source that notion?

Are you saying that without directed attention there can be no consciousness?

It seems that he's saying that you can't be conscious of [something-or-another] w/o having some portion of your attention directed at it.
 
Belz... said:
Because there are intelligent agents at the source of the universe in both other cases (god for dualists, the mind for idealists), so there's a "reason" for everything.
But why do the intelligent agents choose the way they do?

~~ Paul
 
