The Hard Problem of Gravity

The experience is entirely subjective. Experience can't be objective. And there's no physical theory that deals with subjective experience.

Err, you mean there is no physical theory that you accept.

Because I, for one, have already told you many times that subjectivity is merely identity, which is clearly a physical theory. In fact, it is the most fundamental axiom of physical reality that there is.
 
I'm dealing with experience in the most general sense. I'm not insisting on an implied Cartesian self - just the very fact of the perception itself.

The thing is, if you reduce experience as far as you can, you end up with only three things:

1) Percepts of the world.
2) Facts related to those percepts.
3) Cartesian self.

1) and 2) are easily explained by models that have been around for 20+ years.

So yes, actually, if you contend that experience isn't explainable yet, you are insisting on an implied Cartesian self.
 
Well, qualia aren't all that popular a phenomenon around the Strong AI scene. The basic position is that the quale is conceptually erroneous. It's a concept that reinforces a viewpoint that is invalid - it's not what happens.

To get from this "erroneous" viewpoint to Strong AI can be quite a journey, and different approaches in trying to help someone along this journey have arisen. Pixy's seems to be what might be termed the "short, sharp shock" approach. I'll try to be a little more accommodating...

Don't get me wrong. I understand Pixy's position implicitly. It's simple and very logically self-consistent -- provided you only apply it to its own tautology. The problem is that when extended into the REAL world [i.e. outside of the conceptual sandbox Pixy can't bring himself to step out of] it falls apart. Consciousness, subjectivity, quale, etc. are terms that refer to actual empirical phenomena; the fact that they don't figure into the narrow framework of S-AI doesn't invalidate them -- it just illustrates that the S-AI model is useless and should be revised or discarded.

Subjectivity relies on the notion of there being a self that is experiencing. At the whole-organism level this might be an entirely valid notion, but beneath this level what would it look like? Would it be like Descartes' homunculus sitting in the pineal gland enjoying the show? This for most is unquestionably how it feels, but it's not so appealing to the modern, scientifically minded individual. No little men have been found post mortem, and even if they were, this would only leave infinite regress issues.

The Strong AI or Computational approach to this HPC (subjectivity) is to assert that actually there is no observer, no experiencer, beneath the level of the whole organism or whole brain. In Global Neuronal Workspace Theory (GWT), for example, likely the dominant neurological model amongst professionals in this field, consciousness simply is the stage of the theatre. But there is no one watching, no homunculus. What is present in consciousness is simply that which is present in a vast network of neural connections that simultaneously feeds the same information to a host of unconscious neural modules. That's it, end of story! There are no other stages in the process.
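To make the "theatre with no audience" idea concrete, here's a deliberately crude toy sketch of that broadcast architecture. The class names (`Workspace`, `Module`) and the module list are entirely my own invention for illustration, not part of any published GWT model - the only point is that whatever wins access to the workspace is simply broadcast to every unconscious module, with no further "observer" stage anywhere in the loop.

```python
# Toy caricature of a Global Workspace broadcast. Hypothetical names
# throughout; this illustrates the architecture, not the actual theory.

class Module:
    """An unconscious specialist process that receives broadcasts."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, content):
        self.inbox.append(content)


class Workspace:
    """The 'stage': holds one content at a time and broadcasts it."""
    def __init__(self, modules):
        self.modules = modules
        self.current = None

    def broadcast(self, content):
        self.current = content      # what is "conscious" right now
        for m in self.modules:      # same info fed to every module...
            m.receive(content)      # ...and that's the whole story


modules = [Module(n) for n in ("language", "memory", "motor")]
ws = Workspace(modules)
ws.broadcast("red apple ahead")
```

Note there is no extra object anywhere in the sketch that "watches" `ws.current` - the broadcast itself is the end of the process, which is exactly the claim.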

But that's just the problem. Strong AI [as presented by PixyMisa] necessarily implies that the unconscious processes of biology that give rise to awareness ARE aware. If you remember, he specifically states that any feedback system is 'aware' and any self-referential feedback system is 'conscious'. This definition encompasses not only every biological system, by default, but potentially any other physical system. By such a definition, even when an individual is unequivocally unconscious, the S-AI model says that they are conscious. The position is completely indefensible as a model of ACTUAL consciousness. What absolutely amazes me is the level of cognitive dissonance required to sustain such a view, yet individuals like Pixy still cling to it.

There isn't necessarily a need to invoke an infinite regression when trying to explain consciousness. Qualia are, by definition, fundamental elements of subjective experience. They may emerge from some more fundamental elements but they aren't going to be understood if people keep retreating into ontologically unsound theories like S-AI that completely ignore the phenomena they are supposed to explain.

The GWT seems like a much more promising start than the magic loops model of consciousness since it's at least more specific in its explanatory scope. The fact of the matter is that S-AI proponents have no idea what consciousness is, so they completely sidestep the issue.

I could go on, but I don't know if this helps, or if you know all this stuff already, so I'll leave it there for now. Feel free to challenge/ask questions. We're almost back with the OP now!

Nick

eta: Further reading - the excellent Are We Explaining Consciousness Yet? by Dan Dennett.

Thanks for the link. I'll definitely look into it once I get the chance
:)
 
Don't get me wrong. I understand Pixy's position implicitly. It's simple and very logically self-consistent -- provided you only apply it to its own tautology. The problem is that when extended into the REAL world [i.e. outside of the conceptual sandbox Pixy can't bring himself to step out of] it falls apart. Consciousness, subjectivity, quale, etc. are terms that refer to actual empirical phenomena; the fact that they don't figure into the narrow framework of S-AI doesn't invalidate them -- it just illustrates that the S-AI model is useless and should be revised or discarded.

I don't agree. I think you have to be careful not to just react to Pixy's rhetoric, and throw the baby out with the bathwater. Good to watch the associations your mind creates!

Strong AI might not be universally agreed, but I figure it's here to stay, at least until someone comes up with some strong evidence against it, which might happen. But currently I figure the converts are growing all the time.

The problem with terms like consciousness and quale is that definitions simply aren't agreed. The more annoying, and frequently less knowledgeable, Strong AI fans take this as evidence that there isn't a coherent argument to be created with these words. And like to hang out on lists baiting people with them. Personally, I think it's pretty clear what the terms mean, even if there isn't a clear definition, and I recognise that no one is going to believe Strong AI in the real world unless these concepts are accounted for.

I don't think Strong AI really disputes the existence of so-called qualia, depending on how they're defined, it just disputes that they demonstrate anything particularly meaningful.



But that's just the problem. Strong AI [as presented by PixyMisa] necessarily implies that the unconscious processes of biology that give rise to awareness ARE aware. If you remember, he specifically states that any feedback system is 'aware' and any self-referential feedback system is 'conscious'. This definition encompasses not only every biological system, by default, but potentially any other physical system. By such a definition, even when an individual is unequivocally unconscious, the S-AI model says that they are conscious. The position is completely indefensible as a model of ACTUAL consciousness. What absolutely amazes me is the level of cognitive dissonance required to sustain such a view, yet individuals like Pixy still cling to it.

Well, it's not my job to try and defend Pixy's pet version of Strong AI. Maybe it's from Hofstadter, or maybe he made it up/interpreted it himself. Maybe it's correct, maybe not. I don't know. But I would advise you to read Dennett. He's the real granddaddy of Strong AI.

There isn't necessarily a need to invoke an infinite regression when trying to explain consciousness. Qualia are, by definition, fundamental elements of subjective experience. They may emerge from some more fundamental elements but they aren't going to be understood if people keep retreating into ontologically unsound theories like S-AI that completely ignore the phenomena they are supposed to explain.

I don't see that Strong AI has an issue with qualia, unless they're defined in a way that compels it to. Red can be amazingly red, no doubt about it.

The GWT seems like a much more promising start than the magic loops model of consciousness since it's at least more specific in its explanatory scope. The fact of the matter is that S-AI proponents have no idea what consciousness is, so they completely sidestep the issue.

GWT fits in pretty well with Strong AI, if you ask me. The big thing GWT does is that it gets rid of the self. This was the big problem. Of course it only replaces it with "global access," but the scientific evidence thus far seems to be backing it up. What is conscious is what is roaming this great neuronal superhighway ranging right across the cortices like a city motorway at night. See Dehaene et al.'s model for a pretty pic!

Thanks for the link. I'll definitely look into it once I get the chance

No probs. Dennett can be a bit of a struggle at times. Blackmore is also good, much easier to read, and very objective. Consciousness: An Introduction (or the very short version) is great, I think. Strong AI does seem to me to be pretty much the last thing left standing when you go into the evidence.

Nick
 
But that's just the problem. Strong AI [as presented by PixyMisa] necessarily implies that the unconscious processes of biology that give rise to awareness ARE aware. If you remember, he specifically states that any feedback system is 'aware' and any self-referential feedback system is 'conscious'. This definition encompasses not only every biological system, by default, but potentially any other physical system. By such a definition, even when an individual is unequivocally unconscious, the S-AI model says that they are conscious.

Whoa... talk about equivocation ...

Obviously, if a model relies on a definition of "consciousness" that is something along the lines of what Pixy is using then that definition is no longer useful when distinguishing between a sleeping and waking human.

For you to suggest otherwise is a logical fallacy.

There is nothing inherently wrong with the definition Pixy uses, other than that it apparently classifies more entities as "conscious" than you agree should be.

So what?

If you want to know why humans behave like humans, just ask that question. If you want to know why you experience, just ask that question.

The whole point of trivializing "consciousness" is to show that by itself the word is almost useless. Are you conscious? Are dogs? Birds? AIBO? When? Why?

If you can't define it formally, then you must define it operationally. That means simply partitioning the world into sets of "conscious" and "not conscious." Ok. How does one do that? According to behavior. So all Pixy (and I, and others) are saying is since you have to examine behavior to determine whether something is conscious or not, the label "conscious" in and of itself means nothing.

Asking "why is X conscious?" is entirely equivalent to asking "why does X behave in the way it does?"
 
Err, you mean there is no physical theory that you accept.

Because I, for one, have already told you many times that subjectivity is merely identity, which is clearly a physical theory. In fact, it is the most fundamental axiom of physical reality that there is.

Have you tried publishing it? As I've said before, having an idea of how things might work is not the same thing as a physical theory. And the best test is to submit the idea to a Physics periodical.
 
The thing is, if you reduce experience as far as you can, you end up with only three things:

1) Percepts of the world.
2) Facts related to those percepts.
3) Cartesian self.

1) and 2) are easily explained by models that have been around for 20+ years.

So yes, actually, if you contend that experience isn't explainable yet, you are insisting on an implied Cartesian self.

You jumped from one thing to the other without actually demonstrating a link.
 
Consider an emotion. According to the James-Lange theory of emotion (which Ichneumonwasp cited earlier in two of his typically excellent posts), the "feeling" (quale) of an emotion is just the subconscious (body) informing the conscious (mind) that a given situation has been evaluated as demanding a certain response (fearful, joyous, sad, angry, etc.). The quale then communicates this information from body to mind as a feeling, where it is recognized as "fear", say, and further evaluated against the symbolic (abstract) background knowledge of the subject. Emotional qualia allow us to distinguish between emotional states by differing in type and intensity and combining with each other (vague, complex, mixed feelings / emotions).

I'm a bit skeptical of this theory. It sounds rather like dualism to me. I would say that emotions are more primal than this. They're autonomous states which occur in reaction to the mind's evaluation of the situations it encounters.

Same as the James-Lange theory (see highlighted bit).

There may be the possibility for the organism to "block" the arising emotion at a conscious level, but I doubt the "body" and "mind" are so separated that there is a need for communication between the two like what's being described above. It sounds to me rather like James and Lange are dualists.

(The "body" and "mind" terminology is my own insertion, probably misleading; updated, it's the subconscious informing the conscious mind of its evaluation.)

In addition, I've worked with deep emotions as a therapist for some years and what I noticed was that people are often repeatedly attracted subconsciously into certain situations until an underlying feeling can be consciously expressed. The unconscious mind calls most of the shots over big decisions in life.

Well, that's a bit beyond James-Lange, I think, but not in conflict; actually, more like support: in J-L, the conscious mind is informed of the subconscious' evaluation (i.e., the emotion) by the bodily feeling of the emotion. It sounds like your patients were having an emotion without identifying its feeling, possibly in a welter of ambivalent emotions unable to single it out. Once the definite feeling (quale) of the emotion is expressed singly to consciousness, the patient can consciously deal with the underlying emotion (i.e., subconscious evaluation).

So I think your therapist experience agrees with James-Lange, though my poor explanation might've made you think otherwise.

We might hesitate to call qualia "language" because they are a non-symbolic system of communication (though much of what we communicate with spoken language, even written language, is non-symbolic, dependent on context, modulation, etc.) and seem more an uncertain continuum than a discrete alphabet; however, that may only reflect our own bias. Perhaps if experience is to be meaningful, it must define qualia (otherwise how would we recognize any experience, remember it, value it?), and exist not just as phenomenal reference for linguistic concepts about experience, but as language itself.

I think I see a Catch-22 here. If emotions do transcend logical interpretation then no amount of mental assessment is likely to get one very far. You simply have to consciously have the emotions...and that's it. Thinking, speaking or writing about them would be meaningless.

No, not that they "transcend logical interpretation"; just that they're expressed to consciousness non-symbolically, as embodied feelings. Moreover, unlike other qualia, even their source (the emotion, that is; not the 'situation' that provoked it) is internal to the subject; we must assume others 'feel' likewise (share the same vocabulary of emotional qualia) when we describe a feeling and define it as belonging to some emotion (or mix of emotions). This is more difficult than say assuming we all see "blue" in a certain way, as there are plenty of external blue things to point at for common definition (though the quale, my subjective experience of "blue", is still phenomenal / non-symbolic).

It frequently seems to me that if a human is to advance towards maturity it does have to develop the capacity to consciously feel more and more. There frequently appears little sense to the patterns of attraction it is drawn into, as the subconscious mind attempts to drag it into the feelings.

I agree, I think. Knowing, being conscious of, exactly what makes you happy or ticks you off is a big part of self-knowledge, and emotional maturity.
 
Have you tried publishing it? As I've said before, having an idea of how things might work is not the same thing as a physical theory. And the best test is to submit the idea to a Physics periodical.

You seem to misunderstand...

... the idea that identity is responsible for subjective experience (as opposed to whatever objective behavior is behind subjective experience) is shared by everyone who subscribes to the computational model of consciousness.

So for you to respond with your cookie-cutter "why don't you submit it to a journal" shtick is a little comical. Was that your intention?
 
Don't get me wrong. I understand Pixy's position implicitly.
Well, let's see.

It's simple and very logically self-consistent -- provided you only apply it to its own tautology. The problem is that when extended into the REAL world [i.e. outside of the conceptual sandbox Pixy can't bring himself to step out of] it falls apart.
Really?

Consciousness, subjectivity, quale, etc. are terms that refer to actual empirical phenomena; the fact that they don't figure into the narrow framework of S-AI doesn't invalidate them -- it just illustrates that the S-AI model is useless and should be revised or discarded.
The concept of qualia is not logically coherent.

I'm fine with consciousness and subjectivity, though.

But that's just the problem.
What is?

Strong AI [as presented by PixyMisa] necessarily implies that the unconscious processes of biology that give rise to awareness ARE aware.
Fallacy of composition.

If you remember, he specifically states that any feedback system is 'aware'
I never said that.

and any self-referential feedback system is 'conscious'.
Nor that.

Indeed, both of those statements are the precise reverse of what I said.

This definition encompasses not only every biological system, by default, but potentially any other physical system.
Sure. But that definition is something you made up, unrelated to anything I have said.

By such a definition, even when an individual is unequivocally unconscious the S-AI model says that they are conscious.
Define "unconscious".

The position is completely indefensible as a model of ACTUAL consciousness.
What is indefensible is your insistence on swapping definitions of words in the middle of a sentence.

What absolutely amazes me is the level of cognitive dissonance required to sustain such a view, yet individuals like Pixy still cling to it.
Since you have completely failed to understand the argument, nothing you assert regarding the argument has any bearing on anything.
 
Whoa... talk about equivocation ...

Obviously, if a model relies on a definition of "consciousness" that is something along the lines of what Pixy is using then that definition is no longer useful when distinguishing between a sleeping and waking human.

For you to suggest otherwise is a logical fallacy.

There is nothing inherently wrong with the definition Pixy uses, other than that it apparently classifies more entities as "conscious" than you agree should be.

So what?

If you want to know why humans behave like humans, just ask that question. If you want to know why you experience, just ask that question.

IMO, Pixy's definition of consciousness is relatively meaningless and largely misapplied in the context of a discussion on the HPC. When scientists say they're looking for the "neural basis" or "neural correlates" of "consciousness," by consciousness they mean conscious access. It's completely clear. They're not scouring the brain for self-referencing loops. They're trying to understand how conscious access comes about.

They're trying to understand why one bit of processing is conscious and another unconscious. Self-reference is relatively meaningless here because it is not the presence or absence of self-reference that makes a difference here. The brain is riddled with self-referencing loops. In some ways it is a giant self-referencing loop. So what?

This is not to say that Pixy's definition is invalid in all frames of reference, just to point out that it doesn't really have much meaning here.

Nick
 
Same as the James-Lange theory (see highlighted bit).

I don't agree. For me your statement "a given situation has been evaluated as demanding a certain response" implies that the subconscious mind is working out what needs to happen. I'm saying that the emotion is an autonomous response. Yours, or J-L's interpretation, for me is trying to understand the subconscious mind in the terms of the conscious mind. I don't see the point in doing this. For me the emotional responses of the subconscious are automatic, not thought out.

(The "body" and "mind" terminology is my own insertion, probably misleading; updated, it's the subconscious informing the conscious mind of its evaluation.)

Fair enough.

Well, that's a bit beyond James-Lange, I think, but not in conflict; actually, more like support: in J-L, the conscious mind is informed of the subconscious' evaluation (i.e., the emotion) by the bodily feeling of the emotion. It sounds like your patients were having an emotion without identifying its feeling, possibly in a welter of ambivalent emotions unable to single it out. Once the definite feeling (quale) of the emotion is expressed singly to consciousness, the patient can consciously deal with the underlying emotion (i.e., subconscious evaluation).

So I think your therapist experience agrees with James-Lange, though my poor explanation might've made you think otherwise.

I'm also not clear, but thanks for taking the time to explain more.

I agree, I think. Knowing, being conscious of, exactly what makes you happy or ticks you off is a big part of self-knowledge, and emotional maturity.

For me it's rather that accepting and having more feelings expands a person emotionally. They become more mature because they have less need to react to situations and can instead take the time for an appropriate and conscious response. Maturation is a biological process, largely driven subconsciously through emotional responsivity imo.

Nick
 
IMO, Pixy's definition of consciousness is relatively meaningless and largely misapplied in the context of a discussion on the HPC. When scientists say they're looking for the "neural basis" or "neural correlates" of "consciousness," by consciousness they mean conscious access. It's completely clear. They're not scouring the brain for self-referencing loops. They're trying to understand how conscious access comes about.
They're not scouring the brain for self-referencing loops because they've already found them. We already know how things work at that level, and it's exactly as I have said. They're working on the level above that.

They're trying to understand why one bit of processing is conscious and another unconscious. Self-reference is relatively meaningless here because it is not the presence or absence of self-reference that makes a difference here.
Yes it is.

The brain is riddled with self-referencing loops. In some ways it is a giant self-referencing loop. So what?
So everything.

Read Hofstadter.

This is not to say that Pixy's definition is invalid in all frames of reference, just to point out that it doesn't really have much meaning here.
Read Hofstadter.
 
You seem to misunderstand...

... the idea that identity is responsible for subjective experience (as opposed to whatever objective behavior is behind subjective experience) is shared by everyone who subscribes to the computational model of consciousness.

Yes, I'm aware of the common belief system. What elevates an idea from just something that pops into someone's head into a scientific theory is a clear link of cause and effect.

When it was shown that organic chemical processes were in fact no different in principle from inorganic, it involved the synthesis of urea, and the rebuttal therewith of the theory of vitalism. The whole process was scientific - involving the subset of physics which became modern chemistry.

I'm sure that prior to this, there were many people with various "beliefs" about the matter, but these did not comprise scientific theories.

The problem with the computational model is that it appears to be purely a matter of engineering. It's not been demonstrated that any fundamental process occurs in either the brain or a computer that doesn't happen at random. There is no physical definition of computation which is a precise fit for computers and brains.

So for you to respond with your cookie-cutter "why don't you submit it to a journal" shtick is a little comical. Was that your intention?

I do find it amusing that this vast enterprise should miss the simple point that indicates that the emperor has no clothes.
 
They're not scouring the brain for self-referencing loops because they've already found them. We already know how things work at that level, and it's exactly as I have said. They're working on the level above that.

Yes, they're working on the level above. So you repeatedly stating that consciousness is self-reference is relatively meaningless in this context.

Yes it is.

Now you're contradicting your statement above.

Nick
 
Don't get me wrong. I understand Pixy's position implicitly. It's simple and very logically self-consistent -- provided you only apply it to its own tautology. The problem is that when extended into the REAL world [i.e. outside of the conceptual sandbox Pixy can't bring himself to step out of] it falls apart. Consciousness, subjectivity, quale, etc. are terms that refer to actual empirical phenomena

Really ?
 
Can you suggest a fourth possibility?

It all depends on how you break experience up. Ideally it should be neurologically, which we can't currently achieve. The other two obvious options are - functionally, from AI and related models; and experientially - introspection on the process.

The latter for me yields experience as being just -

a) sensory perception
b) inner speech

Nick
 
