The Hard Problem of Gravity

Aku,

Defining ourselves out of the "problem" is precisely the solution. That is what reductionism amounts to -- properly defining the issue, explaining the components in terms of lower-level structures, and realizing that the original "problem" was only a problem because the issue was so poorly defined at the start.

By 'defining ourselves out' of the problem I do not mean that they are providing more accurate definitions of consciousness. I mean that they are avoiding the issue by labeling unconscious processes as 'conscious'. Essentially, they're offering a non sequitur as a solution.

The issue I'm referring to is not what separates human consciousness from that of other animals. What has not been properly explained is what physical process gives rise to consciousness. It is clear that simply processing information, self-referentially or otherwise, is not sufficient in and of itself to produce subjective experience. Merely processing information from light is not seeing light and merely processing information from chemicals is not smelling/tasting. I'm not so much concerned with specific qualitative experiences but how and why anything is experienced at all in any qualitative way.
 
However, nothing I've seen from the HPC proponents - in this thread at least - leads me to think that they are ruling out the possibility that there is physics that we just don't know yet.

But we can explain consciousness with the physics we already know.

You just don't understand how.

So what on Earth makes you think adding more physics is going to help you understand?
 
You seem to agree that something (including consciousness itself) must be "in focus" in some way for us to be conscious of it.

You also realize that "in focus" isn't equivalent to "in field of view." Your toe example is perfect. There are other examples, such as if I told you to focus on a target in front of you and your mind started wandering, thinking about other stuff.

Now I will ask you the opposite question -- do you think it is ever the case that something is "in focus" (using the very generic meaning of focus that we have agreed upon for this context) yet one is not conscious of it in some way?

I don't mean "in visual field of view," because for example it is obvious that if one is looking at a tree they are not aware of every single leaf yet every single leaf is in their field of view (incidentally Nick227 and Piggie wiggled out of this exercise by claiming they would be aware of every single leaf -- don't be that lame). So try to answer using the same meaning for "focus" as in your earlier response.

It's very possible for one's eyes to be focused on a particular region of space without the person consciously focusing on the objects in the field of view. I'm often given to bouts of daydreaming when my conscious mind is unfocused on the world around me. Of course, looking at a tree, a person's eyes and brain take in more sensory information than their conscious mind actually processes at any given moment. The vast majority of the information being processed by the brain and nervous system is not consciously experienced or even directly accessible via consciousness. In this regard, much of the intelligent processing of the nervous system is as unconscious as that of the immune system.

It's not so much self-referential processing, per se, that's the mystery. It's the question of why and how it's experienced as anything at all that is so 'hard' to figure out.
 
So, how are humans different to AI?

A computer does not have a user illusion. Possibly it could be programmed to have one, but as a rule it does not. Thus computers do not identify with their programming and defend their programmed activities against the programmed activities of other computers and bitch about how much better "their" programming is.

A computer cannot repress aspects of its processing. Humans can repress feelings. With a human there is this whole conscious-unconscious dynamic constantly going on. With a computer it is not like this. If AI is to be shown to be truly analogous to human consciousness it will need to deal with this stuff.

As an actual theoretical model, GWT (which is anyway basically neo-Freudian), does start to model the reality of human awareness, but in so doing creates these explanatory gaps for AI. Strong AI is one thing, and fine as far as it goes. But your personal variation of it, where the difference between consciousness and unconsciousness is created by the presence of self-referencing loops just looks a complete non-starter to me when it comes to explaining many aspects of our psychological reality. It can't deal with phenomenality and I can't see how it's compatible with GWT, which let's face it is the backbone of cognitive neuroscience these days.

Nick
 
A computer does not have a user illusion.
Mine does.

A computer cannot repress aspects of its processing.
Mine does.

I think yours is broken.

As an actual theoretical model, GWT (which is anyway basically neo-Freudian), does start to model the reality of human awareness, but in so doing creates these explanatory gaps for AI.
Name one.

Strong AI is one thing, and fine as far as it goes. But your personal variation of it, where the difference between consciousness and unconsciousness is created by the presence of self-referencing loops just looks a complete non-starter to me when it comes to explaining many aspects of our psychological reality.
Name one.

It can't deal with phenomenality and I can't see how it's compatible with GWT, which let's face it is the backbone of cognitive neuroscience these days.
Who says it's the "backbone of cognitive neuroscience"? I mean, apart from you?

Anyway, define phenomenality. Define it operationally. Show that it exists. Show that self-referencing information processing cannot generate it. Then you'll have something.
 
Merely processing information from light is not seeing light and merely processing information from chemicals is not smelling/tasting.
Well... yes and no.

It's all processing, but the processing is active and includes the behavioral response. And yes, that is seeing, and that is smelling/tasting.

What else do you suppose is there?
 
Says who? You still haven't defined the term.


What problem? You haven't defined the term, so you can't define what the problem is.

It's statements like these that convince me that you're either disingenuous or severely impaired. I've extensively and exhaustively defined what I mean by consciousness. Apparently, I've a masochistic desire to bang my head against brick walls 'cause here I am -- AGAIN -- defining the word.

Consciousness is subjective experience. Subjective experiences, by definition and necessity, are qualitative -- they have an inherent 'seemingness' to them. The specific quality of an experience(s) [e.g. 'redness', 'loudness', 'stinkyness', ad infinitum] is incidental. If an entity qualitatively experiences subjectively it is conscious; if an entity does not experience in such a capacity then it is not conscious.

Consciousness, as a capacity, has not yet been operationally defined or phenomenally explained. Regardless of the lack of such explanations, as a phenomenon, consciousness [i.e. subjective experience] is an established empirical fact. Incidentally, by dint of its nature, it is the first and primary empirical fact, without which there would be no observation at all. It is the a priori basis for all perception of reality.
 
AkuManiMani said:
Merely processing information from light is not seeing light and merely processing information from chemicals is not smelling/tasting.

Well... yes and no.

It's all processing, but the processing is active and includes the behavioral response. And yes, that is seeing, and that is smelling/tasting.

What else do you suppose is there?

The thing is, if you want to make the definition of 'seeing' or 'tasting' that broad, then every time a photon interacts with an object that object would be said to be 'seeing', and every chemical reaction would be an instance of 'tasting'. Every one of those interactions involves the active processing of information and a behavioral response to said information processing.

When I speak of consciousness I'm referring to all subjective experiences, whether they be of 'external' or 'internal' stimuli. A conscious entity can just as easily experience light as something akin to taste or a chemical as something akin to sight. The specific qualities of such experiences are incidental; the specific behavioral responses to such experiences are also incidental. The question of how and why there is any qualitative experience in any subject or what physical principle(s) gives rise to them is central to this discussion. Answering these questions is still a work in progress and assuming that they are already adequately answered is presumptuous and foolhardy.
 
The thing is, if you want to make the definition of 'seeing' or 'tasting' that broad, then every time a photon interacts with an object that object would be said to be 'seeing', and every chemical reaction would be an instance of 'tasting'.
Not at all.
Every one of those interactions involves the active processing of information and a behavioral response to said information processing.
Not at all.
When I speak of consciousness I'm referring to all subjective experiences, whether they be of 'external' or 'internal' stimuli.
How do you distinguish between "subjective" and "private"? (Private behavior is observable by only one, as opposed to public behavior which can be observed by more than one; other than that, there are no assumed differences. How is "subjective" different from this?)
A conscious entity can just as easily experience light as something akin to taste or a chemical as something akin to sight.
So, a machine could do this.
The specific qualities of such experiences are incidental; the specific behavioral responses to such experiences are also incidental. The question of how and why there is any qualitative experience in any subject or what physical principle(s) gives rise to them is central to this discussion.
ok, so now we have both "qualitative" and "subjective" to define.
Answering these questions is still a work in progress and assuming that they are already adequately answered is presumptuous and foolhardy.
Mostly, I think, asking these questions is a work in progress.
 
AkuManiMani said:
The thing is, if you want to make the definition of 'seeing' or 'tasting' that broad, then every time a photon interacts with an object that object would be said to be 'seeing', and every chemical reaction would be an instance of 'tasting'. Every one of those interactions involves the active processing of information and a behavioral response to said information processing.

Not at all.

I suggest you gain some familiarity with the current understanding of physics. Every physical interaction is the processing and exchange of information, in some form or another. Such processing invariably produces phenomenal [i.e. behavioral] effects on the entities involved.

AkuManiMani said:
When I speak of consciousness I'm referring to all subjective experiences, whether they be of 'external' or 'internal' stimuli.

How do you distinguish between "subjective" and "private"? (Private behavior is observable by only one, as opposed to public behavior which can be observed by more than one; other than that, there are no assumed differences. How is "subjective" different from this?)

Subjectivity is, by definition, a direct perception from the perspective of an individual subject. All 'public' perceptions are merely shared commonalities of subjective perception between multiple subjects observing the same object(s). All observations, whether they be of external stimuli ['public'] or internal stimuli ['private'] are inherently subjective. Logically, there can be no such thing as a non-subjective perception.

So, a machine could do this.

I've no doubt that a machine could, in principle, be capable of generating conscious perception. My point is, we currently don't know enough about consciousness to claim that we know how to produce such machines.

ok, so now we have both "qualitative" and "subjective" to define.
Mostly, I think, asking these questions is a work in progress.

Don't be silly, Mercucio. Those words are semantically as well defined as any other words in the English language. Consciousness is also more than adequately defined semantically. Unless you want to completely disregard the basis of all diction, it's rather absurd to claim that such words cannot meaningfully be discussed or referred to. What is lacking is a sufficient operational definition and understanding of consciousness as a physical process. I do not think that it is an 'unsolvable' problem; it just hasn't been solved yet.
 
It's very possible for one's eyes to be focused on a particular region of space without the person consciously focusing on the objects in the field of view. I'm often given to bouts of daydreaming when my conscious mind is unfocused on the world around me. Of course, looking at a tree, a person's eyes and brain take in more sensory information than their conscious mind actually processes at any given moment. The vast majority of the information being processed by the brain and nervous system is not consciously experienced or even directly accessible via consciousness. In this regard, much of the intelligent processing of the nervous system is as unconscious as that of the immune system.

It's not so much self-referential processing, per se, that's the mystery. It's the question of why and how it's experienced as anything at all that is so 'hard' to figure out.

wtf? I explicitly told you not to use the visual definition of "focus" and you did it anyway.

Then you went on to jump the gun again and preemptively attack AI research.

You were about to answer your own "why" and "how" question. I don't have time to try and draw it out of you anymore so I will just tell you -- reasoning.

Anytime you are conscious of something, you are reasoning about it. Anytime you are reasoning about something, you are conscious of it. So the question is -- what is the difference?

Be honest with yourself about your own perception of your own consciousness, your own experiences. Are you not reasoning about it? You think "I am experiencing looking at a red stop sign." That is reasoning. If you didn't reason about it then you wouldn't know you were experiencing. You think "I know consciousness has a quality because sitting here thinking about it I perceive quality." Is that not reasoning about quality?

Now the fundamental question is whether there is "quality" and you are reasoning about it or whether there is other reasoning that you are reasoning about and this process of reasoning about reasoning is the phenomenon of "the feeling of quality." I support the latter view because when I spend time introspecting on my own experience I realize that it is merely an endless chain of reasoning -- ideas lead to other ideas which lead to still other ideas. There is no atomic "experience" or "quality" to be had -- it is all a sea of facts connected with logical rules. And the most important realization is that I am not aware of any of this unless I subsequently reason about it. Culminating in full self awareness -- "I am aware of my own consciousness, or reasoning about reasoning about reasoning."
 
Now the fundamental question is whether there is "quality" and you are reasoning about it or whether there is other reasoning that you are reasoning about and this process of reasoning about reasoning is the phenomenon of "the feeling of quality."
For whichever side of the argument it supports or refutes, the things we're calling qualia can be reasoned about (again, assuming we're referring to, for lack of better terms, the experiential aspect of percepts). In terms of sound, for example, the concept of pitch leads to a comparative equivalence class in human perception of the octave, which corresponds to something quite real -- exact doubling of frequencies.
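The octave-as-frequency-doubling point is easy to make concrete. A minimal sketch (the frequencies and tolerance are illustrative choices, not anything from perceptual literature): two tones share a pitch class exactly when their frequency ratio is a power of two.

```python
import math

def octaves_apart(f1, f2):
    """Number of octaves between two frequencies: log base 2 of their ratio."""
    return math.log2(f2 / f1)

def same_pitch_class(f1, f2, tolerance=1e-9):
    """True when the two frequencies differ by a whole number of octaves,
    i.e. their ratio is an exact power of two."""
    n = octaves_apart(f1, f2)
    return abs(n - round(n)) < tolerance

# A above middle C (440 Hz) against 880 Hz, 1760 Hz, and a fifth (660 Hz)
print(octaves_apart(440.0, 880.0))      # → 1.0
print(same_pitch_class(440.0, 1760.0))  # two octaves up → True
print(same_pitch_class(440.0, 660.0))   # a perfect fifth → False
```

The equivalence class the post describes is just the set of frequencies for which `same_pitch_class` returns True relative to a reference tone.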

In terms of vision, the canonical qualia--color--can be compared with each other. For example, orange is closer to red than yellow is, and violet appears a blend of red and blue. These, as well, tend to have physical correlates.

For example, our color perception is driven by cones (rods play no role in color vision). S, M, and L cones are differentially sensitive across the spectrum: L is sensitive across the entire spectrum, though preferentially towards long wavelengths; M is sensitive primarily in the middle of the spectrum; S is sensitive mainly at the shorter wavelengths. Pre-modern yet still popular monitor-inspiring color theory notwithstanding, these do not give us "RGB" lines; instead, their signals are combined. L+M mainly gives our brightness metric, L-M gives a "red/green" channel, and the brightness metric minus S gives a "yellow/blue" channel. With this layout, the red/green channel provides a "middle"/"outer" measure of the spectrum, and yellow/blue a "short wavelength"/"long wavelength" metric. As such, near the violet end of the spectrum, where M sensitivity has dropped off faster than L's, we get a bit of "red" color information, and that's also where S sensitivity peaks, triggering "blue". So we see violets... which are also subjectively a mix of red and blue.
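The channel arithmetic above can be sketched in a few lines. The cone responses below are made-up illustrative numbers, not real spectral sensitivities; only the recombination into opponent channels follows the description.

```python
def opponent_channels(L, M, S):
    """Combine cone signals into a brightness metric and two opponent
    channels, per the opponent-process layout described above."""
    brightness = L + M           # luminance-like channel
    red_green = L - M            # positive → reddish, negative → greenish
    yellow_blue = (L + M) - S    # positive → yellowish, negative → bluish
    return brightness, red_green, yellow_blue

# Hypothetical cone responses near the violet end of the spectrum:
# S peaks, and M has fallen off faster than L, leaving a little "red" signal.
b, rg, yb = opponent_channels(L=0.15, M=0.05, S=0.9)
print(rg > 0)   # → True: a touch of "red"
print(yb < 0)   # → True: strongly "blue"
```

With those toy numbers the model reproduces the post's punchline: a short-wavelength stimulus yields both a small positive red/green signal and a strongly negative yellow/blue signal, i.e. violet reads as red plus blue.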

So these "qualia" seem to, at some point, break down into some form of physical correlate, but particular to the signals in our brains. Whether or not all such "qualia" act that way I've no clue, but color and sound are interesting test cases. Regardless:
  • The fact that we can reason about them, and that subjective comparisons have physical correlates, is (a) noteworthy in itself, and (b) something that at least has potential to influence external behavior (such as answering questions)
  • I'm not quite sure it matters whether you call these "qualia" or "other reasoning", as behaviorally they amount to the same thing. So for now I'm going to call these qualia, though I'm very wary of the singular "quale".
  • For whatever it's worth, qualia are authoritative in classifications. For example, if we had a color theory based on "hard data" that said that x was red, but we look at it, and x appears blue, we have to throw out the color theory. That's not to say that physics doesn't cause colors, but that there should be some explanation for why we perceive colors in certain ways for a complete color theory, not just a raw "what the colors really are".
  • And finally, green bugs me. It appears a mix of blue and yellow, yet, signal wise, they are different channels; furthermore, bluish yellow, presumably a distinct color, can be produced in labs (again, presumably). So why does green appear a mix of blue and yellow to me?

This is how I tend to attack questions such as qualia, as opposed to how it's been discussed in the thread pretty much by all sides--or at least as far as I'm able to understand. Hopefully I'm just making a simple mistake and someone can correct me, but if I'm on the right track, then the conclusion is obvious and clear:

I still have no clue what's going on.

(FYI, if anyone cares and happens not to know about it, I could provide sources for my color theory; however, since it's easy enough to google, don't be surprised if I cite "lmgtfy"--"opponent process" is a good magical word, as is "handprint").
 
Mine does.


Mine does.

I think yours is broken.

How does your computer subconsciously repress awareness of its own programming? How does it drive aspects of its thinking processes out of awareness through fear?

The computer does not believe in egoic control (non-determinism). If it had an ego it would believe that it was a slave. Humans do not, as a rule.

I'm not saying Strong AI is wrong. I do not personally believe in the HPC. But when I look it is clear there are still heaps of explanatory gaps. For me the HPC is still a very valid scientific response to the issue, because as Baars says we are with understanding consciousness where Ben Franklin was with understanding electricity around 1800. We're waiting for Faraday and Galvani.

I quote you again, without much hope you can take it in yet...

Blackmore vs Baars said:
Blackmore: But there still seems to be a mystery here to me, that what you're saying is that the difference between a perception that's unconscious and one that's conscious is a matter of which bit of the brain the processing is going on in. How can one bit of the brain with neurons firing in it be conscious, where another bit of the brain with very similar neurons firing in a very similar way is not? Don't we still have this explanatory gap?

Baars: There are a lot of explanatory gaps. We are in the study of consciousness where Benjamin Franklin was in the study of electricity around 1800: he knew of a number of basic phenomena, and he might have known about the flow of electricity, and the usefulness of the stream metaphor - that things go from one place to the other, a little like the flow of water; that you can put resistors into the circuit, which are a little bit like dams. You have a useful analogy at that point in understanding electricity, which actually turns out to be not bad; but you have to improve it. So we're at a very primitive stage, but there are a few things that we can say. (Blackmore 2005)




Who says it's the "backbone of cognitive neuroscience"? I mean, apart from you?

Go online, Pixy. There are GWT variations everywhere. It's pretty much only people like Blackmore or O'Regan who still dispute GWT these days. Read Dennett. Read Blackmore. GWT is everywhere.

GWT rules because, like Freud's original proposals 150 years ago from which it is partially drawn, it accounts for psychological reality.

Strong AI struggles with the human conscious-unconscious dynamic. Strong AI can only deny phenomenality as a valid phenomenon (HPC) because there isn't research yet to do more than go for denial and attacking definitions. You have only the option to try and explain phenomenality away. Not because Strong AI is necessarily wrong but because there isn't the research yet. The only strings to your bow are defining consciousness out of existence or sending out thinly-veiled put-downs to anyone who disputes what you believe, and frankly a 10-line programmed chatbot can do this.

Nick
 
If we include both public and private behavior, then... yes, it is.

It depends on your definition, I suppose, but I regard my investigation into my own consciousness as the most important element. If not for that, it wouldn't even be an issue, IMO.
 
Interesting that you lump rocks and thermostats together as they were quite clearly distinguished by everyone here.

Everyone here, eh? Well, there was I thinking that accessing the consciousness of others was extremely difficult, but it seems you have the knack.

Rocks and thermostats may quite easily be distinguished, but they exhibit exactly the same degree of consciousness - that is, none. There are no physical processes taking place in a thermostat that don't take place in a rock, on a micro or macro scale. There is no objective physical description of a rock that can't be applied to a thermostat and vice versa.
 
But we can explain consciousness with the physics we already know.

You just don't understand how.

So what on Earth makes you think adding more physics is going to help you understand?

Sorry, I've missed the physics books that deal with consciousness. Could you give me a few pointers?
 
