The Hard Problem of Gravity

If only you'd take the next step and realise not only that human-like consciousness is an irrelevant concept when dealing with computers (at least in their present form), but also that calling computers conscious doesn't tell us anything useful about them.

Does it tell us anything useful about people?

If computer consciousness is nothing like human consciousness, then what is the point of saying "computers are conscious"? We know how they work. We know how to get them to do stuff.

It is called epistemology. You might want to read up on it.

Don't you think that there's something odd about claiming that there's nothing special about human consciousness, and simultaneously admitting that it can never be duplicated?

Who said that? I didn't. All I said was that human-like consciousness requires a human-like body (to generate human-like percepts).

And I never said human consciousness isn't special. I happen to think I am very special, if only because I am me and nothing else is. All I said was that human consciousness isn't special in the way most people think it is. In particular, in a way that is qualitatively different from everything else.
 
Oh, not at all!

We infer consciousness, and use a vocabulary of consciousness, to describe the behavior of human and non-human animals, cars, computers, tornadoes... we will infer consciousness, in the absence of assertions, quite often. Usually, such inferences are made in situations where we are ignorant of the causes of a behavior--when it is not easily predictable. When a car refuses to start, when a tornado aims at a trailer park, when a computer waits until just before that big paper is due before choosing to crash. When a dog gets stubborn, when a horse refuses to jump a gate, when people change their minds.

We absolutely do not need an assertion of consciousness in order to treat a behavior as conscious. And of course, an assertion of consciousness can be ignored if we do not see it (as you say) as spontaneous. You meant, probably, that a computer programmed to assert consciousness does not count; I would add a person who insists he is perfectly happy doing the bidding of a cult leader. Some may say those actions are conscious; others would say that he is "brainwashed" and being controlled not by himself but by another. Any time we see sufficient controlling power in the environment, we need not infer consciousness.

Of course, what that does, boiled down, is largely equate claims of consciousness with ignorance of determining factors. And that seems about right.


Where does the concept of consciousness come from? Why are we talking about it? I can imagine us talking about life, or intelligence - but consciousness? The only reason we consider certain behaviours as conscious is because we report them as being conscious.

Consciousness is a phenomenon because we report it as such. Conscious behaviours are so-called because we differentiate them from other behaviours.
 
Oh, really? So the periodic table isn't meaningful? Because that is pretty context-dependent.

The difference here is that you haven't twisted the definitions involved to suit your argument. If you did, then we would have "why does the difference in number of valence electrons matter? Number only matters to humans, electronegativity only matters to humans, blah blah blah."

Am I wrong?

Why change now?

Could I not twist the entire periodic table into useless mush just like you did with rocks and switches?

No, you couldn't. If you could understand why the periodic table is science, and hence not context dependent, you'd then realise what you need to do to have a scientific understanding of consciousness and information processing.

The periodic table is not predicated on what elements are "useful" or "relevant" to some "entity". The concept of electronegativity works just as well whether we are discussing computers, rocks or human beings. It is predicated on properties that can be precisely and objectively defined. We can reasonably predict that an alien being would understand our definition of the elements. He would be very unlikely to represent the data the same way, but he would probably have the same concept of what an element is.

It's the fact that the properties of elements are universal that makes the periodic table a powerful scientific concept.
 
Where does the concept of consciousness come from? Why are we talking about it? I can imagine us talking about life, or intelligence - but consciousness? The only reason we consider certain behaviours as conscious is because we report them as being conscious.

Consciousness is a phenomenon because we report it as such. Conscious behaviours are so-called because we differentiate them from other behaviours.

No. The category of "conscious" emerged from a useful (a perfectly good scientific word, as pragmatism is a perfectly good scientific philosophy) grouping of verbs at a simpler level.

People make attributions about others' behavior. Broadly speaking, these attributions are environmental or internal. When we can easily see the cause of a behavior in the environment, we tend to make an environmental attribution (duh); when the cause is less clear, we make internal attributions--that is, we attribute the person's behavior to something about the person herself. Heider (early social psychologist) called this a "naive personality theory", in that it was not based on a scientific analysis of behavior but simply a label for what was not known. This view naturally leads to seeing individuals as active agents rather than reactive, environmentally driven objects. Viewing someone as if they were the cause of their behaviors was adaptive, if inaccurate. It was useful, but an actual scientific analysis of the causes of behavior is more useful.
 
The periodic table is not predicated on what elements are "useful" or "relevant" to some "entity".

Sure it is. Humans find it useful to differentiate between elements based on their properties. If we didn't, the periodic table wouldn't be the way it is.

What is so special about having 4 protons in a nucleus instead of 3 or 5? Hmmm? If humans weren't around, who would care?

Isn't an atom merely a collection of the same fundamental particles as all other atoms?

The concept of electronegativity works just as well whether we are discussing computers, rocks or human beings. It is predicated on properties that can be precisely and objectively defined.

Same with switching.

We can reasonably predict that an alien being would understand our definition of the elements. He would be very unlikely to represent the data the same way, but he would probably have the same concept of what an element is.

Same with switching.

It's the fact that the properties of elements are universal that makes the periodic table a powerful scientific concept.

Same with switching.
 
Same with switching.



Same with switching.



Same with switching.

If it really were the same with switching, then it wouldn't be difficult to come up with a scientific definition of switching, which would include thermostats and exclude rocks, just as the definition for hydrogen includes deuterium and excludes helium.
 
No. The category of "conscious" emerged from a useful (a perfectly good scientific word, as pragmatism is a perfectly good scientific philosophy) grouping of verbs at a simpler level.

People make attributions about others' behavior. Broadly speaking, these attributions are environmental or internal. When we can easily see the cause of a behavior in the environment, we tend to make an environmental attribution (duh); when the cause is less clear, we make internal attributions--that is, we attribute the person's behavior to something about the person herself. Heider (early social psychologist) called this a "naive personality theory", in that it was not based on a scientific analysis of behavior but simply a label for what was not known. This view naturally leads to seeing individuals as active agents rather than reactive, environmentally driven objects. Viewing someone as if they were the cause of their behaviors was adaptive, if inaccurate. It was useful, but an actual scientific analysis of the causes of behavior is more useful.

Why did this concept arise in the first place? One can assume that it was a theory constructed from the objective third person observation of human beings, or one can take the view that the reason people think that they are active agents rather than reactive environmentally driven objects is that that is the way they directly perceive themselves. I do not believe that consciousness as an idea came out of anything other than the experience of consciousness. It's certainly not a helpful idea in the sense of explaining things. Treating humans as objects like everything else in the environment would be a lot simpler.
 
Why did this concept arise in the first place?
Utility. It did not have to; the people who did this were more successful.
One can assume that it was a theory constructed from the objective third person observation of human beings, or one can take the view that the reason people think that they are active agents rather than reactive environmentally driven objects is that that is the way they directly perceive themselves.
Given that we learn to label even our private behavior through the public behavior of others, your view requires a bit more to get it started.
I do not believe that consciousness as an idea came out of anything other than the experience of consciousness.
You are under no obligation to be right.
It's certainly not a helpful idea in the sense of explaining things.
It most certainly is, if you don't have the benefit of an experimental analysis of behavior.
Treating humans as objects like everything else in the environment would be a lot simpler.
And far less useful. Given that we cannot see their environmental histories in casual interactions, knowing that they are the products of their histories would not help us to predict their actions. The naive personality theory has utility.

"A lot simpler" is only helpful if the options are equally useful.
 
The operative word in my question was "precisely".

This stuff from Chalmers is just the sort of vague and imprecise stuff I was talking about before. There are many "why" questions - why is there gravity? Why is there anything at all? Why do birds suddenly appear?

We don't know why this information processing does not go on 'in the dark' free of any inner feel.

But on the other hand we don't know any special reason why it should go on 'in the dark' free of any inner feel.

So it is not so much a "problem" as a question.

I agree. In fact Chalmers even weakens things further by using the word "may," as in there may be something left still to be explained.

However, given the complexity of human consciousness, I still find his basic point valid. We don't know enough yet.

Nick
 
I agree. In fact Chalmers even weakens things further by using the word "may," as in there may be something left still to be explained.

However, given the complexity of human consciousness, I still find his basic point valid. We don't know enough yet.

Nick

However the point that the OP seems to be making is that you could say the same about stuff besides consciousness, for example gravity.

People seem to have this idea that the HPC is fatal for Materialism, but don't say the HPG is fatal for Materialism. I wonder why?
 
Wrong.


So tell me.

Dennett's original model, Multiple Drafts, needs no "self-referencing loops" to create consciousness. In fact, I'm fairly sure he would ridicule the idea, not that this should necessarily mean anything given other ideas which he has ridiculed and subsequently re-examined.

Likewise Fame in the Brain, which is basically his reworking of GWT. Self-referencing doesn't come into the equation. Read through his 2000 paper and see if you can find anything about self-referencing loops. They just don't come into the equation because consciousness itself is a global access state. Attention may be directed by self-referencing loops, but consciousness itself is not innately self-referencing. I cannot see how you can reconcile your model with GWT. I can't see how you can reconcile it with O'Regan's Sensorimotor Theory either.

Your theory might stand up, to a degree, when considering inner dialogue alone. I'm not clear here. But in considering human consciousness as a whole, with all its varied aspects, it really does seem to me a complete non-starter.


What's "sensory consciousness"? You mean sensory awareness?

What are your definitions here? How do you distinguish between the two?

That depends on what you mean by "narrative selfhood". As long as you understand that the narrative self requires no language, only symbols, then you are correct.

The narrative self is the "user illusion," the artificial notion that conscious states belong to someone. I would take it that it requires language. Certainly those areas of the brain which create and interpret language are known to be especially active during inner dialogue.


If so, then the reason Hofstadter isn't talking about that when he talks about strange loops - self-reference - is that you don't require even that for awareness. Where consciousness is simple, awareness is almost trivial. Dennett's whole point with the thermostat is that it is aware, but not conscious.

So...are you saying there can be awareness of the monitor without consciousness present then?


It's not my theory.

Whose is it, then?


All you are doing here is dragging in random baggage and slapping "consciousness" stickers on it. That doesn't make it consciousness, or an aspect of consciousness, or in any way relevant to the discussion. Baggage with a sticker is still just baggage.

You're saying that the monitor in front of me is not an aspect of consciousness?

Nick

ETA: Your ideas may be all well and fine for AI, Pixy. I don't know. But for human consciousness they just don't cut it, as I see it, and furthermore they still leave space for the HPC to creep back in. The leading question for me is...how do you actually model this difference between conscious and unconscious streams of data in GWT using self-referencing loops? How is global access innately self-referencing?
 
However the point that the OP seems to be making is that you could say the same about stuff besides consciousness, for example gravity.

Well, possibly you could. But I don't see that there are valid questions about the nature of gravity on the level that there are valid questions about the nature of consciousness.

People seem to have this idea that the HPC is fatal for Materialism, but don't say the HPG is fatal for Materialism. I wonder why?

Some people have that idea. However, Chalmers only says "may," as in may be problematic.

I like Baars here. He says, essentially...get back to us in 100 years. I think that's realistic.

Nick

eta:

Blackmore vs Baars said:
Blackmore: But there still seems to be a mystery here to me, that what you're saying is that the difference between a perception that's unconscious and one that's conscious is a matter of which bit of the brain the processing is going on in. How can one bit of the brain with neurons firing in it be conscious, where another bit of the brain with very similar neurons firing in a very similar way is not? Don't we still have this explanatory gap?

Baars: There are a lot of explanatory gaps. We are in the study of consciousness where Benjamin Franklin was in the study of electricity around 1800: he knew of a number of basic phenomena, and he might have known about the flow of electricity, and the usefulness of the stream metaphor - that things go from one place to the other, a little like the flow of water; that you can put resistors into the circuit, which are a little bit like dams. You have a useful analogy at that point in understanding electricity, which actually turns out to be not bad; but you have to improve it. So we're at a very primitive stage, but there are a few things that we can say. (Blackmore 2005)
 
The algorithm isn't in the internal workings of the die. The algorithm is to toss the die, and map its results onto the desired range.

It happens to be trivial because the range of the die matches the range of our outcome. Suppose we're using a 30^4000-sided die instead. Then we need two tosses. If we had a 2-sided die, we'd need to do something even more complex (assuming we cared about equal distributions).

Tossing here is a special form of output--it's triggering an external event. But it still needs to be done to accomplish the goal, and the goal is still achievable given a series of well defined steps.

Or I could run down the other side. You're claiming that in order to produce my desired output, I need to rely on a physical event. Fine. But I raise you. In order to produce my desired output, I also need (in most cases) an algorithm :).
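That mapping step can be sketched in code (a minimal illustration of the idea, assuming the goal is a fair 1..6 outcome built from a 2-sided die; the function names are mine, not from the thread). Three flips give a uniform value in 0..7, and rejecting 6 and 7 leaves the remaining outcomes equally likely:

```python
import random

def coin():
    """A fair 2-sided 'die': returns 0 or 1."""
    return random.randrange(2)

def d6_from_coins():
    """Simulate a fair six-sided die using only coin flips.

    Three flips give a uniform value in 0..7; values 6 and 7 are
    rejected and the flips redone, so the surviving outcomes 0..5
    remain equally likely (rejection sampling).
    """
    while True:
        value = coin() * 4 + coin() * 2 + coin()  # uniform in 0..7
        if value < 6:
            return value + 1  # map onto the desired range 1..6

rolls = [d6_from_coins() for _ in range(10_000)]
print(min(rolls), max(rolls))
```

The physical event (the toss) still does the randomizing; the algorithm is everything wrapped around it.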
As it relates to randomness, I've always thought of algorithms strictly in the mathematical sense, but even that fails given a number of examples I can now think of, including a mechanical differential analyzer. All I can say is that I was wrong, but I very much appreciate the smiley and the tone of your response.
 
Well, possibly you could. But I don't see that there are valid questions about the nature of gravity on the level that there are valid questions about the nature of consciousness.

Huh? You don't?

Here are some:

Why should the laws of the universe give rise to a gravitational force at all?
How is it that some particles are subjects of gravitational force?
Why does the effect of gravity exist at all?
Why is there a gravitational component to behavior?
Why aren't we gravitational zombies?
Gravitational Natures are categorically different from other behaviors

What say you, eh?
 
Some people have that idea. However, Chalmers only says "may," as in may be problematic.
Here are the words of the man himself:
The problem of consciousness is indeed a serious challenge for materialism. In fact, I think it's a fatal problem for materialism
No "may" in there. And I haven't even seen the serious challenge yet.
 
If it really were the same with switching, then it wouldn't be difficult to come up with a scientific definition of switching, which would include thermostats and exclude rocks, just as the definition for hydrogen includes deuterium and excludes helium.

Right off the top of my head, I would say:

A system "switches" if a linear change in some internal behavior results in a nonlinear change in some other internal behavior, and both changes are bidirectional.

According to that scientific definition, the only rocks that switch are doped semiconductors. Not canyon stones, not volcanoes, not rocks falling off a cliff.

According to that scientific definition, all thermostats switch.

Now, what were you saying?
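That definition can be sketched as a toy model (the class name and thresholds below are illustrative assumptions, not anything from the thread): a thermostat's output jumps between two discrete states as a smoothly varying input crosses its thresholds, and the jump is reversible in both directions. A rock has no such step response.

```python
class Thermostat:
    """A toy thermostat: a linear change in the sensed temperature
    produces a nonlinear (step) change in heater state, and the
    change works in both directions."""

    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.heater_on = False

    def sense(self, temperature):
        # Nonlinear, bidirectional response: the output switches
        # between two discrete states as the input crosses the
        # thresholds on either side of the setpoint.
        if temperature < self.setpoint - self.hysteresis:
            self.heater_on = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heater_on = False
        return self.heater_on

t = Thermostat(setpoint=20.0)
states = [t.sense(temp) for temp in (25, 22, 19, 18, 21, 25)]
print(states)  # prints [False, False, True, True, False, False]
```

The input sweeps down and back up smoothly; the output changes only at the thresholds, which is the nonlinear, bidirectional step the definition requires.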

Furthermore, I realized (again) that you were wrong and I was (again) wrong to agree with you -- by my first-order logic definition, rocks do not switch, because of the "if and only if." There is no conceivable situation where a system external to a rock could enter a given (reversible) state if and only if the rock were in a given reversible state, in which it wouldn't make sense to say the rock genuinely switches. In other words, all of your examples so far aren't switching.
 
Dennett's original model, Multiple Drafts, needs no "self-referencing loops" to create consciousness. In fact, I'm fairly sure he would ridicule the idea, not that this should necessarily mean anything given other ideas which he has ridiculed and subsequently re-examined.

Likewise Fame in the Brain, which is basically his reworking of GWT. Self-referencing doesn't come into the equation. Read through his 2000 paper and see if you can find anything about self-referencing loops.
Good grief, Nick, that paper is strewn with references to self-reference! How can you possibly fail to grasp that?

What do you think a "proto-self evaluator" is? What do you think he's talking about when he says:

Dennett said:
The looming infinite regress can be stopped the way such threats are often happily stopped, not by abandoning the basic idea but by softening it. As long as your homunculi are more stupid and ignorant than the intelligent agent they compose, the nesting of homunculi within homunculi can be finite, bottoming out, eventually, with agents so unimpressive that they can be replaced by machines (Dennett, 1978).
Dennett's talking first about the fact that human consciousness is built up from a network of simpler information processing subsystems. But more generally, what's the alternative to infinite regress? Loops, Nick. Loops.

They just don't come into the equation because consciousness itself is a global access state.
There is no global access state. That's just a model, laid on top of self-reference.

Attention may be directed by self-referencing loops, but consciousness itself is not innately self-referencing.
Fail.

Go back to Descartes' cogito. That's a statement about self-referential information processing.


I cannot see how you can reconcile your model with GWT.
GWT is a higher-level model of the human mind. It cannot exist without self-referential information processing.

I can't see how you can reconcile it with O'Regan's Sensorimotor Theory either.
Your theory might stand up, to a degree, when considering inner dialogue alone.
Again, what do you think you're talking about when you say "inner dialogue"?

I'm not clear here. But in considering human consciousness as a whole, with all its varied aspects, it really does seem to me a complete non-starter.
What aspects?

What are your definitions here? How do you distinguish between the two?
Awareness is perception.

Consciousness is awareness of self.

See the loop?

The narrative self is the "user illusion," the artificial notion that conscious states belong to someone.
That is self-referential, yes.

I would take it that it requires language.
Why on Earth would you think that? All it requires is self-reference.

Certainly those areas of the brain which create and interpret language are known to be especially active during inner dialogue.
How is that relevant?

So...are you saying there can be awareness of the monitor without consciousness present then?
Of course. That's exactly what Dennett is getting at with his thermostat example. It's aware, but it's not self-aware, not conscious.

I think I've explained that about thirty times now.

Whose is it, then?
How far do you want to go back? It traces back at least to Descartes (though after touching on the truth, he wandered off into less productive fields). Probably further.

You're saying that the monitor in front of me is not an aspect of consciousness?
No, Nick. The monitor in front of you is a monitor.

ETA: Your ideas may be all well and fine for AI, Pixy. I don't know. But for human consciousness they just don't cut it, as I see it, and furthermore they still leave space for the HPC to creep back in.
Read Hofstadter.

Also, please provide a statement of the HPC that isn't inherently self-contradictory. Chalmers certainly can't.

The leading question for me is...how do you actually model this difference between conscious and unconscious streams of data in GWT using self-referencing loops?
What is a "conscious stream of data" supposed to be?

How is global access innately self-referencing?
Once again, there is no such thing as global access at any physical level. That's physically impossible. There's just neurons sending signals to one another.
 
Furthermore, I realized (again) that you were wrong and I was (again) wrong to agree with you -- by my first-order logic definition, rocks do not switch, because of the "if and only if." There is no conceivable situation where a system external to a rock could enter a given (reversible) state if and only if the rock were in a given reversible state, in which it wouldn't make sense to say the rock genuinely switches. In other words, all of your examples so far aren't switching.
Yep.
 
westprog said:
Why did this concept arise in the first place? One can assume that it was a theory constructed from the objective third person observation of human beings, or one can take the view that the reason people think that they are active agents rather than reactive environmentally driven objects is that that is the way they directly perceive themselves.

Here's a plausible scenario: The Neurology of Self-Awareness.

I do not believe that consciousness as an idea came out of anything other than the experience of consciousness.
Which is not a very good starting point, because it might leave you with nothing other than chasing a ghost.

I could also say: "I do not believe that weather as an idea came out of anything other than the experience of weather." Yet all that I experience is different referents to the concept.

It's certainly not a helpful idea in the sense of explaining things.
It's not helpful in terms of describing it as a property in its own right.

Treating humans as objects like everything else in the environment would be a lot simpler.
Not necessarily. Treating them as conscious agents has turned out to be the simpler path from a social vantage point, which is the simpler path to survival.

Treating other humans as mere objects has been useful on the battlefield. Treating them as conscious agents has also been useful in trying to figure out their war strategy. These two examples might already indicate that "consciousness" is a pragmatic concept.
 
