The Hard Problem of Gravity

Because there is no physical reason that anything should "feel" anything.
You are a self-referential information processing system. These "feelings" and "experiences" are merely sub-processes. They "feel" real to "you" because that "you" is nothing more than a synthesis of all these sub-processes.

Computers work exactly the same way.
 
Again, what is obvious to one person might not be to another, either because two people simply think differently or because the person who thinks it's obvious is simply wrong. It would help if you'd stop flapping your arms around and actually answer.

If you noticed, I took the time to answer his question after he clarified it as rhetorical and explained his reasoning behind it. Simply asking "how do you know being conscious is different from being unconscious?" is, on its face, a silly question. His follow-up clarification lent itself more to a response because it wasn't absurd.
 
Oh, no. I don't think so. I asked first, and my question is directly on topic. Can you or can you not define consciousness in a way that doesn't assume its conclusion? I'm not talking about the dictionary here. I'm talking about an operational definition, one we can actually use for the purposes of this discussion.

Okay, this is the thing that's pissing me off. The fact that you're even asking such a question shows that you've paid attention to nothing I've said, or even taken the time to understand what my position is - the position I've explicitly and repeatedly stated for pages on end. Surely you can understand my frustration?

I've stated over and over and over and over again that there is currently no sufficient operational definition of consciousness. I've also stated, just as often, that scientific efforts would be well spent working toward such a definition and suggested possible avenues of investigation. At this point in time, consciousness is about as defined as gravity was before Newton. We know it is a real phenomenon; we just don't have a sufficient operational definition or solid explanation of it.

The rather flip answer being presented by the strong AI proponents here is about as flimsy as the "things fall because God wills it" explanation of gravity, before it was scientifically defined.
 
So you are saying that in order to be "conscious" of something you must be "paying attention" to it in some way?

What about yourself -- do you think you need to be paying attention to your consciousness, on some level, in order to be conscious of your own consciousness?

This is a very serious question.

Good. I've given it some serious thought, even before the OP was posted, and I'll be glad to share what I've so far been able to discern.

In answer to your first question, being conscious of something is analogous to having something in your field of vision. The farther something is into the periphery of your conscious focus, the less conscious you are of it. Of course, this does not necessarily mean that one is literally looking at the subject of their focus, though actually looking at an object is an example of such [interestingly enough, the focusing of conscious attention has been associated with the synchronous firing of multiple neurons rather than the independent firing of individual neurons]. For example, one can focus conscious attention on the pain of a stubbed toe without actually looking at it. Focusing your conscious attention on one thing can greatly hinder your ability to handle other tasks. It seems that conscious thinking has something analogous to an attention budget: tasks that are given more conscious attention can be consciously dealt with more effectively. Things completely outside of one's conscious field of focus are off the radar, so to speak, and must be dealt with unconsciously, if they are processed at all.

In instances where one must multitask, it seems that one must have conditioned behaviors to handle tasks on the periphery of one's attention [or even completely outside of it]. An example of this is a person carrying out a detailed conversation while driving. Such a task is much more difficult for someone who is just learning to drive. But once they have driving conditioned into their behavioral repertoire, they can relegate driving to the periphery of their conscious attention while assigning a greater portion of their conscious focus to other tasks. More extreme examples of this would be the unconscious functioning of autonomic processes in the body, like instincts, reflexes, heartbeat, etc. Of course, one can train oneself to consciously affect some of these auxiliary processes to a limited degree, but by and large they are outside of one's direct conscious awareness or volition.
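The "attention budget" picture in the paragraphs above can be caricatured in a few lines of toy code. Everything here is a hypothetical illustration of the resource-allocation idea (the fixed budget, the 0.5 threshold, the task names are all invented), not a model anyone in the thread has actually proposed:

```python
# Toy sketch: a fixed attention budget is split across tasks.
# Tasks given enough attention are handled "consciously"; tasks that
# fall below the threshold fall back to conditioned (automatic)
# handling, if any exists - otherwise they are dropped entirely.

ATTENTION_BUDGET = 1.0

def allocate(tasks, weights):
    # Normalize the weights so allocations sum to the fixed budget.
    total = sum(weights.values())
    return {t: ATTENTION_BUDGET * weights.get(t, 0) / total for t in tasks}

def handle(task, attention, conditioned):
    if attention > 0.5:
        return f"{task}: handled consciously and carefully"
    if task in conditioned:
        return f"{task}: handled automatically (conditioned)"
    return f"{task}: dropped"

# The experienced driver from the example: conversation gets most of
# the budget; driving is relegated to conditioned behavior.
tasks = ["conversation", "driving"]
alloc = allocate(tasks, {"conversation": 3, "driving": 1})
conditioned = {"driving"}
for t in tasks:
    print(handle(t, alloc[t], conditioned))
# → conversation: handled consciously and carefully
# → driving: handled automatically (conditioned)
```

For the novice driver, driving would need a large share of the budget itself, leaving too little for the conversation - which matches the observation in the post.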


In regard to your second question, paying attention to one's own consciousness is called introspection. It is an instance of self-referential processing, but again, I must stress that it is distinguished from other instances of computational self-reference in that it is experiential. Self-reference, in and of itself, is quite well defined, but consciousness [and by extension conscious self-reference] remains an undefined function. Currently, the evidence strongly indicates that properly defining this function will depend on a better understanding of the physical processes of the one computer solidly known to generate it: the brain.
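The claim that self-reference, by itself, is well defined computationally can be made concrete with a minimal sketch (the class and its fields are invented for illustration): a system whose second-order routine takes the system's own first-order state as input. Nothing here bears on whether such a loop is experiential; that is exactly the undefined part.

```python
# Minimal, hypothetical sketch of computational self-reference:
# a process that maintains a model of its own state and can report on it.
# This shows only that self-reference is a well-defined operation; it
# says nothing about whether such a system is conscious.

class SelfModel:
    def __init__(self):
        self.state = {"task": "idle", "steps": 0}

    def step(self, task):
        # Ordinary (first-order) processing.
        self.state["task"] = task
        self.state["steps"] += 1

    def introspect(self):
        # Second-order processing: the system takes its own state as input.
        return f"I am on step {self.state['steps']}, doing '{self.state['task']}'"

m = SelfModel()
m.step("parse input")
print(m.introspect())  # → I am on step 1, doing 'parse input'
```

The `introspect` call is mechanically unremarkable - just a read of the system's own state - which is the post's point: the self-referential loop is easy to define; the experiential character of human introspection is not.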

I think that the field of AI has much to contribute to understanding general cognition but, as a means of explaining and generating actual consciousness, it's putting the cart before the horse. There is going to have to be a lot more progress in the realm of biophysics, and neuroscience in particular, before researchers like you will be able to meaningfully attempt to create conscious machines. Until then, I'm afraid that attempts by AI researchers to recreate consciousness will be shots in the dark :(
 
What is this difference?

You're asking me what the difference between seeing something and not seeing it is?


That's not exact, but close enough, yes. Under GWT, the "I" is a synthesis of all those conscious processes. You are not aware of them, but they are what is actually doing everything. The conscious mind is an illusion with no causal efficacy (which we know from experiment), because all of that actually happens at a lower level that you cannot directly access. (Because you are the illusion.)
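The GWT picture described here - many specialist processes running below awareness, with "conscious content" being whatever wins broadcast access - can be caricatured in a few lines of code. This is a deliberately crude sketch of the theory's core competition-and-broadcast cycle, not an implementation of GWT; all names and activation values are invented:

```python
# Toy sketch of Global Workspace Theory's core cycle (simplified):
# many specialists run "unconsciously"; the one with the strongest
# activation wins access to a shared workspace and is broadcast to
# all the others. The "conscious content" is just whatever currently
# occupies the workspace.

def global_workspace_cycle(specialists):
    # Each specialist proposes content with an activation level.
    proposals = [(s["activation"], s["content"]) for s in specialists]
    # Winner-take-all competition for the workspace.
    winner = max(proposals)
    # Broadcast: every specialist receives the winning content.
    for s in specialists:
        s["inbox"] = winner[1]
    return winner[1]

specialists = [
    {"content": "edge detected", "activation": 0.3, "inbox": None},
    {"content": "toe pain", "activation": 0.9, "inbox": None},
    {"content": "background hum", "activation": 0.1, "inbox": None},
]
print(global_workspace_cycle(specialists))  # → toe pain
```

Note that the "winner" has no special machinery of its own - it is just the current synthesis of the competing sub-processes, which is the sense in which the "I" above is a synthesis rather than a separate thing.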

"I" is the imagined recipient of conscious processing, I would say personally. Its creation is another aspect of processing.

Nick
 
Yes. And I'm still awaiting a coherent counter-argument.

There is no self-referencing innate in visual phenomenology. The contents of phenomenality might be directed into awareness by processes which self-reference. But they are not themselves self-referencing.

Nick
 
Well, what is unique and what is not is rather subjective.

I could go as far as calling the mind/consciousness unique, simply because it is a display of such complex processes in the brain. However, you have to keep in mind that this same behavior is on display in every other human that you encounter (excluding comatose individuals, etc.). So it is unique when we compare Fred to a pile of dirt, but not when we compare Fred to Sam.

If we were to give an estimate of the percentage of matter in the universe that exhibits the property of consciousness - as far as we know - then I think it would be quite small.

Of course, if interstellar clouds of gas are conscious - and they might be - then the percentage might go up to 98. For all we know, dark matter is conscious, or the centre of the Earth. But the only matter that has asserted consciousness is in human form.

Some points:

I think that Belz's point, and mine, is that it is not unique as far as being outside the realm of what is solvable or knowable.

It may be knowable, but we don't know it. We also don't know if it is knowable. It may be solvable but we haven't solved it.

The HPC immunizes the mind against ever becoming truly understandable, with constructs such as p-zombies. It is not a problem so much as it is an incoherent barrier. The hard problem was designed to be not only hard, but impossible. This is open-ended dualism: the HPC attempts to make it impossible to reconcile the mind with any sort of physical monism. That is why a lot of us are saying that it is dualist in nature.

I have said it before, but I think that your whole argument with Pixy and RD is based on your criteria for conscious behavior. You are coming from a standpoint of "it is either conscious on the level (of complexity) of human consciousness, or it is not conscious". You are almost conflating consciousness with being human. You even said earlier in this thread that we cannot be sure that cats are conscious.

I can't be sure that you are conscious.

It might be nice to be able to assert that cats are conscious, that ants aren't, that consciousness is a feature of self-referential algorithms, but this is all just assertion with precious little evidence.

When we can precisely identify and isolate the physical processes in a human being that give rise to consciousness, then we can start to consider ways that consciousness might exist elsewhere. Meanwhile, asserting certainty about things that we can't be certain of is bound to send us down blind alleys.

Pixy and RD seem to be trying to express to you that consciousness can be separated from that complexity, and understood.

I know what they are trying to express (I think). But what they don't have is a physical theory. If it isn't physics, then it's waffle, fundamentally.
 
It would be very surprising if a computer program NOT designed to be conscious would suddenly develop consciousness for no reason. That it is designed to be that way makes all the difference in the world.

Human beings are able to recognise that they are conscious, and associate their feelings of consciousness with that of other people, and communicate their consciousness to other people. If a computer program is conscious, it will be expected to do this.

How we can tell the difference between a program that is conscious, and one that fakes consciousness, is by examining the code. In particular, we'll look at the routines that create the consciousness.

That's not what I was responding to. You said that it's impossible for the experience itself to be unreal.

Yes, it is. If it is not real, then it is illusory - in other words, an illusory experience. But still an experience. When we get down to the actual experience, there's nowhere for it to go. If we imagine we have an experience, that's still an experience. How it relates to the real world is something else entirely.

There are actual solipsists.

I'm sure there are, but they've yet to appear on this thread.
 
Well, it appears to take place, I just don't see why it should be such a mystery.

Anything for which we lack a physical theory is a mystery.

Compare, for example, the theory of evolution. When it was mooted from observation, Darwin and other scientists had no mechanism to explain how it worked.

It was possible to surmise that information was passed as part of the reproductive process, but the mechanism involved remained unknown until the analysis of the structure of DNA was complete. There were many theories, some completely incorrect.

What's significant is that until the process was understood at every level, it remained a mystery.

You might believe that consciousness is associated with information processing. A lot of scientists thought that protein was the genetic information carrier. Until the physical theory was confirmed, nobody knew.

No, that's the wrong way around. I don't need evidence not to assume something unfalsifiable exists. I'm just saying that "we can explain everything that happens in a human being*, on the micro and the macro scale", and apparently being a human feels like this. I then assume that not only do other humans feel similarly, but that other information processors may also feel like something. That's all, no mystery, just an inductive hunch.

I don't mind hunches, but lacking the physics, they remain hunches. And one hunch is as good as another until we back it up.

Yes there is, because you're adding unnecessary unfalsifiable hypotheses.
This can be done in principle. Build a brainy robot, teach it, and see if it acts as if it's conscious. This can falsify Pixymisa's claim (only in principle, I admit), but not yours.

My view is also simple:

  • Consciousness is associated with information processing
  • Apparently, my way of processing information feels like this
  • Other information processors may also feel like something

The problem is that consciousness is associated with a lot of other things apart from information processing. We also lack a physical theory for information processing. (We have a mathematical theory, but that's quite another thing).

ETA: Oh and I forgot to thank you for the elaborate response, which is appreciated.

I aim to please. I also try to be only 15% brusquer than the individual to whom I reply. Please remind me if I drift into the 20-25% range.
 
There is no self-referencing innate in visual phenomenology. The contents of phenomenality might be directed into awareness by processes which self-reference. But they are not themselves self-referencing.
This might be interesting if it had anything to do with, well, anything.

Forget about "phenomenology". It's not going to lead you anywhere; indeed, it can't. Instead, study visual perception. Study what actually happens. Listen to the lecture series. Read Hofstadter.
 
Human beings are able to recognise that they are conscious, and associate their feelings of consciousness with that of other people, and communicate their consciousness to other people. If a computer program is conscious, it will be expected to do this.
SHRDLU does all of this. So?

How we can tell the difference between a program that is conscious, and one that fakes consciousness, is by examining the code. In particular, we'll look at the routines that create the consciousness.
Humans also fake consciousness, you know. Look at those neurons. Are they conscious? No. Well then!
 
Anything for which we lack a physical theory is a mystery.
We have a physical theory for consciousness.

Compare, for example, the theory of evolution. When it was mooted from observation, Darwin and other scientists had no mechanism to explain how it worked.
Darwin was, nonetheless, entirely correct.

What's significant is that until the process was understood at every level, it remained a mystery.
Nothing is understood at every level. That means everything is a mystery - which is the same as saying that nothing is.

You might believe that consciousness is associated with information processing.
It can't not be.

A lot of scientists thought that protein was the genetic information carrier. Until the physical theory was confirmed, nobody knew.
Wrong analogy. Darwin was right, even before 150 years of research into the details. Consciousness as self-referential information processing is the Theory of Evolution of neuroscience. It's perfectly obvious once you understand it, but all the interesting stuff is in the details.

The problem is that consciousness is associated with a lot of other things apart from information processing.
You have yet to define the term, so I have to believe that you don't associate it with anything. Which makes me wonder why you argue so much.

We also lack a physical theory for information processing.
Wrong.
 
Why is it impossible to accept evidence that our minds are apparently unique?

Because of the word "apparently", really. "It seems to me" is not evidence when all the other evidence points the other way. You seem to place far too much faith in your own perceptions.

Can you give an example of anything in the physical world that we don't experience indirectly? We get signals to our nerves that generate electrical impulses that create brain patterns. That's indirect.

If that's "indirect", then there is no way that anything could ever perceive anything directly. ANY interaction in the universe is thus indirect, and the word becomes useless. So why use it?
 
Okay, this is the thing that's pissing me off. The fact that you're even asking such a question shows that you've paid attention to nothing I've said, or even taken the time to understand what my position is - the position I've explicitly and repeatedly stated for pages on end. Surely you can understand my frustration?

Have a cookie.

I've stated over and over and over and over again that there is currently no sufficient operational definition of consciousness.

So why all the gum-flapping if you don't even know WHAT consciousness is?

The rather flip answer being presented by the strong AI proponents here is about as flimsy as the "things fall because God wills it" explanation of gravity, before it was scientifically defined.

Actually, that's wrong. It's more equivalent to saying "things fall because that's what being heavy entails", which would be pretty much correct, even without a precise theory of gravity. Newton managed to formulate one without even knowing what gravity was.
 
Human beings are able to recognise that they are conscious, and associate their feelings of consciousness with that of other people, and communicate their consciousness to other people. If a computer program is conscious, it will be expected to do this.

What? So consciousness implies all those other things? In order to "communicate their consciousness", they'd need a way to do so.

How we can tell the difference between a program that is conscious, and one that fakes consciousness, is by examining the code. In particular, we'll look at the routines that create the consciousness.

I suspect that if you look into the human's "code", you'll find that we fake consciousness as well. But we call it consciousness, and so we must for similar machines and beings as well.

Do you even understand the p-zombie debate?

If it is not real, then it is illusory - iow, an illusory experience. But still an experience. When we get down to the actual experience, there's nowhere for it to go. If we imagine we have an experience, that's still an experience.

So "experience" is some sort of metaphysical reality, something like a physical law? I don't buy that. Surely you can have pseudo-experiences or proto-experiences, or are you claiming that "experience" either is, fully formed, or not at all?

I'm sure there are, but they've yet to appear on this thread.

So are Thuggees, but I'm sure you don't deny their existence.
 
Because of the word "apparently", really. "It seems to me" is not evidence when all the other evidence points the other way. You seem to place far too much faith in your own perceptions.

I use the word "apparently" because I'm careful about what I'm claiming.

Perhaps you could explain what you rely on to interpret the universe apart from your perceptions.

If that's "indirect", then there is no way that anything could ever perceive anything directly. ANY interaction in the universe is thus indirect, and the word becomes useless. So why use it?

Because we do directly experience our own awareness. There's nothing intervening.
 
What ? So consciousnes implies all those other things ? In order to "communicate their consciousness", they'd need a way to do so.

Communicating consciousness is what human beings spend a large amount of their time doing. You should meet some sometime.

It's because we've learned ways to tell each other the contents of our minds that we have generally avoided solipsism and tend to assume that other people experience the world in a comparable way to us.


I suspect that if you look into the human's "code", you'll find that we fake consciousness as well. But we call it consciousness and so we must for similar machines and beings, as well.

Do you even understand the p-zombie debate ?

I understand what a debate is. It's not a theory, or a law.

So "experience" is some sort of metaphysical reality, something like a physical law ? I don't buy that. Surely you can have pseudo-experiences, or proto-experiences, or are you claiming that "experience" either is, fully-formed, or not at all ?

What is the difference between an experience and a pseudo-experience, or a proto-experience, or a meta-experience, or a quasi-experience? They seem to be exactly the same thing. How can you think you are having an experience without having an experience? It's an oxymoron, like imaginary pain.

So are Thuggees, but I'm sure you don't deny their existence.
 
We have a physical theory for consciousness.

This is why I don't spend a lot of time responding to PixyMisa.

There is quite clearly not a physical theory of consciousness. This is not something ambiguous, or debatable. Physics is a well-defined field, and we know what's involved in publishing theories. We know what a theory is in physics, and there simply isn't one.

The only physicist involved in the area in any way that I'm aware of is Penrose, and he would not class his speculations as having reached the stage of a theory yet. There's plenty of neurological research going on, but while there's a wealth of biological information, it does not add up to a theory which explains consciousness.
 
