The Hard Problem of Gravity

That is easy. Start with a system that experiences colors, then swap the source of visual input with something that encodes auditory information. You could do it to a human, if you had a lot of money and were in certain countries.

Synesthesia demonstrates that sensory input can be experienced in many different ways. The central question here is not so much how sensory input is operationally processed [though such questions are interesting] but how and why it is experienced at all. It is perfectly possible for sensory input to be processed without any conscious experience of it whatsoever -- even if that input is utilized to trigger physical responses.

I don't know of any. If you find one, let me know, because "emotion" is easily the most difficult issue when it comes to detailing human consciousness. I am particularly interested in how suffering and happiness arise.

However, emotion is still just a detail. Otherwise, exhibiting emotion would be a requirement for consciousness, and it isn't. At least, you haven't said so yet.

'Emotion' is another kind of subjective experience. Vertigo and nausea are as much physical sensations as they are emotional responses. Yes, in themselves, they are just examples -- a 'detail'. Emotions, and all other 'qualia', are ontologically in the same boat; they all constitute conscious experience. The question is why and how do they arise at all?


As wide as the range of things that can be computed.

Computation, IAOI, is not experience.


The kinds of computation that give rise to each. A tautology, but then subjective experience is nothing but a tautology. System X experiences being system X because it is system X. What else could it be like to be system X?

So, what if "system X" were a rock? Its composition and interactions with other objects are, fundamentally, computational in nature. Would you argue that it subjectively experiences being kicked?

What you experience is generated by reasoning, which means using existing facts about the world to infer new facts. Neural networks are implicit reasoning machines -- facts come in, new facts go out. Your brain is made of neural networks. If you disagree with any of this, feel free to enumerate the types of thought you are capable of that cannot be described in terms of reasoning -- including your precious "qualia."
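
To make "facts come in, new facts go out" concrete, here is a minimal sketch -- purely illustrative, with an invented hunger/eat example and invented weights, and no claim about biological detail:

[code]
import numpy as np

# Toy "facts in, new facts out" network: one feedforward unit.
# Input facts: [stomach_empty, food_visible]; output fact: should_eat.
weights = np.array([0.9, 0.8])  # invented connection strengths
bias = -1.0                     # invented firing threshold

def infer(facts):
    # Weighted evidence plus threshold, squashed to a 0..1 "new fact".
    return 1.0 / (1.0 + np.exp(-(facts @ weights + bias)))

print(infer(np.array([1.0, 1.0])))  # empty stomach + food in view -> ~0.67
print(infer(np.array([0.0, 0.0])))  # neither fact holds -> ~0.27
[/code]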

'QUALIA'...ARE...EXPERIENCES. ANY experience. EVERY experience.

Logical operations and computations go on regardless of whether or not there is any actual experience. Reasoning is a computational process that goes on within the context of conscious experience. There is a distinct qualitative difference between reflexive, unconscious computation, and carrying out those operations consciously. What has yet to be done is to provide a sufficient operational definition of actual conscious experience.


I don't know the details, because I am not a bat and I haven't looked at a bat's brain code in a debugger.

However, I can confidently say that the bat experiences echolocation the same way you experience anything without being conscious of the experience.

What did your toe feel like when you were driving home from work the other day? Don't remember? Does that mean there was no sensory input from your toe? Or does it mean you experienced sensory input but didn't reason about it and thus weren't actively conscious of it?

[...]

To experience things like a bat you would have to be a bat. To do so means you would no longer be a human; you would be a bat. Which means you would not be a completely different species experiencing it -- you would be the same species.

We don't know what it is about being a bat, human, cricket, or what-have-you, that creates the particular subjective quality of one's experiences. That is one of the central questions of the EMA. The operational descriptions of general logical functions are pretty well understood -- it is the actual experience [the 'whys', the 'hows', and the 'what exactlys'] that is still unknown.


Yes, they are working on a rat brain, I think:

http://www.guardian.co.uk/technology/2007/dec/20/research.it

As Pixy has said, simulated biological neural networks aren't very useful right now because the needs we currently have for AI are best served by very deterministic systems that we understand fully and whose behavior is completely predictable. That is to say, behavior that a human can sit down and predict in a debugger just by looking at some numbers.

This is slowly changing though.

Yeah. It definitely seems like there wouldn't be much impetus for those kinds of projects unless they could lead to some direct commercial use. If there is a trend to change this, I sure hope it continues :)
 
There is a distinct qualitative difference between reflexive, unconscious computation, and carrying out those operations consciously.

How could you possibly know this? (Especially under your definition of "know").
 
Synesthesia demonstrates that sensory input can be experienced in many different ways.
Processed many different ways.

The central question here is not so much how sensory input is operationally processed
Yes it is.

[though such questions are interesting] but how and why it is experienced at all.
The experience is the processing.

It is perfectly possible for sensory input to be processed without any conscious experience of it whatsoever -- even if that input is utilized to trigger physical responses.
Not if you have a self-referential system it's not.

'Emotion' is another kind of subjective experience. Vertigo and nausea are as much physical sensations as they are emotional responses. Yes, in themselves, they are just examples -- a 'detail'. Emotions, and all other 'qualia', are ontologically in the same boat; they all constitute conscious experience. The question is why and how do they arise at all?
Information processing.

Computation, IAOI, is not experience.
Computation is not experience.

Experience is computation.

Just as insects aren't necessarily ants, but ants are insects.

So, what if "system X" were a rock? Its composition and interactions with other objects are, fundamentally, computational in nature. Would you argue that it subjectively experiences being kicked?
Is it a self-referential information processing system?
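
For concreteness, here is a toy sketch of what I mean -- invented and minimal, not a definition anyone in this thread has signed off on: a system whose next state depends on a representation of its own current state, not merely on the kick it receives. A rock has no such loop.

[code]
# Toy self-referential information processor. Its update rule consumes
# a copy of its own state alongside the external input; a rock doesn't.
class SelfReferentialSystem:
    def __init__(self):
        self.state = {"last_input": None, "steps": 0}

    def step(self, external_input):
        self_model = dict(self.state)   # reads a model of itself
        self.state = {                  # next state = f(input, self-model)
            "last_input": external_input,
            "steps": self_model["steps"] + 1,
        }
        return self.state

s = SelfReferentialSystem()
print(s.step("kick"))  # {'last_input': 'kick', 'steps': 1}
print(s.step("kick"))  # {'last_input': 'kick', 'steps': 2}
[/code]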

'QUALIA'...ARE...EXPERIENCES. ANY experience. EVERY experience.
Those are Aku-qualia, not how they are defined in philosophy.

Logical operations and computations go on regardless of whether or not there is any actual experience.
True but irrelevant.

Reasoning is a computational process that goes on within the context of conscious experience.
Is that your definition? If so, I'll note that you have yet to define "conscious" or "experience".

Also, this is not the definition Rocketdodger is using, so you are arguing at cross-purposes.

There is a distinct qualitative difference between reflexive, unconscious computation, and carrying out those operations consciously.
Really? What is this difference?

What has yet to be done is to provide a sufficient operational definition of actual conscious experience.
What has yet to be done is establish that this difference exists.

We don't know what it is about being a bat, human, cricket, or what-have-you, that creates the particular subjective quality of one's experiences.
Yeah, we do.
 
Of course it is. I'm not saying we shouldn't define the problem. I'm saying we haven't defined the problem. It would be a very good thing if we could.
In which case asking "what problem?" is about the best thing we could do.
I don't see how defining a different problem, and saying that it's the same thing helps us at all. In fact, it's extremely misleading. To say that if we can duplicate the behaviours associated with consciousness we understand it is fallacious.
How do you know it is a different problem, if you don't know what the problem is?
 
I'm curious about whether we actually observe our "consciousness" firsthand or if "we" simply infer it via the organism's capacity to connect different operations together into a sort of meta-cognition – pattern recognition ("I", "we", "us", "me").

Without consciousness, there is no "we", "I", "Observe", "Know", "Experience", "infer" - in fact, we can leave out most of the dictionary.

There's a simple test for all this. Look in a physics textbook, and read the description of any system. Look at what language is used to describe the system. It's very, very limited.

We can pretend that the bulk of human experience is known and understood - that qualia are just what it's like to be human, and there's no mystery or even any questions involved - but it's just leaving the difficult bits on one side until later.

For some operations, the brain and the neural context are their external environment. Only when certain operations are taking place – like "self-systems" – will it even be possible to create a distinction between the everyday notion of internal and external environment. A great deal of the brain appears not to have a clue that there is a brain or an organism or that there is identity.

When you say that consciousness is the prerequisite for all such observations, what addition to the description have you actually made, except for labelling all the complex interactions going on as such? From my perspective, I would have to say that those operations – whatever they are – are the prerequisite for us to later claim we are conscious (or that we "observe consciousness"). The difference between us lies in considering consciousness as a property vs. a mechanism (as a behavioural variable).

Without consciousness, observation is a meaningless concept. The physical world is full of objects that interact in various ways, but the only thing that "observes" is us.
 
In which case asking "what problem?" is about the best thing we could do.

Trying to find out what the problem is is a good idea. Denying that there is a problem at all is a very bad idea indeed.

How do you know it is a different problem, if you don't know what the problem is?

If, as you said, knowing what the problem is is the first prerequisite to solving it, then the inability to precisely define the problem proves that it's a difficult problem.

How, you may ask, do we know that there is a problem at all? Well, there's the entirety of human experience to explain. So far, we have no idea why human beings experience anything. Figuring out why we experience is the question. Unfortunately we don't know what it means to experience something.

There are a couple of "explanations" which have been mooted on this thread. One says that the experience of being a human is what it's like to be a human. The limitations of this as a way to understand something are pretty obvious. It's little more than a restatement of the problem. The other explanation is that consciousness arises as part of a self-referential system. Quite how this happens, and why, is not explained.

Neither explanation has anything to do with the world of physics. So long as we remain in the realm of philosophy, and computer science, and information theory, we know that, fundamentally, it's waffle. It's wishful thinking masquerading as hard-headed pragmatism.

Produce a physical theory, and we'll have something that we can actually test and measure. Until we know precisely, from detailed neurological studies, exactly how consciousness is produced in the human mind, what's the point in trying to make it appear elsewhere?
 
I'm sure you'd be hard-pressed to define what you mean by conscious experience, and how to determine if someone else has it. I've been asking this question for quite some time now, without a response.

One example will do.

Still waiting, Aku.
 
Trying to find out what the problem is is a good idea. Denying that there is a problem at all is a very bad idea indeed.

Kid: "There's a monster under my bed, dad."
Dad: "No, there isn't, son."
Kid: "How do I get rid of the monster, dad ?"
Dad: "You don't. There isn't a monster, son."
Kid: "I think we shouldn't ignore the monster, dad."
Dad: "Only if there is one, son."

Westprog. The FIRST thing we should do is determine IF there is a problem. THEN we can define the problem, and THEN we can try to solve it. We haven't even determined if there IS a problem, yet. We're still at disagreement on this, so it's not very useful of you to try to move to the next step, and continue to claim there IS a problem, after all.

How, you may ask, do we know that there is a problem at all? Well, there's the entirety of human experience to explain.

Explaining the exploding volcano doesn't require Hephaestos.

So far, we have no idea why human beings experience anything. Figuring out why we experience is the question. Unfortunately we don't know what it means to experience something.

I thought we ALL knew what it meant.

The other explanation is that consciousness arises as part of a self-referential system. Quite how this happens, and why, is not explained.

No, the explanation is that it IS a self-referential system.

Neither explanation has anything to do with the world of physics.

That's like saying wave functions don't, either.
 
Without consciousness, there is no

...snip...

Wish you'd make your mind up! One minute you're saying we don't even know enough to define what it is that you claim we don't have the answer to, and the next you are making claims about the very thing you say we don't know enough to even define what it is....
 
There's you, assuming there's a problem, again.

I find the inability to see the problem almost leads me to think that the people who deny it really aren't conscious in the same way as the people who do. But then I reflect on the human capacity for self-deceit.

Still, whether or not anybody else is actually conscious is their problem. I have seen no sensible explanation of why I'm conscious. Any complete explanation of the universe has to include why I'm thinking about the universe.
 
I find the inability to see the problem almost leads me to think that the people who deny it really aren't conscious in the same way as the people who do. But then I reflect on the human capacity for self-deceit.

Still, whether or not anybody else is actually conscious is their problem. I have seen no sensible explanation of why I'm conscious. Any complete explanation of the universe has to include why I'm thinking about the universe.

Well start at the beginning and explain to us what "the problem" is (and preferably without using words that don't have tight and clear definitions).
 
Wish you'd make your mind up! One minute you're saying we don't even know enough to define what it is that you claim we don't have the answer to, and the next you are making claims about the very thing you say we don't know enough to even define what it is....

I said we don't have a precise definition. I didn't say that we didn't know anything about it.

The reason for this is simple - but clearly I have to explain it again.

The various experiences we describe as part of being human all rest on some kind of sensation. If someone claims to be hungry, he is experiencing the sensation of hunger. We have no way to define what a sensation is. That's partly why we hop from word to word in this topic - "experience", "consciousness", "awareness" - defining each in terms of the other, but being unable to say exactly what it is, and yet all of us knowing what is meant.

That doesn't mean, of course, that we don't know what it means to be hungry. We know from observations that the sensation of hunger is associated with the attempt to eat. That's how we can talk to each other about feeling hungry. That doesn't mean that the behaviour is the most interesting thing about it. The astonishing, unexplained thing is that we feel something, and nobody has yet told me why.
 
I agree that, in principle, it should be possible to produce a synthetic conscious entity. My point is that we don't understand it enough to conclusively say that we've actually reproduced such a process already. In fact, there are strong reasons to suspect otherwise.




I'm not sure how much of the discussions you've had an opportunity to read so far, but I've stressed repeatedly that I do not think that the issue is unsolvable. Gaining more meaningful answers to this issue is of great interest to me.

There is nothing wrong with taking a crack at the issue and venturing a conjecture as to what consciousness is and how to model it. The problem is that the position being put forward by strong AI proponents like Pixy is dogmatically being touted as a sufficient answer to the problem of consciousness when, in reality, it completely sidesteps the issue by simply redefining it. His position is obscenely presumptuous -- especially considering that the model he's proposing is empirically falsified every day.

Pixy has not simply claimed to have solved the intricacies of human consciousness; he's outright stated that such questions are 'Irrelevant' and that the definitions he subscribes to are the sum of the matter. Hell, he's said that not only is the question of consciousness 'uninteresting', but that the systems he has created are more exemplary of it than any human. :rolleyes:



I've spent a lot of time giving the actual definition of consciousness and explaining why the current operational definitions of it are not sufficient. If you get the opportunity I recommend that you read earlier portions of this discussion.



I've read a bit of the thread and I'm not quite sure where you cite any formal definition of consciousness. I've seen reference to consciousness as awareness, as being awake, as consisting in awareness of self-referential activity, as experience, etc.

You then seem to imply that consciousness consists in a restricted set of functions as in post 681:

In a thought experiment I proposed earlier I suggested a way in which you could have something that appears unconscious but actually is conscious. In principle it would be possible to have someone paralyzed from birth who was also unable to receive incoming sensory data to the brain. For all intents and purposes this would be a vegetative and severely impaired individual. Even if one were able to miraculously restore sensory and motor functions to such a person after the critical developmental stages of childhood, they would almost certainly not exhibit even the capacities exhibited by a conversing web-bot. I find that it would be hard to argue that such a vegetative person is not conscious in some way or that they don't experience anything akin to dreams, or what have you.

The physiology of even unconscious people exhibits self-referential intelligent capacities that could shame even the best of current AI systems. Intelligent, self-referential behaviors can and do arise in systems that are clearly unconscious. It's clear that such intelligent capacities, in and of themselves, are not sufficient to produce conscious experience. It seems painfully apparent that consciousness is restricted to a specific range of physiological states in the brain and we would do well to understand exactly what it is about such states which produce conscious experience. Simply crafting intelligent systems is not sufficient, methinks.

Or perhaps I am over-interpreting what you mean by "a specific range of physiological states"?

I'm afraid that I'm a bit confused over what exactly the definition of this word *is*, because it seems to me that instead of working from a definition, we are using the word in many different guises -- as it is commonly used in English -- to refer to a whole host of different functions that share a family resemblance at best.

In neurology, for instance, we use the word 'conscious' to refer to at least three very different functions -- awake, alert, able to process information and communicate it back -- and speak of these as levels of consciousness.

In good Wittgensteinian fashion -- that is the problem, not the solution or issue. The solution can arise only out of an attempt to define the problem and then work on solutions.

Yes, this means that we are forcing reductionism onto a problem that many think cannot be reduced to individual components. Ultimately, it will not be completely explained by reductionism but through some sort of systems approach -- because it is a system.

To get a handle on any of this, however, we must attack each part of the system in reductionist fashion. Maybe I'm reading all this the wrong way but it seems to me that Pixy is supplying one of the reductionist bits that most people thought in the past could not be reduced to simpler explanations. Of course it's not the whole answer, but it is an answer to the supposedly unexplainable self-referential quality of human consciousness.

To get to the rest of it we need strict definitions of 'awake', 'alert', 'aware', 'feeling', 'intentionality', etc. and a stern effort oriented toward understanding how those 'functions' might be emulated in a computer system.
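
As a throwaway illustration of that last point (a toy of mine, with invented numbers -- not anyone's model of the brain): the kind of self-referential regulation that plainly goes on without consciousness, homeostasis, is trivially easy to emulate in a computer system:

[code]
# Toy homeostat: it monitors its own internal variable and corrects it.
# Self-referential regulation, yet nobody would call it conscious.
SETPOINT = 37.0  # invented target, loosely like core body temperature
GAIN = 0.5       # invented correction strength

def regulate(temp, disturbance):
    error = SETPOINT - temp          # the system reads its OWN state
    return temp + GAIN * error + disturbance

temp = 39.0
for disturbance in [0.0, -0.3, 0.2, 0.0]:
    temp = regulate(temp, disturbance)
    print(round(temp, 2))  # drifts back toward the 37.0 setpoint
[/code]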
 
Well start at the beginning and explain to us what "the problem" is (and preferably without using words that don't have tight and clear definitions).

If you read what I've written in the last couple of posts, you'll see that the inability to define the problem using tight and clear definitions is the problem, or at least the beginning of it.

The strange thing is that the hard AI people are so desperate to define the problem out of existence that they are willing to forgo their own humanity, at least during the argument. I'm sure that they all go home and act as if their experiences were actually real.
 
Kid: "There's a monster under my bed, dad."
Dad: "No, there isn't, son."
Kid: "How do I get rid of the monster, dad ?"
Dad: "You don't. There isn't a monster, son."
Kid: "I think we shouldn't ignore the monster, dad."
Dad: "Only if there is one, son."

Westprog. The FIRST thing we should do is determine IF there is a problem. THEN we can define the problem, and THEN we can try to solve it. We haven't even determined if there IS a problem, yet. We're still at disagreement on this, so it's not very useful of you to try to move to the next step, and continue to claim there IS a problem, after all.

I know there is a problem. You seem to not want there to be one.

Explaining the exploding volcano doesn't require Hephaestos.



I thought we ALL knew what it meant.



No, the explanation is that it IS a self-referential system.



That's like saying wave functions don't, either.

Well, we can actually test that statement. I'll go and look up "wave functions" in my physics book. Then I'll look up "self-referential systems". What do you think I'll find?

I'm guessing that we'll find that one is a well-defined, well-understood physical concept, and the other is a bit of CS waffle that a typical physicist wouldn't give a moment's attention. What do you think?
 
westprog said:
Without consciousness, there is no "we", "I", "Observe", "Know", "Experience", "infer" - in fact, we can leave out most of the dictionary.

Which is to say: Without weather there is no "rain", "snow", "storm" etc., thus we can leave out most of such concepts and simply focus on the weather part. How useful is that definition now?

There's a simple test for all this. Look in a physics textbook, and read the description of any system. Look at what language is used to describe the system. It's very, very limited.

We can pretend that the bulk of human experience is known and understood - that qualia are just what it's like to be human, and there's no mystery or even any questions involved - but it's just leaving the difficult bits on one side until later.
In a way we must still do that. We start with a working definition and then we refine it as more reliable information becomes available. By understanding parts of the puzzle we can even start excluding certain functions as not belonging to the definition.

For instance, we understand that access and observation are intertwined (in a conceptual definition, as something like 'conscious observation'), but we still take them apart because there might be a timing difference. Hence the distinction between inference and observation.


Without consciousness, observation is a meaningless concept. The physical world is full of objects that interact in various ways, but the only thing that "observes" is us.
That's not really true. We don't need identity in order to observe content. Identity – a sense of self – could be considered part of the content, and it needn't be there at all times. People have experienced such episodes quite often; it's not even extraordinary. What is always needed is access to content, whatever it may be.

Defining consciousness as in not really defining it at all – just uttering the words – could be considered meaningless. At some point it seems inevitable that we have to take the concept of consciousness apart and start looking past the abstraction – like we have to do with "weather" and what we did with phlogiston in the past. It would also mean that we don't look for the actual "banana-ness" when studying bananas – where would we look for the ness-part anyhow?
 
Still waiting, Aku.

Sorry if I haven't addressed your particular question directly. It's just that I've spent dozens of pages answering that very same question as thoroughly and exhaustively as language allows, and all I'm getting are claims that I haven't explained what I mean by conscious experience. Quite frankly, I'm absolutely tired of repeating, paraphrasing, and clarifying myself to the nth degree, so if you want the answer to that question, read the rest of the thread.
 
