
Explain consciousness to the layman.

Now, my robot would not "feel pain" or really suffer. It would just engage in extreme measures to avoid being opened up. I suspect c. elegans does not feel pain either, but most any animal goes into a state of extreme injury avoidance as if it really did suffer.

We do, however, really suffer, and pain is, I guess, an example of a quale that seems extremely mysterious, because I have no idea how to engineer a robot that experiences extreme pain to accompany extreme injury-avoidance actions.

Explain to me again why the p-zombie argument is incoherent, because this example makes it seem pretty coherent. What do I need to do to unzombify my robot?
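(To make the robot concrete, here is a minimal sketch in Python of what I have in mind. Every name in it is made up for illustration - it's not any real robotics API - and the point is just that nothing in the code corresponds to suffering.)

```python
# Hypothetical sketch: an injury-avoidance controller with no "pain" state.
# All names are illustrative; this is not a real robotics API.

def avoidance_priority(damage_signal: float) -> float:
    """Map a damage-sensor reading in [0, 1] to a behaviour priority."""
    return min(1.0, damage_signal * 10.0)  # escalate sharply with damage

def control_step(damage_signal: float) -> str:
    """Pick an action purely from the sensor value; nothing here 'hurts'."""
    if damage_signal > 0.5:
        return "flee"            # extreme measures: abandon task, retreat
    if damage_signal > 0.1:
        return "withdraw_limb"   # local reflex, like c. elegans recoiling
    return "continue_task"

print(control_step(0.8))  # -> "flee", with no accompanying experience
```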

Does an ant feel pain? It goes to extreme measures to avoid it.
 
Huh? You think we would not do these things otherwise? By what logic?

Of course you would. That's the point. That's why when I see subjective experience being disregarded as unimportant because it's inherently undetectable, I don't accept that at face value. I don't think that people who claim behaviour is all there is really tell their spouses that it doesn't matter how they feel, it's how they look that counts.

Just because subjective experience is not something that is considered when programming artificial intelligence doesn't mean that it isn't of central importance to the people doing that programming, as soon as they leave work.
 
Probably... depends on the robot and its line of inquiry.

However, we, to whom subjective experience is central, assume that we know what other people mean when they say that they are sad, or happy, or sleepy. We deduce what their behaviour will be, but also, we assign feelings to them analogous to the feelings that we have ourselves. Indeed, even when people don't express their feelings we deduce how they feel from their behaviour. We do this even with paintings and statues, or characters in novels - who of course aren't real people and don't have subjective experience. We know that they aren't real, but we guess what kind of experience they would have if they were! When we see actors portraying an emotion, we know that they don't have the emotion, but we also know what emotion they are trying to display. The difference between external behaviour and subjective experience is something we understand automatically.

This subjective approach is of course not the scientific approach. The scientific approach would be that of the robot. But the scientific approach - in this case - would be less likely to get the truth.
 
Yes, I realise this is the issue. I am not qualified in formal logic or mathematics, so I cannot give you an answer in those terms.

I can give you my reasoning and you can take from it what you can. Certain parties here will accuse me of spouting nonsense, or pointless speculation.

Taking materialism to be the actually existing ontology, I would have to agree with your position on the generation of artificial consciousness, with some reservations on other issues.

However, I give serious consideration to other ontologies. This is due to the limited experience of being human, and of humanity as a whole. Presumably, when logic is applied to this human experience and understanding of existence, the only conclusion that can be reached is that, in spite of our apparent understanding of reality, we may not perceive or understand it at all, and our entire experience may be no more than a confection or illusion. Such perception is likely a function of our peculiar evolutionary position: little more than a mirror image of ourselves and our perceived environment.

I am not suggesting a magical spark of life; I fully accept the materialist interpretation of physical matter as understood by science. However, I consider that there may be aspects of matter of which we are not currently aware, or that the constitution of matter is a reflection or emergent phenomenon of something not yet known to science.

There may well be emergent qualities which are only manifest in living things and are dependent on the particular combination of molecules found in cellular life - qualities which, unbeknownst to us, may only manifest through cellular life. The vital quality of animal consciousness may well be due to a physical or electrical quality of the molecules of which the brain and sensory organs are constituted, resulting in a sentience - a feeling of presence in the physical, spatial and temporal environment, which is likewise constituted.

Well it is good to hear that you agree with the logic *if* one assumes monism.

As for the rest, we are just at an impasse. A stand-off, if you will.

http://www.youtube.com/watch?v=VAyUOli7bnw#t=0m28s

Note that I don't presume to know which color corresponds to what ontology. Is monism the bad guys? Maybe ...
 
However, we, to whom subjective experience is central, assume that we know what other people mean when they say that they are sad, or happy, or sleepy. We deduce what their behaviour will be, but also, we assign feelings to them analogous to the feelings that we have ourselves.
Yes; in short, we use empathy to understand other human beings...
We do this even with paintings and statues, or characters in novels - who of course aren't real people and don't have subjective experience.
...and other objects.
The difference between external behaviour and subjective experience is something we understand automatically.
Sure. But you're comparing the wrong things. You're comparing an external behavior to a state.

Suppose we had a machine with a knob, a button, and an arm holding a pen. In front of this machine is a sheet of paper. We push the button, and its arm moves, making a dot on the sheet of paper. Push it again, and it makes a dot in the same spot; we find it always makes the dot in the same spot when we push the button.

And then we turn the knob slightly, then press the button. Suddenly it makes a dot in a completely different location. But if we push the button again, it makes the next dot in the same place. So our first observation is that if we do not turn the knob between button presses, it will always draw the dot in the same place.

Next we may wish to try to figure out how it draws dots, so we turn the knob some more, and see where it draws. We turn it the other way, and see where it draws. What we find initially, though, is that not only can we not predict where it will draw the dots according to the knob turns, but even when we return the knob as best we can to a fixed position, we still cannot predict where it will draw. The machine looks pretty much chaotic.

But after a while, we note something really strange. The dots don't seem to be entirely random--even though we cannot predict where they will land. It appears that the machine draws dots generally everywhere on the paper, except for a certain very prominent large area. The area is in the shape of a regular five-pointed star. The more we turn the knob and push the button, the more it reveals this shape to us.

So how would you describe the workings of this machine, based on the given observations?
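(If it helps, the machine is easy to mimic in software. Below is a minimal sketch in Python; the chaotic map and the star geometry are arbitrary choices of mine, purely for illustration. Same knob angle always gives the same dot, tiny changes give wildly different dots, and the star region is never hit.)

```python
import math

def star_vertices(cx=0.5, cy=0.5, r_outer=0.4, r_inner=0.16):
    """Vertices of a regular five-pointed star, alternating radii."""
    pts = []
    for i in range(10):
        ang = math.pi / 2 + i * math.pi / 5
        r = r_outer if i % 2 == 0 else r_inner
        pts.append((cx + r * math.cos(ang), cy + r * math.sin(ang)))
    return pts

def in_star(x, y):
    """Even-odd ray-casting point-in-polygon test against the star."""
    pts = star_vertices()
    inside = False
    j = len(pts) - 1
    for i in range(len(pts)):
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
        j = i
    return inside

def chaotic(v):
    """Deterministic but wildly sensitive map to [0, 1)."""
    return abs(math.sin(v) * 43758.5453) % 1.0

def press_button(knob_angle):
    """Same angle -> same dot; the internal mechanism excludes the star."""
    k = 0
    while True:
        x = chaotic(knob_angle * 127.1 + k)
        y = chaotic(knob_angle * 311.7 + k)
        if not in_star(x, y):
            return (x, y)  # dot lands anywhere except the star shape
        k += 1  # deflect deterministically and try again
```

Plot the dots for a few thousand knob angles and the star shows up as the one blank region on the page.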
 
Yes; in short, we use empathy to understand other human beings...

...and other objects.
Sure. But you're comparing the wrong things. You're comparing an external behavior to a state.

I don't know that subjective experience corresponds to a "state". It may do.

Suppose we had a machine with a knob, a button, and an arm holding a pen. In front of this machine is a sheet of paper. We push the button, and its arm moves, making a dot on the sheet of paper. Push it again, and it makes a dot in the same spot; we find it always makes the dot in the same spot when we push the button.

And then we turn the knob slightly, then press the button. Suddenly it makes a dot in a completely different location. But if we push the button again, it makes the next dot in the same place. So our first observation is that if we do not turn the knob between button presses, it will always draw the dot in the same place.

Next we may wish to try to figure out how it draws dots, so we turn the knob some more, and see where it draws. We turn it the other way, and see where it draws. What we find initially, though, is that not only can we not predict where it will draw the dots according to the knob turns, but even when we return the knob as best we can to a fixed position, we still cannot predict where it will draw. The machine looks pretty much chaotic.

But after a while, we note something really strange. The dots don't seem to be entirely random--even though we cannot predict where they will land. It appears that the machine draws dots generally everywhere on the paper, except for a certain very prominent large area. The area is in the shape of a regular five-pointed star. The more we turn the knob and push the button, the more it reveals this shape to us.

So how would you describe the workings of this machine, based on the given observations?

I can't make any certain claims about the workings of such a machine, except that any hypothesis that doesn't explain the seemingly random appearance of the star is obviously wrong. However, the internals could be mechanical, computerised, or some other kind of mechanism.

However, in the case of the human mind, we can lift the lid off and see the workings - and we still don't know what produces the subjective experience. It might as well be a black box.
 
Of course you would. That's the point. That's why when I see subjective experience being disregarded as unimportant because it's inherently undetectable, I don't accept that at face value.

Excuse me? You just said that subjective experiences are crucial, and then admitted that they make no difference. Could you please be clear about what you think?

Just because subjective experience is not something that is considered when programming artificial intelligence doesn't mean that it isn't of central importance to the people doing that programming, as soon as they leave work.

You don't know that. Another bald assertion. AI is based on input into the computer. How is that not subjective experience? Because we can dump the data into a file and look at it? Does that mean subjective experience ceases to exist the day we have technology that can read into the human brain???
 
However, in the case of the human mind, we can lift the lid off and see the workings - and we still don't know what produces the subjective experience. It might as well be a black box.

I think you may be assuming that subjective experiences are somehow qualitatively different from other forms of behaviour. I see no reason to assume this, other than human hubris.
 
Excuse me? You just said that subjective experiences are crucial, and then admitted that they make no difference. Could you please be clear about what you think?

I'm saying that we have no scientific access to the subjective experience of other people. We do have access in other ways. The most important element in our access to the subjective experience of other people is our own subjective experience.

You don't know that. Another bald assertion. AI is based on input into the computer. How is that not subjective experience?

Because it's entirely objective?

Because we can dump the data into a file and look at it? Does that mean subjective experience ceases to exist the day we have technology that can read into the human brain???

We can already read the human brain, in the ways I've described. Human beings are very good at figuring out what other human beings are feeling. There are many, many ways to deduce what someone's subjective experience is, and always have been. It would be surprising if we couldn't produce a correlation between the behaviour of the brain and emotional states, when we already have such a correlation for the behaviour of all the rest of the body.

The question of where the subjective experience comes from - and a way to objectively detect it - remains as far off as ever.
 
The source of consciousness may well be in the activity of the computation while simultaneously deriving its physical presence from some aspect of the life of the entity. Without this latter presence the computational activity would be an entirely abstract phenomenon, not connected in time and space with the physical world.
The activity of computation is the aspect of the life of the entity from which consciousness derives its physical presence. I think you'll find that all computational activity is entirely physical. The clue is in 'activity'. Automata theory may deal with the theory of computation on abstract machines, but any actual computation is inevitably physical.

You could show me wrong by suggesting an example of abstract or non-physical computational activity; can you?
 
I don't know that subjective experience corresponds to a "state". It may do.
Well, let's get back to that later.
...any hypothesis that doesn't explain the seemingly random appearance of the star is obviously wrong. However, the internals could be mechanical, computerised, or some other kind of mechanism.
I agree, though I would phrase it a bit differently. The pattern we observe emerging suggests a pattern of behavior for the device. There's a tiny but real chance that it's coincidence, but we're probably ready to formulate a hypothesis about the machine's behavior--namely, that there exists within the machine some sort of mechanism influencing its behavior, and that this mechanism has the special property of excluding the star shape from the area of final arm movements when drawing the dot.

And as you said, it could be any of a number of things. I shouldn't be surprised, upon ripping the machine open, to find a star-shaped component. Neither would I be terribly disappointed, upon opening it, to discover that there wasn't one. But the one thing I would expect is that, whatever the mechanism is, it is something that causes the arm to move randomly anywhere on the paper except in the star shape. (I could easily be proven wrong.)

So that's a rough model of science, and of what we can determine using external behaviors. In this particular case, we can infer from the external behavior the existence of some sort of internal mechanism that directs it (then again, it may not yet be ruled out that the influence is an external mechanism either--we should probably check for eyes and stuff on the box).
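(To make that inference concrete, a small hypothetical sketch: treat the page as a coarse grid, record which cells ever receive a dot, and the never-hit cells trace out the excluded shape - all without opening the box.)

```python
def excluded_cells(dots, grid=50):
    """Given observed (x, y) dots in the unit square, return the grid
    cells that were never hit -- candidates for the excluded shape."""
    hit = [[False] * grid for _ in range(grid)]
    for x, y in dots:
        hit[min(int(y * grid), grid - 1)][min(int(x * grid), grid - 1)] = True
    return [(i, j) for i in range(grid) for j in range(grid) if not hit[i][j]]
```

Feed it enough button presses from the sketch earlier in the thread and the remaining cells form the star, purely from external observations.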
However, in the case of the human mind, we can lift the lid off and see the workings - and we still don't know what produces the subjective experience. It might as well be a black box.
But it's all black box. The theory of gravity is virtually ancient, and yet we still have no clue what it is. We only see the arm move in particularly peculiar ways. We're still working on it.

One thing is for sure, though. You're allegedly describing the behavior. When we bump up to the meta level, something is causing you to describe your own behavior in this way. If we identify what is causing you to describe your behavior this way, we should automatically be including your subjective experience in our identification. If we aren't including it, then you must be failing to describe your subjective experience; or, alternately, you must be mistaken somehow about it.

There's no way I see for you to have a subjective experience, and to describe it, without the subjective experience playing a critical role in causing you to describe it that way. So if the robot researcher finds the cause for your describing it that way, and you do have subjective experiences, the robot researcher has found your subjective experience.
 
Well it is good to hear that you agree with the logic *if* one assumes monism.
That's not what I said. I said if one assumes materialism, i.e. physical matter.

As for the rest, we are just at an impasse. A stand-off, if you will.
Ahhh sooo, it's time to Enter the Dragon.

http://www.youtube.com/watch?v=usdcpWXPaDY&feature=related


Note that I don't presume to know which color corresponds to what ontology. Is monism the bad guys? Maybe ...
No. Monism can include confections of dualism, such as the spirit/matter relationship and the subjective/objective duality, while remaining a monism at a more fundamental level.
 
Because it's entirely objective?

How the hell do you know? You are assuming, based on the fact that it's not human (exactly what you are decrying about how we deduce consciousness in other people), that it doesn't have subjective experience.

We can already read the human brain, in the ways I've described. Human beings are very good at figuring out what other human beings are feeling.

NO. I was talking about a technology that tells you exactly how I feel and think. Where is your "subjective" experience, then?
 
...We can already read the human brain, in the ways I've described. Human beings are very good at figuring out what other human beings are feeling...
Not in my experience. Do you have any evidence for that assertion?
 
Not in my experience. Do you have any evidence for that assertion?

I can often tell, from a handful of symbols on my computer screen, that someone posting on this subject is feeling angry. Often because of some symbols that I've caused to appear on their computer screen.
 
I can often tell, from a handful of symbols on my computer screen, that someone posting on this subject is feeling angry. Often because of some symbols that I've caused to appear on their computer screen.
That assumption is so often mistaken when the symbols are plain language that we had to invent smilies to explicitly qualify the emotion :cool:
 
How the hell do you know? You are assuming, based on the fact that it's not human (exactly what you are decrying about how we deduce consciousness in other people), that it doesn't have subjective experience.

That's not precisely what I'm claiming. What I am claiming is that if a computational system claims to have subjective experience, we will be able to examine the system and tell exactly why it claims this. We will know where its subjective experience comes from. That's inherent to computational systems. We can always tell exactly why they do things, objectively.

It's not possible to claim that a computational system will develop subjective experience and will claim to have subjective experience, but that we won't know where and how this comes about. We will know exactly how it comes about, and the way in which it obtains subjective experience.

Of course, we still might not believe that the computation really has subjective experience. Writing a program that claims to be conscious is trivial. And while we will be able to tell exactly how the program achieves its claimed subjective experience, we might not be able to derive from this knowledge a general principle that would allow us to determine which other systems have such experience. However, our ability to examine the entirety of the computation is not in doubt.
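To illustrate that triviality, a hypothetical sketch - every cause of the "claim" below is visible in the source, which is exactly the sense in which a computational system is fully inspectable:

```python
# The entire causal chain behind this system's "claim" to consciousness
# is visible in the source: a hard-coded string, nothing more.

def report_inner_state() -> str:
    return "I am conscious, and I am experiencing a vivid sensation of red."

print(report_inner_state())
```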


NO. I was talking about a technology that tells you exactly how I feel and think. Where is your "subjective" experience, then?

How can a technology describe exactly how you think and feel when you can't do so yourself? The language of thoughts and feelings is inherently imprecise.
 
That assumption is so often mistaken when the symbols are plain language that we had to invent smilies to explicitly qualify the emotion :cool:

I don't claim that the process is infallible. I do claim that it is better than random.
 
There's no way I see for you to have a subjective experience, and to describe it, without the subjective experience playing a critical role in causing you to describe it that way. So if the robot researcher finds the cause for your describing it that way, and you do have subjective experiences, the robot researcher has found your subjective experience.

But there is no way for the robot researcher to distinguish between something that merely causes a system to claim to have subjective experience and something that actually constitutes having it. If he can explain the behaviour in terms which do not involve subjective experience being real, then surely he will do so. We might have a number of explanations for the box producing the shape of a star, but we are going to prefer a little star-shaped object inside blocking the path of the pen to the existence of a magic elf inside the box.

If the robot can find some principle that positively identifies subjective experience as being real, then he might well accept it, even though it is something that he cannot access himself. However, if an explanation can be found that doesn't involve any new principle, then that is what he will tend to go with. He will note that the human being claiming subjective experience is able to make erroneous statements about his external environment due to incorrect information, and hence can assume that the claim of subjective experience is similarly subject to error.

I do not claim that such a robot is possible - but it stands for the objective principle.

We ourselves are not restricted in this way, because we already accept subjective experience as being real, as we have access to it ourselves. Our acceptance of subjective experience in others is based on our subjective knowledge of it in ourselves.
 