Explain consciousness to the layman.

The issue with which human beings are really concerned - the most important thing to a human being - subjective experience - is left out of the whole study.
Only if you remove it from the study.
That's why we read books, watch films, have friends, play sports, and engage in relationships of all sorts with other people.
Funny. Those are behaviors.
 
My question is: how, in a synthetic conscious entity, is that entity embedded and present, with an experience of presence in its physical environment - rather than being an entirely virtual entity, oblivious of the physical environment and unaware of any kind of experience other than a programmed abstraction?

You don't seem to realize that in humans -- and all animals, for that matter -- the brain doesn't have some magical direct connection with outside reality, or even internal reality.

The only access the brain has to anything at all is via input from neurons.

Given this fact of science, my question to you would be why we couldn't just trick a brain into thinking it was somewhere it wasn't, by giving it different input.

And given that the answer -- another fact of science -- is "there is no reason why we couldn't" it leads towards a conclusion that for a brain, there is no difference between having physical presence and being entirely virtual.

If you disagree I would love to hear the logic you use.

Note that this is fundamentally the disagreement between the computationalists and everyone else. A computationalist simply can't figure out why a brain in a person's body is fundamentally different from a brain in a vat, or a brain in a virtual vat, or a virtual brain in a virtual vat, or a virtual brain that doesn't need a vat. Non-computationalists insist there are differences there, but when pressed can only say "well, it isn't 'physical'" or some other vague objection that isn't based on logic or mathematics.
 
Any eight-bit two-dollar microcontroller can be programmed to be conscious. Many complex computer systems already have been programmed to be conscious, because the technique is extremely useful in maintaining and monitoring complex systems.

Is a thermostat conscious? Not a computer, a bimetallic strip. Oh, let's complete the device: a small thermostatically regulated oven. What has to be added to make it conscious? Define the minimum specification for a conscious machine.

Heck, how about a centrifugal governor?

The centrifugal governor is often used in the cognitive sciences as an example of a dynamic system.
 
Is a thermostat conscious? Not a computer, a bimetallic strip. Oh, let's complete the device: a small thermostatically regulated oven. What has to be added to make it conscious? Define the minimum specification for a conscious machine.

Heck, how about a centrifugal governor?

We can indeed define consciousness in such a way that devices like thermostats can possess consciousness. It then becomes an uninteresting property. Does the statement "X is conscious" tell us anything about X in such a case that we want or need to know?

If it is being claimed that, by virtue of this definition of the word conscious, we can assert that anything conscious according to this definition must share in some form of subjective experience - that the centrifugal governor must, in some way, feel what it's like to be a centrifugal governor - that's anthropomorphism in the extreme.
 
Only if you remove it from the study.



Funny. Those are behaviors.

They are behaviours that, from the outside, are explicable as evolutionarily motivated. They assist in survival. It's only from the inside that we deduce why we do these things.

Take what people claim from the study - take away each individual's feeling of subjective experience - and studying behaviour won't lead us to assume subjective experience.

If a researcher happened not to have subjective experience himself (allowing that such a researcher might not exist), why would he believe that subjective experience was real? He could explain everything that his subjects did in other ways.
 
If a researcher happened not to have subjective experience himself (allowing that such a researcher might not exist), why would he believe that subjective experience was real? He could explain everything that his subjects did in other ways.

If there was a researcher who lacked subjective experience, and all our behavior is explicable to him without referencing this elusive "subjective experience," then the evidence leads to the conclusion that subjective experience isn't real in the first place.

Meaning, it is all of us that is wrong, not him.
 
Is a thermostat conscious? Not a computer, a bimetallic strip. Oh, let's complete the device: a small thermostatically regulated oven. What has to be added to make it conscious? Define the minimum specification for a conscious machine.

Heck, how about a centrifugal governor?
Dennett covered that specific example. No, a thermostat might be described as aware, since it responds to its environment in a non-linear way, but it's not conscious - it has a model of the external world, but no model of its internal world.

That's what's required for consciousness - indeed, it's exactly what we mean when we talk about consciousness.
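
To make that distinction concrete, here's a rough, purely illustrative sketch (all class and attribute names are mine, invented for the example): the first controller keeps only a model of the room, the second also keeps a crude model of what it has itself been doing.

```python
# Purely illustrative sketch - invented names, not anyone's actual design.

class Thermostat:
    """Models the external world only: the measured temperature."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def step(self, measured_temp):
        # Responds to its environment, but holds no representation of itself.
        return "heat_on" if measured_temp < self.setpoint else "heat_off"


class SelfMonitoringController:
    """Also keeps a (very crude) model of its own internal state."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.self_model = {"last_action": None, "actions_taken": 0}

    def step(self, measured_temp):
        action = "heat_on" if measured_temp < self.setpoint else "heat_off"
        # Update the model of what *it* has been doing, not just of the room.
        self.self_model["last_action"] = action
        self.self_model["actions_taken"] += 1
        return action

    def report(self):
        # It can answer questions about itself, not only about the room.
        return (f"last action: {self.self_model['last_action']}, "
                f"decisions so far: {self.self_model['actions_taken']}")
```

Whether keeping such a self-model is sufficient for consciousness is exactly what's in dispute here; the sketch only shows that "has a model of its internal world" is a property you can point to in the machinery.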
 
They are behaviours that, from the outside, are explicable as evolutionarily motivated. They assist in survival. It's only from the inside that we deduce why we do these things.
I disagree.
Take what people claim from the study - take away each individual's feeling of subjective experience - and studying behaviour won't lead us to assume subjective experience.
Again, I disagree.

Grant me one premise--that you're describing your subjective experience by the term "subjective experience". I think this is easy enough:
If a researcher happened not to have subjective experience himself (allowing that such a researcher might not exist), why would he believe that subjective experience was real? He could explain everything that his subjects did in other ways.
...then you should hold that your behavior of describing these subjective experiences, using phrases such as "feelings of subjective experience", is somehow caused by your actually having them.

So this leaves us with the notion that something in the causal chain of events that led to you describing yourself as having this feeling of subjective experience was, in fact, the presence of your feeling of subjective experience. In this case, the researcher should find this thing.
 
Was thinking of excruciating pain in the context of the qualia conundrum.

Started with a design of this fantasy robot: I'm going to make it with sensors on its skin panels, so that when you attempt to remove them, it goes into an extreme avoidance state where it stops everything and fights as hard as it can to keep you from opening it. Something like how c. elegans would squirm violently if you were to start cutting it open.

Now, my robot would not "feel pain" or really suffer. It would just engage in extreme measures to avoid being opened up. I suspect c. elegans does not feel pain either, but most any animal goes into a state of extreme injury avoidance as if it really did suffer.

We do, however, really suffer, and pain is, I guess, an example of a quale that seems extremely mysterious, because I have no idea how to engineer a robot that experiences extreme pain to accompany extreme injury avoidance actions.

Explain to me again why the p-zombie argument is incoherent, because this example makes it seem pretty coherent. What do I need to do to unzombify my robot?
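
Sketched very roughly (every name here is invented, and the details are arbitrary), the controller I have in mind is nothing more than a mapping from panel sensors to an avoidance state:

```python
# Rough sketch of the fantasy robot described above - invented names only.

class PanelRobot:
    def __init__(self):
        self.state = "normal"

    def sense_panels(self, tamper_detected: bool) -> str:
        if tamper_detected:
            # Extreme injury-avoidance state: drop everything else.
            self.state = "avoidance"
        return self.state

    def act(self) -> list:
        if self.state == "avoidance":
            # Fight as hard as it can to keep its panels from being opened.
            return ["halt_current_task", "lock_panels", "resist_opening"]
        return ["continue_current_task"]
```

Nothing in there refers to pain or suffering at all, which is what makes the question seem so hard to answer.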
 
...then you should hold that your behavior of describing these subjective experiences, using phrases such as "feelings of subjective experience", is somehow caused by your actually having them.

So this leaves us with the notion that something in the causal chain of events that led to you describing yourself as having this feeling of subjective experience was, in fact, the presence of your feeling of subjective experience. In this case, the researcher should find this thing.

Yes, that's certainly possible. The universality of subjective experience seems to be an indication that it exists. However, what is this hypothetical robot who lacks subjective experience* to make of it? He keeps talking to a succession of humans who insist that it's real, but are unable to define it or explain it in a way that he understands. Won't he at least consider the possibility that it's not real?

*Assuming that such a robot is possible - which it may not be.
 
Was thinking of excruciating pain in the context of the qualia conundrum.

Started with a design of this fantasy robot: I'm going to make it with sensors on its skin panels, so that when you attempt to remove them, it goes into an extreme avoidance state where it stops everything and fights as hard as it can to keep you from opening it. Something like how c. elegans would squirm violently if you were to start cutting it open.

Now, my robot would not "feel pain" or really suffer. It would just engage in extreme measures to avoid being opened up. I suspect c. elegans does not feel pain either, but most any animal goes into a state of extreme injury avoidance as if it really did suffer.

We do, however, really suffer, and pain is, I guess, an example of a quale that seems extremely mysterious, because I have no idea how to engineer a robot that experiences extreme pain to accompany extreme injury avoidance actions.

Explain to me again why the p-zombie argument is incoherent, because this example makes it seem pretty coherent. What do I need to do to unzombify my robot?

I remember that when a work colleague of mine bought his Sinclair ZX-80, he programmed it to print "ouch" when someone touched a key. I don't think that means that it had any subjective experience of pain - or that if the message were changed, that it had a sensation of intense pleasure.
 
Was thinking of excruciating pain in the context of the qualia conundrum.

Started with a design of this fantasy robot: I'm going to make it with sensors on its skin panels, so that when you attempt to remove them, it goes into an extreme avoidance state where it stops everything and fights as hard as it can to keep you from opening it. Something like how c. elegans would squirm violently if you were to start cutting it open.

Now, my robot would not "feel pain" or really suffer. It would just engage in extreme measures to avoid being opened up. I suspect c. elegans does not feel pain either, but most any animal goes into a state of extreme injury avoidance as if it really did suffer.

We do, however, really suffer, and pain is, I guess, an example of a quale that seems extremely mysterious, because I have no idea how to engineer a robot that experiences extreme pain to accompany extreme injury avoidance actions.

Explain to me again why the p-zombie argument is incoherent, because this example makes it seem pretty coherent. What do I need to do to unzombify my robot?
Can it describe to you the pain it is feeling?
 
Now, my robot would not "feel pain" or really suffer. It would just engage in extreme measures to avoid being opened up. I suspect c. elegans does not feel pain either, but most any animal goes into a state of extreme injury avoidance as if it really did suffer.

Consider which of your own sensations you label as painful: the ones that you have a built-in desire to avoid. Why should your innate desires be any more real than those of another agent?

"Real" is relative.
 
Was thinking of excruciating pain in the context of the qualia conundrum.

Started with a design of this fantasy robot: I'm going to make it with sensors on its skin panels, so that when you attempt to remove them, it goes into an extreme avoidance state where it stops everything and fights as hard as it can to keep you from opening it. Something like how c. elegans would squirm violently if you were to start cutting it open.

Now, my robot would not "feel pain" or really suffer. It would just engage in extreme measures to avoid being opened up. I suspect c. elegans does not feel pain either, but most any animal goes into a state of extreme injury avoidance as if it really did suffer.

We do, however, really suffer, and pain is, I guess, an example of a quale that seems extremely mysterious, because I have no idea how to engineer a robot that experiences extreme pain to accompany extreme injury avoidance actions.

Explain to me again why the p-zombie argument is incoherent, because this example makes it seem pretty coherent. What do I need to do to unzombify my robot?
Oh no, the Night of the P-Zombies, Part II.
 
We infer that they are conscious because their behaviour is like ours. From this we can deduce... er, their behaviour will be like ours?

Your interpretation, not mine.

If we want to really miss the point, we can infer the underlying mechanism that produces the behaviour (and by "infer", I mean guess) and then define consciousness in terms of this. This gives us absurdities like the two dollar microcontroller being conscious, but again, it's a matter of definition.

Understanding what we mean by a word is the first step to understanding what the word represents. And it's also useful to demystify that word. "Consciousness" used to be synonymous with "soul". That under some definitions it can now include machines represents a step forward.

the most important thing to a human being - subjective experience
[...]
For the normal human being, subjective experience of other people is the most important thing in the supposed outside world. That's why we read books, watch films, have friends, play sports, and engage in relationships of all sorts with other people.

Huh? You think we would not do these things otherwise? By what logic?
 
Consider which of your own sensations you label as painful: the ones that you have a built-in desire to avoid. Why should your innate desires be any more real than those of another agent?

"Real" is relative.

I'm not sure what you mean by "'real' is relative." People jump off tall buildings to avoid extreme pain, and kill themselves with drugs in pursuit of extreme pleasure.

I might alternatively want to program my robot to experience extreme pleasure in the act of preventing its panels from being opened. In either case, its external behavior would be the same, but its internal experience would be as different as can be imagined.

What lines of code need to be changed to change the pain quale into the pleasure quale? I know, change the "CallPain()" line to "CallPleasure()" but then, what do these functions actually do? Show me their lines of code.
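
For concreteness - and this is a hedged sketch with everything invented, using the hypothetical names above - the best I can imagine writing is something like this, where the two routines differ only in the label they store:

```python
# Hedged sketch: hypothetical bodies for CallPain() / CallPleasure().
# Apart from the stored label, the two routines are structurally identical -
# nothing in either body obviously constitutes the felt quality it is named for.

class Robot:
    def __init__(self):
        self.state = "normal"
        self.log = []

def CallPain(robot: Robot):
    robot.state = "avoidance"            # drop everything, resist being opened
    robot.log.append("label: pain")      # an internal tag, not a feeling

def CallPleasure(robot: Robot):
    robot.state = "avoidance"            # identical external behavior
    robot.log.append("label: pleasure")  # only the stored label differs
```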
 
Yes, that's certainly possible. The universality of subjective experience seems to be an indication that it exists. However, what is this hypothetical robot who lacks subjective experience* to make of it? He keeps talking to a succession of humans who insist that it's real, but are unable to define it or explain it in a way that he understands. Won't he at least consider the possibility that it's not real?
Probably... depends on the robot and its line of inquiry.
 
You don't seem to realize that in humans -- and all animals, for that matter -- the brain doesn't have some magical direct connection with outside reality, or even internal reality.

The only access the brain has to anything at all is via input from neurons.

Given this fact of science, my question to you would be why we couldn't just trick a brain into thinking it was somewhere it wasn't, by giving it different input.

And given that the answer -- another fact of science -- is "there is no reason why we couldn't" it leads towards a conclusion that for a brain, there is no difference between having physical presence and being entirely virtual.

If you disagree I would love to hear the logic you use.

Note that this is fundamentally the disagreement between the computationalists and everyone else. A computationalist simply can't figure out why a brain in a person's body is fundamentally different from a brain in a vat, or a brain in a virtual vat, or a virtual brain in a virtual vat, or a virtual brain that doesn't need a vat. Non-computationalists insist there are differences there, but when pressed can only say "well, it isn't 'physical'" or some other vague objection that isn't based on logic or mathematics.

Yes, I realise this is the issue. I am not qualified in formal logic or mathematics, so I cannot give you an answer in those terms.

I can give you my reasoning and you can take from it what you can. Certain parties here will accuse me of spouting nonsense, or pointless speculation.

Taking materialism to be the actually existing ontology, I would have to agree with your position on the generation of artificial consciousness, with some reservations on other issues.

However, I give serious consideration to other ontologies. This is due to the limited experience of being human and of humanity as a whole. Presumably, when logic is exercised on this human experience and understanding of existence, the only conclusion that can be reached is that, in spite of our understanding of reality, we may not be able to perceive or understand reality at all, and our entire experience may be no more than a confection or illusion. Such perception is likely a function of our peculiar evolutionary position - little more than a mirror image of ourselves and our perceived environment.

I am not suggesting a magical spark of life; I fully accept the materialist interpretation of physical matter as understood by science. However, I consider that there may be aspects of matter of which we are not currently aware, or that the constitution of matter is a reflection or emergent phenomenon of something not yet known to science.

There may well be emergent qualities that are only manifest in living things and are dependent on the particular combination of molecules found in cellular life - qualities which, unbeknown to us, may only manifest through cellular life. The vital quality of animal consciousness may well be due to a physical or electrical quality of the molecules of which the brain and sensory organs are constituted, resulting in a sentience, a feeling of presence in the physical, spatial and temporal environment, which is likewise constituted.
 
I'm not sure what you mean by "'real' is relative." People jump off tall buildings to avoid extreme pain, and kill themselves with drugs in pursuit of extreme pleasure.
"Real" being relative to a given viewpoint. Moriarty is real to Holmes, yet fictional to us. Sensations are real to the receiver of those sensations.

Those people are planning their future behavior based on the results of experience of sensations that caused innate avoidance and attraction behaviors, respectively (what they have learned to label "pain" and "pleasure").

I might alternatively want to program my robot to experience extreme pleasure in the act of preventing its panels from being opened. In either case, its external behavior would be the same, but its internal experience would be as different as can be imagined.
Except that pain and pleasure aren't labels that you as a programmer of the robot can just slap on: they must be applied to particular sensations by the robot itself, based on its memory of the results of prior fixed-program (innate) behavior. So the external behavior would not be the same if that robot could choose its behavior.

What lines of code need to be changed to change the pain quale into the pleasure quale? I know, change the "CallPain()" line to "CallPleasure()" but then, what do these functions actually do? Show me their lines of code.
No such routines would exist.
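
To illustrate the idea (a rough sketch only; every name is invented): the label is something the robot derives from its own history of innate reactions, not a routine the programmer calls.

```python
# Rough, purely illustrative sketch: the robot assigns labels like "painful"
# itself, from its memory of which sensations preceded innate avoidance.
# All names are invented for the example.

class LearningRobot:
    def __init__(self):
        self.memory = []  # list of (sensation, innate_reaction) pairs

    def innate_reaction(self, sensation: str) -> str:
        # Fixed-program (innate) behavior, analogous to a reflex.
        return "avoid" if sensation == "panel_tamper" else "ignore"

    def experience(self, sensation: str) -> str:
        reaction = self.innate_reaction(sensation)
        self.memory.append((sensation, reaction))
        return reaction

    def label(self, sensation: str) -> str:
        # The robot's own label, derived from its history,
        # not supplied by the programmer.
        past = [r for s, r in self.memory if s == sensation]
        if past and all(r == "avoid" for r in past):
            return "painful"
        return "neutral"


robot = LearningRobot()
for _ in range(3):
    robot.experience("panel_tamper")
print(robot.label("panel_tamper"))    # -> painful
print(robot.label("background_hum"))  # -> neutral
```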
 