
Explain consciousness to the layman.

Well, for starters, you could feed the simulated person video information about the external world, so he would see you. He might comment on how you look or what he sees behind you.

Ah, the old "world of the simulation" argument, in which simulated people become conscious and believe themselves to be living in the world that the simulator is supposed to be simulating.

Quite a feat, seeing as how the laws of physics and common sense demand that the "world of the simulation" must exist only in the imagination of the programmer and user.
 
Speak for yourself. We've had long-running arguments in which it was claimed that you could take a computer running a simulation of a brain and it would produce an instantiation of consciousness which could be used to make a "robot" conscious.



Here is an example

Nope. Just as you can replace neurons with artificial neurons that work by different means but produce the same results, you can replace the brain with, for example, a simulated brain, and get the same results.
Still not sure what you're talking about or why you think it's a problem.
 
I still wonder why the simulation going on in my head not only enables me to avoid falling down holes and bumping into things, to find food, and to interact with other humans to my advantage (or not!), but also gives me an awareness that these things are happening. Surely I would function just as well without the awareness, like an ant (which I assume, without being able to back it up, has no consciousness) or a computer (ditto).

Would you, though?

Let's hash it out.

It is clear that we do things all the time without being "aware" of doing them. I always use the example of coming back from the restroom and walking to a cube where my desk used to be rather than where it is -- that is obviously a full set of behaviors (walking, turning) that I wasn't particularly aware of, and I did just fine making it to the destination (although it was an incorrect destination).

But that is a behavior that I had done before, many many times. Could you engage in a new behavior without being aware of what is happening? I may be wrong but I tend to think the answer is no.

When I am walking through the woods and I haven't been there before, part of me is certainly aware of the ground, looking for things to trip over and holes to fall in. That's why we look down most of the time when we hike.

Finding food? I think the act of looking for something that looks like fruit, then planning a route to get there, then harvesting it, is definitely something that we would be fully aware of. Even something like peeling a banana I am fully aware of -- I have never started that task and then been done with it "before I realize it."

Finally, interacting with humans... well, there are times when people are at my desk and talking and I just nod and smile without even hearing what they say, I admit. But I think any behavior more complex than this surely requires awareness.

The fact that it seems impossible to do any complex task without being "aware" makes me think that "awareness" itself perhaps isn't something distinct from such tasks.

I once heard an interesting conversation about AI in which one of the participants tried to argue that a thermostat has two thoughts: 'it's too warm' and 'it's too cold'. I suppose a similar argument could be made for a toaster. How do we know a toaster doesn't think 'the toast is done' before popping up?
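On that view, the thermostat's entire "mental life" can be caricatured in a few lines of code. This is a purely illustrative sketch; the class and names are invented for this post, not anyone's real device:

```python
# A thermostat caricatured as a system with exactly two "thoughts",
# each tied directly to an action. Invented for illustration only.

class Thermostat:
    def __init__(self, setpoint_c: float):
        self.setpoint = setpoint_c  # target temperature in Celsius

    def thought(self, room_temp_c: float) -> str:
        # The device's entire repertoire: one of two states.
        if room_temp_c < self.setpoint:
            return "it's too cold"   # i.e., switch the heater on
        return "it's too warm"       # i.e., switch the heater off
```

Whether either branch deserves to be called a "thought" is, of course, the whole question.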

Well, I would phrase it differently.

Suppose you have a toaster that you are trying to make into the greatest toaster in the world. So you replace the extremely simple "pop toast up when dark" mechanism with one of your own devising (and perhaps you need to modify the housing as well to accommodate it).

You don't like the existing sensor, so you replace it with something like the human eye, with millions of photon detectors. Then you need to add a way for the information from those sensors to be filtered and interpreted down to usable chunks, so you add something like our visual processing system.

But now you want the toaster to be able to learn for itself when the toast is done. So you add chemical sensory mechanisms for it to sample the toast itself, and mechanisms to interpret those sensory results based on nutritional requirements that are aligned with those of a human. And you have to give it a way to remember the samplings of the past.

Then you need to add ways for it to infer the connection between good samples and the behavior that led to them, so it "knows" what it did to produce the good toast color that it wants to reproduce.

But now you want even more autonomous behavior from the toaster -- why should you have to load it up each time, when it could just go get the bread and have the toast waiting for you when you wake up?

So you add functionality to learn about the environment and distinguish between bread and non-bread, obstacles and non-obstacles, so it can scan the kitchen and head to the bread.

Except how can it load the bread? So you give it some robot manipulators and sensory mechanisms to see where they are. But now there is a problem -- for the toaster to learn about the manipulators properly, it needs to know that they are part of it and not part of the kitchen -- otherwise, for example, the obstacle-avoidance mechanisms would perpetually try to route around its own arm sticking out in front of the camera! So you add functionality for it to be able to learn about self.

This is a pretty darn good toaster, would you agree?

But let's make it even better. It sucks to have to reprogram it each time you want to wake up at a different time, or want the toast done differently, etc. So you add audio-recognition mechanisms and functionality to semantically interpret the human voice. Because your kitchen is extremely complex, it isn't enough to just program in a lookup table of commands -- the toaster needs to actually understand what you are telling it, so that you don't need to constantly monitor what is going on.

Any other features you want to add before we are done?

So now we have an ultra-toaster. Do you suspect something like a thought of "the toast is done" occurs before the toaster pops up the toast?
 
To be precise, what I'm saying is that consciousness is not the issue; the issue is all the other stuff.

See Dennett, Consciousness Explained.

If you believe that the issue is all the other stuff besides consciousness, why do you insist on being so active in threads about consciousness?

As for "Consciousness Explained", it's merely a rehash of the very Cartesian theater he spends so much time (rightly) warning against.

Not only that, but it provides no explanation of why his A/B brain notion should produce consciousness.

Looking through "The Cognitive Neurosciences" last week, I ran across a note I made in an article on research which contradicts Dennett's model. When I'm back from NC, I'll see if I can look that up again and post it.
 
Ah, the old "world of the simulation" argument, in which simulated people become conscious and believe themselves to be living in the world that the simulator is supposed to be simulating.

Quite a feat, seeing as how the laws of physics and common sense demand that the "world of the simulation" must exist only in the imagination of the programmer and user.



Oh come on Piggy.... you are too limited in your imagination and ability to conflate that with reality..... have you not seen Tron.... it is all there in Tron II for all to see.... how can you deny it? I cannot believe you deny that people can exist inside a computer…..”sheesh”
 
If you try to go into a neurobio experiment using "self-referential information processing" as your definition of consciousness, you have no place to start.

There is a lot of self-referential (loop/feedback) activity in the brain.

Some of it affects conscious experience, and some of it does not.

Correct.

But there is no self-referential activity in a rock.

So we can be certain that rocks are not conscious. Off to the crusher with them!

At the very least, if there is no self-referential activity in a system, it is not conscious.

Do you disagree?
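For what it's worth, "self-referential (loop/feedback) activity" in the loosest sense is trivially cheap to produce. Here is a toy sketch (names and dynamics invented for this post) of a system whose next state depends on its own current state -- feedback aplenty, with no obvious consciousness:

```python
# A minimal feedback system: each step it reads its own previous
# state and feeds that back into the update. Invented for
# illustration only.

def feedback_system(x0: float, steps: int, gain: float = 0.5) -> list:
    history = [x0]
    for _ in range(steps):
        # The next state is a function of the system's own current state.
        history.append(gain * history[-1] + 1.0)
    return history
```

Whether this kind of loop is the relevant sort of self-reference, or merely a symptom of something else, is exactly what is at issue in the thread.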
 
Would you, though?

Let's hash it out.

It is clear that we do things all the time without being "aware" of doing them. I always use the example of coming back from the restroom and walking to a cube where my desk used to be rather than where it is -- that is obviously a full set of behaviors (walking, turning) that I wasn't particularly aware of, and I did just fine making it to the destination (although it was an incorrect destination).

But that is a behavior that I had done before, many many times. Could you engage in a new behavior without being aware of what is happening? I may be wrong but I tend to think the answer is no.

When I am walking through the woods and I haven't been there before, part of me is certainly aware of the ground, looking for things to trip over and holes to fall in. That's why we look down most of the time when we hike.

Finding food? I think the act of looking for something that looks like fruit, then planning a route to get there, then harvesting it, is definitely something that we would be fully aware of. Even something like peeling a banana I am fully aware of -- I have never started that task and then been done with it "before I realize it."

Finally, interacting with humans... well, there are times when people are at my desk and talking and I just nod and smile without even hearing what they say, I admit. But I think any behavior more complex than this surely requires awareness.

The fact that it seems impossible to do any complex task without being "aware" makes me think that "awareness" itself perhaps isn't something distinct from such tasks.



Well, I would phrase it differently.

Suppose you have a toaster that you are trying to make into the greatest toaster in the world. So you replace the extremely simple "pop toast up when dark" mechanism with one of your own devising (and perhaps you need to modify the housing as well to accommodate it).

You don't like the existing sensor, so you replace it with something like the human eye, with millions of photon detectors. Then you need to add a way for the information from those sensors to be filtered and interpreted down to usable chunks, so you add something like our visual processing system.

But now you want the toaster to be able to learn for itself when the toast is done. So you add chemical sensory mechanisms for it to sample the toast itself, and mechanisms to interpret those sensory results based on nutritional requirements that are aligned with those of a human. And you have to give it a way to remember the samplings of the past.

Then you need to add ways for it to infer the connection between good samples and the behavior that led to them, so it "knows" what it did to produce the good toast color that it wants to reproduce.

But now you want even more autonomous behavior from the toaster -- why should you have to load it up each time, when it could just go get the bread and have the toast waiting for you when you wake up?

So you add functionality to learn about the environment and distinguish between bread and non-bread, obstacles and non-obstacles, so it can scan the kitchen and head to the bread.

Except how can it load the bread? So you give it some robot manipulators and sensory mechanisms to see where they are. But now there is a problem -- for the toaster to learn about the manipulators properly, it needs to know that they are part of it and not part of the kitchen -- otherwise, for example, the obstacle-avoidance mechanisms would perpetually try to route around its own arm sticking out in front of the camera! So you add functionality for it to be able to learn about self.

This is a pretty darn good toaster, would you agree?

But let's make it even better. It sucks to have to reprogram it each time you want to wake up at a different time, or want the toast done differently, etc. So you add audio-recognition mechanisms and functionality to semantically interpret the human voice. Because your kitchen is extremely complex, it isn't enough to just program in a lookup table of commands -- the toaster needs to actually understand what you are telling it, so that you don't need to constantly monitor what is going on.

Any other features you want to add before we are done?

So now we have an ultra-toaster. Do you suspect something like a thought of "the toast is done" occurs before the toaster pops up the toast?

You just designed my wife.

Getting back to basics (my comfort zone) suppose the simple toaster had been delivered in two boxes. One containing the outer case and the other the inner heating parts along with a manual. I have two useless things but when I read the manual and slot them together I have a toaster. But have I created a thinking or conscious thing just by sticking the two parts together?

In the same discussion (about whether thermostats think) another image was offered. This time a guy is in a sealed room with a sort of slot through which, from time to time, someone would throw in a bunch of Chinese symbols. The guy speaks no Chinese at all, but he has a rule book which tells him: "when you get this set of Chinese symbols, assemble that set of Chinese symbols and throw them out of the slot".

Outside the sealed room, what appears to be happening is that the 'computer' is responding intelligently to questions put to it. But it is not intelligent at all, and it has no consciousness either, the guy being entirely oblivious of what's coming in and what's going out. The intelligence has been programmed by someone else (the guy who wrote the rulebook) and I see no consciousness at all, even though to the outside world the result might be indistinguishable from conversing with a real person.
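The rulebook can be sketched as a bare lookup table. The entries below are invented placeholders, not real Chinese, and the names are mine:

```python
# The Chinese Room as pure symbol lookup. The "man in the room"
# matches the incoming symbol string against the rulebook and returns
# the prescribed reply, consulting no meanings anywhere. Entries are
# invented placeholders.

RULEBOOK = {
    "symbols-A": "reply-B",
    "symbols-C": "reply-D",
}

def man_in_the_room(incoming: str) -> str:
    # Shape-matching only: nothing here "understands" either side.
    return RULEBOOK.get(incoming, "reply-default")
```

From outside the slot, the replies may look intelligent; inside, it is all table lookup -- which is exactly the intuition the thought experiment trades on.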
 
I think the objective-subjective divide is somewhat unnecessary here. If we experience pain, then that experiencing exists, plain and simple. Pain itself is not to be found in the brain in its own right of course. I don’t think anyone is even suggesting that. Yet experiencing pain exists … as a systemic condition, as particular brain processing, as different processing from when experiencing happiness, etc., etc.

Considering that fundamentally it’s all just interacting "wavicles of probability" (i.e., at the fundamental level the fundamental units interact in the same way regardless of particular "systemic conditions"), how would you describe "systemic condition", "brain processing", and "different processing" here? As subjective or objective?

Well, I don't usually talk about "processing" because it's pretty much an empty verb.

Yes, experiences are real. When your brain performs pain, or blue, or cold, that's really happening in the world.

We call these "subjective" because none of us can directly observe any other person's experience when they're awake and looking at a tree, for example, even though we can observe the tree, which we call "objective".

We also call the neural correlates of the experience "objective".

But we can't properly study consciousness without observing the objectively real states of matter and referring to the "subjective" experiences (such as color, warmth, happiness, surprise, rationalization, and so forth) correlated with those states.

So I find those terms useful in many contexts.

Basically, the difference between conscious and non-conscious "processing" is whether or not any "experiences" or "subjective qualia" (a term I'd never use, btw) are being performed at all. And indeed, we find that involuntary attention and volitional attention, for example, (e.g. a loud noise causing your head and eyes to turn, versus looking behind you because someone asks you to) have overlapping but non-identical neural correlates.

I think it's important to get past the leftover Skinnerian phobia of referring to "subjective" experience in science for fear of being loose and mushy -- in a properly designed experiment, this is not a problem, and as our understanding of physical correlates improves, it gets sharper and sharper.
 
It could be magic beans, if you had a sufficient weight of them.

It could be anything, if you had a sufficient weight of it.... ice, beans (magic or no), lemurs, a substance of extra-universal origin which is not made of particles at all, whatever.

And that's the point... the micro-level properties of the thing don't matter.

I don't know why you believe that a particular arbitrary level of granularity is the realm at which all significant things happen.

There is ice on a branch. Do you not agree that the weight of the ice is transferred to the branch via the innermost particles in the lining of the ice that is touching the branch?

Do you not further agree that if there is more ice, the weight is greater, and those particles touching the branch therefore are exerting more downward force on the branch?

Do you not further agree that the extra downward force exerted at the distal end of the branch is transferred to the more axial end where it merges with the tree?

Now are you going to tell me that "more force" doesn't count as a "difference" when it comes to particle interactions?
 
Correct.

But there is no self-referential activity in a rock.

So we can be certain that rocks are not conscious. Off to the crusher with them!

At the very least, if there is no self-referential activity in a system, it is not conscious.

Do you disagree?

Are you sure there is absolutely no self-referential activity going on in that rock?

Do not the particles in the rock get feedback from other particles which are getting feedback from them?

Is that not self-referential?
 
You just designed my wife.

Getting back to basics (my comfort zone) suppose the simple toaster had been delivered in two boxes. One containing the outer case and the other the inner heating parts along with a manual. I have two useless things but when I read the manual and slot them together I have a toaster. But have I created a thinking or conscious thing just by sticking the two parts together?

In the same discussion (about whether thermostats think) another image was offered. This time a guy is in a sealed room with a sort of slot through which, from time to time, someone would throw in a bunch of Chinese symbols. The guy speaks no Chinese at all, but he has a rule book which tells him: "when you get this set of Chinese symbols, assemble that set of Chinese symbols and throw them out of the slot".

Outside the sealed room, what appears to be happening is that the 'computer' is responding intelligently to questions put to it. But it is not intelligent at all, and it has no consciousness either, the guy being entirely oblivious of what's coming in and what's going out. The intelligence has been programmed by someone else (the guy who wrote the rulebook) and I see no consciousness at all, even though to the outside world the result might be indistinguishable from conversing with a real person.

Why don't we just skip all that and go to an even simpler case: building a human.

Suppose we have an awesome machine that allows us to build humans from the ground up.

Starting with a very detailed blueprint (perhaps DNA?) we just figure out where to place molecules, and eventually we have a human, or at least a set of molecules that perfectly matches the human specified in the blueprint.

The human has been constructed by someone else (us, the machine, and the guy who wrote the blueprint).

Is this human somehow different from you and me? If so, why?
 
Correct.

But there is no self-referential activity in a rock.

So we can be certain that rocks are not conscious. Off to the crusher with them!

At the very least, if there is no self-referential activity in a system, it is not conscious.

Do you disagree?



No... no disagreement there..... but do not confuse the cause with the effect…. for example…mobility is a symptom of being alive and not the cause of it.

.... SRIP is a SYMPTOM of consciousness.... it is definitely a RESULT of consciousness..... but it is not the cause.



When a child sees his daddy and mommy kissing, and later gets a baby sibling and thus concludes that it was the kissing that caused this calamity to fall upon him, we do not say he is right in a way just because there definitely was kissing involved in the real cause.

Yes... we might want to humor him and go along....and that is fine. But when teenagers listen to us humoring the child and get confused by seeing an adult actually confirming the misconception then we have to take the teenagers aside and tell them what the REAL situation is.
 
Yes, it seems there’s no consensus on what we mean by consciousness. It’s my guess that if we’re asking “what consciousness is” in such a way as to imply something other than brain processing existing in the brain, like some physical force in its own right, then we might be looking in vain. Such thing might not exist. “That” could thus be considered to be an “illusion”.

That's the advantage of a definition that just says that it's a property that is present at some times and not at others, and in some systems and not in others. I know that this discussion is often interspersed with comments such as "it's not a property, it's a process" or "it's not a noun, it's a verb", and thus I suggest that "property" could be replaced with any preferred word. If it is just something that the brain is doing, then sometimes it's doing it, and sometimes it isn't.


However, if we’re looking for a gross difference between systemic conditions in the brain, where one exemplifies a condition of unconsciousness and the other consciousness, then I think there’s a good chance we’re able to map the differences in ever more detail, thus also explain how consciousness works. Thus I think we could be able to extract some more general principles for defining it … i.e. what consciousness “is”.

I think that the primary means of investigating brain function has to be looking at the brain and how it works. This might seem to be over-obvious, but it seems to be worth stating.

Could that “perceive that we are perceiving” also be described, in systemic terms, as something like self-reference?

It's an interesting thought - can we perceive without knowing that we are perceiving? Perhaps that's the kind of consciousness that some animals and very small children have - having experiences, but being unable to reflect on them. Such consciousness would be different to what we experience.
 
No... no disagreement there..... but do not confuse the cause with the effect…. for example…mobility is a symptom of being alive and not the cause of it.

Don't worry I ain't.

That post is specifically addressed at piggy's assertion that the definition "consciousness is a form of SRIP" is useless.

If it has a use, it isn't useless, and rejecting negatives is a use.
 
There is ice on a branch. Do you not agree that the weight of the ice is transferred to the branch via the innermost particles in the lining of the ice that is touching the branch?

Do you not further agree that if there is more ice, the weight is greater, and those particles touching the branch therefore are exerting more downward force on the branch?

Do you not further agree that the extra downward force exerted at the distal end of the branch is transferred to the more axial end where it merges with the tree?

Now are you going to tell me that "more force" doesn't count as a "difference" when it comes to particle interactions?

Weight, of course, depends on mass and gravity.

But what you seem to be saying here is that the addition of more molecules of ice to the mass causes a change in the interaction between the surface particles of the ice and the branch alone, which causes the particles in another part of the tree to sever their bonds.

Odd way of looking at it.

The fact is, though, you can model the system that's responsible for the breaking branch without any reference to the activity of any specific particles.
 
Don't worry I ain't.

That post is specifically addressed at piggy's assertion that the definition "consciousness is a form of SRIP" is useless.

If it has a use, it isn't useless, and rejecting negatives is a use.

It's useless for studying consciousness, and I have a hard time picturing what other uses a definition of consciousness might be put to.
 
It's an interesting thought - can we perceive without knowing that we are perceiving? Perhaps that's the kind of consciousness that some animals and very small children have - having experiences, but being unable to reflect on them. Such consciousness would be different to what we experience.
There's something about this that is off. First, doing a thing and being able to do a thing are not the same thing, but you're confusing the two in the above paragraph. Second, we quite often perceive things without knowing that we perceive them.
 