
Explain consciousness to the layman.

I think we can identify at least one significant difference between us and a mere conglomeration of "billions of worms": We typical humans have managed to develop an Autobiographical Self, which builds models from memories sufficiently sophisticated to anticipate future needs more effectively. Worms don't have that, but it did evolve out of precursors that worms do have.

What evidence do you have that worms don't have such a thing?

For instance, there is no reason a small network of just a few dozen neurons couldn't satisfy the constraints of "building a model from memories sufficiently sophisticated to anticipate future needs more effectively."

The "memories" involved might need to be extremely simple, on the order of a single binary condition, but so what?

Note that I am not saying worms can do such a thing -- I am not familiar with research regarding just how smart or dumb worms are, or if they can be trained at all, etc -- I am just bringing up a question.
 
I believe we need to take the actual physics of color vision into account when judging your thought experiment, in which case, these particulars become critical.

No, my thought experiment does not have to take into account the physics of color vision, because it's a thought experiment for which the physics of color, such as wavelengths of light and sensitivities of pigments, are irrelevant. It's about the greenish and reddish qualia in the brain, whichever way we swap them. My alternative implementation, viewing the world through a video camera/monitor with the red and green channels swapped, should have made that clear. What's relevant is that the brain, at the appropriate places, would get the green signal when looking at red, and the red signal when looking at green, with learned associations opposite of what was normal.
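
For what it's worth, the camera/monitor version of the swap is trivial to sketch (a minimal illustration only, assuming an RGB image read with the Pillow and NumPy libraries; the filename is hypothetical):

Code:
# Swap the red and green channels of an RGB image, so the viewer sees
# green wherever the scene is red, and vice versa.
import numpy as np
from PIL import Image

img = np.array(Image.open("scene.jpg").convert("RGB"))  # hypothetical file
swapped = img[..., [1, 0, 2]]    # reorder channels to G, R, B: swaps R and G
Image.fromarray(swapped).show()  # display the red/green-swapped view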

I can rework the thought experiment to help further eliminate the irrelevancies:

We raise a child in a concocted environment in which cool grass is dyed red, ripe strawberries are dyed green, red lights mean go, green lights mean stop, and we recoil in fear when green liquid comes out of holes in our skin, etc.

What qualia does this child, after fully adjusting to this environment, see when looking at a ripe green strawberry? The same greenishness we see, but with different meaning attached? Or reddishness?

Really think about this, will you? Your eagerness to preempt the question with an irrelevant squabble about the physics of light reveals your discomfort with the issues the experiment is meant to expose.
 
But I am not talking about consciousness, I am specifically talking about qualia, and even more specifically whether there is utility in referring to meta-layers.

Again, I will ask the question in terms of mathematics -- is there any utility in referring to how round roundness is? Or even the roundness of a circle? What is the "roundness" of a circle, anyway?

Roundness is that property that arises when we apply a constant angle to a straight line, leading the line to intersect with itself. Pi is directly related to roundness, and the diameter is doubly so, but the circumference is the limiting factor. Roundness, then, is an innate property of an unbounded line being curved in upon itself and exhibiting Pi, Diameter and Circumference as determining characteristics. We call this phenomenon circalia.
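
In more conventional terms (a sketch of the same idea, not the poster's wording): a circle is the planar curve whose direction turns at a constant rate, and Pi ties its diameter to its circumference. In LaTeX notation:

\kappa(s) = \frac{d\theta}{ds} = \frac{1}{r} \quad \text{(constant along the curve)}, \qquad C = \pi d = 2\pi r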
 
Also please watch this video.

I just watched it for the 4th time, thank you.

Dennett is really expending more effort there refuting others' ideas about consciousness than he is explaining what he thinks consciousness really is.

Like others, he uses red as his color of choice when discussing what he never uses the Q word for: once for the girl's lips ("there's no red happening in the brain") and again for the flag afterimage ("there's no red bar intersecting the black cross").

What is it about red that makes it such a popular example to explain principles of consciousness?

I think there's too much stress in consciousness research on vision. There's also too much tendency to think of consciousness as receiving input. Aren't we still conscious during sensory deprivation? While dreaming, fantasizing, or recalling past experiences? While recalling past recollections of fantasizing, while momentarily blocking out the world?

When I chuckle at the above, that's just a hormone. That's all the feeling is, right?
 
Roundness is that property that arises when we apply a constant angle to a straight line, leading the line to intersect with itself. Pi is directly related to roundness, and the diameter is doubly so, but the circumference is the limiting factor. Roundness, then, is an innate property of an unbounded line being curved in upon itself and exhibiting Pi, Diameter and Circumference as determining characteristics. We call this phenomenon circalia.


[image: animated GIF]
 
No, my thought experiment does not have to take into account the physics of color vision, because it's a thought experiment for which the physics of color, such as wavelengths of light and sensitivities of pigments, are irrelevant. It's about the greenish and reddish qualia in the brain, whichever way we swap them. My alternative implementation, viewing the world through a video camera/monitor with the red and green channels swapped, should have made that clear. What's relevant is that the brain, at the appropriate places, would get the green signal when looking at red, and the red signal when looking at green, with learned associations opposite of what was normal.

I can rework the thought experiment to help further eliminate the irrelevancies:

We raise a child in a concocted environment in which cool grass is dyed red, ripe strawberries are dyed green, red lights mean go, green lights mean stop, and we recoil in fear when green liquid comes out of holes in our skin, etc.

What qualia does this child, after fully adjusting to this environment, see when looking at a ripe green strawberry? The same greenishness we see, but with different meaning attached? Or reddishness?

Really think about this, will you? Your eagerness to preempt the question with an irrelevant squabble about the physics of light reveals your discomfort with the issues the experiment is meant to expose.

Your statements reveal the utter irrelevance of your thought experiment.

Arguing about what red 'is' without using physics is like discussing what ice 'is' but ignoring water.
 

Yes, that's the post I mean. The claim is that an illusion of self is generated. An illusion to whom?

One can imagine a man seeing a dog. Then one can imagine a man seeing a perfect visual, auditory and sensory version of a dog, and getting exactly the same experience. The experience of a dog and of an illusion of a dog are the same thing. So what is the illusion of an experience? It's an experience. Calling an experience an illusion tells us nothing about what the experience is. An illusion requires something to experience the illusion. The thing itself experiencing the illusion cannot be an illusion itself.
 
I just watched it for the 4th time, thank you.

Dennett is really expending more effort there refuting others' ideas about consciousness than he is explaining what he thinks consciousness really is.

Like others, he uses red as his color of choice when discussing what he never uses the Q word for: once for the girl's lips ("there's no red happening in the brain") and again for the flag afterimage ("there's no red bar intersecting the black cross").

What is it about red that makes it such a popular example to explain principles of consciousness?
I think there's too much stress in consciousness research on vision. There's also too much tendency to think of consciousness as receiving input. Aren't we still conscious during sensory deprivation? While dreaming, fantasizing, or recalling past experiences? While recalling past recollections of fantasizing, while momentarily blocking out the world?

When I chuckle at the above, that's just a hormone. That's all the feeling is, right?

Because that's what the inventor of qualia used.

Qualia" (singular, "quale") is a term introduced by C. I. Lewis (1929, p. 121) to stand for "recognizable qualitative characters of the given". Lewis’s examples were red, blue, round, and loud.
 
I just watched it for the 4th time, thank you.

Dennett is really expending more effort there refuting others' ideas about consciousness than he is explaining what he thinks consciousness really is.

Like others, he uses red as his color of choice when discussing what he never uses the Q word for: once for the girl's lips ("there's no red happening in the brain") and again for the flag afterimage ("there's no red bar intersecting the black cross").

What is it about red that makes it such a popular example to explain principles of consciousness?
I think there's too much stress in consciousness research on vision. There's also too much tendency to think of consciousness as receiving input. Aren't we still conscious during sensory deprivation? While dreaming, fantasizing, or recalling past experiences? While recalling past recollections of fantasizing, while momentarily blocking out the world?

When I chuckle at the above, that's just a hormone. That's all the feeling is, right?


Have you ever watched an animal being slaughtered, or dressed after being shot?

Have you ever cut the head off of a goose and then watched it scurry about gushing blood all over the place while its body is running here and there?

Have you ever watched a human being after he landed splat into the ground because of a partial parachute failure and seen his jugular gush out blood like a garden hose?

I think you might find the color red and MORTALITY are quite well associated in people's psyche.

I know a woman who faints at the sight of blood....literally.

Also have you noticed how many fruits go red when they RIPEN? Grapes, Pomegranate, Apples, Figs, etc.

Have you noticed the color of a vagina?

I think you might find FERTILITY and the color red are quite well associated in people's instincts.

So that might be the explanation perhaps?
 
But I am not talking about consciousness, I am specifically talking about qualia, and even more specifically whether there is utility in referring to meta-layers.
You are assuming qualia, if it exists, is an all-or-nothing thing. Qualia could very well come about in layers, and in different forms. The qualia of hearing music might be fundamentally different from the qualia of seeing a color (from a neurological-pattern point of view, perhaps, or in other ways), but we would still be prone to think of them as simply parts of conscious experience.

Again, I will ask the question in terms of mathematics -- is there any utility in referring to how round roundness is? Or even the roundness of a circle? What is the "roundness" of a circle, anyway?
Using qualia to describe conscious experience is not the same as describing the "roundness of a circle". At least not when one is talking about how qualia empirically manifests itself. (Those who talk in non-empirical terms are a different story. Perhaps they are who you should be going after?) Circles, by mathematical convention, have to be round. Things that the mind experiences do not necessarily have to be presented as qualia.

I challenge you to provide even a single experience that is not conscious.
I can name at least three, though I admit they are controversial:

Most forms of Dreaming (sentient dreaming would be an exception): In most forms of dreaming, your mind is experiencing a LOT of things, but almost none of it enters into qualia. If you wake up in the middle of a dream, you might remember many aspects of it (usually only for a short while), but the qualia of the memory is the only experience you got out of it. You probably got no emergence of qualia at all while you were actually dreaming.
Note: This does NOT imply qualia is dependent on memory. Of course, you can experience a lot of things and forget about them, but you still experienced them at the time. What I am talking about is the idea that most dreams are different from waking life: that you get no genuine qualia while in a dream state, even though you could be said to be experiencing them.

Semi-Conscious States: Consciousness uses glucose (energy). And, it is reasonable to think that the mind will try to find shortcuts in how we think, and shut down parts of itself when not needed, to conserve glucose usage. Thus, there could be moments when you are awake, but not fully and completely conscious. If such states exist, you might not experience actual, genuine qualia, though your mind will "experience" something else.

I can give an example of this, one that Antonio Damasio seems to like, though Daniel Dennett might disagree with it: driving down a long stretch of road and suddenly realizing you don't remember the last 15 minutes of the drive. I realize Dennett would argue that this is a trick of memory: he would say you probably were fully conscious while driving those 15 minutes, but you simply don't remember it.
Damasio seems to think that this is indicative of a shortcutting of consciousness. You probably weren't fully conscious. But you weren't unconscious, either. Some part of your mind was alert enough to respond if sudden action was necessary: for example, if an animal walked in front of the car, you would still be able to stomp on the brakes. (This would use the automatic stimulus/response systems, not conscious thinking, at least not for the first microseconds.) But little to no genuine qualia was seeping through your conscious apparatus during that time. According to the theory, anyway.

Certain forms of Mental Disorders: For example, epileptic automatism ( http://en.wikipedia.org/wiki/Automatism_(medicine) ) could leave its victims without a sense of qualia, though their minds seem to be experiencing... something.


And if you mean "conscious of the experience at the same time the experience is occurring" then I have to ask why the phrase "conscious of the experience at the same time the experience is occurring" can't just be the definition to begin with. And in that case, what is all the fuss about?
There could well be things you experience, at the same time they are occurring, that don't get to go into a state of qualia. I identified some candidates above.

I left "sub-conscious thinking" off the list, because I figured that would not even count as an "experience". But, to neurology, the difference between sub-conscious and semi-conscious does tend to get blurred a bit. Figured that might be worth a mention.

Think about traffic dynamics in a bustling metropolis. The human brain is a similar but much more complicated set of constraint dynamics. (snip)
That's all well and good. But, what does that have to do with what I was talking about?

What evidence do you have that worms don't have such a thing?

For instance, there is no reason a small network of just a few dozen neurons couldn't satisfy the constraints of "building a model from memories sufficiently sophisticated to anticipate future needs more effectively."
It is not the size of the network, but how it is wired, and what it is wired to.

As far as we can tell, the Autobiographical Self seems to be modeled in the Posteromedial Cortices (PMCs), toward the back of the brain on its inner (medial) surface, as identified through fMRI and other lines of evidence. And we have identified several things that are relatively unique about the wiring of this area: its density, where it branches off to, etc. Worms do not seem to have a network wired in the way our PMCs are.

Of course, this does not rule out the idea that worms could have a model of an Autobiographical Self that happens to be modeled in a very different way from ours. But, given how much we already know about the functions of areas in the neural networks of worms (which might not be much, but what we do know does not point in the direction an Autobio model would need to go), this seems unlikely.


Again, this is based on Antonio Damasio's work. I happen to like it, because it offers a clear evolutionary pathway to consciousness, that other thinkers in this area have generally floundered on. So, I adopted it as the working theory I like to use. Most of it does not conflict with any other genuine theories of consciousness: Not even most of Dennett's Multiple Drafts theory, since they address different things. Though, there might be a small number of minor disagreements.

I am just bringing up a question.
That's fine. It's a perfectly valid question to bring up.
 
....

But first....remember that I said that robotic intelligence is an entirely different process to human intelligence. Most importantly, in a robot someone put in a PROGRAM to do what it does. In animals there are no programs.
Every ontological materialist will tell you none of us are anything but programming, which begins at conception and continues until death.

How any programmer will ever manage to write code that accomplishes that for a robot is unknown. If the substrate is biological it may be easier than for silicon chips or pebbles in sand. Pixy & Wowbagger seem to think it's possible in any substrate that supports computing.
 
Pixy & Wowbagger seem to think it's possible in any substrate that supports computing.
If consciousness is going to emerge naturally, it would have to do so from a substrate that would support evolution towards a computing system.

Unless consciousness was Created by some Intelligent Entity :rolleyes: , you need both computing and evolution to do it. Not merely 'computing'.

But, perhaps that's beside the point.
 
So you do agree executing a program by moving pebbles in sand can be conscious?
 
If consciousness is going to emerge naturally, it would have to do so from a substrate that would support evolution towards a computing system.

Unless consciousness was Created by some Intelligent Entity :rolleyes: , you need both computing and evolution to do it. Not merely 'computing'.

But, perhaps that's beside the point.

I'm not aware of any biological process that has been created by evolution to consist of a computing system. Certainly not a Turing-type computing system.

Many of the proofs of the computational nature of consciousness seem to assume it to start with.
 
Note: This does NOT imply qualia is dependent on memory. Of course, you can experience a lot of things and forget about them, but you still experienced them at the time. What I am talking about is the idea that most dreams are different from waking life: that you get no genuine qualia while in a dream state, even though you could be said to be experiencing them.

I'm gonna go ahead and disagree with you here. All three of your examples can be described as situations where most of the brain is functioning normally, but the retrograde amnesia we associate with unconsciousness is present. If qualia isn't dependent on memory, there's no reason all three can't be qualia.
 
So you do agree executing a program by moving pebbles in sand can be conscious?
The pebbles would have to "evolve" to move themselves in the sand in such a way that would compute a conscious system. Having a human do it, even if they are emulating or simulating all of the steps in the process of consciousness, wouldn't be sufficient.

You have to ask where the "algorithm" of consciousness is actually running:

When humans do it, the algorithm is running in their heads, not in the pebbles. If the pebbles were to do it themselves, the algorithm would be somewhere within the arrangement of the pebbles.
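
As a loose illustration of that distinction (a toy example only, not anybody's actual theory of consciousness): in the sketch below, the row of 0/1 cells plays the role of the pebbles, the rule table is the algorithm, and something else -- here the Python interpreter, standing in for the human moving the pebbles -- has to apply it.

Code:
# A toy "pebble computer": the cells are the pebble arrangement, the rule
# table is the algorithm. This runs elementary cellular automaton rule 110
# (known to be computationally universal) for a few steps.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    padded = [0] + cells + [0]  # fixed boundary cells
    return [RULE_110[tuple(padded[i:i + 3])] for i in range(len(cells))]

cells = [0] * 20 + [1]  # an initial arrangement of "pebbles"
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)

The cells never step themselves; the updating has to be carried out by something, which is the point being made about the pebbles.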

A computer program might be able to emerge a consciousness, if it contained the proper "algorithm". (And, if it was granted the opportunity to evolve in just the right way, it would no longer be what any Designer designed.)


There might be another missing ingredient that I allude to in this thread: http://www.internationalskeptics.com/forums/showthread.php?t=212683
I might have to modify the idea, in light of some stuff I read in the past few months. But, the general gist is this:

What the pebbles compute would need to sustain a state whereby a model of the "self" has an opportunity to sense that its mind is separate from its body, even if it is not literally separate from its body. And, that's probably a fairly complex system to wire up.
As a thought exercise, there is no reason why it would be impossible for this to happen to pebbles. Though, it is ridiculously improbable.
 
I'm not aware of any biological process that has been created by evolution to consist of a computing system. Certainly not a Turing-type computing system.
We have to be somewhat flexible with the term "computing system". I don't think either natural consciousness or the natural mind is a Turing system. For one thing, they both evolved for specific tasks (even if those tasks changed over time). But they can be described as computing systems, in an abstract sense.

And, I do think a Turing system could emerge a conscious state, if it was running the right algorithm.


I'm gonna go ahead and disagree with you here. All three of your examples can be described as situations where most of the brain is functioning normally, but the retrograde amnesia we associate with unconsciousness is present. If qualia isn't dependent on memory, there's no reason all three can't be qualia.
What if these are not cases of retrograde amnesia? What if the state of the brain, in these cases, was such that qualia was not actually experienced? There is some circumstantial evidence in that direction, from what I recall.
 
What if the state of the brain, in these cases, was such that qualia was not actually experienced?
Then you'd need to better elucidate the difference between the two.
 
The pebbles would have to "evolve" to move themselves in the sand in such a way that would compute a conscious system. Having a human do it, even if they are emulating or simulating all of the steps in the process of consciousness, wouldn't be sufficient.

You have to ask where the "algorithm" of consciousness is actually running:

When humans do it, the algorithm is running in their heads, not in the pebbles. If the pebbles were to do it themselves, the algorithm would be somewhere within the arrangement of the pebbles.

A computer program might be able to emerge a consciousness, if it contained the proper "algorithm". (And, if it was granted the opportunity to evolve in just the right way, it would no longer be what any Designer designed.)


There might be another missing ingredient that I allude to in this thread: http://www.internationalskeptics.com/forums/showthread.php?t=212683
I might have to modify the idea, in light of some stuff I read in the past few months. But, the general gist is this:

What the pebbles compute would need to sustain a state whereby a model of the "self" has an opportunity to sense that its mind is separate from its body, even if it is not literally separate from its body. And, that's probably a fairly complex system to wire up.
As a thought exercise, there is no reason why it would be impossible for this to happen to pebbles. Though, it is ridiculously improbable.
We can agree, I suspect, that pebbles will not evolve to the point where they move themselves; I at least find it ridiculously improbable that consciousness will exist in the algorithm that occurs if a human or other intelligence moves the pebbles to follow programmed instructions.
 