Are You Conscious?

Are you conscious?

  • Of course, what a stupid question - Votes: 89 (61.8%)
  • Maybe - Votes: 40 (27.8%)
  • No - Votes: 15 (10.4%)
  • Total voters: 144
No. I don't think it does. Turing equivalence is not limited to syntax, though people often seem to argue that it is.

Moving beyond this requires a different sort of programming than we typically engage in now, but there is no theoretical limit that ensures that a universal Turing machine cannot produce consciousness. This is precisely why I continue to invoke "what is feeling?" and "what is meaning?". I think both of those questions have definite directions if not precise answers.

Turing machines do stimulus (the tape) and response (the state of the machine when reading a symbol). Is there anything else happening? If consciousness can emerge from a Turing machine, it emerges from stimulus-response.
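For concreteness, that stimulus-response loop can be sketched as a toy program. This is only an illustration (the unary-increment machine and its rule table are invented for the example, not taken from anywhere in the thread):

```python
# Minimal Turing machine loop: the stimulus is the symbol read from the
# tape; the response (write, move, next state) comes from the transition
# table, keyed on (current state, symbol).
def run_tm(tape, transitions, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    pos = 0
    while state != halt:
        symbol = cells.get(pos, "_")                        # stimulus
        write, move, state = transitions[(state, symbol)]   # response
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy machine that appends a '1' to a unary string, i.e. increments it.
rules = {
    ("start", "1"): ("1", "R", "start"),  # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}
print(run_tm("111", rules))  # prints 1111
```

Everything the machine "does" is exhausted by that read-write-move loop, which is the point: if consciousness can emerge from a Turing machine, it emerges from stimulus-response.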
 
Really? What is awareness? What is consciousness? There is no way to reduce these complex concepts into simpler processes?

There may be a way, but we haven't discovered it yet. Until we've discovered it, we don't know what it is.
 
PixyMisa said:
Pixi's definition incorporates his conclusions and simplifications and thus can always be expected to give the "correct" answer.
No. This is, once again, a hapless flaming strawman.

As you just pointed out, it's a definition. It doesn't give answers. It's a definition.

I'm probably going to drop this line of argument after this because it gets us nowhere, and you keep accusing me of setting it up as a strawman when I'm doing nothing of the sort. But anyway, your definition is "stacked" with your conclusions to give the answer that certain cars are conscious by simple backward chaining (inference) to "certain cars are/employ SRIPs". It stems from the equivalence and qualification error I explained in this post, written while I was on your ignore list:
http://www.internationalskeptics.com/forums/showpost.php?p=5469249&postcount=952 backed up by this post
http://www.internationalskeptics.com/forums/showpost.php?p=5467992&postcount=898

You can say that it's overly broad - or overly precise - or specifies properties that are not actually present in the subject of the definition.
It under-qualifies (similar to too broad) and mistakes equivalence for inclusion (another form of being too broad).


Aside from that, the problem is that our consciousness and its properties are the only consciousness we really know anything about.
Really?

Are dolphins not conscious? Gorillas? Chimps? Elephants? Why? What precise behaviours do they fail to exhibit that you assert are necessary for consciousness?

Ah, see my bold. The word is "know", not "understand", not "have some clue about". "Know" is not a word I ever just bandy about like you seem to sometimes. Philosophically and scientifically, our own consciousness may be the ONLY thing a human knows. Everything else is inferred and merely tries to approach true knowledge.

I do believe all the animals you mentioned have some form of consciousness, for many valid reasons. But I don't "know" it; I just have a high degree of confidence. I certainly don't know how they experience consciousness or how much consciousness they have. By some stringent definitions of consciousness (which I would not agree with, but which some people hold), they would not be considered conscious at all.

Is SHRDLU not conscious? What precise behaviour does it exhibit or fail to exhibit that leads you to this conclusion, and why?

I won't say I "know" SHRDLU is not conscious, but I think so with very high probability, for several reasons. First, it does not have an internal architecture that verifiably matches any known conscious entity. This isn't the fault of SHRDLU but rather of the fact that we don't yet know what a conscious computational architecture looks like, in humans or in any other animal we suspect of being conscious. Secondly, it does not clearly demonstrate conscious self-awareness. Conscious self-awareness is more than just reflecting on your own programming or reflecting your programming on your senses. And I hope I don't have to fight Turing Test believers by asserting that mimicking at least some conscious behavior (actions) does not prove consciousness. Anyway, I'm not sure we yet know what all the components of conscious self-awareness are, but I'd claim that they include:

1. Variable attention and focus over some minimum degrees of freedom.
There is evidence to suggest that if humans are deprived of sensory input early enough, before they can memorize some form of sensory-acquired experience, they exhibit no consciousness, or their consciousness is seriously impaired. A certain minimum integration of sensory information is needed, AND there must be a critical level of variability in the experiences being sensed as well! Consciousness needs to have something to switch focus between or it cannot exist. This is analogous to the blindness induced by cessation of saccades. Other neural systems exhibit similar behavior.

SHRDLU only has one degree of freedom (arguably more but not many) for attention and focus (placing blocks) and only one sense - typed input. It's got insufficient degrees of freedom for self-awareness to maneuver.

2. Anticipation and Volition across a critical threshold of degrees of freedom that are subject to constant change.
Self-awareness requires the ability to anticipate events and make "free" choices that also anticipate the actions of your own volition. Brains are naturally modeling and forecasting engines that forecast not only external events but internal ones. A paper was published a few years ago demonstrating that humans do this not only for their own minds but for the minds of others they know well, and this is also related, in some regards, to distinguishing me from you.

SHRDLU cannot anticipate anything, nor does it have real volition. Seeing the next step in the routine or reflecting on the last step is not anticipation. It has no freedom to will itself along any trajectory that is not already hardcoded into it. And even if it did, there are few degrees of freedom. I don't know what the critical level is, but I feel pretty confident SHRDLU doesn't come close.

3. Recognizing identity: self vs. other. Self-awareness involves some idea of, and the ability to both fix and generalize, what we are. It is easy to take this for granted, but it's not that easy, and we cannot assume we have a separate sense of self from day one. Studies with infants and animals suggest that self-identity is not something you have in the womb. Baby ducks will imprint on the first head configuration meeting certain criteria that they see. They can imprint on a human and apparently assume they're human by reflection. They will tend to associate with humans. People under severe sensory deprivation will report that they lose their sense of self and the ability to maintain self-awareness. Remember the isolation studies for astronauts in the '50s and '60s. Some people went nuts.

There is clearly a special capability, and interaction with the environment, required to yield self-identity and mental separation/distinction. There is no evidence that SHRDLU meets these criteria at all.

4+ I believe there are others but I don't have time to think of them all - this is a start. Pixi will probably dismiss them all as handwaving and BS anyway so I don't want to make more effort until I discover otherwise.
 
Turing machines do stimulus (the tape) and response (the state of the machine when reading a symbol). Is there anything else happening? If consciousness can emerge from a Turing machine, it emerges from stimulus-response.

Neurons do stimulus (input) and response (the state of the neuron when summing synaptic potentials). Is there anything else happening functionally in terms of communication?

Neurons are, of course, more complex in terms of (1) their biology and (2) their processing potential. For one, many neurons have a basal firing rate that is changed by inhibitory or excitatory input. For another, we can easily change the firing threshold with particular metabotropic receptors or from neuropeptide input -- which can cause significant downstream alterations by turning on sets of genes.

Neurons are complex systems, so it is very wrong to equate a neuron with a simple Turing equivalent. The proper analogue, within a neuron, for a simple Turing equivalent is probably an ion channel (think of the sodium channel, but they all do essentially the same thing). If you have the idea that a simple Turing machine is equivalent to a neuron, then you are going to have to conclude that you can't get there from here; but that is simply the wrong perspective.

To get to the level of what a neuron does requires quite a bit of programming, but I don't see how it is impossible to string a group of Turing equivalents into the functional equivalent of a neuron.
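The contrast can be made concrete with a toy model. The class below is an invented illustration, not a biophysical simulation: it gives a "neuron" a basal firing rate, summation of excitatory and inhibitory input, and a threshold that other signals can shift (loosely standing in for metabotropic or neuropeptide modulation):

```python
# Toy rate-based neuron: fires at a basal rate with no input, raises its
# rate when summed input crosses a threshold, and allows the threshold
# itself to be modulated by slower signals.
class ToyNeuron:
    def __init__(self, basal_rate=5.0, threshold=1.0):
        self.basal_rate = basal_rate  # firing rate with zero net input
        self.threshold = threshold    # crossing it switches response mode

    def modulate(self, delta):
        """Shift the firing threshold, as slow modulators might."""
        self.threshold += delta

    def respond(self, excitatory, inhibitory):
        """Return a firing rate for the given summed inputs."""
        net = excitatory - inhibitory
        if net >= self.threshold:
            return self.basal_rate + 10.0 * (net - self.threshold)
        return max(0.0, self.basal_rate + net)  # rate cannot go negative

n = ToyNeuron()
print(n.respond(0.0, 0.0))  # basal rate with no input: 5.0
print(n.respond(2.0, 0.5))  # suprathreshold input: 10.0
n.modulate(5.0)             # modulation raises the threshold
print(n.respond(2.0, 0.5))  # same input, now subthreshold: 6.5
```

Even this crude sketch needs state, a table of behaviors, and arithmetic over inputs -- a small program rather than a single switch. Stringing many such units together is "quite a bit of programming", but nothing beyond Turing equivalence.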
 
There may be a way, but we haven't discovered it yet. Until we've discovered it, we don't know what it is.


OK, let's try another approach, from within neurology, if you don't want to play the linguistic game. Can I not affect your consciousness in many different ways, with different drugs or different lesions, demonstrating that at the very least there are levels of consciousness?

The biology tells us something unequivocal -- that consciousness is not one thing. Analyzing awareness I think it becomes obvious that awareness is not one thing. In the thread examining this we haven't even gotten to some of the more interesting aspects of the issue, and there are already several components being discussed.
 
When someone answers "yes" to the question "Do you exist" that does not imply that the "I" is an unchanging essence. "We" are, in a broad sense, a particular instantiation of certain materials in a particular pattern, both of which change over time -- both material and pattern. Simply because "we" feel that "we" are the same as earlier in life does not mean this is true. It's a feeling after all, a feeling that arises in the present. We cannot compare it to earlier evidence of who or what we were in the past.

Since the pattern that, in part, constitutes what "I" call "me" changes with experience "I" am literally not the same "being" (substitute a verb here, since nouns are not the proper kind of word to convey the idea) "I" was when "I" began this sentence -- since some of the synaptic connections are theoretically changing to account for memory of what "I" am currently writing.

Even an atom is just something that its subatomic components do, and they, their components -- and so on -- down the reductive chain. In this sense, matter is a verb -- it's all flux, it's all energy. Our constituent atoms are oscillating patterns of field activity, our cells flowing patterns of molecular activity, and our bodies the swarming activity of massive cell colonies. Our memories and habits are themselves liable to change over time.

But even if one were to strip away all of a subject's memories and revert them back to the state they were in as newborns, there is still a subject that experiences. They still exist as distinct objects with subjective states that only they have individual access to, as such. For as long as they're alive they are still an "I"; they still have being.

Since consciousness is always a present experience, I would maintain that you have this backward. Any sense of "I" consists primarily of prior memory -- autobiographical memory.

When I speak of the "I", I'm not referring to our mental biographical content, per se, but to the individual experiencer of that content. Consciousness itself IS the "I".

But, look, if you're just going to say David Chalmers got it right and consciousness is some irreducible fundamental component of the cosmos, then just say that. It's fine if you want to believe that, but there is no evidence of it and no logical argument that makes it so.

I've no idea whether or not it is a "fundamental" component of the cosmos. All I'm certain of is that it IS a component and that we do not have a solid understanding of how it relates to the other components. Unlike Chalmers, I'm fairly confident that, at least in principle, science can give us such an understanding.

I am, frankly, a bit put off by folks who want to argue that consciousness is irreducible. The same thing was once said of perception and of life, and is currently said of certain biological processes (by at least one biochemistry professor who was disowned by his department). The history of science, and discussions on this very forum, tend to argue very strongly against this perspective.

If that is what you want to believe, then that's fine for you. Unless you can show me some sort of demonstration why anyone should believe the same, I would prefer to work toward a useful definition.

We're not even certain whether 'elementary' particles are irreducible. Every time physics identifies some reductive component of matter, we end up coming upon even more fundamental elements that they're 'composed' of. All I wanna know is how consciousness fits into what we do know of physics. If it turns out to be the irreducible fundament of everything, fine; if not, also just as fine. For me, it's not a matter of belief or disbelief. I'm not invested either way -- I just wanna know.
 
I wonder if we differ from SHRDLU in more than complexity.
And whether at some point of complexity we simply decide to call it consciousness.
 
PixyMisa said:
But I've belabored that point enough. The next big problem I have with Pixi is he actually thinks he's explained consciousness with his definition, at least to his satisfaction.
Have I ever said that I have explained consciousness?

Depends how I interpret this - but not worth arguing. More relevant is that apparently you think you've explained it well enough to claim that certain forms of cars and toasters are conscious. That's enough for now.


PixyMisa said:
Are some cars conscious by my definition? Yes, you say so yourself. So the claim is simply correct.

Yes, I do say so myself!!!! But ONLY because I'm forced to, given your definition. That's a BIG reason why I think your definition is WRONG: it subsumes just those types of silly conclusions a priori, rather than a posteriori as proper science would demand.

PixyMisa said:
FedUpWithFaith said:
Pixi's definition is that consciousness is self-referential computation.
Yes.

PixyMisa said:
FedUpWithFaith said:
So let me ask you this, how much more does this really tell us about consciousness than this definition:

Consciousness is computation.
It tells us that self-reference is required.

PixyMisa said:
FedUpWithFaith said:
With my simplified definition anything computed is conscious - an even more ridiculous claim.
No, for several reasons. First, while your definition is over-general, it is correct. Consciousness is computation.

Oh my. There is the over-broad equivalence and qualification error you're making again. It is NOT correct. It would be correct to say "consciousness is a form of computation", however. Big difference in qualification, though - see it yet?

If your definition "SRIP is Consciousness" yields the conclusion that toasters and cars that have SRIPs are conscious, THEN by necessity, if the definition "Consciousness is computation" is true, ANY computational device, or any instrument depending on computation, is conscious. An abacus would be conscious. Don't you see that?

PixyMisa said:
Trees are plants. Are plants trees? No, some plants are trees.

So no, your assertion does not follow from your definition.

This is an irrelevant non sequitur. I did not commute the definition to draw the conclusion, any more than you did with yours.


I think I addressed the balance of your questions/issues in my previous post. We'll see..
 
OK, let's try another approach, from within neurology, if you don't want to play the linguistic game. Can I not affect your consciousness in many different ways, with different drugs or different lesions, demonstrating that at the very least there are levels of consciousness?

The biology tells us something unequivocal -- that consciousness is not one thing. Analyzing awareness I think it becomes obvious that awareness is not one thing. In the thread examining this we haven't even gotten to some of the more interesting aspects of the issue, and there are already several components being discussed.

Even if it were accepted that consciousness is not one thing, we don't know what things it is composed of, how it is so composed, what elements are essential and which are optional, and so on.
 
Even an atom is just something that its subatomic components do, and they, their components -- and so on -- down the reductive chain. In this sense, matter is a verb -- it's all flux, it's all energy. Our constituent atoms are oscillating patterns of field activity, our cells flowing patterns of molecular activity, and our bodies the swarming activity of massive cell colonies. Our memories and habits are themselves liable to change over time.

But even if one were to strip away all of a subject's memories and revert them back to the state they were in as newborns, there is still a subject that experiences. They still exist as distinct objects with subjective states that only they have individual access to, as such. For as long as they're alive they are still an "I"; they still have being.

I guess I'm stuck in thread now.:)

As to all of it being flux, verb, relation -- yep, welcome to the wonderful world of Heraclitus.

As to the experiencer being the "I", sure you can define it as such; it's a fairly deflationary definition of "I", but that's fine. I think it still leaves us with a set of processes and not something fundamental, though.

I would still argue that this is not "the thing itself" because I do not think that experiencing is a "thing" (which is a minor, linguistic point, since it is all part of the relations that seem to constitute the world) or a single process. The experiencer, in my view, is a complex of different processes -- we are currently trying to piece out in neuroscience which processes are involved -- and qualia are also a complex of different processes. They are only "the thing itself" by means of the way someone defines them. Qualia, when examined at a neurobiological level, appear to depend on subconscious perceptual mechanisms that reconstruct certain aspects of the environment and of the perceived object, incorporate higher-order hypotheses about the object in conjunction with prior memories of it (including its function), and add emotional inputs that provide the 'coloring' of the experience.



I've no idea whether or not it is a "fundamental" component of the cosmos. All I'm certain of is that it IS a component and that we do not have a solid understanding of how it relates to the other components. Unlike Chalmers, I'm fairly confident that, at least in principle, science can give us such an understanding.

OK, good. If science can give us an understanding, then it is most likely reducible, since most of what science explains, it explains at more fundamental levels. Whatever the fundamental constituent of the universe *is*, we can never know. We can decide that it is consciousness, but it is not our consciousness. Ours depends on the interaction of stuff we call matter -- so all the evidence shows.



We're not even certain whether 'elementary' particles are irreducible. Every time physics identifies some reductive component of matter, we end up coming upon even more fundamental elements that they're 'composed' of. All I wanna know is how consciousness fits into what we do know of physics. If it turns out to be the irreducible fundament of everything, fine; if not, also just as fine. For me, it's not a matter of belief or disbelief. I'm not invested either way -- I just wanna know.

On that we are agreed. I want to know the same. I'm glad that you are not committed to the "you can't possibly get there from here" camp. It seems like a group of people who don't really want to know. They just want to believe in something that makes them somehow special to the universe itself.
 
Even if it were accepted that consciousness is not one thing, we don't know what things it is composed of, how it is so composed, what elements are essential and which are optional, and so on.


Yes, true, to some extent. We know some outlines, but we do not know how it all works yet in detail.

I think we know some of the essential components -- but they are not that big of a deal. There is information processing. There is arousal (without it we can't be conscious, according to one sense of that term). There is awareness. Etc. How the brain does it requires quite a bit more work, but the Chalmers line is just a worthless dead end that wastes people's time.
 
I'm probably going to drop this line of argument after this because it gets us nowhere, and you keep accusing me of setting it up as a strawman when I'm doing nothing of the sort. But anyway, your definition is "stacked" with your conclusions
Again, no.

But let it pass, let it pass, since later in the post you get to the meat of the matter in a meaningful way, so I'll skip ahead:

Anyway, I'm not sure we yet know what all the components of conscious self-awareness are, but I'd claim that they include:
Yes, thank you!

1. Variable attention and focus over some minimum degrees of freedom.
Okay. SHRDLU knows its blockworld and the conversation. It knows everything about the blockworld and the conversation. It knows nothing else. SHRDLU has (in effect) two senses and can attend to each perfectly. Humans are quite different.

There is evidence to suggest that if humans are deprived of sensory input early enough, before they can memorize some form of sensory-acquired experience, they exhibit no consciousness, or their consciousness is seriously impaired.
Not surprising, I'd say.

A certain minimum integration of sensory information is needed AND there must be a critical level of variability in the experiences being sensed as well!
Sure.

Consciousness needs to have something to switch focus between or it cannot exist.
Yes. One of my requirements when we discussed an absolutely minimal consciousness some months back was that there be more than one input, each having multiple states. It's arguable what the exact minimum is, but there's certainly a minimum below which it's not meaningful to describe a system as conscious.

SHRDLU only has one degree of freedom (arguably more but not many) for attention and focus (placing blocks) and only one sense - typed input.
Well, two senses. It knows where the blocks are, which would at least seem akin to proprioception.

It's got insufficient degrees of freedom for self-awareness to maneuver.
I'd argue that it has more than one, but let's set attention and degrees of freedom as point 1 for now.

2. Anticipation and Volition across a critical threshold of degrees of freedom that are subject to constant change.
Self-awareness requires the ability to anticipate events and make "free" choices that also anticipate the actions of your own volition. Brains are naturally modeling and forecasting engines that forecast not only external events but internal ones. A paper was published a few years ago demonstrating that humans do this not only for their own minds but for the minds of others they know well, and this is also related, in some regards, to distinguishing me from you.
Brains certainly construct models. SHRDLU does too. But I certainly agree that SHRDLU does not anticipate.

Volition is a messy area that I'd like to avoid for now, but modelling and anticipation, definitely. Let's make that point 2.

SHRDLU cannot anticipate anything, nor does it have real volition.
Agreed.

Seeing the next step in the routine or reflecting on the last step is not anticipation.
They can form the basis for anticipation, but SHRDLU does not even exhibit that.

It has no freedom to will itself along any trajectory that is not already hardcoded into it.
Arguably, neither do we, which is why I want to avoid that one.

And even if it did, there are few degrees of freedom. I don't know what the critical level is but I feel pretty confident SHRDLU doesn't come close.
Okay.

3. Recognizing identity: self vs. other. Self-awareness involves some idea of, and the ability to both fix and generalize, what we are.
Yes, thank you. You mentioned generalisation in a comment to Frank earlier. SHRDLU has a fixed initial set of categories. It can create a new category if you tell it to, but (as far as I know) it can't construct categories on its own. It cannot even do this for its blockworld, and certainly can't for itself and its conversational partners.

It is easy to take this for granted, but it's not that easy, and we cannot assume we have a separate sense of self from day one. Studies with infants and animals suggest that self-identity is not something you have in the womb.
Thus the recurrent surprise when babies of various species realise that when I bite this thing over here, it hurts! Or, for babies of some species somewhat later on, that this thing in the mirror is me!

Baby ducks will imprint on the first head configuration meeting certain criteria that they see. They can imprint on a human and apparently assume they're human by reflection. They will tend to associate with humans.
Yup.

People under severe sensory deprivation will report that they lose their sense of self and the ability to maintain self-awareness. Remember the isolation studies for astronauts in the '50s and '60s. Some people went nuts.
Yes. Mind you, if you study any group of people long enough, some of them will go nuts...

There is clearly a special capability, and interaction with the environment, required to yield self-identity and mental separation/distinction.
Not sure I like the word special there. It begs to be seized by the dualists.

There is no evidence that SHRDLU meets these criteria at all.
Agreed. Never mind the ability, it doesn't exhibit the behaviour.

4+ I believe there are others but I don't have time to think of them all - this is a start. Pixi will probably dismiss them all as handwaving and BS anyway so I don't want to make more effort until I discover otherwise.
Good grief man, will you please give up the reflexive ad hominems and strawmen for a while? This is what I was asking for. You provided it. I agree with your points.

This doesn't mean I necessarily agree that all these points must be added to the definition of consciousness, but I do agree that all of them are worth serious consideration in the construction of any such definition.

As I said (several times) I created a minimal definition - necessary but not sufficient - and explained my reasoning.

You haven't been here for long, and I don't know what the discussions are like over at the Dawkins forums, but here we have (or have had, variously) people seriously arguing that consciousness is:


  • The fundamental existent of all reality.
  • An immortal immaterial entity that beams data into our brains like a radio broadcast.
  • Propagated immediately throughout the brain by an electromagnetic field.
  • Quantum. (Never any details, but definitely quantum.)
Let me think...


  • A property found even in subatomic particles.
  • Not computable, that's a popular one.
  • Tied to specific physical properties that are also not computable.
There are others, but they're too weird for me to recall at this late hour.

So I drew a line in the sand and said: This is my definition of consciousness, and this is why I define it that way. If you think differently, tell me what your definition is and why.

Which no-one ever did. Well, no-one that disagreed with me. Well, not coherently. ;)

We do agree on the fundamentals (subject to some minor semantic tweaking), so I'm afraid you don't count either, but you have raised some well-defined and worthwhile points, which was all I ever wanted.

That and a pony.
 
This is an irrelevant non sequitur. I did not commute the definition to draw the conclusion, any more than you did with yours.
I realised after posting that I'd gotten sidetracked and was going to go back and strike it out, but never mind that now. You're right, it's not relevant, and wasn't the point I'd intended to address. My apologies.
 
Oh my. There is the over-broad equivalence and qualification error you're making again. It is NOT correct. It would be correct to say "consciousness is a form of computation", however. Big difference in qualification, though - see it yet?
Nope.

Trees are plants.

Oh, I see the point you are attempting to make, but it requires ignoring everything I have said - and misstating and misinterpreting my definition while you're at it.

This is not however the point I wish to discuss, because your input on the other points is far more interesting. Let's argue cognition, not semantics.
 
…contribution #2. Perhaps if I said my name is Eminem…

What behavior is necessary to assert that a creature is conscious? Perhaps the following: I am conscious if I know I am (a capacity for self-definition). A ‘consciousness’ will be capable of knowing that it is, and perhaps (should be) capable of ‘recognizing’ (via some mode of communication….conventional or otherwise) others that are (or recognize when some form of the phenomenon exists elsewhere [an example….some would naturally argue a dubious one….but one just the same….would be the historical figure Francis of Assisi who was reputed to have some ability to ‘relate’ to some variety of fundamental reality of life {call it consciousness or flubz….doesn’t matter….we don’t know how he did it}in the animals around him]). Consciousness is not conditional on the ability to ‘explain’ the issue through some form of limited symbolic vocabulary (English, for example) (….although for those of us who have been cast out of heaven it seems a logical necessity).

….by these conditions, it would be impossible to definitively assert that anyone here is conscious (everyone here does [and should] express doubts, uncertainties, questions etc. about what the phenomenon [which is, essentially, themselves] even is), because to know is not to be merely capable of stating the fact (inference and approximation)….to know is a function of fundamental being (as fuwf quite accurately pointed out) that has exclusive properties that are either…known, or not. To know them is to know that you know them. To not know them is not necessarily to realize that you don’t, or what you don’t (which is perhaps why these ‘discussions’ rage so intensely [and without resolution] across so many forums).
 
Ichneumonwasp said:
Even an atom is just something that its subatomic components do, and they, their components -- and so on -- down the reductive chain. In this sense, matter is a verb -- it's all flux, it's all energy. Our constituent atoms are oscillating patterns of field activity, our cells flowing patterns of molecular activity, and our bodies the swarming activity of massive cell colonies. Our memories and habits are themselves liable to change over time.

But even if one were to strip away all of a subject's memories and revert them back to the state they were in as newborns, there is still a subject that experiences. They still exist as distinct objects with subjective states that only they have individual access to, as such. For as long as they're alive they are still an "I"; they still have being.

I guess I'm stuck in thread now.:)

As to all of it being flux, verb, relation -- yep, welcome to the wonderful world of Heraclitus.

Seems modern science has confirmed his intuitions :)

Ichneumonwasp said:
As to the experiencer being the "I", sure you can define it as such; it's a fairly deflationary definition of "I", but that's fine. I think it still leaves us with a set of processes and not something fundamental, though.

It may or may not be fundamental to reality, but it is most certainly fundamental to our consciousness. If the self is the same thing as what we call consciousness [as seems to be the case], then solving the alleged HPC will give us insight into who we are and add to our understanding of our place in the cosmos.

Ichneumonwasp said:
I would still argue that this is not "the thing itself" because I do not think that experiencing is a "thing" (which is a minor, linguistic, point since it is all part of the relations that seem to constitute the world) or a single process. The experiencer, in my view, is a complex of different processes -- we are currently trying to piece out which processes are involved in neuroscience -- and qualia are also a complex of different processes. They are only "the thing itself" by means of the way someone defines them. Qualia, when examined at a neurobiological level, appear to depend on subconscious perceptual mechanisms that reconstruct certain aspects of the environment and the object perceived and incorporate higher order hypotheses about the object in conjunction with prior memories of the object that include its function and emotional inputs that provide the 'coloring' of the experience.

I agree that the specifics of human experiences are determined by the processing and filtering done by unconscious biological processes. They determine which stimuli induce particular qualia in the subject. But the fact still remains that the capacity for qualia is the sine qua non of consciousness. In this sense they are elementary. We can still view them as "things" in the same way that we consider photons and electrons as "things", even though we know that they are essentially just ripples in the vacuum field.

In order to scientifically understand consciousness, we must approach qualia as objects in and of themselves, in the same way that we searched for genes as verifiable objects. There are clearly underlying principles that govern their generation and variation. IMO, it is also glaringly obvious they are not simply abstractions but tangible products of the biophysical conditions of living brains. There must be a physical [as opposed to strictly functional] reason why stimulus X produces qualia Y rather than Z. I think the approach of viewing consciousness strictly in terms of computation is both myopic and a dead end.

Ichneumonwasp said:
I've no idea whether or not it is a "fundamental" component of the cosmos. All I'm certain of is that it IS a component and we do not have a solid understanding of how it relates to the other components. Unlike Chalmers, I'm fairly confident that, at least in principle, science can give us such an understanding.

OK, good. If science can give us an understanding, then it is most likely reducible since most of what science explains, it explains at more fundamental levels. Whatever the fundamental constituent of the universe *is* we can never know. We can decide that it is consciousness, but it is not our consciousness. Ours depends on interaction of stuff we call matter -- so all the evidence shows.

Which means that there are necessary physical conditions that must be met to produce what we call consciousness. If there are necessary conditions then there must be a way to "observe" qualia as external physical objects in some way. Even if we cannot externally observe consciousness directly, there is almost certainly some means of identifying the sufficient physical conditions for consciousness. If I were forced to wager a guess, I would say that it is an inherently biological phenomenon.

Ichneumonwasp said:
We're not even certain if 'elementary' particles are irreducible. Every time physics identifies some reductive component of matter, we end up coming upon even more fundamental elements that they're 'composed' of. All I wanna know is how consciousness fits into what we do know of physics. If it turns out to be the irreducible fundament of everything, fine; if not, also just as fine. For me, it's not a matter of belief or disbelief. I'm not invested either way -- I just wanna know.

On that we are agreed. I want to know the same. I'm glad that you are not committed to the -- you can't possibly get there from here -- camp. Seems like a group of people who don't really want to know. They just want to believe in something that makes them somehow special to the universe itself.

I think people's tendency to cling to their favorite "-ism" greatly impairs their ability to impartially examine issues like this. Some are so addicted to certainty that they cannot tolerate maintaining an uncertain stance, hampering their ability to gain new insight. Instead of saying "I dunno, so I'll forestall coming to a definitive conclusion" they cling to ideological quick fixes; their metaphysics harden into dogmas rather than tentative tools for understanding the world.
 
Pixi,

I was very pleased with your arguments for the most part, Pixi, and even relieved, though we still disagree on some major points. I'm going to start with this, which you might not like, but only because I'm trying to put this sort of thinking behind me/us.

PixyMisa said:
4+ I believe there are others but I don't have time to think of them all - this is a start. Pixi will probably dismiss them all as handwaving and BS anyway so I don't want to make more effort until I discover otherwise.
Good grief man, will you please give up the reflexive ad hominems and strawmen for a while? This is what I was asking for. You provided it. I agree with your points.

I wasn't trying to insult you at all. I really felt that way. I was up all night writing this stuff and was dog-tired, and because of our central disconnect over how anyone (not just you) is entitled to define something scientifically with both operational and observable components, I really feared the worst: that the very underpinnings of each of my points would be squandered in misunderstanding of the very ontology of their semantics and logical extensions. You didn't do that this time, for the most part, at least not where it counted to "beat me" in the argument - something I may have too quickly concluded was your modus operandi from my early experiences here watching you with others (hence my first post to you in the forum). That was my fault and it exacerbated the problem.

Anyway, to move on, it's clear we are making some progress. It seems that you are beginning to realize that your definition of consciousness is not sufficiently qualified to support as much productive scientific explanation as you previously seemed to claim. Or do you still assert that today's SRIP-based cars and toasters are conscious? Hopefully, I at least put a big dent in your confidence about those things.

The one item you didn't really want to engage was volition. I can understand that because it is messy and opens up the free will can of worms among other things. But it's still VERY important. Now that I've had some rest I've been able to think about this more and think I can possibly carve out the necessary aspects of volition in a "less messy" way for you. I thought of other conditions for consciousness too but I think we have enough with the 3 (actually 4) I already laid out. I also want to say that this dialogue is turning out to be very fruitful for me. Usually, when discussing consciousness I don't have to go far outside my comfort zone and I can usually find references to highlight my views or I simply remember a set of principles I can regurgitate. In this case I have no book on my shelves here that lays out all the stuff I've written to you in any coherent whole. I pretty much had to think it all through and write it up myself. Thank you for forcing me to undergo the exercise I was trying to avoid.

So let me tell you what elements I think are critical in volition for self-awareness and other cognitive aspects of consciousness that you might agree are critical too. First, it requires autonomous computation to define and explore a potentially indeterminate solution space of potentially very high degrees of freedom with no a posteriori constraints and few or no a priori limitations other than those which limit the nature of computation itself. Second, and essentially connected to the first principle, stochastic processing is required to expand, contract, and explore the solution space (and concomitant degrees of freedom) probabilistically (no "absolute solution" is guaranteed). Third, given that you agree with me on the modeling and predictive nature of brain computation, I would argue that the first and second conditions above must not only apply to our immediate sense perception but to the simultaneous and reflective modeling and prediction of how our minds compute a reaction to sense perception. In this sense, my assertion is compatible with Dennett's "Multiple drafts" explanation for consciousness but provides deeper process understanding - especially relevant to how we do know the brain works, particularly in its stochastic nature.

Let's contrast this to other computer programs like SHRDLU, which meets none of the above criteria. In most computer programs like SHRDLU the search space of solutions is completely constrained and specified. SHRDLU has no indeterminate search space it has to explore, and so no effort is made to create or discover its own. More "sophisticated" programs take an a posteriori-guided a priori approach to find solutions in open-ended search spaces, which requires already knowing some constraints about the problem itself. Arguably, a program as simple as what guides a Roomba can achieve this (but the cost of Roomba-style computation goes up quadratically with degrees of freedom, so it isn't practically scalable). The most sophisticated programs place no a posteriori conditions and few if any a priori conditions on determining and "searching" (in the sense of finding local or global minima) the solution space. I'm not sure AI or any human-generated computer program has ever really satisfied this last condition, but I don't think we need to get into a deep argument about it since it's conceivable that the brain doesn't either and ultimately takes shortcuts and rough approximations (making stochastics even more critical) we can't even imagine yet. Certainly, algorithms have been implemented with arbitrarily high levels of self-referential and other forms of (usually stochastic) processing to approximate, often facing halting limits in an inefficient manner, NP-complete problems (like the traveling salesman problem). Without being able to get very specific, I assert that consciousness still requires these higher-level forms of computation, without being able to tell you exactly how low we can drop the bar. But I think we can safely assume SHRDLU is nowhere close, nor is any form of car or toaster computer known to man.
And even if all the other conditions I gave for self-awareness are wrong (though that is not what you claimed), just being right on any major piece of the above argument is enough.
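To make the "stochastic exploration of a solution space" idea concrete, here is a minimal, purely illustrative sketch of simulated annealing on the traveling salesman problem. This is not a claim about how brains (or SHRDLU) work, and all function names and parameters are invented for illustration. The search proposes random tour changes and accepts occasional regressions with a probability that shrinks as a "temperature" cools, so it explores probabilistically and never guarantees the global optimum:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of the closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal_tsp(dist, steps=20000, t0=1.0, cooling=0.999, seed=0):
    """Stochastic search: always accept improvements, sometimes accept
    worse tours (Metropolis rule), so the walk can escape local minima.
    No 'absolute solution' is ever guaranteed, only a probably-good one."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        # 2-opt move: reverse a random segment of the tour
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, dist)
        # Accept improvements always; accept regressions with prob exp(-delta/t)
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling  # cool down: deviations become rarer over time
    return best, best_len
```

The point of contact with the argument above is the acceptance rule: the algorithm trades determinism for the ability to roam an enormous search space, settling on a probably-good rather than provably-best answer.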

Anyway, I hope you found this clarification and expansion helpful to our arguments Pixi.

PixyMisa said:
This doesn't mean I necessarily agree that all these points must be added to the definition of consciousness, but I do agree that all these points are worth serious consideration in the construction of any such definition.

As I said (several times) I created a minimal definition - necessary but not sufficient - and explained my reasoning.
....
So I drew a line in the sand and said: This is my definition of consciousness, and this is why I define it that way. If you think differently, tell me what your definition is and why.

Which no-one ever did. Well, no-one that disagreed with me. Well, not coherently. ;)

We do agree on the fundamentals (subject to some minor semantic tweaking), so I'm afraid you don't count either, but you have raised some well-defined and worthwhile points, which was all I ever wanted. That and a pony.

I'd say if you still feel that SRIP cars are remotely likely to be conscious we still have a major disagreement on the essence and definition of consciousness.

I'll get you the pony when you change your mind about that.:D
 
So let me tell you what elements I think are critical in volition for self-awareness and other cognitive aspects of consciousness that you might agree are critical too. First, it requires autonomous computation to define and explore a potentially indeterminate solution space of potentially very high degrees of freedom with no a posteriori constraints and few or no a priori limitations other than those which limit the nature of computation itself. Second, and essentially connected to the first principle, stochastic processing is required to expand, contract, and explore the solution space (and concomitant degrees of freedom) probabilistically (no "absolute solution" is guaranteed). Third, given that you agree with me on the modeling and predictive nature of brain computation, I would argue that the first and second conditions above must not only apply to our immediate sense perception but to the simultaneous and reflective modeling and prediction of how our minds compute a reaction to sense perception. In this sense, my assertion is compatible with Dennett's "Multiple drafts" explanation for consciousness but provides deeper process understanding - especially relevant to how we do know the brain works, particularly in its stochastic nature.

I think that this is one of the best descriptions of volition I've seen presented on this forum. I knew you'd bring a lot to the table, FUWF :D

Near as I can tell, volition is inherently active and spontaneous; I'd also argue that it's an attribute specific to consciousness. I suspect that if it weren't for the constraints of instincts and natural dispositions, which provide default behavioral repertoires, conscious volition would be willy-nilly -- not unlike the "random" behavior of individual particles. In previous discussions I entertained the notion that unconscious mental processes act as default quasi-deterministic constraints, while consciousness adds an element of indeterminacy to the actions of an organism. Conscious behaviors have a strong element of improvisation and, in humans at least, serve as the root of imagination, foresight, and creativity.

Let's contrast this to other computer programs like SHRDLU, which meets none of the above criteria. In most computer programs like SHRDLU the search space of solutions is completely constrained and specified. SHRDLU has no indeterminate search space it has to explore, and so no effort is made to create or discover its own. More "sophisticated" programs take an a posteriori-guided a priori approach to find solutions in open-ended search spaces, which requires already knowing some constraints about the problem itself. Arguably, a program as simple as what guides a Roomba can achieve this (but the cost of Roomba-style computation goes up quadratically with degrees of freedom, so it isn't practically scalable). The most sophisticated programs place no a posteriori conditions and few if any a priori conditions on determining and "searching" (in the sense of finding local or global minima) the solution space. I'm not sure AI or any human-generated computer program has ever really satisfied this last condition, but I don't think we need to get into a deep argument about it since it's conceivable that the brain doesn't either and ultimately takes shortcuts and rough approximations (making stochastics even more critical) we can't even imagine yet. Certainly, algorithms have been implemented with arbitrarily high levels of self-referential and other forms of (usually stochastic) processing to approximate, often facing halting limits in an inefficient manner, NP-complete problems (like the traveling salesman problem). Without being able to get very specific, I assert that consciousness still requires these higher-level forms of computation, without being able to tell you exactly how low we can drop the bar. But I think we can safely assume SHRDLU is nowhere close, nor is any form of car or toaster computer known to man.
And even if all the other conditions I gave for self-awareness are wrong (though that is not what you claimed), just being right on any major piece of the above argument is enough.

I'm going to go out on a limb and propose that subjective experience [i.e. qualia] is the distinguishing feature of conscious reasoning that makes it inherently open-ended. When making subjective valuations there is no objectively "right" or "wrong" answer; any and all choices initiated from consciousness must be based, to some degree, upon subjective valuation and interpretation. Even a conscious choice to fall back on habit requires subjective valuation, since one consciously chooses that habit over improvisation or another conditioned response. Thus, our conscious volition has no choice but to be "free".

I propose that p-zombies cannot exist in principle, because there will be some discernible differences in behavior between automatons [which must be strictly consistent with preset algorithmic constraints] and conscious entities [which may exhibit inconsistent behaviors]. The only way such a zombie could exist is if its consciousness were 'paralyzed'; stuck in observer mode with no way of initiating actions.

In automated systems initial conditions completely determine behavior and any deviation is externally induced; in volitional entities initial conditions do not strictly constrain behavior and the behavioral course may be spontaneously modified. For all intents and purposes, the behavior of SHRDLU is scripted and non-spontaneous -- it exhibits no volition. If it somehow is imbued with consciousness, that consciousness clearly plays no role in its outputs. Given the above, we can reasonably conclude that SHRDLU's operations are not conscious.
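The scripted-versus-spontaneous contrast can be sketched in a few lines of toy code. This is a loose illustration (the names, the habit table, and the 0.2 deviation rate are all invented), not a model of SHRDLU or of consciousness:

```python
import random

# A SHRDLU-style automaton: the response table IS the behavior, so
# identical stimuli always produce identical responses.
def scripted_agent(stimulus, table):
    return table.get(stimulus, "no-op")

class StochasticAgent:
    """Toy 'volitional' agent: internal stochastic state means the same
    stimulus need not produce the same response on every occasion."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.habit = {"light": "approach"}  # default behavioral repertoire

    def respond(self, stimulus):
        # Occasionally deviate from habit and improvise instead
        if self.rng.random() < 0.2:
            return self.rng.choice(["retreat", "pause", "explore"])
        return self.habit.get(stimulus, "explore")
```

Caveat: a seeded pseudo-random generator is itself fully determined by its initial conditions, so this only mimics the behavioral signature being described; it takes no position on whether genuine volition reduces to such a mechanism.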

I'd say if you still feel that SRIP cars are remotely likely to be conscious we still have a major disagreement on the essence and definition of consciousness.

I'll get you the pony when you change your mind about that.:D

That's a worthy sentiment, but I wouldn't get my hopes up if I were you.
 
I agree that the specifics of human experiences are determined by the processing and filtering done by unconscious biological processes. They determine which stimuli induce particular qualia in the subject. But the fact still remains that the capacity for qualia is the sine qua non of consciousness. In this sense they are elementary. We can still view them as "things" in the same way that we consider photons and electrons as "things", even though we know that they are essentially just ripples in the vacuum field.

In order to scientifically understand consciousness, we must approach qualia as objects in and of themselves, in the same way that we searched for genes as verifiable objects. There are clearly underlying principles that govern their generation and variation. IMO, it is also glaringly obvious they are not simply abstractions but tangible products of the biophysical conditions of living brains. There must be a physical [as opposed to strictly functional] reason why stimulus X produces qualia Y rather than Z. I think the approach of viewing consciousness strictly in terms of computation is both myopic and a dead end.

Anything that can be broken down into simpler parts is not fundamental. I understand your point, though; we certainly have to explain qualia. But it may actually be that the explanation looks like explaining away this phenomenon. I doubt many people are going to be very happy with the explanation of what feelings are in the fullness of time because they will continue to emphasize that they "feel" them.

Also keep in mind that when we discuss qualia, most folks concentrate on introspection about perception and not on perception itself, which doesn't involve any redness of red but just red. And even that -- red -- is a bit questionable for various reasons.



Which means that are necessary physical conditions that must be met to produce what we call consciousness. If there are necessary conditions then there must be a way to "observe" qualia as external physical objects in some way. Even if we cannot externally observe consciousness directly there is almost certainly some means of identifying the sufficient physical conditions for consciousness. If I were forced to wager a guess I would say that it is an inherently biological phenomenon.


Again, I think I know what you mean but don't like the way you said it because I think it is open to misinterpretation. We will not observe qualia as external physical objects. We will do what we have always done in science -- begin with the right questions, observe correlations, and build an explanatory account that provides a causal explanation for the correlations. So, yes, we will observe the neural correlates of qualia but not the experiences themselves. In a crude way we've already done that. We even have the beginnings of an explanatory model. We certainly need to refine the imaging/physiology techniques and we need to decide on one explanation for what we see once we do see it better. I'm betting that the answer -- the explanatory model -- is already out there given the large number of "theories of consciousness"; we simply don't have enough data to support one view over another.
 
Well, I had to step out for a while on the conversation, but I have to say I like the direction it appears to be taking. As I tried pointing out, I'm fine with a definition of consciousness like Pixy's as long as there is a reason for that definition, and I'm equally fine with a definition unlike Pixy's, as long as there are reasons. So far, I've not seen anyone provide a good reason why Pixy's definition is no good, other than claims that 'it's absurd' or 'it's silly', with a huge lack of explanation as to why.

But FUWF is at least getting around to explaining why in more reasonable terms, and I'm learning a ton and a half, so for this round I'm going to mainly observe and consider. There are two points, however, that I want to bring about before we continue:

1) AkuManiMani mentioned that 'we are not our brains/bodies' because of the notion that our material composition has a turnover. Just so everyone is being 100% accurate and informed, our material composition doesn't actually have a 100% turnover: the brain cells we were born with are largely the same brain cells we have throughout our lives. It is one area in which our material composition does NOT experience turnover, and could be a clue as to the real nature of consciousness after all. We MAY be our brains, because our brains experience almost no turnover in life. Just consider that point.

2) In spite of providing a more thorough and rational explanation for what components constitute consciousness, FUWF points out that considering intelligent cars to be conscious is still silly. The implication is that even should an artificial construct meet ALL the points FUWF outlines, considering it to be conscious is still absurd. So, once more, I have to wonder: why? Or, more specifically, if we create (or discover) a complete definition/description of all the properties of consciousness, and we observe all these properties in a non-living, artificially constructed system, why should we still not accept the concept that it is, in fact, conscious? It's a tremendous apparent disconnect I've observed several times - no matter how much someone agrees to a definition of consciousness, the moment that definition allows for a conscious device or mechanism, it gets rejected (usually without a clear reason). That's what bugs me most about the whole discussion.

It just feels like the gist of most conversations on the topic has one side saying, 'People are conscious. Things aren't. I don't know exactly what consciousness is, or how to define it (other than the fact I have it), or what its properties are... but things obviously aren't conscious, no matter what, and people obviously are, no matter what...' It just seems dogmatic, stubborn, and irrational to act this way.

So let me ask: is it an issue of determinism? Is one side arguing that consciousness is a non-deterministic phenomenon? And if so, how can you make that argument, when we can look even into our own conscious thoughts, and witness (when we're honest with ourselves) chains of mental cause-and-effect that lead to every possible conscious decision? It just looks as if the general disagreement is going to end up in an argument over whether free will exists in any real sense (which it doesn't, as near as I can find)...

Well, back on lurk for a bit. I think Pixy and FUWF have a good handle on the situation. Just consider the lack of turnover in brain cells, AkuManiMani, and let me know how that modifies your thoughts on the issue? Thanks!

ETA: Just did some poking about, and it's even more interesting than I thought: the 'seven year replacement' figure for body material is an oft-parroted number, but isn't actually based on research. The range of replacement goes from a few months for certain blood cells to never for most of the brain. What is very intriguing in terms of consciousness research is that part of the brain is determined to be on average 2.9 years younger than the person hosting it, while another part appears to be about 10 years younger. Or, in other words, part of your brain finishes being developed around age 3 and never changes again, and another part finishes around age 10 and never changes again.

As for the idea that, via metabolism, the ATOMS in your cells change completely - that's complete bunk. Most metabolism in cells does not affect the majority of atomic structures in cells. Most metabolism is about hydrocarbons and protein synthesis, and has little effect on the actual atomic structures of the cells themselves. So it would seem we're really not totally replaced every seven years - in fact, we appear to be composed of both our conscious awareness, and a selection of brain cells of varying age that stay with us throughout our conscious lives.
 
