Are You Conscious?

Are you conscious?

  • Of course, what a stupid question: 89 votes (61.8%)
  • Maybe: 40 votes (27.8%)
  • No: 15 votes (10.4%)

  Total voters: 144
Anything that can be broken down into simpler parts is not fundamental. I understand your point, though; we certainly have to explain qualia. But it may actually be that the explanation looks like explaining away this phenomenon. I doubt many people are going to be very happy with the explanation of what feelings are in the fullness of time because they will continue to emphasize that they "feel" them.

Also keep in mind that when we discuss qualia, most folks concentrate on introspection about perception and not on perception itself, which doesn't involve any redness of red but just red. And even that -- red -- is a bit questionable for various reasons.

For the record, I do not think that explaining qualia will negate their reality or significance any more than the reductive explanation of water negates its dynamic properties. The misguided efforts of Dennett et al. notwithstanding, qualia are unequivocally real, and any attempt at a theory of consciousness that ignores them is folly.

Again, I think I know what you mean but don't like the way you said it because I think it is open to misinterpretation. We will not observe qualia as external physical objects.

I will say, though, that the experience of qualia -as- qualia is reference frame dependent. Clearly, if one were to observe the qualia of red from "the outside" it would not be red to the external observer. Unless there is some means of having direct mental contact between two subjects [something that is far out of our technological reach, atm] the experience of qualia as such is completely dependent upon one's subjective frame.

However, in order for an entity to be a subject they must have being; a subject is not simply an abstraction but a Ding an sich -- a thing in itself. Hence my earlier statement that the phenomenal is noumenal. A conscious subject may not be the ground of all reality [as per Idealism] but they are unquestionably the ground of their own experience. Qualia are not possessed by subjects, they are emanations of the subject. To find qualia is to find the "I" that thinks.

With that said, from your subjective frame, I am an object external to you. I am not only an external object, I am an object that interacts with the physical world that you share with me. Therefore I am an external physical object and, by extension, so are my qualia.


We will do what we have always done in science -- begin with the right questions, observe correlations, and build an explanatory account that provides a causal explanation for the correlations. So, yes, we will observe the neural correlates of qualia but not the experiences themselves. In a crude way we've already done that. We even have the beginnings of an explanatory model. We certainly need to refine the imaging/physiology techniques and we need to decide on one explanation for what we see once we do see it better. I'm betting that the answer -- the explanatory model -- is already out there given the large number of "theories of consciousness"; we simply don't have enough data to support one view over another.

Some time ago I read Ramachandran & Hirstein's Three Laws of Qualia. In the paper they reach conclusions similar to the ones I've reached concerning qualia and the role they play in cognition. Interestingly, they point out the neurological distinction between 'strong' and 'weak' qualia. Ramachandran & Hirstein categorize bottom-up [i.e. stimulus invoked] perceptions as examples of 'strong' qualia; they are vividly explicit and unambiguous. They define 'weak' qualia as the top-down experiences generated by conceptualization, which are much less vivid; the reasoning is that this difference allows us to distinguish between imagination and reality.

For the most part, the Three Laws paper was a very interesting read. I think approaching consciousness primarily from the biological/neurological perspective is the way to go. IMO, computationalism is fundamentally flawed and the only theorists that will make any real progress in advancing our understanding of consciousness will take the position of "It's the physics, stupid!". Only after we understand the physics of consciousness can we ever hope to implement it technologically.
 
IMO, computationalism is fundamentally flawed and the only theorists that will make any real progress in advancing our understanding of consciousness will take the position of "It's the physics, stupid!". Only after we understand the physics of consciousness can we ever hope to implement it technologically.

You say "The physics of consciousness" and they hear "mystical obfuscation". Fascinating that the mere possibility of something physically unknown elicits such hostility.
 
1) AkuManiMani mentioned that 'we are not our brains/bodies' because of the notion that our material composition has a turnover. Just so everyone is being 100% accurate and informed, our material composition doesn't actually have a 100% turnover: the brain cells we were born with are largely the same brain cells we have throughout our lives. It is one area in which our material composition does NOT experience turnover, and could be a clue as to the real nature of consciousness after all. We MAY be our brains, because our brains experience almost no turnover in life. Just consider that point.

[...]

Well, back on lurk for a bit. I think Pixy and FUWF have a good handle on the situation. Just consider the lack of turnover in brain cells, AkuManiMani, and let me know how that modifies your thoughts on the issue? Thanks!

I have to say that while your argument has some truth to it, it's not totally accurate. It's true that neural tissue has a relatively low rate of cellular turnover, but neural cells themselves still undergo metabolic turnover. In other words, individual brain cells have a much longer shelf life than most other cells but they still require continual maintenance and part replacement :)

2) In spite of providing a more thorough and rational explanation for what components constitute consciousness, FUWF points out that considering intelligent cars to be conscious is still silly. The implication is that even should an artificial construct meet ALL the points FUWF outlines, considering it to be conscious is still absurd. So, once more, I have to wonder, why? Or, more specifically, if we create (or discover) a complete definition/description of all the properties of consciousness, and we observe all these properties in a non-living, artificially constructed system, why should we still not accept the concept that they are, in fact, conscious? It's a tremendous apparent disconnect I've observed several times - no matter how much someone agrees to a definition of consciousness, the moment that definition allows for a conscious device or mechanism, it gets rejected (usually without a clear reason). That's what bugs me most about the whole discussion.

It just feels like the gist of most conversations on the topic has one side saying, 'People are conscious. Things aren't. I don't know exactly what consciousness is, or how to define it (other than the fact I have it), or what its properties are... but things obviously aren't conscious, no matter what, and people obviously are, no matter what...' It just seems dogmatic, stubborn, and irrational to act this way.

I can't claim to speak for FUWF, but personally, if an artificial system exhibited behaviors indicative of volition I would consider it strong evidence that it's conscious. If we have a scientific theory of consciousness that meets the criteria I mentioned earlier we would be able to prove whether or not an entity is conscious, rather than rely on personal intuition and inference.

My guess is that consciousness is inextricably linked to biophysical processes and that to produce viable synthetic consciousness one would have to create a living construct. IMO, if it's conscious it's a person.


So let me ask: is it an issue of determinism? Is one side arguing that consciousness is a non-deterministic phenomenon? And if so, how can you make that argument, when we can look even into our own conscious thoughts, and witness (when we're honest with ourselves) chains of mental cause-and-effect that lead to every possible conscious decision? It just looks as if the general disagreement is going to end up in an argument over whether free will exists in any real sense (which it doesn't, as near as I can find)...

I think that while there's undeniably a chain of mental rationale behind all of our choices, they are still non-deterministic in the sense that, in most cases, the conclusions reached by such processes are inherently open ended. Yes, there was a definite line of reasoning that led you to choose McDonald's over Burger King. But what makes it non-deterministic is that you are not algorithmically compelled to select one choice over another.

A person can follow a formal line of logic, reach the conclusion that is consistent with those given formal rules, and still choose against it. The conscious thought process has the inherent ability to override or modify preconditioned dispositions, and may even spontaneously formulate novel patterns of behavior.
 
You say "The physics of consciousness" and they hear "mystical obfuscation". Fascinating that the mere possibility of something physically unknown elicits such hostility.

That's what distinguishes ideologues from actual scientists, skeptics, and truth seekers. Ideologues have the attitude of "I know enough and I am comfortable in my certainty". To raise doubts about the certainty and sufficiency of their knowledge is to disturb their psychological comfort zone.
 
I think that while there's undeniably a chain of mental rationale behind all of our choices, they are still non-deterministic in the sense that, in most cases, the conclusions reached by such processes are inherently open ended. Yes, there was a definite line of reasoning that led you to choose McDonald's over Burger King. But what makes it non-deterministic is that you are not algorithmically compelled to select one choice over another.

A person can follow a formal line of logic, reach the conclusion that is consistent with those given formal rules, and still choose against it. The conscious thought process has the inherent ability to override or modify preconditioned dispositions, and may even spontaneously formulate novel patterns of behavior.
Er, yes.

Unfortunately you've become either an ontological dualist, or an idealist.

Pixy et al. will point out "that's Magic".
 
I have to say that while your argument has some truth to it, it's not totally accurate. It's true that neural tissue has a relatively low rate of cellular turnover, but neural cells themselves still undergo metabolic turnover. In other words, individual brain cells have a much longer shelf life than most other cells but they still require continual maintenance and part replacement :)

However, cellular part replacement is very, very low for most cells. Most of the metabolic process involves energy production and usage and protein synthesis; the fundamental molecular structures are rarely, if ever, replaced in a cell. Granted, research is somewhat limited in this area; but the conventional consensus is that a long-lived cell has the same essential atoms in it that it had when it was created, with only a few rare repairs and replacements.

None of which matters, though: unless you're wanting to claim that consciousness exists on a molecular (or even atomic) level, the fact remains that we largely consist of the same brain cells that we had at ages 2, 9, and 10, and that consciousness could well be, in part, a physical phenomenon as a result. Changing a tire doesn't mean that a car isn't an integral part of driving, for example.

I can't claim to speak for FUWF, but personally, if an artificial system exhibited behaviors indicative of volition I would consider it strong evidence that it is conscious. If we have a scientific theory of consciousness that meets the criteria I mentioned earlier we would be able to prove whether or not an entity is conscious, rather than rely on personal intuition and inference.

My guess is that consciousness is inextricably linked to biophysical processes and that to produce viable synthetic consciousness one would have to create a living construct. IMO, if it's conscious it's a person.

Then there's something about personhood and life that you feel is inextricably part of consciousness. So now we get into defining 'person' - and why animals are not persons, and what the exact definition of 'life' is.

I tend to remove concepts of personhood and life from consciousness simply because both concepts are, again, ill-defined and nebulous, and can be applied or misapplied to many people or things. Many definitions of life, for example, would claim that I am no longer alive, because I can no longer reproduce. Some definitions of personhood would claim I'm not a person because I don't have a deep personal relationship with Jesus. So these concepts are best stripped away, unless we further take the time to carefully define them as well - and that seems a slippery slope that leads to 'What does 'is' mean?'

I think that while there's undeniably a chain of mental rationale behind all of our choices, they are still non-deterministic in the sense that, in most cases, the conclusions reached by such processes are inherently open ended. Yes, there was a definite line of reasoning that led you to choose McDonald's over Burger King. But what makes it non-deterministic is that you are not algorithmically compelled to select one choice over another.

Evidence? What seems apparent to me, and obviously isn't to you, is that the algorithm involved is immensely complex and recursive to some degree, but is still compelling and absolute. It's not open-ended at all, but there is a degree of feedback involved - most likely from the multiple consciousnesses within the brain - which gives it at least the appearance (useful illusion) of indeterminacy and free will.

But I think your claim is strongly lacking in evidence or rationale, in this case.

A person can follow a formal line of logic, reach the conclusion that is consistent with those given formal rules, and still choose against it. The conscious thought process has the inherent ability to override or modify preconditioned dispositions, and may even spontaneously formulate novel patterns of behavior.

Not purely novel, though... and even the process of choosing against a rational line of reasoning is based upon another line of reasoning. In other words, just because the deterministic algorithm is complex and recursive doesn't mean it's not a deterministic algorithm.
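The point being argued here -- that recursion and feedback do not undermine determinism -- can be sketched in a few lines of Python. The `decide` function below is purely illustrative (a toy of my own, not anyone's model of cognition): it is recursive, it feeds its own partial results back in as new input, and it is nevertheless fully deterministic.

```python
def decide(state, depth=0):
    """A toy 'decision' procedure: recursive, with feedback,
    yet fully deterministic."""
    if depth == 3:
        # the final 'choice' is completely fixed by the accumulated state
        return "McDonald's" if sum(state) % 2 == 0 else "Burger King"
    feedback = sum(state) * 31 % 7  # a partial result re-enters the process
    return decide(state + [feedback], depth + 1)

# identical inputs always yield the identical 'choice'
assert decide([1, 2]) == decide([1, 2])
```

However complex or deeply nested the feedback becomes, the same input state always produces the same output, which is the sense of "deterministic algorithm" intended above.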
 
Well, I had to step out for a while on the conversation, but I have to say I like the direction it appears to be taking. As I tried pointing out, I'm fine with a definition of consciousness like Pixy's as long as there is a reason for that definition, and I'm equally fine with a definition unlike Pixy's, as long as there are reasons. So far, I've not seen anyone provide a good reason why Pixy's definition is no good, other than claims that 'it's absurd' or 'it's silly', with a huge lack of explanation as to why.

At the risk of being repetitive, I'm going to try to illustrate the subtle semantic requirements and pitfalls of defining consciousness (or anything else for that matter) for philosophical rather than for colloquial discussion.

Here's Pixi's definition again:

"Consciousness is SRIP"

Colloquially, though I might argue it's still sloppy, I'm fine with it. Philosophically, it is untenable due to at least 2 problems I will explain.

Problems:

1. Too broad >>> Qualification Error. There are many forms of SRIPs including some we probably don't even know about yet (perhaps including the specific form that may truly give rise to consciousness). With a definition like this you have to logically conclude that consciousness can and must encompass all of them. There is no evidence to support this so it cannot be a definition. It is still mere conjecture.

2. Equivalence error. The word "is" sets a VERY high bar of equivalence. SRIPs must be both necessary and sufficient for consciousness. Pixi makes great arguments that they are necessary. But he has never made one that they are sufficient. In fact, he essentially admits that in several posts. Furthermore, many of my explanations that Pixi appears to agree with for the nature of consciousness propose necessary features that may employ but may not require SRIPs or for which SRIPs are not the true essence (since SRIPs encompass the most basic forms of recursion they can be too trivial to claim as essence for equivalence). All I have to do is show there is some computational requirement for consciousness that is not a form of SRIP, OR does not have to be programmed using an SRIP, e.g., it could turn out our brain uses nothing but various forms of SRIPs but that we (or evolution on some other planet) could create an equivalent AI consciousness that is not completely dependent on SRIPs.

The two mistakes above lead Pixi and you to make judgmental mistakes. Mistake 1 forces you both to conclude that smart cars are conscious. Mistake 2 enables you to deny consciousness to things that may be conscious.

Pixi got on my case about the critical nature of defining one's terms in philosophy. But I couldn't have been more aware of this - I am extremely careful, thorough, and precise although sometimes I get lazy or miss something. But here, Pixi himself gets sloppy and allows himself to make some subtle mistakes that a careful philosopher shouldn't. He confuses colloquial definitions with the stricter standards required for philosophical definitions.

He thinks he refuted my argument for Mistake 1 using this colloquial definition.

"Trees are plants."

That was careless. Philosophically, as a state of being, this is not a true statement. Trees are not plants. Trees are a form of plant -- just as I can say, as I have before, that Consciousness is a form of SRIP (which assumes, as I believe, that certain forms of SRIPs will be found to be both necessary and sufficient).

Ironically, in thinking he refutes my argument by commuting the colloquial definition, i.e., "Plants are trees" is untrue, he really refutes the basis of his own deduction that smart cars are conscious, i.e.,

"Consciousness is SRIP" (even if true) does not mean "SRIP (such as in cars) is consciousness".
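Incidentally, the asymmetry of "is a form of" has an exact analogue in the subclass relation of programming languages; a minimal Python sketch (the `Plant`/`Tree` classes here are hypothetical stand-ins, of course) makes the non-commutativity concrete:

```python
class Plant:
    pass

class Tree(Plant):  # a Tree is a *form of* Plant
    pass

oak, fern = Tree(), Plant()

assert isinstance(oak, Plant)      # "trees are plants" holds in one direction
assert not isinstance(fern, Tree)  # but the relation does not commute

# By analogy: even if consciousness is a form of SRIP, it would not
# follow that an arbitrary SRIP (a smart car, say) is conscious.
```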

I can't imagine that I can make my case against Pixi's definitional errors any plainer or simpler to understand than this but if you still insist I haven't met my burden of proof, I'll try to address any specific concerns you have. In a philosophy forum, it is of the utmost importance we know how to properly, concisely, and unambiguously define our terms and avoid the subtle mistakes that lead one to ruin, and have ruined the arguments of the most brilliant philosophers. It is very easy to make many of these mistakes. Going back over my own stuff I see several I missed - though, thankfully, none of them contravenes my arguments. However, maybe it will require one of you to find the one I made that completely robs an argument of mine of validity.
 
At the risk of being repetitive, I'm going to try to illustrate the subtle semantic requirements and pitfalls of defining consciousness - or anything else for that matter.

Here's Pixi's definition again:

"Consciousness is SRIP"

Problems:

1. Too broad >>> Qualification Error. There are many forms of SRIPs including some we probably don't even know about yet (perhaps including the specific form that may truly give rise to consciousness). With a definition like this you have to logically conclude that consciousness can and must encompass all of them. There is no evidence to support this so it cannot be a definition. It is still mere conjecture.

Unfortunately, there is insufficient evidence that ANY definition for consciousness can or cannot be extended to any and all other possible objects or beings. Solipsism still looms in the shadows, regardless of what definition you choose, on one end; panpsychism on the other.

I'm fine with wanting to narrow it down, though - as long as qualifiers are clearly expressed. If SRIP is too broad (and I prefer sensory, rather than information, processing, for example), then give the factors that narrow it into usefulness. But a generic claim of 'too broad' doesn't actually help, per se. Nor does simply claiming that 'it includes as conscious things we have no evidence for', since, technically, we have no (direct and certain) evidence of ANY consciousness save our own. If we choose to define others' consciousness in terms of induction (i.e. through structure or behavior), then we must first set the definition, then observe what meets that definition.

Yes, it seems like it's including the conclusions in the definition, but with consciousness, I can't figure out any other way to do it. Then again, my field is computer science, so I'm pretty handicapped in this area.

2. Equivalence error. The word "is" sets a VERY high bar of equivalence. SRIPs must be both necessary and sufficient for consciousness. Pixi makes great arguments that they are necessary. But he has never made one that they are sufficient. In fact, he essentially admits that in several posts. Furthermore, many of my explanations that Pixi appears to agree with for the nature of consciousness propose necessary features that may employ but do not require SRIPs or for which SRIPs are not the true essence (since SRIPs encompass the most basic forms of recursion they can be too trivial to claim as essence for equivalence). All I have to do is show there is some computational requirement for consciousness that is not a form of SRIP, OR does not have to be programmed using an SRIP, e.g., it could turn out our brain uses nothing but various forms of SRIPs but that we (or evolution on some other planet) could create an equivalent AI consciousness that is not completely dependent on SRIPs.

I'm not sure how you separate computation and information processing, but I get your point; however, that merely shows the definition is falsifiable, which is a desirable trait, as I understand it.

The two mistakes above lead Pixi and you to make judgmental mistakes. Mistake 1 forces you both to conclude that smart cars are conscious. Mistake 2 enables you to deny consciousness to things that may be conscious.

Yet both are based on the fact we do not have ANY way to determine what is or is not conscious through observation, without making some kind of assumptive reasoning.

Interestingly enough, what I quote above seems self-contradictory in a way. You say that mistake 1 forces us to conclude that things are conscious that may not be, and mistake 2 allows us to deny that things are conscious that may be; yet in the reverse, you are making the claim that something cannot be conscious that may be (the smart car)! So, for the fifth or sixth time, let me ask you: why may a smart car not be conscious? I have no problem with the idea of lesser consciousnesses. I'm fine with the concept that my laptop has more consciousness than, say, insects or lizards or most Republicans. It seems perfectly reasonable and, in fact, self-evident to me. You, however, have some objection, and from other facets of your posts, I find it hard to believe those objections might be intuitional or emotional in nature.

So all I'm asking for is a simple delineating factor. Why should consciousness be limited to a biological mechanism alone?

Pixi got on my case about the critical nature of defining one's terms in philosophy. But I couldn't have been more aware of this - I am extremely careful, thorough, and precise although sometimes I get lazy or miss something. But here, Pixi himself gets sloppy and allows himself to make some subtle mistakes that a careful philosopher shouldn't. He confuses colloquial definitions with the stricter standards required for philosophical definitions.

He thinks he refuted my argument for Mistake 1 using this colloquial definition.

Trees are plants.

That was careless. Philosophically, as a state of being, this is not a true statement. Trees are not plants. Trees are a form of plant -- just as I can say, as I have before, that Consciousness is a form of SRIP (which assumes, as I believe, that certain forms of SRIPs will be found to be both necessary and sufficient).

Then, as with the tree-plant issue, it is imperative that you provide the necessary distinction that separates the class of SRIP we call Consciousness from the remaining classes. Something as simple as adding 'able to express its consciousness intelligibly' might work - though, of course, that excludes humans with certain types of brain or body damage or disease.

My problem here is that I have a very, very hard time thinking of any feature of distinction between forms of SRIP that wouldn't necessarily exclude a portion of the human population from consciousness, except, of course, by saying 'in humans'; but this distinction seems entirely arbitrary, especially since humans have no particular trait or ability that is unique to humans except for human-ness. (:D)

Still, if you can think of any example of a non-conscious SRIP (given, of course, rigid definitions of SRIP) that is provably non-conscious-- but, again, since we cannot even prove consciousness in others than ourselves, how on earth can we prove non-consciousness, except by defining the term first, and observing what matches our definitions second?

If this is the HPC, I think I'm beginning to see the real problem.

Ironically, in thinking he refutes my argument by commuting the colloquial definition, i.e., "Plants are trees" is untrue, he really refutes the basis of his own deduction that smart cars are conscious, i.e.,

Consciousness is SRIP (even if true) does not mean SRIP (such as in cars) is consciousness.

I can't imagine that I can make my case against Pixi's definitional errors any plainer or simpler to understand than this but if you still insist I haven't met my burden of proof, I'll try to address any specific concerns you have. In a philosophy forum, it is of the utmost importance we know how to properly, concisely, and unambiguously define our terms and avoid the subtle mistakes that lead one to ruin. It is very easy to make many of these mistakes. Going back over my own stuff I see several I missed - though, thankfully, none of them contravenes my arguments. However, maybe it will require one of you to find the one I made that completely robs an argument of mine of validity.

I think your case is quite clear, though I think it suffers a few problems which I pointed out above. However, I'm not a big philosophy buff, nor have I ever taken education on debating or argumentation.

I see your points, clearly, and I agree at least tentatively on the consciousness=SRIP issue you take up. So at this point, let me re-ask a couple of quick questions:

1) What about smart cars (phones, laptops, whatever) prevents them from being conscious?

2) What delineating factor should be used to separate the class of SRIP we call 'consciousness' from any other form of SRIP, and how can we ever know that this is the differentiating factor?

Of course, for all I know, these could be the two key questions no one in the universe can answer... :D
 
That's not really the contentious issue. Well...there are certainly some people out there who believe in entirely disembodied consciousnesses, but this isn't the serious opposition to the materialist position. Most people accept that you're going to need something like a neural network. The question is whether anything else is also required. The suggestion is that the brain is not merely a neural network but a "quantum computer", or has some other key property in addition to the neural network. The NN just explains the origin of the complexity of qualia, not their raw existence.

There is no evidence as of yet for the analog of a 'quantum computer' in the neural network. Penrose and others are very wrong.

Now when evidence shows otherwise, that would be great. But the biochemical phase shift of neural membranes and channels opening and closing is not going to generate QM signals the way that Penrose and others have suggested.

So possible but highly unlikely at this point.

And sure, something else might be needed, but as they say in the neighboring state of Missouri: "Show me".
 
Dr. Z, just want to mention you ask some great questions. I can't answer them all because I'd argue nobody knows all the answers yet. But here's a stab at it.

Z said:
Unfortunately, there is insufficient evidence that ANY definition for consciousness can or cannot be extended to any and all other possible objects or beings. Solipsism still looms in the shadows, regardless of what definition you choose, on one end; panpsychism on the other.

OK, first off, understanding consciousness was not the point of my last post. I was merely trying to address the nature of proper philosophical definition.

That aside, if you go over my 2 most recent posts to Pixi, and if you agree with many of the points I make as he seems to, then I really don't see how you come to this conclusion. The properties I have attached to the essence of consciousness set a pretty high bar - and that's even before you invoke Aristotle as I discussed before (i.e., "there is a point where a difference in degree becomes a difference in kind"). The difference in degree is complexity. I could be right about all the properties I illustrated for consciousness. It could turn out that they are all necessary. But they may only be sufficient if they pass a critical threshold of complexity which I have incorporated in my arguments as dealing with degrees of freedom. A particle in a small box doesn't have much to explore or be conscious of, so why would it be conscious? In fact, if I could somehow keep you alive in a small box indefinitely with no sensory interactions of any kind there is scientific evidence to suggest you will not be able to remain conscious unless you make sufficient on-going effort to sustain some type of internal dialogue with yourself.

As for solipsism, it cannot be disproven. You can argue that it's highly unlikely because it forces you to draw some ridiculous conclusions, e.g., that somehow, in your own mind, you "created" people like Einstein and relativity and Shakespeare but you lack the ability to derive E=mc^2 or write poetry de novo with your own consciousness. Silly, but not impossible. In philosophy, such silliness is often your only basis upon which to form a conclusion. There can be no formal (dis)proof. We invoke useful principles like Occam's Razor (that also cannot be formally proven philosophically) that would tend to invalidate solipsism. Otherwise, you have to infer that there are additional layers of complexity in solipsism, e.g., your mind is compartmentalized such that the conscious awareness you experience as "you" is somehow divorced from a separate part of your consciousness that creates everything you experience as the illusion of external reality. Materialism, among other monisms, avoids this pitfall though it does invite others. Those of us who choose materialism do so because we conclude it has the greatest explanatory power (does the most useful work) with the fewest number of complexifying (if that's a word) pitfalls.

I'm fine with wanting to narrow it down, though - as long as qualifiers are clearly expressed. If SRIP is too broad (and I prefer sensory, rather than information, processing, for example), then give the factors that narrow it into usefulness. But a generic claim of 'too broad' doesn't actually help, per se. Nor does simply claiming that 'it includes as conscious things we have no evidence for', since, technically, we have no (direct and certain) evidence of ANY consciousness save our own. If we choose to define others' consciousness in terms of induction (i.e. through structure or behavior), then we must first set the definition, then observe what meets that definition.

You're doing a great job of explaining that consciousness is an unsolved problem with many potential loose ends, both scientifically and metaphysically. I already knew that, though. That's also why it's so hard to define. We can't even all agree that the observation of consciousness in ourselves, i.e., qualia, exists. Dennett and Pixi both deny it.

Yes, it seems like it's including the conclusions in the definition, but with consciousness, I can't figure out any other way to do it. Then again, my field is computer science, so I'm pretty handicapped in this area.

I have no problem including conclusions in the definition of consciousness IF the conclusions are proven to be true. Our modern operational definitions for light, physical motion, and evolution are all conclusions of science. All operational definitions are conclusions but not all conclusions are operational definitions.

I'm not sure how you separate computation and information processing

Good, because there is no separation. Computation is information processing. They are equivalent, identical and synonymous both colloquially and philosophically in the strictest sense.
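One way to make that equivalence concrete (my own toy illustration, not anyone's formal proof): any finite computation can be presented either as a rule applied to information or as that same information stored in a table, and the two are indistinguishable from the outside.

```python
# Hedged illustration: the "computation" below and the stored "information"
# define exactly the same input -> output mapping.

def xor_rule(a: int, b: int) -> int:
    """XOR computed as an arithmetic rule."""
    return (a + b) % 2

# The identical mapping, held purely as information (a lookup table).
xor_table = {(a, b): (a + b) % 2 for a in (0, 1) for b in (0, 1)}

# From the outside, rule and table are behaviorally one and the same.
for a in (0, 1):
    for b in (0, 1):
        assert xor_rule(a, b) == xor_table[(a, b)]
```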

The two mistakes above lead Pixi and you to make judgmental mistakes. Mistake 1 forces you both to conclude that smart cars are conscious. Mistake 2 enables you to deny consciousness to things that may be conscious.

Yet both are based on the fact we do not have ANY way to determine what is or is not conscious through observation, without making some kind of assumptive reasoning.

Not sure I'm following you here... The problem of figuring out what is conscious or isn't goes hand-in-hand with the problem of (i.e., nature of) consciousness itself.

Interestingly enough, what I quote above seems self-contradictory in a way. You say that mistake 1 forces us to conclude that things are conscious that may not be, and mistake 2 allows us to deny that things are conscious that may be; yet in the reverse, you are making the claim that something cannot be conscious that may be (the smart car)! So, for the fifth or sixth time, let me ask you: why may a smart car not be conscious?

I think you're having a little philosophical/logical brainfart here. My argument in the post you are quoting is not self-contradictory. I'm saying your and Pixi's logic forces you to conclude smart cars are conscious. I laid out no evidence or philosophical definition in that post to say that smart cars are not conscious. I did do so in previous posts with at least 4-8 (depending on how you classify them) specific claims about consciousness, any one of which, if true, contradicts the possibility of smart-car consciousness. You'll have to go back and read them again.

I have no problem with the idea of lesser consciousnesses. I'm fine with the concept that my laptop has more consciousness than, say, insects or lizards or most Republicans. It seems perfectly reasonable and, in fact, self-evident to me. You, however, have some objection, and from other facets of your posts, I find it hard to believe those objections might be intuitional or emotional in nature.

I also believe in the idea of greater/lesser consciousness. Though at the present time my ordering, if I had to guess that they have any consciousness at all, would be (using your examples) lizard >> insect >>>>>>>> Republicans > computer ;-) I don't believe any of them are conscious (well, we'll set Republicans aside as a special phenomenon for now - which I used to be one of), though lizards may be. I'd like to think my pet turtles were - they were so cute. I suspect, however, that most of what we call consciousness, at least in biological consciousness on Earth, requires a cortex. Reptiles have the beginnings of one. The octopus and its cephalopod relatives may be an exception too.

I do believe there is a continuum of consciousness. I've even observed different forms in myself. But I also believe there are minimum thresholds too. Both probably involve various forms of complexity and computational functionality we have yet to discover. Beyond that, there is little I can say. I can speculate a lot if you like. But a lot of it would be BS. Probably most of it.

So all I'm asking for is a simple delineating factor. Why should consciousness be limited to a biological mechanism alone?

Did I say that? I guess I shot myself in the foot then by going into AI driven by my belief that humans can create conscious AI. ;-) I also happen to be a transhumanist.

Then, as with the tree-plant issue, it is imperative that you provide the necessary distinction that separates the class of SRIP we call Consciousness from the remaining classes.

Yes, it is necessary. But without the science to back it up I'd be speculating, not defining. In my previous points I provided some parameters and clues that can help lead us in the right direction. If I could do more than that I'd get the Nobel prize in a heartbeat.

Something as simple as adding 'able to express its consciousness intelligibly' might work - though, of course, that excludes humans with certain types of brain or body damage or disease.
Keep struggling - or don't. You'll find you really can't do much more useful definitional work without backing it up with scientific discoveries we haven't made yet. You're much more likely to fool yourself first with something that reasons like the Ontological Argument for God.

My problem here is that I have a very, very hard time thinking of any feature of distinction between forms of SRIP that wouldn't necessarily exclude a portion of the human population from consciousness, except, of course, by saying 'in humans'; but this distinction seems entirely arbitrary, especially since humans have no particular trait or ability that is unique to humans except for human-ness. (:D)

Well, there are many humans that aren't capable of consciousness, assuming that is not part of your definition of being human to some extent (it is for me). What I can say is that there are many damaged human brains out there that are incapable of any form of consciousness we can detect. To speculate that they might still be conscious anyway requires definitions we haven't agreed on yet and the science to observe it. I'd say non-REM sleep is at least one other form of non-conscious human SRIP (among other processes). Dreams are arguably another form of consciousness that raises interesting questions about why our waking consciousness is at least somewhat disconnected from it.

...how on earth can we prove non-consciousness, except by defining the term first, and observing what matches our definitions second?

Science does not require definitions for falsification, only hypotheses. In fact, falsifying a definition in the philosophical sense is incoherent and tautological. Falsifying a definition (in the strict philosophical sense) - which is essentially the core premise of a theorem or a "proven theorem" (a big tautological can of worms that will take us all the way to Gödel and the philosophy of science itself) - is a REALLY BIG DEAL in science that arguably can't happen (because what you had then was not really a definition in the first place). It requires a paradigm shift of huge proportions and an admission that science was not only incomplete but got something dead wrong. While many people think they've defined consciousness, I'd argue that most if not all of those definitions are just hypotheses. The safest and most complete operational definition you can make about consciousness, one that I believe has virtually no chance of being refuted, is that consciousness is a form of computation. That's not very heavy, is it? No, it isn't. But I'm afraid that's tough ◊◊◊◊. I don't know what I can add to it that doesn't significantly risk falsification.

I think your case is quite clear, though I think it suffers a few problems which I pointed out above. However, I'm not a big philosophy buff, nor have I ever taken education on debating or argumentation.

I see your points, clearly, and I agree at least tentatively on the consciousness=SRIP issue you take up. So at this point, let me re-ask a couple of quick questions:

1) What about smart cars (phones, laptops, whatever) prevents them from being conscious?

2) What delineating factor should be used to separate the class of SRIP we call 'consciousness' from any other form of SRIP, and how can we ever know that this is the differentiating factor?
See above and my other recent posts to Pixi with the 4-8 principles I believe are required for consciousness that smart cars, phones, and laptops don't meet. I also addressed your issues in 2) in posts I made before you entered the debate. I don't have time to go reference them, but I encourage you to go back.

Of course, for all I know, these could be the two key questions no one in the universe can answer... :D

I'm confident we'll answer them in this century and hopefully before I croak. I've lived my whole life to see this answer and I'll be damned if I don't live long enough to find out.
 
I can't imagine that I can make my case against Pixi's definitional errors any plainer or simpler to understand than this ... snip ...

But that's just it -- they aren't really errors.

We have a very good idea of the kinds of things we want to label as "being conscious," chief among these being ourselves and other humans. We also have a pretty good idea of things we would like to label as "conscious," such as the mentally handicapped, our pets, smart animals, and the like. Finally, we have a decent idea of things we would probably label as being "conscious," such as small mammals, octopi, birds, whatever.

It is clear to anyone with an education in computer science that all of these creatures display behavior that can only be attributed to SRIP. It is not clear, however, what exactly is needed beyond SRIP. Because as soon as we define some other requisite, we find examples of things we want to call "conscious" that don't have that requisite. At this point in the game, we just aren't far enough to be able to make a clean partition between things based on any other characteristic besides self-reference.

In other words, we don't know what kinds of other qualitative differences there are between types of information processing that might be relevant. All we can be sure of, right now, is that self-reference is one.

So the view of Pixy and myself is: hey, let's just call any SRIP "conscious" and then differentiate between different types of "consciousness" based on their features. And I really don't understand why you refuse to accept this -- it is just a matter of definition. Pixy and I define consciousness as SRIP, period. We honestly don't mind calling a thermostat conscious, or a toaster conscious, because to us it just means they are SRIP.
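To make "a thermostat is SRIP" concrete, here is a hedged toy sketch (my own illustration, not real thermostat firmware): the device's next action depends on information about its own current state, which is the minimal kind of self-reference meant here.

```python
class Thermostat:
    """Toy self-referential information processor (SRIP): the control
    decision depends on information about the device's own state."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False  # part of the device's own state

    def step(self, measured_temp: float) -> bool:
        # The rule references the device's own current state (self.heater_on),
        # not just the external input -- a minimal self-reference, with a
        # small hysteresis band to avoid rapid cycling.
        if self.heater_on and measured_temp >= self.setpoint + 0.5:
            self.heater_on = False
        elif not self.heater_on and measured_temp <= self.setpoint - 0.5:
            self.heater_on = True
        return self.heater_on

t = Thermostat(20.0)
assert t.step(18.0) is True    # cold, so it switches itself on
assert t.step(21.0) is False   # warm, so it switches itself off
```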

Furthermore we don't mind saying a toaster experiences because -- to us -- that is just the mathematical implication of being SRIP. It doesn't mean we think the toaster experiences like a human.

And that is really the problem here. Many people -- including yourself, it seems -- want to equate the term "consciousness" with "human-like consciousness" and "experience" with "human-like experience." Why?
 
We can't even all agree that the observation of consciousness in ourselves, i.e., qualia, exists. Dennett and Pixi both deny it.

I think you are being a little bull-headed here, and in my opinion you will make a better name for yourself in these forums if you try to see where others are coming from rather than just butt heads.

Case in point -- both Dennett and Pixi (and myself) do not deny that something which other people call qualia exists. We are all aware of what people call "redness," etc.

What we deny is that such a thing is somehow fundamentally different from everything else, such that it requires a novel explanation that the computational model cannot produce.

Because that is really what HPC proponents are after when they try to introduce the term "qualia."

So to say "we deny it" when speaking of "qualia" is misleading. The reality is quite the contrary -- I know for a fact Pixy is aware of "redness." We just think, for example, that any "redness" is always paired with activity in the visual cortex rather than existing all by itself in the void of the mind, as HPC proponents would have you believe.
 
For those of you who are following my arguments with Dr. Z and also my arguments on digital physics in this and other threads, let me add a belief of mine which I know is nothing more than a scientist's intuition and may garner heaps of ridicule:

I believe that when science discovers the nature and essence of Strange Loops = a form of SRIP that I believe governs consciousness, that answer will be leverageable to understand the form of strange loops that I believe gives rise to the multiverse itself and all matter and energy therein. The possibility also exists that physicists will discover the essential strange-loop nature of matter/energy/etc. first and that AI and neuroscience will leverage it to explain consciousness.
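In Hofstadter's sense, a strange loop is a hierarchy of levels that, followed upward, lands back at its own bottom. As an admittedly crude computational sketch (my own toy, not a theory of anything), the simplest flavor of this is a system whose state contains a model of that very state:

```python
class SelfModeler:
    """Crude strange-loop sketch: the system's state includes a model of
    the state that contains the model, so every update is partly about
    itself -- a level of description folding back on its own ground."""

    def __init__(self):
        self.state = {"ticks": 0, "model": None}

    def tick(self):
        self.state["ticks"] += 1
        # Observe the state and fold the observation back into the state:
        # the model lives inside the very thing it models.
        self.state["model"] = {"ticks": self.state["ticks"]}

s = SelfModeler()
s.tick()
s.tick()
# The loop: state -> model of state -> part of state -> ...
assert s.state["model"]["ticks"] == s.state["ticks"] == 2
```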
 
But that's just it -- they aren't really errors.

We have a very good idea of the kinds of things we want to label as "being conscious," chief among these being ourselves and other humans. We also have a pretty good idea of things we would like to label as "conscious," such as the mentally handicapped, our pets, smart animals, and the like. Finally, we have a decent idea of things we would probably label as being "conscious," such as small mammals, octopi, birds, whatever.

It is clear to anyone with an education in computer science that all of these creatures display behavior that can only be attributed to SRIP. It is not clear, however, what exactly is needed beyond SRIP. Because as soon as we define some other requisite, we find examples of things we want to call "conscious" that don't have that requisite. At this point in the game, we just aren't far enough to be able to make a clean partition between things based on any other characteristic besides self-reference.

In other words, we don't know what kinds of other qualitative differences there are between types of information processing that might be relevant. All we can be sure of, right now, is that self-reference is one.

So the view of Pixy and myself is: hey, let's just call any SRIP "conscious" and then differentiate between different types of "consciousness" based on their features. And I really don't understand why you refuse to accept this -- it is just a matter of definition. Pixy and I define consciousness as SRIP, period. We honestly don't mind calling a thermostat conscious, or a toaster conscious, because to us it just means they are SRIP.

Furthermore we don't mind saying a toaster experiences because -- to us -- that is just the mathematical implication of being SRIP. It doesn't mean we think the toaster experiences like a human.

And that is really the problem here. Many people -- including yourself, it seems -- want to equate the term "consciousness" with "human-like consciousness" and "experience" with "human-like experience." Why?

Your "explanations" and the quote they address suggest to me you didn't even read the post it came from or make any attempt to understand it. You didn't really address any of my arguments - you just snipped them out. None of them require or attempt to equate consciousness with human consciousness. Our consciousness is the only consciousness we really know. You can't establish others by fiat. What you can do is dissect human consciousness and ask yourself, and us: is this really essential for consciousness? You also have to be able to tie an observable in consciousness itself to an operational definition that explains it in order to have a complete definition.
 
Would this work just as well?

I believe that when science discovers the nature and essence of Strange Loops = a form of SRIP that I believe ~~govern~~ is consciousness, I believe that answer will be leverageable to understand the form of strange loops that I believe ~~give rise to~~ is the multiverse itself and all matter and energy therein. The possibility also exists that physicists will discover the essential strange-loop nature of matter/energy/etc. first and that AI and neuroscience will leverage it to explain consciousness.
 
rocketdodger said:
We can't even all agree that the observation of consciousness in ourselves, i.e., qualia, exists. Dennett and Pixi both deny it.

I think you are being a little bull-headed here, and in my opinion you will make a better name for yourself in these forums if you try to see where others are coming from rather than just butt heads.

Case in point -- both Dennet and Pixi (and myself) do not deny that something which other people call qualia exists. We are all aware of what people call "redness," etc.

What we deny is that such a thing is somehow fundamentally different from everything else, such that it requires a novel explanation that the computational model cannot produce.

Because that is really what HPC proponents are after when they try to introduce the term "qualia."

So to say "we deny it" when speaking of "qualia" is misinforming. The reality is quite the contrary -- I know for a fact Pixy is aware of "redness." We just think, for example, that any "redness" is always paired with activity in the visual cortex rather than existing all by itself in the void of the mind as HPC proponents would have you believe.

I'm not sure I know who the bull-headed one is here RD. If I'm guilty, so are you.

The part I bolded suggests you have an ideological agenda of some kind propelling your arguments. I'm not even sure I know what HPC stands for - did you mean the Human Potential Club? If so, I'm not a member. I'm looking for nothing but the truth. I have no agenda to support or undermine anybody's value judgments.

It is your explanations that are incoherent. If you and Pixi both believe an observable thing/process called qualia exists, great - give us a definition of the observable itself, one you and Pixi can agree on, without invoking an operational explanatory definition alone. If you somehow insist that there can be nothing but an operational definition, then you'll have to back that up. Can you give me an example of any other phenomenon known to science that can only be defined by explanatory operation?

Your putative definition of qualia as "redness" is almost identical to a definition I gave to Pixi early on, which he said was incoherent. So it will be interesting to watch you guys duke that out.

As far as Dennett is concerned, he denies flatly that qualia exist. How do I know that? He told me himself. And I've read his books too. A wonderful man, soft-spoken, a pleasure to spend an evening with... The problem with Dennett is that he can be sloppy - not New Testament-sloppy, but similarly in that you can read and interpret more than one thing into what he says. He wants to have his cake and eat it too. Where an observable called qualia is indirectly implicit, he makes no effort to remove it from his thesis at the risk of damaging his premises. You can read a lot of Dennett and believe he thinks the observable of qualia exists - until he tells you it doesn't.

Added in Edit:

rocketdodger said:
What we deny is that such a thing is somehow fundamentally different from everything else, such that it requires a novel explanation that the computational model cannot produce.

The part that I underlined is too devoid of specificity to do any explanatory work. On its own it's just a sloppy way of assuming materialism as a given, which makes you as guilty of bias as the HPC crowd you despise. I'm not saying you're wrong about your conclusion or that I disagree with it. But you're, once again, confusing conclusion with definition.

And your reasoning and language are still too damn sloppy and ambiguous to have a concise argument with you.

If consciousness didn't require a "novel" explanation in terms of computation, some computational model, or anything else for that matter, then we must already completely understand consciousness. Is that your claim???? Do you have such a complete computational model in mind? Of course you don't.

Perhaps you equate "novel" with "mysterious" or "supernatural". I don't. Something novel is just new and previously unknown. It could just be a new form of SRIP or feedforward relaxation algorithm we haven't discovered yet. And right after we discover it and it finds widespread acceptance, it will cease to be novel.

When I do my best to filter out the incoherence from your sentence, I think all you are really entitled to claim is that there is no good reason to believe that consciousness depends on anything else but some form of computation that is not beyond science to discover. That I agree with.
 
None of them require or attempt to equate consciousness with human consciousness.

Well, it sounds to me like you are, for example in the following statement:

Our consciousness is the only consciousness we really know. You can't establish others by fiat. What you can do is dissect human consciousness and ask yourself, and us, is this really essential for consciousness?

But you aren't telling the whole story here.

Our own subjective experience is the only subjective experience we really know. As far as the rest of consciousness goes -- external behaviors -- we are clearly not talking about the same thing (and neither is Pixy).

Because I really know your external behaviors, and whether I want to call them conscious behaviors or not. I also really know the behaviors of my dog, or a chimp at the zoo, or a chipmunk in my back yard. And I call them all conscious as well.

And furthermore I can look at the neural activity in a chipmunk while it is conscious and try to elucidate how those external behaviors arise. To me, that is the same thing as trying to elucidate how consciousness arises, because the external behaviors are an aspect of consciousness.

We have this argument all the time here, and it is why I wish people would be more specific in what they are referring to. A chipmunk can be conscious, because it can also be unconscious, yet a chipmunk clearly cannot have the same subjective experience as a human. So what are you talking about -- the subjective experience of a human, or the external behaviors that people label "being conscious" in animals (including ourselves)?

You also have to be able to tie an observable in consciousness itself to an operational definition that explains it to have a complete definition.

We have done that with SRIP.

Really, it is painfully clear to me that you simply consider the term "conscious" to be stronger than we do. I wish we could move on. You know what, I will even come to your side of the fence if it means we can move on. If you want me to reserve "consciousness" for something more complex, I can do that, since you agree that SRIP is a necessary requisite.
 
I'm not sure I know who the bull-headed one is here RD. If I'm guilty, so are you.

Yes, but in different ways. I usually try to resolve communication problems. You seem to exacerbate them!

The part I bolded suggests you have an ideological agenda of some kind propelling your arguments. I'm not even sure I know what HPC stands for - did you mean the Human Potential Club? If so, I'm not a member. I'm looking for nothing but the truth. I have no agenda to support or undermine anybody's value judgments.

You have no agenda to support, yet your screen name is FedUpWithFaith!!!

HPC == "Hard Problem of Consciousness"

It is your explanations that are incoherent. If you and Pixi both believe an observable thing/process called qualia exists, great - give us a definition of the observable itself, one you and Pixi can agree on, without invoking an operational explanatory definition alone.

...snip...

Your putative definition of qualia as "redness" is almost identical to a definition I gave to Pixi early on, which he said was incoherent. So it will be interesting to watch you guys duke that out.

As far as Dennett is concerned, he denies flatly that qualia exist. How do I know that? He told me himself. And I've read his books too. A wonderful man, soft-spoken, a pleasure to spend an evening with... The problem with Dennett is that he can be sloppy - not as bad as the New Testament, but similarly in that you can read and interpret more than one thing into what he says. He wants to have his cake and eat it too. Where an observable called qualia is indirectly implicit, he makes no effort to remove it from his thesis at the risk of damaging his premises. You can read a lot of Dennett and believe he thinks the observable of qualia exists - until he tells you it doesn't.

I am pretty sure both Dennett and Pixy know that there is an attribute that many visual percepts have in common, and that they have learned to label this attribute the color "red."

I am also pretty sure that they can invoke a very similar sensation when they visualize a "red" object.

So there you go -- this thing called "redness" is just what is in common between a bunch of percepts.

I don't speak for Dennett or Pixy, but I would bet that they don't like the notion of "qualia" because everyone who uses it (except for you, apparently) means it to be more than just this relation between percepts, yet none of them can specify what that entails.

I, on the other hand, realize that people just don't know what they are talking about when they say "qualia" and I automatically interpret their use of the word to mean "relationships, that I am not educated enough to realize are relationships, between things that my brain has subconsciously caught on to."

The part that I underlined is too devoid of specificity to do any explanatory work. On its own it's just a sloppy way of assuming materialism as a given, which makes you as guilty of bias as the HPC crowd you despise. I'm not saying you're wrong about your conclusion or that I disagree with it. But you're, once again, subsuming your conclusions in your definitions.

Lol, yes, I am assuming that materialism (rather, physicalism) is true. In my defense, I am not sure how anyone would make any progress in science if they took the magic faerie dust scenario as seriously as the rest of the hypotheses.

And your reasoning and language is still too damn sloppy and ambiguous to have concise argument with you.

Yeah, that is my point. Let's move on to areas where we can use solid reasoning and language.

If consciousness didn't require a "novel" explanation in terms of computation, some computational model, or anything else for that matter, then we must already completely understand consciousness. Is that your claim???? Do you have such a complete computational model in mind? Of course you don't.

You still don't get it.

We call SRIP conscious because that is the only qualitative difference between the set of things we consider conscious and the set of things we consider not-conscious.

It has literally nothing to do with anything else!

When I do my best to filter out the incoherence from your sentence, I think all you are really entitled to claim is that there is no good reason to believe that consciousness depends on anything else but some form of computation that is not beyond science to discover. That I agree with.

You don't think I am also entitled to claim that such computation will fall within the bounds we have known about for 50 years, i.e., Turing equivalence / universal computation and all that jazz?
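For anyone not steeped in that jazz: "universal computation" means one fixed machine can simulate any other machine, given a description of it. A deliberately bare-bones sketch (the rule format and the "flipper" machine are my own toy choices, not a faithful formal Turing machine):

```python
def run_tm(rules, tape, state="A", max_steps=1000):
    """Tiny Turing-machine interpreter: one fixed program (this function)
    that simulates ANY machine described by `rules` -- the essence of
    universality. rules: {(state, symbol): (write_symbol, move, next_state)}"""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        sym = cells.get(pos, "_")  # "_" is the blank symbol off the input
        write, move, state = rules[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# A toy machine that flips every bit, then halts at the first blank.
flipper = {
    ("A", 0): (1, "R", "A"),
    ("A", 1): (0, "R", "A"),
    ("A", "_"): ("_", "R", "HALT"),
}

assert run_tm(flipper, [1, 0, 1]) == [0, 1, 0, "_"]
```

The same `run_tm` never changes; only the `rules` data does — which is the point: swapping in a different rule table runs a different machine on the same universal interpreter.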
 
