
Has consciousness been fully explained?

Seems other computationalists think that physics is not useful in explaining consciousness.

The idea that physics is not relevant to a topic is almost a definition of dualism or mysticism. Physics might not be the most appropriate tool for analysing certain phenomena, but if there isn't a physical explanation, then the alternative is mysticism.
 
I hope you questioned that statement, since it is clearly wrong. I'm not sure there is any unifying factor amongst Amerindian thought, but the most likely candidate would be non-acceptance of Western instrumental rationality, which is very different from saying that they had no notion of causality.

I just offer it up for what it's worth. There are certainly ways of thought that are in common between different human beings, and there are other things that aren't.
 
I thought I covered this - it's the pattern of activation of the neurons - the particular neural circuits that 'light up'. Different patterns of activation correspond to different sensations or experiences.

And the flashing lights on the front of the computer correspond to different programs running - but what's the causal connection?
 
We shouldn't assume that being a computer is in any way like anything. It might be equivalent to non-existence.

That's the hypothesis I'm leaning toward. Based upon what I understand of consciousness, the computers we have today are definitely not conscious.
 
I suggest that the ideal subjects (aside from those with brain abnormalities) would be those who are skilled at meditative practices and introspection, as their ability to self-examine and manipulate their internal states would be an asset.

How would you know if they were really skilled at accurate self-examination? There's no guarantee that many years of introspection and meditation increase the accuracy of the results.

I'd stick with regular people, but improve the quality by increasing the sample size.

Maybe so, but studies done of people who do have years of experience in such practices (such as Buddhist monks) have shown significant differences in brain structure and functioning. Likewise, similar studies of the brains of psychopaths (and other 'character disordered' individuals) also show differences in the brain that veer from the norm.

I think such differences are due to how the consciousness of the individual uses their brain over the course of their lifetime. Understanding such differences will be very useful in the scientific quest to understand consciousness.
 
I do pay attention, and I have read a lot of the earlier parts of this thread. Just the fact that PixyMisa has stated a definition doesn't mean we all agree on that definition. I also don't know if !Kaggen was referring to that definition in the question that I replied to.
My crack about "paying attention" wasn't really warranted and I apologise for that.

I don't see consciousness as an illusion or hallucination. I see it purely as a property of sufficiently advanced computation about the environment (which includes the subject itself). The useful causal effect is survival.

For example, the sensation of "pain" causes us to pay attention to the source of the pain, and come up with a remedy to fix it, and improve chances of survival.
If the sensation of pain is merely the result of neurons firing in some pattern somewhere in the brain, then why can't the other parts of the brain that need to come up with a remedy simply read the outputs from the part of the network that is generating the "pain pattern"? Why bother to generate a "subjective pain" as well? And how does it do that?

What makes you think it's a property that you either have or don't have? I'm pretty sure a dog has a form of consciousness, not nearly as sophisticated as our own, but more advanced than that of a chicken. It's a gradual scale.
Sorry if I wasn't clear - I'm sure phenomenal consciousness is not equally "sophisticated" across different species or even for one individual at different times. However, it does seem likely to me that some objects have no form of subjective experience at all. A molecule of water for example. My interest in isolating a simple/small system that someone like Pixy claims is definitely conscious is to then allow further exploration and discussion around how or why that system is claimed to be conscious. As long as the claim is made that the system in question has some (albeit very small) degree of phenomenal conscious experience then we have something to work with.

Most of us aren't really sure if (for example) an ant has any form of subjective experience. My intuition is yes, it does, but that's just a gut feeling more than anything else. However, PixyMisa's SRIP claim seems to allow for a specific computational example to be produced. Do you think a brainf*ck self-interpreter could possibly have any subjective experience, however "dim" that may be?
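To make the object of that question a bit more concrete, here is a minimal Brainfuck interpreter written in Python (purely my own illustrative sketch, nothing to do with Pixy's actual SRIP example). A Brainfuck self-interpreter is this same eight-instruction logic written in Brainfuck itself, so the system being asked about is not much bigger than this:

def run_bf(code: str, input_data: str = "") -> str:
    # Minimal Brainfuck interpreter: a tape of cells, a data pointer,
    # and eight single-character instructions.
    tape, ptr, pc = [0] * 30000, 0, 0
    inp, out = iter(input_data), []
    # Pre-compute matching positions for the '[' and ']' loop brackets.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == ',':
            tape[ptr] = ord(next(inp, '\0'))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return ''.join(out)

print(run_bf('+' * 72 + '.' + '+' * 33 + '.'))   # prints "Hi"

The question then becomes: if running that loop (or its Brainfuck-in-Brainfuck equivalent) on its own source code is claimed to produce some dim experience, which part of the loop is responsible?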

I don't think even Chalmers understands what the "hard problem" is.
Richard Dawkins seems to have some inkling of what the mystery is about. Watch this short video. Chalmers' description of the problem seems clear to me. Are you saying you really don't see what the fuss is about?
 
AkuManiMani said:
What do you think of intersubjectivity?

http://en.wikipedia.org/wiki/Intersubjectivity

This reminds me of accounts I've read of a condition called folie à deux. In the most extreme instances people actually have shared hallucinations. Freaky stuff *_*


Was thinking more along the lines of SRIP in a social environment making for some interesting algorithms.

What I mean is that people generally demand sincerity, social appropriateness, and adequate representations/truth in communicating with one another.

Have been struggling with getting PM or rd to come up with how this might be formalized in their conscious machine's algorithms.
 
If the sensation of pain is merely the result of neurons firing in some pattern somewhere in the brain, then why can't the other parts of the brain that need to come up with a remedy simply read the outputs from the part of the network that is generating the "pain pattern"? Why bother to generate a "subjective pain" as well? And how does it do that?

I came up with a thought experiment last night, dealing with the question "I understand why pain is useful, but why does it have to hurt so much ?"

Suppose you have a chronic pain condition, and none of the standard pain killers can provide good relief. You're talking to your doctor, and he suggests using a new, experimental drug, that doesn't take away any functional part of the pain, but it just removes the subjective feeling that it hurts. The rest of your consciousness remains unaffected, so you can still function exactly the same, and nobody will be able to tell any difference. If you hurt yourself, you'll still yell "ouch", and you'll still apply all the normal remedies like you did before. So, you have all the functional benefits of the pain mechanism, without the nasty side effect that it feels so bad.

Suppose the drug really works. You feel no more pain..... But then your arm grabs the bottle of old pain killers, and you see yourself swallow a couple, and when the doctor asks how you are feeling, you hear your voice say : "I'm afraid the new pills aren't working". Think about it. How would you think you'd react ?

As long as the claim is made that the system in question has some (albeit very small) degree of phenomenal conscious experience then we have something to work with.

Maybe. Suppose I define consciousness in such a way that my room thermostat is "conscious" of the temperature. How does that provide something we can work with ?

Chalmers' description of the problem seems clear to me. Are you saying you really don't see what the fuss is about?

I think Chalmers is looking at it from the wrong angle. He's basically saying: function is not enough, we need some extra ingredient to explain conscious feelings. My take on it is slightly different. I'm saying that function is enough, we just need to learn to accept it mentally.

I don't know if you've ever heard of Anton-Babinski syndrome, but it's very interesting to think about. It is a rare condition that causes blindness, but in such a way that the patient will remain strongly convinced that they can still see. When challenged with the reality of not being able to see things, they'll come up with excuses that the light's not good enough, or that they forgot their glasses.

Now, try to imagine what it would be like to have that condition, and not being able to convince yourself that you can't see. It seems strange, but it happens.

Could it not be equally possible to have a similar condition where you have "pain-blindness" ? In this condition, your body will still react to pain in a normal way, but you just don't "feel" it. However, just like Anton-Babinski syndrome, you believe that you can still feel pain.
 
I came up with a thought experiment last night, dealing with the question "I understand why pain is useful, but why does it have to hurt so much ?"

Suppose you have a chronic pain condition, and none of the standard pain killers can provide good relief. You're talking to your doctor, and he suggests using a new, experimental drug, that doesn't take away any functional part of the pain, but it just removes the subjective feeling that it hurts. The rest of your consciousness remains unaffected, so you can still function exactly the same, and nobody will be able to tell any difference. If you hurt yourself, you'll still yell "ouch", and you'll still apply all the normal remedies like you did before. So, you have all the functional benefits of the pain mechanism, without the nasty side effect that it feels so bad.

Suppose the drug really works. You feel no more pain..... But then your arm grabs the bottle of old pain killers, and you see yourself swallow a couple, and when the doctor asks how you are feeling, you hear your voice say : "I'm afraid the new pills aren't working". Think about it. How would you think you'd react ?
I'd say, "Hey Doc, can I have one of those super-experimental big red ZOMBIE pills instead?". So he gives me one, and "I" am gone. What's left is just the neural network merrily doing its thing as before, but without any of that annoying and completely useless subjective consciousness stuff. Now what? How and why was that stuff there to start with?

Will my house feel "pain" if I run around flicking the light switches on and off in the right order?

Suppose I define consciousness in such a way that my room thermostat is "conscious" of the temperature. How does that provide something we can work with ?
I'm presuming you mean your thermostat has phenomenal/subjective consciousness. So, question 1: how is that experience of phenomenal consciousness, from the pov of the thermostat, being generated? Is there some law of physics that I haven't heard about that gives things that can measure temperature a "consciousness"? (Note for Pixy, it's a noun.)

If, instead of a thermostat, we have a smallish (but complete) program of some kind, then we could talk about the details of its structure and exactly which parts of it allegedly generate the conscious experience, or at least are necessary (for some reason) for that conscious experience to manifest.
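For concreteness, something as small as the following would do. This is a purely hypothetical toy of my own invention (the names and behaviour are made up for illustration only): it senses a value, keeps a crude record of its own recent behaviour, and acts on both, and every line of it can be pointed at and argued over.

import random

def sense_temperature() -> float:
    # Stand-in for a physical sensor.
    return 20.0 + random.uniform(-3.0, 3.0)

def run(steps: int = 10, setpoint: float = 21.0) -> None:
    # The program's record of its own state - a minimal "self-model".
    self_model = {"heater_on": False, "switches": 0}
    for _ in range(steps):
        reading = sense_temperature()
        want_heat = reading < setpoint
        if want_heat != self_model["heater_on"]:
            # A change of world-state relative to self-state.
            self_model["heater_on"] = want_heat
            self_model["switches"] += 1
        print(f"temp={reading:.1f} heater_on={self_model['heater_on']} switches={self_model['switches']}")

run()

If someone claims this program has even the faintest flicker of experience, they can at least point to the exact line that is supposed to be doing the experiencing.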

I think Chalmers is looking at it from the wrong angle. He's basically saying: function is not enough, we need some extra ingredient to explain conscious feelings. My take on it is slightly different. I'm saying that function is enough, we just need to learn to accept it mentally.
I'm saying I can't understand how any amount of physical function and structure alone can generate a subjective experience for a system consisting only of those functions and structures. I'm hoping someone can explain it to me. Just accepting it is not acceptable. I'm not looking for any extra ingredient, but if it turns out that getting to the truth of the matter requires an extra ingredient or a new kind of understanding (of computation for example) then so be it.

I don't know if you've ever heard of Anton-Babinski syndrome, but it's very interesting to think about. It is a rare condition that causes blindness, but in such a way that the patient will remain strongly convinced that they can still see. When challenged with the reality of not being able to see things, they'll come up with excuses that the light's not good enough, or that they forgot their glasses.

Now, try to imagine what it would be like to have that condition, and not being able to convince yourself that you can't see. It seems strange, but it happens.

Could it not be equally possible to have a similar condition where you have "pain-blindness" ? In this condition, your body will still react to pain in a normal way, but you just don't "feel" it. However, just like Anton-Babinski syndrome, you believe that you can still feel pain.
I haven't heard of the Anton-Babinski syndrome but I will look it up shortly. Sounds interesting. But you say someone with this syndrome is really blind, so your pain analogy would mean that my body would not react to injury or things that normally cause pain, even though I would still think I was feeling pain from time to time (but not necessarily at the time when I would be expected to). In any case, the good doctor and I would both be very likely to be confused.
 
I'd say, "Hey Doc, can I have one of those super-experimental big red ZOMBIE pills instead?". So he gives me one, and "I" am gone. What's left is just the neural network merrily doing its thing as before, but without any of that annoying and completely useless subjective consciousness stuff. Now what? How and why was that stuff there to start with?

Try to stick with the pills that remove the feeling of pain, not the complete zombie pills. The first kind is much more interesting: think about what you would feel. Do you suppose those kinds of pain-feeling-suppressing pills would be possible in theory? And if so, how would you feel after taking them?

I'm presuming you mean your thermostat has phenomenal/subjective consciousness. So, question 1: how is that experience of phenomenal consciousness, from the pov of the thermostat, being generated? Is there some law of physics that I haven't heard about that gives things that can measure temperature a "consciousness"?
What I'm saying is that it's a matter of definition, but it's not very enlightening either way. I can certainly define consciousness in such a way that a thermostat falls into that class, but that doesn't get me any closer to understanding my own subjective feelings. I can also define consciousness such that I'm the only example. That doesn't help me either.

If, instead of a thermostat, we have a smallish (but complete) program of some kind, then we could talk about the details of its structure and exactly which parts of it allegedly generate the conscious experience, or at least are necessary (for some reason) for that conscious experience to manifest.

Sure, but only if we all agree that the program possesses consciousness.

I'm saying I can't understand how any amount of physical function and structure alone can generate a subjective experience for a system consisting only of those functions and structures. I'm hoping someone can explain it to me. Just accepting it is not acceptable
Obviously, it's not a matter of saying "I'll accept it". It's something that involves a big change in the way we're looking at things. Similar to a patient suffering from Anton's syndrome admitting they are blind, but even more difficult.

I haven't heard of the Anton-Babinski syndrome but I will look it up shortly. Sounds interesting. But you say someone with this syndrome is really blind, so your pain analogy would mean that my body would not react to injury or things that normally cause pain

I came up with a better example. Anton-Babinski syndrome is usually discovered by other people because the patient starts to walk into objects and walls.

Suppose we have a person suffering from Anton-Babinski syndrome, but there's a direct neural pathway from the visual to the motor cortex that still works and makes them instinctively avoid any objects and walls. So, they don't really "see" things, but their body will act as if they do. This kind of condition will be much harder to diagnose. The patient will claim everything is fine, and also appears to behave normally. You can ask them the color of an object, and, even though they can't see it, their mouth will voice the appropriate answer.
 
If the sensation of pain is merely the result of neurons firing in some pattern somewhere in the brain, then why can't the other parts of the brain that need to come up with a remedy simply read the outputs from the part of the network that is generating the "pain pattern"? Why bother to generate a "subjective pain" as well? And how does it do that?



Well, for one thing, you are putting a homunculus in there to 'read the pain' and anytime you see a homunculus you're dealing with some form of dualism.

The bottom line is that pain serves a particular function and nature works with what it has to work with. You can't seriously ask the question, "why do dung beetles eat dung and not just pop down to the local McDonald's?" because that option is not and was not evolutionarily available. Dung beetles fill a niche.

Same with the subjective sensation of pain. It plays a functional role. It did not arise in beings with language or in brains that have a homunculus sitting in them monitoring for pain in order to suppress it.

The "how does it do that?" question is very appropriate, so I think one way of looking at the issue really is to look at the function that pain serves. Let's say we have neurons that sense when something damaging happens -- like our A-delta and C fibers, linked to the outside hostile environment through pain receptors that activate only (or virtually only) when something damaging to the body occurs. Would activation of those fibers be enough to get an animal to act out pain behavior? Well, no, of course not; those fibers only sense the presence of damage. Would activation of those fibers constitute pain? Well, no, again. But somehow the system must take activation of those fibers to perform the role that pain plays -- aversive behaviors or immobility.

Pain is not one thing.

I think it is wrong to view it as pure sensation. The only pure sensation side of it is the activation of C fibers which tells us only when, where, and that something damaging occurred to our body. Pain, however, is a much more elaborate behavior or tendency toward an action -- for quick pain the action would be to remove oneself from the cause of pain or the cause of pain from oneself. For chronic pain the behavioral response is generally to avoid movement. Pain, in large part, is that ongoing drive to do one or the other of those things.

Say you are nature and you want to get an animal to avoid things that hurt its body. How are you going to do it? You'd have to send some sort of signal within the animal for it to carry out the appropriate action in order to survive. What we call the subjective sensation of pain seems to fill that functional role quite well.
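To put that functional role in toy form (a deliberately crude sketch of my own, not a model of real nociceptive circuitry): a damage signal whose only effect is to bias behaviour toward withdrawal already does the job that nature needs done.

import random

def act(damage_signal: float) -> str:
    # No inner observer "reads" the signal; the signal simply is the
    # bias on what the agent does next.
    p_withdraw = min(1.0, damage_signal)
    return "withdraw" if random.random() < p_withdraw else "keep_foraging"

for damage in (0.0, 0.3, 0.9):
    choices = [act(damage) for _ in range(1000)]
    print(f"damage={damage:.1f}: withdrew {choices.count('withdraw')} of 1000 times")

There is no homunculus inside that loop monitoring for pain; the damage signal and the behavioral impulse are the same causal step.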

There is an interesting tidbit with pain as well. We know that there are at least five different pain pathways (and to answer your question about 'coming up with a remedy', there are several descending pathways that limit the pain response) that mediate pain. The two most traditional pathways carry fibers from the body to the somatosensory cortex and insula. From there, there are pathways that go to the cingulate gyrus. If we sever the pathways from the insula to the cingulate gyrus we can eliminate much of the suffering entailed in pain; it just doesn't seem to matter that pain signals are being sent if those connections are severed. To which you could say, so what? But the interesting thing about it is this: if you damage both sides of the cingulate gyrus you end up with a terrible condition known as akinetic mutism, which is a state in which the afflicted do not move and do not speak. It is as if their free will has been removed. The cingulate gyrus serves the function of willing us to move -- if you remove the input from the insula to the cingulate, the subjective sensation of pain, or at least the suffering quality of pain, is removed. Patients with this condition report that they still feel pain but that they don't care; the aversive aspects of the pain are removed and they can act as though they are not in pain but can still report where pain arises and how intense it is. My point here is that one aspect of what we call pain is a behavioral impulse and not a sensation at all. The suffering is that impulse to move (or, with chronic pain, to remain immobile).

I will repeat: pain is not one thing. I do not think that any subjective sensation is just one thing, one simple process. The quale 'pain' can, in fact, be broken down into different processes created by different brain areas and serving different functions. And I think that for some parts of this 'feeling', the best way to view it is as a behavioral impulse, that behavioral impulse being carried out in the cingulate gyrus (our free will area). There are obviously other aspects to pain, such as the intensity issue which is dealt with in the insula (and probably at the level of the thalamus), so there is more to explain; but the idea that a particular subjective sensation, like pain, is a single unified process because we experience it that way turns out to be wrong. We experience a unified process when every pathway is intact. But the reality is that pathways can be interrupted at various points, just as we have shown with language, etc., demonstrating that all of these processes depend on complex interactions in the brain.


Most of us aren't really sure if (for example) an ant has any form of subjective experience. My intuition is yes, it does, but that's just a gut feeling more than anything else. However, PixyMisa's SRIP claim seems to allow for a specific computational example to be produced. Do you think a brainf*ck self-interpreter could possibly have any subjective experience, however "dim" that may be?


Pixy, from what I recall of the conversations over the years, only claims SRIP as conscious given a particular definition of consciousness. He doesn't claim that this would be anything like what a human experiences. It may not even meet the criteria for what most people would consider conscious. His ultimate point was that if you don't think SRIP is conscious, then provide a definition that we can work with.
 
I came up with a thought experiment last night, dealing with the question "I understand why pain is useful, but why does it have to hurt so much ?"

Suppose you have a chronic pain condition, and none of the standard pain killers can provide good relief. You're talking to your doctor, and he suggests using a new, experimental drug, that doesn't take away any functional part of the pain, but it just removes the subjective feeling that it hurts. The rest of your consciousness remains unaffected, so you can still function exactly the same, and nobody will be able to tell any difference. If you hurt yourself, you'll still yell "ouch", and you'll still apply all the normal remedies like you did before. So, you have all the functional benefits of the pain mechanism, without the nasty side effect that it feels so bad.

Suppose the drug really works. You feel no more pain..... But then your arm grabs the bottle of old pain killers, and you see yourself swallow a couple, and when the doctor asks how you are feeling, you hear your voice say : "I'm afraid the new pills aren't working". Think about it. How would you think you'd react ?



Maybe. Suppose I define consciousness in such a way that my room thermostat is "conscious" of the temperature. How does that provide something we can work with ?



I think Chalmers is looking at it from the wrong angle. He's basically saying: function is not enough, we need some extra ingredient to explain conscious feelings. My take on it is slightly different. I'm saying that function is enough, we just need to learn to accept it mentally.

I don't know if you've ever heard of Anton-Babinski syndrome, but it's very interesting to think about. It is a rare condition that causes blindness, but in such a way that the patient will remain strongly convinced that they can still see. When challenged with the reality of not being able to see things, they'll come up with excuses that the light's not good enough, or that they forgot their glasses.

Now, try to imagine what it would be like to have that condition, and not being able to convince yourself that you can't see. It seems strange, but it happens.

Could it not be equally possible to have a similar condition where you have "pain-blindness" ? In this condition, your body will still react to pain in a normal way, but you just don't "feel" it. However, just like Anton-Babinski syndrome, you believe that you can still feel pain.


A somewhat similar condition does exist, as I had mentioned in another thread and just now to Clive.

Look up pain asymbolia. It is a disconnection syndrome in which people can feel the location and intensity of pain but not suffer from it. They simply do not care about it and so can continue to act as if there is no pain involved.

It's a little different from Anton's syndrome, where people confabulate that they can see even though blind. We're not sure exactly what causes Anton's syndrome, but there seems to be more than one way to produce it. The primary thing that must be intact for it to occur is that the left hemisphere (which is the hemisphere that tries to make sense of contradictory information and produces confabulation) must be relatively intact.

As opposed to pain asymbolia, which is a long-lasting condition, Anton's syndrome is generally fairly brief in duration. From my experience the confabulation only lasts a few days to weeks and then disappears.
 
Was thinking more along the lines of SRIP in a social environment making for some interesting algorithms.

What I mean is that people generally demand sincerity, social appropriateness, and adequate representations/truth in communicating with one another.

Have been struggling with getting PM or rd to come up with how this might be formalized in their conscious machine's algorithms.


I would suggest that once we understand all the processing underlying mirror neurons we will be able to formalize those issues better.
 
Have you read these dlorde? Do you think you properly understand what "The Hard Problem of Consciousness" is truly about (even if you don't agree it's actually a hard problem) or is it more that you just can't quite "see" what the fuss is really about to start with?
Yes, I understand what the Hard Problem of Consciousness is, and yes, I've read Chalmers' articles.

I'd like to know which category you think describes your point of view best in Chalmers' second paper (unfortunately quite long).

I think Chalmers provides a reasonably comprehensive overview of the issues and approaches - although he doesn't cover one that, for a while, I thought might be interesting - Emergence:

[Jeffrey Goldstein on emergence:
"the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems"

"The common characteristics are:
(1) radical novelty (features not previously observed in systems);
(2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time);
(3) A global or macro "level" (i.e. there is some property of "wholeness");
(4) it is the product of a dynamical process (it evolves); and
(5) it is "ostensive" (it can be perceived)."

See Emergence & Creativity for an interesting read.
]
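To make those characteristics concrete, here is a toy sketch of my own (it has nothing to do with Goldstein or Chalmers): Conway's Game of Life, where a coherent, self-maintaining "glider" appears at the macro level even though the update rule only mentions individual cells and their immediate neighbours.

from collections import Counter

def step(live):
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After 4 steps the same five-cell shape reappears, shifted one cell diagonally.
print(sorted(cells))

All five of Goldstein's characteristics are arguably on display even in that tiny system.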

However, I'm not quite so enthusiastic about the emergence hypothesis these days, for various reasons, although I think emergence may be useful conceptually in bridging the subjective-objective gap.

Basically, Chalmers hits the nail on the head quite early on, when he says:

"The hard problem is about explaining the view from the first-person perspective."
and:
"..whatever account of processing we give, the vital step - the step where we move from facts about structure and function to facts about experience - will always be an extra step, requiring some substantial principle to bridge the gap."
and:
"... any neurobiological or cognitive account, will be incomplete"

This seems key. What 'substantial principle' can bridge that gap? Only some arbitrary and axiomatic contingency. The problem is one of trying to find an objective explanation for something that requires a subjective explanation. AFAICS, ultimately a completely satisfying objective explanation isn't possible, because there will always be that ugly join.

All the other arguments and approaches to an objective description of consciousness founder on the same rock - the metaphysical chasm between objective & subjective. It's all speculative hand-waving that goes nowhere.

I particularly discount quantum explanations as based on misunderstanding or misapplication of QM. I also discount pan-psychism as a futile attempt to hide or ignore the problem by making it ubiquitous.

Chalmers says "if it turns out that it cannot be explained in terms of more basic entities, then it must be taken as irreducible, just as happens with such categories as space and time", which seems as much a cop-out as the Type-A materialism (nothing to explain) he dislikes so much. He also criticises the Type-B materialism of Clark and Hardcastle, who postulate an empirical identity between conscious experiences and physical processes, because it makes that identity fundamental (irreducible) - perhaps too much like his own conclusion about irreducibility (above)...?

Again, he talks of examining physical process and phenomenology to "find systematic regularities between the two; work down to the simpler principles which explain these regularities in turn; and ultimately explain the connection in terms of a simple set of fundamental laws". Quite how these "underlying brutely contingent fundamental laws", that define contingent relations between the principles of function & process and of consciousness, are substantially different from the Clark and Hardcastle approach, he doesn't clarify. Nevertheless, despite some apparent contradictions, I think he does a good job overall.

If anything, I prefer a version of original Identity Theory (a pre-Type B Materialist approach) which proposes that the sensation of a thing is the sort of state caused by that thing (with the caveat that this occurs in a complex system structured like the brain).

But ultimately, there's no way to objectively explain the subjective position/experience of being the complex system under consideration. Experience and conscious awareness are what it is like to be an active brain, of sufficient complexity, in the awake state. That's why my starting point is the waves of activation that sweep across the brain when our conscious focus of awareness changes. When you are that brain, those patterns of activation are your experiences.

It seems to me that the best we can do is to examine the function of the brain to narrow down the subsystems necessary for consciousness, and to investigate how these subsystems contribute to consciousness. Once we have a better understanding of how the system is put together, we may get a better understanding of why consciousness is the result.
 