
Has consciousness been fully explained?

It would be illuminating if you could specify the thing A with capacities B that you are referring to with regard to the illusion of consciousness. What is analogous to the vision of the rainbow we perceive when referring to consciousness as an illusion? And what is analogous to the person doing the perceiving?
I can only speak for myself, but I see the illusion of consciousness as being the idea that consciousness is the master and commander of the mind - the 'executive control system'. I see it as the work of a subsystem (that provides us with a sense of identity), which gets access to salient 'updates' on the state of the other brain subsystems, either through access to some global workspace, or through explicit notification, and using these snippets, retrospectively generates the causal narrative of self. A bit like a sports commentator pretending that he is one of the participants, and giving the player's running commentary - except that this commentator actually believes he is playing.

How much (if any) influence the system generating consciousness has on the rest of the brain, and what that might be (e.g. influence on focus of attention?), will be difficult to establish, particularly as the retrospective generation of narrative makes introspection particularly unreliable.
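For what it's worth, the 'commentator' picture above can be caricatured in a few lines of code. A purely illustrative sketch - the subsystem names, the workspace, and the narrator are all invented for the example, not a claim about real neural architecture:

```python
# Toy sketch of the 'commentator' model: independent subsystems post
# salient updates to a shared workspace, and a narrator subsystem stitches
# them into a first-person causal story after the fact.

from collections import deque

class Workspace:
    """Global workspace: holds only the most salient recent updates."""
    def __init__(self, capacity=5):
        self.updates = deque(maxlen=capacity)

    def broadcast(self, source, event):
        self.updates.append((source, event))

def narrator(workspace):
    """Retrospectively turns workspace snippets into an 'I did X' story,
    claiming authorship of events it merely observed."""
    return " ".join(f"I decided to {event}." for _, event in workspace.updates)

ws = Workspace()
ws.broadcast("motor", "reach for the cup")    # action already underway
ws.broadcast("vision", "look at the window")  # attention already shifted
print(narrator(ws))  # the 'commentator' reports these as its own choices
```

The only point of the toy is that the narrator claims authorship, after the fact, of events it merely observed.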
 
And our assumption that a rat has consciousness, based in some way on the connections in its neural network, is guesswork. Depending on how we define consciousness, a rat's consciousness may not exist at all, or it may differ from ours no more than another person's does. We simply don't know. What it's like to be a rat is inaccessible to us. What it's like to be another human being may or may not be accessible to us.

Yes, it's not an easy topic without consensus on the semantics of consciousness. It seems to me that we can only judge the consciousness of other creatures (including other people) by assessing the behaviours they display that we associate with consciousness in ourselves.
 
The response to the demand to demonstrate the existence of qualia is the question "what are you feeling right now?". If you can insist that you aren't feeling anything, or that you might think that you are feeling something but it's illusory, then you can say that qualia don't exist and can be ignored. Otherwise you have implicitly accepted their existence. That's why "what are you feeling right now?" is a relevant question in this context.

Me? I'm currently feeling the emotion of slight amusement, due to the mild but pleasant cognitive dissonance generated by this discussion. Sometimes it triggers a frustration response, but this time, amusement.

So this pattern of neural activation is a quale, is it? I thought there might be more to it than that.
 
If that's the case, wouldn't it be more appropriate to say that people's -concept- of consciousness is the illusion rather than saying 'consciousness is an illusion'?

If you like - though generally, the word 'consciousness' is taken to mean people's concept of consciousness, just as a 'table' is generally taken to mean people's concept of a table. If you'd like it to mean something else, please let us know.

While the meaning you attribute to 'consciousness is an illusion' may be a more sensible interpretation, the phrase itself is asinine.

OK, you're welcome to think of it as a more sensible interpretation of an asinine phrase.

The word random has two main connotations: A) equiprobability of occurrence or statistical distribution, and B) occurring without meaning or purpose; neither sense does an adequate job of conveying what is meant by 'free will'. 'Unpredictable' seems the more apt label.
So can we say purpose (as well as unpredictability) is a feature or requirement of free will?

Consider a couple of scenarios. A rock-slide kills a group of campers on a mountain. In scenario 1) the slide is triggered simply by the structural support of the materials giving way to gravity. In scenario 2) a person triggers the slide with the intent to kill the campers below. In the first scenario there is no conscious will being exerted to manipulate the course of events toward an intended goal, while in the second there is.
OK, so is it fair to say that the key feature of free will is purpose and/or intent? And is it also fair to say that purpose and intent are descriptions of goal-seeking behaviour?

If so, can we say that goal-seeking behaviour in general indicates purpose and/or intent?

If so, does goal-seeking behaviour that is unpredictable indicate free will in action?

Well it's not really off topic since 'deliberate' refers to an essential attribute of consciousness: intentionality.
OK.
 
As commonly used, the term references a class of interactions ostensibly between the mind (or brain) and other minds or physical matter, for which there is neither evidence that the interaction occurs nor any plausible mechanism. Psi is distinguished from other forms of magic in that no mechanism other than the mind itself is postulated.

If minds are physical, then what's the problem?
 
Your examples seem to have one thing in common: the ability to learn. That's the main distinguishing feature I attribute to consciousness too, but I wouldn't call it just a spark. It's a necessary part of our functioning.
Indeed, and computer systems that can learn and generate novel behaviours have already been created. So it seems that this p-zombie could continue to develop and 'deceive' us into believing it was conscious.
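As a trivial illustration of 'learn and generate novel behaviours', here is a toy Markov-chain generator; it recombines learned word transitions into sequences it was never explicitly given. Everything in it is invented for the example, and it is obviously nothing like a complete p-zombie:

```python
# A minimal learner: it 'trains' on a corpus, then produces novel word
# sequences by recombining the transitions it has absorbed.

import random
from collections import defaultdict

def train(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)  # record each observed word transition
    return model

def generate(model, start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

model = train("i think therefore i am aware that i think about being aware")
print(generate(model, "i"))  # a sequence the system was never given verbatim
```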

An event that you're not conscious of is unlikely to modify your future behavior.
Not strictly true - events perceived (sensed?) below conscious awareness can have significant effects on future behaviour.
 
Two main reasons. First, it's unnecessary; we already have a simpler explanation that accounts for the observed facts. Second, it's based on an unstated premise, i.e. that psi exists (which it doesn't).
Actually, the main reason is your worldview wouldn't work.
 
Overlords from the Kappa sector sending signals to the brains of folks with occipital lobe strokes would work easily as well, as would invisible faeries whispering the location of lights into the ears of the afflicted.
Interesting hypotheses. Do they keep their teacups orbiting Jupiter?

I have no need of that hypothesis. There is an easy way to test it. Cut the visual fibers that travel to the superior colliculus and see if the phenomenon disappears. If so, I think the psi hypothesis fails.
Let the world know when it has been so disproven. :)
 
...Theoretically, a system could be developed that is able to modify its behavior by processing and integrating incoming data. It would be a 'zombie' if the system contained no (or virtually no) awareness of that data.
But isn't that what we're trying to establish - whether the purported 'p-zombie' that says it is aware and conscious actually is?

No matter how sophisticated the IP system is, if it lacks consciousness it will eventually run down into behavioral ruts like the ones I listed earlier.
So you assert, but if it can learn, and modify its behaviours as a result (as some AI systems can), then it seems to me that this is not necessarily true. And if you are suggesting consciousness can only be distinguished by long-term avoidance of behavioural ruts, should we assess humans that have fallen into such behavioural ruts (e.g. OCD) as less conscious or aware than those that have not?
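To make that concrete: even a toy learner with a little built-in exploration avoids locking into a single response. A minimal epsilon-greedy sketch, with made-up payoffs:

```python
# An epsilon-greedy bandit: it keeps occasionally trying alternatives
# instead of repeating whatever currently looks best, so it does not
# settle permanently into a behavioural rut. Rewards are invented.

import random

rewards = {"a": 0.3, "b": 0.7, "c": 0.5}   # hidden payoffs (assumed)
estimates = {k: 0.0 for k in rewards}
counts = {k: 0 for k in rewards}
epsilon = 0.1  # fraction of the time the agent tries something new

for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(rewards))       # explore: break the rut
    else:
        action = max(estimates, key=estimates.get)  # exploit current habit
    r = rewards[action] + random.gauss(0, 0.1)
    counts[action] += 1
    estimates[action] += (r - estimates[action]) / counts[action]

print(estimates)  # converges toward the true payoffs despite habit formation
```

The 10% exploration is what keeps the agent sampling alternatives rather than repeating whatever currently looks best.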

If such an automaton were inorganic, one would call it a robot; if it were made of organic components, one would refer to it as a zombie. The basic principle is the same though: it simply lacks that "vital spark" of awareness.
The question was, how could you tell that this p-zombie lacks the 'vital spark' of awareness or consciousness? You appeared to agree that it would initially deceive us by appearing conscious, but you claimed that without the 'spark of awareness' its behaviour would diverge from the original it was modelled on, and degenerate into obvious failures, such as consistent nonsensical responses to novel queries.

Assuming, for now, that such behaviour does indicate lack of awareness or consciousness (I'm not convinced), then a revised p-zombie model II, which we have provided with a learning capability, and which can adjust its behaviour according to what it learns, will not fall into those degenerate behaviour states, and could, in principle, continue to deceive us indefinitely.

So is it still without the 'spark of awareness' you say is required, or has giving p-zombie mark II the ability to learn and modify its behaviour in novel ways given it that spark?

Or perhaps you maintain that we cannot give p-zombie mark II learning and novel behaviour capabilities, because these are the sole province of awareness?
 
I've gotten my familiarity with p-zombies from talks and philosophical discussions like this one (and by watching a couple of Chalmers's lectures); my comments about psychopaths are drawn from those I know personally, and from reading the literature on character disorders. The wording, paraphrasing, parallels, and interpretation are all my own.

[Come to think of it, I suppose the drawing of broad, metaphorically based connections to reach novel conclusions could be considered evidence of consciousness]
You've apparently misunderstood Chalmers: he states that his zombies behave just like us. Otherwise, what would be the point of his thought experiment? (Though I don't blame you-- I can't follow his reasoning on the hard problem.)

For a psychopath I think you misread it: they are said to lack a conscience (desire to not hurt others, empathy), not consciousness.

It's quite easy to teach a computer to reach a novel conclusion too, if you're not too concerned about its correctness...

My point is that if a simple chat bot, like the one Malerin posted a link to, can answer queries in a syntactically appropriate manner, a hypothetical p-zombie could accomplish the same and more. Just to be clear, I do not think that p-zombies (i.e. perfectly indistinguishable simulacrums of conscious individuals) are possible in practice. The ability to pull off such a deception would depend upon the level of sophistication of its programming [it's practically impossible to 'perfectly' simulate any physical system] and the level of discernment possessed by the conscious individual assessing it [sooner or later you'll come across someone sharp enough to recognize that something's fishy].
This is the Turing test, for which I can make a computer do a perfect emulation of a large number of humans: it would simply not respond at all. There's no way you could distinguish it from human who couldn't type (for whatever reason).

The Turing test is good for judging acting ability, but not good for deciding intelligence or especially consciousness. It would be like deciding that a digging tool isn't usable unless it feels warm and hairy.

Data can easily be conditioned into and stored in a neural net, or some other IP system, without there necessarily being any subjective experience of said data. I don't think that tests of recall or rote would be effective tests of consciousness. What one should look for is the ability to self-generate novel behavior and sense making, without reliance on external prompting or explicit programming.
Again, what you appear to be describing is general problem-solving ability: the true Hard Problem.
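The first claim in the quoted passage is easy to demonstrate in miniature: a single perceptron 'stores' its training data in two weights and a bias, with no suggestion of any subjective experience of that data. A throwaway sketch, entirely illustrative:

```python
# A single perceptron learning the AND function: after training, the
# 'memory' of the data lives entirely in two weights and a bias.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1   # adjust weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)  # the conditioned 'knowledge' is just these three numbers
```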

At this point, our computers cannot perform physical self-maintenance without guidance from a conscious human, nor can they metabolize. All organisms contain programmed feedback mechanisms, to be sure, but they also show a degree of proactive behavior not explicitly programmed into them -- some to greater extents than others. It's this proactivity that sets living/conscious systems apart from present-day technological systems.

While organisms merely utilize their physiological and behavioral programs as organizational anchors, machines have no operational extent or existence beyond their programming. Their maintenance and directionality must constantly be propped up by human conscious efforts -- they lack their own inherent consciousness. Our machines do not thermodynamically push themselves "uphill"; they must continually -be pushed-. This is true even when they are provided with a continual source of energy.
If we viewed computers as organisms, we'd say they've managed to massively increase their numbers over the last 60 years by exhibiting particular behaviors, due to a symbiotic relationship with humans. If you consider symbiotic relationships cheating, consider that most biological life-forms would die without other types around.

I really don't see how you've decided that cells thermodynamically push themselves: what happens when you block sunlight from the Earth?
 
I'm curious as to where you got your concepts about p-zombies and psychopaths -- they don't agree with what I've learned. Could you provide a reference or two?
Yes, I'm beginning to wonder if we're talking about the same things at all. My understanding and (limited) experience of psychopaths is that they are all too conscious and aware.
 
... A p-zombie is an automaton that can supposedly pull all this off in a way sufficient to convince a naïve hypothetical human that it actually does experience all those things -- naïve being the operative word.
As I understand it, naivety in the observer is not relevant - a p-zombie simply responds as if it is conscious and aware, has qualia, etc., although, by definition, it doesn't. The point is to explore the practical differences between p-zombies and humans in respect of these concepts.

Of course they're subject to thermodynamic laws -- that's not the point. What I'm saying is that living systems -autonomously push against- thermodynamic equilibrium in a self-sustaining manner, rather than run down as a desktop or robot would. This thermodynamic bootstrapping is something living things perform at every level of their organization. It's this property that I'm saying is a necessary requisite for a consciousness-compatible system.
But why is it a necessary requisite? We have designed systems that can seek out energy sources and 'recharge' themselves. We have not yet designed systems that can fully repair themselves, but this is just an engineering problem that hasn't been addressed because, in practice, it is unnecessary (and would be expensive). I don't see the necessary connection between consciousness and being physically self-sustaining. It just seems like defining consciousness as exclusive to biological life. A person can be conscious and aware while their body is sustained by artificial means.
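For example, the recharging behaviour mentioned above needs nothing more than a threshold check; the numbers and names below are invented for the sketch:

```python
# A minimal self-recharging agent: it monitors its own energy level and
# switches to 'recharging' behaviour when the level drops below a threshold.

class Robot:
    def __init__(self):
        self.charge = 1.0

    def step(self):
        if self.charge < 0.2:
            self.charge = 1.0      # 'seek dock and recharge'
            return "recharging"
        self.charge -= 0.15        # normal work drains the battery
        return "working"

bot = Robot()
print([bot.step() for _ in range(12)])  # work... recharge... work...
```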

Your body depends upon the imposition of your consciousness to maintain it. Once that consciousness is entirely gone, it's called a rotting corpse, and it runs down thermodynamically like any other lump of organic material.

:confused: That is simply not the case - in coma or PVS, the individual may not be conscious or aware, yet can live on for years in that condition. We all eventually 'run down' and die, but that has nothing per se to do with consciousness.
 
... What one should look for is the ability to self-generate novel behavior and sense making, without reliance on external prompting or explicit programming.
Are you talking about creativity?

... All organisms contain programmed feedback mechanisms, to be sure, but they also show a degree of proactive behavior not explicitly programmed into them -- some to greater extents than others. It's this proactivity that sets living/conscious systems apart from present-day technological systems.
Would you include viruses? Bacteria?
 
dlorde said:
It would be illuminating if you could specify the thing A with capacities B that you are referring to with regard to the illusion of consciousness. What is analogous to the vision of the rainbow we perceive when referring to consciousness as an illusion? And what is analogous to the person doing the perceiving?
I can only speak for myself, but I see the illusion of consciousness as being the idea that consciousness is the master and commander of the mind - the 'executive control system'. I see it as the work of a subsystem (that provides us with a sense of identity), which gets access to salient 'updates' on the state of the other brain subsystems, either through access to some global workspace, or through explicit notification, and using these snippets, retrospectively generates the causal narrative of self. A bit like a sports commentator pretending that he is one of the participants, and giving the player's running commentary - except that this commentator actually believes he is playing.
Thanks. That's a different perspective from any I've heard before. Let me try to restate it: you feel that the 'illusion of consciousness' is the illusion that we can control ourselves via our conscious thoughts. You don't feel that is actually what happens. Have I understood you correctly?

Why do you think that consciousness is not the ‘executive control system’? What purpose do you think consciousness serves if it is not making decisions (what I think of as the function of an executive control system)? What do you think the purpose of the causal narrative of self is?

How much (if any) influence the system generating consciousness has on the rest of the brain, and what that might be (e.g. influence on focus of attention?), will be difficult to establish, particularly as the retrospective generation of narrative makes introspection particularly unreliable.
Well, we can always compare the behavior of conscious individuals with unconscious individuals. I think that gives us some indication of the influence of consciousness.
 
- Again, what you appear to be describing is general problem-solving ability: the true Hard Problem. -


Some excellent analysis. I like to refer to general problem-solving as the HPI (the Hard Problem of Intelligence), to distinguish it from the HPC (the Hard Problem of Consciousness: "What and why is there consciousness?").
 
