
Has consciousness been fully explained?

I am not sure how long consciousness could survive without any sensory data reaching it. Hallucinations might be a way for the mind to keep itself together by inventing the sensory data. I'm just musing out loud. Wasp, do you know what the longest period is that someone could be considered conscious while lacking any sensory data?


No idea; I would have to look it up. But the general idea is that without new sensory input -- the sensory deprivation tanks provide minimal sensory input that doesn't change, so receptors adapt -- our brains begin to hallucinate as you say.


ETA:

It's an interesting issue and interesting as to what it means.

There are some weird neurological conditions, the exact meaning of which is also unclear. I am sure you have heard of neglect syndromes, where folks with right parietal lesions neglect half of space. They will occasionally confabulate stories about the left side of their body when asked about it. You probably also know of Korsakoff's syndrome (a long-standing syndrome that accompanies Wernicke's encephalopathy), in which alcoholics (generally) with anterograde amnesia confabulate false memories. One you may not know is Anton's syndrome, in which someone who is recently cortically blind will report that they can see (the report of being able to see doesn't persist for very long).

None of these confabulatory syndromes seems to be directly due to loss of sensory information, though they may result from a problem related to it. Theoretically we might have a type of meta-processing of sensory information that allows for consciousness and some types of qualia (the blueness of blue). If that higher-order processing is left intact with limited sensory input (assume that it might receive input from more than one source) but without its primary sensory input, it might give us the sense that we still see or remember or feel. When the left hemisphere is left largely intact, it has a tendency to try to reconcile discrepancies and probably causes confabulation.

The closest thing I can think of with hallucination from lack of sensory input is Charles Bonnet syndrome. I have also seen a few folks who have either olfactory hallucinations or olfactory illusions associated with cribriform plate meningiomas that destroy the olfactory nerve. They are sometimes suspected of having temporal lobe seizures, since olfactory hallucinations can also arise in this way.
 
AkuManiMani said:
I suppose if it persistently falls prey to the same trolling techniques, falls into repetitive formulaic behavior patterns until disturbed by an outside influence, or consistently gives nonsensical responses to novel queries -- stuff to that effect.

Such signs would indicate that, if the entity in question is conscious, that consciousness is extremely "dim" :)

Your examples seem to have one thing in common: the ability to learn. That's the main distinguishing feature I attribute to consciousness too, but I wouldn't call it just a spark. It's a necessary part of our functioning. An event that you're not conscious of is unlikely to modify your future behavior.

And experience is also a form of learning -- it's learning what happened at a given time, though in a consolidated "executive summary" form for use in future reasoning and planning. If the brain recorded all the sense data it receives, it would be much more difficult to apply general rules to that data.

Hmmm...I get where you're comin' from and what your gist is, but I'm not sure you understand what is meant when one speaks of a zombie of any variety, "P" or otherwise. Theoretically, a system could be developed that is able to modify its behavior by processing and integrating incoming data. It would be a 'zombie' if the system contained no (or virtually no) awareness of that data. No matter how sophisticated the IP system is, if it lacks consciousness it will eventually run down into behavioral ruts like the ones I listed earlier. If such an automaton were inorganic one would call it a robot; if it were made of organic components one would refer to it as a zombie. The basic principle is the same tho: it simply lacks that "vital spark" of awareness.

Computation does not in and of itself produce consciousness; it simply provides it with organizational constraints to maximize its efficiency. Consciousness, by its very nature, is a capricious and virtually unpredictable element [i.e. the 'free' in 'free will']; if there are no organizational bounds placed upon it, it's too chaotic and ineffective. The key issue here when dealing with questions of consciousness isn't necessarily the computational capacities of a system, per se. Computation -- even the adaptive kind -- can occur independently of consciousness. What's pertinent is whether those functions are working in tandem with some -subjective- dimension as well, and identifying how to scientifically discern such a thing. It's that subjectivity that essentially distinguishes consciousness and, I suspect, life itself.

With that being said, I think that the foundational requisite for a system supporting consciousness would be the thermodynamic properties I mentioned earlier. I also suspect that any conscious system would have to embody such properties "from the bottom up". As of today, all technology we create (robotic or otherwise) depends on the "top down" imposition of conscious entities like us to create and maintain them. I think we'll have succeeded in creating synthetic consciousness [or at least be on the right track] when we learn how to create synthetic life.
 
It is. Human consciousness is self-deluded; it presents itself to itself as something other than it is.


Human consciousness is not unitary. It's not causally efficacious in some cases where it claims to be so. Your perceptions are reconstructions rather than representations. And so on.

This is why introspection fails and neurology is critical in understanding the mind.

Translated to compensate for projection:


.........................................................................................................................................................................
Originally Posted by MixyPisa
[My] consciousness is self-deluded; it presents itself to itself as something other than it is.

[My] consciousness is not unitary. It's not causally efficacious in some cases where it claims to be so. [My] perceptions are reconstructions rather than representations. And so on.

This is why [my] introspection fails and [psychology] is critical in understanding [my mind].
.........................................................................................................................................................................

Fix'd
 
My mistake, I omitted a smiley - I wasn't being entirely serious, more ironic. But on second thoughts, and having read Pixy's original comment that I had previously missed, it seems that he is being sceptical about your claim that 'qualia' means something, and has explained why he is sceptical. So it seems to me that the onus is still on you to evidence or provide a convincing rational argument for your claim :)

The response to the demand to demonstrate the existence of qualia is the question "What are you feeling right now?" If you can insist that you aren't feeling anything, or that you might think that you are feeling something but it's illusory, then you can say that qualia don't exist and can be ignored. Otherwise you have implicitly accepted their existence. That's why "What are you feeling right now?" is a relevant question in this context.

If a particular individual even bothered to honestly answer the question, it knows it would be checkmate. The basis of its entire 'wrong', 'irrelevant', and 'incoherent' ideology would completely collapse, along with any semblance of psychological security it once provided.
 
Hmmm...I get where you're comin' from and what your gist is, but I'm not sure you understand what is meant when one speaks of a zombie of any variety, "P" or otherwise. Theoretically, a system could be developed that is able to modify its behavior by processing and integrating incoming data. It would be a 'zombie' if the system contained no (or virtually no) awareness of that data. No matter how sophisticated the IP system is, if it lacks consciousness it will eventually run down into behavioral ruts like the ones I listed earlier. If such an automaton were inorganic one would call it a robot; if it were made of organic components one would refer to it as a zombie. The basic principle is the same tho: it simply lacks that "vital spark" of awareness.
The p-zombie is defined as being indistinguishable from a normal person, just lacking consciousness/qualia/soul. If it were like the typical Hollywood zombie it wouldn't be of much use to philosophers, since it couldn't be used to support claims about us.

But to look at your type of zombie, what do you mean by "no awareness of that data"? If any agent wasn't aware of what it just did, it would easily get stuck -- not much good for anything. And it would be quickly recognizable by telling it something and then asking it what it just heard, for example.

Computation does not in and of itself produce consciousness; it simply provides it with organizational constraints to maximize its efficiency. Consciousness, by its very nature, is a capricious and virtually unpredictable element [i.e. the 'free' in 'free will']; if there are no organizational bounds placed upon it, it's too chaotic and ineffective. The key issue here when dealing with questions of consciousness isn't necessarily the computational capacities of a system, per se. Computation -- even the adaptive kind -- can occur independently of consciousness. What's pertinent is whether those functions are working in tandem with some -subjective- dimension as well, and identifying how to scientifically discern such a thing. It's that subjectivity that essentially distinguishes consciousness and, I suspect, life itself.
I hope you realize that what you are labeling "consciousness" isn't what most scientists would call it. There's no reason anything you describe here could not be due to cognitive abilities. "Free will" is only free in the general case, not in any specific case. That applies just as well to computers. And all sense data is subjective, as it is for computers: sometimes when I type the '6' key my computer subjectively perceives that as a '4'.

With that being said, I think that the foundational requisite for a system supporting consciousness would be the thermodynamic properties I mentioned earlier. I also suspect that any conscious system would have to embody such properties "from the bottom up". As of today, all technology we create (robotic or otherwise) depends on the "top down" imposition of conscious entities like us to create and maintain them. I think we'll have succeeded in creating synthetic consciousness [or at least be on the right track] when we learn how to create synthetic life.
I don't see the connection to life. Computers also follow thermodynamic laws, but I don't see what that has to do with consciousness or any other aspect of cognition.

As far as I know all humans also depend on the imposition of conscious entities like us to create and maintain them, at least for the first few years. It's our general problem-solving capability that gives us our edge.
 
AkuManiMani said:
Hmmm...I get where you're comin' from and what your gist is, but I'm not sure you understand what is meant when one speaks of a zombie of any variety, "P" or otherwise. Theoretically, a system could be developed that is able to modify its behavior by processing and integrating incoming data. It would be a 'zombie' if the system contained no (or virtually no) awareness of that data. No matter how sophisticated the IP system is, if it lacks consciousness it will eventually run down into behavioral ruts like the ones I listed earlier. If such an automaton were inorganic one would call it a robot; if it were made of organic components one would refer to it as a zombie. The basic principle is the same tho: it simply lacks that "vital spark" of awareness.

The p-zombie is defined as being indistinguishable from a normal person, just lacking consciousness/qualia/soul. If it were like the typical Hollywood zombie it wouldn't be of much use to philosophers, since it couldn't be used to support claims about us.

It's only a matter of degree, really. A 'p-zombie' just has a more sophisticated architecture than your typical Hollywood zombie -- sophisticated enough to fool people into thinking it has some 'interior' experience. In story and myth such entities are usually represented in the guise of monsters like vampires or as omnipotent AIs gone haywire. If you dim the consciousness of an actual person enough you can produce something quite similar; we generally call them psychopaths.

But, even in the case of psychopaths, there is at least some residual consciousness, tho its spectrum is limited to extremely shallow, dull, and/or unpleasant experiences. In the case of a p-zombie there is none at all.

But to look at your type of zombie, what do you mean by "no awareness of that data"? If any agent wasn't aware of what it just did, it would easily get stuck -- not much good for anything. And it would be quickly recognizable by telling it something and then asking it what it just heard, for example.

When I say "no awareness of the data" I mean just that. The data is taken in and processed to produce an output appropriate to some algorithmic rule-set with no subjective perception, evaluation, or interpretation of that data during that processing. Its all done reflexively and without sensation, feeling, or cognition or any sort. A p-zombie is an automaton that can supposedly pull all this off in a way sufficient to convince a naïve hypothetical human that it actually does experience all those things -- naïve being the operative word.


Computation does not in and of itself produce consciousness; it simply provides it with organizational constraints to maximize its efficiency. Consciousness, by its very nature, is a capricious and virtually unpredictable element [i.e. the 'free' in 'free will']; if there are no organizational bounds placed upon it, it's too chaotic and ineffective. The key issue here when dealing with questions of consciousness isn't necessarily the computational capacities of a system, per se. Computation -- even the adaptive kind -- can occur independently of consciousness. What's pertinent is whether those functions are working in tandem with some -subjective- dimension as well, and identifying how to scientifically discern such a thing. It's that subjectivity that essentially distinguishes consciousness and, I suspect, life itself.

I hope you realize that what you are labeling "consciousness" isn't what most scientists would call it. There's no reason anything you describe here could not be due to cognitive abilities. "Free will" is only free in the general case, not in any specific case. That applies just as well to computers. And all sense data is subjective, as it is for computers: sometimes when I type the '6' key my computer subjectively perceives that as a '4'.


Not all sense data is subjective -- as is the case with blindsight. The difference, in this case, would be blindsight without the sight -- just a reflexive response to stimuli.

With that being said, I think that the foundational requisite for a system supporting consciousness would be the thermodynamic properties I mentioned earlier. I also suspect that any conscious system would have to embody such properties "from the bottom up". As of today, all technology we create (robotic or otherwise) depends on the "top down" imposition of conscious entities like us to create and maintain them. I think we'll have succeeded in creating synthetic consciousness [or at least be on the right track] when we learn how to create synthetic life.

I don't see the connection to life. Computers also follow thermodynamic laws, but I don't see what that has to do with consciousness or any other aspect of cognition.

Of course, they're subject to thermodynamic laws -- that's not the point. What I'm saying is that living systems -autonomously push against- thermodynamic equilibrium in a self-sustaining manner rather than run down as a desktop or robot would. This thermodynamic bootstrapping is something living things perform at every level of their organization. It's this property that I'm saying is a necessary requisite for a consciousness-compatible system.

As far as I know all humans also depend on the imposition of conscious entities like us to create and maintain them, at least for the first few years. It's our general problem-solving capability that gives us our edge.

Your body depends upon the imposition of your consciousness to maintain it. Once that consciousness is entirely gone it's called a rotting corpse, and it runs down thermodynamically like any other lump of organic materials.
 
It's only a matter of degree, really. A 'p-zombie' just has a more sophisticated architecture than your typical Hollywood zombie -- sophisticated enough to fool people into thinking it has some 'interior' experience. In story and myth such entities are usually represented in the guise of monsters like vampires or as omnipotent AIs gone haywire. If you dim the consciousness of an actual person enough you can produce something quite similar; we generally call them psychopaths.

But, even in the case of psychopaths, there is at least some residual consciousness, tho its spectrum is limited to extremely shallow, dull, and/or unpleasant experiences. In the case of a p-zombie there is none at all.
I'm curious as to where you got your concepts about p-zombies and psychopaths -- they don't agree with what I've learned. Could you provide a reference or two?

When I say "no awareness of the data" I mean just that. The data is taken in and processed to produce an output appropriate to some algorithmic rule-set with no subjective perception, evaluation, or interpretation of that data during that processing. Its all done reflexively and without sensation, feeling, or cognition or any sort. A p-zombie is an automaton that can supposedly pull all this off in a way sufficient to convince a naïve hypothetical human that it actually does experience all those things -- naïve being the operative word.
Would naïve mean not even giving them the test I suggested: tell them something and ask what you told them? If so, I don't see that this sort of p-zombie is of use in arguments. My computer already does better than that.
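To spell that test out, here's a minimal sketch of my own (the class and method names are made up purely for illustration) of a program that trivially passes "tell it something, then ask what you told it":

# Minimal sketch of the "tell it, then ask it" memory test.
# All names here are illustrative; this is not any real chat-bot framework.

class TrivialAgent:
    """Stores whatever it is told and repeats it back on request."""

    def __init__(self):
        self.heard = []          # running log of everything it was "told"

    def tell(self, statement):
        self.heard.append(statement)
        return "Noted."

    def ask_what_you_heard(self):
        if not self.heard:
            return "You haven't told me anything yet."
        return "You just told me: " + self.heard[-1]


if __name__ == "__main__":
    agent = TrivialAgent()
    agent.tell("The meeting is at noon.")
    print(agent.ask_what_you_heard())   # -> You just told me: The meeting is at noon.

Passing this obviously proves nothing about consciousness; the point is only that my computer already clears this bar, so a "zombie" that couldn't isn't much of a philosophical threat.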

Not all sense data is subjective -- as is the case with blindsight. The difference, in this case, would be blindsight without the sight -- just a reflexive response to stimuli.
Likewise there are many normal paths that are reflexive and don't pass through consciousness. The difference is detectable by simple testing: that sense data can't be learned and then recalled at a later time.

Of course, they're subject to thermodynamic laws -- that's not the point. What I'm saying is that living systems -autonomously push against- thermodynamic equilibrium in a self-sustaining manner rather than run down as a desktop or robot would. This thermodynamic bootstrapping is something living things perform at every level of their organization. It's this property that I'm saying is a necessary requisite for a consciousness-compatible system.
I don't understand the difference: both a computer's and a living thing's energy storage runs down when its energy supply is cut off. My laptop does what it can to reverse the situation: it beeps and puts up a "low battery" message -- a behavior that usually succeeds in maintaining its equilibrium.
 
That could be correct.

Why wouldn't Psi (parapsychology) work as well as the explanation?
Two main reasons. First, it's unnecessary; we already have a simpler explanation that accounts for the observed facts. Second, it's based on an unstated premise, i.e. that psi exists (which it doesn't).
 
It's only a matter of degree, really. A 'p-zombie' just has a more sophisticated architecture than your typical Hollywood zombie -- sophisticated enough to fool people into thinking it has some 'interior' experience. In story and myth such entities are usually represented in the guise of monsters like vampires or as omnipotent AIs gone haywire. If you dim the consciousness of an actual person enough you can produce something quite similar; we generally call them psychopaths.

But, even in the case of psychopaths, there is at least some residual consciousness, tho its spectrum is limited to extremely shallow, dull, and/or unpleasant experiences. In the case of a p-zombie there is none at all.

I'm curious as to where you got your concepts about p-zombies and psychopaths -- they don't agree with what I've learned. Could you provide a reference or two?

I've gotten my familiarity with p-zombies from talks and philosophical discussions like this one (and by watching a couple of Chalmers's lectures); my comments about psychopaths are drawn from those I know personally, and from reading the literature on character disorders. The wording, paraphrasing, parallels, and interpretation are all my own.

[Come to think of it, I suppose the drawing of broad, metaphorically based connections to reach novel conclusions could be considered evidence of consciousness]

When I say "no awareness of the data" I mean just that. The data is taken in and processed to produce an output appropriate to some algorithmic rule-set with no subjective perception, evaluation, or interpretation of that data during that processing. Its all done reflexively and without sensation, feeling, or cognition or any sort. A p-zombie is an automaton that can supposedly pull all this off in a way sufficient to convince a naïve hypothetical human that it actually does experience all those things -- naïve being the operative word.

Would naïve mean not even giving them the test I suggested: tell them something and ask what you told them? If so, I don't see that this sort of p-zombie is of use in arguments. My computer already does better than that.

My point is that if a simple chat bot, like the one Malerin posted a link to, can answer queries in a syntactically appropriate manner, a hypothetical p-zombie could accomplish the same and more. Just to be clear, I do not think that p-zombies (i.e. perfectly indistinguishable simulacra of conscious individuals) are possible in practice. The ability to pull off such a deception would depend upon the level of sophistication of its programming [it's practically impossible to 'perfectly' simulate any physical system] and the level of discernment possessed by the conscious individual assessing it [sooner or later you'll come across someone sharp enough to recognize that something's fishy].

Not all sense data is subjective -- as is the case with blindsight. The difference, in this case, would be blindsight without the sight -- just a reflexive response to stimuli.
Likewise there are many normal paths that are reflexive and don't pass through consciousness. The difference is detectable by simple testing: that sense data can't be learned and then recalled at a later time.

Data can easily be conditioned into and stored in a neural net, or some other IP system, without there necessarily being any subjective experience of said data. I don't think that tests of recall or rote would be effective tests of consciousness. What one should look for is the ability to self-generate novel behavior and sense-making, without reliance on external prompting or explicit programming.
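To make concrete what I mean by conditioning data into a net, here's a toy sketch of my own (not anyone's actual model): a tiny Hebbian associative memory that stores and recalls stimulus-response pairs purely by rote.

# Toy Hebbian associative memory -- my own illustration, nothing more.
import numpy as np

# Bipolar (+1/-1) stimulus patterns and the responses we want associated with them.
stimuli = np.array([[ 1,  1,  1,  1],
                    [ 1, -1,  1, -1]])
responses = np.array([[ 1, -1],
                      [-1,  1]])

# Hebbian learning: the weight matrix is just the sum of outer products.
W = sum(np.outer(r, s) for s, r in zip(stimuli, responses))

def recall(stimulus):
    """Reflexively map a stimulus to a response through the learned weights."""
    return np.sign(W @ stimulus)

for s, r in zip(stimuli, responses):
    print(recall(s), "expected", r)   # the stored associations come back intact

It "learns" and "recalls" perfectly well, yet there is nothing here anyone would be tempted to call experience -- which is why recall alone can't be the test.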

Of course, they're subject to thermodynamic laws -- that's not the point. What I'm saying is that living systems -autonomously push against- thermodynamic equilibrium in a self-sustaining manner rather than run down as a desktop or robot would. This thermodynamic bootstrapping is something living things perform at every level of their organization. It's this property that I'm saying is a necessary requisite for a consciousness-compatible system.

I don't understand the difference: both a computer's and a living thing's energy storage runs down when its energy supply is cut off. My laptop does what it can to reverse the situation: it beeps and puts up a "low battery" message -- a behavior that usually succeeds in maintaining its equilibrium.

At this point, our computers cannot perform physical self-maintenance without guidance from a conscious human, nor can they metabolize. All organisms contain programmed feedback mechanisms, to be sure, but they also show a degree of proactive behavior not explicitly programmed into them -- some to greater extents than others. It's this proactivity that sets living/conscious systems apart from present-day technological systems.

While organisms merely utilize their physiological and behavioral programs as organizational anchors, machines, on the other hand, have no operational extent or existence beyond their programming. Their maintenance and directionality must constantly be propped up by conscious human effort -- they lack their own inherent consciousness. Our machines do not thermodynamically push themselves "uphill"; they must continually -be pushed-. This is true even when they are provided with a continual source of energy.
 
Are you aware of when you are conscious?
I think so - it's an awkward question because the words are used in various ways depending on context, but it seems to me, in this context, consciousness is an awareness of self, the construct that provides identity.

At this point I don't really care how this internal state (or lack of it) may appear to another. I simply want to know what "happens" to turn a bunch of atoms/quarks/superstrings from a system that has no "feeling" (perhaps like Pixy, judging by the (lack of) evidence he is presenting? :eye-poppi) into something that would call itself conscious - if it had the ability to do that, which it quite possibly doesn't!
I think consciousness as we experience it, i.e. our self-awareness, requires at least a system that can model and predict the behaviour of others (i.e. has a 'Theory Of Mind') and that can generate a narrative sequence to causally explain their behaviour. These capabilities can be used reflectively to generate an internal model of our own behaviour and a similar narrative. This is the source of consciousness, the reflective modelling of our internal state, and the generation of a narrative to explain it and predict future behaviours.

It would seem reasonable to me that the development of complex social behaviours in multi-role social groups would be a driver for the development of Theory Of Mind and narrative generation, and vice-versa - a co-evolution of external behavioural and internal neurological complexity.
 
What is psi?
As commonly used, the term references a class of interactions ostensibly between the mind (or brain) and other minds or physical matter, for which there is neither evidence that the interaction occurs nor any plausible mechanism. Psi is distinguished from other forms of magic in that no mechanism other than the mind itself is postulated.
 
That could be correct.

Why wouldn't Psi (parapsychology) work as well as the explanation?


Overlords from the Kappa sector sending signals to the brains of folks with occipital lobe strokes would work just as well, as would invisible faeries whispering the location of lights into the ears of the afflicted.

I have no need of that hypothesis. There is an easy way to test it. Cut the visual fibers that travel to the superior colliculus and see if the phenomenon disappears. If so, I think the psi hypothesis fails.
 