Has consciousness been fully explained?

Status
Not open for further replies.
I have no objection to reading GEB, and I may get around to it eventually, but the thing is, if I want to read about cognitive neuroscience, I'd rather pick up Gazzaniga.
Now that's almost a facepalm, right there. GEB is about so much more than that, I wouldn't even know where to begin. Whenever you get around to it. You won't be sorry, I promise.
 
Okay then: Corpus callosotomy studies.
Please explain how you think it supports your claim.

That's a hopelessly vague description of psychology.
It's a good enough description for many psychologists. Only the psychologists who claim to study what is going on "in the mind" might disagree, and those tend to be woo.

Why yes, in fact it does.
No, it doesn't. All it proves is that changes in the brain affect consciousness, and that therefore the brain has an important role in the process.

Argument by bad analogy.
Explain why you think those are bad analogies.

It would be a foolish claim if I hadn't just established its truth.
Which you haven't.

Of course. But as far as we can tell, the spleen is not involved in the direct mechanism that generates Sofia events.
I don't know what you mean with "Sofia events".

I mean, if you take that thinking far enough, then the entire universe is part of the organ that creates consciousness.
Perhaps. I do dabble a bit in pantheism, so I don't think thinking that far is necessarily a bad thing.

I also think that whatever consciousness is, it depends at least in part on how quickly something can react to stimuli. The quicker the reaction, the more "conscious" something is of the stimulus; a universe in which a stimulus takes hundreds of thousands of years to get from one node to another is not really all that conscious compared to a human being. I have no problem acknowledging that human or animal consciousness is something of a very different nature than any other thing we might call "consciousness", and I am convinced that only things within a single body are connected enough to display "consciousness as we know it".

We know that it's possible to be consciously aware without any external sensory input, and without any input from the bodily extremities. It's called dreaming.
When we are dreaming we aren't without any external sensory input (an alarm clock would be useless if we were), and certainly not without input from the rest of the body. We're usually only barely conscious, and if we are, it's only for the few minutes at a time that we are dreaming. All a dream appears to be is sensory input received during wakefulness continuing to have an effect.

The thing is, you can damage any area outside of the brain, and as long as the body stays alive, there's no indication it has any effect on consciousness.
I don't think this is true. Hurt someone hard enough, and s/he might lose consciousness. Hormones are known to have an effect on consciousness, and they aren't all produced in the brain. Every change in the body likely has some effect on consciousness; changes in some places -- such as the brain -- may have a greater effect than others. Influencing what happens in one airplane affects air travel; influencing what happens in the air traffic control tower may affect air travel much more profoundly. Still, air travel is not something confined to the control tower. All the control tower does is make sure it happens in an orderly fashion.
 
My disagreement w/ PM regarding the pen-and-paper brain is specifically over the issue of whether it would generate an instantiation of conscious awareness.

Pixy and I agree on many things, but not that.
I often think of similar instances as if they were instantiations of conscious awareness, but the validity of that is highly limited. Much as it's easy to assign a temperature to a single air molecule, based on its velocity relative to you, yet the validity is similarly limited there too, as I'll explain more below on emergence. However, as a conceptual device it's not totally invalid either, at least for operational intelligence like ant colony intelligence versus ant intelligence.

I do think such conceptual devices fail to an even higher degree in relation to human consciousness, as I see human consciousness not just as an emergent phenomenon, but as a hierarchy of emergent phenomena, with several layers of abstraction between consciousness and the mechanistic underpinnings. So I agree with you there. Yet the process-versus-state issue is sticky at a foundational level.

Also, Pixy has several times taken the stance that consciousness is a true emergent property, like the whiteness of clouds, which requires no specific mechanism (and I use that term extremely broadly, btw) but which merely arises as a result of the presence of self-referential information processing in any form.

Ergo, thermostats may be conscious, and for all we know your computer is conscious right now.
I like studying emergent properties, and consciousness falls in that category. The most important thing to understand about emergence is that just because something is an emergent property does not mean it requires no definable mechanism. It is only when you define the raw emergent property, coarse-graining out the ensemble of mechanistic specifics that produced it, that it looks like something devoid of meaning in the ensemble of parts. Emergence does require specific definable mechanisms. Yet the complexity can blow up really fast, as I'll demonstrate.

Consider parts A and B. An ensemble of A produces an emergent property Ae. An ensemble of B produces an emergent property Be. Then you have ensembles of emergent properties, such that an ensemble of Ae is Ae2, and so on. Then you also have to consider conservation laws that produce, for instance, thermodynamic laws. Thus any emergent property associated with a state variable has a range of emergent properties with symmetries that define how they relate to each other. The complexity grows exponentially, even for fairly basic systems. Thus it is often easier to say we have this magic emergent property X, without specifically defining the mechanism producing it. Yet the fact remains it does have a definable mechanism, or hierarchy of mechanisms.
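To make the A/Ae/Ae2 hierarchy concrete, here's a purely illustrative toy in code. The "parts" and the coarse-graining rule (a temperature-like mean of squared values) are invented for the example; the point is only that each level's emergent number is computed by discarding the details of the level below:

```python
import random

def emergent_property(parts):
    # Coarse-grain an ensemble into a single number (a
    # temperature-like mean of squares); the identities of the
    # individual parts are discarded at this level.
    return sum(p * p for p in parts) / len(parts)

random.seed(0)
# Level 0: raw parts (toy "molecular velocities")
parts = [[random.gauss(0, 1) for _ in range(1000)] for _ in range(10)]

# Level 1: each ensemble of parts yields one emergent value (Ae)
level1 = [emergent_property(p) for p in parts]

# Level 2: the ensemble of level-1 values itself has an emergent
# property (Ae2), computed by the same coarse-graining again
level2 = emergent_property(level1)

print(level1)
print(level2)
```

Notice that `level2` knows nothing about the million level-0 parts underneath it, yet it is fully determined by them through a definable mechanism at each level.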

To the issue of "self-referential information processing", I'm not sure where to start. The concept of self-reference is fundamentally important from the foundations of physics, as in RQM, to high-level emergent phenomena. Even what does and doesn't constitute an emergent property can hinge on what subset you define 'as if' it had a singular identity, a self to refer to. This is often useful, as certain emergent properties in a hierarchy can be ignored at higher levels by redefining the set identities, or self. The case of human consciousness is a self-identity requiring a set of properties whose minimum complexity is quite large. Our capacity to navigate associative hierarchies entails that our awareness can't skip over certain hierarchies in brain function, as described above, else our awareness would by definition be stuck at only certain levels of that associative hierarchy.

Consider the stunts of Derren Brown, where he depends on people acting on associations they weren't aware of making. Our brains are sorting a complete hierarchy of associations, in which we are only aware of a small subset we pay attention to. Yet the ignored information contained in the full complement of associations is not lost, it plays a role in our actions.

This is why we can't describe the mind in terms of inputs and outputs: the processes between those inputs and outputs are themselves inputs and outputs, such that we can reference back to any subset of that process. An abacus cannot, by any stretch of the principles, do this. However, molds and plants have some limited capacity to do this, even in the absence of neurons. That's another reason, besides their being distinctly mechanical, that I use metronomes as a toy model, where the synchronization potential becomes the salient feature for the emergent behavior. The RFID idea is more realistic, as metronomes, even overlooking engineering issues, are strictly limited in dimensionality. The brain's neuron maps can be modeled as if they were taking advantage of thousands of dimensions, defined by connectivity.
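The metronome toy model can be made concrete with something like the Kuramoto model of coupled oscillators, the standard idealization of that kind of synchronization. The parameter values here are arbitrary choices of mine; the sketch only shows the salient emergent feature, synchrony arising from coupling that no single oscillator contains:

```python
import math
import random

def kuramoto_step(phases, omegas, K, dt):
    # Each oscillator is nudged toward the others' phases; the
    # coupling strength K plays the role of "synchronization
    # potential" between the metronomes.
    n = len(phases)
    new = []
    for i in range(n):
        coupling = sum(math.sin(pj - phases[i]) for pj in phases) / n
        new.append(phases[i] + (omegas[i] + K * coupling) * dt)
    return new

def order_parameter(phases):
    # r in [0, 1]: 0 means incoherent, 1 means fully synchronized.
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

random.seed(1)
n = 50
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
omegas = [random.gauss(1.0, 0.1) for _ in range(n)]  # similar, not identical

r_start = order_parameter(phases)
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, K=2.0, dt=0.01)
r_end = order_parameter(phases)

print(r_start, r_end)
```

Starting from random phases, the order parameter climbs toward 1 as the population locks together, which is the emergent behavior the metronome model is meant to isolate.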
 
My disagreement w/ PM regarding the pen-and-paper brain is specifically over the issue of whether it would generate an instantiation of conscious awareness.

I think that the time frame issue is important; certainly pen and paper would not meet the medical definition.
 
Now that's almost a facepalm, right there. GEB is about so much more than that, I wouldn't even know where to begin. Whenever you get around to it. You won't be sorry, I promise.

Dude... I wasn't being sarcastic when I said I have no objection to it and might get around to reading it. I'm not trying to diss GEB (although I seriously doubt it will change my life).

That said, if I want to read cognitive neuroscience, it ain't my first choice.
 
I like studying emergent properties, and consciousness falls in that category. The most important thing to understand about emergence is that just because something is an emergent property does not mean it requires no definable mechanism. It is only when you define the raw emergent property, coarse-graining out the ensemble of mechanistic specifics that produced it, that it looks like something devoid of meaning in the ensemble of parts. Emergence does require specific definable mechanisms. Yet the complexity can blow up really fast, as I'll demonstrate.

Yes, I know.

But Pixy and I don't disagree on that point.

I think you're missing out on a lot of the background discussion, which spans several threads.

Pixy treats consciousness as a true emergent property of self-referential information processing per se. In other words, wherever SRIP exists, consciousness arises.

My point is that consciousness is not a true emergent property of SRIP, even if it can be viewed as an emergent property of the brain.

It is not like the whiteness of clouds. It does not, as some have asserted on these threads, arise simply from a critical mass of neurons.
 
My disagreement w/ PM regarding the pen-and-paper brain is specifically over the issue of whether it would generate an instantiation of conscious awareness.

Pixy and I agree on many things, but not that.

Also, Pixy has several times taken the stance that consciousness is a true emergent property, like the whiteness of clouds, which requires no specific mechanism (and I use that term extremely broadly, btw) but which merely arises as a result of the presence of self-referential information processing in any form.

Ergo, thermostats may be conscious, and for all we know your computer is conscious right now.

Can someone point back to this claim? I've searched and the only reference I see so far described it as a consequence of what was said by someone else, rather than a direct claim. It even appears to be disavowed in a Pantheism thread based on a similar analogy involving thermostats.

I could be wrong, but I get the feeling this is a more generalized version of my issues with the Church-Turing thesis, in which the response appeared very reasonable.
 
When we are dreaming we aren't without any external sensory input (an alarm clock would be useless if we were), and certainly not without input from the rest of the body. We're usually only barely conscious, and if we are only for the few minutes at the time that we are dreaming. All a dream appears to be is sensory input received during wakefulness continuing to have an effect.

You're missing the point. Non-conscious gatekeeper daemons determine whether or not the sensory input will work its way into the dream. We often dream with no external sensory input or awareness of our bodies.

And we are conscious whenever we dream (despite the colloquial usage of "not conscious" to refer to sleep).

Dreaming is interesting because if it were just sorting and associating, the brain could do that without wasting the resources of also cranking up a conscious experience. So our awareness must serve some purpose.

If it's true that consciousness evolved in order to handle higher-level decisions that non-conscious functions can't handle, or to resolve conflicts among them, then this would indicate that making decisions is somehow critical in the function(s) served by dreaming.
 
Yes, I am missing some background. I try to avoid it, but it's hard not to get cross-eyed trying to get everything.
 
Can someone point back to this claim? I've searched and the only reference I see so far described it as a consequence of what was said by someone else, rather than a direct claim. It even appears to be disavowed in a Pantheism thread based on a similar analogy involving thermostats.

As I said, it spans several threads. That claim comes from a thread discussing what would happen to a conscious machine if its processing speed were slowed to a crawl.

Pixy, Rocket Dodger, and drkitten (iirc) all supported the notion that a pen-and-paper brain would literally be conscious, based on Turing-Church.
 
I don't think this is true. Hurt someone hard enough, and s/he might lose consciousness. Hormones are known to have an effect on consciousness, and they aren't all produced in the brain.

If the harm is isolated outside the brain (e.g., doesn't even affect blood flow to the brain) then consciousness is unaffected. Ditto hormones, if they have no direct or indirect effects on the brain.
 
Dude... I wasn't being sarcastic when I said I have no objection to it and might get around to reading it. I'm not trying to diss GEB (although I seriously doubt it will change my life).

That said, if I want to read cognitive neuroscience, it ain't my first choice.
Okay, one more try. I know you weren't being sarcastic, and I know you weren't dissing GEB. What I'm trying to convey is that GEB is NOT a book about cognitive neuroscience. I find it frustrating that I cannot seem to find the words to concisely express what it IS about, but that's not your fault, and it's not the first time it's happened. In a delightfully ironic way, trying to describe what GEB is about encounters some of the very same difficulties which (Hofstadter himself argues) one encounters when trying to describe complex processes like information processing, intelligence, cognition, etc.: finding the right level on which to approach the question. Does GEB involve cognitive neuroscience? Absolutely. But it's just as much about formal systems in mathematics, encoding in DNA, self-reference, and a ton of other stuff. I don't think it would be completely mistaken to regard it as a work in Philosophy. It's one of those books that you just gotta read.

Anyway, carry on, and I won't butt in about it again.
 
As I said, it spans several threads. That claim comes from a thread discussing what would happen to a conscious machine if its processing speed were slowed to a crawl.

Pixy, Rocket Dodger, and drkitten (iirc) all supported the notion that a pen-and-paper brain would literally be conscious, based on Turing-Church.
Here two concepts appear to be mixed, from my reading. If consciousness exists prior to slowing down, it exists regardless of how much you slow it down, even if it takes a year for it to give a yes response. In fact this actually occurs to us in gravitational time dilation. Unlike special relativity, there is no clock paradox in general relativity. Time dilation is physically equivalent to simply making everything physically slower.

Now, tying this to a pen-and-paper brain entails even more assumptions about consciousness. If you bypass those assumptions by simply assuming consciousness exists at normal speed, then they are absolutely right.
 
My disagreement w/ PM regarding the pen-and-paper brain is specifically over the issue of whether it would generate an instantiation of conscious awareness.

Pixy and I agree on many things, but not that.

Also, Pixy has several times taken the stance that consciousness is a true emergent property, like the whiteness of clouds, which requires no specific mechanism (and I use that term extremely broadly, btw) but which merely arises as a result of the presence of self-referential information processing in any form.

Ergo, thermostats may be conscious, and for all we know your computer is conscious right now.


Please don't anthropomorphize thermostats or computers. They hate that.
 
To the question of "where is consciousness" (to which "the brain" seems to be a very good working answer), I would add what I think is a second key question: "When is consciousness?"

Assuming that such thing as a universal present moment exists (physics cannot seem to find it), does consciousness occur in that present moment? In other words, do we actually experience conscious awareness, or do we only remember experiencing conscious awareness? Or is the distinction not even meaningful?

I can certainly say with confidence that without memory, we do not remember experiencing conscious awareness. (Obviously.) Hence, drugs that block the formation of memories can be (and are) used as anesthetics. But is not remembering conscious awareness equivalent to not experiencing conscious awareness? It could very well be, if memory storage and recall is an intrinsic part of the relevant self-referential loop.

If that were the case, we would expect to observe a delay between e.g. the making of a decision, and conscious awareness of having made the decision -- which we do in fact observe, apparently, as best I can read the relevant research.

If conscious awareness requires functioning memory, then a person with no capability to form new memories would be one possible type of p-zombie. However, not really, because the lack of working memory also impairs overall cognition to the point of not being able to respond as a conscious person does, which then does not qualify as a p-zombie. So, p-zombies remain an impossibility.

Thermostats and op amp circuits employ self-referential information processing, but have no memory. If memory is a required element of conscious experience, then they don't have it.

We could, however, add to a thermostat a memory -- say, a running record of temperature readings and times at which the thermostat turned off and on. A "write-only" memory that the thermostat could not access (such as running printout on paper) wouldn't qualify, though; the thermostat would also need to be able to review that memory, and have a reason for doing so. For example, we could program the thermostat to continually review and analyze its records, with higher weighting given to more recent records, matched against the most recent records up to the present, to compute the most likely most efficient timing for its next switching on or off. To do that it needs some kind of pattern matching capability to perform the comparisons, and some way of assessing the results of its actions (was the outcome as expected or otherwise; was the timing "good" or "bad"). Those assessments then become part of the memory record.
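That review-and-assess loop can be sketched in code. This is only a toy illustration under my own assumptions: the class name, the recency weighting, and the "good/bad" rule (did the last action move the temperature toward the setpoint?) are all invented for the example, not a claim about how such a device must work:

```python
class MemoryThermostat:
    """Toy thermostat that keeps a reviewable record of its readings
    and actions, and uses a recency-weighted review of that record to
    bias its next switching decision."""

    def __init__(self, setpoint, band=1.0):
        self.setpoint = setpoint
        self.band = band
        self.on = False
        self.memory = []  # (temperature, action, assessment) records

    def assess(self, temp):
        # Self-assessment of the last action: did the temperature move
        # toward the setpoint ("good") or away from it ("bad")?
        if not self.memory:
            return "n/a"
        prev_temp, _, _ = self.memory[-1]
        closer = abs(temp - self.setpoint) <= abs(prev_temp - self.setpoint)
        return "good" if closer else "bad"

    def review(self):
        # Recency-weighted review of past assessments: recent "bad"
        # outcomes count more than old ones.
        score = 0.0
        for age, (_, _, verdict) in enumerate(reversed(self.memory[-20:])):
            if verdict == "bad":
                score += 1.0 / (1 + age)  # newer records weigh more
        return score

    def step(self, temp):
        verdict = self.assess(temp)
        # Bad recent outcomes widen the switching band slightly, so the
        # thermostat's own history shapes its next decision.
        margin = self.band + 0.1 * self.review()
        if temp < self.setpoint - margin:
            self.on = True
        elif temp > self.setpoint + margin:
            self.on = False
        self.memory.append((temp, self.on, verdict))
        return self.on

# Crude usage: a room that warms while heating and cools otherwise.
t = MemoryThermostat(setpoint=20.0)
temp = 15.0
for _ in range(100):
    heating = t.step(temp)
    temp += 0.2 if heating else -0.1
print(len(t.memory), round(temp, 1))
```

The key difference from an ordinary thermostat is that the memory is not write-only: every decision reads back over the record of past decisions and their assessments.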

Then, by my current working definition of conscious experience (the existence of self actions in a narrative constructed from memory and under evaluation), the thermostat would be conscious. Though it would be an experience that we probably cannot imagine in comparison with our own.

Respectfully,
Myriad
 
No. I'm afraid I didn't.

Well, it is easier if you work through the issue on your own. Think about this:

1) what is the difference between a cell and a rock?

2) what are the behaviors exhibited by a cell that are responsible for that difference?

3) what cellular systems are responsible for those behaviors?

4) how do those systems work, on a molecular scale?

5) how do those molecules work, on a quantum scale?

The point is that a cell and a rock behave differently. Period. There is no denying this (no matter how much people try to ... ). Now you might be able to add more recursions than I did here (I only have 4 levels of recursion above) but eventually you should reach a point where you are comfortable agreeing that somewhere "in there" the difference lies and we should be able to identify and define it.

That is, we know a cell and a rock are different. Somehow. We also know that cells and rocks are made of the exact same species of fundamental particles, which we know behave identically, quantum randomness notwithstanding. So at the bottom level of recursion cells and rocks are the same. At the top level, they are very different. What happens in between, that leads to such a divergence in behavior?

Note that westprog's argument is that since they are the same at the bottom level of recursion, they are somehow necessarily the same at the top level as well, despite his admission that cells and rocks are clearly not the same. *confused*
 
To the question of "where is consciousness" (to which "the brain" seems to be a very good working answer), I would add what I think is a second key question: "When is consciousness?"

Assuming that such thing as a universal present moment exists (physics cannot seem to find it), does consciousness occur in that present moment? In other words, do we actually experience conscious awareness, or do we only remember experiencing conscious awareness? Or is the distinction not even meaningful?

I can certainly say with confidence that without memory, we do not remember experiencing conscious awareness. (Obviously.) Hence, drugs that block the formation of memories can be (and are) used as anesthetics. But is not remembering conscious awareness equivalent to not experiencing conscious awareness? It could very well be, if memory storage and recall is an intrinsic part of the relevant self-referential loop.

If that were the case, we would expect to observe a delay between e.g. the making of a decision, and conscious awareness of having made the decision -- which we do in fact observe, apparently, as best I can read the relevant research.

If conscious awareness requires functioning memory, then a person with no capability to form new memories would be one possible type of p-zombie. However, not really, because the lack of working memory also impairs overall cognition to the point of not being able to respond as a conscious person does, which then does not qualify as a p-zombie. So, p-zombies remain an impossibility.

Thermostats and op amp circuits employ self-referential information processing, but have no memory. If memory is a required element of conscious experience, then they don't have it.

We could, however, add to a thermostat a memory -- say, a running record of temperature readings and times at which the thermostat turned off and on. A "write-only" memory that the thermostat could not access (such as running printout on paper) wouldn't qualify, though; the thermostat would also need to be able to review that memory, and have a reason for doing so. For example, we could program the thermostat to continually review and analyze its records, with higher weighting given to more recent records, matched against the most recent records up to the present, to compute the most likely most efficient timing for its next switching on or off. To do that it needs some kind of pattern matching capability to perform the comparisons, and some way of assessing the results of its actions (was the outcome as expected or otherwise; was the timing "good" or "bad"). Those assessments then become part of the memory record.

Then, by my current working definition of conscious experience (the existence of self actions in a narrative constructed from memory and under evaluation), the thermostat would be conscious. Though it would be an experience that we probably cannot imagine in comparison with our own.

Respectfully,
Myriad


Is the cruise control on my car conscious?
 
Anyway, carry on, and I won't butt in about it again.

Oh, that's ok. Although I admit I'm a bit surprised by the level of enthusiasm expressed by some for this book on this thread. I usually don't encounter that level of devotion outside of Ayn Rand devotees. (Which is not meant in any way to compare GEB to the works of Ayn Rand.)

Believe me, I've had this book recommended to me many times, and I'm generally familiar with its outline. It's just that there has always been something more pressing to read at the time.

Fwiw, I'll offer you my own recommendation. If you're interested in accessible writing on cognitive neuroscience, try Michael Gazzaniga.
 
If you bypass those assumptions by simply assuming consciousness exists at normal speed, then they are absolutely right.

Depends on what the actual mechanism turns out to be, of course.
 
