Consciousness, the Global Workspace, and Integrated Information Theory
Responding to the replies to my long post piecemeal is a bit like getting nibbled to death by ducks, so I’m going to try a follow-up post that will hopefully clear up some questions.
To do this, I’m going to start with an article by Giulio Tononi and David Balduzzi called “Toward a Theory of Consciousness” (from “The Cognitive Neurosciences”, 4th ed., MIT 2009) describing their integrated information theory (IIT), and then apply that to the biological model I described earlier.
You might be surprised that I begin by citing information theorists, but I actually have no problem with information theory... it’s extremely useful... I only have a problem with
misapplications of info theory, such as taking the metaphors literally, or attempting to apply theories which describe non-conscious activity to conscious activity as though they were identical.
Now the first thing to note is that Tononi and Balduzzi – who are among a group of researchers at the leading edge of consciousness studies – call their article “
Toward a Theory of Consciousness”.
They do this because they know full well that
there currently is no theory which explains consciousness. (Those of you here who believe that you have such a theory, please contact these guys and share your research – I’m sure they’ll be glad to have it.)
Let’s start with the opening paragraph:
Tononi and Balduzzi said:
Consciousness poses two related problems. The first is to determine which features of the brain determine the extent to which consciousness is present. For example, why are certain corticothalamic circuits important to conscious experience, whereas cerebellar circuits are not, although the number of neurons in the two structures is comparable and their neurobiological organization is similarly complicated? And why is consciousness strikingly reduced during deep slow-wave sleep or during absence seizures, despite high levels of neural firing?
First, we should note that the
distinction between processes involved in conscious experience (which I will simply refer to as “experience” here) and those involved in non-conscious activity is accepted from the get-go. That’s because it has been amply confirmed by common sense, observation, and testing.
It is also accepted that
consciousness is a specialized function of the brain – in other words, you’re not conscious just because you have a brain – and that
consciousness does not merely “emerge” as a result of some critical mass of neurons. This is not an assumption, but is based on decades of research and experimentation.
The puzzle, rather, is to figure out how to determine if a person (or animal) is conscious at all – is any experience happening? – and to what extent – is this person (or animal) alert and awake, or only vaguely conscious of self and world?
This understanding is especially important for figuring out whether patients in apparent vegetative states are aware or not.
Tononi and Balduzzi said:
The second problem of consciousness is to understand what features of the brain determine the specific way consciousness is experienced – what is responsible for, say, the “redness” of red? We know that the activity of specific cortical areas contributes specific dimensions of conscious experience – auditory cortex to sound, visual cortex to shapes and colors. Why is this so?
These two sentences combine two perennial puzzles commonly known as the “neural correlates of consciousness” (NCC) and the “hard problem” of consciousness.
NCCs are simply the physical states of the brain (what it’s doing) when we are aware of various things... what is the brain doing when we smell cinnamon versus smelling mildew, for example.
The “hard problem” – the Holy Grail of consciousness research – is to answer the question “
Why do the NCCs correlate with the particular experiences they do?”
I mean, if we were to find out that for every human being, the smell of cinnamon happens when the brain is doing one particular combination of activities, we still might not know, from that fact alone,
why that state makes us experience the smell of cinnamon instead of having some other experience.
Tononi and Balduzzi propose that the generation of “integrated information” is key to answering some of these questions.
They begin by observing that:
Tononi and Balduzzi said:
Consciousness has a physical substrate and... the physical substrate must be working in the proper way for us to be fully conscious – it is enough to fall asleep, receive a blow on the head, or take certain drugs such as anesthetics to affect our consciousness dramatically.
This, in a nutshell, is the biological model of consciousness. And it’s important to note that information theorists who are productive in the world of consciousness research
accept the biological model as the foundation for their work.
This does not mean that building conscious machines will be impossible, mind you. But it does mean that
you “must” have a physical apparatus “working in the proper way” in order for experience to occur.
In other words, these guys would not agree that you can replace a brain with a simulator machine running a sim of a brain (however detailed) because – as Westprog and I have been trying to explain to many deaf ears – the physical work of the two machines is different. (The scientists currently building a neuron-level simulation of the brain make the same point – they caution that it will merely be a “representation” of a brain, not a real brain, and will not provide the simulator machine with a functioning brain or consciousness.)
This is important to establish up front, because – as we’ve seen on this and just about every other thread on the topic – it is so widely misunderstood. So I’ll insert a comment from Ned Block on the subject, from “Comparing the Major Theories of Consciousness” (the bracketed note in the quote is Block’s own):
Block said:
The competitors to the biological account are profoundly nonbiological, having more of their inspiration in the computer model of the mind of the 1960s and 1970s than in the age of the neuroscience of consciousness of the 21st century. As Dennett confesses, “The recent history of neuroscience can be seen as a series of triumphs for the lovers of detail. Yes, the specific geometry of the connectivity matters; yes, the location of specific neuromodulators and their effects matter; yes, the architecture matters; yes, the fine temporal rhythms of the spiking patterns matter, and so on. Many of the fond hopes of the opportunistic minimalists [a version of computationalism: NB] have been dashed: they had hoped they could leave out various things, and they have learned that no, if you leave out x, or y, or z, you can’t explain how the mind works.”
Note that temporal rhythms are included here,
a feature which disproves the claim that “consciousness is a product of logical computation” because, as we know, logical computations can proceed at any speed and produce the same result – not so with consciousness. (And for the same basic reason that it’s not true of airplane flight.)
Or, as Christof Koch, himself no stranger to info theory, puts it:
Koch said:
Brain scientists are focusing on experimental approaches that shed light on the neural basis of consciousness rather than on... philosophical problems with no clear resolution.
I hope that these observations will at least inspire a modicum of caution in those tempted to accept the assertions of folks who believe that consciousness can be understood without studying it directly (e.g. by studying math, or general info theory, or how non-conscious machines behave). They should also inspire a great deal of skepticism toward anyone who claims that it is widely accepted that conscious experience has no direct physical cause – especially when that claim is coupled with an assertion that the cause of consciousness is known. I assure you, this is a fantasy bordering on delusion.
Anyway, that settled, let’s get back to IIT....
To begin teasing out these questions, they offer a simple thought experiment:
Tononi and Balduzzi said:
You are facing a blank screen that is alternately on and off, and you have been instructed to say “light” when the screen turns on and “dark” when it turns off. A photodiode – a simple light-sensitive device – has also been placed in front of the screen. It contains a sensor that responds to light with an increase in current and a detector connected to the sensor that says “light” if the current is above a certain threshold and “dark” otherwise.... When you distinguish between the screen being on or off, you have the... experience of seeing light or dark. The photodiode can also distinguish between the screen being light or dark, but presumably does not have [an] experience of light and dark. What is the key difference between you and the photodiode?
According to the IIT, the difference has to do with how much information is generated when that distinction is made. Information is classically defined as reduction of uncertainty: the more numerous alternatives that are ruled out, the greater the reduction of uncertainty, and thus the information.
In other words, there is more information in a die roll than in a coin toss, because the die roll eliminates 5 wrong answers, while the coin toss eliminates only 1 (corresponding to about 2.58 bits and 1 bit of information, respectively – yes, this stuff can be quantified).
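If you want to see that arithmetic, here’s a minimal sketch in Python – my own illustration of the classical definition, not anything from the article:

```python
import math

# Information as reduction of uncertainty: ruling out all but one of N
# equally likely alternatives yields log2(N) bits.
def information_bits(alternatives: int) -> float:
    return math.log2(alternatives)

print(information_bits(2))  # coin toss (or photodiode): 1.0 bit
print(information_bits(6))  # die roll: ~2.58 bits
```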
And although we might not be paying attention to it while doing the experiment with the photodiode (because it’s not important for the task), our actual experience of viewing the screen eliminates a truly mind-boggling number of alternative wrong choices – just think of all the possible things that could be on that screen, including every scene from every movie ever made, that we decide we are not viewing!
The diode, on the other hand, eliminates merely one wrong choice.
Now, at this point, Tononi and Balduzzi offer a note of caution:
Tononi and Balduzzi said:
Information always implies a point of view or perspective, and we need to be careful about what that point of view might be (information for whom?)
They illustrate this point with the example of a digital camera with detectors that can distinguish among enough alternative states to correspond to some 1 million bits of information.
Yet the camera is not conscious. Why?
Tononi and Balduzzi said:
According to the IIT, the difference has to do with integrated information. From the point of view of an external observer, the camera may be considered as a single system with a repertoire of 2^1,000,000 states. In reality, however, the chip is not an integrated entity: since its photodiodes have no way to interact, each photodiode performs its own local discrimination between a low and a high current, completely independent of what every other photodiode might be doing. In reality, the chip is just a collection of 1 million independent photodiodes, each with a repertoire of 2 discriminable states. In other words, there is no intrinsic point of view associated with the camera chip as a whole.... If the sensor chip were cut into 1 million pieces each holding its individual photodiode, the performance of the camera would not change at all.
By contrast, you discriminate among a vast repertoire of states as an integrated system, one that cannot be broken down into independent components each with its own separate repertoire.... The experience of a red square cannot be decomposed into the separate experience of red and the separate experience of a square.
At this point, you’re probably starting to see the difference between laptops and brains, and why brain waves might be more important than the computational literalists and neurons-only crowd on this forum imagine they could be. And this is coming from information theorists!
What they’re implying here is that there can only be a “you”, there can only be an “experience”, if there is some degree of integration of information caused by real physical activity.
In other words,
consciousness exists in those places where the physical-temporal integration of information actually creates a point of view! And although I’m getting a bit ahead of myself, and although this theory is quite tentative (I have my own reservations about many parts of it), it certainly agrees with the observation that “self” is merely one possible aspect of “experience”. (And not a required one, by any means.)
To put it another way, there is no “self” which is “having” an “experience”... if there is a sense of self, it is merely part and parcel of the experience. This is what I mean when I say that experience (and therefore self) is a “performance” of the brain, and nothing else.
Here I’ll also quote Koch from “The Neurobiology of Consciousness” (you can find all these articles in the same volume as Tononi and Balduzzi’s):
Koch said:
Neurological evidence indicates that neither sensory inputs nor motor outputs are needed to generate consciousness.... Consciousness depends on what certain parts of the brain are doing, without requiring any obligatory interaction with the environment or the body.
In short,
experience is what the brain is doing, not what happens in the rest of the body or outside the body. Of course our interactions with the world normally affect our experience, given how we’re designed, but experience is 100% brain activity, as far as we can tell.
Now, one thing that’s very important to note here is that Tononi and Balduzzi
do not define information as symbols (who would determine their values, or read them?) nor as representations (of what?). So the integration of information is neither merely “self-referential information processing” (as was mentioned in the article’s opening paragraph, this happens all over the brain) nor an interaction of symbols.
Moving on....
At this point, the analysis gets very technical and quantitative, and I don’t want to get into that bit on this thread. (Considering the OP, I may already have gone too far off topic, but I don’t think it can be helped, really.) Those interested can read the article themselves.
Suffice it to say that the authors propose a method for quantifying the amount of information in a system.
Moving on to the topic of integration, they note that....
Tononi and Balduzzi said:
We need to find out how much of the information generated by a system is integrated information – that is, how much information is generated by a single entity, as opposed to a collection of independent parts. The key idea here is to consider the parts of the system independently, ask how much information they generate by themselves, and compare it with the information generated by the system as a whole.
And again, they propose a method for quantifying this, which I won’t bother to describe here.
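Just to make that whole-versus-parts comparison concrete, here’s a toy Python sketch. Let me stress that this is my own back-of-the-envelope illustration using a simpler, older quantity (“multi-information”: the sum of the parts’ entropies minus the entropy of the whole), not the phi measure the authors actually define:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def integration(samples):
    """Sum of the parts' entropies minus the entropy of the whole."""
    whole = entropy(samples)
    parts = sum(entropy([s[i] for s in samples]) for i in range(len(samples[0])))
    return parts - whole

# "Camera": two independent photodiodes; every joint state occurs.
camera = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(integration(camera))   # 0.0: the whole is just the sum of its parts

# Two coupled units that always agree: each part looks like a fair coin,
# but the whole only ever occupies 2 joint states, not 4.
coupled = [(0, 0), (1, 1)]
print(integration(coupled))  # 1.0: a surplus beyond the parts
```

The camera-like system’s repertoire decomposes exactly into its parts’ repertoires; the coupled system’s does not, and that surplus is the kind of thing a measure like phi tries to capture (far more rigorously than this).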
At this point, they ask the really important question:
Tononi and Balduzzi said:
Can this approach account, at least in principle, for some of the basic facts about consciousness that have emerged from decades of clinical and neurobiological observations?
And here, since quantitative measurements of information and integration in something as complex as the brain are currently impossible, they use computer sims to help answer this question. (Note that this is a valid use of sims as a research aid... the authors are not claiming that the sims make the simulator conscious or that they create some “world of the simulation” which is in any way real.)
The authors use the Greek character phi as shorthand for “integrated information”.
Tononi and Balduzzi said:
By using computer simulations, it is possible to show that high phi requires networks that conjoin functional specialization (... each element has a unique functional role within the network) with functional integration.... Conversely, phi is low for systems that are made up of small, quasi-independent modules.
The authors then follow up with a series of rhetorical questions regarding brain anatomy, pondering how this integration of the various specialized structures might occur. They have no certain answer, but they note that....
Tononi and Balduzzi said:
The set of elements underlying consciousness is not static but forms a “dynamic complex” or “dynamic core”.
In other words, consciousness does not have to include the influence of
any particular informative module of the brain, as long as it is integrating
some set of modules.
Furthermore, they describe why consciousness may be lost during events such as certain seizures despite all the neural activity:
Tononi and Balduzzi said:
Computer simulations also indicate that the capacity to integrate information is reduced if neural activity is extremely high and near-synchronous, because of a dramatic decrease in the repertoire of discriminable states.
In other words, if everything is firing in tandem, the amount of information is reduced – imagine, for example, the difference between an orchestra playing a work by Mozart, each section with different parts, entering and leaving the composition at various points, versus all the instruments playing the same note over and over in the same tempo.
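A hypothetical toy version of that point in Python (again mine, not the authors’): count the joint states available to a population of binary neurons.

```python
import math

n = 60  # binary neurons

# Firing independently, the population can occupy 2**n joint states:
print(math.log2(2 ** n))  # 60.0 bits of repertoire

# Firing in lockstep (all on or all off together), only 2 joint states
# ever occur, no matter how many neurons there are:
print(math.log2(2))       # 1.0 bit
```

Lots of activity, almost no repertoire – which is the seizure scenario in a nutshell.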
Which leads to this observation (and again, note that current research, including evidence from computer simulations – despite protests to the contrary from some quarters on this forum – contradicts both the “consciousness as logical computation” model, which implies that consciousness could occur at any speed, and the “information only” model, which dispenses with any direct physical cause):
Tononi and Balduzzi said:
Consciousness not only requires a neural substrate with appropriate anatomical structure and appropriate physiological parameters: it also needs time. The theory predicts that the time requirements for the generation of... experience in the brain emerge directly from the time requirements for the buildup of an integrated repertoire among the elements of the corticothalamic main complex so that discriminations can be highly informative.
This would explain the “temporal granularity” of experience... those events below the integration threshold have a phi of 0.
Note that this places the temporal “blind spots” within the mechanism of consciousness itself, not in other mechanisms. That could explain why these blind spots do not appear to exist elsewhere – why our brains can non-consciously perceive subliminal stimuli and put them to use, while our conscious minds cannot.
The article goes on to note the compatibility of this model with the current body of research on consciousness, but I will leave it at that and turn back now to the description of consciousness I suggested in my previous long post.
As I mentioned, it was long a mystery how consciousness managed to craft a unified experience from the activity of separate regions of the brain, when there did not seem to be adequate neural activity to make this happen.
Experiments with magnetic stimulation confirmed that there was some sort of interconnection during consciousness which was not operating during deep sleep. (You might recall my description of the disturbances propagating “like popcorn”.)
The discovery of the 3 “signature waves” of consciousness as a result of deep brain probe research has provided us with a potential answer.
Now, here I’m jumping off into speculation, but it is productive speculation – that is to say, it is based on actual research on conscious and non-conscious brains (not on notions derived from philosophy, pure mathematics, or observation of computers and other non-conscious entities), and it can lead to experiments which falsify or confirm the premise.
Since we know that evolution works with the parts it’s got, whatever they may be, let us speculate about an early brain, one which is not conscious but is just a set of stimuli and responses like any dumb machine can do.
This brain is based on what we normally think of as the architecture of the brain – that is, neurons firing in chains (very complex and tangled chains, but chains nonetheless) – kind of like traffic in a busy city.
It’s difficult to see how this arrangement can produce integration, and therefore conscious experience.
But like the wires on a power line (remember my point about the difference between the performance of wires strung vertically versus wires grouped triangularly), these neurons produced a lot of electrical noise.
In fact, every little bundle of neurons, depending on the type of neuron and the larger shape into which it (as a group) was bent, would have its own shape (quite literally) of electrical noise.
The brain also produced brain-wide waves which were warped in different ways by the effect of the noise coming off of these bundles of neurons.
This physical arrangement may in fact have provided the raw material for consciousness to evolve.
Let’s go back to the choir analogy.
Let’s say you have a group of instruments which hum. You have a whole room full of them, in a complex arrangement. (Each instrument – that is, each functional area of the brain – is of course made up of parts – neurons – but what’s important is the overall sound each instrument makes, depending on its larger shape and the materials it’s made of.)
But this room has no air in it, so the “voices” of these humming instruments are not coordinated.
You also have a machine which can introduce air into this room in large “slabs”, each one encompassing a different set of instruments, sometimes overlapping each other.
You have 3 different “slabs” of air that you can introduce into the room.
When you do this, the instruments which are now encompassed in the slab of air can make their voices heard, and they produce all sorts of new harmonies and subtones which did not exist in the vacuum.
The air produces a “global workspace” in which the various hums can interact.
In other words, there is now more information in the system, and that information is integrated!
Now let’s say you introduce each of the 3 slabs of air into the room and record the resulting sound.
Then you play back all 3 tracks simultaneously to produce an extremely rich, multi-dimensional chorus with all sorts of new sounds (information) which did not exist before and do not exist in the room without the air.
This is what we expect is happening when the 3 signature waves of consciousness are active and coherent.
In short, this model brings together the most current brain research on consciousness with the most current information theory, and it has the potential to make consciousness quantifiable!
If this model is accurate (who knows?) then it would mean that the classically neural function of our brains is the realm of non-conscious activity, and the larger-scale electrical activity (which is, of course, directly related to and dependent upon the underlying neural activity) is the realm of consciousness.
This hypothesis has not been tested, but it sure would explain a lot, and it is consistent with current research.
If it pans out, it could enable us to answer questions like “which animals are conscious and to what degree?”... questions which are currently unanswerable, and which the computational literalists have no hope of answering, or even of correctly framing in the first place.
It may also allow us to answer currently "philosophical" questions, such as whether we have any sort of free will. Is there any feedback from the brain waves to the electrical activity of the various brain regions? If there is, does it matter at all, seeing as how we don't expect it to influence classical neuron behavior? If it does matter, does this mean that "I" really do control some of what my body does, rather than simply being a byproduct of the brain?
(I suspect that consciousness must have some sort of influence, else why bother to evolve it?)
Now mind you, this is not a full explanation of any of these ideas, but after all, space is limited and I need to fix some supper.
What I will say as an epilog, though, is that
this is the kind of research and theory that is making progress.
What is still left unanswered, though, is
why the integration of brain activity in this way makes us conscious – the authors only assert that it appears to, and that viewing it in terms of information (which is itself a kind of metaphor) allows us to quantify it – and
why the particular activities of this particular brain of ours produce the specific experiences of human beings.
As for all the claims that perennially clog up consciousness threads on this forum – e.g. consciousness is the result of logical computations, consciousness is caused by self-referential information processing, consciousness requires no physical substrate beyond what is required to “run the logic”, consciousness can be programmed, you can replace a brain with a computer simulation of a brain, a conscious robot would be conscious at any operating speed, your brain “is a computer”, there is no real difference between consciousness and all the other activity of the brain, thermostats might be conscious for all we know, consciousness might be a whole-body function, consciousness is the result of interaction of symbols – all of these claims are, in light of the current state of research, pure bunk... as Westprog and I have been so patiently trying to explain.
And might I add, it is the height of arrogance for people who don’t much care about brain research, yet who trot out all these falsehoods repeatedly and drown out discussion of any other topics, to insult others for having the audacity to question their expertise in the matter.
Good night, y'all, and I'll see you tomorrow.