How the Brain Does Consciousness: Biological Research Perspectives

A study from earlier this month regarding consciousness and plasticity:

Learning to See Consciously: Scientists Show How Flexibly the Brain Processes Images

Our brains process many more stimuli than we become aware of. Often images enter our brain without being noticed: visual information is being processed, but does not reach consciousness, that is, we do not have an impression of it. Then, what is the difference between conscious and unconscious perception, and can both forms of perception be changed through practice? .... Scientists at the MPI for Brain Research in Frankfurt/Main have now shown that seeing can be trained.

Visual stimuli undergo a series of processing stages on their journey from the eye to the brain. How conscious perception can arise from the activity of neurons is one of the mysteries that the neurophysiologists at the MPI for Brain Research seek to solve.... In their current study, the scientists examined whether perception can be influenced by long-term and systematic training, and whether such training not only changes the processing but also affects whether the stimulus can be consciously perceived.

It is known from clinical studies that some stroke patients who suffer partial blindness as a result of damage to the visual cortex can discriminate between stimuli that fall into their blind visual field. This unconscious discrimination ability can be improved through training. Nevertheless, the patients report that they do not see the images. In a few cases, however, conscious perception of the stimuli could be improved with training. Is it maybe possible to learn to "see consciously"?

...The Frankfurt scientists developed an experimental setup with which different learning effects on perception could be measured. The subjects were shown images of two different geometric forms -- a square and a diamond -- on a screen in rapid succession and in a random sequence, and were asked to discriminate between them. The visibility of the images was limited by presenting a mask shortly after each image, which rendered the shape invisible.

The experiment was designed such that the subjects could initially not discriminate between the images and that they were also subjectively invisible. The subjects were then trained for several days.... As soon as the subject indicated by pressing a button which form had been shown and how clearly he or she had seen the form, the next stimulus and the next mask were shown. This process was repeated 600 times per day. After several days, the subjects could better discriminate between the target stimuli. From the ratings of the visibility of the stimuli, the scientists could further conclude that the participants' subjective perception had increased as well: the images now entered consciousness. Thus, the scientists succeeded in demonstrating that it is also possible to learn to see consciously.
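The key feature of the design is that every trial collects two separate measures: an objective forced choice (which shape?) and a subjective visibility rating. A minimal sketch of that trial structure, where the shape names, the 1-4 rating scale, and the placeholder responses are my own illustration rather than details from the study:

```python
import random

def run_session(n_trials=600, stimuli=("square", "diamond")):
    """Simulate the trial structure: target, then mask, then a forced-choice
    response plus a subjective visibility rating (placeholder values)."""
    results = []
    for _ in range(n_trials):
        target = random.choice(stimuli)       # shapes shown in random sequence
        # In the real experiment a mask follows the target after a short
        # delay, rendering it subjectively invisible early in training.
        response = random.choice(stimuli)     # placeholder for the key press
        visibility = random.randint(1, 4)     # e.g. a 1-4 visibility scale
        results.append({
            "target": target,
            "response": response,
            "correct": response == target,
            "visibility": visibility,
        })
    return results

session = run_session()
accuracy = sum(r["correct"] for r in session) / len(session)
mean_visibility = sum(r["visibility"] for r in session) / len(session)
```

Tracking accuracy (objective discrimination) and the visibility ratings (subjective report) as separate outcome measures is what lets the two learning effects be teased apart later in the study.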

The question remained, however, as to how objective and not necessarily conscious processing of stimuli and their subjective, conscious perception are linked.... The experiment was repeated once more. This time, the image and mask were shown on a different part of the screen, and were thus processed by a different part of the brain. "The results were revealing," explains Lucia Melloni: "While the learning effect for the pure processing of the stimuli, that is the discrimination of the shape, was lost with the spatial rearrangement of the stimuli, the clearer visibility of the images, that is the learning effect in terms of conscious seeing, remained." Therefore, objective processing and subjective perception of the stimuli seem to be less closely linked than previously assumed. The two training effects appear to be based on two different areas of the brain.
 
I actually found this one, from last November, surprising.

Fingers Detect Typos Even When Conscious Brain Doesn't

"We all know we do some things on autopilot, from walking to doing familiar tasks like making coffee and, in this study, typing. What we don't know as scientists is how people are able to control their autopilots," Gordon Logan, Centennial Professor of Psychology and lead author of the new research, said. "The remarkable thing we found is that these processes are disassociated. The hands know when the hands make an error, even when the mind does not."

... Logan and co-author Matthew Crump designed a series of experiments to break the normal connection between what we see on the screen and what our fingers feel as they type.

In the first experiment, Logan and Crump had skilled typists type in words that appeared on the screen and then report whether or not they had made any errors. Using a computer program they created, the researchers either randomly inserted errors that the user had not made or corrected errors the user had made. They also timed the typists' typing speed, looking for the slowdown that is known to occur when one hits the wrong key. They then asked the typists to evaluate their overall performance.

The researchers found the typists generally took the blame for the errors the program had inserted and took the credit for mistakes the computer had corrected. They were fooled by the program. However, their fingers, as managed by the autopilot, were not -- the typists slowed down when they actually made an error, as expected, and did not slow down when a false error appeared on the screen.

... In the second experiment, they had the typists immediately judge their performance after typing each word. In the third, they told typists that the computer might insert or correct errors and again asked them to report on their performance.

The typists still took credit for corrected errors and blame for false errors in the second experiment, and still slowed down after real errors but not after false ones. In the third experiment, the typists were fairly accurate in detecting when the computer inserted an error, but still tended to take credit for corrections the computer had made. As with the other two experiments, the typists slowed down after real but not after false errors.
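The core manipulation across all three experiments, randomly inserting errors the typist never made and silently correcting ones they did, can be sketched roughly as follows. The probabilities, character substitutions, and function name here are my own illustration, not from the paper:

```python
import random

def display_output(typed, intended, p_insert=0.05, p_correct=0.5):
    """Per character: sometimes insert a false error into correct output,
    sometimes silently correct a real error (hypothetical probabilities)."""
    shown, real_errors, false_errors = [], [], []
    for i, (t, want) in enumerate(zip(typed, intended)):
        if t == want and random.random() < p_insert:
            shown.append("x" if want != "x" else "z")  # false error on screen
            false_errors.append(i)
        elif t != want and random.random() < p_correct:
            shown.append(want)                         # silently corrected
            real_errors.append(i)
        else:
            shown.append(t)
            if t != want:
                real_errors.append(i)
    return "".join(shown), real_errors, false_errors
```

The dissociation shows up because post-error slowing tracks the real-error positions (what the fingers actually did), while the typist's self-report tracks what appeared on the screen.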

The research is the first to offer evidence of the different and separate roles of conscious and unconscious processing in detecting errors.

"This suggests that error detection can occur on a voluntary and involuntary basis," Crump, a postdoctoral fellow in psychology, said. "An important feature of our research is to show that people can compensate for their mistakes even when they are not aware of their errors. And, we have developed a new research tool that allows us to separately investigate the role of awareness in error detection, and the role of more automatic processes involved in error detection. The tool will also allow a better understanding of how these different processes work together."
 
Another study on plasticity.

This one demonstrates the re-wiring of very early neural real estate, upstream from higher-order conscious processing, as people learn to recognize fainter visual patterns.

A team of researchers from the University of Minnesota's College of Liberal Arts and College of Science and Engineering has found that an early part of the brain's visual system rewires itself when people are trained to perceive patterns, and has shown for the first time that this neural learning appears to be independent of higher order conscious visual processing....

The researchers looked at how well subjects could identify a faint pattern of bars on a computer screen whose contrast was steadily decreased. They found that over a period of 30 days, subjects were able to recognize fainter and fainter patterns. Before and after this training, they measured brain responses using EEG....

"We discovered that learning actually increased the strength of the EEG signal," Engel said. "Critically, the learning was visible in the initial EEG response that arose after a subject saw one of these patterns. Even a tiny fraction of a second after a pattern was flashed, subjects showed bigger responses in their brain."

In other words, this part of the brain shows local "plasticity," or flexibility, that seems independent of higher order processing, such as conscious visual processing or changes in visual attention. Such higher order processing would take time to occur and so its effects would not be seen in the earliest part of the EEG response.
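Since higher-order processing takes time to unfold, the argument rests on comparing signal strength in the earliest post-stimulus window before versus after training. A toy sketch of that comparison, where the window boundaries, sampling rate, and amplitude values are all hypothetical:

```python
def early_window_mean(erp, srate_hz=1000, start_ms=50, end_ms=100):
    """Average an event-related potential over an early time window
    (window and sampling rate are illustrative, not from the study)."""
    start = start_ms * srate_hz // 1000
    end = end_ms * srate_hz // 1000
    window = erp[start:end]
    return sum(window) / len(window)

# Hypothetical ERPs sampled at 1 kHz: training boosts the early response.
pre_training  = [0.0] * 50 + [1.0] * 50 + [0.0] * 900
post_training = [0.0] * 50 + [1.5] * 50 + [0.0] * 900

gain = early_window_mean(post_training) / early_window_mean(pre_training)
# gain > 1 would indicate a stronger early response after training
```

Restricting the comparison to the earliest window is what licenses the inference that the plasticity is local rather than fed back from slower, higher-order (conscious or attentional) processing.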
 
Too tired to do the sum-up. That'll have to wait.
 
It appears everything concerning consciousness is experimental. Can the [chemical/physical] experiments be expressed mathematically?

Well, sure, the various experiments could be expressed mathematically in any number of ways, and often are... but that doesn't mean we can express consciousness mathematically... at least not yet.
 
Ok, sorry to have taken such a long hiatus, but real life does interfere at times....

I'd like to cite some more from Rees regarding visual consciousness:

Some of the most popular paradigms for studying the neural correlates of visual awareness are bistable phenomena such as binocular rivalry.

Just a reminder, that's when each eye is shown a different image, and the observer's awareness of what s/he sees flips back and forth between the two. As we've seen, some parts of the brain are neurally stable while this is happening (they correlate to the constant input going to each eye) while other parts of the brain change their neural states depending on what is consciously perceived at any given time.

Rees goes into greater detail on this process than did Koch.

This binocular rivalry is associated with relative suppression of local, eye-based representations that can also be modulated by high-level influences such as perceptual grouping. Because perceptual transitions between each monocular view occur spontaneously without any change in the physical stimulus, neural correlates of consciousness may be distinguished from neural correlates attributable to stimulus characteristics.

Ok, so far consistent with Koch, but Rees goes into more detail, revealing a more complex system in which various stages reflect the NCs of stimulus v. percept to greater or lesser degrees:

All stages of visual processing show such activity changes associated with rivalrous fluctuations. For example, even at the earliest subcortical stages of visual processing, signals recorded from the human lateral geniculate nucleus (LGN) exhibit fluctuations in activity during binocular rivalry.

The LGN is part of the thalamus, which has already been identified on this thread as fundamental to core consciousness.

Here's how Rita Carter briefly describes the brain biology:

Signals from the two optic nerves first converge at a crossover junction called the optic chiasm. Fibers carrying information from the left side of each retina join up and proceed as the left optic tract, while fibers carrying information from the right side form the right optic tract. Each tract ends at the lateral geniculate nucleus... [then] their signals continue to the visual cortex via bands of nerve fibers called the optic radiation.

The LGN also receives signals from the reticular activating system, which regulates core consciousness, as well as feedback from primary visual cortex. (And I'm wondering if it's this last fact which might explain the "fluctuations".)

Primary visual cortex shows a similar pattern of changes in activity correlated with changes in the contents of consciousness. In general... such fluctuations in activity are about half as large as those evoked by nonrivalrous stimulus alternation. This difference indicates that the suppressed image during rivalry undergoes a considerable degree of unconscious processing. Finally, further along the ventral visual pathway, responses in fusiform face area (FFA) during rivalry are larger than those in V1, and equal in magnitude to responses evoked by nonrivalrous stimuli. This finding suggests that neural competition during rivalry has been resolved by these later stages of visual processing, and activity in FFA thus reflects the contents of consciousness rather than the retinal stimulus.

But it's not that simple....

However, such an account is inconsistent with the finding that binocularly suppressed faces can nevertheless still activate the FFA and with the recent demonstration of category-selective signals in these areas for binocularly suppressed face or house stimuli.

There are a couple more interesting points later in the Rees article that I need to get to, but I can't do it tonight, so hopefully it won't be so long between posts as it has been lately.
 