Robot consciousness

Link? I could probably educate some of the people on quantum theory, having studied it for the last three years during my degree.

No, you couldn't. I've read what you've written here on it.

What's your beef with Penrose?

Bad math, bad physics, bad neurology, bad psychology. Basically, if you pile heinous mistakes from enough fields together, the experts will not be able to refute them all at the same time.

Rather like the Intelligent Design charlatans, except I think Penrose is a crackpot instead of a scam.
 
As stated above, I take a similar view to Penrose et al: that there is something about our minds that is non-computable, something that is beyond the realm of computation. So we know things other than through algorithms, sort of related to Gödel's famous theorem (which, to be honest, I don't fully understand).


And here's the classic problem with Penrose disciples.

Penrose is wrong, at all levels. Psychologists disagree with his psychology, neuroanatomists disagree with his neuroanatomy, mathematicians disagree with his math (any time someone drags Gödel's theorem into a discussion of human intelligence, you know they understand neither intelligence nor Gödel's theorem), and physicists disagree with his physics.

However, physicists don't feel comfortable disagreeing with his psychology, because it's not their field, and they're afraid to tell the world that they don't see the Emperor's marvelous new suit.
 
To boil one point down from the last few pages: A pure TM has no input/output. It's just an infinite tape with ones and zeros

These two sentences are contradictory. The infinite tape is both input and output.
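For what it's worth, the tape-as-input-and-output point is easy to see in code. Here's a minimal sketch of a Turing machine in Python (the example machine and all names are my own illustration, not anything from the thread): the "input" is whatever is on the tape before the run, and the "output" is whatever is left on it when the machine halts.

```python
# Minimal one-tape Turing machine: the tape itself is both the input
# (its contents before the run) and the output (its contents after halting).
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: flip every bit, then halt at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flip, "10110"))  # -> 01001
```

Notice there is no read() or print() anywhere in the machine itself; "I/O" is entirely a matter of what gets written onto the tape before and read off it after.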
 
I was using roger's definition. If I understand him rightly, all physics is computable, therefore everything in the universe is computable. Whether there are abstract problems that aren't, I wouldn't know.

You're confusing simulation with identity. We can design a computer that simulates a hurricane, but it wouldn't be a hurricane (there would be no water on the floor of the machine room). Hurricanes are physical processes that we can simulate computationally, but cannot duplicate.

But you asked us to stipulate that we can build a computer that is conscious, not that simulates consciousness. Which means that true consciousness must by assumption be achievable computationally -- i.e. consciousness is not a physical property, but a computational one. Which means that a TM can be conscious.
 
To boil one point down from the last few pages: A pure TM has no input/output. It's just an infinite tape with ones and zeros and a state machine that moves back and forth changing the bits. Consciousness (of the real world) requires at least an input from the real world.

Not necessarily.

When humans dream, we're experiencing a state of awareness, but we're pulling all the data from cache, as it were. So you could theoretically create a robot that only dreamed by providing it with memory.
 
Aww, it's not so bad. We have cake!

Can't have cake. :(

But I think I'll do the long post anyway.

Re-thinking, now I'm back on the question: Is it possible for a human being to be conscious only for the duration of a synaptic firing, and if not, then what are the conditions that allow a series of coordinated firings to create sustained awareness?

But there's no way to get into that without doing the overview first.

I wrote it up on paper last night, so I'll need to type it out and find the remaining study links.

I do think it will be interesting to anyone who's curious about the phenomenon of consciousness. Hope so, anyway.
 
Firstly, there seems to be some ambiguity about what is meant by 'consciousness'. Is it to be aware of one's surroundings, or is it to be self-aware? I've been thinking the second meaning, not the first. I do not think processing of input data is a necessary or sufficient condition for consciousness.

The way I'm using the term, dreaming counts as a conscious state.

I think studies on subliminal perception provide the best distinction between information processing in the brain which is outside of consciousness, on the one hand, and information which is available to conscious awareness on the other.

The cocktail party effect is another, more easily grasped, example.

For example, when I was a child, a week seemed like an age, summer holidays went on for ever, birthdays were anticipated for weeks. Now a week doesn't seem so long, next summer doesn't seem so far away, birthdays fly by. Am I conscious at a different time scale?

This is a fascinating question, and it may be directly related to the process by which our brain generates "conscious events" -- that is, "moments" which are above the subliminal threshold, yet below the timespan at which events no longer seem simultaneous -- as well as how our brains use schema to make consciousness more efficient.

When you're a kid, everything is so new, and you have so few schema, that your brain has to attend more closely to actual inputs in order to learn, to wire the brain, and to build schema. So it may be that the consciousness function has to have a higher "refresh rate" -- you have more "frames" per unit of real time, so an hour or a day feels much longer.

When you're an adult, you have many more schema, better motor memory, less need to learn, so your brain might save resources by doing more "fill in" from stored patterns, paying less attention to actual sensory input, and using a slower "refresh rate" -- you have fewer "frames" per unit of real time, so an hour or a day feels shorter than it used to.

This would help explain why time seems to slow down in moments of extreme danger. The brain knows it needs to attend to actual input, so it stops using cache and schema as much and boosts the refresh rate so that you can better perceive what's happening around you in greater detail.
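The "refresh rate" idea above can be put into a toy model (this is entirely my own illustration, with invented numbers, not anything from the studies cited later in the thread): if subjective duration is just the count of conscious "frames", and the frame rate rises when the brain must attend to raw input rather than fill in from schema, then the same hour holds more "frames" for a child than for an adult, and a few seconds of danger get packed with frames.

```python
# Toy model of the "refresh rate" idea: the brain samples more conscious
# "frames" when it must attend to raw input (novelty, danger) and fewer
# when it can fill in from stored schema. Subjective duration = frame count.
# All rate values here are arbitrary illustrative numbers.
BASE_RATE = 2.0    # frames/sec when running almost entirely on schema
MAX_RATE = 20.0    # frames/sec under full attention to raw input

def frame_rate(novelty):
    """novelty in [0, 1]: fraction of input that can't be filled from schema."""
    return BASE_RATE + novelty * (MAX_RATE - BASE_RATE)

def subjective_length(real_seconds, novelty):
    return real_seconds * frame_rate(novelty)

hour = 3600
child_hour = subjective_length(hour, novelty=0.8)   # little schema: attend closely
adult_hour = subjective_length(hour, novelty=0.1)   # mostly schema fill-in
assert child_hour > adult_hour                      # the same hour "feels" longer

# Danger boosts the rate, so a 5-second event packs in many frames:
print(subjective_length(5, novelty=1.0))  # -> 100.0
```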
 
You're confusing simulation with identity. We can design a computer that simulates a hurricane, but it wouldn't be a hurricane (there would be no water on the floor of the machine room). Hurricanes are physical processes that we can simulate computationally, but cannot duplicate.

But you asked us to stipulate that we can build a computer that is conscious, not that simulates consciousness. Which means that true consciousness must by assumption be achievable computationally -- i.e. consciousness is not a physical property, but a computational one. Which means that a TM can be conscious.

It's not that I was confusing simulation and identity in this case. My problem was elsewhere.

We assume we have a conscious robot. But the robot's brain is a black box. We know it generates consciousness in a way similar to how our brains do it, but we can't say how.

Is it with a computer of the sort that we have now, just much more advanced? Or is it a different type of computer? Or a computer in conjunction with some other gizmo?

I don't want to make assumptions regarding our robot brain if doing so will lead to begging the question.
 
We assume we have a conscious robot. But the robot's brain is a black box. We know it generates consciousness in a way similar to how our brains do it, but we can't say how.

Is it with a computer of the sort that we have now, just much more advanced? Or is it a different type of computer? Or a computer in conjunction with some other gizmo?

I don't want to make assumptions regarding our robot brain if doing so will lead to begging the question.

Unfortunately, there are really only two choices for the robot's brain. Either it's a computer of the sort we have now, which means it's a Turing machine, or it's something radically different involving magic pixies.

I don't think it should be considered "begging the question" to note that mathematics already provides an answer to the first case. Especially if you didn't know the mathematics going in.
 
Re-thinking, now I'm back on the question: Is it possible for a human being to be conscious only for the duration of a synaptic firing, and if not, then what are the conditions that allow a series of coordinated firings to create sustained awareness?

Again, I think you'll need to define your terms and your scale first. Synapses aren't synchronized. Human brains have lots of synapses; at any given instant, there are literally millions of synapses "firing" within your brain. I don't know how you can expect to point at any given synapse and say "THAT is the one that produces consciousness when it fires, and when it's not firing, we are not conscious."
 
Unfortunately, there are really only two choices for the robot's brain. Either it's a computer of the sort we have now, which means it's a Turing machine, or it's something radically different involving magic pixies.

I don't think it should be considered "begging the question" to note that mathematics already provides an answer to the first case. Especially if you didn't know the mathematics going in.

That's it? Those are our choices? Interesting.

Positing a TM computer brain is only begging the question if it's not obligatory. If it is obligatory, and it's true that any TM can perform any task at any speed (which seems to assume that all macro-level tasks can be considered accomplished as long as micro-level output is the same), then yeah, we're done.

Btw, has it been demonstrated that the brain in its entirety is a TM? Is this now accepted in the field of biology? (Not being a smartass here, honest question.)
 
Again, I think you'll need to define your terms and your scale first. Synapses aren't synchronized. Human brains have lots of synapses; at any given instant, there are literally millions of synapses "firing" within your brain. I don't know how you can expect to point at any given synapse and say "THAT is the one that produces consciousness when it fires, and when it's not firing, we are not conscious."

I know. What I was pondering there is the thought experiment with the Glacial Axon Machine.

Consciousness involves the coordination of lots of high-level information, of course -- but then some interesting things can happen when disconnects among the sets are introduced, such as blindsight maze navigation and dual mode awareness of objects in split-brain patients.

The signature of conscious awareness of events isn't the firing of any neuron, but rather a particular type of coherent activation pattern across the brain.

I was just wondering aloud for how short a time a person can be conscious. Certainly not on as short a scale as the firing of a synapse (which was not to imply that the firing of a synapse creates consciousness) so how long then? And if there's a minimum (there must be, tho it's probably not exact) then why?

Looks like I'm going to have to get Gaillard's "Converging Intracranial Markers of Conscious Access" and see if I can wrap my head around it. And Gazzaniga's most recent collection.

So much to learn.
 
And here's the classic problem with Penrose disciples.


Disciples? :rolleyes:

I do find it fascinating how fixated a lot of people on this forum are on accusing anyone who prefers certain scientific models over others of being some sort of creationist/religious person. As Darth Rotor put it in another post on this forum recently: "must a scientist see a Creationist behind every bush?".

I note you didn't specifically address any of the points in my post. Or support your above assertions.

You're basically using an argument from authority: many top people in many areas do not agree with his theories, so they must be wrong. That many people in many areas disagree with his theories is totally understandable; he's taken an extremely bold interdisciplinary step in what he has proposed over the years, and by doing so is stepping on the toes of a lot of people in a lot of fields. Of course people are going to disagree, and if they were decent scientists they would publish the reasons why they disagree. And I'm sure Roger would respond. And they would again. And if you were a decent scientist, you would explain why yourself. Or link to such material.

Zeuzz
 
Back to our experiment with Joe.

We know from experimentation that not all of the data processed by the brain is made available to consciousness.

If Joe comes out of the room and says he saw a green triangle, we conclude he was aware of it. If he only reports the red circle and blue square, we conclude he wasn't consciously aware of seeing the green triangle.

That's all.

So the question is, if we did this with Joe, or if we did the experiment with Jane the robot and slowed down her operating speed similarly, would they report seeing the green triangle or not?

That would tell us whether they were consciously aware of it or not.

Given the length of time it's on the screen, they should report seeing it, unless the brain-zapper or the slowing of the processing speed made consciousness fail. (Or if it prevented encoding the experience into memory.)

I'll be arguing that we should expect Joe not to report seeing the green triangle -- unless when I get it all on paper my thinking falls apart and I have to change my conclusion.


As far as I understand, this is what happens when your brain receives a signal from your eye:

The signal is processed (calculated) by your neurons and distributed around depending on their programming and arrangement, i.e. their states at that very moment. The results are then discarded, or cached in different ways, depending on the outcome of the calculations.

If Joe comes out of the room and says he saw a green triangle, we, and he himself *, would conclude that as a result of his brain's processing, the value of his "I'm aware"-neuron for the green triangle has been changed from 0 to 1.

And unless these brain activities create some kind of "field", which in turn needs to be in sync with some dualistic aether, slowing it down or speeding it up shouldn't matter.

* This also resolves the "problem" of free will.

Piggy said:
When humans dream, we're experiencing a state of awareness, but we're pulling all the data from cache, as it were.


Much like when humans are fully awake, I would argue.
 
This may all be moot now, but I still think it's interesting.

A few observations about Consciousness (C)

C is a specialized function (or set of functions) of the brain. It is not an emergent "property" like the whiteness of clouds, although it may make sense to call it an emergent function.

The inputs to C are highly processed. Raw input is not properly formatted for the modules that generate C. Instead, they deal with chunks of data that have been filtered, associated with stored patterns, patched up, categorized, and prioritized.

C requires the coordination of sets of highly processed data. Unconscious states are associated with the discoordination of these sets. Selective partial discoordination can result in bizarre experiences and behaviors.

C is a "downstream" process. We perceive and even act on our environment before the brain makes input from the environment, as well as information about our actions, available to conscious awareness.

Not all information we perceive is made available to C. Only the information which our brains determine to be important and useful for conscious processing is served to the modules that handle conscious experience.

C is resource intensive.

Events below a certain real-time threshold are never served up to C, even though the brain processes and uses information that it perceives regarding these subliminal events. There are other types of subliminal information, including scents which are processed and acted on but of which we are not consciously aware.

A sequence of events spanning greater than a certain real-time threshold is not perceived by C as a set of simultaneous events, but as a series of events sequential in time. Below this threshold, the time difference is not perceived and sequential events appear simultaneous.

C can switch off briefly without there being any noticeable gap in conscious experience, which will appear continuous.

C can dismantle itself and reassemble itself in ways which allow the conscious subject to be aware of having lapsed into and out of an unconscious state (e.g., being aware that one has fallen asleep and awakened again in a room with no windows).

C does not require external inputs to function if it has access to stored memories and schema (e.g. dreams, completely dissociated hallucinations).

Our conscious sense of time speeds up as we get older, and slows down when we're in serious and immediate danger.


Putting it together....

Since consciousness is not aware of events below the subliminal threshold, and does not perceive sequential events above a certain threshold as simultaneous, but rather as distinct events, there must be a window of real time that the brain chunks into a single event -- a "conscious event" (CE) -- that our brains are able to perceive.

So our perception of continuous time must consist of a series of these CEs, which are produced by the coordination of highly processed information formatted for use by those functional processes of the brain responsible for generating conscious experience. The gaps between these moments are not perceived as gaps.
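The chunking described above can be sketched in a few lines (my own illustration; the ~80 ms window is an arbitrary invented value, not a figure from the cited studies): events that fall within one integration window merge into a single CE and would be perceived as simultaneous, while events farther apart land in separate CEs.

```python
# Sketch of "conscious event" (CE) windowing: timestamped events within one
# integration window are chunked into a single CE; events farther apart
# start a new CE. The 80 ms window is an arbitrary illustrative value.
WINDOW_MS = 80

def chunk_into_ces(event_times_ms):
    """Group event timestamps (ms) into conscious events (CEs)."""
    ces = []
    for t in sorted(event_times_ms):
        if ces and t - ces[-1][0] <= WINDOW_MS:
            ces[-1].append(t)       # merges into the current CE
        else:
            ces.append([t])         # starts a new CE
    return ces

# Two blinks 50 ms apart fuse into one CE; 200 ms apart, they stay distinct.
print(len(chunk_into_ces([0, 50])))    # -> 1
print(len(chunk_into_ces([0, 200])))   # -> 2
```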

The fact that our sense of time speeds up as we learn more and have more "cache" to draw on, and slows down when we need to attend to real input rather than schema, suggests that there may be a "refresh rate" to conscious perception.

In other words, our experience of consciousness is, in some ways, like frames creating the illusion of a movie when run at speed (except that we're in the movie, or are the movie, rather than viewing it).

What are the implications of these observations about consciousness, and the notion of a cinematic consciousness, for our question about drastically slowing the processing rate?

Right now, I don't know. At the moment, I can't get everything to fit together neatly no matter how I look at it.


Sources:

See also subliminal experiments cited upthread.

Where Does Consciousness Come From?

This article, unfortunately, leads with "emergent property" language. :( However, it goes on to describe a "signature" of conscious awareness, which engages for perception of events above the subliminal threshold but not for events below that threshold.

The Secret Life of the Brain: The Adult Brain -- How a stroke can affect the emotional brain

This site includes an account of what happens when brain damage interrupts the neural pathways that make us consciously aware of our emotions. It demonstrates that consciousness is a downstream process, and that awareness is not global, but modular. See the parent site for a description of the everyday experience of the stroke victim.

Here's the video clip on YouTube:



Get Out of Your Own Way

Another article demonstrating the downstream position of conscious awareness.

Our Unconscious Brain Makes The Best Decisions Possible

Pouget analyzed the data from a test performed in the laboratory of Michael Shadlen, a professor of physiology and biophysics at the University of Washington. Shadlen's team watched the activity of a pair of neurons that normally respond to the sight of things moving to the left or right. For instance, when the test consisted of a few dots moving to the right within the jumble of other random dots, the neuron coding for "rightward movement" would occasionally fire. As the test continued, the neuron would fire more and more frequently until it reached a certain threshold, triggering a flurry of activity in the brain and a response from the subject of "rightward."

Pouget says a probabilistic decision-making system like this has several advantages. The most important is that it allows us to reach a reasonable decision in a reasonable amount of time. If we had to wait until we're 99 percent sure before we make a decision, Pouget says, then we would waste time accumulating data unnecessarily. If we only required a 51 percent certainty, then we might reach a decision before enough data has been collected.

Another main advantage is that when we finally reach a decision, we have a sense of how certain we are of it—say, 60 percent or 90 percent—depending on where the triggering threshold has been set. Pouget is now investigating how the brain sets this threshold for each decision, since it does not appear to have the same threshold for each kind of question it encounters.
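The accumulate-to-threshold scheme the article describes can be sketched as a toy simulation (my own illustration; the drift, noise, and threshold values are invented, and this is a simplification of the actual neural data): each noisy observation nudges a running evidence tally, and a decision fires the moment the tally crosses a threshold, so a higher threshold buys confidence at the cost of time.

```python
# Toy accumulate-to-threshold decision model: noisy samples of "rightward"
# evidence are summed until the running total crosses a confidence
# threshold, at which point a decision fires. All parameters are invented.
import random

def decide(threshold, drift=0.1, noise=1.0, seed=0, max_steps=100_000):
    rng = random.Random(seed)
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold and steps < max_steps:
        evidence += drift + rng.gauss(0, noise)  # one noisy sample of motion
        steps += 1
    choice = "rightward" if evidence > 0 else "leftward"
    return choice, steps

# A higher threshold means a slower but more confident decision
# (same noise sequence, so the comparison is apples to apples):
_, fast = decide(threshold=2.0)
_, slow = decide(threshold=8.0)
print(fast, slow)
```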

'Brain-reading' Methods Developed

“What our research shows is that if you want to understand human cognitive function, you need to look at system-wide behavior across the entire brain,” explains Hanson. “You can’t do it by looking at single cells or areas. You need to look at many areas of the brain to even understand the simplest of functions.”

“It’s the same principle experienced during a car accident. The car accident actually happens tens of milliseconds before you are aware you have actually been hit,” explains Hanson. “By looking at the back of the brain, we can 'read out,' for example, that a person is looking at dogs and cats before they actually know they are looking at a dog or a cat.”

Neuroscientists Demonstrate Link Between Brainwave Activity And Visual Perception

Brainwave activity has peaks and troughs that can occur around 10 times a second, he explained. In their research, Professor Ro and his colleagues demonstrated how the phase of the brainwave or alpha wave can reliably predict visual detection.

Note that this article is dealing with conscious perception, i.e., what subjects are able to report.

Blind Man Walking: With No Visual Awareness, Man Navigates Obstacle Course Flawlessly

Researchers have demonstrated for the first time that people can successfully navigate an obstacle course even after brain damage has left them with no awareness of the ability to see....

Deep Sleep Short-circuits Brain's Grid Of Connectivity

In the human brain, cells talk to one another through the routine exchange of electrical signals. But when people fall into a deep sleep, the higher regions of the brain - regions that during waking hours are a bustling grid of neural dialogue - apparently lose their ability to communicate effectively, causing consciousness to fade.

Brain Energy Use Key To Understanding Consciousness

Shulman and colleagues have proposed that it is needed to maintain a person in a state of consciousness. Heavily anesthetized people are known to show approximately 50 percent reductions in cerebral energy consumption. When the paws of lightly anesthetized rats with rather high baseline energy levels were stroked, fMRI signals were received in the sensory cortex and in many other areas of the brain. In heavily anesthetized rats the signal stopped at the sensory cortex. Both the total energy and the fMRI signals changed when the person or animal lost consciousness.

"What we propose is that a conscious person requires a high level of brain energy," Shulman said.

The Split Brain Revisited

About 30 years ago in these very pages, I wrote about dramatic new studies of the brain. Three patients who were seeking relief from epilepsy had undergone surgery that severed the corpus callosum - the superhighway of neurons connecting the halves of the brain. By working with these patients, my colleagues Roger W. Sperry, Joseph E. Bogen, P.J. Vogel and I witnessed what happened when the left and right hemispheres were unable to communicate with each other.

It became clear that visual information no longer moved between the two sides. If we projected an image to the right visual field - that is, to the left hemisphere, which is where information from the right field is processed - the patients could describe what they saw. But when the same image was displayed to the left visual field, the patients drew a blank: they said they didn’t see anything. Yet if we asked them to point to an object similar to the one being projected, they could do so with ease.

Research helps illuminate how brain fills gaps in perception.

The brain's perception of reality is actually an elaborate reconstruction, since information from different senses not only arrives at the brain at different times, but is processed in different places and at different speeds. Remarkably, these different threads are sewn together seamlessly most of the time.

"Illusions are very hard to find," says David Eagleman, a neuroscientist at the Salk Institute for Biological Studies in La Jolla, Calif. "The brain puts tremendous effort into making sure that you don't get illusions." This wouldn't be surprising, he says, except for how complex the brain's synchronization has to be to present a seamless reality.

The Brain Doesn't Like Visual Gaps And Fills Them In

When in doubt about what we see, our brains fill in the gaps for us by first drawing the borders and then ‘coloring’ in the surface area....

The famous gorilla in a basketball game experiment
 
If Joe comes out of the room and says he saw a green triangle, we, and he himself *, would conclude that as a result of his brain's processing, the value of his "I'm aware"-neuron for the green triangle has been changed from 0 to 1.

There doesn't appear to be any such neuron.

Rather, what must happen for Joe to be aware of seeing the triangle and to report this experience is for his brain to format the visual data -- which includes receiving a sufficiently long real-time signal, matching the data with stored patterns so that it's understood as a "green triangle", and deeming it important enough to flag as meriting conscious attention -- and make it available to conscious processing.

The brain then, in its regular coordination of chunked data from various modules to produce conscious awareness, produces the conscious experience of seeing a green triangle.

This experience is encoded into memory.

After the experiment, Joe's brain interprets the question about what he saw, searches his memory for matching experiences, and recalls having seen the green triangle.
 
This may all be moot now, but I still think it's interesting.

A few observations about Consciousness (C)<snip>
I've kind of walked away from this thread a bit, but let me share an observation I had while reading this. Basically, a heck of a lot of work has been done on the nature of consciousness, and it is a mistake to treat it as a single thing. There is considerable experimental evidence showing that we experience multiple independent consciousnesses of various events, and that the brain is continually weaving these together into a narrative. Dennett elaborates on this in his multiple drafts model.

There are numerous lines of support for this, not all of which I remember, but one that does stand out is the fact that sensations come into the brain at different times, given the low speed of our nerves, yet our brain obviously edits these into a cohesive time line. Different "modules" in the brain process data at the same time in different ways, coming to provisional conclusions. The network eventually settles into a coherent story, and the previous outputs from the other modules just get lost. There is no overriding Cartesian theater, no 'consciousness' in charge, making decisions. Instead, consciousness is just the sum total of all these different networks working together, creating drafts to interpret what we perceive, and the best draft wins.

I recommend his book Consciousness Explained, not because I think his specific model is oh-so-certain to be correct, but because the book is just crammed full of rather counter-intuitive results from science, which any model of consciousness must address. Armchair efforts by the likes of you or me are doomed to fail, because these experiments show that our perception of consciousness is actually rather different from its actual nature. We are unreliable narrators.

I'm not going to address all of what you wrote, other than to point out that there is a lot of evidence that consciousness is not a 'downstream' activity; what we 'perceive' is merely what ends up getting the best weighting of the network. Perceive is in scare quotes for a reason - the point is that we are conscious of a heck of a lot of things, but then the network drops the data without recording it to memory. You can try all you like to catch these things, but you can't, yet experiments show it is so. It's kind of like trying to see the blind spot in your eyes. Yes, in this case you can realize it is there by holding your fingers out and moving them until they enter the blind spot, but just look at a landscape and try to see that blank hole. You can't. But, at some point, you are aware of it, and then that perception gets edited away before it gets put to memory.

Or take another example, which I touched on earlier - propagation of messages along nerves takes a long time. If I touch a needle to your face and your toe at roughly the same time, you can correctly tell me whether they were simultaneous or not. Yet it took several hundred milliseconds longer for the toe signal to reach your brain than your face, so it necessarily showed up later. But, if I was to just touch your face, or your toe alone, you would react (needles hurt, so you flinch) based on how long that signal takes to arrive. Face needles get reacted to faster than toe needles. So, the brain is revising its interpretation (draft, in Dennett's nomenclature) of events based on what else happens, and rewrites your consciousness of the event.
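The re-stamping in the pinprick example can be sketched as follows (my own illustration; the conduction-delay values are invented for the sake of the example): the face and toe signals arrive at the brain at different times, but if the brain in effect "knows" each pathway's delay, it can recover the external timing and write both touches into the draft as simultaneous.

```python
# Sketch of conduction-delay re-stamping in the pinprick example: signals
# from the face and the toe arrive at the brain at different times, but
# subtracting each pathway's known delay recovers the external timing.
# The delay values below are invented for illustration.
CONDUCTION_DELAY_MS = {"face": 20, "toe": 220}

def external_time(site, arrival_ms):
    """Infer when a touch actually happened from when its signal arrived."""
    return arrival_ms - CONDUCTION_DELAY_MS[site]

# Both needles touch at t=0; the toe's signal shows up 200 ms later...
print(external_time("face", 20))   # -> 0
print(external_time("toe", 220))   # -> 0
# ...so after re-stamping, the brain's "draft" can mark them simultaneous.
```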

Another example: right now, move your left hand. Okay, that all happened at once, right? Well, no. Sometime after you read that sentence, you made a decision to actually move that hand. Then the signals moved down to your arm, your arm moved, and then the nerves propagated the feeling of the arm moving back to your brain. Several hundred milliseconds elapse between the beginning and end of that, but it is all simultaneous to you. Interrupt the process somewhere along the line, and it still feels simultaneous, even though the timing is different.

Another experiment: The experimental scenario, as explained to the subject, is that they are to look at some slides and use the projector's back/forward button whenever they like. They will be hooked up to some electrodes on the scalp to record some activity.

Of course, as in any good experiment, this is a lie. Actually, the remote control is a dummy and does not control the slide projector at all. Instead, the electrodes on the scalp read when the subject makes the decision to press the button and control the slide projector instead - at the speed of electricity, not neurons.

The subjects get immediately confused because the machine advances the slide before they decide to advance it. They have the sense that they are "about to" advance it, or are debating it, but haven't actually made the decision yet, and the machine advances. They get confused, sometimes pressing the button twice or more, or trying to back up to correct the machine, etc.

It's hard to know what to conclude from this experiment. One conclusion is that our decisions are made below the level of consciousness and get reported to us afterwards - we only 'think' we are thinking about it consciously. Another conclusion is that the brain is just making multiple drafts of what is going on and is confused by the speed of the slide transition - i.e., in the normal course of events you decide to press the button, press it, wait for the signals to propagate, they do, and you revise your draft so that the decision and the slide movement are at the same moment - and that sequence was messed up by the faster signal. In any case, it makes clear that the brain is revising what it perceives to try to create a coherent time line - a time line that does not in any way reflect what really happens given the slow speed of neural impulses, but does reflect what happens externally to us.

Anyway, talking about being conscious of something or not is hopelessly inadequate because you are talking about editing that has taken place several hundred milliseconds after the initial registration, and taking that as reality and an accurate representation of how the brain works. It's not how the brain works, and any conclusion drawn from it is suspect at best, wildly wrong at worst.
 
I wonder why consciousness is seen as being identical with awareness of consciousness. I would have believed it to be far more likely that the awareness is just a subsystem of consciousness.

It is well-known that we cannot focus our awareness on everything that goes on, so that if we are concentrating on looking at something, we do not notice every signal from, say, the feet, or if we are concentrating on how a new shoe feels, we might not notice any longer if the shirt sits too tight. If we concentrate on following a ball, we might not notice people in gorilla suits waving to us.

On top of this, it is now clear that the decision-making process is also not part of the awareness-process.

In my mind, this makes our awareness a kind of monitor that gives input to the other processes, just like input from our senses. We can use it to influence decision-making, but it is not part of decision-making itself.

I also tend to believe that this awareness-process is not present, or not as developed, in most animals, which could explain why animals can be conscious but not necessarily able to develop self-awareness, or to anticipate events to the degree that humans can.
 
roger: You and I are saying the same thing there.

And btw, I read Dennett's book years ago, but it got lost in the great book purge, I'm afraid. :(

What you're saying is precisely along the lines of what I said above.

Consciousness does not deal in raw data. Rather, the functions that govern consciousness coordinate highly processed data from various areas of the brain.

The pinprick example is a classic case. Our brains are able to discern simultaneity despite the difference in distance. Same is true for our ability to perceive a red car as a red car when it drives by, despite the enormous variations in the color of the light (and the shapes it makes) that actually hits our eyes.

(And yet, if we view events that are sequential, but very close together, we will perceive them as simultaneous. For example, if you view a machine with a blinking light that makes 2 blinks close together, and you steadily decrease the time between blinks, eventually you get to a point where you see one blink, not two, even though the events are not actually simultaneous.)
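
That threshold behavior can be sketched as a simple step function. The 50 ms fusion interval below is a stand-in assumption for illustration, not a measured psychophysical constant:

```python
# Toy model of flicker fusion: two blinks separated by less than some
# threshold interval are perceived as a single blink. The threshold
# value is an illustrative assumption.
FUSION_THRESHOLD_MS = 50

def perceived_blinks(gap_ms: float) -> int:
    """Number of blinks perceived when two physical blinks occur gap_ms apart."""
    return 1 if gap_ms < FUSION_THRESHOLD_MS else 2

# Steadily decrease the gap, as in the described demonstration:
for gap in (200, 100, 60, 40, 20):
    print(gap, perceived_blinks(gap))
```

The real perceptual boundary is of course fuzzy rather than a hard cutoff, but the sketch captures the point: below some interval, two physically distinct events are served to consciousness as one.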

Our brain is designed to take an astounding array of variable, even partial, data and render it as a consistent conscious experience. Quite amazing.

But I know of no evidence which supports the idea that we're genuinely aware of everything as it happens, but lose our ability to recall that we were consciously aware of subliminal events. (The pinpricks are not subliminal, but bear with me.)

That is highly unlikely on its face, because if the brain were to do that, it would be enormously wasteful of resources. It would be much more efficient to simply never serve that information to consciousness at all. And the recent study I cited on conscious signatures demonstrates that the brain does not appear to engage the large-scale coordination of data for subliminal events as it does for events we consciously perceive.

But indeed, there's constant rewriting in order to maintain consistency. Since the pinprick to the face is not a subliminal event, and it causes pain, it gets served up to consciousness. And as you mention, when the pinprick to the foot gets served up, our awareness readjusts and changes state, replacing "I just felt a jab to the face" with "I just felt jabs to my face and foot".

All this is perfectly consistent with what I've been saying.

However, when you say that "consciousness is not a 'downstream' activity, but what we 'perceive' is merely what ends up getting the best weighting of the network", you are contradicting yourself, because that weighting must happen upstream, even for the single pinprick to the face. Because, remember, the brain has to coordinate raw data, match it with stored schema, and decide it's important enough to pay conscious attention to before we become aware of the event "jab to the face".

The slide projector experiment you mention is another example. Hopefully I can find links to better studies than the ones I cited above, but it is as you say: "our decisions are made below the level of consciousness, and get reported to us afterwards". That's what I mean by downstream.

Marvin's case is another very clear example of downstream consciousness -- in his case, emotional consciousness -- where we can actually identify the pathway and discern that bodily response comes first, awareness later.

All the examples you provide in that post simply provide further confirmation of what I'm saying.

Your conclusion, however, is incorrect:

talking about being conscious of something or not is hopelessly inadequate

It still makes sense to speak of being consciously aware of some things and not of others. In fact, studies of subliminal v. conscious processing depend on it.

Obviously, we are conscious beings. And it's not meaningless (or "hopelessly inadequate") to say that if I watch a movie and my girlfriend sleeps through it, I was consciously aware of what happened on the screen and she was not.

What these experiments do is to intentionally probe the boundaries, and explore the fluidity of conscious awareness across time.


ETA: And btw, the multiple drafts model is a kind of cinematic consciousness -- as our experience "refreshes", we're continually served up a new "now", which is a mixture of input from the world, and associations and fill-ins provided by the brain itself.

Also, there's another way to look at the pinprick experiment. It's possible (I don't think we can measure this yet) that the subject was never aware of "jab to the face". We know about the delay in the flinch response, but we also know that awareness lags behind physical response. So it's possible that the signal from the foot is received before we have time to become aware of the jab to the face; the "I've been jabbed in the face" scenario is scrubbed in favor of the "I've been jabbed in the face and foot" scenario, which is what we become aware of.
 
I wonder why consciousness is seen as being identical with awareness of consciousness. I would have believed it to be far more likely that the awareness is just a subsystem of consciousness.

It is well-known that we cannot focus our awareness on everything that goes on, so that if we are concentrating on looking at something, we do not notice every signal from, say, the feet, or if we are concentrating on how a new shoe feels, we might not notice any longer if the shirt sits too tight. If we concentrate on following a ball, we might not notice people in gorilla suits waving to us.

On top of this, it is now clear that the decision-making process is also not part of the awareness-process.

In my mind, this makes our awareness a kind of monitor that gives input to the other processes, just like input from our senses. We can use it to influence decision-making, but it is not part of decision-making itself.

I also tend to believe that this awareness-process is not present, or not as developed, in most animals, which could explain why animals can be conscious but not necessarily able to develop self-awareness, or to anticipate events to the degree that humans can.

Unfortunately, our language is inadequate to the material, I'm afraid.

We could talk about 3 sets here, which we might call perception, consciousness, and self-awareness.

(Terms of convenience there)

We could use "perception" to mean everything the brain picks up, even the stuff we have no idea that our brains have noticed (like subliminal smells).

We could use "consciousness" to mean our experiences -- such as our dreams, the parts of our environment we pay attention to, and emotions we're aware of -- which is a combination of some of what we perceive together with sensations produced by the brain itself.

We could use "self-awareness" to mean a subset of experience that gives us a sense of "I", of being an individual within and acting on the world. (It's possible to induce a state of conscious awareness that lacks self-awareness, e.g. by using LSD.)
 