Back to the main topic of explaining consciousness.
It doesn't "lead" to consciousness. Instead what I'm suggesting (it's only a suggestion), is that, that is what consciousness actually is. That is ...
... what we call "consciousness" is just that continuous, rapidly updating set of sensations and responses that we experience/undergo as a result of the chemical, electrical and physical changes running from the sensory input, to reactions in the brain, to signals going back from the brain to the muscles and other organs, and back to the sensory system in a continuous cycle ... the effect of that cycle is what we call "consciousness".
If you don't understand how that could be what you think of as consciousness as you perceive it in your own daily life, then that may simply be because the effect has become so refined and so efficient in humans, after billions of years of evolution, that to us as functioning apes it now seems like “magic” … as if there must be some other explanation, different from the purely physical/chemical reactions that define how all living things function …
… but since all known evidence is against such “magic”, I expect the explanation for the effect that we call “consciousness” is indeed just a highly evolved and very efficient sequence of perfectly natural chemical and electrical changes that occur in all living things (seemingly “very efficient” on our time scale at least, where we are unaware of the underlying chemical and electrical processes that go on all the time in our cells and nerves). These changes occur to different extents, and with more or less complexity, going from simple organisms such as plants to the most complex, such as mammals, including apes and humans.
You could think about it another way – if you were able to travel back to the time when the first living things appeared on the Earth (e.g. you are the only human alive, but you know nothing about modern science or the modern world … all you can detect is what your senses see, hear, smell etc., and what your thinking human mind tells you about the single-celled “life” before you and the landscape of the planet you perceive), then you would probably think it was impossible, even completely unimaginable, that a process of evolution would eventually (after billions of years) lead to humans that could make aircraft, build computers, discover quantum field theory, develop language etc., or indeed experience an effect that we call “consciousness” …
… but the explanation for how humans came to have all those characteristics and abilities today (including “consciousness”) is certainly that they are the inevitable result of some 3 billion years of evolving life becoming more and more highly developed, sophisticated, refined and capable in everything associated with our life and existence.
This is an excellent and thought-provoking post.
I personally think we can go a little farther in specifying what it is the brain does that constitutes consciousness.
It might be helpful to consider a system of external signals, neurochemical reactions to those signals, and an adaptive motor (muscle) response that doesn't involve what most of us would typically regard as consciousness. Consider a filter-feeding jellyfish whose prey lives predominantly in water of a certain temperature range. The jellyfish thus thrives best when it's in water of that temperature range. In the ocean, deeper water is almost always colder, and shallower water is almost always warmer. So when the jellyfish senses a temperature lower than the optimum range, it swims upward, and when higher, it swims downward. It's sensing and responding to its environment, but its responses don't require "understanding" anything about its environment. It has to sense temperature and it has to have at least two different motor responses, one of which reliably moves it upward and one downward. But it does not have to "know" what temperature, water, up, down, food, swimming, or "itself" are as concepts.
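To make the "no understanding required" point concrete, here's a minimal sketch in Python of that kind of pure stimulus-response rule. The thresholds and action names are invented for illustration; the point is that the mapping from sensation to motor command involves no model of water, depth, food, or self:

```python
# Hypothetical bounds of the jellyfish's preferred temperature range (deg C).
OPTIMAL_LOW = 12.0
OPTIMAL_HIGH = 16.0

def motor_response(sensed_temperature: float) -> str:
    """Pure stimulus -> response mapping; no model of the world."""
    if sensed_temperature < OPTIMAL_LOW:
        return "swim_up"    # shallower water is almost always warmer
    if sensed_temperature > OPTIMAL_HIGH:
        return "swim_down"  # deeper water is almost always colder
    return "drift"          # within range: keep filter feeding

# Each reading is handled in isolation; nothing is stored or inferred,
# which is why no "understanding" is needed for the behavior to work.
for reading in [10.5, 14.0, 18.2]:
    print(reading, "->", motor_response(reading))
```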
Contrast that with higher animals that, in order to hunt, evade threats, care for their young and so forth, have to be able to recognize not just environmental signals but the presence and positions of objects, both as types ("a tree") and as specific individuals ("the tree where my nest is"), rather than just specific patterns of sensory input. Of course, patterns of sensory input are how objects are recognized, but the more processing of the raw sensory input is involved, the more abstract the relationship can be, and the more reliably a thing or type of thing can be recognized even if it is, say, facing a different way, under unusual lighting conditions, viewed from an unusual angle, contorted into an unusual shape, partially or mostly concealed, and so forth.
Once objects and other basic situations (e.g. darkness, or being at a height) are perceived and recognized, there are several ways a nervous system might react to them. Some reactions are instinctive (flee, freeze, attempt to mate). Other reactions can be learned associatively, and once learned, they function much like instinct. An animal once burned by fire that then shies away from fire does not necessarily need to remember the specific past experience of being burned, as a narrative, in order to have a learned aversion to fire. (In humans, accidental and counterproductive associative learning early in life has been a popular idea in psychology as an explanation for some phobias and paraphilias, where the experiences that created the association aren't remembered. It's debatable how much of a role that actually plays, but it might have a role in, for example, food taste preferences, where one theory holds that unusual flavors that happen to have been eaten just before an illness become disliked long afterward.)
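As a rough illustration of how associative learning can work without any stored episode, here's a toy Python sketch, loosely in the spirit of the classic Rescorla-Wagner delta rule (the learning rate, threshold, and names are all invented). The animal ends up avoiding fire, yet nothing in the system records when or how it was burned:

```python
LEARNING_RATE = 0.3

# A single scalar association strength per stimulus; not a memory of events.
aversion = {"fire": 0.0}

def experience(stimulus: str, outcome_badness: float) -> None:
    """Nudge the association toward the observed outcome (delta rule)."""
    error = outcome_badness - aversion[stimulus]
    aversion[stimulus] += LEARNING_RATE * error

def reaction(stimulus: str) -> str:
    """Once the association is strong enough, avoidance fires like instinct."""
    return "avoid" if aversion[stimulus] > 0.5 else "approach"

experience("fire", 1.0)  # burned once
experience("fire", 1.0)  # burned again
print(reaction("fire"))  # -> "avoid", with no stored narrative of either burn
```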
There is far more benefit (and more metabolic cost as well) in more advanced neural processing that recognizes, remembers, and learns narratives. By narratives I mean descriptions of agents acting in the world with volition and cause and effect, descriptions that are highly compressed and abstracted relative to the raw sensory input from which they're constructed. "The wolf is hunting but doesn't see the baby behind the bushes." Constructed narrative can become remembered narrative, which is a more advanced and far more versatile form of learning. Instead of the "fire, aaaah, bad!" of mere associative learning, you can have "I saw Joe get hurt when he got too close to fire, but when it's cold a fire can keep me warm."
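One way to see how compressed a narrative-level description is, relative to raw sensory input, is to sketch what such a representation might look like as a data structure. This is entirely hypothetical, just an illustration of the idea: where the raw input would be millions of pixel-level values per instant, the narrative is a handful of symbols with roles, volition, and an epistemic state attached:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NarrativeEvent:
    agent: str                     # who is acting
    action: str                    # what they are doing, with volition
    target: Optional[str] = None   # what the action is directed at
    agent_believes: dict = field(default_factory=dict)  # epistemic state

# "The wolf is hunting but doesn't see the baby behind the bushes."
event = NarrativeEvent(
    agent="wolf",
    action="hunting",
    target="baby",
    agent_believes={"sees_baby": False},  # the part that matters for prediction
)
```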
Consider, though, the formidability of the computational task of constructing an ongoing stream of narrative, in close to real time, from a stream of raw sensory input plus a potentially huge store of memories. Especially when even the types of "actors" and "props" that occur in the narrative aren't implicitly known at the outset but themselves have to be learned by observation. Computer AI is now at approximately the stage AI researchers in the 1960s expected to reach by the 1970s, where visual object recognition works reasonably well, but I don't think any computer AI is close to being able to, e.g., "examine this video and briefly summarize what's happening." That takes what we think of as basic animal intelligence, which we're not close to yet, even though nothing about it seems fundamentally impossible for AI.
(Now, suppose you have brains that get really good at understanding the world on a summarized narrative level. Good enough for things like, "If Janice sees Joe with that banana she'll think it's one of hers and that Joe took it, even though it's not," such as might be advantageous for passing on one's genes in a highly social species. This might lead to language, which is another way of encoding narrative. You have to already be able to perceive the world in narrative terms in order to use it, but language in turn facilitates that ability. That might help explain how full-blown grammatical language can be so relatively newly evolved in our species, and yet also be so well-developed that it seems effortless, like object perception and muscle coordination, which have existed many, many times longer.)
Where does consciousness come in? Earlier on. It's part of the result of the process of constructing narrative in real time from sensory input and memory.
I mentioned a hypothetical AI that observes a video and describes what's happening. Would that AI have to be conscious to accomplish that task? I don't believe so. But that version of a narrative-recognition system would never evolve in the wild, because passive spectatorship is of far too limited use to be worth the metabolic cost. What makes the ability useful to a competing individual in an evolving species is the inclusion of the self, the organism doing the processing, as an actor, and often the primary actor of concern, in the narrative. The result and purpose of generating a narrative understanding isn't to sit and watch events unfold; it's to make better decisions about what to do in complicated and dynamically changing circumstances. "The wolf is hunting, but it doesn't see me behind these bushes."
It's the inclusion of the self in the ongoing narrative constructed from sensory input and memory that causes, or constitutes, consciousness.
The usual reaction I get to this idea is something to the effect of, "there has to be more to it than that." That reaction is understandable. It seems like something is left hanging, or some illicit bootstrap effect is being snuck in.
If the constructed narrative is what experience is, who is experiencing that experience? The answer is that no separate experiencer is needed; the process of constructing the narrative is what experience is, and your brain is both constructing and experiencing that narrative. (How could it evolve otherwise? What good would it be to construct the narrative and then ignore it?)
Okay, maybe, but if you are a "character" in that constructed narrative, how can you also be the experiencer? Isn't that like saying Harry Potter is experiencing the story he's in?
I don't think it is. The reason it's different is that your physical (and thinking) organism actually, materially exists. It exists in addition to the self-character in your mental narrative. And furthermore, you normally make no distinction between the two, any more than you normally make a distinction between the wolf that's out there near you in the world and the wolf you reconstruct inside your head in order to recognize and react to it. (At least, until a philosopher comes along and claims that one or the other of them doesn't really exist.) Roughly, your brain experiences the world via the process of creating a narrative from its sensory view of it, and experiences itself experiencing in the process of including its own part (including memories of thinking, deciding, and acting) as an important element of that (real) world and as a participant in that narrative.
Hofstadter and others describe this as a self-referential "strange loop," because they're looking at it on the level of how it actually works, of signals going here to there and back again between different parts of the brain. I don't try to evaluate it on that level, and I don't think that's necessary in order to intuit how consciousness comes about. As with so many processes in biology, it's quite a bit easier to think about it in terms of what it accomplishes instead of how it works in detail. Either way, there's no actual contradiction or paradox in it.