My Model of Human Consciousness

rocketdodger

I am making this thread because for years people have been arguing about the origins of consciousness and I haven't yet seen someone go into much detail about exactly how it might occur. I mean, proponents of the computational model are clear that consciousness is a form of "self-referential information processing," but that isn't enough to get individuals who are not educated in computer science thinking on the right track.

I think I have a pretty solid model, which I have formed over the years, and I would like to present it here for anyone to read in the hopes that 1) people more educated than myself in the relevant issues might add to it, or criticize it, so that I may refine it and 2) people who do not even have a clue what I have been talking about all these years might better see where I (and the rest of us) are coming from with this whole "computation" thing.

It is a big OP, so for ease of response I am breaking it up into related sections and posting them separately. If you want to respond to multiple sections, please do so in multiple posts, so we can keep this pattern -- otherwise I fear this will get out of control!!

Just for reference -- and so people don't immediately respond to something they take exception with before reading where I am going with it -- here is a rough table of contents:

Basics -- Logical/Computational Reasoning
Basics -- Biological Neural Networks
Basics -- Artificial Neural Networks
Clues from the Visual Cortex
The Basic Digital Pattern Detector
Filters and Reasoning
Clues from Introspection
Putting it all Together
Details 1 -- Learning and Memory
Details 2 -- Sensory Perception
Implications

Finally, note that there is indeed scientific research to back up everything but the anecdotal / introspective claims that I make. However, because most of this is stuff that I have learned over the years, I don't keep references to relevant articles. So I am going to make all the posts first, and then go back and add citations and references as I find them -- which will probably be as people dispute them. So if you want a reference, and can't find it yourself, please post about it, and I will do my best to dig up the research. I also wanted to include some images to help people understand the concepts but I haven't gotten around to making them, so please let me know if you would like to see something -- otherwise I will probably just be lazy and never do it!
 
Basics -- Reasoning

A fact is a relationship between 1) percepts or 2) other facts. A knowledge base is the set of facts that an entity knows about the world (hence the name). Facts can either be acquired from perception (atomic, or axiomatic facts) or inferred from existing facts (derived facts). The process of inferring new facts from existing facts is known as reasoning. I believe reasoning is the root of human conscious experience.

There are various ways of implementing reasoning systems. One method is known as chaining, which comes in forward and backward varieties. Both methods are similar in that they can be used to generate new facts from existing facts, and in my opinion both are important.

For instance, suppose we wish to determine whether apples and blood are the same color.

Here is an example of forward chaining: suppose our knowledge base is something like {f1=apples are red, f2=blood is red, f3=red is a color}. Then with f1 and f2 we can infer {f4=apples and blood are red}, and then using f3 and f4 we can infer {f5=apples and blood are the same color}.

Here is an example of backward chaining: starting with the knowledge base {f1, f2, f3} and the goal f5, we note that f5 would follow if f3 and f4 were true, so f4 becomes a subgoal. Going with this choice of f4 (as opposed to “apples and blood are green” or any other color), we then see that f4 is indeed true due to f1 and f2. Note that the algorithm might well cycle through other colors and have to backtrack, because f1 and f2 only apply to red.
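
If it helps, here is a quick Python sketch of both algorithms over the apple/blood example. The rule format (a set of premises paired with a conclusion) is just something I made up for illustration, not a real inference engine:

Code:
RULES = [
    ({"apples are red", "blood is red"}, "apples and blood are red"),
    ({"apples and blood are red", "red is a color"},
     "apples and blood are the same color"),
]

def forward_chain(facts):
    """Keep deriving new facts from existing ones until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Work backwards from the goal: it holds if known, or if some rule
    concludes it and all of that rule's premises can be proven in turn."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES)

axioms = {"apples are red", "blood is red", "red is a color"}
print("apples and blood are the same color" in forward_chain(axioms))  # True
print(backward_chain("apples and blood are the same color", axioms))   # True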
 
Basics -- Biological Neural Networks

Biological neural networks are amazingly complex systems, but on a basic level they are very straightforward. A simplified model is that each neuron (the basic unit of a network) consists of a number of incoming connections from other neurons (termed “dendrites”), a body, and an outgoing connection to other neurons (termed the “axon”). (In reality synapses can occur almost anywhere, so the distinction between dendrite, body, and axon is a little fuzzier.)

During operation, a neuron will receive impulses from other neurons along its dendrites. These impulses are more or less summed within the cell body, and if a threshold is reached the neuron will generate its own impulse along its axon, sending a signal to other neurons.

It is important to note a few things about neurons. First, both dendrites and axons make arbitrarily many connections to other neurons (each connection is termed a “synapse”).

Second, although incoming impulses along the dendrites can be received at any time, when a neuron fires the same signal is sent to all downstream neurons along the axon.

Third, a neuron may generate a sequence of impulses rather than a single spike – because each impulse has the same magnitude, this is how neurons manage the “strength” of their signals: multiple spikes in quick succession add much more to the downstream sum than a single spike does.

Finally, though each incoming impulse adds to the total “excitement” of a neuron, this “excitement” decays over time. Thus, a series of quick impulses has a higher chance of exciting another neuron than a more spread-out sequence does.
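
To see how summation, threshold, and decay interact, here is a toy "leaky" neuron in Python. Every number in it (weight, threshold, decay rate) is a made-up value for illustration only:

Code:
def simulate(impulse_times, weight=1.0, threshold=2.5, decay=0.9, steps=20):
    excitement = 0.0
    spikes = []
    for t in range(steps):
        excitement *= decay                            # excitement decays over time
        excitement += weight * impulse_times.count(t)  # sum incoming impulses
        if excitement >= threshold:
            spikes.append(t)    # threshold reached: fire along the axon
            excitement = 0.0    # reset after firing
    return spikes

print(simulate([0, 1, 2]))   # [2] -- quick impulses reach the threshold
print(simulate([0, 5, 10]))  # []  -- spread-out impulses never do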
 
Basics -- Artificial Neural Networks

Artificial Neural Networks, or ANNs, are the result of studying biological neural networks and discarding features of the neural model that are felt to be unimportant for various reasons. Although there are many different types, a neuron (or node) of any ANN has the same essential features as its biological cousin – incoming edges, a function that must be satisfied by those incoming edges, and an outgoing edge that fires once the function is satisfied.

The most basic ANN is composed of nodes that simply sum all the incoming edges and fire the outgoing edge if the sum passes some value.
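
As a minimal sketch, here is such a node in Python, with arbitrary example weights and threshold:

Code:
def node(inputs, weights, threshold=1.0):
    """Sum the weighted inputs and fire if the sum passes the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(node([1, 0, 1], [0.6, 0.9, 0.5]))  # 1: 0.6 + 0.5 = 1.1 >= 1.0
print(node([0, 1, 0], [0.6, 0.9, 0.5]))  # 0: 0.9 < 1.0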

Although biological neural networks (BNN hereafter) are much more complex than ANNs, we have been able to elucidate the basic function of many BNNs by studying analogous ANNs. Furthermore, we are able to give the algorithms of ANNs formal treatment in computer science and thus glean more abstract functions from the analogous BNNs.

The best known example of this crossover is the area of visual recognition, and this is where I would like to truly begin.
 
Clues from the Visual Cortex

For almost 50 years humans have been (in a questionably ethical manner) gathering data from living specimens, from nematodes to cats, about the operation of neurons in the first stages of the visual pathways of creatures on Earth. At the same time, researchers have been studying the capabilities of ANNs and related systems in the area of pattern processing and recognition.

The results are both fascinating and incredibly insightful (this is likely why this area is by far the most prominent discussion that takes place in introductory neuroscience and computer vision courses).

Hereafter, I wish to treat a digital image (composed of pixels) and an image projected upon a biological retina (or other detection system, such as the compound eyes of an insect) as interchangeable. The equivalence comes into play because each pixel can be thought of as a single biological light detector, i.e. a retinal neuron (rod, cone, etc.). This will help to envision what certain processes are doing.
 
The Basic Digital Pattern Detector

I don’t want to get too technical in this piece, but I think it is important to illustrate the basic operation of a visual filter. The simplest example is a detector that “fires” when there is a dark pixel in the center of its window – an area of X by Y pixels – with empty space around it, and lies dormant under all other conditions.

If we were to write a snippet of code that performs such an operation, we would do something like the following:

1. Look at every pixel in the detector's window
a. If a pixel is in the center position, and is dark, add a large number to the sum.
b. If a pixel is in the center but is not dark, don’t add to the sum.
c. If a pixel is not in the center, and is dark, subtract a large number from the sum.
d. If a pixel is not in the center, and is not dark, don’t add to the sum.

2. Check the sum produced against some threshold.

3. If the sum is greater, we found a section of the image that has a dot in the middle and space around the dot.

4. If not, we did not find that pattern in the current section of the image.
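
Here is a runnable version of those steps, as one possible sketch. The window size, the +/-10 scores, and the threshold are all illustrative choices:

Code:
def dot_detector(window, threshold=0):
    """Fire (return True) for a dark center pixel with light surround."""
    h, w = len(window), len(window[0])
    cy, cx = h // 2, w // 2
    total = 0
    for y in range(h):
        for x in range(w):
            dark = window[y][x]
            if y == cy and x == cx:
                total += 10 if dark else 0   # steps 1a/1b: dark center adds
            else:
                total -= 10 if dark else 0   # steps 1c/1d: dark surround subtracts
    return total > threshold                 # steps 2-4: compare to threshold

window = [[False, False, False],
          [False, True,  False],
          [False, False, False]]
print(dot_detector(window))  # True: a dot with empty space around it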

Now, what could we do with such a pattern detector? The most trivial use is to place one at every pixel in an image and simply check the results – this would tell us how many “dots” are in the image. But we could also be very clever and use a number of these detectors as the base units for other detectors – for instance a larger version of the same “dot” detector. Then this larger detector would look for “dots” made up of smaller “dots.”
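
A sketch of that composition, reusing dot_detector from the snippet above: slide the detector across a larger image to get a map of where "dots" fire, then treat the map itself as an image for a higher-level detector:

Code:
def detector_map(image, detector, size=3):
    """Return a grid of detector outputs, one for every size-x-size window."""
    h, w = len(image), len(image[0])
    return [[detector([row[x:x + size] for row in image[y:y + size]])
             for x in range(w - size + 1)]
            for y in range(h - size + 1)]

# "Dots made of dots": run the detector over the image, then run it again
# over the resulting map (assuming the map is itself at least 3x3):
# big_dots = detector_map(detector_map(image, dot_detector), dot_detector)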

This might not seem very useful in and of itself – dots are sort of boring – but we can actually construct filters to respond to any shape. And, incidentally, this is what evolution has done with the neurons in the visual cortex. Research has revealed filters and filters composed of filters and everything in between that respond to everything from static lines to entire faces to moving shapes.

The key to understanding how these BNNs function is to look back at our basic filter – the “dot” detector – and realize that any function based upon summation can be implemented by a BNN because that is what a BNN does.
 
Filters and Reasoning

There is something special about such a filter: it is a form of reasoning. The value of each pixel or retinal neuron is a fact about the world, and by returning any output the filter is inferring a new fact from those existing ones – in particular, the fact “this area of the image contains shape X” or “this area of the image does not contain shape X.”

And since the basic filter pattern is pretty much all a neuron can do – remember, they sum the incoming impulses and fire once a threshold is surpassed – it turns out all neurons implicitly perform logical reasoning. Each input is an existing fact, and the output is a new fact.
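
To make this explicit, the same summing-and-threshold unit can be read as a logic gate over facts. The weights and thresholds below are hand-picked for illustration:

Code:
def infers(facts, weights, threshold):
    """Inputs are existing facts (1 = true); the output is a new fact."""
    return sum(f * w for f, w in zip(facts, weights)) >= threshold

print(infers([1, 1], [1, 1], threshold=2))  # True: "f1 AND f2" derived
print(infers([1, 0], [1, 1], threshold=2))  # False: AND fails
print(infers([1, 0], [1, 1], threshold=1))  # True: "f1 OR f2" derived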

Furthermore, since every neuron’s output is just the input for many more neurons, a BNN (or ANN, for that matter) is actually a type of massively parallel reasoning machine that implements the chaining algorithms implicitly.
 
Clues from Introspection

It is a fallacy to think that one can rely completely upon introspection for determining the mechanisms of consciousness given the amount of research that has illustrated just how wrong we can be about what we think is taking place in our own heads.

However, this does not mean introspection is useless. On the contrary, introspection can be an invaluable tool because while it does not tell us why we think about something it at least allows us to know what we are thinking.

And I have learned two important things about my own thinking via introspection. First is that “awareness” seems to be equivalent to “focus” or “attention” or whatever you want to call it. That is, I haven’t been able to ever catch myself being “aware” of something without also being focused on it, or paying attention to it. Second, whenever I am “aware” or “focused” or “paying attention” I am in fact reasoning on some level about the object of focus or awareness or attention.

For example, when I am driving home from work on the freeway there are countless objects that cross my field of vision. Yet I am only aware of an exceedingly small percentage of them. I also only focus on a small percentage, and I only pay attention to a small percentage, and coincidentally everything I am aware of is also something that I am focusing on and paying attention to in some form or another.

Furthermore, when I realize I am aware of something, the basic pattern seems to be that whatever I am focusing on triggers related thoughts which trigger other related thoughts, etc. And when I try to analyze how a thought is related to a previous one there is always some kind of a link, no matter how obscure. So at least for me the “flow of consciousness” seems to be this endless series of thoughts that bleed into and are interrupted by each other and I am “aware” of any object in the world that is the subject of those thoughts.
 
Putting it all Together

What would happen if there was a massive BNN wherein neurons represented “facts” about the life experience of an entity? When enough facts were true for a new fact derived from them to be true (enough dendrites brought incoming impulses to the body of a neuron), that new fact would be added to the constantly churning mix within the BNN and possibly lead to other facts, causing an endless chain reaction of reasoning.

And doesn’t that sound very close to what seems to happen in our heads? Imagine if there was a neuron or group of neurons – let’s call such a group a “symbol” – that fires when something looking like an apple is the focus of our visual perception. This symbol could be the end of a very complex series of filters operating on visual data. Further suppose that this symbol made synapses with the dendrites of symbols that fire when the color red is perceived, when a sphere is perceived, etc. Also suppose the symbol has synapses with a “fruit” symbol, an “apple taste” symbol, a symbol for the word “apple,” etc. The list goes on and on. Then each time we observe an apple, all of these symbols might be activated automatically, each of them triggering other symbols in a cascade of reasoning.

And there is no reason the connections between symbols should be unidirectional. In fact, research has illuminated the fact that neurons in layers of the visual cortex can be activated not only by visual data – as you would expect – but also by impulses from other neurons far downstream in the reasoning chain. In other words, the apple symbol could actually trigger the portions of the visual cortex that respond to an apple in the visual field. Sound familiar? It should, because it sure seems like this is what happens. You think of an apple, maybe because you read the word in a book, and you can visualize a very clear picture of what an apple looks like. And if you close your eyes, you can almost project an image of an apple right into your visual field.
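
Here is a hedged sketch of such a cascade. The symbol graph is entirely invented for illustration:

Code:
links = {
    "apple-percept": ["red", "sphere", "fruit", "apple-taste", "word-apple"],
    "fruit": ["food"],
    "red": ["color"],
}

def cascade(start):
    """Activating one symbol spreads activation to every connected symbol."""
    active, frontier = set(), [start]
    while frontier:
        symbol = frontier.pop()
        if symbol not in active:
            active.add(symbol)
            frontier.extend(links.get(symbol, []))  # trigger downstream symbols
    return active

print(sorted(cascade("apple-percept")))
# ['apple-percept', 'apple-taste', 'color', 'food', 'fruit', 'red',
#  'sphere', 'word-apple']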
 
The Details 1: Learning and Memory

It makes sense that a BNN wired up according to symbols would behave like our brains behave. But how does such a BNN get wired up that way to begin with?

The short and simple version of the answer is the old neurobiology saying “neurons that fire together wire together.” This refers to a number of mechanisms that allow neurons which are activated at the same time to strengthen synapses between each other such that future activations of one neuron have a higher impact upon the other.

It is quite simple. Suppose a creature observes A, B, and C simultaneously. A BNN somewhere in the creature has connections between symbols activated by A, B, and C strengthened by the mechanisms of synaptic plasticity. Thereafter, when the creature observes A and B, it is possible that the symbol for C will be automatically activated – regardless of whether the creature observes C at that time.
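
A minimal sketch of that mechanism, with the learning rate and the three-symbol setup as illustrative assumptions:

Code:
def hebbian_step(weights, activations, lr=0.1):
    """Strengthen the synapse between every pair of co-active neurons."""
    n = len(activations)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += lr * activations[i] * activations[j]
    return weights

# A, B, C observed together: every pairwise synapse strengthens, so later
# activity in A and B can help push C past its firing threshold.
w = [[0.0] * 3 for _ in range(3)]
w = hebbian_step(w, [1, 1, 1])
print(w[0][2], w[1][2])  # 0.1 0.1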

Furthermore, since a symbol corresponds to a derived fact, we now have an explanation for how all available facts can be learned – atomic facts come from direct perception data, and symbols created via the mechanisms above account for everything else.

An interesting feature of the human brain is that adults have markedly fewer synapses than young children. Also, since the known mechanisms of learning do not include the creation of new synapses between far-removed neurons, a picture of how our brains wire themselves is beginning to emerge.

Initially, neurons grow along chemical gradients – the same kind of gradients that guide other initial growth during the early phases of development. It appears, then, that the basic layout of the brain has been determined by evolution. But once all those neurons are in place, and wired in a basic way to all relevant neighbors, learning can begin. As a human grows and learns, synapses strengthened by the “fire together, wire together” mechanisms persist, and those that are never used at all may disappear.

And an even more fascinating implication of these mechanisms (for computer scientists, at least) is that our memories are in no way analogous to the raw data that a computer might use to store information. Instead, our memory is merely a series of strengthened synapses that lead back to the symbols relevant to the memory.

For example, you might remember the Citicorp building in New York City. You might even be able to picture exactly what it looks like in your mind’s eye. But there is no actual image there, no actual array of data that any normal computer could make use of to reconstruct the appearance of the building. What is there is a series of connections with the symbols that looking at the Citicorp building would activate, such that remembering the building actually activates a number of those symbols. And not surprisingly, research suggests that as memories become more removed from the event, we increasingly remember the memory rather than the event – a phenomenon perfectly explained by such a self-reinforcing loop of memory activating symbols which strengthen the memory, and so on.
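
As a sketch of this "memory as connections" idea (all weights and the threshold are made up), remembering here is re-activating symbols through strengthened synapses rather than reading back stored data:

Code:
SYMBOLS = ["A", "B", "C"]
WEIGHTS = {("A", "C"): 0.1, ("B", "C"): 0.1, ("A", "B"): 0.1}

def recall(active, threshold=0.15):
    """One step of associative recall over the strengthened synapses."""
    recalled = set(active)
    for s in SYMBOLS:
        drive = sum(WEIGHTS.get((a, s), 0) + WEIGHTS.get((s, a), 0)
                    for a in active)
        if drive >= threshold:
            recalled.add(s)   # symbol re-activated by its neighbors
    return recalled

print(sorted(recall({"A", "B"})))  # ['A', 'B', 'C']: C is recalled, not stored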
 
The Details 2: Sensory Perception

I feel this is the single area of the computational model that most individuals underestimate. Because if you think about it, most of the classic arguments that people use against the computational model rely upon the huge amount of sensory perception data available to our brains that, for some unknown reason, they assume no machine could have access to.

The basic form of such arguments is to pick a facet of human experience linked to subjectivity and flat out state that a machine could not have such an experience because a machine cannot “feel,” no matter how much reasoning might occur.

But I submit that “feeling” is simply reasoning about sensory perception data. And why would it feel any different for a machine with similar sensory perception capabilities?

Human emotions are simple to understand at a fundamental level under this reasoning model: they are reasoning that affects the operation of other reasoning in an incidental way.

For instance, pain is reasoning about a perceived injury (impulses from pain receptors) that modifies all other reasoning – it is hard to think about anything else when you are in pain because evolution has taught your brain that it had better focus on the injury in order to survive. And if the pain is bad enough, other changes to the body will start taking place – hormones will be released – that modify the reasoning in the brain even more. How? By altering the firing patterns of the sensory perception neurons and changing reasoning chains.

All human emotions – from the generic pleasure vs. pain to the specifics of love vs. hate – can be completely explained by such a framework. Evolution gave our brains the ability to reason, and it gave our bodies the ability to let our brains know when we are doing something good vs. doing something bad. Put the two together and the entire spectrum of human experience starts to make sense. When our body is happy our brain feels pleasure, when our body is not happy our brain feels pain. When our brain reasons about something bad for us, it tricks the body into making it feel pain, which in turn leads us to reason about it even more -- and hopefully avoid whatever was a problem. When our brain reasons about something good for us, it tricks the body into making it feel pleasure, which in turn leads us to reason about it even more -- and hopefully go after whatever the body wants.
 
Implications

The biggest implication of this version of the computational model is that it should be sufficient to reproduce a certain subset of the reasoning capability and pattern of the human brain in order to produce something close to human consciousness (including human subjective experience).

This is hard for many people to accept – after all, we have reasoning systems already and they act nothing like a human. But what I think most people fail to understand is the sheer magnitude of the reasoning that goes on in the human brain.

Because we are not talking about each of the human brain’s hundred trillion or so synapses being analogous to a bit in computer memory. Many might be equivalent, but many are not – because the human brain is an evolved master of data compression due to its ability to reference symbols instead of stored raw data.

The best way to understand this is to look at the difference between the memory required to store a font compared to the memory required to store an image of the alphabet. Fonts store the shape of characters as a series of mathematical equations – a circle here, a line here, a curve here – instead of raw data about what is located at any particular point in an image. By doing so, information about empty space that is of no concern to anyone doesn’t take up memory. All that is stored is what is needed to reproduce the characters. Furthermore, if you wanted to compare two characters, you might be able to get what you needed just by looking at the equations rather than an image produced by them. For example, if you wanted to see what was in common between “m” and “n” the equations alone would tell you that both have vertical lines, both have downward opening curves, both are of the same height, etc. Also, if you had the equations that describe “n,” it is only slightly more information to describe “m,” since the latter is only slightly different from the former. In fact, the essence of data compression is to store only what is different, which turns out to be the same as only storing what we care about.
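
A toy version of the analogy, with a stroke vocabulary invented purely for illustration:

Code:
# Characters stored as symbolic strokes instead of pixel grids, so shared
# structure is directly visible and "m" costs only one stroke more than "n".
n_strokes = [("vline", 0), ("arch", 1)]
m_strokes = n_strokes + [("arch", 2)]   # "m" = "n" plus one more arch

shared = set(n_strokes) & set(m_strokes)
print(shared)                            # the structure "m" and "n" share
print(len(m_strokes) - len(n_strokes))   # 1: the cost of the difference
print(16 * 16)                           # a 16x16 bitmap stores 256 values per glyph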

And since a symbol is by definition only the information about something that a BNN cares about, by dealing in symbols we can skip an enormous amount of computation that would otherwise be required for the same level of reasoning. Like a computer that is smart enough to represent everything in a format as compressed as a font, we can reason about what is the equivalent of a huge amount of data using effectively only a small number of synapses.

Furthermore, people can’t seem to get their heads around how much of their experience is defined by sensory perception data. I think if people understood the sheer magnitude we are talking about, they might be more likely to accept that their consciousness comes from these simple reasoning patterns.
 
Basics -- Reasoning

A fact is a relationship between 1) percepts or 2) other facts. A knowledge base is the set of facts that an entity knows about the world (hence the name). Facts can either be acquired from perception (atomic, or axiomatic facts) or inferred from existing facts (derived facts). The process of inferring new facts from existing facts is known as reasoning. I believe reasoning is the root of human conscious experience.

I have to disagree with that last part. It seems to me that the exact opposite is true; conscious experience is the root of human reasoning. Reasoning is something we do with our experiences -- reasoning itself doesn't actually explain experience, though.
 
Implications

The biggest implication of this version of the computational model is that it should be sufficient to reproduce a certain subset of the reasoning capability and pattern of the human brain in order to produce something close to human consciousness (including human subjective experience).

This is hard for many people to accept – after all, we have reasoning systems already and they act nothing like a human. But what I think most people fail to understand is the sheer magnitude of the reasoning that goes on in the human brain.

Because we are not talking about each of the human brain’s hundred trillion or so synapses being analogous to a bit in computer memory. Many might be equivalent, but many are not – because the human brain is an evolved master of data compression due to its ability to reference symbols instead of stored raw data.

The best way to understand this is to look at the difference between the memory required to store a font compared to the memory required to store an image of the alphabet. Fonts store the shape of characters as a series of mathematical equations – a circle here, a line here, a curve here – instead of raw data about what is located at any particular point in an image. By doing so, information about empty space that is of no concern to anyone doesn’t take up memory. All that is stored is what is needed to reproduce the characters. Furthermore, if you wanted to compare two characters, you might be able to get what you needed just by looking at the equations rather than an image produced by them. For example, if you wanted to see what was in common between “m” and “n” the equations alone would tell you that both have vertical lines, both have downward opening curves, both are of the same height, etc. Also, if you had the equations that describe “n,” it is only slightly more information to describe “m,” since the latter is only slightly different from the former. In fact, the essence of data compression is to store only what is different, which turns out to be the same as only storing what we care about.

And since a symbol is by definition only the information about something that a BNN cares about, by dealing in symbols we can skip an enormous amount of computation that would otherwise be required for the same level of reasoning. Like a computer that is smart enough to represent everything in a format as compressed as a font, we can reason about what is the equivalent of a huge amount of data using effectively only a small number of synapses.

Furthermore, people can’t seem to get their heads around how much of their experience is defined by sensory perception data. I think if people understood the sheer magnitude we are talking about, they might be more likely to accept that their consciousness comes from these simple reasoning patterns.

I'm still having a hard time understanding how one equates computation with conscious experience. You've provided a very good description of information processing and reasoning, but none of your OPs address or explain how data translate into experience. You just assume that if a system contains and manipulates data, then it has subjective experience of said data. The model you've presented completely ignores the salient features of our own BNN and assumes that abstracted functions of it are essentially the same.

No matter how one slices it Symbol ≠ Meaning, Syntax ≠ Semantics, and Computation ≠ Consciousness.
 
I agree with your emphasis on sensory perception. However, you also need to stipulate a feedback and motivational system. A bunch of neurons in a box will do nothing. A bunch of neurons in a box being fed information from its environment (whether it's real or virtual) may form synaptic pathways but will do nothing with the data.

Once you add a motivation factor such as, say, fear of death, hunger, or a desire to reproduce, the neural system will attempt to utilize the information it receives to attain its objectives. As in a BNN, some parameters may need to be "hardwired" into the network for it to work (like the BIOS in a computer). In order to proceed from this, the necessary peripherals and feedback systems need to be in place for the network to progress.

The sensory and feedback systems will allow the network to form an abstraction of its environment, which would probably be the first step towards consciousness.
 
I agree with your emphasis on sensory perception. However, you also need to stipulate a feedback and motivational system. A bunch of neurons in a box will do nothing. A bunch of neurons in a box being fed information from its environment (whether it's real or virtual) may form synaptic pathways but will do nothing with the data.

Once you add a motivation factor such as, say, fear of death, hunger, or a desire to reproduce, the neural system will attempt to utilize the information it receives to attain its objectives. As in a BNN, some parameters may need to be "hardwired" into the network for it to work (like the BIOS in a computer). In order to proceed from this, the necessary peripherals and feedback systems need to be in place for the network to progress.

The sensory and feedback systems will allow the network to form an abstraction of its environment, which would probably be the first step towards consciousness.

Yes I agree, but those are all the sticky details that need to be ironed out -- I think the framework of the model is a good starting point.

Personally, I can see how one could reach an understanding of those details starting with the framework ideas I describe here. But most people can't even get that far, which is why I wanted to just discuss the framework.

EDIT: For example, I didn't even begin to discuss things like how the latency inherent in biological neurons (since action potentials propagate at less than 100 m/s) can both complicate and simplify things.

EDIT 2: I would be happy to discuss any specific details with you, though, because I find this area fascinating.
 
I have to disagree with that last part. It seems to me that the exact opposite is true; conscious experience is the root of human reasoning. Reasoning is something we do with our experiences -- reasoning itself doesn't actually explain experience, though.

Well, you either didn't understand 90% of the OP or you posted this without reading 90% of the OP, because I make it clear exactly what I mean by "reasoning" and why it applies to machines as well as individual neurons, never mind entire human brains.
 
