Explain consciousness to the layman.

Status
Not open for further replies.
Consciousness, for a layman, is...

The process of maintaining a model of the environment, updated in real time with input from the senses, while storing a record of those updates and comparing the model's current state with past states.

So, the essence of consciousness:

1) Input from the senses.
2) Maintain model of environment.
3) Recall past states of this model and compare it with the present state.

Those are, I think, the basic features of what we typically mean by consciousness. In other words, a machine that does the above can fairly be considered conscious. We nevertheless have embellishments and side effects not essential to the definition, such as:

1) include the conscious process as part of the modeled environment (sense of self).
2) invent imaginary sensory input (creativity).
3) play back past input and recycle it back into consciousness (mulling the past).
4) maintain an ongoing internal monologue in a native or other language of communication (includes ear worms).
5) emotions (global state changes).
6) desires (self-initiated task assignments).

I'm sure there are more. These are all I can think of in the time allocated to make this post.

I don't think consciousness is one thing with a finite set of features. We are, after all, collections of modules, each of which exists to help replicate the genes that directed its formation (Thomson). Random modules exist that are not essential to what I think we mean by consciousness, but they are often considered normal features of the process and the experience.
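The three-step core described above (sense, model, compare) can be sketched as a toy program. This is purely illustrative; every name here is hypothetical, not a real API or a claim about how brains work:

```python
# A toy sketch of the three-step loop: sense, update model, compare states.
# All class and method names here are hypothetical illustrations.

class ConsciousModel:
    def __init__(self):
        self.model = {}        # current model of the environment
        self.history = []      # stored record of past model states

    def sense(self, inputs):
        """1) Take input from the senses."""
        return dict(inputs)

    def update(self, inputs):
        """2) Update the model of the environment in real time."""
        self.history.append(dict(self.model))  # store a record before updating
        self.model.update(self.sense(inputs))

    def compare(self):
        """3) Compare the model's current state with its most recent past state."""
        if not self.history:
            return {}
        past = self.history[-1]
        return {k: v for k, v in self.model.items() if past.get(k) != v}

agent = ConsciousModel()
agent.update({"light": "on"})
agent.update({"light": "off", "sound": "quiet"})
print(agent.compare())  # what changed since the previous state
```

A machine running only this loop would, on the definition above, meet the bare criteria; the embellishments in the list (sense of self, creativity, and so on) would be extra machinery layered on top.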
 
Great... very nicely put.

But a slight nitpick: there are no random modules. There are occasional random misfirings or activations of modules that are not due to effects external to the module. Also, modules may not always respond in exactly the same manner to the exact same stimuli. These two effects produce "random" behaviors.

Moreover, during their own normal activity some modules can inadvertently induce effects on other modules that, as far as the influenced modules are concerned, are indistinguishable from sensory or other external perceptions, and thus, despite being internal, are for all intents and purposes conceived as external. An example that has always fascinated me: sneeze violently enough and you can end up seeing stars in front of your eyes, as if they really were stars out there in the air circling your head.

Incidentally, the same effect also explains hearing voices and other hallucinations, such as when someone starves and starts talking to God. The pathological state, caused by deprivation of sugars and other nutrients, produces misfirings and other malfunctions resulting in all sorts of mayhem, and the person conceives the effects as externally induced: seeing, hearing, or even feeling God. And since these experiences are produced by one's own mind, the gods tend to conform to the models already held in long-term memory.
 
1) include the conscious process as part of the modeled environment
Isn't that the essence of what we call consciousness? That you are part of the model?
 
If that theory turns out to be wrong (which is unlikely), then at least we will still learn a lot about how consciousness does work in the process of trying to emulate it.

Even though I am intuitively sympathetic to non-material or non-computational theories of mind and consciousness, I've never been able to see clearly how one goes about testing them. On the other hand, AI, neurobiology, etc. can apply more rigor to forming and testing hypotheses, so even if they're wrong, they extend the empirically demonstrable knowledge boundary in a way that the other camp can't. In AI, it turns out to be fairly easy to demonstrate that computers can be taught to play virtual ping pong. In biology, it can be demonstrated how damage to a specific part of the brain can affect the ability to perceive, the ability to perform tasks, etc.

In the more complicated human brain we might find surprises, like how far an organism can compensate for damage to a specific area of the brain. But what some theorists say is pretty far out there: for example, that the brain is holographic, which I don't have the background to properly explain but understand well enough to know it would predict greater rather than lesser brain plasticity. The materialist can then design an experiment involving physical damage and novel ways an organism might adapt, but I have no doubt that in simple cases you'd see other parts of the brain lighting up, as opposed to the entire brain observably reconfiguring itself or falling into some non-localized mode.

It's not a mystery why some people would want to posit maximal plasticity - "I" might want to believe that "I" still exist if I have a stroke, get shot in the head, etc. It's understandable that human beings want there to be an irreducible spark that is their unique selves, and that the spark, or pattern of information, exists in not one lobe of the brain but globally. We want Gabby Giffords to still be Gabby, and amazingly enough it seems she largely is - as evidenced by her smile, positivity, determination, etc. But a different bullet trajectory or different ammo pretty clearly could have wiped out a lot more of "her." My point being that no matter how inspirational we might find a given anecdote, there are many others illustrating how brain damage does impact the "self." Indeed the idea of a vibrant personality trapped in a damaged brain, or a damaged body that destroys the ability to interact, has its own horrors, and may be seen as a fate worse than death.

When I read about researchers using brain waves outside the skull to allow an ALS patient with no motor ability to painstakingly transmit a message, it's striking how much work is involved in designing that experiment, and how much effort is required of the patient/subject. Arguably it could speak to the resilience of "self," but in this case researchers are dealing with an intact "mind" and a debilitated body, so it doesn't say that much about consciousness. This experiment doesn't directly apply to this discussion; my point is just that it takes an enormous amount of work to design and carry out the experiment, and the people doing the grunt work aren't the woo camp. Even though I'm sympathetic to the woo camp.

I thought of another non-computational thing. In "Awakenings," Oliver Sacks found it was impossible to titrate optimal doses of L-dopa for the treatment of brain damage due to encephalitis. He ended up believing that the patients had lost some essential poise: no matter how narrowly he titrated the dose, patients seemed to be either manic or catatonic, states that seemed equally insulting to their essential personalities. I think he tends to see this as a puzzle, or even a tragic marvel, but not essentially a mystery. As a clinical neurologist he spins philosophy out of specific cases, but he seems to believe that "mind" is the inevitable manifestation of a nervous system, which seems like a pretty "emergent" way of looking at it.
 
The brain probably does not literally operate like a digital computer or a Turing machine.

But, whatever processes the brain is using can be emulated on a digital computer or a Turing machine (or a universal computing machine of any sort).

In theory: if we can ever duplicate the process exactly, it wouldn't be different from natural consciousness. It would be an actual consciousness, artificially induced.

If consciousness is, in fact, essentially a computation (in the Turing-machine sense), then emulating the process would indeed be the same as duplicating it.

If consciousness is not essentially a computation (and I've given my reasons why that may not be the case) then it is not true that it can be duplicated by emulation. Consciousness will be like every other process in the body, in that computer emulation might tell us how the process works, but it cannot duplicate it.
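To make the emulation claim concrete: any machine specified by a finite transition table can be stepped by an ordinary program. A minimal sketch, assuming the standard (state, symbol) to (state, symbol, move) formulation; the example machine just scans right over a tape of 1s and appends one more:

```python
# A minimal sketch of Turing-machine emulation. The machine is given
# as a transition table: (state, symbol) -> (next_state, write, move).

def run_tm(table, tape, state="start", blank="_", max_steps=1000):
    """Emulate a Turing machine; return the final tape as a string."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: scan right over 1s, write a 1 on the first blank, halt.
table = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_tm(table, "111"))  # -> 1111
```

Whether such an emulation would merely describe consciousness or actually instantiate it is, of course, exactly the point in dispute above.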

If that theory turns out to be wrong (which is unlikely), then at least we will still learn a lot about how consciousness does work in the process of trying to emulate it.

In spite of numerous claims to the contrary, I've never set any forms of research off limits.
 
my point is just that it takes an enormous amount of work to design and carry out the experiment, and the people doing the grunt work aren't the woo camp. Even though I'm sympathetic to the woo camp.

The people carrying out detailed complex experiments tend, in general, to be different to the people making broad sweeping statements about consciousness.
 
The people carrying out detailed complex experiments tend, in general, to be different to the people making broad sweeping statements about consciousness.

Yes, exactly. Some people on this site reach for their revolvers when anyone talks of "soul" or "spirit," and I'm not one of them, but I understand the reaction. The annoyance I think comes from the way scientific/computing vocabulary is mixed with "woo." It's one thing if people want to have deep conversations about spirituality using the vocabulary of metaphysics, but when they start slinging around phrases like "quantum effects" it's fair to wonder whether they know what they're saying.

Put it another way: I can say "we are all spiritual beings," but only a few of us are physicists, so I'd have more patience with physicists speculating about the spiritual realm than with spirituality gurus speculating about physics. I have a woo side, but am suspicious of people who invoke scientific language to make their ideas sound important unless they have the background to know what they're saying.
 
To anyone who might actually read my long post from yesterday, I said "lobe" of the brain when I meant patch or sector or area - not some large, distinct anatomical structure.
 
I took him to mean that it can't be done using real-to-us 24 hr/day 3600 sec/hr time.

And no, relativity considerations do not alleviate that problem.

These two statements are not logically consistent.

When relativity is considered, the notion of "real-to-us 24hr/day/3600 sec/hr time" becomes ... um ... relative?

And are you familiar with what the notion of time becomes when it is relative? It becomes an ordered series of events. Nothing more.

Can you tell me whether the Turing model includes ordered series of events?
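For what it's worth, a computation in the Turing model is exactly that: an ordered series of configurations, each produced from the previous one by the transition rule. A trivial sketch (the function names are hypothetical):

```python
# A computation as an ordered series of events: repeatedly apply a
# transition rule and record every configuration passed through.

def trace(step, state, n):
    """Return the ordered series of states produced by applying `step` n times."""
    events = [state]
    for _ in range(n):
        state = step(state)
        events.append(state)
    return events

print(trace(lambda s: s + 1, 0, 3))  # -> [0, 1, 2, 3]
```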
 
The point is, I wouldn't be able to tell if the robot was doing something different than people do. All I could do is observe the robot's behavior. I wouldn't know if it had been programmed to go through its memory and "act" conscious or if the response had arisen spontaneously. For lack of a better word, "spontaneous" will do; consciousness is the spontaneous process of knitting our internal narrative as we go along.

The idea is that as the complexity of the behavior increases, the chances that the robot was merely "programmed" to respond as if it were conscious is far smaller than the chance the robot is actually conscious.

Especially if you were the one who programmed it, and you don't remember putting in the behavior you are seeing!
 
I have a woo side, but am suspicious of people who invoke scientific language to make their ideas sound important unless they have the background to know what they're saying.

IMO, when the physicists get involved, we can take things seriously.
 
I entirely agree, but there are some people entirely wedded to the idea of a spare rendering them redundant.

Nope.

There are plenty of people who say it wouldn't be so bad to be killed the instant the copy is made.

There are zero people who say it would be OK to get killed at any instant later.

The idea of people sitting in the teleporter and pressing a button to immolate themselves is just stupid, and furthermore a grossly dishonest red herring that is only used to sell snake oil to conversation observers.
 
You don't seem to understand very much about science, including computer science, yet that doesn't discourage you from sharing your opinions with us on these and other matters.

I really wish you'd find something better to do.

I don't wish that.

I have learned more about this topic by constructing arguments against westprog's absurd positions than I have by agreeing with anyone on our side of the issue.

I could do without some of the tactics employed, but in the end it is worth it IMO.
 
I have learned more about this topic by constructing arguments against westprog's absurd positions than I have by agreeing with anyone on our side of the issue.


As long as you're enjoying it, have at it.

The signal-to-noise ratio is far too low for me.
 
So, the essence of consciousness:

1) Input from the senses.
2) Maintain model of environment.
3) Recall past states of this model and compare it with the present state.

Yeppers.

Glad to see you are putting some serious thought into this!

Now -- assuming this is correct -- do you see any reference to the future state? That is, at an instant in time t, the system is getting input from the senses that corresponds to a world state of no later than time t, right? So the very latest model state will be of the world at time t, correct?

I ask you this -- does this imply that our consciousness is restricted to knowledge of the present and past only? That there is no actual "perception" of moving into the future, other than the fact that the present-past states shift one instant in that direction? I think it does. If the model only includes the present and past states, then ... it only includes the present and past states.

Thinking about this question made me realize that stepping into the teleporter and being destroyed wasn't so bad after all, since there is by definition no actual link to the future in our lives. If we live only in the present/past, then being destroyed and continuing to exist in a copy is logically no different than what happens every other instant in our lives.
 
I am not a 'spiritual being'.

Without reference to the "spiritual," I can say we all share the general experience of thinking, so we're all qualified to discuss that general experience in lay terms. Physicists and mathematicians are qualified to sling around lay terms, but laymen aren't qualified to sling around physics terms.

I look at your signature, that paean to thought, and wonder if this order of thinking is qualitatively different than the thinking theoretically inducible in a machine.
 
I look at your signature, that paean to thought, and wonder if this order of thinking is qualitatively different than the thinking theoretically inducible in a machine.


No - the thought that Russell praises is not qualitatively different than the thought that may someday arise on another machine.
 