Consciousness: What is 'Awareness?'

I'm starting to think that consciousness is tied (or adapted, or whatever) to time and space in a way that might not be replicable in binary code. There is no ultimately small unit of measurement that objectively exists in a final sense "here".

I could be wrong, but I'm not sure the size or function of a neuron could really be translated into ones and zeros.

And even if it could be done, in theory, I'm not sure any humans are smart enough to figure out how to do it.

But I'm really talking out of my a$$ and have no idea. lol.


That's possible, but the 'recreation' of neural function would probably occur at a higher level of programming, though this would ultimately reduce to 0, 1, move left, move right.

I'm not willing to guess either way, but theoretically it should be realizable on a general Turing machine. What I'm interested in is exploring the possibility. I don't see why anyone would want to argue that it is impossible. How are we supposed to know until we try? We've barely scratched the surface so far.
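
For what the "0, 1, move left, move right" picture amounts to, here's a minimal Turing machine interpreter. The sample rule table (my own toy example, not anything from the thread) increments a binary number, just to show that a state table plus those primitive moves is a complete model of computation:

```python
# A minimal Turing machine: read a symbol, write a symbol, move left or
# right, change state. The rule table below (invented for illustration)
# increments a binary number written on the tape.
from collections import defaultdict

def run(rules, tape, state="right", max_steps=1000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" = blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    used = [i for i in sorted(cells) if cells[i] != "_"]
    return "".join(cells[i] for i in range(used[0], used[-1] + 1))

# (state, symbol_read) -> (symbol_to_write, move, next_state)
rules = {
    ("right", "0"): ("0", "R", "right"),  # scan to the end of the number
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, carry moves left
    ("carry", "0"): ("1", "R", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "R", "halt"),   # overflow: new leading 1
}

print(run(rules, "1011"))  # -> 1100  (11 + 1 = 12 in binary)
```

Anything a conventional program does ultimately unpacks into tables like this; the question in the thread is whether that's enough.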

This is a difficult issue, but I think we are smart enough to figure it out. The quickest way to do it, though, would probably be to recreate cortical columns, as is already being done, along with subcortical structures -- and then let the system learn. We aren't smart enough to know how to program the system from scratch, I don't think.
 
Yeah. I understand that he was a small, fat man, so kind of big in one way, kind of not in another. Ambiguous, he was, in terms of size.

Sort of a small-fat-being-nazi-while-rector-at-Freiburg-and-not-renouncing-his-affiliation type of dude.

When he wasn't confusing people with odd German tenses on the way to language. Or engaged in strange copulatory acts with errant gypsies on their way to Johannesburg during the snow storm of '44.

Hadn't heard of that liaison. I imagine they're being there, at-hand, to-hand (H being all hands, according to Hannah), things got out of hand phenomenally.

(Btw, in a previous post, I got snowed by things being "to-hand", which are being used, and "at-hand", contemplated, not the other way round. If H made the same error, that might account for the strangeness of the copulation). :eusa_shifty:

... Blobru, sorry it took so long to get back to your points about relation of percept to memory. I knew we were going back to it but didn't think it was time yet to delve into that area. You were too far ahead of the game.

Well, they weren't very substantial points to get back to. Enjoying the discussion between yourself, kellyb, and Iconclast08 last page, so quite content to lurk awhile.
 
That's possible, but the 'recreation' of neural function would probably occur at a higher level of programming, though this would ultimately reduce to 0, 1, move left, move right.

I'm not willing to guess either way, but theoretically it should be realizable on a general Turing machine. What I'm interested in is exploring the possibility. I don't see why anyone would want to argue that it is impossible. How are we supposed to know until we try? We've barely scratched the surface so far.

I don't see any basis for arguing that it can't be done by a machine. After all, we're just organic machines when you get right down to it.

Whether it can be done by a Turing machine alone is a different question, though.

If it turns out that a mechanical component is required -- in other words, if being conscious is best viewed as something the body does, like blinking or sweating -- then the IP component will need to be combined with an electro-mechanical system in order for that function to be accomplished.

So, just as a robot that spraypaints cars needs a computer plus a mechanical arm to function, so a machine that does consciousness might need both types of components.

And there's good reason to think this will turn out to be the case. For instance, conscious awareness is accompanied by a distinct physical sensation. Our sense of awareness is actually locatable -- behind our eyes, above our mouth.

This physical sensation is absent from other non-conscious activities of the brain.

As of yet, I don't know of any conception of how pure IP gets you that.
 
And there's good reason to think this will turn out to be the case. For instance, conscious awareness is accompanied by a distinct physical sensation. Our sense of awareness is actually locatable -- behind our eyes, above our mouth.
OT, but interesting, to me at least:

Having read my fair share of the old introspective literature from the early days of psychology, when people were carefully trained to dissect out their thought and perceptual processes, this claim surprised me. Not saying it isn't true, just that I can't remember seeing it in the literature of that time (which may be my poor memory, or may be because it does not exist). Anyway, its existence in literature is irrelevant to what I wanted to say:

Failing to remember any introspective accounts of a "sense of awareness" that is localizable, I thought I'd first try a quick google.

Try it. It's hilarious. It gives a real cornucopia of results, from journal articles to Real Magick guides to astral projection, to...

But while I am here--Piggy, do you have a citation for that localization? I'd like to see it, cos I seriously don't experience it myself.
 
Granted, we don't currently define emotional or motivational states well enough to operationalize them in computer systems, but is there any legitimate argument why we could not?

Yeah, I don't see why not. Reminds me of neural network modeling in cognitive psych circles (more on this below). The more I read and learn, the more I find myself agreeing with Piggy's view of us as organic machines comprised of the Dennettian "tiny robots." I was extremely resistant to this idea in the past, that it somehow made us less "human" for rather silly, New Age reasons. But now I am in awe of human "machines," if you will. Highfalutin ones, but still machine-like in so many ways, though mind-blowingly more complex than non-neo-cortex sheathed brains (at least as far as we know).

Ever read Thagard (Hot Thought-- dense, but excellent book)? He cites the example of the neurocomputational model GAGE (named after our pal Phineas), which actually models decision-making through units representing cognitions distributed across different brain areas (e.g., PFC, amygdala, hippocampus, etc.). He also discusses HOTCO (for HOT COherence), which models emotional inferences made in thinking and reasoning about different things.

This would also involve a lot of different strains of constraint satisfaction programming, which would weave a fantastically complex web. Just for a taste, human motives (e.g., to seek cognitive consistency, as in cognitive dissonance reduction) impose constraints on beliefs about and attitudes toward stimuli and qualia. Not to mention the undergirding Damasio autobiographical memory piece charged with limbic and prefrontal cortical momentum, which in itself is in a near constant state of subtle if not sometimes dramatic flux based upon impinging stimuli, the bundling/chunking of which are viewed as "experiences" that shape our brains and behaviors, which enriches meta-awareness.
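
For a taste of what constraint satisfaction looks like in code, here is a toy settling network in the general spirit of Thagard-style coherence models. The element names and link weights are entirely invented for illustration; this is a caricature, not GAGE or HOTCO:

```python
# Toy constraint satisfaction: positive links pull activations together,
# negative links push them apart, and the network relaxes toward a
# coherent state. Elements and weights are invented for illustration.
links = {                                # (element, element): weight
    ("belief_A", "evidence_1"): +0.5,    # A coheres with the evidence
    ("belief_B", "evidence_1"): -0.5,    # B conflicts with the evidence
    ("belief_A", "belief_B"):   -0.3,    # A and B compete directly
}

def settle(activations, links, steps=100, decay=0.05):
    """Iteratively update each unit from its weighted neighbors."""
    for _ in range(steps):
        nxt = {}
        for unit, act in activations.items():
            net = 0.0
            for (a, b), w in links.items():
                if unit == a:
                    net += w * activations[b]
                elif unit == b:
                    net += w * activations[a]
            new = (1 - decay) * act + net
            nxt[unit] = max(-1.0, min(1.0, new))  # clamp to [-1, 1]
        activations = nxt
    return activations

acts = settle({"belief_A": 0.1, "belief_B": 0.1, "evidence_1": 1.0}, links)
# belief_A settles strongly positive (accepted), belief_B negative (rejected)
```

The "fantastically complex web" comes from scaling this to thousands of interlocking constraints over beliefs, attitudes, and emotional tags.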

Frankly, it gives me a headache thinking about the immensity of it.

Should we wish to have a computer that works or thinks like a human it would probably have to learn like a human -- have set up predispositions that can be altered over time by learning through interacting with the environment. I don't see why we could not include, with the proper understanding of how they work, both emotional and motivational systems within a robot that could end up doing things much like a human by learning semantic content as it interacts with the environment.

Yes, definitely. The predispositions fit in well with the constraint programming piece. But even more complex would be the reconfiguration of the very parameters that set the bounds of those other variables via interactions with the environment. The emotional, autobiographical/me stance, and what you refer to as motivational pretty much squash the possibility of "dumb" algorithms of the iterative paramecium photoreceptor type.

That's possible, but the 'recreation' of neural function would probably occur at a higher level of programming, though this would ultimately result from 0,1, move left, move right.

Neural network modeling already does this to some degree, though I'm not sure about the binary code bit.

I'm not willing to guess either way, but theoretically it should be realizable on a general Turing machine. What I'm interested in is exploring the possibility. I don't see why anyone would want to argue that it is impossible. How are we supposed to know until we try? We've barely scratched the surface so far.

Agreed! Exciting stuff!

This is a difficult issue, but I think we are smart enough to figure it out. The quickest way to do it, though, would probably be to recreate cortical columns, as is already being done, along with subcortical structures -- and then let the system learn. We aren't smart enough to know how to program the system from scratch, I don't think.

Would you mind providing a reference or two for this cortical column material? I'm genuinely interested but don't know much about what you are referring to.
 
Piggy, do you have a citation for that localization? I'd like to see it, cos I seriously don't experience it myself.

So... you're telling me that you don't have a sense of your own center of awareness that's above your feet but not, say, ten meters above your head?
 
As of yet, I don't know of any conception of how pure IP gets you that.


I don't either, but I want to see if we can get there potentially. I have a 'sense' that it might simply depend on how we define IP. I think most people think of IP as purely cognitive, since that is how we use it. We don't particularly want computers to have emotional and motivational systems apart from our control, so what they do is purely 'cognitive' and not emotional or motivational.

But what neurons do is IP, as far as I can tell -- I have no reason to think they do something different. Essentially, they sum different inputs to reach or not reach threshold, and release or not release neurotransmitter.
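
That summation-to-threshold picture is basically the old McCulloch-Pitts unit. A minimal sketch (weights and threshold are illustrative numbers, not biological measurements):

```python
# A McCulloch-Pitts-style unit: sum the weighted inputs, compare to a
# threshold, and "release or not release" (fire or not). Weights and
# threshold are invented for illustration.
def fires(inputs, weights, threshold):
    """Return True if the summed, weighted input reaches threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Two excitatory synapses (+0.6 each) and one inhibitory synapse (-1.0):
print(fires([1, 1, 0], weights=[0.6, 0.6, -1.0], threshold=1.0))  # True
print(fires([1, 1, 1], weights=[0.6, 0.6, -1.0], threshold=1.0))  # False
```

Whether everything we call feeling and motivation can be built out of units like this is exactly the open question.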

If we could pin down what we mean by 'feeling', 'emotion', 'motivation' and understand how neurons do that, then I think we would move far along into explicating the whole process.

More below.
 
Piggy said:
Piggy, do you have a citation for that localization? I'd like to see it, cos I seriously don't experience it myself.

So... you're telling me that you don't have a sense of your own center of awareness that's above your feet but not, say, ten meters above your head?
We apparently do have p-zombies here.

I certainly have the "sense of self" located as you previously mentioned.
 
I don't either, but I want to see if we can get there potentially. I have a 'sense' that it might simply depend on how we define IP. I think most people think of IP as purely cognitive, since that is how we use it. We don't particularly want computers to have emotional and motivational systems apart from our control, so what they do is purely 'cognitive' and not emotional or motivational.

But what neurons do is IP, as far as I can tell -- I have no reason to think they do something different. Essentially, they sum different inputs to reach or not reach threshold, and release or not release neurotransmitter.

If we could pin down what we mean by 'feeling', 'emotion', 'motivation' and understand how neurons do that, then I think we would move far along into explicating the whole process.

More below.

It may be that neurons don't do that.

But I agree with you, we have to wait and see, and do a lot more work.

One of the big problems with imagining that IP alone is going to provide the whole solution is that, at the IP level, there doesn't seem as yet to be any way to actually model what the body does when it does consciousness.

(I wish we had a simple verb for it. That's a conceptual hurdle right there, having to use adjectives like conscious -- which make it seem like a subjective state -- or nouns like consciousness which mask the fact that it's a bodily function.)

When we look just at the IP, we're looking at inputs, outputs, and transformations, and when we do that, conscious and non-conscious processes become indistinguishable, which doesn't actually match the physical phenomena.
 
So... you're telling me that you don't have a sense of your own center of awareness that's above your feet but not, say, ten meters above your head?

Yes. What was unclear about what I said? Not only do I have no such sense, but I don't recall one from the introspection literature.

I take it from your response that you have nothing other than your personal experience of a "distinct physical sensation" in your sinuses?
 
OK, issue 1.

I didn't really respond fully to Kelly, but I think that fish probably are aware in the sense that I tried to spell out. They might even have some sense of consciousness. I don't completely agree with Damasio's portrayal of what consciousness is (even though I agree with almost all of his points), so I guess I'd better explain.

First -- I brought this up to explore the whole idea of awareness, in part because I really do side with the folks who argue that one of the definitions of consciousness that we use is awareness of awareness.

The examination of awareness -- and I am not convinced that I haven't left something important out -- was actually of something that I think is unconscious, or subconscious. So, yes, Skeptigirl, I do think that we can be aware of things of which we are not conscious. In fact, I think the lowest level of -- or simple -- awareness is a subconscious process. My favorite example is of driving on the freeway while your mind is otherwise engaged -- you continue to attend to the road and the processes necessary to keep you on the road, but you do it automatically. Most of perception fits this mold -- it is done automatically. But for percepts to have any meaning, they must be linked in some way to previous memories, both declarative and emotional, probably with some bits of motivation thrown in.

So, in my view, the four typical aspects of awareness -- attention, intentionality, perception, and understanding -- are subconscious processes. But they are of the type that can become conscious.

So, what is consciousness? I think, simply speaking, that one part of it is just awareness of awareness. Largely this is an attentional process -- attention to the processes that are carried out subconsciously -- but also including the other components, such as understanding, within a 'larger theater'. This is where the 40 Hz potentials and the global workspace hypothesis come in: the workspace probably represents reverberant loops involving parietal (directed attention), frontal (working memory, among many other things, including attentional states), hippocampal (recreations of declarative memory), and limbic systems, so that 'we' act as aware of our awareness and can therefore alter behavior -- which is the whole point of consciousness anyway.
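
The global workspace idea can be caricatured in a few lines: specialist processes run subconsciously, the most salient content wins a competition, and the winner is broadcast back to everything. The process names and salience numbers below are invented for illustration; this is a cartoon of the hypothesis, not anyone's actual model:

```python
# A toy global-workspace cycle: subconscious specialists compete on
# salience; the winning content is broadcast to all of them. Names and
# values are invented for illustration.
def workspace_cycle(processes):
    """processes: {name: (content, salience)}. Returns what each
    specialist sees after the broadcast."""
    winner = max(processes, key=lambda name: processes[name][1])
    broadcast = processes[winner][0]
    # every specialist now has access to the winning content
    return {name: broadcast for name in processes}

processes = {
    "parietal_attention": ("looming object on the left", 0.9),
    "hippocampal_memory": ("this road looks familiar", 0.4),
    "limbic_valence":     ("mild unease", 0.6),
}
print(workspace_cycle(processes)["hippocampal_memory"])
# -> looming object on the left
```

The freeway-driving example fits: attention to the road stays subconscious until something salient enough wins the broadcast and 'pops up' into awareness.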

Two of the big issues become -- what exactly is 'feeling', 'emotion', 'motivation'? And does this adequately explain semantic content?
 
It may be that neurons don't do that.

But I agree with you, we have to wait and see, and do a lot more work.

One of the big problems with imagining that IP alone is going to provide the whole solution is that, at the IP level, there doesn't seem as yet to be any way to actually model what the body does when it does consciousness.

(I wish we had a simple verb for it. That's a conceptual hurdle right there, having to use adjectives like conscious -- which make it seem like a subjective state -- or nouns like consciousness which mask the fact that it's a bodily function.)

When we look just at the IP, we're looking at inputs, outputs, and transformations, and when we do that, conscious and non-conscious processes become indistinguishable, which doesn't actually match the physical phenomena.


First, there is nothing in global workspace that is non-neuronal.

Second, I think the big issue that many feel is left out of the picture when discussing silicon doing this is that we don't have the best definitions of what emotions and feelings are. Damasio and some of his followers do have the beginnings of a perspective -- again, more below -- and I'm sorry that I've forgotten the name of the woman who has one of the better ways of looking at this whole issue, at least as far as I have found. I'll try to find her name again.

When we think of information processing only as computers now do it, we unnecessarily limit our view of what neurons and silicon can do. It just requires a slightly different perspective, I think.
 
Ever read Thagard (Hot Thought-- dense, but excellent book)? He cites the example of the neurocomputational model GAGE (named after our pal Phineas), which actually models decision-making through units representing cognitions distributed across different brain areas (e.g., PFC, amygdala, hippocampus, etc.). He also discusses HOTCO (for HOT COherence), which models emotional inferences made in thinking and reasoning about different things.

This would also involve a lot of different strains of constraint satisfaction programming, which would weave a fantastically complex web. Just for a taste, human motives (e.g., to seek cognitive consistency, as in cognitive dissonance reduction) impose constraints on beliefs about and attitudes toward stimuli and qualia. Not to mention the undergirding Damasio autobiographical memory piece charged with limbic and prefrontal cortical momentum, which in itself is in a near constant state of subtle if not sometimes dramatic flux based upon impinging stimuli, the bundling/chunking of which are viewed as "experiences" that shape our brains and behaviors, which enriches meta-awareness.

Frankly, it gives me a headache thinking about the immensity of it.



Yes, indeed, headache city. That is why I think we need to break it down into smaller bits.

I haven't read that book -- sounds very interesting. Unfortunately, I'm pretty ignorant about the computer and mathematics side of this, so I always defer to the people who do it for a living. I know more about the neurology and some of the psychology.


Yes, definitely. The predispositions fit in well with the constraint programming piece. But even more complex would be the reconfiguration of the very parameters that set the bounds of those other variables via interactions with the environment. The emotional, autobiographical/me stance, and what you refer to as motivational pretty much squash the possibility of "dumb" algorithms of the iterative paramecium photoreceptor type.

Yes, what our nervous system does is considerably more complex than what simple organisms, like paramecia, do. There is a sense in which they can be said to be aware, but they are not aware of being aware -- so not conscious. And I don't think that anyone would argue that they 'understand' what they are doing. They have too simple a system for that sort of interpretation; and the way they interact with their environment is too simple as well.



Would you mind providing a reference or two for this cortical column material? I'm genuinely interested but don't know much about what you are referring to.


Going to have to look that one up, so give me a few minutes, but there is a researcher trying to recreate the cortical columns of a rat cortex with silicon chips -- sounds like a terrific idea.

I'd forgotten the name of it. It's the Blue Brain Project.
 
We all do, when we're awake and sober and stop to pay attention to it.
I assure you, "we all" do not. I don't doubt you are sincere, but so were the TT practitioners that Emily Rosa tested, and it turns out they could not feel all the things they claimed to.

Recall that the brain does not have sensory neurons (brain surgery is done with a local scalp anesthetic, but there need be no anesthetic for cutting brain tissue itself), so unless your sense of self is indeed in your sinuses, you are making quite an extraordinary claim.

(I realize this is a derail--I would prefer this side issue be made a new thread if anyone wishes to explore it. Without that, it can drop.)
 
Yes. What was unclear about what I said? Not only do I have no such sense, but I don't recall one from the introspection literature.
I didn't find mine in the introspection literature.

I take it from your response that you have nothing other than your personal experience of a "distinct physical sensation" in your sinuses?
Nor is it a "distinct physical sensation" in my sinuses.

It is a personal experience though. Sorry "I" can't share what it "feels like".
 
I didn't find mine in the introspection literature.
I doubt many did, but they would certainly have been looking for it.
Nor is it a "distinct physical sensation" in my sinuses.

It is a personal experience though. Sorry "I" can't share what it "feels like".

Thank you.

Um... since you can't share what it feels like, how do you know that it is the same thing Piggy was talking about? Do you know what it feels like to him?
 
Yes. What was unclear about what I said? Not only do I have no such sense, but I don't recall one from the introspection literature.

I take it from your response that you have nothing other than your personal experience of a "distinct physical sensation" in your sinuses?

What am I gonna say to this?

It's like if we're discussing walking, and someone mentions the physics of the foot pushing off from the ground, and someone chimes in with "My feet don't touch the ground -- they hover a couple inches above it".

What can you do but say "Well, lucky you" and move on?
 
As to emotion and feeling...

Magda Arnold argued -- and this is all an extension of William James, through Damasio and Schachter -- that emotion is the product of an unconscious evaluation of a situation as potentially harmful or beneficial (I think Searle has argued that emotions are just agitated states of desire, which isn't very descriptive). She holds that 'feelings' are behavioral tendencies -- conscious reflections of an unconscious appraisal.

I think I might stretch this a bit in a slightly different direction because I think that some of the things we might label 'feelings', at least if they are 'behavioral tendencies', are unconscious.

But let me back up a bit. Our nervous system consists of two basic 'arms' -- sensory and motor. When we speak of things like emotion and feeling we have a tendency to view them as parts of perception, I think, in part because we also use the word 'feeling' to refer to somatosensation. I think this view is wrong; I have recently begun to view emotion and 'feeling' as part of our motor response to perception.

It might be wrong to use the word 'feeling' in so many different ways, but here is the way I try to piece some of this together. I'm going to leave attention and intentionality to one side.

When we perceive something we do so under some aspect of ongoing mood, ongoing pleasure and pain, ongoing motivation. But all of those percepts also bring forth earlier memories which also include mood/emotion/feeling, pleasure and pain, motivational state.

All that other 'stuff' is 'tagged' to the percept and gives it a 'feeling' and a 'value' -- which is a large part of what we seem to mean by 'meaning'. But what is this 'feeling'? I think it is actually not a perception per se but a behavioral tendency (not a motor action, but only a tendency toward some action) attached to the perception. 'I like blue' means something along the lines of 'I want to experience that again.' That is essentially what pleasure is all about.

If you think about it, the point behind all our perceptions and all our feelings is to figure out how to navigate a complex environment. We are able to do what we do, I think in large part, because we do not have set programs for action -- like an insect -- but only slight behavioral pushes in particular directions. If this is correct -- that we are set up to experience 'behavioral pushes' -- then those behavioral pushes have to show up somehow. For them to affect our behavior they actually have to push us to some degree in a particular direction. So, they do. The way they 'do it' is what we call 'emotion' and 'feeling'. I happen to think 'feeling' is just a less dramatic push than the things we call 'emotion'. I think both originally occur subconsciously, but both are eventually available to conscious reflection.

So, what is conscious reflection? -- those same processes at a meta-level; awareness of awareness. When we become aware of awareness it is easier to see 'feeling' as part of the conscious reflection on subconscious processes -- it is a way for 'us' to sift through the various behavioral pushes in a complex situation to decide on a course of action (and, yes, I am aware that I have fallen back into dualistic language). We don't, after all, decide -- oops, time to become conscious again in order to decide on some course of action -- but rather consciousness seems to 'pop up' when unconscious processing 'deems it necessary'.

If we were to view 'feeling' as unconscious behavioral pushes in a particular direction I think it might be easier to see a way that we could program a computer to perform that sort of task (Data's emotion chip). That computer would become unpredictable, and it would probably promptly argue that it had free will; but I don't see why any of that is not computable, not IP.
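
To make the 'behavioral push' idea concrete, here is a minimal sketch: percepts retrieve valence tags from memory, and the summed valence biases (but does not determine) action selection. All the tags, values, and actions are invented for illustration; nothing here is a claim about how brains actually store valence:

```python
# 'Feelings as behavioral pushes': percepts carry learned valence tags,
# and the summed push shifts action probabilities without dictating the
# action. Tags and values are invented for illustration.
import random

memory_valence = {        # percept -> push strength ('tag' from memory)
    "blue": +0.4,         # "I want to experience that again"
    "loud_noise": -0.8,   # a push away
}

def action_bias(percepts):
    """Sum the behavioral pushes attached to the current percepts."""
    return sum(memory_valence.get(p, 0.0) for p in percepts)

def choose(percepts):
    push = action_bias(percepts)
    # A push, not a program: it shifts the odds of approach vs. avoid
    # rather than fixing the action the way an insect's routine would.
    p_approach = min(1.0, max(0.0, 0.5 + push))
    return "approach" if random.random() < p_approach else "avoid"

print(round(action_bias(["blue", "loud_noise"]), 2))  # -> -0.4
```

The resulting behavior is biased but not predictable from the percepts alone, which is roughly the unpredictability point about Data's emotion chip.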

At this point someone will object -- but sometimes I just want to feel; it isn't a behavioral push since I don't want to do anything but experience. Which misses the point entirely -- that is, in itself, a behavioral response, if you think about it. But the real objection to that is -- you are describing a meta-reflection on emotion and feeling that is bounded by your desire not to act but to feel. But the feeling itself -- the original unconscious processing is a behavioral tendency as far as I can tell.

ETA:

Sorry, left out motivational states, which I think are analogous to emotion -- strong behavioral pushes -- but from internal rather than external concerns. Motivational states -- the ones we refer to most often -- seem to concern things like hunger, thirst, sex, internal desires (or person to world fit; and this includes more complex constructs) while emotions seem to depend on external information that is perceived and reacted to.

As to the spectrum of the intensity -- if this helps to see emotion, motivation, and feeling as behavioral pushes -- think of the most intense 'emotion' we experience: the fight-or-flight reaction. Fight and flight are clear motor behaviors, the emotion being a very strong push toward those motor behaviors (and there is obviously cognitive processing involved in the process).
 