
My take on why indeed the study of consciousness may not be as simple

In my world, "useful" generally means that there is a pragmatic consequence -- in this case, that we may cut through the verbiage and equivocation over definitions and move the discussion forward instead of leaving it on the merry-go-round where it currently sits.

Well, I don't know what to say. From my perspective, I've been watching the denizens of the JREF go round and round this merry-go-round for the best part of a decade, and the only reason they can't go forward is that they point-blank refuse to walk out of the only available exit. You see, that exit isn't where they want to be going. They want some other exit, which doesn't exist, but which they keep insisting might exist, if only they could find it.

How long are you going to stay on the merry-go-round before you stand up and walk out of the exit marked "EXIT"?
 
Okay, so we're considering the sequence of an initial portion of Run2 followed by one more instruction in each probe, but nothing following that. If that is sufficient to produce consciousness, great. But not every such sequence is sufficient. In particular, consider the probe with the first instruction of the simulation. I doubt that produces consciousness. This is why I thought we were considering the entire array of probes.

So I think I still don't understand.

~~ Paul

Yes, I would definitely say that not every such subsequence is sufficient. I mean, we define consciousness according to a specific set of behaviors, and until a subsequence exhibits those behaviors, the total sequence won't be conscious yet. No big deal.

But the reason we were considering the entire array of probes is that Robin was seeking to invalidate the idea by making the logical conclusion as absurd as possible. So Robin was saying: hey, here is an algorithm, and if we space out each step of the algorithm across the universe, it is the same algorithm, so would the consciousness be distributed across the universe?

And you bit the hook, and I am explaining why it was wrong for you to do so. There is no reason -- period -- to consider the instruction that one probe executes in conjunction with any other instruction that any other probe executes. They are literally physically independent. This is trivial to see -- if probe n fails, for whatever reason, probe n+1 is not affected in any way.

And this completely violates the definition of consciousness that we are using. Any self-referential aspect of the system must necessarily reference the same system, otherwise it isn't self-referential! And there is no way any information in Run3 can refer to Run3.
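For concreteness, here is a minimal sketch in Python of the set-up described above (the names and the toy three-step program are hypothetical, not anything from the thread): each probe replays an initial portion of the program from the same initial state and then executes one further instruction, and since nothing is passed between probes, one probe failing or being skipped has no effect on any other.

```python
# Toy sketch of the "one extra instruction per probe" scenario (hypothetical
# names). Each probe recomputes steps 0..n from the initial state on its own;
# no probe reads another probe's output, so they are causally independent.

from typing import Callable, Dict, List

State = Dict[str, int]
Instruction = Callable[[State], State]

# A stand-in for the simulated program: a short list of instructions.
program: List[Instruction] = [
    lambda s: {**s, "x": s["x"] + 1},  # step 0
    lambda s: {**s, "x": s["x"] * 2},  # step 1
    lambda s: {**s, "x": s["x"] - 3},  # step 2
]

def run_probe(n: int, initial: State) -> State:
    """Probe n: replay steps 0..n-1, then execute step n. Fully self-contained."""
    state = dict(initial)
    for step in program[: n + 1]:
        state = step(state)
    return state

initial: State = {"x": 1}

# Skipping (or "failing") probe 1 changes nothing about probe 2's result,
# because probe 2 never depends on probe 1 having run at all.
print(run_probe(0, initial))  # {'x': 2}
print(run_probe(2, initial))  # {'x': 1}
```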
 
Well, I don't know what to say. From my perspective, I've been watching the denizens of the JREF go round and round this merry-go-round for the best part of a decade, and the only reason they can't go forward is that they point-blank refuse to walk out of the only available exit. You see, that exit isn't where they want to be going. They want some other exit, which doesn't exist, but which they keep insisting might exist, if only they could find it.

How long are you going to stay on the merry-go-round before you stand up and walk out of the exit marked "EXIT"?

This is untrue.

The individuals who agree that computation is the likely source learn things from each other all the time. We are most definitely not on a merry-go-round with each other. In fact, I have learned more from this forum than any other source.

The merry-go-round occurs between such individuals and the rest of the woo crowd -- like yourself. But I don't see why you find that surprising, given that neither you nor anyone else has offered a single good reason, in the entire history of the debate, for why the computational model is insufficient.

The only reasons that have been given are:
1) "It just doesn't explain X ... because it just doesn't!"
2) "Researchers have had 50 years and they still haven't made a robot that can think like a human."
3) "Humans can do things that cannot be computed ... no, there is no actual evidence, but c'mon, they can!"

Wow -- I am amazed, with arguments of that caliber, that there aren't more people dropping the computational model every day.
 
This is untrue.

The individuals who agree that computation is the likely source learn things from each other all the time. We are most definitely not on a merry-go-round with each other. In fact, I have learned more from this forum than any other source.

You may learn certain things about the way minds work by thinking of them in terms of computation. What you have not learned, and cannot learn, is anything about what minds actually are.

The merry-go-round occurs between such individuals and the rest of the woo crowd -- like yourself. But I don't see why you find that surprising, given that neither you nor anyone else has offered a single good reason, in the entire history of the debate, for why the computational model is insufficient.

I've offered loads of reasons. I've explained it over and over again, in many different ways.

The only reasons that have been given are:
1) "It just doesn't explain X ... because it just doesn't!"
2) "Researchers have had 50 years and they still haven't made a robot that can think like a human."
3) "Humans can do things that cannot be computed ... no, there is no actual evidence, but c'mon, they can!"

Not true. I have provided in-depth, coherent reasons why consciousness cannot be treated as a normal physical phenomenon. It is quite revealing that whenever I have asked the materialists here to repeat my own arguments back to me, what they provide as an answer bears little or no resemblance to anything I actually said. Sure, I said that science has made no progress on the Hard Problem in 400 years, but that isn't the reason I've given as to why it will never make any progress. That really would be an argument from ignorance.

Consciousness/awareness is different to any normal physical phenomenon because it defies our normal understanding of what the word "physical" or "material" means. Normally, when we are trying to reduce some physical phenomenon to some other physical phenomenon, or to explain a physical phenomenon in terms of some other physical phenomenon, all of the phenomena in question are component parts of our experience of reality. We are explaining component parts in terms of other component parts. In the case of consciousness we have to acknowledge that there are in fact two different concepts of material/physical in play (directly experienced vs. external/noumenal) and that we are trying to explain one of these concepts in terms of the other.

Surely you have to accept, at the very least, that this problem is fundamentally different to any other tackled by science?

Where else in science do scientists face questions about what concept of "material" they are referring to? Answer: quantum mechanics and nowhere else.
 
Yes, and that is a very good discussion of the self and the abstraction that we need to realize when we discuss self-awareness.

I'm a little more interested in the 'awareness' side of things, though. What exactly do we mean by awareness, or understanding for that matter?


Oh, the primordial soup: the deep stuff. :gasp:

First off, I'm not sure "awareness", the state of being aware, is much different from "consciousness", the state of being conscious; the way the words are typically used, awareness is more refined consciousness (assuming a phrase like "she was barely conscious, not really aware of anything" is fairly descriptive), so that we can attend to some content of consciousness long enough to process it: compare, identify, associate, expand, repair, enjoy, etc.

Maybe start with cases. When learning a skill like driving, initially my awareness of the skill is as a sequence of steps. Step 1 -- start the car -- is learned as a sequence of explicit, either verbal or diagrammed, steps: locate ignition key, determine 'right-edge up', slowly insert key 'right-edge-up' into ignition, turn key clockwise until starter engages, allow key to return to insert position, repeat if engine stalls, etc. We are aware of each step separately at first, then less and less so as the skill is acquired and the steps are integrated into one larger step, starting the car, within the skill of driving.

It seems then that awareness has different intensities: the awareness of one step diminishing until it effectively disappears within a larger step. Which relates to "understanding", the second undefined term. From the example, I think it's fair to say that awareness decreases as understanding increases, until one understands the process so well that she performs all the steps within it without being aware of them individually.

Is this true of other activities to which the word 'understanding' applies? When one learns to do multiplication, for example, does one's awareness of the steps decrease as one's understanding of the process increases? That's a huge question, and could easily fill a book, I think (which I don't have the expertise to write). Off the top of my head, it seems here "understanding" has to do with making the process mechanical. At first, the student understands multiplication as: (xxx)+(xxx)+(xxx)+(xxx) -- which she is told to write "3 x 4" = 12. With memorization of the multiplication tables, that visualization step disappears: 8 x 9 = 72; there's no sense diagramming it as (xxxxxxxx)+(xxxxxxxx)+... rpt 6 ...+(xxxxxxxx), to make the student aware of the complete 'logical' translation for that step, as that step is already understood within the mechanics of the multiplication table. (Ideally, that is. It occurs to me here that for many students multiplication, or rather math in general, may be understood strictly as mechanics to be learned by rote and regurgitated by ritual, rather than as descriptions of ideally ordered and described systems. In which case, lack of awareness, of the logical level, equates to lack of understanding. I think where the student is made aware of a diagram for the process before equivalent symbols, "understanding" may be said to correspond to the student's ability to reproduce those original diagrams if necessary, not to be constantly aware of them while manipulating the symbols. But as I said, we're on the border of book-filling territory).
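As a toy illustration of the two levels described above (a sketch in Python; the function and table names are made up for this example): multiplication done explicitly as repeated addition, where every step is still "in awareness", versus a memorized times table, where the same answer comes back in a single lookup and the intermediate steps have been integrated away.

```python
# Multiplication at the "learner" level: explicit repeated addition,
# the (xxx)+(xxx)+(xxx)+(xxx) picture written out step by step.
def multiply_by_addition(a: int, b: int) -> int:
    total = 0
    for _ in range(b):  # b groups of a
        total += a
    return total

# Multiplication at the "memorized" level: a precomputed table, one lookup,
# with the repeated-addition reasoning no longer visible at the point of use.
TIMES_TABLE = {(a, b): multiply_by_addition(a, b)
               for a in range(13) for b in range(13)}

print(multiply_by_addition(3, 4))  # 12 -- four groups of three, step by step
print(TIMES_TABLE[(8, 9)])         # 72 -- recalled, not re-derived
```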

So before this starts to look like a book, I'll tentatively offer these definitions: "awareness" is the place of mental data prior to integration into largely unconscious process(es); "understanding" is a measure of how successfully one has integrated data for use by these larger unconscious processes (algorithms, on the computational model). A last brief example: it's interesting that when one has learned a skill, like a golf swing, and then wants to improve that skill, that one aims to divulge and dissect the original learning steps within the skill; i.e., make them conscious again, so as to reintegrate them more successfully, to "break bad habits", as the saying goes. Part of the role of consciousness seems to be as a sort of clearing-house, where bits that haven't been integrated yet, or need to be re-integrated, are held until some unconscious urge spots a need for one or some of them, ponies up the neurochemical dough to adopt it into memory, integrate into process, make it one of the family, give it a go, <insert your own apt metaphor>, etc.

There. Football's on... (my saints now aware that the falcons are in fact they who dat believe they understand how to beat them, or words to that effect). :scarper:
 
Well, I don't know what to say. From my perspective, I've been watching the denizens of the JREF go round and round this merry-go-round for the best part of a decade, and the only reason they can't go forward is that they point-blank refuse to walk out of the only available exit. You see, that exit isn't where they want to be going. They want some other exit, which doesn't exist, but which they keep insisting might exist, if only they could find it.

How long are you going to stay on the merry-go-round before you stand up and walk out of the exit marked "EXIT"?

With all due respect, UE, that's exactly what you are doing. Everyone here pretty much agrees that consciousness (while not being the easiest subject to explain) can be explained, and there are experiments and studies that can be traced back to verify this. So there is no "exit" to look for, because we have already found the answer we were looking for (and we are open to new sources of information that may lead to even better explanations, but they have to be falsifiable).

You are the one claiming that we're stuck and we need an "exit". And this wouldn't be much of a problem if the exit you're proposing weren't basically a refusal to accept the evidence that is out there in the field of neuroscience, and a claim that the ultimate conclusion is "consciousness just can't be explained". You're the one deciding to end any further attempt to investigate. To just sit down, cross your arms and declare the problem impossible to solve. You're the one closing your eyes to the only available exit: that of rational, unbiased, scientific study.
 
Oh, the primordial soup: the deep stuff. :gasp:

First off, I'm not sure "awareness", the state of being aware, is much different from "consciousness", the state of being conscious; the way the words are typically used, awareness is more refined consciousness (assuming a phrase like "she was barely conscious, not really aware of anything" is fairly descriptive), so that we can attend to some content of consciousness long enough to process it: compare, identify, associate, expand, repair, enjoy, etc.

Maybe start with cases. When learning a skill like driving, initially my awareness of the skill is as a sequence of steps. Step 1 -- start the car -- is learned as a sequence of explicit, either verbal or diagrammed, steps: locate ignition key, determine 'right-edge up', slowly insert key 'right-edge-up' into ignition, turn key clockwise until starter engages, allow key to return to insert position, repeat if engine stalls, etc. We are aware of each step separately at first, then less and less so as the skill is acquired and the steps are integrated into one larger step, starting the car, within the skill of driving.

It seems then that awareness has different intensities: the awareness of one step diminishing until it effectively disappears within a larger step. Which relates to "understanding", the second undefined term. From the example, I think it's fair to say that awareness decreases as understanding increases, until one understands the process so well that she performs all the steps within it without being aware of them individually.

Is this true of other activities to which the word 'understanding' applies? When one learns to do multiplication, for example, does one's awareness of the steps decrease as one's understanding of the process increases? That's a huge question, and could easily fill a book, I think (which I don't have the expertise to write). Off the top of my head, it seems here "understanding" has to do with making the process mechanical. At first, the student understands multiplication as: (xxx)+(xxx)+(xxx)+(xxx) -- which she is told to write "3 x 4" = 12. With memorization of the multiplication tables, that visualization step disappears: 8 x 9 = 72; there's no sense diagramming it as (xxxxxxxx)+(xxxxxxxx)+... rpt 6 ...+(xxxxxxxx), to make the student aware of the complete 'logical' translation for that step, as that step is already understood within the mechanics of the multiplication table. (Ideally, that is. It occurs to me here that for many students multiplication, or rather math in general, may be understood strictly as mechanics to be learned by rote and regurgitated by ritual, rather than as descriptions of ideally ordered and described systems. In which case, lack of awareness, of the logical level, equates to lack of understanding. I think where the student is made aware of a diagram for the process before equivalent symbols, "understanding" may be said to correspond to the student's ability to reproduce those original diagrams if necessary, not to be constantly aware of them while manipulating the symbols. But as I said, we're on the border of book-filling territory).

So before this starts to look like a book, I'll tentatively offer these definitions: "awareness" is the place of mental data prior to integration into largely unconscious process(es); "understanding" is a measure of how successfully one has integrated data for use by these larger unconscious processes (algorithms, on the computational model). A last brief example: it's interesting that when one has learned a skill, like a golf swing, and then wants to improve that skill, that one aims to divulge and dissect the original learning steps within the skill; i.e., make them conscious again, so as to reintegrate them more successfully, to "break bad habits", as the saying goes. Part of the role of consciousness seems to be as a sort of clearing-house, where bits that haven't been integrated yet, or need to be re-integrated, are held until some unconscious urge spots a need for one or some of them, ponies up the neurochemical dough to adopt it into memory, integrate into process, make it one of the family, give it a go, <insert your own apt metaphor>, etc.

There. Football's on... (my saints now aware that the falcons are in fact they who dat believe they understand how to beat them, or words to that effect). :scarper:


Yes, I basically agree, with a few provisos. Awareness and consciousness being essentially the same is my jumping-off point. There are aspects to the way we use consciousness -- certainly those bits that are considered the hard parts -- that completely map onto the way we use the word awareness. There are other bits, as you know, that do not -- like to be conscious means to be awake, etc.

I would tend to emphasize awareness in a slightly different way than you have in the learning model, but it all amounts to essentially the same 'thing'. We are aware of those processes that require attention to solve a particular problem. So, when learning we are aware of all the steps in the process. After learning we tend to act automatically without clear awareness of any of the steps -- for motor actions this is because the frontal lobes begin to play a smaller role and the cerebellum plays a larger role in carrying out already-learned motor actions.

A classic example that we have all experienced is driving. We might direct attention to some other facet of the world -- an emotional problem, a movie, etc. -- or we might just "zone out" but we continue to carry on the normal steps involved in driving despite not directing attention to the task. We are, however, able to bring the task of driving back into awareness if something happens -- rabbit runs across the road, we notice somehow that we missed the exit, etc. It is a change in inputs that directs attention to the task at hand. The same is true of sensory phenomena. I am not really aware of the feel of cotton against my skin after I have been wearing the same shirt or sweater for some time until I notice the tarantula crawling across my back -- it has a different feel. This is exactly one of the issues I was going to bring up in the other thread about awareness, so you've beat me to the punch.


And good luck to your Saints.
 
With all due respect, UE, that's exactly what you are doing. Everyone here pretty much agrees that consciousness (while not being the easiest subject to explain) can be explained, and there are experiments and studies that can be traced back to verify this.

The fact that everybody here pretty much agrees (by which you mean "mainstream skeptics of a Randi/Dawkins variety") doesn't count for very much. Obviously they do. Why do you think I am here rather than somewhere else? There's no reason for me to be making these arguments elsewhere, because elsewhere vast numbers of people pretty much don't agree, and the reason is that your average man-on-the-street is not a Randi/Dawkins skeptic. I am talking about a certain way of thinking -- a way of looking at the world. It is a way of looking at the world that I am intimately familiar with, because it is where I came from as an intellectual, thinking being. I know this map of reality and I know exactly why pretty much everybody round here agrees. The reason is that for them not to agree involves inventing an entirely new part to their map, and it is a new part that looks so suspiciously like some other maps of reality, in use by believers of woo-woo, that they aren't even willing to take the prospect seriously. I have the deepest sympathy with this viewpoint. I cannot stress this enough. I fully acknowledge that from within that scientific materialist map of reality, the sort of things I am saying are extremely difficult to accept, because it feels like you are being asked to throw away the map you know and replace it with one you feel perfectly justified in being suspicious of. What I am trying to do is to convince people that there are other maps of reality which can co-exist with the scientific map. They don't work the same, but that doesn't mean they aren't maps of reality, and it doesn't mean they have no validity at all.


So there is no "exit" to look for, because we have already found the answer we were looking for (and we are open to new sources of information that may lead to even better explanations, but they have to be falsifiable).

What answer have "we" found?

You are the one claiming that we're stuck and we need an "exit".

No I'm not. Icheumonwasp was the one who likened it to being on a merry-go-round. I just adapted his metaphor.

And this wouldn't be much of a problem if the exit you're proposing weren't basically a refusal to accept the evidence that is out there in the field of neuroscience, and a claim that the ultimate conclusion is "consciousness just can't be explained".

:(

I really don't understand why people keep saying this. How many times do I have to say that I am rejecting NO science before the message sinks in? The last thing I am interested in doing is ignoring scientific evidence about anything at all, including brains.

You're the one deciding to end any further attempt to investigate. To just sit down, cross your arms and declare the problem impossible to solve. You're the one closing your eyes to the only available exit: that of rational, unbiased, scientific study.

Let me try again.

What I am saying is that the questions people are asking about consciousness e.g. "what is it?" "where does it come from?" "how is it related to brains?" are not even scientifically-valid questions. All of the valid scientific questions are posed in terms that can be defined and understood in terms of physical entities. As soon as you start talking about "awareness" or "consciousness" or "minds" then you have introduced a new class of concept. This new class of concept differs from the ones that science deals with because it cannot be defined in the way that physical entities can be defined. It has to be defined subjectively and cannot be defined externally.

The whole context in which these sorts of questions make sense is a non-scientific context. It's the wrong "language-game." THAT is why your "only available exit" is not an exit. You are claiming that science can (and has, apparently) answer questions that can only be asked in a non-scientific context. You might just as well be trying to use science to answer questions about ethics, evil spirits or the meaning of life.
 
(I would assume we're talking about driving a manual--coincidentally I drive one)

When I drive, I nominally don't pay attention to each of the steps. However, I can pay attention to each of the steps, and as it happens, I on occasion do pay attention to each of the steps.

But if I were to define awareness in terms of whether or not something has been integrated into an unconscious process, I would think I would be forced to say that I cannot pay attention to the steps, which I don't think is correct.

So whereas I agree that this sort of thing happens, I object to the use of it in a definition.

Rather, I think it better to describe this scenario in terms of what we pay attention to. When I start driving, I have nothing to fall back on, so I'm forced to pay attention to the individual steps. After I develop the skill, I can pay attention to the higher level tasks, and delegate to the individual steps. So I don't have to pay attention to the individual steps.

But I'd also like to say that driving requires way too few controls!

I have another skill... I'm a gvim user. Gvim has a lot of commands, and I've learned quite a few (dating back to earlier days of vi). When I want to carry out a particular set of actions, I find that my fingers just move to the appropriate place (this works with the skill of "typing" as well--I'm hardly conscious of where my fingers move). But on some occasions I find it difficult, if not impossible, to walk someone else through these commands... quite often I have to pause, do the command slowly on an imaginary keyboard, then figure out which key I just typed.

So in this particular skill, I pay attention to the high level commands, and I can't map them to the individual steps... but that's not exactly true. Rather, I can map them to the individual steps in terms of where I move my fingers, but I can't map them to the steps at the level of which keys to press, which ironically is nevertheless what I used to learn the commands in the first place.

Given this, there may be a way to salvage the definition. Nevertheless, I think it may be better to focus on the raw information that you're able to attend to rather than on what you don't attend to because you have a skill.
 
(I would assume we're talking about driving a manual--coincidentally I drive one)

When I drive, I nominally don't pay attention to each of the steps. However, I can pay attention to each of the steps, and as it happens, I on occasion do pay attention to each of the steps.

But if I were to define awareness in terms of whether or not something has been integrated into an unconscious process, I would think I would be forced to say that I cannot pay attention to the steps, which I don't think is correct.

So whereas I agree that this sort of thing happens, I object to the use of it in a definition.

Rather, I think it better to describe this scenario in terms of what we pay attention to. When I start driving, I have nothing to fall back on, so I'm forced to pay attention to the individual steps. After I develop the skill, I can pay attention to the higher level tasks, and delegate to the individual steps. So I don't have to pay attention to the individual steps.

But I'd also like to say that driving requires way too few controls!

I have another skill... I'm a gvim user. Gvim has a lot of commands, and I've learned quite a few (dating back to earlier days of vi). When I want to carry out a particular set of actions, I find that my fingers just move to the appropriate place (this works with the skill of "typing" as well--I'm hardly conscious of where my fingers move). But on some occasions I find it difficult, if not impossible, to walk someone else through these commands... quite often I have to pause, do the command slowly on an imaginary keyboard, then figure out which key I just typed.

So in this particular skill, I pay attention to the high level commands, and I can't map them to the individual steps... but that's not exactly true. Rather, I can map them to the individual steps in terms of where I move my fingers, but I can't map them to the steps at the level of which keys to press, which ironically is nevertheless what I used to learn the commands in the first place.

Given this, there may be a way to salvage the definition. Nevertheless, I think it may be better to focus on the raw information that you're able to attend to rather than on what you don't attend to because you have a skill.


I'm not sure why it would follow that we could not attend to something that is being carried out unconsciously, but that's another issue.

Yes, one way of looking at it would be to concentrate on the raw information. That seems to leave us with an interesting issue -- should we discuss levels of awareness and treat those actions that we do not direct attention toward as something of which we are aware, but at a lower level, or should we speak of not truly being aware of them?

I have a hard time with this issue. I generally do not think we should speak of being aware of lower level unconscious (but potentially conscious) actions. I know others disagree -- John Searle being one of them.
 
So there is no "exit" to look for, because we have already found the answer we were looking for (and we are open to new sources of information that may lead to even better explanations, but they have to be falsifiable).

What answer have "we" found?

That what we call consciousness is nothing but a physical process which can be measured and predicted scientifically like any other organic process (and this can be traced and measured, so it's not just a claim made out of thin air).



Let me try again.

What I am saying is that the questions people are asking about consciousness e.g. "what is it?" "where does it come from?" "how is it related to brains?" are not even scientifically-valid questions. All of the valid scientific questions are posed in terms that can be defined and understood in terms of physical entities. As soon as you start talking about "awareness" or "consciousness" or "minds" then you have introduced a new class of concept.

Ok. Fine. I understand what you're saying. You are saying these are not scientifically valid questions. Then my question to you is: How can you claim these are not scientifically valid questions when neuroscience has already made major, significant advancements in deciphering the "apparent complexity" of perception, feelings of awareness, deja-vu hunches, schizophrenia and others, which are essential aspects of consciousness?



The whole context in which these sorts of questions make sense is a non-scientific context. It's the wrong "language-game." THAT is why your "only available exit" is not an exit. You are claiming that science can (and has, apparently) answer questions that can only be asked in a non-scientific context. You might just as well be trying to use science to answer questions about ethics, evil spirits or the meaning of life.

What are you talking about, UE? Of course science can address these things. Neuroscience has begun, for the first time, to provide us with a basic understanding of questions about ethics, spirits and the meaning of life, as these are human concepts and have their origin in human culture (and thus, the human mind). All these things have been addressed and studied, their origins traced into the brain. Do some research on the study of "mirror neurons" and you get a very deep insight into the concept of empathy. Do some research on the hard-wiring of some brains which makes people more prone to believe in religious ideals and you have an answer to "evil spirits" and other beliefs. Do some research on the role of the temporal lobes and how, in some patients, a cross-wiring or a disruption in certain wirings of the brain causes a feeling of awe at everything, and you have an answer to why some people feel like they see God in everything. The question I ask you (and I ask sincerely) is: Have you done the research on these matters? Because you seem to be ignoring this substantial research that has been done. Otherwise I can't see why you would write something like that.
 
This new class of concept differs from the ones that science deals with because it cannot be defined in the way that physical entities can be defined. It has to be defined subjectively and cannot be defined externally.


Now wait a second... just because something has an ontologically subjective existence does not mean that it cannot be defined or studied in a scientific context. Pain has an ontologically subjective existence but has been studied in some detail and has been defined in some detail (some definitions being better than others). Refining study techniques and definitions is precisely the means by which progress is made.
 
Yes, one way of looking at it would be to concentrate on the raw information. That seems to leave us with an interesting issue -- should we discuss levels of awareness and treat those actions that we do not direct attention toward as something of which we are aware, but at a lower level, or should we speak of not truly being aware of them?
That I haven't quite decided... in fact I'm not sure it's correct to say it's one way or the other... the word's simply not refined enough. So I wouldn't object to a refined definition one way or the other.

But there is a minimal requirement.

I could be looking at a bunch of dots and simply not see the dalmatian, even though there is a dalmatian in the picture. In that case, I'm not aware of the dalmatian. On the other hand, I could be driving and daydreaming about something, and slowly in a controlled fashion change lanes because I see someone else signaling and starting to pull ahead of me more slowly. I definitely was not aware of the dalmatian at any level in the former case--in the latter case, at some level I was aware of the car changing lanes.

In order to be aware of X, something inside has to actually not only see it (just working from vision as an example), but recognize that something is there.
I have a hard time with this issue. I generally do not think we should speak of being aware of lower level unconscious (but potentially conscious) actions. I know others disagree -- John Searle being one of them.
I'm ambivalent... I wasn't trying to suggest a particular definition, but rather a particular focus. I'm not sure it matters if I'm performing a more complex task or not with respect to whether or not I'm aware of something (if I see the dalmatian, I wouldn't necessarily do anything grander with it).

But I think it's quite useful to be able to describe what we do to the blanket when we grab it, tug on it, then roll into it, while we're asleep. Obviously something knows the blanket is there--and this is quite a complex action in itself to carry out without some sort of knowledge that this thing is a blanket and behaves in this fashion to accomplish the goal of staying warm.

ETA: So maybe that's the answer--go with the more inclusive "awareness", and then we could modify it to describe what we attend to, or what we're "consciously aware" of (or aware that we are aware of).

Definitely though, we have to be aware of something to attend to it (even if the thing we're "aware of" isn't really "out there", it's at least "in there").
 
Now wait a second... just because something has an ontologically subjective existence does not mean that it cannot be defined or studied in a scientific context. Pain has an ontologically subjective existence but has been studied in some detail and has been defined in some detail (some definitions being better than others). Refining study techniques and definitions is precisely the means by which progress is made.

Strictly speaking, science doesn't study pain. Science studies humans and humans talk about pain. It studies what those humans say about pain and it studies pictures of what is happening in their bodies, but it doesn't actually study their subjective experiences of pain.

Science has been very good at minimising pain. But in order to do so, it needed a means of getting information about pain. There is no scientific means of doing this; you just have to ask people. In this case, the verbal reports tend to be pretty reliable, so the overall results have been quite good. It can study consciousness in similar ways, and come to results which are as reliable as the anecdotal reports they depend upon. In this way it can provide information about the content of consciousness - about what is actually going on. It can do this without ever having to tackle the question "how does consciousness arise?" or "why is there a subjective experience of pain in the first place?"
 
Strictly speaking, science doesn't study pain. Science studies humans and humans talk about pain. It studies what those humans say about pain and it studies pictures of what is happening in their bodies, but it doesn't actually study their subjective experiences of pain.

Science has been very good at minimising pain. But in order to do so, it needed a means of getting information about pain. There is no scientific means of doing this; you just have to ask people. In this case, the verbal reports tend to be pretty reliable, so the overall results have been quite good. It can study consciousness in similar ways, and come to results which are as reliable as the anecdotal reports they depend upon. In this way it can provide information about the content of consciousness - about what is actually going on. It can do this without ever having to tackle the question "how does consciousness arise?" or "why is there a subjective experience of pain in the first place?"


Then strictly speaking science doesn't study anything. We don't see evolution. We see the effects of it. We don't see electrons, we see the effects of them. What science does is look at effects and postulate causes and then model cause-effect relationships. How is this any different with pain or with consciousness? We try to model what might account for consciousness in neural systems by studying it in humans. We can only say we got it right if we can model it in a way that works. Will we ever be sure? Well, no. But we aren't sure of anything that isn't true a priori.

I'm not precisely sure what you think it is that science does.
 
Then strictly speaking science doesn't study anything. We don't see evolution. We see the effects of it. We don't see electrons, we see the effects of them.

In that case, what do we see?

What science does is look at effects and postulate causes and then model cause-effect relationships. How is this any different with pain or with consciousness?

If you are talking about brains as causes and consciousness as an effect then what you are actually saying is shot through with dualism, and therefore non-scientific.

I'm not precisely sure what you think it is that science does.

Science creates and tests hypotheses about the behaviour and probable past and future of the observable physical universe. It does not create hypotheses about the relationship between observable and unobservable realities. "Observable" includes observation using scientific instruments and excludes anything which is in principle unobservable, such as Schroedinger's cat. It does not create hypotheses about these relationships precisely because they are in principle untestable. Every time somebody says "consciousness arises from brain activity" they are offering a hypothesis which is in principle untestable, and therefore non-scientific.
 
In that case, what do we see?


Um, I said so above. In most situations what we observe are effects, not direct relationships. The relationships are modeled after the fact.



If you are talking about brains as causes and consciousness as an effect then what you are actually saying is shot through with dualism, and therefore non-scientific.

If you take that chip off your shoulder you might be able to discuss things civilly with other people. Our language is shot through with dualism. That is the problem. That is why a discussion about definitions makes sense.



Science creates and tests hypotheses about the behaviour and probable past and future of the observable physical universe. It does not create hypotheses about the relationship between observable and unobservable realities. "Observable" includes observation using scientific instruments and excludes anything which is in principle unobservable, such as Schroedinger's cat. It does not create hypotheses about these relationships precisely because they are in principle untestable. Every time somebody says "consciousness arises from brain activity" they are offering a hypothesis which is in principle untestable, and therefore non-scientific.

What unobservable reality are we discussing here? Is consciousness somehow unobservable? Is pain unobservable?
 
AkuManiMani said:
The ironic part is that information processing, of any type, is not a physical property

It's a physical process.

Are all physical processes the same?

AkuManiMani said:
but an abstract functional property.

No, you're confusing the model with the process.

Oddly enough, that's exactly what you're doing when you claim that a simulation of photosynthesis is the same as photosynthesis.

AkuManiMani said:
The SRIP explanation of consciousness bypasses biology and physics altogether and tries to reduce it to a technical IT problem.

To a mathematical abstraction. Yes. Which is what we do when we are trying to understand things scientifically.

An abstraction of a thing is not the thing itself. A switch model of gravity is not gravity, and a switch model of photosynthesis is not photosynthesis. They're tools to help us understand systems but they are not physical recreations. Why is this even a point of contention with you? There is no possible way you're -that- naive.


Once we have our model, we can add physics and biology (also models, of course) back in as appropriate for the specific case, and see if and how the activity of the brain matches that model. It does, but while the self-referential model is successful, there's a whole lot more detail to take into account when discussing human consciousness.

The question is whether or not simply abstracting computational functions of the brain will recreate the experiences generated in an actual brain, be it a human, a dog, or a salamander. If consciousness and the quality of conscious experiences are dependent upon physics specific to brain chemistry [as I've already argued] then abstracting its information processing into a model will not be sufficient to recreate it.

All of it is physical processes, of course.

No one here is denying that instantiations of computational functions are physical. The point is that there are features of systems that are substrate dependent and cannot be physically reproduced simply by modeling them [via simulation, or otherwise]. A computer simulation of nuclear fission is obviously physical but its physics are very different from that of actual atomic fission; no atoms are actually split, and no radiation will be produced.
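A minimal sketch of that last point, using a toy radioactive-decay calculation as a stand-in for the fission example (Python, with made-up parameter values): the "simulation" below is nothing but arithmetic on floating-point state. It predicts how much material remains, but it splits no atoms and emits no radiation; its physics is that of a CPU updating numbers, not of nuclei.

```python
# A toy "simulation" of decaying material: pure arithmetic on numbers.
# The model describes the process; it does not physically instantiate it.
import math

def remaining_fraction(elapsed_s: float, half_life_s: float) -> float:
    """Fraction of the original material left after elapsed_s seconds."""
    return math.exp(-math.log(2) * elapsed_s / half_life_s)

# Hypothetical values for illustration only.
half_life_s = 3600.0  # one hour
for hours in (0, 1, 2, 3):
    frac = remaining_fraction(hours * 3600.0, half_life_s)
    print(f"after {hours} h: {frac:.3f} of the material remains")
```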


None of it involves anything beyond the established laws of physics or our existing knowledge of how biological systems can behave. Lots of detail, no qualia

What good is a model of consciousness that bends over backward to ignore it?

, no panentheism, no magic.

Wait, what? Where the hell did I argue or imply panentheism or magic [tho, I do distinctly recall you calling SRIPs magical]? If you're going to debate points I've actually made then fine, but I'm not going to waste time trying to slog thru your ideological hangups. Get a grip.
 
Yes, I basically agree, with a few provisos. Awareness and consciousness being essentially the same is my jumping-off point. There are aspects to the way we use consciousness -- certainly those bits that are considered the hard parts -- that completely map onto the way we use the word awareness. There are other bits, as you know, that do not -- like to be conscious means to be awake, etc.

Going over it again, I'm having a hard job separating them just now: the hard problem of "awareness"? -- (distinguishing it from "consciousness"). It is handy to have two words in any event, so we can refer to being aware of things within consciousness (which seems slightly more natural than saying we are conscious of things within awareness).

I would tend to emphasize awareness in a slightly different way than you have in the learning model, but it all amounts to essentially the same 'thing'. We are aware of those processes that require attention to solve a particular problem. So, when learning we are aware of all the steps in the process. After learning we tend to act automatically without clear awareness of any of the steps -- for motor actions this is because the frontal lobes begin to play a smaller role and the cerebellum plays a larger role in carrying out already-learned motor actions.

Yes, that's the diminution in awareness I was interested in, and why I chose the skill-acquisition examples (that, and the lead-in to a discussion of "understanding").

A classic example that we have all experienced is driving. We might direct attention to some other facet of the world -- an emotional problem, a movie, etc. -- or we might just "zone out" but we continue to carry on the normal steps involved in driving despite not directing attention to the task. We are, however, able to bring the task of driving back into awareness if something happens -- rabbit runs across the road, we notice somehow that we missed the exit, etc. It is a change in inputs that directs attention to the task at hand. The same is true of sensory phenomena. I am not really aware of the feel of cotton against my skin after I have been wearing the same shirt or sweater for some time until I notice the tarantula crawling across my back -- it has a different feel. This is exactly one of the issues I was going to bring up in the other thread about awareness, so you've beat me to the punch.

The classic example is the nose on your face. Someone says, "it's plain as the nose on your face," meaning obvious, can't be missed. Yet until your attention was drawn to it, you likely weren't aware of it. So in fact, you miss [being aware of] the nose on your face all the time (thankfully, as it would be quite distracting when driving, among other things).

And good luck to your Saints.

Thanks, they needed it. :)

(I would assume we're talking about driving a manual--coincidentally I drive one)

When I drive, I nominally don't pay attention to each of the steps. However, I can pay attention to each of the steps, and as it happens, I on occasion do pay attention to each of the steps.

But if I were to define awareness in terms of whether or not something has been integrated into an unconscious process, I would think I would be forced to say that I cannot pay attention to the steps, which I don't think is correct.

So whereas I agree that this sort of thing happens, I object to the use of it in a definition.

Good point. Integrating a small step into a larger unconscious process makes it easier to be unaware of it, but we can still be slightly aware, or fully aware, of it, even after we have acquired the skill and the supposed integration has occurred. Hmm...

Rather, I think it better to describe this scenario in terms of what we pay attention to. When I start driving, I have nothing to fall back on, so I'm forced to pay attention to the individual steps. After I develop the skill, I can pay attention to the higher level tasks, and delegate to the individual steps. So I don't have to pay attention to the individual steps.

Right. So awareness is necessary until the skill is acquired, optional after.

But I'd also like to say that driving requires way too few controls!

I have another skill... I'm a gvim user. Gvim has a lot of commands, and I've learned quite a few (dating back to earlier days of vi). When I want to carry out a particular set of actions, I find that my fingers just move to the appropriate place (this works with the skill of "typing" as well--I'm hardly conscious of where my fingers move). But on some occasions I find it difficult, if not impossible, to walk someone else through these commands... quite often I have to pause, do the command slowly on an imaginary keyboard, then figure out which key I just typed.

So in this particular skill, I pay attention to the high level commands, and I can't map them to the individual steps... but that's not exactly true. Rather, I can map them to the individual steps in terms of where I move my fingers, but I can't map them to the steps at the level of which keys to press, which ironically is nevertheless what I used to learn the commands in the first place.

Sounds like the skill has become so familiar in terms of finger movement that the memory of the original learning steps has been discarded (which makes sense -- why retain them?).

Given this, there may be a way to salvage the definition. Nevertheless, I think it may be better to focus on the raw information that you're able to attend to rather than on what you don't attend to because you have a skill.

Not sure about salvaging the definition for now; just a catalyst anyway. I think skill acquisition may be one useful avenue, though, as an illustration of diminished awareness.

I'm not sure why it would follow that we could not attend to something that is being carried out unconsciously, but that's another issue.

Yes, one way of looking at it would be to concentrate on the raw information. That seems to leave us with an interesting issue -- should we discuss levels of awareness and treat those actions that we do not direct attention toward as something of which we are aware, but at a lower level, or should we speak of not truly being aware of them?

I have a hard time with this issue. I generally do not think we should speak of being aware of lower level unconscious (but potentially conscious) actions. I know others disagree -- John Searle being one of them.

I've noticed there's a level of focus when I'm trying to describe something I'm looking at in words, a sort of "zoom-in", that's not there otherwise. Obviously, I can only describe one thing at a time, so whatever thing, event, or aspect thereof I'm trying to put into words should have my focus over other things in my experiential field, which I may be more or less aware of but unable to translate into words while I'm concentrated on this primary thing. I think it's similar with problem-solving as well: I hold "the problem" in awareness and turn it over and over looking for angles and aspects that perhaps suggest a technique from a similar, better-understood problem, recall a general theory that applies to this problem, a rote mechanism, etc.

So perhaps "focussed" awareness should be differentiated from more "peripheral" awareness, things I'm aware of but unable to describe while focussed on something else, and then perhaps from "latent" awareness (things which I'm not aware of monitoring at all, just below the threshold of attention, general body sense, for example... if that makes sense).

That I haven't quite decided... in fact I'm not sure it's correct to say it's one way or the other... the word's simply not refined enough. So I wouldn't object to a refined definition one way or the other.

But there is a minimal requirement.

I could be looking at a bunch of dots and simply not see the dalmatian, even though there is a dalmatian in the picture. In that case, I'm not aware of the dalmatian. On the other hand, I could be driving and daydreaming about something, and slowly in a controlled fashion change lanes because I see someone else signaling and starting to pull ahead of me more slowly. I definitely was not aware of the dalmatian at any level in the former case--in the latter case, at some level I was aware of the car changing lanes.

In order to be aware of X, something inside has to actually not only see it (just working from vision as an example), but recognize that something is there.

The not seeing the dalmatian for the dots... you're so focussed on the dots you're unaware of the dalmatian context, maybe. The absent-minded lane-changing... you're peripherally [latently?] aware of the context, so taking the cue to change lanes from the other car took very little attention. Perhaps the recognition "something is there", the context of driving, belongs to what I'm calling latent awareness, or at most peripheral awareness (or some other operational level).

I'm ambivalent... I wasn't trying to suggest a particular definition, but rather a particular focus. I'm not sure it matters if I'm performing a more complex task or not with respect to whether or not I'm aware of something (if I see the dalmatian, I wouldn't necessarily do anything grander with it).

But I think it's quite useful to be able to describe what we do to the blanket when we grab it, tug on it, then roll into it, while we're asleep. Obviously something knows the blanket is there--and this is quite a complex action in itself to carry out without some sort of knowledge that this thing is a blanket and behaves in this fashion to accomplish the goal of staying warm.

ETA: So maybe that's the answer--go with the more inclusive "awareness", and then we could modify it to describe what we attend to, or what we're "consciously aware" of (or aware that we are aware of).

Definitely though, we have to be aware of something to attend to it (even if the thing we're "aware of" isn't really "out there", it's at least "in there").

Sounds like a good approach, too. I favor attacking it from different angles and then looking for common ground among the analyses.
 
