Are You Conscious?

Are you conscious?

  • Of course, what a stupid question
    Votes: 89 (61.8%)
  • Maybe
    Votes: 40 (27.8%)
  • No
    Votes: 15 (10.4%)

  Total voters: 144
The statement "consciousness is not a thing you observe" is not saying that you don't observe consciousness, but that consciousness can't really be considered to be a "thing". The contents of consciousness include things you can observe, like computer screens.

I see. I disagree, though.

This is the nub of our disagreement. I can see no reason why we can't build a machine which is capable of accurately simulating all the computations carried out by a brain without being conscious. Where is the contradiction here? I have no reason to believe a car alarm or a calculator is conscious (has any sort of internal awareness). I have no reason to believe that a more powerful computer needs to have any sort of internal awareness either, even one capable of simulating the computations carried out by brains. Why do you think such a thing is impossible? If you wanted to convince me it was impossible, how would you go about it?

The properties of brains which enable them to carry out complex computations are properties to do with the complex structure of the brain itself. This complex structure is physical, and therefore could theoretically be modelled on a computer (unless Penrose is correct, in which case the brain is a sort of quantum computer which could not be modelled on a computer which didn't mimic those quantum mechanical properties, but we can ignore this possibility for the moment). You are telling me that if we simulate the complexity and information-processing capacity, it logically follows that we must also simulate consciousness. Why do you think this logically follows? I think you must be basing this opinion on some other premise you are introducing with which I do not agree, because I see no logical necessity here. It may well be logically necessary if materialism is true, but we can't start this discussion with that premise, because then you would be begging the question.

The contradiction I see is that the computer has all the cognitive functions of a human but is not conscious. Either there is a contradiction or cognitive functions are not enough for something to be conscious. When a computer behaves like a human I don't see any basis to label the same behaviour differently when they appear to be so similar. But note that I'm not saying we are simulating consciousness. I don't view it as a thing that can be simulated because I don't see our brains simulating it either. I'm observing this post I'm replying to and we both call that conscious behaviour. But that doesn't necessarily mean I have consciousness that must be simulated in order to replicate all the public and private behaviours involved in replying to you. I'm just saying that the same label 'conscious' must be used to describe both human and computer when they behave alike. Or neither.

We don't have to assume materialism is true.

Nothing. The difference is between a machine, like a car alarm, which responds zombie-like to external stimuli without being internally aware of anything at all, and something like a brain which carries out similar computations based on similar sense organs/devices, but which is actually internally aware that something is going on. It's the difference between mere response to stimulus and an internal awareness of the stimulus and the perception that the action taken was a free will choice (whether this is an illusion or not is another question; all I am saying is that we internally sense that we have made a free will decision, whereas the car alarm does not, and neither does the computer in my example).

I'm not sure what you mean by this because I don't understand the difference between 'being aware of a stimulus' and 'being internally aware of a stimulus'. But anyway, I see being aware of a stimulus as a response, and it's possible to have other responses without the 'being aware' response. We do things sometimes without knowing why we do them.

I agree that the car alarm does not sense that it's making a free will decision. It's not programmed to tackle the concept. But I posit that since the computer in your example is programmed to behave as a human, it must sense that it is making a 'free will decision', because if it doesn't then it isn't programmed to behave as a human.

Then I can't accept your definition of the word "observe". You are just talking about the capacity to respond to external stimuli and I can think of numerous examples of things which are capable of this but which most people do not believe are conscious.

We can't have the word "observe" meaning both what an unconscious machine does and what a conscious being does. We are talking about two completely different things. One is to do with sense equipment and information processing, the other is to do with subjective experience of the events which are occurring.

I don't see what the other type of 'observation' contributes to behaviour when the hypothetical computer in your example behaves like a human. That's why I don't see the need to use the word 'observe' differently when we talk about humans or computers.
 
I respectfully disagree. I believe the first half of my definition is something our bodies do without learning (distinguishing between "me" and "not me"), but recognizing that ability in others appears to be taught.

I doubt that. It appears to be a stage of maturation. But in any case, recognizing that others are conscious just like ourselves is irrelevant.
 
For those who like to posit anything special about human consciousness, I believe that it is best described as being able to recognize the ability to distinguish between "me" and "not me" in the widest possible range of entities, something my dog is apparently only able to do for a very limited number of creatures.

Who said there was anything special about human consciousness?

And the ability to think objectively about conscious experience is not the same as having conscious experience.
 
The contradiction I see is that the computer has all the cognitive functions of a human but is not conscious. Either there is a contradiction or cognitive functions are not enough for something to be conscious. When a computer behaves like a human I don't see any basis to label the same behaviour differently when they appear to be so similar. But note that I'm not saying we are simulating consciousness. I don't view it as a thing that can be simulated because I don't see our brains simulating it either. I'm observing this post I'm replying to and we both call that conscious behaviour. But that doesn't necessarily mean I have consciousness that must be simulated in order to replicate all the public and private behaviours involved in replying to you. I'm just saying that the same label 'conscious' must be used to describe both human and computer when they behave alike. Or neither.
Yes. :)
 
The contradiction I see is that the computer has all the cognitive functions of a human but is not conscious. Either there is a contradiction or cognitive functions are not enough for something to be conscious.

Well, in that case you've accepted that the contradiction only exists if you have already assumed that cognitive functions are enough for something to be conscious - which is the position you were trying to defend in the first place, wasn't it? "Cognitive functions" are exactly the sort of thing we would expect to be able to model on a computer, but there's no reason why I should believe that by simulating those computations we automatically recreate consciousness.

When a computer behaves like a human I don't see any basis to label the same behaviour differently when they appear to be so similar.

But we aren't labelling the behaviour differently. When I joined in this thread the first thing I did was challenge the claim that consciousness could be considered to be "behaviour" at all.

But note that I'm not saying we are simulating consciousness. I don't view it as a thing that can be simulated because I don't see our brains simulating it either. I'm observing this post I'm replying to and we both call that conscious behaviour. But that doesn't necessarily mean I have consciousness that must be simulated in order to replicate all the public and private behaviours involved in replying to you. I'm just saying that the same label 'conscious' must be used to describe both human and computer when they behave alike. Or neither.

Bolding mine. Why "must"? Again, this is a logical necessity which appears to be imposed because you've assumed materialism and not for any other reason. If so, you can't invoke it in a debate with an anti-materialist who explicitly rejects "materialism is true" as a premise. You'd have to find some other way to support the claim of logical necessity, and I don't think there are any.

We don't have to assume materialism is true.

Then I don't know where your "must" comes from.

I'm not sure what you mean by this because I don't understand the difference between 'being aware of a stimulus' and 'being internally aware of a stimulus'. But anyway, I see being aware of a stimulus as a response.

So how do you differentiate between a person who is 100% paralyzed but fully conscious and somebody who is in a deep coma and completely unconscious? Now you appear to be using the word "response" to mean two things, just as previously you seemed to be doing the same thing with the words "observe" and "behaviour".

I agree that the car alarm does not sense that it's making a free will decision. It's not programmed to tackle the concept.

The car alarm isn't aware of anything at all. It is a machine. It is no more conscious than a bucket of rocks.

Simply being able to respond to external stimuli is something which all sorts of machines can do. My car does it - I turn the steering wheel (input) and the driving wheels turn (output). According to your argument, this makes it capable of "observing" or "sensing" what I'm doing to the steering wheel and being aware that the driving wheels are turning even though it doesn't think it has free will. I just don't see any reason to believe that any of this is true. For me, the car is just a car. It doesn't have any elements of consciousness. The ability to respond to external stimulus, the ability to carry out computations and being subjectively aware of any of this are three different things. It's only materialism that needs to conflate them.

Happy Christmas, BTW, and I hope you enjoyed seeing the Pope wrestled to the ground by a "mentally disturbed" woman.
 

This is the essence of the objections to materialism. Materialism leads to a situation where information-processing and subjective experience have to be logically identical, yet conceptually there is no apparent logical connection. Just to be clear what this means, there is a logical connection between the properties "being a square" and "having four sides". It is an a priori, conceptual connection. In this case we can claim there is a logically necessary connection and we can explicitly specify why the logical connection exists. In the case of information processing and consciousness, you are claiming a similar logical connection exists but in this case it is far from clear why you think it exists, unless you have assumed materialism is true right at the start of your line of reasoning.

It's also unclear to me why you think there could be any sort of logical necessity here at all. To take another example, there is a connection between having wings and being able to fly, but it's not a logically necessary connection - there are winged things that can't fly and flying things which have no wings. This is a practical, real-world connection, not a conceptual, logical one. You appear to be claiming that consciousness and information-processing are as intimately linked as being a square and having four sides. Why? I just don't see what it is about these concepts that could lead somebody to believe there is a logically necessary link between them. The link only appears to people who want to defend materialism. I don't think it actually exists, although I am ready to consider any suggestions anyone can offer as to why it does.
 
This is the essence of the objections to materialism. Materialism leads to a situation where information-processing and subjective experience have to be logically identical, yet conceptually there is no apparent logical connection.
Conceptually, it is perfectly obvious that subjective experience is information processing. It's thinking about thinking. You can't get any more information processing-y.

In the case of information processing and consciousness, you are claiming a similar logical connection exists but in this case it is far from clear why you think it exists, unless you have assumed materialism is true right at the start of your line of reasoning.
Not at all. Awareness is information processing. Consciousness is self-awareness. Therefore...
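Just to make that concrete, here is a toy sketch in Python. It is purely illustrative (the class and method names are my own inventions, and I am obviously not claiming this loop experiences anything); it only shows that "processing information about one's own processing" is an ordinary, programmable notion:

    # Toy sketch: first-order processing of external input ("awareness")
    # plus second-order processing of a record of the system's own
    # processing ("self-awareness"). Nothing here is claimed to be
    # conscious; it just makes "thinking about thinking" concrete.

    class Agent:
        def __init__(self):
            self.log = []  # record of this agent's own processing steps

        def perceive(self, stimulus):
            """First-order processing: react to external input."""
            reaction = f"reacted to {stimulus!r}"
            self.log.append(reaction)
            return reaction

        def reflect(self):
            """Second-order processing: take the record of the agent's
            own processing as input."""
            summary = f"I have performed {len(self.log)} processing steps"
            self.log.append(summary)  # the reflection is itself logged
            return summary

    agent = Agent()
    agent.perceive("loud noise")
    agent.perceive("bright light")
    print(agent.reflect())  # I have performed 2 processing steps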

It's also unclear to me why you think there could be any sort of logical necessity here at all.
Many things are unclear to you, grasshopper.

You appear to be claiming that consciousness and information-processing are as intimately linked as being a square and having four sides. Why?
Because consciousness is information processing. It's that simple.

What is it you think there is to consciousness that is anything more than that?

I just don't see what it is about these concepts that could lead somebody to believe there is a logically-necessary link between them.
Define consciousness, then.
 
Just in case anybody reading this is unaware, I stopped reading Pixy's posts a long time ago as a result of him never bothering to actually try to understand any of my arguments (by his own admission, he sees no reason to make any effort to understand what I actually believe, because it is "obviously wrong"). This makes it pointless responding to, or even reading, his posts.
 
Then how do you recognize it in yourself? When did you begin to recognize it and what were the indicators that it was consciousness and not, say, some sort of hallucination?

Because consciousness is what has hallucinations. No consciousness - no hallucinations, no illusions, no erroneous perceptions.

If I'm having hallucinations, I'm having them via my consciousness. I can't hallucinate having consciousness, because if I wasn't conscious, I couldn't experience hallucinations.

I would make the argument that you do recognize it within yourself by way of your actions, actions in this context meaning the actions of accepting certain thoughts as "true" or "relevant" and others as "false" or "irrelevant", just the same way you can recognize it in others by way of their actions. By comparing what you do to what other people and certain animals do and what computers do not do, you can categorize some as fully conscious, others less so, and some not at all.

To me, putting it into its own category is basically defining it by what it isn't, which I personally don't find very useful. Mind you, I am not saying you are wrong, merely saying why I do not find your usage compelling.

Putting it into its own category is a necessary first step to understanding it. It isn't sufficient in itself.
 
I respectfully disagree. I believe the first half of my definition is something our bodies do without learning (distinguishing between "me" and "not me"), but recognizing that ability in others appears to be taught. I think this is why small children can be such amoral creatures until they are taught how to socialize. They have got the "me/not me" bit down, but they do not reliably recognize that in others, especially others who appear very different (such as another species, like the family dog or cat). Some people seem to never learn this. Like someone who cannot learn to play the piano, they may have some sort of consciousness tone-deafness going on.

I understand that by my definition, infants are not fully conscious, and I would argue that is true up until a certain age. In addition, my definition pretty much requires that while in utero, full consciousness is impossible. However, I do believe that the vast majority of infants are capable of learning how to be conscious, and therefore must be treated morally as there is no fixed age at which this kicks in. Like child prodigies in other fields, it would not surprise me to discover that there are those who are fully conscious from an extremely young age.

For those who like to posit anything special about human consciousness, I believe that it is best described as being able to recognize the ability to distinguish between "me" and "not me" in the widest possible range of entities, something my dog is apparently only able to do for a very limited number of creatures.

The ability to recognise one's own existence is not dependent on the existence of other conscious entities. There are two distinct questions involved. A small child is probably conscious of its own needs. It doesn't need to be aware that the person supplying those needs is also conscious.
 
Simply being able to respond to external stimuli is something which all sorts of machines can do.

In fact, it's a universal quality of matter that it is affected by external stimuli.

I've repeatedly asked what makes one sort of response indicative of consciousness, and another response meaningless, and I've yet to receive a satisfactory answer. I'm expected to just grok it.
 
Just in case anybody reading this is unaware, I stopped reading Pixy's posts a long time ago as a result of him never bothering to actually try to understand any of my arguments (by his own admission, he sees no reason to make any effort to understand what I actually believe, because it is "obviously wrong"). This makes it pointless responding to, or even reading, his posts.

Ditto. It's not possible to have a discussion with Pixy. He has views which he will put forward, and that's it. He certainly won't address any points put forward by anyone else. After a while it becomes redundant. I'm sure that he is conscious, but I can't actually prove it to myself.
 

Glad you liked it. :)

I've been wondering about conscious systems and self-referential information processing. Specifically, whether self-reference is necessary. Could there be 'self-reference zombies'? A zombie that behaves like a system capable of self-referential information processing but isn't. I've been thinking about something like a giant lookup table. Could that kind of machine behave as if it were conscious? Obviously there will be practical difficulties in building such a system, but I'm thinking it might be possible in principle.
 
Glad you liked it. :)

I've been wondering about conscious systems and self-referential information processing. Specifically, whether self-reference is necessary. Could there be 'self-reference zombies'? A zombie that behaves like a system capable of self-referential information processing but isn't. I've been thinking about something like a giant lookup table. Could that kind of machine behave as if it was conscious? Obviously there will be practical difficulties in building such a system, but I'm thinking it might be possible in principle.
The problem there (as I mentioned in one of these threads) is that you run into a combinatorial explosion very, very quickly, and before you know it your lookup table is bigger than the Universe.

There are of course ways to prune back the lookup table by performing more sophisticated processing instead of just mechanically looking up entries in the table. But if we do that, then we have a conscious machine.
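To make the explosion concrete, here's a minimal sketch in Python (the table entries, the vocabulary size and the history length are illustrative assumptions on my part, not a real design). A lookup-table "zombie" is just a dictionary from an entire conversation history to a canned reply, and the number of possible histories grows exponentially:

    # A lookup-table "zombie": no processing at all, just retrieval
    # keyed on the whole conversation so far. (Entries are illustrative.)
    lookup_table = {
        ("Are you conscious?",): "Of course, what a stupid question.",
        ("Are you conscious?", "Prove it."): "I can't, and neither can you.",
    }

    def zombie_reply(history):
        """Return the canned reply for this exact history, if we have one."""
        return lookup_table.get(tuple(history), "...")

    print(zombie_reply(["Are you conscious?"]))  # Of course, what a stupid question.

    # Why the table outgrows the Universe: with a modest 1,000-word
    # vocabulary there are 1000**50 distinct 50-word histories, i.e.
    # about 10**150 entries, versus roughly 10**80 atoms in the
    # observable Universe.
    print(f"{1000 ** 50:.3e} possible 50-word histories")

Any rule that collapses many histories into one case is already doing real processing rather than pure lookup, which is exactly the trade-off described above.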

I don't know if you've read Gödel, Escher, Bach, but if not, I highly recommend it. It's probably the best examination of these ideas ever written.
 

This is the essence of the objections to materialism. Materialism leads to a situation where information-processing and subjective experience have to be logically identical, yet conceptually there is no apparent logical connection. Just to be clear what this means, there is a logical connection between the properties "being a square" and "having four sides". It is an a priori, conceptual connection. In this case we can claim there is a logically necessary connection and we can explicitly specify why the logical connection exists. In the case of information processing and consciousness, you are claiming a similar logical connection exists but in this case it is far from clear why you think it exists, unless you have assumed materialism is true right at the start of your line of reasoning.

It's also unclear to me why you think there could be any sort of logical necessity here at all. To take another example, there is a connection between having wings and being able to fly, but it's not a logically necessary connection - there are winged things that can't fly and flying things which have no wings. This is a practical, real-world connection, not a conceptual, logical one. You appear to be claiming that consciousness and information-processing are as intimately linked as being a square and having four sides. Why? I just don't see what it is about these concepts that could lead somebody to believe there is a logically necessary link between them. The link only appears to people who want to defend materialism. I don't think it actually exists, although I am ready to consider any suggestions anyone can offer as to why it does.


Ah, my old friend and nemesis UE from Dawkins. I was poking around this forum and what should I spy but same old same old. Still stuck on this problem I see. It is a lot of fun. I don't think I ever really got to thank you for helping me to educate myself on Kant and Wittgenstein before we both left that forum. I learned a great deal from you there.

In any case, to pick up where we left off, I still see you confusing ontology with epistemology and insisting on incorrect or unprovable notions and assumptions concerning the supervenience of math/information/logical processing on mind vs matter. I continue to maintain that it is neither, but rather that both mind and matter are supervenient on math/information/logical processing. That form of neutral monism undercuts most if not all of your objections, though I realize many materialists don't accept this as a form of materialism.

Even if you don't buy my supervenience claims, and I know you didn't (have you ever rethought this? I was always amazed you didn't see how it could actually liberate your own nebulous 0/1 theory of monism), the fact that subjective experience may be unknowable and unprovable via materialistic means (or any other means) does not necessitate that consciousness arises from any cause other than a materialistic one. Nor does it necessitate or even suggest some other "realm" of conscious being any more than Gödel's Incompleteness Theorem suggests there must be another realm to discover the decidability of mathematical axioms.

The logical connection between consciousness and information processing (IP) you claim doesn't exist is causation. Consciousness is a form of IP. You're mixing apples and oranges by trying to equate a necessity for a priori definitions with empirically supported truth claims. A square is a logically defined concept. It is not strictly true to say that having four equal sides "caused" something to "become" a square. Consciousness, as you've already pointed out elsewhere, is not a thing or substance. It is a process that should be treated like a verb, like "digesting", rather than a noun. You are reifying it to try to make it behave like a square (object). That will only cause you to chase your tail in every argument. Don't believe me? OK, then forget consciousness for a second and explain to us the philosophical "essence" and understandability of digesting via pure a priori logic and conceptualization.

I saw you on Pixy's case for not making an effort to understand your arguments. As a newbie here, it isn't clear where you made them. Scanning back over many of your more recent posts I only see inklings of what I saw you attempt to do at Dawkins. But I know your main objections stem from your interpretations of Kant and Wittgenstein, particularly the conundrums you identified as resulting from arguments pertaining to Private Ostensive Definitions (PODs). I did a great deal of thinking on this after we both left. In fact, I may be co-authoring a paper on this subject with a well-known philosopher of mind. Here is something for you to ponder:

Where Wittgenstein (PODs) meets Information Theory is in the very definition of “information” itself, at least from a computational perspective, if not a broader one. Are qualia information? This is my key question. You can say a lot of stuff about qualia, but we all know they are ineffable by any language we can imagine. All data or information must be addressable in the form of a message. If you can’t put something into the form of a message, then it isn’t information. And since there can be no knowledge without information, it can yield no knowledge either.

Furthermore, without information there can be no computational processing. Qualia, therefore, can serve no purpose in computational processing, as input to any other feedback or feed-forward computation. Is there some non-informational process they can somehow be involved in? I can’t imagine one. If they can’t be information or process then what are they?
 
Well, in that case you've accepted that the contradiction only exists if you have already assumed that cognitive functions are enough for something to be conscious - which is the position you were trying to defend in the first place, wasn't it? "Cognitive functions" are exactly the sort of thing we would expect to be able to model on a computer, but there's no reason why I should believe that by simulating those computations we automatically recreate consciousness.

There is a contradiction if cognitive functions are enough to replicate all the behaviours we call conscious. I think they are enough to replicate them. And if I've interpreted your position correctly, you will agree with my second sentence but disagree with the first, because in your view replicating all the behaviours is not enough to declare a system conscious. Rephrasing my earlier post: The contradiction I see is that the computer has all the behaviours of a human but is not conscious. Either there is a contradiction or behaviours are not enough for something to be called conscious. Even if a computer replicates all the private and public behaviour, you don't consider it conscious, and there is no contradiction. Am I close?

If talking about consciousness in one instance (between two humans) is considered conscious behaviour, then the same conversation between computers is in my opinion just as conscious. What I mean is that you can't tell the difference between those two from their behaviour. Two black boxes discussing this topic might have humans or computers inside them. If, after listening to that conversation for a while, we label it conscious behaviour, I see no reason to change that after we open the boxes and see computers inside them.


But we aren't labelling the behaviour differently. When I joined in this thread the first thing I did was challenge the claim that consciousness could be considered to be "behaviour" at all.

What I mean when I talk about re-labeling behaviour is that we are re-labeling public behaviour. See above for the black box example. I agree that 'consciousness' can't be labeled as a behaviour. It is not an action.

Bolding mine. Why "must"? Again, this is a logical necessity which appears to be imposed because you've assumed materialism and not for any other reason. If so, you can't invoke it in a debate with an anti-materialist who explicitly rejects "materialism is true" as a premise. You'd have to find some other way to support the claim of logical necessity, and I don't think there are any.

Then I don't know where your "must" comes from.

I say "must" because I don't see how the same behaviour (e.g. talking about consciousness) can be labeled differently based on who or what is behaving.

So how do you differentiate between a person who is 100% paralyzed but fully conscious and somebody who is in a deep coma and completely unconscious? Now you appear to be using the word "response" to mean two things, just as previously you seemed to be doing the same thing with the words "observe" and "behaviour".

Same way as you would, presumably. If a 100% paralysed person behaves publicly exactly like an unconscious person in a coma, there is no way to tell without using some sort of brain imaging technology to see what sort of brain activity there is.

The car alarm isn't aware of anything at all. It is a machine. It is no more conscious than a bucket of rocks.

Simply being able to respond to external stimuli is something which all sorts of machines can do. My car does it - I turn the steering wheel (input) and the driving wheels turn (output). According to your argument, this makes it capable of "observing" or "sensing" what I'm doing to the steering wheel and being aware that the driving wheels are turning even though it doesn't think it has free will. I just don't see any reason to believe that any of this is true. For me, the car is just a car. It doesn't have any elements of consciousness. The ability to respond to external stimulus, the ability to carry out computations and being subjectively aware of any of this are three different things. It's only materialism that needs to conflate them.

If there are no sensors attached to the steering wheel or nothing to process the information that a sensor would provide then there are no observations. But even then it wouldn't think it had free will if it wasn't programmed to think that way.

Happy Christmas, BTW, and I hope you enjoyed seeing the Pope wrestled to the ground by a "mentally disturbed" woman.

Happy Christmas to you too. I guess that woman didn't get the memo about the Christmas ceasefire... Hostilities will resume after the New Year. :)
 
The problem there (as I mentioned in one of these threads) is that you run into a combinatorial explosion very, very quickly, and before you know it your lookup table is bigger than the Universe.

There are of course ways to prune back the lookup table by performing more sophisticated processing instead of just mechanically looking up entries in the table. But if we do that, then we have a conscious machine.

Hmm, I guess there might be problems trying to access data outside my lightcone...

I don't know if you've read Gödel, Escher, Bach, but if not, I highly recommend it. It's probably the best examination of these ideas ever written.

I haven't read that.
 

This is the essence of the objections to materialism. Materialism leads to a situation where information-processing and subjective experience have to be logically identical, yet conceptually there is no apparent logical connection. Just to be clear what this means, there is a logical connection between the properties "being a square" and "having four sides". It is an a priori, conceptual connection. In this case we can claim there is a logically necessary connection and we can explicitly specify why the logical connection exists. In the case of information processing and consciousness, you are claiming a similar logical connection exists but in this case it is far from clear why you think it exists, unless you have assumed materialism is true right at the start of your line of reasoning.

It's also unclear to me why you think there could be any sort of logical necessity here at all. To take another example, there is a connection between having wings and being able to fly, but it's not a logically necessary connection - there are winged things that can't fly and flying things which have no wings. This is a practical, real-world connection, not a conceptual, logical one. You appear to be claiming that consciousness and information-processing are as intimately linked as being a square and having four sides. Why? I just don't see what it is about these concepts that could lead somebody to believe there is a logically necessary link between them. The link only appears to people who want to defend materialism. I don't think it actually exists, although I am ready to consider any suggestions anyone can offer as to why it does.

Information processing and subjective experience are linked in that if you don't have the processing, you can't have the experience. At least I've never seen any evidence that a system that does not process information could experience anything. Whether there is anything other than information processing involved is another matter. But what is important, in my opinion, is that we have never observed that extra thing. We know we experience, and we know that the brain processes information. But we don't know what else is there, if anything.
 
Glad you liked it. :)

I've been wondering about conscious systems and self-referential information processing. Specifically, whether self-reference is necessary. Could there be 'self-reference zombies'? A zombie that behaves like a system capable of self-referential information processing but isn't. I've been thinking about something like a giant lookup table. Could that kind of machine behave as if it was conscious? Obviously there will be practical difficulties in building such a system, but I'm thinking it might be possible in principle.

10 PRINT "hello"
20 GOTO 10

Hippo Crossmess everybody.
 