Sentient machines

Can you offer any evidence that you can either "describe red" or "experience red"? How can a color be described?

Describe red? It's light with a wavelength of roughly 629 nm.

Experience red? Red usually makes me cautious. I stop at red lights and signs. I'm more prone to read red text as it may be a warning or something important. These experiences have been programmed into me by being raised in North America. I feel more comfortable around greens and blues than I do around reds.

A machine can easily distinguish colors, but would it feel more comfortable around some than others?
 
Where is your evidence for their unity?

The two phrases have different denotations; it's at least imaginatively possible that a non-conscious object might be able to "see red" without experiencing it. In my experience, such objects are called photodetectors and are available for a nominal sum from the local electronics shop.
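
To make the point concrete, here is a minimal sketch (Python; the wavelength bands are rough approximations of the visible spectrum, chosen for illustration) of everything such a detector does: it maps a measured wavelength to a label, and nothing more.

```python
# A minimal sketch of a photodetector-style color classifier.
# The wavelength bands are rough approximations, for illustration only.

def classify(wavelength_nm: float) -> str:
    """Map a measured wavelength (in nanometres) to a color label."""
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "not visible"

print(classify(629))  # -> "red": it "sees red" without experiencing anything
```

It can "see red" in exactly the sense that matters for the photodetector, and there is plainly no experience anywhere in it.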

I need not, as it is he who is doing the questioning. Furthermore, Materialism explains and predicts all phenomena without the need for two distinct experiences. I'd like evidence that they are, in fact, distinct before continuing the argument that Materialism is wrong.
 
Describe red? It's light with a wavelength of roughly 629 nm.

Experience red? Red usually makes me cautious. I stop at red lights and signs. I'm more prone to read red text as it may be a warning or something important. These experiences have been programmed into me by being raised in North America. I feel more comfortable around greens and blues than I do around reds.

A machine can easily distinguish colors, but would it feel more comfortable around some than others?

Why not? How is there any difference between a human brain and a computer that is as complex as a human brain?
 
Why not? How is there any difference between a human brain and a computer that is as complex as a human brain?


We know, through personal experience (and some biological and anatomical deductions), that the human brain is capable of consciousness. We don't know if the same is true of the complex computer.
 
We know, through personal experience (and some biological and anatomical deductions), that the human brain is capable of consciousness. We don't know if the same is true of the complex computer.

1) Provide evidence that we are conscious.
2) Either you think that a computer as complex as the human brain can be conscious, or you attribute some ghost in the machine to consciousness.
 
1) Provide evidence that we are conscious.

I can't. My knowledge of my consciousness lies in the fact that I am conscious -- I know that I am self-aware because I'm aware of it. Since the only proof of my consciousness is my direct experience of it, I can not prove it to anyone else, as they do not share my consciousness.

Likewise, I can not know, in the philosophical sense, if someone else is conscious, since I do not share their self-awareness or lack thereof.

Since other humans appear to be conscious (in that they behave as if they are), and since other humans are very similar to me, I can assume that they're probably conscious just as I am; but I can not prove it conclusively.

2) Either you think that a computer as complex as the human brain can be conscious, or you attribute some ghost in the machine to consciousness.

Actually, the issue is not if a computer can be conscious, but if we can know if it is conscious. With our current understanding of consciousness (which can be summed up as "We don't") we can't tell the difference between a computer that really is conscious and a computer that is not conscious but merely behaves as if it were.
 
I can't. My knowledge of my consciousness lies in the fact that I am conscious -- I know that I am self-aware because I'm aware of it. Since the only proof of my consciousness is my direct experience of it, I can not prove it to anyone else, as they do not share my consciousness.

But aware of what, exactly?

You can't be aware of the processes that monitor and are aware of mental events. So whatever we're talking to when we talk to you is not self-aware.
 
But aware of what, exactly?

Aware of my self. "Cogito ergo sum."

You can't be aware of the processes that monitor and are aware of mental events. So whatever we're talking to when we talk to you is not self-aware.

I don't understand what you're getting at here. My ears are not self-aware, no, and my individual neurons are not self-aware either (or if they are, they haven't told me). That doesn't mean that I'm not self-aware.
 
You can't be self-aware. The processes that are aware of other things are too complex to be aware of themselves. Plus, how would they gain information about themselves?

Your visual-processing centers may be aware of the information coming in from the eyes, but they are not aware of their own processing. Your sense of self-awareness may be aware of a large set of mental processes, but it's not capable of knowing how it works.
 
Isn't there a difference between being aware of your own existence (which is what I think self-aware usually means) and a deep understanding of the functions of the brain?
 
You can't be self-aware.

I am aware that I exist. That's what's usually meant by being self-aware.

The processes that are aware of other things are too complex to be aware of themselves. Plus, how would they gain information about themselves?

Your visual-processing centers may be aware of the information coming in from the eyes, but they are not aware of their own processing. Your sense of self-awareness may be aware of a large set of mental processes, but it's not capable of knowing how it works.

I'm sorry, I don't understand what you mean by "self-awareness" in the above. You seem to be talking about some kind of understanding of the mechanisms of consciousness?
 
I can't. My knowledge of my consciousness lies in the fact that I am conscious -- I know that I am self-aware because I'm aware of it. Since the only proof of my consciousness is my direct experience of it, I can not prove it to anyone else, as they do not share my consciousness.

How do you know you are truly having direct experience of your consciousness? Also, "having direct experience of consciousness" suggests a ghost in the machine; i.e., there is something that "experiences" consciousness.

Likewise, I can not know, in the philosophical sense, if someone else is conscious, since I do not share their self-awareness or lack thereof.

Nor do you know that you are conscious.

Since other humans appear to be conscious (in that they behave as if they are), and since other humans are very similar to me, I can assume that they're probably conscious just as I am; but I can not prove it conclusively.

Indeed.

Actually, the issue is not if a computer can be conscious, but if we can know if it is conscious. With our current understanding of consciousness (which can be summed up as "We don't") we can't tell the difference between a computer that really is conscious and a computer that is not conscious but merely behaves as if it were.

How is that any different? Something that behaves as if it were conscious in all respects naturally is conscious. The only way that it wouldn't be would be to assume a ghost in the machine.
 
How do you know you are truly having direct experience of your consciousness?

Err. Because I experience it. It's much the same way I know that I'm feeling happy or sad: by feeling happy or sad.

Also, "having direct experience of consciousness" is suggesting a ghost in the machine. I.e. there is something that "experiences" consciousness.

If you're not aware that you're conscious, you are by definition not conscious. "Having direct experience of consciousness" means that I am aware of myself; you can not be self-aware without knowing that you're self-aware.

Nor do you know that you are conscious.

Yes, I do. Honest.


How is that any different? Something that behaves as if it were conscious in all respects naturally is conscious.

No. No more than a human who acts angry in all respects has to actually be angry.

The only way that it wouldn't be would be to assume a ghost in the machine.

Not at all. Even assuming that consciousness is an entirely materialistic phenomenon, as long as we don't know how it is created in a system (and we don't), we are not able to look at the internals of a system to determine if consciousness is created in that system. We don't know what bits are needed to create consciousness, so there's no point in looking at what bits exist in the widget.

That leaves us with the external behaviour of the system; and there is no way to determine (absolutely) consciousness from that either, since any behaviour that's indicative of self-awareness can be simulated by an automaton.
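
As a toy illustration (nothing but a hypothetical lookup table; the canned questions and answers are invented), here is an automaton that gives all the right answers to a naive verbal test for self-awareness:

```python
# A toy automaton: a bare lookup table that "passes" a naive
# verbal test for self-awareness with no inner life at all.

CANNED_REPLIES = {
    "are you conscious?": "Yes, of course I am.",
    "are you self-aware?": "I am aware that I exist.",
    "how do you know?": "Because I directly experience it.",
}

def automaton(question: str) -> str:
    """Return a canned answer; there is nobody home."""
    return CANNED_REPLIES.get(question.strip().lower(), "Could you rephrase that?")

print(automaton("Are you conscious?"))  # "Yes, of course I am."
```

On those questions its external behaviour is indistinguishable from a sincere answer, which is exactly the problem.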

It's possible, but not proven, that consciousness necessarily arises out of complexity, in which case any system sufficiently complex to exactly simulate consciousness must in fact be conscious.
 
Err. Because I experience it. It's much the same way I know that I'm feeling happy or sad: by feeling happy or sad.

We can induce a feeling of happiness or sadness in humans by manipulation of the brain. Why would it be any different for consciousness?

If you're not aware that you're conscious, you are by definition not conscious. "Having direct experience of consciousness" means that I am aware of myself; you can not be self-aware without knowing that you're self-aware.

Why can a computer not be aware of itself?

No. No more than a human who acts angry in all respects has to actually be angry.

So? We are debating whether a complex computer can be conscious. If there is no basis, except the assumption that humans are like me, to think that other humans are conscious, then what is stopping a computer from being conscious?

Not at all. Even assuming that consciousness is an entirely materialistic phenomenon, as long as we don't know how it is created in a system (and we don't), we are not able to look at the internals of a system to determine if consciousness is created in that system. We don't know what bits are needed to create consciousness, so there's no point in looking at what bits exist in the widget.

We can induce various 'conscious' states in people through manipulation of the brain. We are well on our way to understanding what creates consciousness.

That leaves us with the external behaviour of the system; and there is no way to determine (absolutely) consciousness from that either, since any behaviour that's indicative of self-awareness can be simulated by an automaton.

So why the assumption that there is a difference?

It's possible, but not proven, that consciousness necessarily arises out of complexity, in which case any system sufficiently complex to exactly simulate consciousness must in fact be conscious.

If it does not arise out of complexity, then why do brain-damaged patients exhibit 'unconsciousness' (not being conscious)? You are begging the question here, by saying "we can never know for certain". So? We can never know anything for certain, so why does it matter?
 
We can induce a feeling of happiness or sadness in humans by manipulation of the brain. Why would it be any different for consciousness?

I've never claimed it would be any different.

Why can a computer not be aware of itself?

I've never claimed a computer couldn't be self aware.

So? We are debating whether a complex computer can be conscious.

You might be debating that, but I'm not.

If there is no basis, except the assumption that humans are like me, to think that other humans are conscious, then what is stopping a computer from being conscious?

Nothing; but as a computer is less like me than other humans are, my assumption that it is conscious because it is similar to me is weaker.

We can induce various 'conscious' states in people through manipulation of the brain. We are well on our way to understanding what creates consciousness.

Maybe, but we're not there yet. Until we know more about how consciousness is created, we can't tell if a system is conscious by examining the internals of the system.

So why the assumption that there is a difference?

I haven't assumed that, but the possibility that there might be a difference means that we can't know for certain if the computer is actually conscious.

If it does not arise out of complexity, then why do brain-damaged patients exhibit 'unconsciousness' (not being conscious)?

Just because one complex system (the brain) creates consciousness doesn't mean that any complex system does the same. Complexity might be a necessary, but not sufficient, factor in creating consciousness, so we can not assume that any sufficiently complex system must be conscious.

You are begging the question here, by saying "we can never know for certain". So? We can never know anything for certain, so why does it matter?

Because we're having a philosophical discussion about the nature of consciousness. (And I've not said that we can never know for certain; I've only said that with the understanding of consciousness we currently have we can't know for certain.)
 
I've never claimed it would be any different.

Oh.

I've never claimed a computer couldn't be self aware.

Oh.

You might be debating that, but I'm not.

Oh. My bad, on all 3 counts. :(

Nothing; but as a computer is less like me than other humans are, my assumption that it is conscious because it is similar to me is weaker.

What is the difference between a human brain and a computer that is every bit as complex and simulates every connection? What is your basis to say that such a computer is unlike a human?

Maybe, but we're not there yet. Until we know more about how consciousness is created, we can't tell if a system is conscious by examining the internals of the system.

What is your criterion for "having to know more"? Why must we know more? We have some results, and we can assume that they will be consistent. How does our lack of knowledge have any effect on anything? Other than, say, explaining exactly how consciousness is formed.

I haven't assumed that, but the possibility that there might be a difference means that we can't know for certain if the computer is actually conscious.

Sure, the possibility is there that the invisible pink unicorn creates consciousness. So? The possibility of other theories does not detract from the current one. What is your basis for thinking that consciousness might not be entirely physical? Other than, of course, saying that there are other explanations.

Just because one complex system (the brain) creates consciousness doesn't mean that any complex system does the same. Complexity might be a necessary, but not sufficient, factor in creating consciousness, so we can not assume that any sufficiently complex system must be conscious.

Why is one level of complexity in the brain different from the same level of complexity in a computer? Because it's 'nature'? There is no evidence that consciousness is not a "product of complexity".

Because we're having a philosophical discussion about the nature of consciousness. (And I've not said that we can never know for certain; I've only said that with the understanding of consciousness we currently have we can't know for certain.)

Exactly, so why are you bringing in things like "we don't know yet"? That makes no difference. There is no reason to think that complexity in the human brain is any different from complexity in a computer. If complexity causes consciousness, a computer can be conscious. If complexity does not, I'd like to see why you are adding a ghost into the machine, which is an extra complication that is not needed.
 
1) Provide evidence that we are conscious.
2) Either you think that a computer as complex as the human brain can be conscious, or you attribute some ghost in the machine to consciousness.
1) You don't want to entertain the idea that we are all p-zombies, so I don't even know why you are asking this question.

2) This is a false dichotomy. The case might be that subjective experience can only arise in organic substances, so despite the structure of the computer being the same, it may not be conscious based on the fact that it is composed of different materials. No "ghost" is required.
 
I am aware that I exist. That's what's usually meant by being self-aware.

But what is 'I'? Seriously, what is it?

I'm sorry, I don't understand what you mean by "self-awareness" in the above. You seem to be talking about some kind of understanding of the mechanisms of consciousness?

No, awareness of the end result. There is a module of your mind that monitors other modules in your mind - not necessarily their input, or their internal functioning, but their output. This module cannot know its own functioning. It also cannot monitor its own output directly. This module is not aware of itself.
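
A rough software analogy, and only an analogy (the module names are made up): a monitor that reports the output of every registered module works fine, until it is asked to monitor its own reporting, at which point it falls into infinite regress.

```python
# A rough analogy: a monitor that observes the output of other
# modules, but cannot directly observe its own act of observing.

class Monitor:
    def __init__(self):
        self.modules = {}

    def register(self, name, module):
        self.modules[name] = module

    def report(self):
        # Report every registered module's output -- including, if
        # registered, its own report(), which recurses without end.
        return {name: module() for name, module in self.modules.items()}

monitor = Monitor()
monitor.register("vision", lambda: "edge detected")
monitor.register("hearing", lambda: "tone at 440 Hz")
print(monitor.report())  # fine: the outputs of the *other* modules

monitor.register("self", monitor.report)
try:
    monitor.report()  # the report on the report on the report...
except RecursionError:
    print("the monitor cannot directly monitor its own output")
```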
 
1) You don't want to entertain the idea that we are all p-zombies, so I don't even know why you are asking this question.

Why wouldn't I?

2) This is a false dichotomy. The case might be that subjective experience can only arise in organic substances, so despite the structure of the computer being the same, it may not be conscious based on the fact that it is composed of different materials. No "ghost" is required.

Perhaps it is a false dichotomy in that sense, but only if there is any meaningful difference between "organic substances" and inorganic ones. However, is there? Not in the slightest. Both are made of the same atoms, so there is no reason why there should be any difference between the two. Either there is some kind of "natural kind" to an organic substance, which is of course nonsense, or there is some special 'ghost in the machine' to organic substances. How, exactly, is there a difference between the two?
 
