
Sentient machines

There's been a thread or two about this before. I think it will be a complete non-issue.

First, nobody is going to be making AIs just for the hell of it (except for maybe some prototypes to show it can be done). We're going to make them to do specific tasks. Given that, I fully expect they will be "programmed" (or conditioned/evolved/whatever term you want to use) to want to do what we tell them to do -- there's no better motivation than desire.

They'll be sentient, and many people (myself included) will think that entitles them to the same legal rights as humans wherever practical. However, I think their "programming" will make this a moot point. They won't care about their rights. All they'll care about is doing their job, and they'll be perfectly content with that arrangement. They'll be obsessive workaholics.

In fact, if we ever get to the point where there are "free the AIs" rallies and the like, I fully expect it's the AIs themselves who will laugh at them and tell them to mind their own damn business.

Someone (I forget who, sorry!) suggested that it would be a situation very similar to that of the house-elves in the Harry Potter books, who sneer at anyone trying to "free" them, and whose worst fear in the world is losing their jobs.

Jeremy
I understand, but not exactly my point...


I was actually being cynical (it's my job) about the way we treat fellow humans. Why should we be any nicer to AIs?
 
Well, genuine Space Opera, at least. ;)

Although Star Trek is absolutely a space opera, I'd say it's science fiction as well. And genuine at that.

As a huuuge Asimov fan, I was very disappointed by I, Robot, although I'll admit it wasn't a bad movie. It was very un-Asimovian. Ditto for Robin Williams' Bicentennial Man.

I hope that one day they'll make a good Asimov movie.
 
Q-Source,

Forget, for the moment, everything we know about how the brain works, and about the role it plays in things like thinking, remembering, perceiving, and so on. Even without any of that knowledge, we can look at other people, observe that they behave very similarly to the way we do, and observe that there are no apparent differences between us. Given such observations, the most reasonable conclusion is that they are exhibiting this behavior for the same reasons we are, and are therefore also conscious.
How can you be sure that everyone else is conscious?
We can't be "sure". We can't be "sure" of any synthetic proposition. What I said is that it is the most reasonable conclusion to reach based on the available evidence.

What if we are all p-zombies? According to your reasoning, we all must be p-zombies instead.
How does my reasoning imply this? The fact that I am conscious is a tautology, because I can only define consciousness to be the various mental phenomena which I know I have. In order for other people to not have them as well, I must imagine that I am somehow special, and that my behavior is somehow determined differently than that of anybody else. Sure, this is possible. But it is not the most reasonable explanation for the available evidence. The most reasonable explanation is, as I said, that they exhibit the same sort of behavior as I do for the same reasons as I do.

I don't understand your language. Why is "consciousness" still in your vocabulary? If you think that subjective processes are identical to objective processes, then at the very least you should be consistent like Daniel Dennett and eliminate the word consciousness once and for all.
Why? It all depends on what you mean by "consciousness". If you insist on definitions which implicitly assume some sort of dualism, then sure, it doesn't represent anything which actually exists. I do not insist on any such definitions.

The way I see it, there are two approaches (for physicalists) with respect to handling the usage of the word "consciousness":

One approach is to simply say that since most people use the word in an incoherently defined way, and attach all sorts of dualist preconceptions to it, that it is best to just reject the word as not being useful. Note that this is not equivalent to saying that consciousness does not exist! It is, rather, saying that the word "consciousness" is not defined coherently enough for it to be meaningful to say that it does or does not exist.

The other approach is to provide a definition for the word which is coherent, and does not make any metaphysical or dualistic assumptions.

There are pros and cons to both approaches. The obvious disadvantage of the first approach is that it allows non-physicalists to claim that we are denying the existence of consciousness. The obvious (but false) implication being that we are denying the existence of our own mental processes, which is nonsensical. They then dismiss all of our arguments and evidence out of hand, on the basis that we are obviously being irrational and silly. Never mind that we are not actually denying the existence of any of the mental phenomena which we all know we possess. We are just denying all of the dualist preconceptions that people insist on tying to them. After all, it is much easier to simply attack the "they think we're all p-zombies" strawman than to actually deal with our arguments and evidence.

The disadvantage to the second approach is that, in spite of the fact that "consciousness" can be defined such that it simply refers to mental phenomena, without making any implicit dualistic assumptions, people will simply ignore these definitions and try to apply their own to the things we say about it, thereby resulting in nonsensical interpretations of what we say.

The fundamental problem is that dualistic preconceptions are tightly tied into not only our basic intuitive conceptions of the mind, but also all of the language we use to talk about it. There is no ideal solution to this problem. I am sure that Dennett has his reasons for choosing the solution he has, and I likewise have my reasons for choosing the one I have.


I define consciousness to be a set of processes: namely thinking, remembering (which are really just aspects of the same process), and sensory processing. Nobody will argue that these processes do not exist. Some people will attempt to argue that these processes aren't what they think of as consciousness. They will then toss around terms they can't define, like "phenomenal experience" and "raw feels". I won't claim that what they are calling consciousness doesn't exist. What I will say is that since I don't know what they mean by those terms, I can't address the issue at all, and that I see no reason to think that the definition I gave leaves out anything which I actually know that I have.


Dr. Stupid
 
Q-Source,

It is, rather, saying that the word "consciousness" is not defined coherently enough for it to be meaningful to say that it does or does not exist.

At last, one individual who thinks what I think. This has been my position on the forum since I arrived.
 
This is an extremely good question. This is the one question that my very brilliant philosophy professor (this was 15-20 years ago) could not answer. We called it the "computer head problem": if you created a computer that exactly mimicked the human brain (and maybe stuck it into a human body), would that be considered "alive" or "conscious", or have "reason", etc.?

The best my professor could come up with is that it is not, but that it would be indistinguishable from something that does. Why? Our definitions of being alive, or of having consciousness or reason, are based on behavior. At least outwardly. Outwardly, both humans and the machine described would exhibit the same qualities. But really, definitions of being alive, or of having consciousness or reason, should not be based on behavior, but rather on the causes of behavior. In the case of the "computer head", the cause of behavior is the way the computer is programmed; in the case of humans, the cause of behavior is the way they have developed naturally.

This presents big problems. If you observe an effect, you cannot tell whether the effect is "real" or "programmed" (unless you know how the object that caused the effect was created).

Ultimately the observations of behavior are indistinguishable. The "computer head" is fake (because it was intelligently designed) and the human is real (because it developed naturally).

I think it starts getting to a level of philosophy equivalent to physical reality and quantum mechanics. What is "real" or "conscious" becomes dependent on the observer.

Of course I haven't followed philosophy for the past 15-20 years, so someone much smarter than me has probably worked this out much better.
 
Why? It all depends on what you mean by "consciousness". If you insist on definitions which implicitly assume some sort of dualism, then sure, it doesn't represent anything which actually exists. I do not insist on any such definitions.


Ha! There was already a definition of the word consciousness long before physicalists came up with a new definition of what they think it is. Check any dictionary and you can still find the definition of what we mean in philosophy by consciousness.


The way I see it, there are two approaches (for physicalists) with respect to handling the usage of the word "consciousness":

One approach is to simply say that since most people use the word in an incoherently defined way, and attach all sorts of dualist preconceptions to it, that it is best to just reject the word as not being useful. Note that this is not equivalent to saying that consciousness does not exist! It is, rather, saying that the word "consciousness" is not defined coherently enough for it to be meaningful to say that it does or does not exist.

The other approach is to provide a definition for the word which is coherent, and does not make any metaphysical or dualistic assumptions.

You are being intellectually dishonest. What you and other physicalists are doing is defining consciousness by its antithesis. Therefore, my subjective experiences, which are intrinsic, private, and directly apprehensible, turn out to be (according to you) objective and accessible from a third-person perspective. This is what I mean when I say that you deny the existence of the very properties that make me conscious. And I am not even using any dualistic assumption, because I am not a dualist.

The worst of all is not that you are saying that consciousness is its antithesis, but the fact that you cannot even provide the evidence to say so. You can never know how I feel when I think of awareness, for example.

Belem
 
This is an extremely good question. This is the one question that my very brilliant philosophy professor (this was 15-20 years ago) could not answer. We called it the "computer head problem": if you created a computer that exactly mimicked the human brain (and maybe stuck it into a human body), would that be considered "alive" or "conscious", or have "reason", etc.?

The best my professor could come up with is that it is not, but that it would be indistinguishable from something that does. Why? Our definitions of being alive, or of having consciousness or reason, are based on behavior. At least outwardly. Outwardly, both humans and the machine described would exhibit the same qualities. But really, definitions of being alive, or of having consciousness or reason, should not be based on behavior, but rather on the causes of behavior. In the case of the "computer head", the cause of behavior is the way the computer is programmed; in the case of humans, the cause of behavior is the way they have developed naturally.

No offense, but your professor doesn't sound so brilliant to me. Why would it matter whether the brain is in hardware or software? Why is "developing naturally" a prerequisite for being considered aware? Are we not, ourselves, simply a type of machine? All those questions must be answered before your professor's position is coherent.

Additionally, he must address the issue of how the "computer head" mimics consciousness, with all its subtlety, without actually reproducing it. If simulating an entire brain results in all the effects of consciousness but not consciousness itself, then you must provide some other mechanism by which those effects are produced.

This is just dualism in sheep's clothing. It rejects the notion that consciousness arises because of the way the brain functions, and says instead that humans are conscious and computer heads are not because humans have some ineffable quality that computer heads lack. You might as well go ahead and call it a soul.

Jeremy
 
What if you give the machine the definitions of consciousness, life, etc., and then ask it whether it's sentient or not? Do you think it'd blow a fuse?

Actually, the self-reflective consciousness must be able to affect the physical world (something philosophy wonders about) precisely because we can, and do, talk about it. I have a tough time imagining some "zombie" could talk about "experiencing the redness of red, or the pain-ness of pain".

Might be an interesting test, once that day arrives with AI.
 
There is no software without the machine. There is no machine that does not possess a program, even if it's a meaningless one determined at random. Postulating one implies the other.
 
Hummm...

In order to be sentient, a machine is going to have to have a "consciousness". We're going to have to know what the hell that is, first.

Belz...
 
I say that it is. My argument goes like this:

1) I know that what I think of as my "consciousness" affects my behavior, and does so in a very strong way.

2) There is extremely strong scientific evidence that human behavior is completely controlled by the brain, and that there is not any mysterious "stuff" interacting with the brain to influence human behavior.

3) My brain, and my behavior, are more or less the same as every other human being's, indicating that it is extremely unlikely that, while other people's behavior is caused by their brains (as science indicates), my own is not.

4) I thus conclude that my own behavior is caused by my brain, which means that whatever it is which I think of as my "consciousness", must be something my brain is doing.

5) I thus conclude that other human beings are also conscious.

Note that if I could not conclude (4), it would not be rational for me to conclude that other people are also conscious at all. That is the thing that really boggles my mind about people who insist that there is more to consciousness than brain activity. If they really believe this, then they have absolutely no rational justification for believing that anybody other than themselves possesses this additional component!


That's how I see it, anyway.
The problem, Stimpson, is not whether behavior is related to the brain—because it is very easily observable that it is—but whether behavior is related to subjective experience. In your first premise, you establish that you believe your own subjective experience, the only experience that could possibly interact with you, is related to your behavior. While I could argue that this premise is wrong on the basis that subjective experience in oneself might be an illusion created by the brain, even taking the premise for granted wouldn't really help your argument much. This is because a premise offering such scant evidence of correlation between subjective experience and behavior means that the entire argument relies on the secundum quid fallacy. The argument says that because we know of one instance where behavior and subjective experience are related, we can know that these two phenomena are always causally related. Inductive reasoning was never meant to work this way.
 
Actually, the self-reflective consciousness must be able to affect the physical world (something philosophy wonders about) precisely because we can, and do, talk about it. I have a tough time imagining some "zombie" could talk about "experiencing the redness of red, or the pain-ness of pain".

Might be an interesting test, once that day arrives with AI.
"Describing red" and "describing pain" are behaviors. The question is whether a subjective experiencing of "red" is related to a behavior which constitutes "describing red." A p-zombie would easily be able to "describe red" as the only way in which it would differ from a "conscious" individual would be in its ability to have the subjective experience of "red," not in its ability to describe it. It has also not been established that a person's descriptions of their subjective experience necessarily reflect their subjective experience, even assuming that they have it.
 
The only reason any of us has for believing that anybody else possesses consciousness like we do is observing their behavior. So the question is, if some machine exhibited the same type of behavior as a human being, would there be any criteria by which we could determine that the human being is sentient, and the machine is not?
Is it possible for a human to design a computer to behave in such a way as to appear to possess consciousness? A magician can make it appear that a woman is sawn in half when she really isn't. Don't get me wrong, this is no easy task. I'm quite familiar with the Turing Test and I've followed the Loebner Prize from time to time. In the long run it might be easier to program a computer to *think than to appear to think for a wide range of tests; a sketch of the "appear to think" trick follows below.

*Depending on the definition of thinking.
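
To give a feel for how cheap "appearing to think" can be, here's a minimal sketch (in Python; the pattern/response pairs are invented purely for illustration) of the ELIZA-style trick behind most Loebner-type entrants: canned pattern matching with no model of meaning behind it.

```python
import re

# A minimal ELIZA-style responder: canned pattern/response pairs that can
# "appear to think" in a short exchange without any model of meaning.
# The patterns below are invented solely for this illustration.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "What makes you think {0}?"),
    (re.compile(r"\bare you (.+)\?", re.IGNORECASE), "Would it matter to you if I were {0}?"),
]

def respond(line):
    # Try each rule in order; echo the captured fragment back as a question.
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(match.group(1).rstrip("?.!"))
    return "Tell me more."

print(respond("Are you sentient?"))                   # Would it matter to you if I were sentient?
print(respond("I think machines can be conscious."))  # What makes you think machines can be conscious?
```

Nothing in there is any closer to thinking than the magician's saw is to surgery; it just trades on how forgiving a short conversation is.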
 
Yet again, Batman Jr., where is your evidence for the difference between "seeing red" and "the experience of seeing red"?
 
Yet again, Batman Jr., where is your evidence for the difference between "seeing red" and "the experience of seeing red"?
An organism "describing red" and actually "experiencing red" are two distinct intellectual constructs. As such, you cannot a priori equate them, and again, I don't argue that there is a difference, merely that you cannot tell if there is or is not one.
 
Can you offer any evidence that you can either "describe red" or "experience red"? How can a color be described?
 
Yet again, Batman Jr., where is your evidence for the difference between "seeing red" and "the experience of seeing red"?

Where is your evidence for their unity?

The two phrases have different denotations; it's at least imaginatively possible that a non-conscious object might be able to "see red" without experiencing it. In my experience, such objects are called photodetectors and are available for a nominal sum from the local electronics shop.
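
To make the photodetector point concrete, here is a toy sketch (in Python; the RGB thresholds are arbitrary values invented for the example) of a program whose entire outward behavior is "describing red", with, presumably, no experience of red anywhere in sight:

```python
# A toy "red detector": its outward behavior is reporting red, with
# nothing resembling experience behind the report. The RGB threshold
# values are arbitrary, invented for this example.

def describe_color(rgb):
    """Return a verbal report about an (R, G, B) triple."""
    r, g, b = rgb
    if r > 180 and g < 100 and b < 100:
        return "I am seeing red."
    return "I am not seeing red."

print(describe_color((220, 40, 30)))  # I am seeing red.
print(describe_color((30, 40, 220)))  # I am not seeing red.
```

Whatever "experiencing red" turns out to be, it plainly isn't required to produce the verbal report.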
 
