Sentient machines

I can't believe Hal hasn't been mentioned!

Funny thing - as he serves humans, he is called a computer.

He starts killing people, and suddenly he's "alive".


What I've always wondered is: did he lie about the broken module, or was he really mistaken? I always thought that he lied (to get them off the ship), but making an honest mistake would be interesting too.
 
It is only like this if we are solipsists.
Er, more than one?

There are other problems with being a solipsist, beyond just what I said before.
Do you aver you *are* The Solipsist, and therefore know that to be True?

I believe that I am not The Solipsist, but can never know one way or the other about you. To give our discussion the reality it appears to have, I'm willing to assume you are not The Solipsist either. I extend that same assumption to other perceived existents as well.
 
I can't believe Hal hasn't been mentioned!

Funny thing - as he serves humans, he is called a computer.

He starts killing people, and suddenly he's "alive".

Does anyone ever refer to him as "alive," apart from during the interview with the Discovery crew? If I recall, Bowman and Poole spent most of their time after that getting killed and/or trying not to get killed.

What I've always wondered is: did he lie about the broken module, or was he really mistaken? I always thought that he lied (to get them off the ship), but making an honest mistake would be interesting too.

I don't think there is any question about it. This sort of thing has cropped up before, and it has always been due to human error.

Jeremy
 
"Do Androids Dream of Electric Sheep?" is a good place to start. You'd probably know it better as Blade Runner.

I haven't read the story, but in the movie the bad guys were definitely fully sentient: genetically engineered humans who were tremendously strong and fast, geniuses at least on the order of the old man himself, but with a 4 year lifespan. They "shouldn't" wonder about life, the universe, and everything, but it's tough to keep a good man down.

But anyway, the problem you describe is a p-zombie. It's something that looks sentient but is actually programmed to imitate sentient behavior. Either the imitation will eventually diverge and it's going to be really obvious, or you'd have to know everything about the universe in order to design a precise mimicry mechanism. Otherwise, if it's made to take in information and respond in ways that appear indistinguishable from self-awareness, it's going to become self-aware as a consequence.

I can conceive of such a thing existing, but how could it talk about "the greenness of green" and the "painfulness of pain" without having the subjective perceptual experience: the (reflective) cogito?

Would be a good experiment for the future, and possibly a way to tell the difference.

The other option is solipsism, where everyone but you is a p-zombie.

Which may be so, but it kind of makes reality even dopier. And without proof, just letting the one and only real mind believe that children are raped and murdered, and people are put in extermination camps, is in and of itself a bad thing to do to someone (i.e. letting it think that such bad things are happening to real "others" out there. Not as bad as actually experiencing it, by any means, but a terrible thing to do to someone all by itself.)
 
I can conceive of such a thing existing, but how could it talk about "the greenness of green" and the "painfulness of pain" without having the subjective perceptual experience: the (reflective) cogito?
I wish you could explain how you conceive of it, because frankly all available evidence suggests it's impossible.

If a color-blind person learns to associate colors with objects generally accepted to be of a particular color, you won't be able to figure out that they're color-blind from self-reports. You'd have to ask questions under carefully-controlled conditions to find out.

Now: what does that imply about "the greenness of green"?
 
If a color-blind person learns to associate colors with objects generally accepted to be of a particular color, you won't be able to figure out that they're color-blind from self-reports. You'd have to ask questions under carefully-controlled conditions to find out.

Now: what does that imply about "the greenness of green"?
Actually, color-blind people cannot tell the difference between certain colors. So there are some simple tests a specialist can do to tell whether a person is color-blind, based on that fact.
 
Actually, color-blind people cannot tell the difference between certain colors. So there are some simple tests a specialist can do to tell whether a person is color-blind, based on that fact.
I'm aware of that. However, some color-blind people learn associations between color names and certain objects, so that in casual conversation, you wouldn't be able to tell that they couldn't differentiate between colors.

If such a person swore they understood what "greenness" meant, what does that imply about the qualia hypothesis?
 
I can conceive of such a thing existing, but how could it talk about "the greenness of green" and the "painfulness of pain" without having the subjective perceptual experience: the (reflective) cogito?
We don't do a very good job of talking about such things either. I know what you mean when you say "the greenness of green", but only because I've also seen green. Your words alone don't really convey any meaning by themselves. It's just as easy for a computer to say "the greenness of green" as it is for a person to say it, even though the computer isn't conscious.
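To make that last point concrete, here's a trivial sketch (the function name is made up for illustration): a program that "talks about" greenness while nothing in it perceives anything.

```python
# A program that emits talk of "greenness" with no perception behind it.
# The string is just bytes; nothing here sees, or has ever seen, green.
def describe_green() -> str:
    # No sensor input, no internal state, no experience; just a literal.
    return "the greenness of green"

print(describe_green())
```

Producing the words and having the experience the words name are clearly separable, which is the gap the qualia argument leans on.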
 
I'm aware of that. However, some color-blind people learn associations between color names and certain objects, so that in casual conversation, you wouldn't be able to tell that they couldn't differentiate between colors.

If such a person swore they understood what "greenness" meant, what does that imply about the qualia hypothesis?
I don't know. What do you think it implies?
 
It's not a difficult question. It's rudimentary, logically speaking.

Not a big fan of the Socratic Method, huh? I suppose that makes sense. It only works when used to get people to admit things they already know - and if you don't know anything, it won't work on you.
 
It's not a difficult question. It's rudimentary, logically speaking.

Not a big fan of the Socratic Method, huh? I suppose that makes sense. It only works when used to get people to admit things they already know - and if you don't know anything, it won't work on you.
I know some things, but one of the things I don't know is what you think mistaken colorblind people imply about "the qualia hypothesis". So I wish you'd just tell me.
 
As you'll soon see, this does belong here as a philosophy discussion, and not in the computers and technology section. :)

Take the following two points as being given:
  1. You have a test for evaluating sentience and consciousness. What this test actually consists of isn't really relevant to this discussion. The test is 100% flawless. It relies entirely on observation of and interaction with the subject being tested. (Not everyone knows what the Turing Test is, which is why I did not use that phrase.)
  2. Someone has created a computer that passes this test.
  3. The person who created the computer has died. We know little of how he was able to accomplish the creation of this computer.
  4. It is not acceptable to tear the computer apart for the purposes of attempting to reverse-engineer it.
  5. No, I am not getting this idea from that Star Trek Episode where they try to determine if Data is alive. :) I actually don't like Star Trek.
The question is...do you now conclude that this computer has attained consciousness and sentience? Or do you just conclude that it is an inanimate object that is pretending to be conscious? If you conclude that it is inanimate and just pretending, then how do you do so without circular reasoning?

Because if you conclude that it is not alive and is just pretending, how do you know that other humans are alive and aren't just pretending? You can't inhabit others' minds. You can't live their experiences. All you can do is observe and evaluate. These observations and evaluations tell you that others are conscious, while telling you at the same time that someone in a vegetative state is not.

So what's the difference? You don't have some magical insight into other humans' existence, and you don't have some magical insight into the computer's existence. Sure, you can say "Well, I know it's electricity and silicon", but similar things can be said about the human brain. That is therefore not an acceptable reply.

So is it alive, or not? How do you know?

My own opinion: I'm not really sure. But if you put a gun to my head and forced me to choose one way or the other, I would consider it to be a new life.
I find it hard to take this seriously when it tells me to consider two points and then lists five.
 
I know some things, but one of the things I don't know is what you think mistaken colorblind people imply about "the qualia hypothesis". So I wish you'd just tell me.
But then I would deprive you of this opportunity to exercise your higher thinking processes and put two and two together to make four.

Won't you demonstrate your thinking skills for us, Mr. 69dodge?
 
