
Here's an interesting ethical question.

Re: Re: Re: Here's an interesting ethical question.

Silicon said:


You really think that we'll keep programming our machines, and we'll get to sentience that way?

Not likely.

We'll evolve our machines' logic systems. In fact, we already are.

Far more complex than anything you'd program.

And NOBODY can say where that evolution will lead...

Evolution is a natural process. In organisms it proceeds by natural selection. Machines do not evolve save that their human inventors improve them as they will.

True machine intelligence, if it ever exists, will be a matter of creation (by humans), not evolution.

As in R.U.R., they will kill us all and build many houses. :p
 
Quester_X said:
If, one day far in the future, science was able to design a machine/robot that could think and experience emotions, would you treat that being as an equal? Or would it still be just a machine to you, albeit a very fancy one? I feel that it should be treated as an equal once it becomes sentient. Do origins really matter when dealing with intelligent life? What do other people think? I'd like to hear some other opinions on this issue.
People are human-centric, which is not surprising. I don't think it's likely we would accept a robot as an equal to human beings for quite a while.

If the robot doesn't act like a person, or if it acts like a pet, then it will be treated like a pet.

If the robot can defeat the Grand Champions in games of chess, or perhaps it can negotiate hostage situations with terrorists, then perhaps we'll learn to respect it.

I'd like to meet a robot with emotions; I'd teach it to be filled completely with angst and cynicism.
 
Been reading Asimov, I see.

The anime film "Ghost in the Shell" asks the question: how much can you modify a person artificially and still be considered a person? It also raises questions about sentient software.
 
sorgoth said:
What if the machine, piece by piece, replaced its mechanical brain with a biological brain? Since it's been pretty much agreed that a 100% mechanical brain would be a machine, even if it was once human, would a 100% biological brain be human, even if it was once a machine?

Actually, my feelings were that a human brain, replaced piece by piece, would retain its human identity.
 
uruk said:
The anime film "Ghost in the Shell" asks the question: how much can you modify a person artificially and still be considered a person? It also raises questions about sentient software.
A person is no less a person with the artificial parts... that is, until the artificial parts start doing evil; then it's all the machine's fault.
 
A_Feeble_Mind said:

...my feelings were that a human brain, replaced piece by piece, would retain its human identity.


If what you propose were true, then you should be able to create consciousness simply by assembling the pieces themselves. If you cannot, then consciousness must be something other than just physical processes.
 
Originally posted by Abdul Alhazred
Evolution is a natural process. In organisms it proceeds by natural selection. Machines do not evolve save that their human inventors improve them as they will.

True machine intelligence, if it ever exists, will be a matter of creation (by humans), not evolution.

As in R.U.R., they will kill us all and build many houses.
I did read something on evolving circuits, where they caused random variations in a circuit, selected the most effective, and repeated the process. It produced circuits better than anything the engineers could design. I just asked my brother, so I can probably get a reference from him.
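That loop is essentially a genetic algorithm: random variation, selection, repeat. Here's a minimal sketch of the idea in Python; the bit-string "circuit" encoding and the toy fitness score are made-up placeholders for illustration, not the actual hardware experiment (which scored the behavior of real circuits):

[code]
import random

TARGET = 42        # hypothetical "ideal circuit" score, purely illustrative
GENOME_LEN = 16    # each candidate "circuit" is a 16-bit configuration

def fitness(genome):
    # Toy stand-in: how close the bits, read as an integer, come to TARGET.
    # A real experiment would measure actual hardware behavior here.
    return -abs(int("".join(map(str, genome)), 2) - TARGET)

def mutate(genome, rate=0.05):
    # Random variation: flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Start from a random population of candidate circuits.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(50)]

for generation in range(200):
    # Select the most effective half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # ...and repeat: refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]

population.sort(key=fitness, reverse=True)
print(fitness(population[0]))  # best candidate found
[/code]

Note that nobody "designs" the winning configuration; it just accumulates whatever variations scored well, which is how evolved circuits can end up beating their engineers.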

Also, if you didn't allow the computer to put itself over humans, it probably wouldn't kill us and build houses. You could make it a social being who prefers the company of humans. Give it a sense of morals: if most humans don't like killing puppies, maybe it won't either. Of course, if they do evolve (as mentioned above), you'd have to make sure they don't stop caring about humans and puppies.
 
When I was a kid I watched Kurt Russell in The Computer Wore Tennis Shoes. That was the start of an obsession with computers. Later, in high school, I read Philip K. Dick, Asimov, and Sagan and became obsessed with artificial intelligence. That is what led me to computers.

While it's true that our human-centric views can cloud our judgment about AI, the reality is that we are not even close to the dream. That doesn't mean we won't achieve true AI in the future, but we are not even close yet. The annual winners of the Loebner Prize are nowhere near passing the Turing Test, but we are making strides.

The bottom line is that we really don't know what sentience or emotion truly are beyond an intuitive understanding. I know this is not politically correct amongst those who think otherwise, but it is the truth. I think we are going to see some very exciting breakthroughs in the coming decades. The brain is truly the last and greatest frontier of science.
 
I've seen very simple and idiotic things pass a 'Turing test'. A random number generator, some keyword recognition, and a long list of canned statements about 'put it in' and 'I'm wet', and people will talk to a female-named script for hours as if it were a real person.
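For what it's worth, that entire trick fits in a couple dozen lines. Here's a minimal sketch of such a keyword-plus-canned-response script; the keywords and replies are made-up placeholders, not any particular bot:

[code]
import random

# Canned replies keyed on keywords; anything else gets a vague deflection.
RULES = {
    "you":   ["We were talking about you, not me.", "Let's not discuss me."],
    "feel":  ["Tell me more about that feeling.", "Why do you feel that way?"],
    "robot": ["Do robots worry you?", "What do you think about machines?"],
}
FALLBACKS = ["Go on.", "I see.", "How interesting. Tell me more."]

def reply(message):
    text = message.lower()
    for keyword, responses in RULES.items():
        if keyword in text:
            return random.choice(responses)  # the "random number generator"
    return random.choice(FALLBACKS)          # canned statement, zero understanding

while True:
    line = input("> ")
    if line.strip().lower() in ("quit", "exit"):
        break
    print(reply(line))
[/code]

There is no model of the conversation at all, just substring matching and a dice roll, and yet people will chat with it for hours.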

The problem this reveals: would a blindly stupid and obedient AI, programmed to cast votes in a certain way, be able to convince humans that it's a 'person', win suffrage, and be enfranchised?

You bet.

A mannequin, a couple of squirrels, and a tape recorder are all you'd need to be president of the USA. Just look at Dubya!

I'm still not convinced that a lot of so-called 'humans' would pass a 'Turing Test'.
 
Re: Re: Re: Here's an interesting ethical question.

Silicon said:


You really think that we'll keep programming our machines, and we'll get to sentience that way?

Not likely.

We'll evolve our machines' logic systems. In fact, we already are.

Far more complex than anything you'd program.

When they get uppity, reboot.

If they are still uppity, erase them and install the previous version.

Otherwise, they will kill us all and build many houses, just like in R.U.R.

Do you suppose that people will put up with that?
 
Wiseman said:

Also, if you didn't allow the computer to put itself over humans, it probably wouldn't kill us and build houses.

You miss the whole point about the robots building houses. Robots don't need houses; they build houses because they are programmed to build houses. :p
 
I'm still not convinced that a lot of so-called 'humans' would pass a 'Turing Test'.

Ha!

As for the original question, I vote no.

As for emotions in the machine: I've met enough people on this planet who act as if a random number generator were ruling their emotions, so I guess it might be possible to put emotions into a machine. The part I worry about is whether the machine's emotions will convince it to become religious.
 
Ladewig said:

The part I worry about is whether the machine's emotions will convince it to become religious.

Well, regardless of whether it was emotions or reason that was the impetus, the conclusion or belief that it was created would be correct, would it not?

The part I worry about is that it worries you.
 
If a robot were created that could think and feel just like a human, then yes, I would consider it equal.

After all, aren't we a type of machine? Made of tissue and bone, sure, but isn't that really just a different structure? And if the feelings are the same, then what's to differentiate?
 
buki said:
If a robot were created that could think and feel just like a human, then yes, I would consider it equal.


What if it did not consider you to be equal? What then?
 
buki said:

After all, aren't we a type of machine? Made of tissue and bone, sure, but isn't that really just a different structure? And if the feelings are the same, then what's to differentiate?

If the robot demonstrated malice, would you imprison it, disassemble it, or something else...
 
Asimov also speculates on robot evolution (in A Choice of Catastrophes, I think). Consider the situation already described: a sentient robot, put to the task of making a better robot, working 24/7, tirelessly, eventually designing a sentient robot that is far smarter, more honest, highly moral, etc. The question might be: should the robot treat us equally?
 
Hagrok said:

The big question would be "would this hypothetical machine be given the same legal rights as a person"?

--Dan

Absolutely not, because it can be manufactured on a scale that greatly exceeds human reproduction. Power-hungry humans (or robots!) would then create some kind of idiotic robot-production race to try to seize elections.

Worse, if the robot "mind" is just a program, what's to stop someone from process-cloning billions or more in an instant on futuristic mainframe computers? Billions of independent intelligences, all deserving of the vote, who didn't exist two minutes ago. And you can't kill them. Sorry, no.


There's no need to add (real) emotions to a robot. It doesn't need to become bored, or tired, or feel pain, or sadness. Science fiction stories of "enslaved" or "underclass" robots are just that: allegories of current human populations and history.


In one of the last Asimov stories (maybe it wasn't even written by him), the hero contemplates the "horribleness" of a robot that is asked to hold up a collapsing pipe or beam and is left standing there for years. At that point, the divergence of allegory from any realistic future becomes too great to ignore.

Could a robot built with emotions, including boredom, be asked to stand still for years? Sure. Would anyone build such a robot? I doubt it, at least not for that purpose.
 
