
Artificial intelligence: Can machines be programmed with morality?

At some point, machines will be programmed with morals.

  • Strongly agree

    Votes: 11 47.8%
  • Somewhat agree

    Votes: 2 8.7%
  • Neutral/Maybe

    Votes: 5 21.7%
  • Somewhat disagree

    Votes: 1 4.3%
  • Strongly disagree

    Votes: 4 17.4%

  • Total voters
    23
So military robots will be the same as nuclear bombs? Lots of countries have those.

That's actually not a bad analogy, considering that some nuclear missiles are robots.

Point being, the military is using robots today. They're not very advanced ones, and they're certainly not AI... but they are robots.

I have to ask... what's the point of asking, anyway? Under what circumstances - other than experimentation and creation of advanced androids - would we even want machines to be moral? And just how much of a 'moral' code does a robot need, under any circumstances?

My point is that it will be possible, not that it will be desirable. The difficulties I see lie not with the programming part, but with a) defining 'morality', and b) making a sufficiently advanced machine whose pattern-recognition abilities are up to the task of discerning all the necessary inputs for moral decision-making. It is the first, more than the second, that might delay development of morality-machinery, but undoubtedly, depending on what morals you choose to define for it, programming said morals into said machine wouldn't be any harder than teaching morals to humans is.

...

:jaw-dropp

Um... duh. Hey, Jay? Just realized the answer to your OP question: YES! Machines have been programmed with morals for a very long time. Unfortunately, the programming rarely holds... and the programming has been haphazard at best... and the damned machines keep blabbering about their so-called 'free will' and 'human rights' and such... :o
 
So I guess the "designer" that did the programming wasn't very "intelligent", huh? ;)
 
We don't even know that it's theoretically possible to put Asimov's Laws into practice, much less practically possible. Hell, we don't even know if humans can be taught principles they can be guaranteed not to violate.

There are some very serious and real questions here that can't just be handwaved away by saying "in the future, we'll surely learn how".

Yes! For the first time :), I must completely agree with and back up Melendwyr on these points!

I have been a computer developer for twenty years and have actually delved into AI, Neural Networks, and Robotics. Back in da day (just after WWII and Turing), AI people were touting 'human-like intelligence' in computers within ten years. That, my friends, was sixty years ago. None seen yet. What happened? Here's what happened:

They considered intelligence, knowledge, and even emotions to some degree to be based upon simple logical principles - i.e.: build a robust enough logical grammar and eventually you'd have a brain. I think that Hofstadter pretty much demolished this in his books.

A later approach (which, unbelievably, might still be ongoing) involved programming all sorts of trivial knowledge into a large database and having a computer program do correlative analyses and then, one day, voila!, it would be intelligent. The most absurd thing I've ever heard - and this was a funded project.
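For anyone who hasn't seen what those first two approaches actually look like, here's a minimal sketch - the facts, rules, and names are invented for the illustration, and the real systems simply had vastly more of them:

```python
# A toy version of the symbolic, facts-plus-rules approach described above.
# "Intelligence" here is nothing more than applying hand-written rules to
# hand-written facts until nothing new can be derived.
facts = {("penguin", "opus"), ("bird", "tweety")}

# Each rule: if a fact of the given kind exists, derive a new fact from it.
rules = [
    ("penguin", lambda x: ("bird", x)),       # every penguin is a bird
    ("bird",    lambda x: ("has_wings", x)),  # every bird has wings
]

changed = True
while changed:
    changed = False
    for kind, derive in rules:
        for fact in list(facts):
            if fact[0] == kind and derive(fact[1]) not in facts:
                facts.add(derive(fact[1]))
                changed = True

print(sorted(facts))  # it "knows" opus has wings, but understands nothing
```

Scaling that loop up to millions of entries doesn't change its character, which is essentially the objection being made here.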

Newer and more sober approaches realize the irrevocable entanglement between sensory data, system response, learning, feedback, and intelligence. COG comes to mind. Still no Spockbots out there yet. This is a better approach and is leading in the right direction, but the best 'intelligence' created in computers so far is gauged at the level of a cockroach!

Recent neural discoveries have turned the simplistic idea (if it can be called that) of the neuron into a very complex one, by a factor of millions. At one time, people thought that a complex enough hardware neural network with the correct configuration, firmware, and training might eventually lead to human-like intelligence. This discovery basically dashed that to the ground. It would require billions of silicon neurons with millions of data points, all interconnected, with sensory input, and whatever else is missing in the elusive alchemy of creating AIs. This isn't just down the road, it's in the future, say, a thousand years or more from now.
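To put rough numbers on that claim (these are ballpark, textbook-style figures, not measurements from any particular study):

```python
# Back-of-the-envelope scale of the problem: on the order of 86 billion
# neurons, each with very roughly thousands of synaptic connections.
neurons = 86e9
synapses_per_neuron = 1e4   # order of magnitude only; varies hugely by neuron type

total_connections = neurons * synapses_per_neuron
print(f"~{total_connections:.0e} connections to model")   # ~1e+15
```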

After that long dissertation, I'll say outright that morality is not going to be 'programmed' into a computer. It will have to arise as a process in a complex cybernetic organism that has reached a stage where the idea of morality is relevant (if it ever was). Hmmm, just the same way that morality arose in humans as a process to cope with socialization and other pressures/advances.
 
zaayrdragon said:
The difficulties I see lie not with the programming part, but with a) defining 'morality', and b) making a sufficiently advanced machine whose pattern-recognition abilities are up to the task of discerning all the necessary inputs for moral decision-making. It is the first, more than the second, that might delay development of morality-machinery

I agree with the rest of what you say, but not this last quote. Morality is a cultural definition, not an absolute one, so it can be fairly easily codified by the society which chooses to do so. The Vatican has done it. The law books of our countries have done it. We may not like them, but there they are.

In a very real sense, all our current robots have a very strict 'morality' which they all adhere to without the least deviation. The cash register does not open unless it receives the command to do so. The nuclear bomb does not explode until it is given the go codes. Is not an 'If...Then' statement a moral choice? If not, then aren't we accepting that there is an absolute moral code independent of our culture?
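To make that concrete, here is roughly the entire 'moral code' of the cash register example - a minimal sketch with invented command names, not any real register's firmware:

```python
# The register's whole "morality": the drawer opens if and only if an
# authorized command arrives; it cannot deliberate, it can only branch.
AUTHORIZED_COMMANDS = {"SALE-COMPLETE", "MANAGER-OVERRIDE"}   # made-up names

def open_drawer(command: str) -> bool:
    """Return True (open) only for an authorized command; refuse otherwise."""
    if command in AUTHORIZED_COMMANDS:
        return True    # the one "permitted" action
    return False       # everything else is "immoral" by construction

print(open_drawer("SALE-COMPLETE"))   # True
print(open_drawer("PLEASE-OPEN"))     # False
```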

Because of this, I would say that B) (making a sufficiently advanced machine whose pattern-recognition abilities are up to the task of discerning all the necessary inputs for moral decision-making) is the far more difficult task, as kuroyume0161 claims.

As to whether we should impose human societal values on sufficiently advanced machines, well...sounds like a reasonable idea to me if we don't want to be crushed like bugs by our robot overlords...
 
cyborg

So the question I would ask is whether we can impart a machine with actual values, or will we always be limited to simply programming it in such a way that it has no choice but to do what it is told?
Who says we have any choice but to do what we're told by our internal DNA programming?
I think you misunderstood what I was saying. I am not arguing that human beings have some sort of magical free-will in which our choices are not governed according to some mechanistic organic process (or program, if you prefer).

What I am talking about here is more a matter of degree. The difference between just being able to follow commands, and the ability to figure out for yourself what you should do. Of course that process of figuring out what you should do, itself, functions according to some set of rules. That's not the point.

By the way, it is my opinion that artificial intelligence will not be programmed as such, but rather grown. I think that the first AI will be a set of neural networks which we understand only slightly better than our own brains. We will have made them, but we will only barely understand how they work.

This tends to be the way things go with incredibly complex systems. You build the system with a very general idea of what it will do, but most of the details are things you could never even predict from just looking at the design, much less deliberately design for.
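Here's the 'grown, not written' point in miniature - a toy perceptron, not a claim about how a real AI would be built. Only the goal (reproduce a logical OR) and a crude learning rule are written down; the weights that end up doing the work emerge from training rather than being chosen by anyone:

```python
import random

# Training data: the OR function we want the network to "grow" into.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]   # start with random weights
b = random.uniform(-1, 1)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(100):                  # perceptron update rule, repeated
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print("learned weights:", w, b)       # nobody picked these numbers directly
print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```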

I think Asimov already realized this when he wrote his books. Hence the robot psychologists.


Dr. Stupid
 

I agree - in fact, the first machines to be programmed with morality might not even be built by humans - nor programmed by humans...

By the way, it seems that visual comprehension is progressing nicely, as a company is now offering security and household service robots that can recognize faces (only up to 10, and there's no actual sign that it can recognize a generic face from, say, a photograph, but it's progress)...

Anyone else looking forward to the $2 Million Race? I'm pulling for the VW, myself (did it qualify?).
 
Has anyone studying artificial intelligence studied how the human brain becomes a brain?

I would think the first place they would look is brain development, starting from the fetus, to the baby, to the adolescent, to the adult. A brain grows in stages, from the simple to the complex.

If you know everything about the changes and stages of brain development it would tell you how to develop something approximating it.

One of the big problems with AI would probably be that the circuitry of the brain is unlike the circuitry of machines. One is organic, the other isn't.
 

Actually, yes such studies of pre- and postnatal brain development have been employed in AI research. Not as much for getting directly to intelligence or consciousness, but as a means to see how the developmental stages progress, what their progression signifies, and how it impacts learning and consciousness levels.

That said, there is also the next level up. Not just the development of an organism's brain, but the development of brains in general - i.e.: the evolution of rudimentary nerve bundles into a brain cortex into what higher-consciousness animals have: a cortex with cerebrum and other specialized nerve centers.

It should be no surprise that the brain of a carnivorous dinosaur was nothing like that of a human being. Nor is the brain of a carnivorous bird. Why? Wouldn't nature just find the best brain and employ it immediately in as many species as possible? Well, no. Brains develop function as part of the organism's survival strategy. A vulture, being a scavenger, needs very acute olfactory senses to find 'food'; accordingly, the olfactory center in its brain occupies a large portion of its nerve center and is highly sensitive and developed. In other words, although brains in general have many things in common, like the organisms themselves, they are also very specialized and vary by evolutionary means.

I guess what I mean to say is that the notion of 'evolving' a brain, say from a cockroach to a human, wouldn't be practical, since the evolutionary path from our most distant ancestors with specialized nerve cells was extremely particular - a one-time event. Doing something similar 'in a lab' using AI would more than likely produce an intelligence completely alien and unlike human intelligence, unless it was carefully directed. This raises the question of whether all sentient, intelligent beings might share all-around similarities (social relationships, emotions, analytical processes, instincts, voluntary and involuntary actions, and so on). Without alternative examples besides humans, this is nearly impossible to answer. Sentience, intelligence, and consciousness may occur not only in degrees, but in currently unimaginable formats. This runs parallel with the problems of exobiology. Without other isolated instances of biology besides that of Earth, we are left wondering what is and isn't possible in the realm of lifeforms.
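As a toy illustration of how much that 'careful direction' matters: in the sketch below (an ordinary genetic algorithm over bit strings, with a made-up target), everything that evolves is shaped by the fitness function we happen to choose. Swap the fitness function and something entirely different emerges - the alien-intelligence point in miniature:

```python
import random

# Directed evolution in miniature: fitness is just "match this made-up target".
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

random.seed(1)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # keep the fittest half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))   # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                # occasional mutation
            i = random.randrange(len(TARGET))
            child[i] = 1 - child[i]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(best, "fitness:", fitness(best), "of", len(TARGET))
```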
 
No, it isn't - it's what benefits society.

What do you think the point of society is? If there wasn't a survival benefit associated with society, what would have been the need for early man to form groups larger than individual family tribes?

Society exists because it makes life easier, safer, and more stable.

You're thinking in two dimensions. Think more hierarchically:

(Roughly drawn, sorry)

[Attached image: moralitychart.jpg]


There are various "levels" of morality. But ALL of them boil down to survival of "something". Therefore, as I see things, morality is the product of "living" things striving to preserve themselves; be it the life of a man, of a family, of an ideal, or life itself. Morality in action is guided by this purpose.

Enough distracted ranting on a Saturday afternoon. I apologize for my ineloquence. I'm doing 5 things right now aside from typing this argument.
 

Nicely and succinctly put, Phrost. And with pictures for the less 'read'. ;)

Yeah, it's fun trying to get people to understand that while you're doing chores, writing checks, doing work, etc., etc., it really isn't possible to hold a thesis-level dissertation. Great for those who can research from their university office and get paid to do nothing but dissertate. But some of us actually have houses and land that need tending and work 10-16 hours/day, 7 days/wk, 12 months/yr (what are these things called 'holidays'?). When I read, it's for the work that pays my bills, not for summertime pleasure. Boring, but not all of us were 'born into it', ya know. 'Nough said...

By the way, I loved that scene from P&T's BS. ;)
 
