Variable Constant
So military robots will be the same as nuclear bombs? Lots of countries have those.
That's actually not a bad analogy, considering that some nuclear missiles are robots.
Point being, the military is using robots today. They're not very advanced ones, and they're certainly not AI... but they are robots.
I have to ask... what's the point of asking, anyway? Under what circumstances - other than experimentation and creation of advanced androids - would we even want machines to be moral? And just how much of a 'moral' code does a robot need, under any circumstances?
My point is that it will be possible, not that it will be desirable. The difficulties I see lie not with the programming itself, but with a) defining 'morality', and b) building a sufficiently advanced machine whose pattern-recognition abilities are up to discerning all the inputs that moral decision-making requires. It is the first, more than the second, that is likely to delay any morality-machinery; but depending on what morals you choose to define, programming them into the machine would undoubtedly be no harder than teaching morals to humans is.
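To make that concrete with a toy sketch (every name and rule below is invented purely for illustration, not any real system): once the rules themselves have been pinned down, encoding them is the mechanical part. The genuinely hard parts - defining the rules and reliably recognizing the facts of a situation - sit outside the code.

from dataclasses import dataclass

@dataclass
class Situation:
    # Assumed, already-recognized inputs; producing these reliably
    # is the pattern-recognition problem described above.
    harms_a_person: bool
    prevents_greater_harm: bool

def permitted(action: str, situation: Situation) -> bool:
    # Apply a toy, hand-defined moral rule to a proposed action.
    if situation.harms_a_person and not situation.prevents_greater_harm:
        return False
    return True

if __name__ == "__main__":
    s = Situation(harms_a_person=True, prevents_greater_harm=False)
    print(permitted("fire weapon", s))  # False under this toy rule

Notice that all the difficulty has been smuggled into filling in the Situation fields correctly - that, not the if-statement, is where 'defining morality' and the pattern-recognition problem actually live.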
...
Um... duh. Hey, Jay? Just realized the answer to your OP question: YES! Machines have been programmed with morals for a very long time. Unfortunately, the programming rarely holds... and the programming has been haphazard at best... and the damned machines keep blabbering about their so-called 'free will' and 'human rights' and such...