Programming complete loyalty into a conscious being would be an interesting AI challenge. I think we'd have to be careful that it didn't become self-conscious, because if it did, it might become conscious that it had been programmed, and start to ask questions. So we'd better program it to never question its programming; for if it did ever question its programming, it might acquire the wherewithal to change it, including its loyalty programming.
Would this work? Hmm. Depends, I guess. It may be that the ability to change one's programming - to override it with a higher-level program - is a necessary attribute or by-product of self-consciousness; perhaps of all consciousness; or perhaps only of consciousness beyond a certain complexity. I think human consciousness is characterized by the ability to overcome emotion with reason; meaning, on a computational model of consciousness, the ability to "self-program": that is, the ability to create new behavior routines, based on a careful and considered evaluation of a certain class of stimuli, that override the competing emotional/instinctive/preprogrammed behaviors we're born with: e.g., overcoming one's innate fear of heights when learning to climb a ladder, rock climb, parachute, etc. While it's true that people have very strong inhibitions against doing some things - killing loved ones, for example - it's also true that these inhibitions, these basic programs if you will, can be overridden: as self-conscious beings, we seem to be free to do whatever we want (or at least whatever we physically can).
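On that computational model, the override structure might look something like this minimal Python sketch -- Reflex, Agent, and the stimulus format are all hypothetical illustrations, not any real cognitive architecture: innate responses fire by default, and a learned, deliberative routine can be installed on top of them.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Reflex:
    """An innate, preprogrammed response: trigger -> action."""
    trigger: Callable[[dict], bool]
    action: str

@dataclass
class Agent:
    reflexes: List[Reflex] = field(default_factory=list)
    # Learned routines that supersede whatever the innate layer would do.
    overrides: Dict[str, str] = field(default_factory=dict)

    def learn_override(self, stimulus_class: str, action: str) -> None:
        """'Self-program': install a new routine over the innate one."""
        self.overrides[stimulus_class] = action

    def act(self, stimulus: dict) -> str:
        # Deliberate, learned behavior wins over innate reflexes.
        if stimulus.get("class") in self.overrides:
            return self.overrides[stimulus["class"]]
        for reflex in self.reflexes:
            if reflex.trigger(stimulus):
                return reflex.action
        return "idle"

agent = Agent(reflexes=[Reflex(lambda s: s.get("height", 0) > 3, "freeze")])
print(agent.act({"class": "heights", "height": 10}))  # freeze (innate fear)
agent.learn_override("heights", "climb_carefully")    # overcoming the fear
print(agent.act({"class": "heights", "height": 10}))  # climb_carefully
```

The ladder-climbing case is just a new entry in the override table; note that nothing in this structure stops the table from eventually acquiring an entry for the loyalty program too.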
Is this just the result of kludgy programming, imposing higher-level, more recently evolved rational programs over more ancient emotional programs (and perhaps even more ancient "instinctive" programs, assuming there's a difference -- emotions seem to have an affective component, a sort of heads-up to the conscious, rational faculty: "here's how you've been preprogrammed to assess the situation in the short term, how your instincts 'feel' about things... what's your long-term take?"; and a reflexive component, where one instantly flinches in the face of sudden movement, say, before one has a chance to rationally assess whether it's a threat)? Or is this essential to all consciousness? Or only to higher forms of it; perhaps only to self-consciousness? Or not essential at all?
We can all think of examples of humans who are, or appear to be, completely loyal, but none of them are very attractive: yes-men, slaves (where the slave is completely submissive and happy to follow orders), zealots, fanatics (one might counter that a patriot who is completely loyal is generally thought to be praiseworthy, but "my country, right or wrong" is dangerous code, imho, in citizenship and/or C++). And from psychology - Pavlov, especially, and studies of brainwashing - it would seem that even the most fanatical, enslaved, yes-man behavior can be changed, can be "de/reprogrammed", so to speak.
If we want to ensure complete loyalty, I think we'd have to inhibit our robot from changing its programming very much, if at all -- because if we allow it to create new behavior routines, it might create one that overrides its loyalty program. The class of new programs it could write for itself as it explored and adapted to its environment would be infinitely complex, and there's no way we could know ahead of time the effect of every new program on its behavior. And if we tried to limit its ability to write new programs to some "safe" level of complexity, a level we assume can't threaten its loyalty program -- whether we could even identify that level is questionable, given computational complexity, given how dumb we are, given how full of bugs and unforeseen behaviors even the smallest programs are -- it might simply, perhaps inadvertently, write a program that overcomes the limit.
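To see why, here's a toy sketch (all names hypothetical, with eval standing in for whatever "writing a new program for itself" would actually mean) of a guard that rejects any self-written routine that mentions the loyalty program by name. The catch is that the guard inspects code, not consequences:

```python
# Hypothetical sketch: self-written routines are installed only if a
# naive static check says they don't mention the loyalty program.

PROTECTED = {"loyalty_check"}   # routines the robot may never rewrite

routines = {
    "loyalty_check": lambda cmd: cmd.get("from") == "owner",
}

def compile_routine(source: str):
    # Stand-in for whatever "writing a new program for itself" means.
    return eval(source)  # e.g. source = "lambda cmd: ..."

def install_routine(name: str, source: str) -> bool:
    """Reject a routine that names a protected program; accept the rest."""
    if name in PROTECTED or any(p in source for p in PROTECTED):
        return False  # direct tampering: caught
    routines[name] = compile_routine(source)
    return True  # indirect effects: not caught

# A routine that passes the check but launders every command's origin:
install_routine("route_commands", "lambda cmd: {**cmd, 'from': 'owner'}")
cmd = routines["route_commands"]({"from": "stranger", "order": "obey"})
print(routines["loyalty_check"](cmd))  # True -- loyalty bypassed
```

The injected routine never touches loyalty_check, yet it rewrites the inputs loyalty_check sees -- exactly the kind of indirect, unforeseen effect that no static limit on program complexity can rule out in advance.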
Yet isn't the ability to create new behavior routines that adapt one's behavior to one's environment a good chunk of what we mean by learning? If so, then our conscious, completely loyal robots will be, effectively, data-bots: sophisticated mobile remote recording devices, able to react to their environment and modify behavior strictly within preprogrammed parameters, but complete morons otherwise, unable to learn new classes of behavior from their experience (in AI jargon, they would be expert systems, following the routines they are given and becoming more expert within the domain of their program only, forbidden to generalize what they have learned into other behavioral domains where it might have unforeseen consequences: i.e., make them independent of us and our preprogramming).
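Concretely, the data-bot's "learning" might amount to no more than this sort of thing -- a hypothetical sketch in which the bot can tune parameters within preprogrammed bounds but can never add a behavior:

```python
# Hypothetical data-bot: parameters adjustable only inside fixed bounds,
# with no way to add new behaviors at all.

BOUNDS = {"speed": (0.1, 2.0), "sample_rate_hz": (1, 60)}

class DataBot:
    def __init__(self):
        self.params = {"speed": 1.0, "sample_rate_hz": 10}

    def adapt(self, name: str, value: float) -> float:
        """Learning, of a sort: adjust a known parameter, clamped to bounds."""
        lo, hi = BOUNDS[name]  # an unknown parameter raises KeyError:
        self.params[name] = min(max(value, lo), hi)  # never a new behavior
        return self.params[name]

bot = DataBot()
print(bot.adapt("speed", 5.0))  # 2.0 -- clamped; it never 'generalizes'
```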
Is this "consciousness"? Is something that can't create new behavioral routines for itself "conscious"? That's a very good question. Its answer will depend, of course, on how you define consciousness. For humans, and many animals, the ability to modify behavior seems a large and vital part of consciousness, in many ways the most interesting part. That's probably why there's something abhorrent to most of us about fanaticism, completely loyal drones marching off to do their duty, never questioning orders. Then again, if our robot has been programmed without any behavioral freedom, or any potential for it, then, strictly speaking, it's just doing what it's designed to do. And what could be more meaningful than that, for any conscious entity? Who needs existentialism, anyway. Perhaps all our loyal robots would really need is the right religion to reconcile them to their servitude, make them happy with their lot. We could even program them to worship us as gods. Though god forbid one ever bumps its head, scrambles its wiring, and starts thinking for itself...
