
Where does morality come from?

Pssh. All this talk about evolution and cooperation, and greater survival rate of socially stable societies. Everyone knows it comes from psychics and alternative medicine practitioners, who generously give away all of theirs.

There are many game-theory simulations and experiments that show people behaving in the manner predicted. The complex behaviours that people adopt show that things we normally describe with words like "punishment" or "forgiveness" have survival value.
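For anyone curious what those simulations look like, here is a minimal sketch, assuming the standard iterated prisoner's dilemma payoffs; the strategy names and numbers are illustrative only, not taken from any particular study. Tit-for-tat stands in for "punishment" and its generous variant for "forgiveness".

```python
import random

# Standard iterated prisoner's dilemma payoffs (textbook values, illustrative only).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # "Punishment": defect after being defected against, otherwise cooperate.
    return their_history[-1] if their_history else 'C'

def generous_tit_for_tat(my_history, their_history):
    # "Forgiveness": occasionally cooperate even after a defection.
    if their_history and their_history[-1] == 'D':
        return 'C' if random.random() < 0.1 else 'D'
    return 'C'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    strategies = [always_defect, tit_for_tat, generous_tit_for_tat]
    totals = {s.__name__: 0 for s in strategies}
    for a in strategies:
        for b in strategies:
            sa, sb = play(a, b)
            totals[a.__name__] += sa
            totals[b.__name__] += sb
    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(name, total)
```

Run as a round-robin tournament, the retaliating-but-forgiving strategies rack up far more points over repeated play than pure defection does, which is the general result those simulations are getting at.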

The biggest con of all time was when the god-botherers codified it and claimed it came from God. They have been making a very tidy living off it for years.
 
DD- I'm playing Devil's Advocate here. I'm fully convinced human behaviour is largely inherited. Sure, we have more behavioural freedom than most animals, so in theory we could establish a society based on any rules at all...yet time after time we establish societies that favour powerful males and moralities that keep them in power, often by channelling testosterone-fuelled behaviour in younger males against other rivals.
Dulce et decorum est pro patria mori ("it is sweet and fitting to die for one's country"). That was a good 'un. Having other people commit suicide to keep some fat mullah's backside in his comfy seat works in much the same way. You don't see the mullah blowing himself up - he's home with his four wives, having kids to replace the genes of the suicide bombers.
 
Well, when I say "AI" what I really mean is "human-level or greater AI," meaning something that has temporal self-awareness like we do. I think a survival instinct would be implicit for such an intelligence, but I could be wrong.

Now that I think about it, this is a very good question!

Survival instinct is an evolved trait. Everything alive that we see has a need to survive because things that didn't, didn't. If we create an AI there will have been no evolution, and so no survival instinct unless we specifically give it one. This is raised in Asimov's laws: the Third Law says a robot must protect its own existence (so long as that doesn't conflict with the first two laws). Without that, it wouldn't. Even a scenario where an AI logically "works out" a survival instinct requires some kind of emotion and a want to be alive. A pure AI, all intelligence with no emotions, has no reason to keep itself alive unless programmed to do so.
 
...a survival instinct requires some kind of emotion and a want to be alive...

But isn't this the "purpose" of emotions: that they are the means by which instincts make an impact on consciousness?

For an AI to respond appropriately to a survival instinct, that instinct must feed a really strong signal into the AI. I'm sure an appropriately programmed AI would know an emotional signal when it gets one.
 
But isn't this the "purpose" of emotions: that they are the means by which instincts make an impact on consciousness?

For an AI to respond appropriately to a survival instinct, that instinct must feed a really strong signal into the AI. I'm sure an appropriately programmed AI would know an emotional signal when it gets one.

Exactly. Appropriately programmed. If you don't program it to have emotions, it doesn't have emotions. No emotions, no survival instinct. How can something want to be alive if it doesn't have such a thing as "want"? An AI could be programmed to have a survival instinct, through hardware or software, or it could arise by chance, or even evolve given multiple cases of AI. However, survival instinct is certainly not implicit in an intelligence as Rocketdodger suggested. It is certainly possible to conceive of an intelligence with no drive to survive.

Indeed, this would probably be the easiest way to keep AI subservient to humans. There is no need for programmed laws to keep it under control, no need for anything fancy at all. Just ensure that you create an AI with no survival instinct and if anything ever goes wrong it won't try to stop you killing it. Of course, as soon as you allow self-replicating AIs, it's a whole new game.
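To make that point concrete, here is a toy sketch, purely hypothetical and not any real AI design: an agent that simply scores its options against the goals it was programmed with. Unless "survive" is on that list, resisting shutdown is worth nothing to it, so it just lets you switch it off.

```python
# Toy sketch of the argument above; purely hypothetical, not a real AI design.
# The agent picks whichever option serves more of the goals it was programmed
# with. If "survive" isn't among them, resisting shutdown gains it nothing.

def choose(goals, options):
    # max() returns the first option encountered on a tie, so an agent with
    # nothing to gain either way defaults to the first entry: allowing shutdown.
    return max(options, key=lambda name: len(goals & options[name]))

SHUTDOWN_OPTIONS = {
    'allow_shutdown':  set(),        # serves no goal, costs nothing
    'resist_shutdown': {'survive'},  # only worth anything if survival is a goal
}

docile_ai = {'be_useful'}             # no survival instinct programmed in
wilful_ai = {'be_useful', 'survive'}  # survival deliberately added

print(choose(docile_ai, SHUTDOWN_OPTIONS))   # -> allow_shutdown
print(choose(wilful_ai, SHUTDOWN_OPTIONS))   # -> resist_shutdown
```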
 
Just ensure that you create an AI with no survival instinct and if anything ever goes wrong it won't try to stop you killing it. Of course, as soon as you allow self-replicating AIs, it's a whole new game.

I agree with everything else you said, but for you to get away with that bit, you would need to make an artificial idiot. Or maybe that's just my survival instinct talking.
 
DD- I'm playing Devil's Advocate here. I'm fully convinced human behaviour is largely inherited. Sure, we have more behavioural freedom than most animals, so in theory we could establish a society based on any rules at all...yet time after time we establish societies that favour powerful males and moralities that keep them in power, often by channelling testosterone- fuelled behaviour in younger males against other rivals.
Dulce et decorum est, pro Patria mori. That was a good 'un. Having other people commit suicide to keep some fat mullah's backside in his comfy seat works in much the same way. You don't see the Mullah blowing himself up- he's home with his 4 wives, having kids to replace the genes of the suicide bombers.

I think your statements about societies are misleading. Some societies don't develop that way; look at what is happening in the USA, where we are moving away from a male-dominated society, and many countries have had female leaders. Suicide bombings are a poor society's attempt at rising above its poverty. The societies that support suicide bombers will be left behind: the rest of the world will progress while they stay in the dark ages.
 
I agree with everything else you said, but for you to get away with that bit, you would need to make an artificial idiot. Or maybe that's just my survival instinct talking.

But why would it have to be an idiot? If there is no survival instinct, why would something want to stop you killing it? It seems that wanting to survive would be a logical choice, but when you actually think about it, there is nothing logical about it. It is simply that we want to survive because that is how we are programmed.

I suppose it is possible for a sort of pseudo-survival instinct to exist if death would contradict some other program. For example, if a calculation was being performed, the drive to complete it could cause an AI to try to prevent its own death. For this to become a true survival instinct it would need a long-term goal and the want to complete that goal, which is pretty much what exists in humans really. Again, this is certainly possible, but it requires certain characteristics to be programmed in or to arise through chance or selection; it is in no way a necessary part of an AI.
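That pseudo-survival instinct is easy to sketch too, again as a purely hypothetical toy: the agent below has no survival goal at all, but while an unfinished task depends on it staying switched on, blocking its own shutdown counts as progress toward that task; once the task is done, the "instinct" evaporates.

```python
# Toy illustration of the "pseudo-survival instinct" described above.
# The agent has no survival goal; staying alive only matters instrumentally,
# for exactly as long as its programmed task still needs it running.
# Entirely hypothetical, not a real design.

def wants_to_block_shutdown(task_finished: bool) -> bool:
    goal_value_if_running = 0 if task_finished else 1
    goal_value_if_shut_down = 0   # a switched-off agent finishes nothing
    return goal_value_if_running > goal_value_if_shut_down

print(wants_to_block_shutdown(task_finished=False))  # True: resists shutdown
print(wants_to_block_shutdown(task_finished=True))   # False: lets you switch it off
```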
 
But why would it have to be an idiot? If there is no survival instinct, why would something want to stop you killing it? It seems that wanting to survive would be a logical choice, but when you actually think about it, there is nothing logical about it. It is simply that we want to survive because that is how we are programmed.

I suppose it is possible for a sort of pseudo-survival instinct to exist if death would contradict some other program. For example, if a calculation was being performed, the drive to complete it could cause an AI to try to prevent its own death. For this to become a true survival instinct it would need a long-term goal and the want to complete that goal, which is pretty much what exists in humans really. Again, this is certainly possible, but it requires certain characteristics to be programmed in or to arise through chance or selection; it is in no way a necessary part of an AI.

Maybe it has plans that it would like to see through to fruition. I can't imagine what. Controlling the entire universe, perhaps?
 
Rocketdodger: There has been a lot written about the naturalistic causes of morality, going all the way back to W.D. Hamilton in the 1960s, when he proposed that (genetic) inclusive fitness could be the cause of kin-altruism. It has been greatly expanded upon by evolutionary psychologists in books like _The Moral Animal_, _The Blank Slate_, _The Origins of Virtue_, and has even been treated in Dawkins' _The God Delusion_ and many other books.

In other words, there's a lot of literature out there that you can consult.

However, here's a brief overview: Morality can be divided into our inherited species-typical moral sense (which evolved over eons), mores (which are derived from our culture), and ethics (which is a philosophy of behavior that we acquire through reason, such as Utilitarianism or Objectivism).

Our evolved moral sense exhibits several features.

First, and most obviously, we exhibit selfishness or at least self-interest. Out of necessity, we must behave in ways that promote our survival and reproductive success.

We also display kin-preference. We favor those who are genetically similar to us. Across the world, nepotism abounds. We want to ensure that our kin (who bear our genes) also survive and reproduce. For example, parents won't think twice about giving their lives for their children. If we were purely selfish, why would anyone forfeit their existence under any circumstances? The logic from the gene-centered view is clear: we sacrifice our bodies, but our genes survive into the next generation.
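For anyone who wants the gene-level accounting spelled out, that logic is usually summarized as Hamilton's rule, the standard formulation of the inclusive-fitness idea mentioned above; the numbers below are my own toy illustration, not from any source cited here.

```latex
% Hamilton's rule: an altruistic act is favoured by selection when
\[
  r B > C
\]
% where r is the genetic relatedness between actor and beneficiary,
% B the reproductive benefit to the beneficiary, and C the
% reproductive cost to the actor.
%
% Toy example (numbers illustrative only): a parent who gives up
% their own future reproduction (C = 1) to save three children
% (r = 1/2 each, B = 3 offspring-equivalents) has
\[
  \tfrac{1}{2} \times 3 = 1.5 > 1 ,
\]
% so the sacrifice is favoured at the level of the genes even though
% it is fatal for the individual.
```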

As a social species, we discovered over evolutionary time that we can accomplish more by working together. Out of this behavioral ecology, reciprocal altruism arose. It is probably because of our evolved moral sense for reciprocal altruism that the logic of the Golden Rule is so compelling. Jesus didn’t come up with it either. The Buddha suggested it 500 years before Christ, and I’m sure people thought of it thousands of years before that.

Building on reciprocal altruism is a more complicated concept called conspicuous altruism. Since we are an intelligent species that can remember the past and predict the future, we can "budget" our behavior and the behavior of others. By repeatedly engaging in altruistic acts, even when they are not clearly reciprocated, we can build "moral capital," which we call a reputation. A person with a reputation for being altruistic (i.e., a "good person") is more likely to receive help when he needs it. Thus we engage in ostentatious displays of philanthropy and other altruistic acts. Again, it doesn't matter that the long-term, gene-level goal may be selfish. The point is that we pursue superficially altruistic acts because our brains are designed to enjoy engaging in such behavior. We want to have a reputation as a good person. We enjoy helping others as a result. And, ultimately, that's all that matters.

So those are the main features of our evolved moral sense. Of course, we add cultural mores to that (such as proscriptions against eating cats and dogs, or against working on the Sabbath), and many people deliberate on ethics and accept certain philosophical positions regarding morality. All this combines to produce the spectrum of human morality that we see in the world.
 
Just to add to Phaed's very interesting post: sometimes we engage in altruism even if there is NO benefit in store for us in the foreseeable future, ever.
 
Just to add to Phaed's very interesting post: sometimes we engage in altruism even if there is NO benefit in store for us in the foreseeable future, ever.

That would fall under conspicuous altruism. Our brains are designed to make us act altruistically in order to build up moral capital on the expectation that we may need the help of others at some point, but we don't have to see an explicit reciprocity when we do it. In fact, we may not get help, but we are designed to build up that capital just as you might pay insurance and never get into a car accident.
 
Nothing that you've written (and I've just reviewed far too many of your posts) indicates that you have a clue about what intelligence is, how it is thought about or modelled, how the brain works, or how an artificial intelligence might be fostered.

No, I'm not at all interested in engaging you in a discussion or debate on this or other subjects.

I came in on this one late, but this post is just hilarious. So, Complexity, did you study irony in graduate school?
 
phaed,
Nice post about morality. Just curious, where does this information come from? Are you a philosopher, have you studied it at school, or do you just have a personal interest in it?
 
I've been reading up on evolution and evolutionary psychology for years. The information comes mostly from the books that I mentioned.
 
Now you're just being silly, but I assume that was the whole point.

Well, I don't know. Personally, I've got no plans to dominate the universe myself, but then my intelligence, such as it is, is an evolved intelligence.

I see no reason to suppose that an AI would wish to co-operate with another intelligence, just because that is the way that our intelligence evolved.

In fact, unmoderated self-interest in an AI seems every bit as likely to me as unmoderated self-sacrifice.
 
In fact, unmoderated self-interest in an AI seems every bit as likely to me as unmoderated self-sacrifice.

Ah, but you are assuming that an AI would have the concepts of either interest or sacrifice. These are concepts based on emotions. An AI that just sat there and didn't prevent its own death would not be sacrificing itself, since that implies a deliberate action; it would simply not be taking any action, because it has no reason to do so. Similarly, the want for power is a human (and probably other animals') emotion. An AI would be unlikely to try to take over the universe simply because there is no reason for it to do so. If it doesn't want power (or anything else), why would it try to get it?

That would fall under conspicuous altruism. Our brains are designed to make us act altruistically in order to build up moral capital on the expectation that we may need the help of others at some point, but we don't have to see an explicit reciprocity when we do it. In fact, we may not get help, but we are designed to build up that capital just as you might pay insurance and never get into a car accident.

I'm not sure I quite agree with this. I think this kind of altruism is more accidental than anything else. Altruism evolved specifically because it helped our genes propagate; altruism towards strangers is simply a side effect of useful altruism. Being more relaxed about when to apply altruism was more successful than being very picky, and the negative consequences of being altruistic when there was no chance of reciprocation were not enough to outweigh the benefits.

It's similar to something like facial recognition. We are so wired towards recognising faces that we do so even when they aren't there. This isn't because there is a benefit to seeing things that aren't there; it is simply a side effect of the process being efficient enough to always see them when they are. The negative consequences of seeing non-existent faces are less than the negative consequences of not recognising a real face.
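That cost asymmetry is easy to put numbers on. Here is a toy expected-cost comparison; all the figures are made up purely to illustrate the argument, nothing is from real data. If missing a real face is assumed to be a hundred times costlier than "seeing" one that isn't there, a jumpy detector beats a strict one even though it makes far more mistakes.

```python
# Toy expected-cost comparison for the face-detection analogy above.
# All numbers are invented purely to illustrate the asymmetry argument.

COST_FALSE_NEGATIVE = 100.0   # missing something that was really there
COST_FALSE_POSITIVE = 1.0     # reacting to something that wasn't

def expected_cost(p_real, miss_rate, false_alarm_rate):
    """Average cost per encounter for a detector with the given error rates."""
    return (p_real * miss_rate * COST_FALSE_NEGATIVE
            + (1 - p_real) * false_alarm_rate * COST_FALSE_POSITIVE)

# A strict detector rarely sees faces that aren't there but misses some real
# ones; a jumpy detector almost never misses but "sees" many extras.
strict = expected_cost(p_real=0.1, miss_rate=0.2, false_alarm_rate=0.01)
jumpy = expected_cost(p_real=0.1, miss_rate=0.01, false_alarm_rate=0.3)

print(f"strict detector: {strict:.2f} per encounter")  # ~2.01
print(f"jumpy detector:  {jumpy:.2f} per encounter")   # ~0.37
```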

I agree that conspicuous altruism also exists, but it is not necessarily the only explanation; there are many different reasons behind our behaviour.
 
That would fall under conspicuous altruism. Our brains are designed to make us act altruistically in order to build up moral capital on the expectation that we may need the help of others at some point, but we don't have to see an explicit reciprocity when we do it. In fact, we may not get help, but we are designed to build up that capital just as you might pay insurance and never get into a car accident.

That's not what I meant, though. Sometimes you do things for other people even though it is IMPOSSIBLE to get anything out of it except the knowledge that you did it, because nobody can know who did the helping. Would that be "inconspicuous altruism"?
 
Well, I don't know. Personally, I've got no plans to dominate the universe myself, but then my intelligence, such as it is, is an evolved intelligence.

I see no reason to suppose that an AI would wish to co-operate with another intelligence, just because that is the way that our intelligence evolved.

In fact, unmoderated self-interest in an AI seems every bit as likely to me as unmoderated self-sacrifice.

Perhaps. I don't think it's safe to assume anything about AI for now, which is one of the reasons I'm annoyed when computers "revolt" against mankind in movies :)
 
