What should Morals and Ethics be?

It's possible to deny the first statement. Perhaps my suffering is simply something that I dislike but there's nothing objectively "bad" about it. I can see that as a valid rational viewpoint, but I don't actually believe that anyone believes it about themselves.

Never read Kierkegaard or Schopenhauer, I presume?
 
Oh but we can assume for the sake of argument that he has been made aware that some people believe he owes it to them not to block them out, he simply chooses not to adopt said belief as his own. In other words, the difference is that he does not have the desire to act in accordance with that belief whereas you do.


Sounds like antisocial behaviour.

In general, society tolerates it in small amounts and has laws to prevent it and to discourage or punish going too far.
 
Some may think it should be different, but actual human morality tends to put a greater weight on avoiding the suffering of ourselves and those we view as being like us at the expense of others, and this is sensible to have developed within us as a long term survival strategy.

Suffering is not inherently a bad thing. It is a tool developed in us by evolution and in many cases can motivate us and help us to learn.

If the elimination of suffering is a goal unto itself then that can be achieved by the use of drugs, brain surgery, or by simply killing everyone painlessly. The logical consequences of that viewpoint have undesirable results.

The thought that someone might eat meat, have an abortion, have sex with the wrong sort of partner, or move into their neighborhood makes some people truly suffer. How can you objectively weigh their mental anguish against those who hold opposing viewpoints?
 
I call it faith because there's no evidence that maximizing collective well being is the most moral choice.
I thought we just agreed that morality is a social construct. If that is true, morality can be whatever a society agrees it to be. Is it not? And again, it certainly is not faith.
 
I thought we just agreed that morality is a social construct. If that is true, morality can be whatever a society agrees it to be. Is it not? And again, it certainly is not faith.
Ah. You were mistaken. We didn't agree that morality is a social construct. Most people seem to adopt a hybrid morality, combining social norms with personal beliefs.

I bet if you think about it, you'll find that you believe certain things are right (or wrong) even though they go against the norms of your society.
 
Suffering has come up a lot so far.
Things such as the suffering people go through voluntarily to achieve some greater goal. There's suffering, and then there's suffering. That sort is not real suffering.
Can't actually tell you what real suffering is, but I might have an idea what it's not.
In my opinion, if you were living in a free and fair egalitarian society with all your basic needs met; if you were in good health and had access to adequate variety and choice in terms of physical and mental stimulation and needs, you would be incapable of real suffering.
 
I was of the opinion that science can't answer moral questions, but I might be changing my mind. I think it might be possible to devise a universal moral framework using science, even if it's worthless in answering any detailed specifics.

See, I think you have this exactly backwards.

Ethics is rules for applying morality. Morality is based on values. And values are, pretty much by definition, subjective.

There's no way to set a "universal values" system, because values are subjective. You can't objectively determine the "best" value without already assuming a value judgement about what's best.

That being said, once you get down to the basics of what should be valued, then science can definitely help with the specifics. Once you have your values (axioms, e.g. "minimize human suffering"), then science can help you determine and weigh new values (minimize it at the expense of everything else? Do we value animal life? And how much? How much suffering is acceptable if it results in less in the long term?) and apply those values to situations (well, this law/practice/custom/solution sounds good, but in the past it's had these consequences, so we should try a different option).

As has already been discussed, a good scientific case can be made that feeling good/bad, pleasant/unpleasant is a universal state of evolved neural networks, with the function of encouraging behaviour leading to reproductive success and avoiding harmful situations.
Since the universal goal of life is to reproduce, for animals with complex enough brains to experience it, maximizing happiness/minimizing suffering would be a universal goal.
So that's where we start.


Exactly, that is why we have it. It's had the evolutionary function of bolstering cooperation in closer knit groups leading to reproductive success.
It also has a dark side. In a system with limited resources reproductive success depends on out-competing other close knit groups.
Within a social group the cooperative, empathetic side of behaviour has always been seen as virtuous and the combative, selfish side as evil. The reverse is true for interactions between competing groups. Why this is so should be self-evident.
Game theory, and the fact that it evolved in the first place, shows that cooperation is the better strategy.
Since we are all stuck on this planet together our social group has basically expanded to include the whole globe.
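The game-theory claim above can be sketched concretely. Below is a minimal iterated prisoner's dilemma in Python; the payoff values and the two strategies (tit-for-tat and always-defect) are standard textbook assumptions, not anything stated in the thread, but they illustrate why mutual cooperation outscores mutual defection over repeated interactions.

```python
# Minimal iterated prisoner's dilemma sketch.
# Payoffs are the standard values: mutual cooperation (3, 3),
# mutual defection (1, 1), exploitation (5, 0).

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect every round regardless of history."""
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Return total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Two cooperators earn 300 each; two defectors only 100 each.
cc = play(tit_for_tat, tit_for_tat)       # (300, 300)
dd = play(always_defect, always_defect)   # (100, 100)
```

Note the nuance: a defector can still beat a cooperator head-to-head in a single pairing, but a population of cooperators accumulates far more total payoff, which is the sense in which cooperation is "the better strategy."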

I propose the overall function of a moral code stay the same: the continued prosperity of the group.
Since our concept of morality has recently expanded to include such outlandish groups as other races, women, children and even animals, I propose to expand it to include all life.
I think all living things have value.
Just as your prosperity depends on the prosperity of your society, the prosperity of society depends on the prosperity of the system it inhabits.
For the foreseeable future our prosperity depends on the prosperity of the planetary ecosystem.

It boils down to:
Maximizing happiness/minimizing suffering by striving to sustain a balanced, stable ecosystem where all life has value.
More complex life with more complex brains having more individual personal value and less complex life having more aggregate/ecological value.
Something like that, I'm not sure. Does that make sense?
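One way to make the weighting idea above concrete is a toy scoring rule: each organism contributes an individual term that rises with neural complexity, plus an aggregate/ecological term that rises (slowly) with population size. All the numbers and the scoring formula here are invented purely for illustration; nothing in the proposal fixes them.

```python
# Toy sketch of "complex life has individual value, simple life has
# aggregate value." The formula is an illustrative assumption only.

import math

def moral_weight(complexity, population):
    """Total weight = per-individual term + aggregate/ecological term.

    complexity: arbitrary score for neural/behavioural complexity.
    population: number of organisms being counted.
    """
    individual = complexity * population      # scales with each individual
    ecological = math.log1p(population)       # grows with numbers, but slowly
    return individual + ecological

# A few complex organisms outweigh the same number of simple ones
# individually, but a vast population of simple life still registers
# through the aggregate term.
```

The design choice worth noting is the logarithmic aggregate term: it ensures simple life is never worth zero, while preventing sheer numbers from swamping the individual value of complex organisms.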

And everything above is a subjective value judgement, based on the goal of minimizing suffering and maximizing reproductive success of the species. You set those values, then everything else will follow from that. But the values (life is important, suffering is always bad) are subjective, as they are with any morality.

In fact, there have been moral codes throughout history where suffering was a good thing. Things like various cultures where the suffering and/or destruction of your enemies was a positive good, or some of the Christian sects that believe in self-mortification. Even in modern society we accept some suffering for what's considered the greater good (criminal laws with punishments for offenders), and most religions incorporate at least that much (sinners go to Hell, etc). Personally, I can't see any way that any concept of Hell fits in with minimizing suffering, so that value is obviously NOT universal.

Now, all that said, I agree that minimizing suffering and maximizing happiness is probably a good basis to start from, but that's not even a framework to build a moral and ethical system on: it's at best the plan for a foundation :D
 
Suffering has come up a lot so far.
Things such as the suffering people go through voluntarily to achieve some greater goal. There's suffering, and then there's suffering. That sort is not real suffering.
Can't actually tell you what real suffering is, but I might have an idea what it's not.
In my opinion, if you were living in a free and fair egalitarian society with all your basic needs met; if you were in good health and had access to adequate variety and choice in terms of physical and mental stimulation and needs, you would be incapable of real suffering.

I disagree. Suffering is always possible so long as one can desire anything and not get it. And if by any miracle one always gets what one desires that would cause suffering still, as boredom. Suffering can only be eliminated by controlling all of one's desires, and that can only be done by individual personal growth and not by outside forces.
 
Ah. You were mistaken. We didn't agree that morality is a social construct. Most people seem to adopt a hybrid morality, combining social norms with personal beliefs.

I bet if you think about it, you'll find that you believe certain things are right (or wrong) even though they go against the norms of your society.

If morality isn't a social construct, what the hell is it? Just because each individual creates their own version of morality does not mean it's not a social construct. We don't make those choices in a vacuum. Each of us are influenced by the people around us. Are we not?
 
See, I think you have this exactly backwards.

Ethics is rules for applying morality. Morality is based on values. And values are, pretty much by definition, subjective.

There's no way to set a "universal values" system, because they are subjective.
I'm trying to be objective, I'm looking for a "universal value" shared by all concerned.

Since the whole concept of ethics and morality is only applicable to animals that can experience well-being or suffering, and since well-being is a universal goal of such organisms, "minimizing suffering and maximizing happiness" seems like such a value.
You can't objectively determine the "best" value without already assuming a value judgement about what's best.
And this boils down to "life is good".
The instinct of "wanting to live" and self preservation is also a universal property of thinking life, for obvious reasons.
 
It's included, but the total prosperity might still be higher with slavery than without, under a certain set of circumstances. Under that scenario the slaves would be thrown under the bus for the greater good... the greater good. Or, if you don't consider slaves people, they're actually not included at all.
Sure, but I think that situation isn't particularly likely. Now, do we think that there's something wrong with a moral system that would consider slavery good in a world where it did increase total human prosperity? Probably, though that's a big discussion.



Exactly. So in the inverse case, it isn't.

Yep. But again given the negative prosperity associated with being a slave, I think you'll find it difficult to make up for it with the increase that accrues to the rest of society.
 
Is it better to be happy and a slave, or unhappy and free? I agree that happy slaves are/were rare in the most modern iterations of slavery, but that wasn't always so in the ancient world.
 
Sure, but I think that situation isn't particularly likely. Now, do we think that there's something wrong with a moral system that would consider slavery good in a world where it did increase total human prosperity? Probably, though that's a big discussion.


We also have very strong instincts about fair play, some people more than others. Even with no stake in the outcome, I think most objective observers would agree: "It's just not fair!"
 
Is it better to be happy and a slave, or unhappy and free? I agree that happy slaves are/were rare in the most modern iterations of slavery, but that wasn't always so in the ancient world.

Well, I think life in the ancient world was pretty difficult for almost everyone, to be honest.
 
Yep. But again given the negative prosperity associated with being a slave, I think you'll find it difficult to make up for it with the increase that accrues to the rest of society.

First of all, I don't see why it follows, and second, you don't necessarily calculate prosperity by adding each individual's.
 
Oh but we can assume for the sake of argument that he has been made aware that some people believe he owes it to them not to block them out, he simply chooses not to adopt said belief as his own. In other words, the difference is that he does not have the desire to act in accordance with that belief whereas you do.
I did make the assumption, explicitly. And it follows from that assumption that he does have the desire to act, if you think a moral judgment implies a desire (that is, if I say that this outcome is better than that outcome, I am expressing a preference for the former). If you don't think this follows, then you can't say that I have a desire to act.

The problem here isn't really a moral problem, but that you're invested in a dubious theory of action where desires always and exclusively produce intentional action. That just isn't the case, but it's way off topic.
 
Not all feelings are moral. Compassion or empathy are because they sustain the basic moral norm: attention to one's neighbour.
This assertion needs support. I don't agree that "attention to one's neighbour" is the basic moral norm. It strikes me as tribalistic, which is morally arbitrary.

"The basis of morality" means that without empathy there is no sense of moral responsibility: a necessary condition for morality to exist.
This idea also needs support.

But to say that empathy is the basis of morality is not merely to say that empathy is necessary for moral action. It is to say that moral principles emerge from empathy.

See Damasio's studies: a patient with a damaged prefrontal lobe understands what a moral norm means but feels no impulse to comply with it.

Hume: you can't deduce a rule from reasoning alone. One cannot logically go from an "is" to an "ought".
You're misunderstanding Hume here. We can't deduce an ought from an is. But isn't this precisely what you're trying to do above? Deducing an ought (all of morality) from an is (the neurology of empathy)?

There's something inhuman about someone who's guided only by a logical moral system. It would be like HAL, the computer from 2001. Once in a while, a prick of remorse is more than useful.
There is a world of difference between saying that a feeling is useful and saying it is the basis of morality. It might well be true that we do better with empathy than without, but that's no reason to put it at the center of normative ethics. We surely make better decisions when informed by sound scientific research, but concluding that science therefore is (or should be) the basis of ethics would be a foolish thing to do.
 