What should Morals and Ethics be?

The existence of some universal morality to which axiomatic appeals can be made. This seems to be more or less where JoeMorgue keeps ending up, though he keeps stopping short of actually acknowledging it.

I personally believe shared morality should be based on universal well-being: our own and that of society as a whole. I don't see how "faith" plays into it unless you employ a definition of faith that I haven't heard before.

Seriously. Take a look at this thread, and previous threads, and every other instance of this debate on this forum. Social norms seem to be the prevalent conclusion most people arrive at. It's pretty obvious in debates about human rights.
I don't think that is true. But maybe we're just phrasing it differently. I do believe that the morality practiced by a society is based on what that society values and changes over time...so maybe. That said, I believe morality should be based on well-being, which is in fact an absolute morality.

Think about it. Pure prisoner's dilemma, enlightened self-interest stuff. You're not concerned about what other people think. You're not concerned about custom or tradition. No worries about how you were raised or what you were taught to believe. All that matters is a dispassionate evaluation of what works for you and what doesn't.

Most sociopaths are pretty adept at going along to get along, but not because they're concerned with morality. Their concern is whatever practical and profitable strategy they can find for getting what they want.

Obviously the sociopaths will eat all us non-sociopaths for lunch if we give them a chance, but so what? It's not like there's a higher law that says they shouldn't. And who knows? Maybe once they've cleared us out of the way, and sorted out the kinks in their objective social Darwinism approach to things, they'll build a far more productive society than we ever will. Two sociopaths in a prisoner's dilemma are probably going to find a mutually beneficial arrangement a lot faster than a Communist and a Christian, even though the latter each believe in a moral code, and the former believe in nothing but self-interest.

I'm not a fan. Certainly a society could be entirely based on self-centered principles. Might makes right. Just seems dystopic.
 
I personally believe shared morality should be based on universal well-being: our own and that of society as a whole. I don't see how "faith" plays into it unless you employ a definition of faith that I haven't heard before.

I don't think that is true. But maybe we're just phrasing it differently. I do believe that the morality practiced by a society is based on what that society values and changes over time...so maybe.
I'd call this morality based on social norms.

That said, I believe morality should be based on well-being, which is in fact an absolute morality.
And I'd call this morality based on faith.

I'm not a fan. Certainly a society could be entirely based on self-centered principles. Might makes right. Just seems dystopic.
Yeah, I'm not much of a fan either. On the other hand, it seems to work for pretty much every other animal on the planet. And it does cut the Gordian knot pretty cleanly.
 
Think about it. Pure prisoner's dilemma, enlightened self-interest stuff.
The prisoner's dilemma is an example where ethical egoism is self-defeating (by acting in a strictly self-interested way, we end up worse off than we would be if we had acted for the common good--egoism seems to suggest that we should not be egoists), so it's more than a little strange that you'd bring it up as a point in favor of ethical egoism.
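To make that concrete, here's a minimal worked example in Python, using the conventional textbook payoffs (T=5 > R=3 > P=1 > S=0; the numbers are standard illustrations, not anything specific to this thread):

C, D = "cooperate", "defect"
payoff = {  # (my_move, their_move) -> my payoff
    (C, C): 3, (C, D): 0,
    (D, C): 5, (D, D): 1,
}
for their_move in (C, D):
    # the self-interested "best reply" to each possible move by the other player
    best = max((C, D), key=lambda my_move: payoff[(my_move, their_move)])
    print(f"If the other player {their_move}s, my best reply is to {best}.")

Defecting is the better reply no matter what the other player does, so two purely self-interested players land on mutual defection and score 1 each, instead of the 3 each that mutual cooperation pays. That's the self-defeating structure being described.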

That's also one of the reasons egoism is self-effacing--meaning that it tends to lead to the adoption of other ethical frameworks. When it's pointed out that your ethical egoism is incommensurate with my ethical egoism (since we cannot agree on what the ultimate good is), people will unwittingly adopt methodological egoism ("Well, it just works out better for everyone if we act according to our self-interest"), abandoning egoistic justifications in favor of broad consequentialism. Notably you did precisely this with "On the other hand, it seems to work for pretty much every other animal on the planet." Although I find the idea that animals are ethical egoists...contentious. Which brings us to the next point:

People tend to confuse descriptive and normative egoism. They seem to take it for granted that if people are self-interested, they ought to be self-interested. This doesn't follow, and the adoption of ethical egoism arbitrarily excludes the interests of others, in much the same way that racism does (that is, not in terms of moral properties that people have, but due to morally irrelevant differences).
 
I'd call this morality based on social norms.

Morality IS a social construct...So maybe.

And I'd call this morality based on faith.

It certainly isn't the "faith" described in Hebrews 11:1: "Now faith is the substance of things hoped for, the evidence of things not seen."

A morality based on well-being simply asks whether an action supports the well-being of all parties to the best of one's ability. I don't know how you can call that "faith".

Yeah, I'm not much of a fan either. On the other hand, it seems to work for pretty much every other animal on the planet. And it does cut the Gordian knot pretty cleanly.

I'm not sure it does. Not when you consider that 99 percent of all species that have ever existed are extinct. I also don't believe that all species are sociopathic. I certainly don't believe it's true of dogs or elephants.
 
Morality IS a social construct...So maybe.
Yes, maybe values derived from social constructs are social norms. MAYBE.

It certainly isn't the "faith" described in Hebrews 11:1: "Now faith is the substance of things hoped for, the evidence of things not seen."

A morality based on well-being simply asks whether an action supports the well-being of all parties to the best of one's ability. I don't know how you can call that "faith".
I call it faith because there's no evidence that maximizing collective well-being is the most moral choice.
 
The idea that morality/ethics is variable doesn't mean it is arbitrary, nor does it mean that all moral/ethical systems work, either at all or as well as other systems.

Again, people get way too hung up on some variation of asking "Are ethics subjective or objective?" and I think that's a bad question.

We have a river we need to get cars across.

Person A: Builds a suspension bridge. Cars can now cross the water.
Person B: Builds a truss bridge. Cars can now cross the water.
Person C: Builds a cantilever bridge. Cars can now cross the water.
Person D: Builds a bridge out of cardboard and old chewing gum. It collapses on the first car.

The fact that we have multiple viable options doesn't mean we don't also have wrong answers. It doesn't mean bridge-building is subjective, just complex.

It is possible for moral systems to fail, or to work better or worse in certain scenarios. That makes it more complex, nothing more.
Person D might have the right answer if he was being forced by the Nazis to build a bridge so that their army could cross a river and invade a territory and he didn't want them to do so.

The right and wrong answer depends upon your values.

Values depend on desire.
 
I call it faith because there's no evidence that maximizing collective well-being is the most moral choice.
I call it based on desire. If you didn't want to maximise collective well-being then there would be no reason to do so.

Utilitarianism is ultimately based on desire, just like any moral system.
 
That's not ethics, that's just optimizing for self-interest in a race for resources against peer competitors.

Josef Stalin only had to apply that rule to a relatively short list of people who were actually in a position to do anything unto him. The vast majority of Russians, Eastern Europeans, etc. he was free to use as pawns without regard to how they might behave if the situation were reversed. Because the situation would never be reversed.*

I would say that true ethics and morality are rules you apply because you believe they are the right thing to do, not because they are the optimal strategy for self-benefit over time.


---
*Yes, it's always possible that he'd be stranded on a deserted stretch of Siberian highway, depending on the nonexistent goodwill of a vengeful kulak his policies had tortured for years. But that's just risk management.

If you only apply it to those with the power to do things back to you, then you aren't doing it right. To follow it correctly is to not arbitrarily decide whom you will treat as you want them to treat you and whom you won't, but rather to treat everyone as you want to be treated.

In your example, Stalin clearly was not following this method, because he did indeed do things to people that he would not have wanted done to himself.

If followed correctly it doesn't optimise self-interest, but rather pushes for a collective interest. You achieve the best interest for yourself by making sure that you work at helping others in the ways you desire to be helped yourself, and in treating others in the ways you want to be treated, regardless of whether they have the power to do so in return. In fact the ultimate end should be increasing all people's power to the level where they actually can help and treat you the same way you help and treat them, thus raising all in the society together.

The view you present is more one of "treat those with equal power as I want to be treated, and damn everyone else" rather than what I presented. In fallacy terms, this is called a straw man.
 
That's the problem I am alluding to. It's definitely true that this happens often, and it can be demonstrated that many of our moral decisions are post-hoc rationalizations - or at least many controlled experiments show that humans will do this. And yet, what they often also show is that there is a rational answer against which the rationalization can be measured (maybe I am not explaining this well). It seems to me that it only underscores the need to learn how to deal with cognitive biases and to train ourselves to be more rational.

Maybe a good example is someone who says “I realize I rationalize eating meat when I know that it follows from any reasonable principle that I shouldn’t eat it.”

I don't agree. Empathy might have something to do with the origin (the historical contingencies of a particular field of inquiry) of morality, but it is not the basis (the logical and philosophical foundation) of morality. The earliest known mathematics were developed in order to do things like levy taxes, facilitate trade, and track celestial bodies, but it would be an error to say that any of those things are the basis of mathematics.

Empathy makes for a poor foundation on which to build a normative code, not least because of the underlying presumption of ethical egoism. Empathy could only be the basis of morality if my feelings are the basis of morality, which would be an absurdly self-important thing to believe, and immediately runs into problems with the relativity of pronouns. If I reject ethical egoism, then I have no immediate use for empathy--I can just value the well-being of others directly, since I would then have to concede that there's nothing special about me.


Consideration of the interests of others.

Not all feelings are moral. Compassion and empathy are, because they sustain the basic moral norm: attention to one's neighbour. They are a necessary condition of morality. Reason is not.

"The basis of morality" means that without empathy there is no sense of moral responsibility: a necessary condition for morality to exist. See Damasio's studies: a patient with a damaged prefrontal lobe understands what a moral norm means but feels no impulse to comply with it. There are cases of very intelligent serial killers, but without the slightest empathy for their victims.

Hume: you can't deduce a moral rule from reasoning alone. One cannot logically go from is to ought.

This does not mean that reason does not play a necessary role in the development of moral norms. Unchecked compassion can be disastrous. In Spain we say that hell is paved with good intentions. It's understandable, isn't it?

There's something inhuman about someone who's guided only by a logical moral system. It would be like HAL, the computer from 2001. Once in a while, a prick of remorse is more than useful.
 
Not the least of which is the question, "why should I care about advancing human prosperity?"
Because you are part of society and your prosperity depends on it. The more advanced and technological the society, the truer this becomes.



Advancing my own prosperity seems like a slam-dunk value proposition for me. Advancing your prosperity also makes a lot of sense, if my prosperity depends on yours. But that's a game-theoretical consideration, not a moral consideration.


Lots of research has been done on the evolution of cooperation, and game theory shows cooperation to be the most prosperous long-term strategy. Defectors do have an advantage and will prosper in such a society up until the point where there are too many of them and the strategy becomes self-defeating. Defectors' numbers plummet, cooperators increase - and the cycle repeats. These things have been modeled; a toy version is sketched at the end of this post.

Your "slam-dunk" proposition gives you individual advantage at a cost to society as a whole. It decreases the total prosperity of the population and is only successful in the short term.
IOW it's a short-sighted, selfish strategy.
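For what it's worth, here is a minimal sketch of the kind of frequency-dependent model being alluded to, using the classic hawk-dove game and the replicator equation (my choice of toy model, not necessarily the one behind the research mentioned above). "Hawks" play the defector role: they prosper while rare, because there are plenty of cooperative "doves" to exploit, and the advantage shrinks as they become common:

V, C = 2.0, 6.0  # V = value of the contested resource, C = cost of a fight (C > V)

def payoffs(p):
    # average payoff to each strategy when a fraction p of the population are hawks
    hawk = p * (V - C) / 2 + (1 - p) * V   # fight other hawks, exploit doves
    dove = p * 0.0 + (1 - p) * V / 2       # yield to hawks, share with doves
    return hawk, dove

p, dt = 0.01, 0.05  # start with 1% defectors
for step in range(301):
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p += dt * p * (hawk - mean)  # replicator equation: above-average strategies grow
    if step % 60 == 0:
        print(f"step {step:3d}: hawk share {p:.3f} (hawk {hawk:.2f}, dove {dove:.2f})")

The defector share climbs while hawks out-earn doves, then levels off near the mixed equilibrium p = V/C (1/3 with these numbers), where the two payoffs equalize. The boom-and-bust cycling described above shows up in richer models (e.g., iterated games with reciprocating strategies), but the self-limiting logic is the same: defection pays less the more defectors there are.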
 
Because you are part of society and your prosperity depends on it. The more advanced and technological the society, the truer this becomes.
But what if you could become even more prosperous by advancing one section of the community at the expense of another, where the section of the community being advanced includes you?

Shouldn't you then do that?
 
I'm a moral realist, and I'd like to give a simpler way of looking at this issue. Let's start with two declarative statements.

1. Adding gasoline to an out-of-fuel car is a good way to get it running again.
2. Chocolate is a good flavor of ice cream.

Sentence 1 is pretty clearly objective, while sentence 2 is pretty clearly subjective. Now let's add a new sentence:

P. The complete abolition of slavery is a good way of advancing human prosperity.

All I'm saying is that Sentence P is more like 1 than it is like 2. To put it another way, the thought process behind answering, "Why should I abolish slavery?" is similar to, "Why should I refuel my car?" and completely different from, "Why should I choose chocolate ice cream?"

At this point, someone will likely retort with, "Well, yes, but why should we advance prosperity?" but that's not really the point. All we're trying to figure out is whether or not these questions have objective answers, not whether or not we should care. To put it another way, the entire field of medicine does not need to tell you why life is preferable to death before it can state, objectively, that clean water is healthier than poison.

Yeesh, not quite. As soon as you put "good" in statement one you made it subjective. But ok, I get your point.

What I don't get is why you think P is more like 1. I think it's more like 2 myself. Productivity might have gone up since the end of slavery but that's because of technology, mainly.
 
My suffering is bad. I know this because I've experienced it. Your suffering is like mine. I know this because we are physical systems with the same properties. Therefore your suffering is bad.

It's possible to deny the first statement. Perhaps my suffering is simply something that I dislike but there's nothing objectively "bad" about it. I can see that as a valid rational viewpoint, but I don't actually believe that anyone believes it about themselves.

So sure, it's an axiom that I have to begin with, but it certainly seems to be a bare, if somewhat mysterious, fact of our universe that the suffering of conscious systems is a bad thing. If you disagree, go induce some suffering in yourself and see if you think, in that moment, that there's no moral quality to your suffering: that the world wouldn't be a better place without it.

Perhaps this is an illusion. Okay. Until that's demonstrated I think it's better to work on the assumption that it's not.
 
What I don't get is why you think P is more like 1. I think it's more like 2 myself. Productivity might have gone up since the end of slavery but that's because of technology, mainly.

Because the set "humans" includes the slaves, and their prosperity is included in the total of human prosperity. As long as abolishing slavery increases the prosperity of the slaves more than it decreases the prosperity of the slave owners (and everyone else), then total human prosperity is increased by abolishing slavery.

As I Am The Scum said, this doesn't address the question of why we should care about human prosperity, whether or not total human prosperity is the right metric, or even how to go about measuring it, but it is true that it's an objective question. What we do with the answer may be subjective.
 
So much to digest.
Lots of discussion concerning relatively specific points. I'd like to focus on the broader concepts first.

I was of the opinion that science can't answer moral questions, but I might be changing my mind. I think it might be possible to devise a universal moral framework using science, even if it's worthless in answering any detailed specifics.

As has already been discussed, a good scientific case can be made that feeling good/bad, pleasant/unpleasant is a universal state of evolved neural networks, with the function of encouraging behaviour leading to reproductive success and avoiding harmful situations.
Since the universal goal of life is to reproduce, for animals with complex enough brains to experience it, maximizing happiness/minimizing suffering would be a universal goal.
So that's where we start.

Morality must be a function of utility in order to be sustainable: high ideals don't last long against reality.
Exactly, that is why we have it. It's had the evolutionary function of bolstering cooperation in close-knit groups, leading to reproductive success.
It also has a dark side. In a system with limited resources, reproductive success depends on out-competing other close-knit groups.
Within a social group the cooperative, empathetic side of behaviour has always been seen as virtuous and the combative, selfish side as evil. The reverse is true for interactions between competing groups. Why this is so should be self-evident.
Game theory, and the fact that it evolved in the first place, shows that cooperation is the better strategy.
Since we are all stuck on this planet together our social group has basically expanded to include the whole globe.

I propose the overall function of a moral code stay the same: the continued prosperity of the group.
Since our concept of morality has recently expanded to include such outlandish groups as other races, women, children and even animals; I propose to expand it to include all life.
I think all living things have value.
Just as your prosperity depends on the prosperity of your society, the prosperity of society depends on the prosperity of the system it inhabits.
For the foreseeable future our prosperity depends on the prosperity of the planetary ecosystem.

It boils down to:
Maximizing happiness/minimizing suffering by striving to sustain a balanced, stable ecosystem where all life has value.
More complex life with more complex brains having more individual personal value and less complex life having more aggregate/ecological value.
Something like that, I'm not sure. Does that make sense?
 
But what if you could become even more prosperous by advancing one section of the community at the expense of another, where the section of the community being advanced includes you?

Shouldn't you then do that?


That would be inefficient by decreasing the prosperity of the whole. In the long term cooperation is the superior strategy.
 
As I Am The Scum said, this doesn't address the question of why we should care about human prosperity, whether or not total human prosperity is the right metric, or even how to go about measuring it, but it is true that it's an objective question. What we do with the answer may be subjective.


Maybe it's just me but I'm not at all comfortable using "human prosperity" as a metric. I cannot see how it could be justified in any objective way.
I'm trying to be objective in devising a moral framework. :boggled:
 
Because the set "humans" includes the slaves, and their prosperity is included in the total of human prosperity.

It's included but the total prosperity might still be higher with slavery than without, under a certain set of circumstances. Under that scenario the slaves would be thrown under the bus for the greater good... the greater good. Or, if you don't consider slaves people, they're actually not included at all.

As long as abolishing slavery increases the prosperity of the slaves more than it decreases the prosperity of the slave owners (and everyone else), then total human prosperity is increased by abolishing slavery.

Exactly. So in the inverse case, it isn't.
 
There's a contradiction in your reasoning--in order for him to be aware that he owes it to others not to block them (if he isn't, there's nothing to account for), he has to agree that it is better not to block people, which means he and I must have similar propositional attitudes--we both hold that it is better not to block everyone else. It cannot therefore be true that I desire to act and he does not, because the acknowledgment that one outcome is better than the other is the very thing you want to call a desire. All we can say is that we both prefer one outcome over another, but he has experienced a failure of will where I have not. That's not explicable in terms of the presence or absence of desire, if you hold that a moral judgment is (or entails) a desire.

Oh, but we can assume for the sake of argument that he has been made aware that some people believe he owes it to them not to block them out; he simply chooses not to adopt said belief as his own. In other words, the difference is that he does not have the desire to act in accordance with that belief whereas you do.
 
