What should Morals and Ethics be?

I'm saying his recursion is exactly as valid as yours.

If we're not allowed to start at "suffering is bad and should be reduced" as self-defining, then "What is the nature of reality?" is no further down the turtle pole than "Prove to me suffering is objectively bad."

You misunderstand. I have no issue with determining morals based on a shared set of values. I take issue, however, with the suggestion that there are such things as objective morals, for the reasons I've given.
 
You wouldn't be equivocating between two definitions of the word "wrong", would you?

Morally wrong and factually wrong are not the same thing.

But that's not what I'm asking right now. The question: What are the consequences of being mathematically wrong?

If you can't think of any, then I'm wondering why you think those types of consequences are relevant to this topic.
 
But that's not what I'm asking right now. The question: What are the consequences of being mathematically wrong?

Haven't I already answered that?

If you can't think of any, then I'm wondering why you think those types of consequences are relevant to this topic.

Because morals are about what's right or wrong. It's inherent to the thing, so I think it's important, you know? That's not the case with math or physics.

In my view, in order to establish objective morality, one would have to find a way to determine what those objective values are. To begin with, I don't think that's even possible, and so far no one's come up with a way to do it. And second, the whole point of morality is to enforce certain behaviours. So, ok, you establish that murder is objectively wrong. Now what? That's the second fundamental problem with the proposal: now nothing.
 
I'm sure this has been pointed out, but just to make sure:

There is no "objective" morality in the sense that anything would be universally right or wrong.

However, if we as members of a group agree on certain ideals and goals, for instance well-being, then we can derive objective determinations from there.

I don't know where people get the idea that if something isn't universal, then a) it can't be "objective" and b) it is necessarily arbitrary. Both these assumptions miss the point of morality entirely.
 
I'm sure this has been pointed out, but just to make sure:

There is no "objective" morality in the sense that anything would be universally right or wrong.

However, if we as members of a group agree on certain ideals and goals, for instance well-being, then we can derive objective determinations from there.

So far so good.

I don't know where people get the idea that if something isn't universal, then a) it can't be "objective" and b) it is necessarily arbitrary. Both these assumptions miss the point of morality entirely.

Because "objective" means it is invariant regardless of the person. Morality absolutely varies from person to person. That's actually a good thing.

As for arbitrary... Hmm, it depends on what we mean by it. It's certainly arbitrary in that, objectively, no value is better or worse than another, but not arbitrary in the sense that it's chosen without any sort of reason.
 
Usually this line of argument is presented to insert some "God" into the equation and then claim that this would be the only legit source of "objective" morality.

While in reality this would then be subjective to this God character, whose subjective view would then be forced on other people to adopt. Not very "objective", IMHO.

As for arbitrary... Hmm, it depends on what we mean by it. It's certainly arbitrary in that, objectively, no value is better or worse than another, but not arbitrary in the sense that it's chosen without any sort of reason.

This statement would, of course, stand or fall with a sensible definition of "good" (as in "better or worse"). I would argue that some axioms are more useful than others and therefore somehow "better". After all, the whole purpose of morality is to organise living together in some kind of group, so "usefulness" would be a reasonably sensible quality criterion.
 
I guess if you're using a VERY broad definition of "universe", sure. It's ridiculous, though. That it can't be observed might put it out of our ability to determine if it exists, but it doesn't put it in a different universe. I'm talking about other actual reality bubbles, so to speak.

That's the thing: They are other reality bubbles. They're separated from us by a gulf of expanding time and distance that cannot be crossed, even by light itself.
 
Ok you start. What morals can you determine based on reality? Remember, you can't use the opinions of individuals as a basis.

Go.

Firstly, morals and ethics are not arbitrary; they have evolved. Just as life is an emergent property of how the laws of the universe function, morals and ethics are emergent properties of social interaction. Morals and ethics evolved because they convey an evolutionary advantage; they are emergent properties of the laws of physics.

Morals and ethics are only applicable to minds able to experience life. This is self-evident.
All evolved minds have a single universal purpose: to continue living in order to reproduce. That is why brains evolved.
Evolution gave minds feelings with the above goal in mind. The simplest is probably hunger. An organism running out of resources feels 'hungry': bad. Finding food and eating feels satisfying: good. The same goes for other emotions.

All organisms seek out situations that feel good and avoid situations that feel bad. Evolution has programmed them to do so because it led to reproductive success.
Well-being is therefore the universal and objective goal of all natural minds (and as a bonus it's the subjective goal of any individual mind as well).
Start there.
 
And what if I do anyway? "Being wrong" means nothing objectively. It only means something to those holding that value. You've still not explained how it would work if it were an objective, universal truth.

Okay. Here's a framework: I know that there is something bad about my own suffering. I know this through experience. I can deduce from our common origins and makeup that your suffering is the same as mine, so I know there's something bad about that too. That seems pretty universal and objective so far.

So what does it mean for something to be wrong? In this (simplified) framework, it means that it causes more suffering than it prevents. How is the world different if you act according to this morality than if you don't? The world has less suffering in it. That seems pretty concrete to me.

You can, of course, go on to say "why should I care about suffering?". To that I have two replies. The first echoes JoeMorgue: everything has to start somewhere. The second is that I'm not going to completely dodge the question: the answer is simply that you already do care. As evidence I present the fact that you try to avoid suffering. Of course there are other things that are important, and our moral system needs to take them into account as well; that's why this is an oversimplified version, meant just to get the basic idea across.
 
Just like spirits who can't interact with our universe don't exist, and neither, for that matter, do other universes in a hypothetical multiverse. If there's no way to know, then, as I said, for all practical purposes they don't exist.

To avoid getting too hung up on the multiverse thing, I just want to go back to where it started, which was this comment. My point is that there are places that we know exist whose properties we cannot investigate and will never know. There are true facts about those places that we can't know. So what?

This demonstrates that there are facts that we can't know. Moreover, their non-existence would imply that other facts we do have access to were false; and since we do have access to those facts, we can check that they aren't false (for instance, that spacetime is on average flat to within certain parameters). So it's not the case that "for all practical purposes they don't exist", because assuming their non-existence would lead to erroneous conclusions about other things. So again, we know that these things exist even though there are certain facts about them that we can never determine.

I would have thought this was a rather banal statement, but given that others disagree I think it's worth making the demonstration.

It's probably not particularly important because I don't happen to think that morality is unknowable.
 
Okay. Here's a framework: I know that there is something bad about my own suffering. I know this through experience. I can deduce from our common origins and makeup that your suffering is the same as mine, so I know there's something bad about that too. That seems pretty universal and objective so far.

So what does it mean for something to be wrong? In this (simplified) framework, it means that it causes more suffering than it prevents. How is the world different if you act according to this morality than if you don't? The world has less suffering in it. That seems pretty concrete to me.

Can I be the one to push the button? :cool:
The logical consequence of that one is pretty neat: there would be zero suffering if everyone were dead. However, in the act of killing someone you can induce suffering in two ways: either in the person you're killing, or in their friends and loved ones who learn of this person having been killed.

The first problem is handled by killing the person faster than their nervous system can register it in their brain, for example by putting them in a nuclear fireball. The second problem is handled by killing everyone at the same time: install a grid of nukes around the planet with overlapping fireballs and then press the button.
 
Who decided the button needs pressing?
If you focused on the real issue of well-being, you would not make such a mistake.

I'm sorry, you said you wanted a reasoned argument. My suggestion is clearly an optimal answer to the problem statement of "morality" as "reducing suffering."
 
But reduced suffering is not the goal; how could you possibly justify that?
I'm not, and neither is Roboramma.

Reducing suffering is a consequence.
 
But reduced suffering is not the goal; how could you possibly justify that?
I'm not, and neither is Roboramma.

Reducing suffering is a consequence.

Roboramma defined morally "wrong" as "causes more suffering than it prevents", with, presumably, morally "right" being defined as "prevents more suffering than it causes." My suggestion causes zero suffering, the minimum possible, and it prevents all further possible suffering, the maximum possible. Therefore it is not just morally "right" but the most morally "right" thing to do. So when do I get to push the button already?
 
He was illustrating how reducing suffering is an objective goal. The same goes for the will to live and wanting to be happy.
IOW, 'well-being', which would encompass all of the above.
 
He was illustrating how reducing suffering is an objective goal. The same goes for the will to live and wanting to be happy.

So what? Of course reducing suffering is an objective goal, but so is increasing suffering, or most any other goal you can think of. And he wasn't just illustrating that it is an objective goal, but defining that goal as being "moral."
 
