
Can moral values be rational?

  • No; they're axiomatic and arational.

  • Yes; moral values can be rationally derived from amoral axioms.

  • On Planet X, it's immoral to question the rationality of morals.


theprestige

I say no. I say morality is axiomatic. Rationality can be applied to the goal of maximizing whatever good you value most, but it cannot tell you which good to value most.

You and I may agree on a rational approach to minimizing murder. But if we disagree on what constitutes murder, there is no rational way to determine which of us is "right".
 
Moral values derive from the fact that, as biological entities, we have homeostasis built in: we know things are not good when they make it hard for us to live our lives. Hence it is a moral value to act to change unfavourable conditions until they are no longer unfavourable.
A moral guideline for this is to use as little effort as possible, affecting as few people as necessary in the process of changing your conditions, as it is likely that any major change you cause will have negative impacts on the wellbeing of others.
This is based on the fundamental law of interaction called tit-for-tat: if your actions cause harm, expect that others will act to harm you back, even if it doesn't come as an advantage to them. Minimize your interactions, or work to make sure they benefit others, or your efforts to improve your life will be met with resistance and thus be less likely to succeed.
From this it follows that humans should cooperate to make it harder for individuals to be out of range of reciprocity: if someone can harm others without risking harm to themselves, that individual will cause more harm than good, as there is no corrective to their actions. Hence people should cooperate to prevent individuals from becoming too removed from society, or to bring back into the sphere of reciprocity those who already are.

Altogether, this can be summed up like this:
- cause no harm unless there is an urgent need, and try to make sure the harm is reversible
- always help others when it comes at little or no cost to you
- always cooperate with others against those with a lot more power than the average individual; never cooperate with those with a lot of power against those with as much power as you have or less

To summarize the summary:

always punch up, never punch down.
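
For anyone who hasn't run into the game-theory term, here is a minimal sketch of tit-for-tat in an iterated prisoner's dilemma (Python; the payoff numbers are the standard textbook values, assumed here purely for illustration, not taken from this thread):

```python
# Iterated prisoner's dilemma with a tit-for-tat player.
# Payoffs (assumed textbook values): mutual cooperation 3 each,
# mutual defection 1 each, lone defector 5, exploited cooperator 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A player permanently outside the sphere of reciprocity."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        # Each strategy sees only the opponent's past moves.
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is stable
print(play(tit_for_tat, always_defect))  # (9, 14): one exploitation, then stalemate
```

The numbers make the same point as above: against a fellow reciprocator, cooperation pays every round; a defector gains exactly once, then forfeits the benefits of cooperation for the rest of the game.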
 
I think they can be irrational, so if they can be irrational then I suppose they can be rational? Or you can value something that makes no damn sense.

That being said, mostly they are not rational or irrational.
 
I think they can be irrational, so if they can be irrational then I suppose they can be rational? Or you can value something that makes no damn sense.

That being said, mostly they are not rational or irrational.
All moral values make no damn sense. That's the point. You value the propagation of the human species? What sense does that make? Perpetuation of the species? Your cognitive function is slaved to biological imperatives? Might as well say you don't actually think at all...

... Tell me why you value propagation of the species.
 
Moral values derive from the fact that, as biological entities, we have homeostasis built in: we know things are not good when they make it hard for us to live our lives. Hence it is a moral value to act to change unfavourable conditions until they are no longer unfavourable.
A moral guideline for this is to use as little effort as possible, affecting as few people as necessary in the process of changing your conditions, as it is likely that any major change you cause will have negative impacts on the wellbeing of others.
This is based on the fundamental law of interaction called tit-for-tat: if your actions cause harm, expect that others will act to harm you back, even if it doesn't come as an advantage to them. Minimize your interactions, or work to make sure they benefit others, or your efforts to improve your life will be met with resistance and thus be less likely to succeed.
From this it follows that humans should cooperate to make it harder for individuals to be out of range of reciprocity: if someone can harm others without risking harm to themselves, that individual will cause more harm than good, as there is no corrective to their actions. Hence people should cooperate to prevent individuals from becoming too removed from society, or to bring back into the sphere of reciprocity those who already are.

Altogether, this can be summed up like this:
- cause no harm unless there is an urgent need, and try to make sure the harm is reversible
- always help others when it comes at little or no cost to you
- always cooperate with others against those with a lot more power than the average individual; never cooperate with those with a lot of power against those with as much power as you have or less

To summarize the summary:

always punch up, never punch down.
I'd pretty much agree with this interpretation.

Morals are such a loose and vague concept that it can mean anything to anybody. Problem is, when your morals conflict with others', we get to a situation that's less about conversation and more about getting offended.

I don't think there is such a thing as axiomatic morals.
 
I'd pretty much agree with this interpretation.

Morals are such a loose and vague concept that it can mean anything to anybody. Problem is, when your morals conflict with others', we get to a situation that's less about conversation and more about getting offended.

I don't think there is such a thing as axiomatic morals.
Okay. So what are the axioms from which you reason to your morals?
 
Probably how I was raised, like most people. An education certainly helps. I don't have an axiom for it.

Basically what you believe is right. That can go a whole lot of different ways.
 
All moral values make no damn sense. That's the point. You value the propagation of the human species? What sense does that make?
Perpetuation of the species? Your cognitive function is slaved to biological imperatives? Might as well say you don't actually think at all...

... Tell me why you value propagation of the species.
That's not how I interpreted what ahhell meant. Of course we are hardwired to propagate our species. There's no moral value to it.

Do ant colonies have a moral code? It's essentially a human construct, and we have to find a way to fight our way out of tribalism on a moral foundation.
 
I do think there is a moral value to propagating - the logic being that even if we haven't figured out the whole "make the world great for everyone" business, and believe that it can't be done, that doesn't mean that future generations might not figure it out.

The fact that you had the chance to play the game of life gives you the obligation to pass the chance to play on to someone else.
 
I agree with theprestige that basic moral values are neither rational nor irrational.

They are not irrational because there is some sense to them - an irrational value would be, say, to encourage murder or prevent propagation of the species.

The main reason I think they are not rational is that if they were, we should be able to agree on what they are. Even if we could agree on some obvious ones - do no harm, protect the weak, respect others - there is no way to agree on the priorities nor on which takes precedence when they come into conflict.

In practice, moral values and their priorities are defined by a nation's history - particularly religion - and since this is usually not a rational process, the resulting values are not rational either.

So moral values are arational, and it is only in how we implement them that rationality (hopefully) comes in.
 
This was something that we argued up and down on a thread about science being able to derive morality.

I and a couple of others (Kevin?) argued that no, you have to assume a moral value first, but you can use reason to show whether your values are consistent or if such and such an action furthers that value, but NOT that the value is a scientific one.

Some others, including Tassman, I believe, started waving around Sam Harris's book, The Moral Landscape, and claiming that the book had explained how moral values could indeed be scientifically derived. Yet those people showed no evidence of understanding the book, and much of what they argued suggested that they hadn't even read it, often claiming that Harris was arguing from an evolutionary point of view, when that view is explicitly repudiated within the pages of the book.

Why? Because behaviour derived from evolution is merely that which has, on balance, been able to shuffle our genes forward through the eons. But unless keeping genes alive is the moral bedrock, that doesn't help much in telling us whether, say, murder, rape, theft, gluttony, infidelity and lying are bad, if such behaviour helps you pass on your genes.

That said, I think that while science does not tell us what is moral, we can at least test our moral values against our intuitions.
 
I agree with theprestige that basic moral values are neither rational nor irrational.

They are not irrational because there is some sense to them - an irrational value would be, say, to encourage murder or prevent propagation of the species.

The main reason I think they are not rational is that if they were, we should be able to agree on what they are. Even if we could agree on some obvious ones - do no harm, protect the weak, respect others - there is no way to agree on the priorities nor on which takes precedence when they come into conflict.

In practice, moral values and their priorities are defined by a nation's history - particularly religion - and since this is usually not a rational process, the resulting values are not rational either.

So moral values are arational, and it is only in how we implement them that rationality (hopefully) comes in.
Here is the thing though... You can use reason to show that any particular nation or religion's values are incoherent. If this were not true, then we wouldn't be able to change values.
 
I'd pretty much agree with this interpretation.

Morals are such a loose and vague concept that it can mean anything to anybody. Problem is, when your morals conflict with others', we get to a situation that's less about conversation and more about getting offended.

I don't think there is such a thing as axiomatic morals.
can it mean anything to anybody?
 
I agree with theprestige that basic moral values are neither rational nor irrational.

They are not irrational because there is some sense to them - an irrational value would be, say, to encourage murder or prevent propagation of the species.
The main reason I think they are not rational is that if they were, we should be able to agree on what they are. Even if we could agree on some obvious ones - do no harm, protect the weak, respect others - there is no way to agree on the priorities nor on which takes precedence when they come into conflict.

In practice, moral values and their priorities are defined by a nation's history - particularly religion - and since this is usually not a rational process, the resulting values are not rational either.

So moral values are arational, and it is only in how we implement them that rationality (hopefully) comes in.
That statement isn't as self-evident as you seem to think it is. There's nothing inherently irrational about encouraging murder or preventing the propagation of the species, as there is no inherent value in a lack of murder or the propagation of the species.

It's all just a case of what we want and how to get there, and yes, what we want is in some way encoded in our genetic make-up. But that doesn't mean that someone who, for example, opposes the propagation of the species, is actually irrational, because that very encoding is as worthless as anything else.
 
Perhaps folk should have a look at Kant's "categorical imperative" argument; it, or a variation of it, is the usual argument used by those who claim moral principles are not axiomatic. I think it's all word play and typical of all ideologies/philosophies/religions - they aren't very good models of the real world.

ETA: Always a good starting point Stanford Encyclopaedia of Philosophy - Kant's moral section: https://plato.stanford.edu/entries/kant-moral/

I see it had a "substantive revision Fri Jan 21, 2022"; I'll have to have a look to see what was changed.
 
This was something that we argued up and down on a thread about science being able to derive morality.

I and a couple of others (Kevin?) argued that no, you have to assume a moral value first, but you can use reason to show whether your values are consistent or if such and such an action furthers that value, but NOT that the value is a scientific one.

Some others, including Tassman, I believe, started waving around Sam Harris's book, The Moral Landscape, and claiming that the book had explained how moral values could indeed be scientifically derived. Yet those people showed no evidence of understanding the book, and much of what they argued suggested that they hadn't even read it, often claiming that Harris was arguing from an evolutionary point of view, when that view is explicitly repudiated within the pages of the book.

Why? Because behaviour derived from evolution is merely that which has, on balance, been able to shuffle our genes forward through the eons. But unless keeping genes alive is the moral bedrock, that doesn't help much in telling us whether, say, murder, rape, theft, gluttony, infidelity and lying are bad, if such behaviour helps you pass on your genes.

That said, I think that while science does not tell us what is moral, we can at least test our moral values against our intuitions.
One may use science to explain why particular moral values seem to be held; for example, various experiments show that other primates, including monkeys, have notions of fairness.

However, that is not the same as saying that moral values are rationally derived, any more than a scientific explanation for a liking for symmetry would mean that aesthetic values are rationally derived.

Moral axioms are by definition not logically derived from higher axioms, because then they wouldn't be axioms.

One may use logic and reason to investigate whether a moral framework is internally consistent, or to investigate moral choices, but the values themselves are as arational as the question of whether Monet, Beethoven or Dan Brown is "better".
 
Moral values derive from the fact that, as biological entities, we have homeostasis built in: we know things are not good when they make it hard for us to live our lives. Hence it is a moral value to act to change unfavourable conditions until they are no longer unfavourable.
A moral guideline for this is to use as little effort as possible, affecting as few people as necessary in the process of changing your conditions, as it is likely that any major change you cause will have negative impacts on the wellbeing of others.
This is based on the fundamental law of interaction called tit-for-tat: if your actions cause harm, expect that others will act to harm you back, even if it doesn't come as an advantage to them. Minimize your interactions, or work to make sure they benefit others, or your efforts to improve your life will be met with resistance and thus be less likely to succeed.
From this it follows that humans should cooperate to make it harder for individuals to be out of range of reciprocity: if someone can harm others without risking harm to themselves, that individual will cause more harm than good, as there is no corrective to their actions. Hence people should cooperate to prevent individuals from becoming too removed from society, or to bring back into the sphere of reciprocity those who already are.

Altogether, this can be summed up like this:
- cause no harm unless there is an urgent need, and try to make sure the harm is reversible
- always help others when it comes at little or no cost to you
- always cooperate with others against those with a lot more power than the average individual; never cooperate with those with a lot of power against those with as much power as you have or less

To summarize the summary:

always punch up, never punch down.
But why are those good?

It's literally the naturalistic fallacy.
 
Perhaps folk should have a look at Kant's "categorical imperative" argument; it, or a variation of it, is the usual argument used by those who claim moral principles are not axiomatic. I think it's all word play and typical of all ideologies/philosophies/religions - they aren't very good models of the real world.

ETA: Always a good starting point Stanford Encyclopaedia of Philosophy - Kant's moral section: https://plato.stanford.edu/entries/kant-moral/

I see it had a "substantive revision Fri Jan 21, 2022"; I'll have to have a look to see what was changed.
Yes, it is an argument that only an action driven by a motive that you would want to see universalized, or that could be universalized, can be considered moral. This makes a certain intuitive sense until you start thinking of counter-examples. If you wish to deceive a would-be murderer looking for a victim, that would violate the categorical imperative, because it would mean wanting to see, or allowing, deception to become universalized.

Of course, the main challenger to this is consequentialism, typically utilitarianism, which Bentham based on what he assumed was the unassailable principle of promoting pleasure over pain: nobody could possibly want the latter over the former. Except that this came under scrutiny from other utilitarians such as J.S. Mill, who pointed out that some pains are worth having, particularly for deferred gratification. Others have pointed out that stringent forms of utilitarianism lead to injustice such as, say, killing someone, even an innocent person, to prevent greater harms happening, etc...
 
But why are those good?

It's literally the Naturalistic fallacy
They are good if they result in you, personally, being in a better situation than before without a higher risk of negative backlash.
It doesn't have to be "good" beyond that point.

The good comes from not rewarding negative-sum behavior: little benefit for me at large expense for others.
 
I do think there is a moral value to propagating - the logic being that even if we haven't figured out the whole "make the world great for everyone" business, and believe that it can't be done, that doesn't mean that future generations might not figure it out.

The fact that you had the chance to play the game of life gives you the obligation to pass the chance to play on to someone else.
The position of Pessimist antinatalism is that the world being "great" or even "alright", for anyone, is a logical impossibility due to the nature of existence, and that arguments to the contrary are demonstrably irrational delusions fostered by biological imperatives that ultimately have no purpose. As such, the act of propagating the species IS the immoral choice, since it perpetuates a useless cycle of suffering from which there is no escape.

Which is to say that it's all a matter of perspective really.
 
The position of Pessimist antinatalism is that the world being "great" or even "alright", for anyone, is a logical impossibility due to the nature of existence, and that arguments to the contrary are demonstrably irrational delusions fostered by biological imperatives that ultimately have no purpose. As such, the act of propagating the species IS the immoral choice, since it perpetuates a useless cycle of suffering from which there is no escape.

Which is to say that it's all a matter of perspective really.
it's that plus the presumption that there cannot possibly be a different opinion - which is why it is not a credible position and should be ignored.
They can do what they want, but they can't tell others what to do or think.
 
it's that plus the presumption that there cannot possibly be a different opinion - which is why it is not a credible position and should be ignored.
They can do what they want, but they can't tell others what to do or think.
I don't know what presumption you are talking about; there's people in favour of the propagation of the species, and people who are against it, each side confident in the correctness of their conclusion, each side as presumptuous as the other. You have demonstrated this yourself by characterizing the propagation of the species as the "moral" choice.
 
They are good if they result in you, personally, being in a better situation than before without a higher risk of negative backlash.
It doesn't have to be "good" beyond that point.

The good comes from not rewarding negative-sum behavior: little benefit for me at large expense for others.
But that is still a value judgement.
 
That statement isn't as self-evident as you seem to think it is. There's nothing inherently irrational about encouraging murder or preventing the propagation of the species, as there is no inherent value in a lack of murder or the propagation of the species.

It's all just a case of what we want and how to get there, and yes, what we want is in some way encoded in our genetic make-up. But that doesn't mean that someone who, for example, opposes the propagation of the species, is actually irrational, because that very encoding is as worthless as anything else.
Yeah, one weird example I can think of is male bears killing cubs to force the females into estrus or to minimize competition for future food.

I don't think it's a moral choice though.
 
I think we need to remember that "rational" and "irrational" are themselves human concepts. We presume in some way that they reflect the core logical principles of the universe, but not really. However true and rigid such things as cause and effect, the basic ideas of mathematics, and so forth, might be, there is no value judgment there. The laws of the universe tell us how things happen but not why or whether they should. Reason is ours alone.

As soon as we start talking of rational and irrational, we have blundered into the relativistic jungle of values. We can say perhaps that we create moral values based on what is rational or irrational, but what we consider rational and irrational is our own creation to start with. I think it's turtles all the way down.
 
I agree with theprestige that basic moral values are neither rational nor irrational.

They are not irrational because there is some sense to them - an irrational value would be, say, to encourage murder or prevent propagation of the species.

The main reason I think they are not rational is that if they were, we should be able to agree on what they are. Even if we could agree on some obvious ones - do no harm, protect the weak, respect others - there is no way to agree on the priorities nor on which takes precedence when they come into conflict.

In practice, moral values and their priorities are defined by a nation's history - particularly religion - and since this is usually not a rational process, the resulting values are not rational either.

So moral values are arational, and it is only in how we implement them that rationality (hopefully) comes in.
I'll add one: do unto others as... (you know the rest).

It's called the Golden Rule, but about now it kind of feels like the Bronze Rule.

What the absolute morals are, who can say?
 
I think we need to remember that "rational" and "irrational" are themselves human concepts. We presume in some way that they reflect the core logical principles of the universe, but not really. However true and rigid such things as cause and effect, the basic ideas of mathematics, and so forth, might be, there is no value judgment there. The laws of the universe tell us how things happen but not why or whether they should. Reason is ours alone.

As soon as we start talking of rational and irrational, we have blundered into the relativistic jungle of values. We can say perhaps that we create moral values based on what is rational or irrational, but what we consider rational and irrational is our own creation to start with. I think it's turtles all the way down.
For me rational just means "supported by observable evidence" and irrational "contradicted by observable evidence", which seems to me to be an objective difference rather than just a human concept. There's a gray area in between, which includes a lot of religious beliefs, where there isn't (and in many cases never can be) any observable evidence, which I wouldn't place in either category. Moral values might fall into any of the three categories.
 
Whether we like it or not, we're born into a social contract to a large degree. I don't think that's true of other species.

Doesn't matter where it is, usually it's just the rules of your local tribe.
 
For me rational just means "supported by observable evidence" and irrational "contradicted by observable evidence", which seems to me to be an objective difference rather than just a human concept. There's a gray area in between, which includes a lot of religious beliefs, where there isn't (and in many cases never can be) any observable evidence, which I wouldn't place in either category. Moral values might fall into any of the three categories.
Maybe it depends on how the term is used (which itself seems worth thinking about), but people here are using it to evaluate outcomes. We can say the decision of whether we should destroy the world is a rational one, and it would be hard to argue otherwise - I mean if we act in a way we know will cause the world to end with all human experience erased from the universe forever, that would be about as irrational as it gets - but the world does not care. It's a value judgment, even if it's an obvious and compelling one.
 
as soon as you have interactions, you have morals - even if the interaction is just with your inner self.
 
I think we need to remember that "rational" and "irrational" are themselves human concepts. We presume in some way that they reflect the core logical principles of the universe, but not really. However true and rigid such things as cause and effect, the basic ideas of mathematics, and so forth, might be, there is no value judgment there. The laws of the universe tell us how things happen but not why or whether they should. Reason is ours alone.

As soon as we start talking of rational and irrational, we have blundered into the relativistic jungle of values. We can say perhaps that we create moral values based on what is rational or irrational, but what we consider rational and irrational is our own creation to start with. I think it's turtles all the way down.
I'm not sure I follow.

Rationality just means the application of reasoning to morality. It says nothing about whether those moral values are good or bad.
 
I'm not sure I follow.

Rationality just means the application of reasoning to morality. It says nothing about whether those moral values are good or bad.
The universe behaves according to a logic of existence we call rational. There is the inherent assumption that reason has value. It does for us, of course, but that's because we're us. The universe has no concepts.
 