
What is the appeal of "objective morality"?

But these characteristic features are aimed at coming to a true conclusion over a false conclusion. Everything about uniquely rational capacities aims at drawing the right conclusion, where "right conclusion" really means "the one that is more likely to be true." Thus, all rational beings have a decided preference for truth, at least broadly and generally speaking, and hence all rational beings recognize that believing truth is better, broadly and generally speaking, than believing a falsehood.


That is also incorrect... a rational person uses the scientific method to minimize the possibility of his/her biases and subjectivity interfering with arriving at objective evidence for a theory.

People can be rational and still be fooled by illusions, delusions, biases, prejudices, and bad data that appeared trustworthy.

Using the scientific method has so far been the best and most reliable way (if applied honestly and with good intentions) of obviating as much human subjectivity as possible. This is a fact proven by the success of science.

Humans are subjective and morality is subjective... the only way one can construe any meaning for an "objective morality" is as something that is agreed upon by all humans, ever.

Good luck with finding that.

But even then... it is still subjective in relation to humanity and only objective in relation to individuals.
 
... First, moral norms are norms of practical, not theoretical, reasoning ....


QED!!!

What you have just said makes it utterly obvious that the term "objective norms" is as meaningless as a "square circle".

"Practical" is absolutely subjective... there is nothing objective about the practical, by the very meaning of the English word.

And norms are norms in relation to something, by the very meaning of the word.

So "objective norms" is a vacuous phrase that has no possible significance just as "bombastically humble".
 
...
The standard reply to the kind of reasoning you give above is that you have trivialized the claim of psychological egoism. You first said everything we do is motivated by self-interest, but now you suggest that self-interest is synonymous with choosing what to do. No one denies that the soldier chose to jump on the grenade, and hence preferred it to other alternatives, but it does not seem that doing so could reasonably be called "self-interest".
...


Incorrect... there is an explanation for altruism, based upon sociological and biological evolution, that makes it very obvious why such an action can arise from the psychological and biological makeup of the person.

Just because you do not know about something does not mean it is not there... this is the argumentum ad ignorantiam fallacy.

Consider this:
  • If I do nothing, then most of us will be killed.
    The enemy wins, and my kith and kin will eventually be killed or worse.
  • If I run away, then I will survive as a coward.
    I might still be killed by the enemy anyway.
    Even if I survive, my kith and kin will suffer from my cowardice.
  • If I shout, then my mates might not react in time.
  • If I jump on it and get killed, my mates will survive.
    They will exact revenge.
    My mates will honor my memory.
    My kith and kin will be exalted.

All of the above calculation would have taken place in less than a second, so there was no real conscious deliberation there. It was all a consequence of the MILITARY SOCIAL camaraderie and drilling to the point of instinct.

Why would one person do it before the others?

We all have different reaction times... others might have chosen to shout, or to run away, or might simply have frozen out of shock and fear.

Does that reduce the heroism?

Not by any means... just as we admire a musician or athlete for his/her superior learned and innate biological skills, so we extol the soldier who jumps on a grenade to save his mates due to his SUPERIOR INNATE REFLEXES formed by his social and biological makeup.

But there was nothing objective about it... it is still something that emanates from the human biological and social formation.
 
Thirty-four posts from ten days ago have been moved to AAH. (Usually, moderation intervention applies to more recent posts than this, but these posts only recently came to the attention of the moderation staff.)

The reason for the move was an unhealthy combination of rule 11 and rule 12 violations. Further action on those posts and other posts in this thread is under discussion. Meanwhile, may I recommend that everyone posting in this thread stay generally near the topic and not attack each other.

Thank you in advance.
Replying to this modbox in thread will be off topic. Posted By: jsfisher
 
I have never said anything about Craig's discussion of objective morality, because I am not familiar with it.

But the claim that objective morals must "be part of the physically existing cellular make-up of each human person," is just honestly nutty, whether it comes from Craig or from your own inference.

You and I have made no progress in pages of discussion. I believe I'll let this thread die.


Well I'm sorry you regard that as nutty. But what I tried to describe to you (perhaps I was not clear) was only the idea that our genes (which are supposed to be physically existing, i.e. objectively real, molecules making up the cells in our body) might play a significant role in determining the way that any of us behave ... including perhaps (to some extent) the way that we behave towards others around us.

Afaik, that's not nutty at all. Instead it's precisely and entirely what biologists and geneticists do say about the role of our genes in general. I.e. the chemical functioning of those genes determines all sorts of physical ("objective", in that sense) features in all animals, including perhaps various aspects of our behaviour.

I was simply accepting (and I said the same thing to you in one of my very first replies) that some of the actions that we might describe as "moral", might have some influence from the "objective" physically real chemistry that goes on in our genes.

So I am granting you that as a possibility. And afaik, there are fairly clear examples in certain psychiatric conditions where the individual's physical behaviour can be linked to various aspects of cell chemistry in the brain.

However, as I said in the previous post, it seems as if there is a far simpler and more direct explanation of why humans (perhaps more than most other animals) exhibit the sort of altruistic or compassionate behaviour that I suppose we include under the term "morals": a learnt pattern of behaviour, a matter of "nurture rather than nature", which amounts primarily to learning in our earliest years what is best for our own self-preservation ... and in the much wider long-term picture, I expect that could be linked back to a gradual process of evolution in which early hominids were adapting to living within larger social groups.
 
Well I'm sorry you regard that as nutty. But what I tried to describe to you (perhaps I was not clear) was only the idea that our genes (which are supposed to be physically existing, i.e. objectively real, molecules making up the cells in our body) might play a significant role in determining the way that any of us behave ... including perhaps (to some extent) the way that we behave towards others around us.

Afaik, that's not nutty at all. Instead it's precisely and entirely what biologists and geneticists do say about the role of our genes in general. I.e. the chemical functioning of those genes determines all sorts of physical ("objective", in that sense) features in all animals, including perhaps various aspects of our behaviour.

I think the dispute revolves around the word "determines." I would say that biology "allows for" moral behavior, rather than determining it.

I'd put it as close to the "language instinct." We are born with the capacity to speak, but whether we do or not, and which language we speak, depends on exposure. There's an interplay with the external world involved.

Another parallel might be the immune system. We are born with the capacity to react to foreign molecules, but to prevent an autoimmune response, our immune system has to learn the difference between "self" and "other."

Neither language nor the immune system is entirely determined by your genes. The capacity is, but how it plays out isn't.
 
In practice, it is not usually all that difficult to distinguish one from the other, but I am not familiar with demarcation principles in practical philosophy.

It is not an easy subject. In fact, many encyclopedias of philosophy (I have consulted four) don't include an entry for "moral". I think the following quotations could be useful.

Peter Singer, "The Triviality of the Debate over 'Is-Ought' and the Definition of 'Moral'", American Philosophical Quarterly, Vol. 10, No. 1 (Jan. 1973), pp. 51-56.

It has long been a commonplace in the debate about the definition of morality that moral terms are used in many different ways at different times and by different people. The search for a definition, therefore, is not a search for the one true definition which expresses all that anyone has ever meant by the term. On the contrary, the search has been for the best definition, the definition that will express the most important or the most useful of the various meanings that moral terms have in ordinary speech. (51)

The neutralist view, then, is that whether a principle is a moral principle for a particular person is determined solely by whether that person allows the principle to override any other principles which he may hold. Any principle at all is capable of being a moral principle for a person, if that person should take it as overriding. (52)
[Descriptivism]: In other words, a judgment is not a moral judgment unless it is somehow connected to suffering and happiness, and a judgment is also not a moral judgment unless it is an impartial judgement, in the sense that it does not arbitrarily place more importance on the suffering and happiness of a particular person or group of persons. (53)
[Intermediate position]: It might be thought that one can maintain that moral principles are, by definition, prescriptive, so that to assent to a moral principle is to commit oneself to acting upon it when it is appropriate to do so, and at the same time maintain that, while a moral principle can have any content whatsoever, it must satisfy the formal requirement of universalizability. (54)

My bet is on the intermediate position.
 
Well I'm sorry you regard that as nutty. But what I tried to describe to you (perhaps I was not clear) was only the idea that our genes (which are supposed to be physically existing, i.e. objectively real, molecules making up the cells in our body) might play a significant role in determining the way that any of us behave ... including perhaps (to some extent) the way that we behave towards others around us.

Afaik, that's not nutty at all. Instead it's precisely and entirely what biologists and geneticists do say about the role of our genes in general. I.e. the chemical functioning of those genes determines all sorts of physical ("objective", in that sense) features in all animals, including perhaps various aspects of our behaviour.

I was simply accepting (and I said the same thing to you in one of my very first replies) that some of the actions that we might describe as "moral", might have some influence from the "objective" physically real chemistry that goes on in our genes.

So I am granting you that as a possibility. And afaik, there are fairly clear examples in certain psychiatric conditions where the individual's physical behaviour can be linked to various aspects of cell chemistry in the brain.

However, as I said in the previous post, it seems as if there is a far simpler and more direct explanation of why humans (perhaps more than most other animals) exhibit the sort of altruistic or compassionate behaviour that I suppose we include under the term "morals": a learnt pattern of behaviour, a matter of "nurture rather than nature", which amounts primarily to learning in our earliest years what is best for our own self-preservation ... and in the much wider long-term picture, I expect that could be linked back to a gradual process of evolution in which early hominids were adapting to living within larger social groups.

But all of this merely describes our behavior in causal terms.

Objective morality is prescriptive. If there is such a thing, it does not get its source from any facts about our DNA. Our DNA may explain some of our behavior, but it cannot make it good or bad. It can make us tend to characterize it as good or bad, but there is no objective meat there.
 
It is not an easy subject. In fact, many encyclopedias of philosophy (I have consulted four) don't include an entry for "moral". I think the following quotations could be useful.

Peter Singer, "The Triviality of the Debate over 'Is-Ought' and the Definition of 'Moral'", American Philosophical Quarterly, Vol. 10, No. 1 (Jan. 1973), pp. 51-56.

It has long been a commonplace in the debate about the definition of morality that moral terms are used in many different ways at different times and by different people. The search for a definition, therefore, is not a search for the one true definition which expresses all that anyone has ever meant by the term. On the contrary, the search has been for the best definition, the definition that will express the most important or the most useful of the various meanings that moral terms have in ordinary speech. (51)

The neutralist view, then, is that whether a principle is a moral principle for a particular person is determined solely by whether that person allows the principle to override any other principles which he may hold. Any principle at all is capable of being a moral principle for a person, if that person should take it as overriding. (52)
[Descriptivism]: In other words, a judgment is not a moral judgment unless it is somehow connected to suffering and happiness, and a judgment is also not a moral judgment unless it is an impartial judgement, in the sense that it does not arbitrarily place more importance on the suffering and happiness of a particular person or group of persons. (53)
[Intermediate position]: It might be thought that one can maintain that moral principles are, by definition, prescriptive, so that to assent to a moral principle is to commit oneself to acting upon it when it is appropriate to do so, and at the same time maintain that, while a moral principle can have any content whatsoever, it must satisfy the formal requirement of universalizability. (54)

My bet is on the intermediate position.

That's a start, and one by a very well-regarded ethicist.

From these excerpts, it appears that two of the three focus on principles, while the third focuses on judgments. Maybe the loss of context explains that surprising difference.
 
The standard reply to the kind of reasoning you give above is that you have trivialized the claim of psychological egoism. You first said everything we do is motivated by self-interest, but now you suggest that self-interest is synonymous with choosing what to do. No one denies that the soldier chose to jump on the grenade, and hence preferred it to other alternatives, but it does not seem that doing so could reasonably be called "self-interest".

Only if we define "self-interest" so broadly as to mean "whatever I choose to do" could we claim that psychological egoism is plausible, but doing this simply trivializes the thesis to: everyone chooses to do whatever he chooses to do.



You seem to be following my line of thought pretty well. I'll try to break it down bare bones:

I suspect that it is not possible to act in any way other than in self-interest.

Any act that we can call altruism is necessarily an act of self-interest.

If it is possible to act altruistically based on anything besides self-interest, then that altruism is no longer morally meaningful. To me, and probably you.

So, yes, people choose to do what they choose to do. They choose to do what they prefer to do. Acting on one's preferences is the same as acting in self-interest. To look at it any other way is to remove the meaning of a choice, in this context.

What do you think?


Also, I am still hoping to get your opinion on the term "enlightened self-interest", and if you think "enlightened" is redundant.
 
You seem to be following my line of thought pretty well. I'll try to break it down bare bones:

I suspect that it is not possible to act in any way other than in self-interest.

Any act that we can call altruism is necessarily an act of self-interest.

If it is possible to act altruistically based on anything besides self-interest, then that altruism is no longer morally meaningful. To me, and probably you.

So, yes, people choose to do what they choose to do. They choose to do what they prefer to do. Acting on one's preferences is the same as acting in self-interest. To look at it any other way is to remove the meaning of a choice, in this context.

What do you think?

I think that "self-interest" and "intentionally" are not synonymous. I think that interpreting self-interest so broadly that, by definition, whenever anyone chooses to do anything at all, he has acted in his self-interest makes the claim of psychological egoism tautological and hence empty.

I don't think that self-interest and preference are the same thing. Self-interest must reflect something about the situation so that this choice is better for me than the other choice -- not just that I choose it because I feel obligated to choose it, but because it actually produces some tangible outcome which literally is in my own personal interest.

One may be motivated to sacrifice his life to save others because he thinks it's the right thing to do or because he loves them and is willing to die that they may live. In the former case, he is ignoring what is good for him in order to do what he thinks he is obligated to do. In the latter, he is giving up his own personal interest in order to further the interests of others. (There are other reasons for altruistic sacrifice, of course, but I think the reasoning is the same.)

Let's think of it in terms of utility, and pretend for a moment that the egoist actually calculates outcomes prior to choosing an action. If I sacrifice my life, I lose the sum of whatever happiness I might have had otherwise. Now, if I don't sacrifice my life, then I will surely feel pain of losing others to the grenade. I will miss them and I will likely feel survivor's remorse. But I sincerely doubt that the sum total of my utility in that case will be negative. I think it will be positive, even given the pain of their deaths. If this estimate is plausible, then clearly sacrificing my life for others is not genuinely in my self-interest.
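A toy version of that comparison, with entirely made-up numbers (nothing in the post assigns actual figures; this is only to make the arithmetic explicit), might look like this in LaTeX:

```latex
% Hypothetical utilities, in arbitrary units, for the two options discussed above.
% Requires amsmath for \text and \underbrace. The figures are invented for illustration.
\[
U(\text{sacrifice my life}) = 0 \quad \text{(no future happiness at all)}
\]
\[
U(\text{survive}) \approx \underbrace{100}_{\text{future happiness}} - \underbrace{40}_{\text{grief and remorse}} = 60 > 0
\]
% On these invented figures, surviving still comes out positive,
% so sacrificing one's life cannot plausibly be the self-interested choice.
```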

So, contrary to your suggestion, I think that people do truly selfless acts on occasion, and that the only means to deny this is either to conclude that people who do such acts are badly misinformed about their own self-interest, or to define self-interest so broadly that "selfless (intentional) act" is an oxymoron. The first option seems implausible and paternalistic, while the second involves a stipulative definition to make the question analytic and hence say nothing of substance about the real world.



Also, I am still hoping to get your opinion on the term "enlightened self-interest", and if you think "enlightened" is redundant.

I don't think it's redundant. It's a commonly used phrase to capture the fact that one may mistakenly believe that stealing from others is in his self-interest, but if he takes into account the possibility of being suspected and punished, either formally or informally, he may well realize that his naive view of self-interest was mistaken.

I've suggested, for instance, that an ideally rational person would not feel bound by his own moral preferences if he realized these are merely subjective opinions. But if he takes a careful view of his self-interest, he will still behave according to the moral preferences of society, more often than not. That's a reasonably "enlightened" view of self-interest.
 
Self(ish) interest: regard for self without regard for others. This is crucial, because in a universe where there is a single entity it does not and cannot apply. It allows for self-interest with or without regard for others, which is then the foundation of what is being debated: morality as a probabilistic byproduct of interaction.
 
Fair enough. Here's the beginning of an argument for the first norm I mentioned.

First, when I say that a statement is objective, I mean that any rational person acquainted with relevant evidence and arguments regarding the statement would come to the same conclusion regarding its truth. Thus, the usual observational claims about the world around us are objective, as are the claims of mathematics.


You can demonstrate something to me about the world around us. You could demonstrate a mathematical claim to me mathematically (I think. Really far from my field, with the math). But what could you demonstrate about a moral norm to show its truth? Can you give an example? If not, then can you describe how one might demonstrate it even in principle?



Now, I don't think that I can adequately define "rational being", but at a first pass, it is a being capable of reason, of distinguishing good argument from bad, of evaluating evidence and coming to the conclusion most supported by those factors with which he's acquainted. These are, I think, the characteristic features of rationality.

But these characteristic features are aimed at coming to a true conclusion over a false conclusion. Everything about uniquely rational capacities aims at drawing the right conclusion, where "right conclusion" really means "the one that is more likely to be true." Thus, all rational beings have a decided preference for truth, at least broadly and generally speaking, and hence all rational beings recognize that believing truth is better, broadly and generally speaking, than believing a falsehood.


Your appeal here is very appealing. I would like to think of myself as a rational person. And I would certainly agree that I prefer to engage with people who prefer truth over falsehood. I mean, if I expect to have a productive discussion with them, that is. The problem is, it is still just an appeal to someone-or-other. An appeal to all rational people (generally, most of the time)? An appeal to my audience, whom I am currently buttering up? Any which way, I think it is a rhetorical device better suited for a courtroom than the kind of discussion we are trying to have.

The other big problem you are having with your argument here is that it is trying to make an objective fact out of an ought statement, which I think is the problem you were attempting to solve in the first place.

(As usual, there is an issue that actual flesh-and-blood persons are imperfectly rational, which allows for the possibility that real persons can reject an objective claim even if they have enough knowledge to support it, but this is no more or less an issue for objective norms than for other objective statements.)


Again, I will ask you how one might begin to gain knowledge of an objective norm?



I've sketched the meaning of the word "objective" above.

A norm is, roughly, an ought statement or at least an evaluational statement of some form. For the most part, I think that "ought statement" suffices for our purposes.

So, all that's left is to distinguish moral norms from non-moral norms. First, moral norms are norms of practical, not theoretical, reasoning (i.e., they are about what we do, not what conclusions we draw). But it is not so easy, I think, to distinguish moral from non-moral practical norms. Roughly, I think the difference is that the latter exclusively refers to self-interested reasons for action, whereas the former includes concern for others, but this is not as precise as I'd like.

In practice, it is not usually all that difficult to distinguish one from the other, but I am not familiar with demarcation principles in practical philosophy.


At this point, I really don't think it's necessary to distinguish moral from non-moral norms. From my view, you are still stuck at the is-ought problem. Sure, if we want result x, the best course of action is y. But why x? Well, if we want z, then, of course, x. And on it goes: recursion. Recursion until we run out of alphabet, and even further than that, on and on until someone can produce . . .

ULTIMATE TURTLE!!!!
 
You can demonstrate something to me about the world around us. You could demonstrate a mathematical claim to me mathematically (I think. Really far from my field, with the math). But what could you demonstrate about a moral norm to show its truth? Can you give an example? If not, then can you describe how one might demonstrate it even in principle?

Below, you respond to an argument intended to demonstrate that a certain non-moral norm is objective. Let's focus on that for now (as you also suggested), since it is surely the easier task. If I can't convince you that there are objective non-moral norms, then I don't expect you to believe that objective moral norms are possible.

Your appeal here is very appealing. I would like to think of myself as a rational person. And I would certainly agree that I prefer to engage with people who prefer truth over falsehood. I mean, if I expect to have a productive discussion with them, that is. The problem is, it is still just an appeal to someone-or-other. An appeal to all rational people (generally, most of the time)? An appeal to my audience, whom I am currently buttering up? Any which way, I think it is a rhetorical device better suited for a courtroom than the kind of discussion we are trying to have.

I'm not trying to take a poll of rational persons. On the contrary, my point is (I hope) deeper than that.

It would be literally irrational to be indifferent between truth and falsity, generally speaking. My point is that a defining feature of rationality is precisely this preference -- we value reason for its capacity to produce true beliefs. To be indifferent between accepting truth or falsity is incompatible with being rational.

If that is correct -- and I think it is -- then it follows that all rational beings necessarily prefer truth over falsity. The necessity of this preference suffices to show that it is an objective norm. (See this post to recall the relevant definition of objectivity.)

The other big problem you are having with your argument here is that it is trying to make an objective fact out of an ought statement, which I think is the problem you were attempting to solve in the first place.

Well, that is precisely what I'm alleging: that the norm "It is better to believe truth than falsity," is indeed objectively true. I see no inherent contradiction in thinking that norms can be objective.


Again, I will ask you how one might begin to gain knowledge of an objective norm?

I should think that, generally speaking, it requires reflection on the nature of rationality, the relation between cause and effect, etc. I don't think I have a more illuminating answer than that.





At this point, I really don't think it's necessary to distinguish moral from non-moral norms. From my view, you are still stuck at the is-ought problem. Sure, if we want result x, the best course of action is y. But why x? Well, if we want z, then, of course, x. And on it goes: recursion. Recursion until we run out of alphabet, and even further than that, on and on until someone can produce . . .

ULTIMATE TURTLE!!!!

Sadly, not recursion but corecursion -- an infinite regress, if we buy that practical reasoning is like your illustration.

I think that it's not like that. I think that there is some final end in the regress you've mentioned, something that we desire simply for its own sake. For Aristotle, it was happiness; for Hume, it was more or less anything that we simply desired; for Mill, again, it was happiness (though not the same meaning as Aristotle).

Anyway, I mention that merely as an aside, and perhaps we shouldn't go down that path, because the existence and nature of "final ends" is not really relevant at present. Let's do as you suggest and see whether I can convince you that any of the norms of practical or theoretical reasoning are properly objective.

(As we think about this, we may also think about more formal norms, such as, "If one accepts 'P & Q', then he must also accept P." Surely, you don't think that whether this is a good norm of reasoning comes down strictly to opinion, do you?)
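For what it's worth, the norm in that last parenthetical is just the standard conjunction-elimination rule of logic. A minimal LaTeX rendering of it (my own notation, not anything from the post) would be:

```latex
% Conjunction elimination: from a proof of P and Q taken together, either conjunct follows.
% Requires amsmath for \text.
\[
\frac{P \land Q}{P}\ (\land\text{-elim}_1)
\qquad
\frac{P \land Q}{Q}\ (\land\text{-elim}_2)
\]
```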
 
You can demonstrate something to me about the world around us. You could demonstrate a mathematical claim to me mathematically (I think. Really far from my field, with the math).

Let's return to this. What you said about math made me think a bit.

Suppose that I carefully showed you a valid mathematical argument. I made sure that you understood every inference in the argument, and hence were certain of its validity, so that without a doubt, you concluded that the truth of the premises would ensure the truth of the conclusion.

Assuming that you were a person of basic reasoning skills, we could do this. And, indeed, the fact that a careful exposition of any valid mathematical argument would entail that the audience would assent, given basic abilities of reasoning, is what we mean when we say that mathematics is objective. It deals with the sort of stuff that any two rational persons should be able to come to an agreement on.[1]

And that is what I take objective to mean: that a rational person, given sufficient evidence, background information (?) and acquaintance with any relevant arguments would come to the correct conclusion about the truth value of the proposition at hand.

Now, let's suppose that we do this experiment with, oh, say the proof that sqrt(2) is irrational. Suppose that our friend Bob follows every step in the argument, but nonetheless rejects the conclusion which so clearly follows from the preceding argument. I believe we would think something's wrong with Bob. If we genuinely thought that he understood what each proposition in the argument means and that each step is valid, then we would think he surely ought to accept the consequence.

And we would be right. Given an argument such that one recognizes that each proposition is meaningful and that each step is a valid logical inference, one ought to accept that the premises entail the conclusion. To do otherwise is a failure of rationality, and every rational being would accept the above bolded statement.
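For concreteness, here is a rough LaTeX sketch of the classic proof being discussed (a standard reconstruction, not text from the post itself); each line is the sort of step Bob is supposed to follow:

```latex
% A standard sketch of the proof that sqrt(2) is irrational, by contradiction.
% Requires amsthm for the proof environment.
\begin{proof}
Suppose $\sqrt{2} = p/q$ for integers $p, q$ with $q \neq 0$ and $p/q$ in lowest terms.
Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even; write $p = 2k$.
Then $4k^2 = 2q^2$, so $q^2 = 2k^2$, hence $q$ is even as well.
But then $p$ and $q$ share the factor $2$, contradicting the lowest-terms assumption.
Therefore $\sqrt{2}$ is irrational.
\end{proof}
```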

So, just a thought: how is that bolded statement not an objective norm?

(Sorry, I do know that I'm not a terse writer. I hope my lengthy exposition doesn't bore you.)

[1] Here, I should point out that in practice, some valid mathematical arguments (like Cantor's proof) do not generate universal acceptance, but I think that we can attribute that to the limited rationality of actual people. Certainly, no one thinks that, just because some cranks don't follow Cantor's argument, mathematics must be subjective after all.
 
I think that "self-interest" and "intentionally" are not synonymous. I think that interpreting self-interest so broadly that, by definition, whenever anyone chooses to do anything at all, he has acted in his self-interest makes the claim of psychological egoism tautological and hence empty.


Honestly, I kinda know what tautology means, but not in its entirety. And there seem to be several definitions and connotations. Could you spell out how you mean it? Also, I'm just taking it on your word that my claims represent psychological egoism, but I'm alright with that if you are.

I don't think that self-interest and preference are the same thing. Self-interest must reflect something about the situation so that this choice is better for me than the other choice -- not just that I choose it because I feel obligated to choose it, but because it actually produces some tangible outcome which literally is in my own personal interest.



Aha! Now we're getting somewhere! We've been using different definitions of self-interest, obviously. So let's ditch self-interest, at least for now. Let's talk about preference, since we were at least able to agree that when the soldier jumped on the grenade, that was the choice that he preferred.

When you say "tangible outcome", does that include emotions?

One may be motivated to sacrifice his life to save others because he thinks it's the right thing to do or because he loves them and is willing to die that they may live. In the former case, he is ignoring what is good for him in order to do what he thinks he is obligated to do. In the latter, he is giving up his own personal interest in order to further the interests of others. (There are other reasons for altruistic sacrifice, of course, but I think the reasoning is the same.)



Right. I think that altruism is when someone makes a sacrifice for the benefit of another. I think it is like a sacrifice play in chess. Give up the queen in order to make an even greater gain, or to take a less severe punishment. And so with morality. The fact that people take the actions that they find the most rewarding, and/or the least punishing is the very thing that allows us to make moral evaluations of character or action.

The soldier jumping the grenade is a good scenario to discuss, because I think it pushes the limits of sacrifice. But maybe a more mundane case would make my position clearer. Let's imagine a little old lady living on a government pension. She lives mostly off of generic brand canned soup over bulk rice. But she gives a large portion of her cheque to charity. Maybe she does it because she empathizes with people that are even less fortunate than her, and it makes her happy to see other people happy. And/or she likes the feeling that it gives her to be the agent of this altruism and this satisfies her moral sense. Personally, I would rank those as good moral reasons for a good outcome. On the other hand, maybe she's a total meanie and her main motivation is to rub her superiority in the face of her sister out of pure spite. Well, still a good outcome on the charity side of things, but not something I would rank as well on my own moral scale. In any of these cases we see what they are like by seeing what they find rewarding.

Here I'll reiterate that I can't imagine an example of someone making a meaningful choice without that choice being based on what they prefer. I'm not sure what that would even mean? Before, I gave the example of someone infected by a brain parasite. Or maybe someone could be possessed by some kind of spirit, but I don't believe in that kind of thing. And whether it's a brain parasite or a spectral parasite, I don't think the actor could be said to be making choices in a meaningful way. Could you give some other kind of example?

Let's think of it in terms of utility, and pretend for a moment that the egoist actually calculates outcomes prior to choosing an action. If I sacrifice my life, I lose the sum of whatever happiness I might have had otherwise. Now, if I don't sacrifice my life, then I will surely feel pain of losing others to the grenade. I will miss them and I will likely feel survivor's remorse. But I sincerely doubt that the sum total of my utility in that case will be negative. I think it will be positive, even given the pain of their deaths. If this estimate is plausible, then clearly sacrificing my life for others is not genuinely in my self-interest.

So, contrary to your suggestion, I think that people do truly selfless acts on occasion, and that the only means to deny this is either to conclude that people who do such acts are badly misinformed about their own self-interest, or to define self-interest so broadly that "selfless (intentional) act" is an oxymoron. The first option seems implausible and paternalistic, while the second involves a stipulative definition to make the question analytic and hence say nothing of substance about the real world.

Okay, not sure why you chose this example with an egoist as the central character? The moral philosophy that I'm trying to put forth is a descriptive one. Its success or failure doesn't depend on some individual acting as if they believe in it or not. Also, I don't see the relevance of such an implausible scenario. As if anyone, in such an instance, thinks about that stuff?

I did find your comments on the definitions of self-interest provoking, too, but I am done for the night. Cheers til next time.
 
And that is what I take objective to mean: that a rational person, given sufficient evidence, background information (?) and acquaintance with any relevant arguments would come to the correct conclusion about the truth value of the proposition at hand.

Now, let's suppose that we do this experiment with, oh, say the proof that sqrt(2) is irrational. Suppose that our friend Bob follows every step in the argument, but nonetheless rejects the conclusion which so clearly follows from the preceding argument. I believe we would think something's wrong with Bob. If we genuinely thought that he understood what each proposition in the argument means and that each step is valid, then we would think he surely ought to accept the consequence.

And we would be right. Given an argument such that one recognizes that each proposition is meaningful and that each step is a valid logical inference, one ought to accept that the premises entail the conclusion. To do otherwise is a failure of rationality, and every rational being would accept the above bolded statement.

So, just a thought: how is that bolded statement not an objective norm?

There are two related "escapes."

They apply equally in mathematics and moral reasoning. The first is to show that the root assumptions are invalid or that the argument fails to include certain desirable properties of a system.

So, for example, in mathematics, I can explain how division works. Everything goes along fine until someone points out I have not accounted for division by zero - with zero as the denominator, my carefully constructed proof goes sideways. One has to be careful to define the properties allowed and disallowed going in. Those definitions are not part of the proof. We can call them a-rational, but they can form the basis of rejecting the logical train, even while acting as a rational being.

The second problem is in application. The entire process may be fine, and the rational person may agree with the conclusion, but there's still the step of proving it works beyond the conceptual realm. Again, this arises for both mathematics and moral systems.

This latter challenge is huge. I would not accept a conceptual framework that yields erroneous results, no matter how good the proof "on paper". We commonly do this with mathematics - if the math says light should act like a wave, but it only sometimes acts like a wave, then the math, not the experiment, is wrong. So too with moral reasoning. If the moral calculus tells me I should kill my offspring, it is likely I will judge it flawed, instead of accepting the outcome.

I think this is precisely what we see in the abandonment of religious values based on ancient texts. Here we have a moral system one might argue is objective. But we both dismiss the premises (a God giving us commands) and dismiss the specific results (slavery, et al). The moral system is judged, not simply on a rational basis, but with our own subjective morality.
 
Objective Morality: the belief that there is a super-natural force that imposes arbitrary rules upon us using a "Zero Tolerance" policy: everything is "Black" or "White". It is the wish to be free from thinking about how difficult morality is to discover and apply. It is the desire of an infantilized mind to be told what to do.
The appeal of objective morality is simply that, without it, we have to conclude that rape, murder and child abuse are only wrong in the sense that homosexuality is "wrong" in some countries or that being an independent woman is "wrong" in some countries.
 
The appeal of objective morality is simply that, without it, we have to conclude that rape, murder and child abuse are only wrong in the sense that homosexuality is "wrong" in some countries or that being an independent woman is "wrong" in some countries.

Why is that troubling?

Wouldn't it be even more troubling if it turned out that homosexuality or women's rights were objectively wrong? Is there some meta-rule in play to ensure my own preferred moral stance will turn out to be objectively correct?

Be careful what you wish for.
 
I suspect that it is not possible to act in any way other than in self-interest.

Any act that we can call altruism is necessarily an act of self-interest.

If it is possible to act altruistically based on anything besides self-interest, then that altruism is no longer morally meaningful. To me, and probably you.

So, yes, people choose to do what they choose to do. They choose to do what they prefer to do. Acting on one's preferences is the same as acting in self-interest. To look at it any other way is to remove the meaning of a choice, in this context.

If the adjective "self-interested" fits anything a human can do, you need two additional terms to define:
(a) My action produces some benefit for me without consideration of the harm done to other people.
(b) My action produces some benefit for other people without consideration of the harm caused to me.

(a) is usually called "egoism".
(b) is usually called "altruism".

Almost all moral theories consider (b) as "moral" and (a) as "immoral".

Do you agree? What do you call (a) and (b)?

NOTE: In order to defend some kind of "intelligent selfishness", you ought to show that (b) is more intelligent than (a). This is not easy.
 
