Richard T. Garner and "Beyond Morality"

This one is easy: If we allow people to steal from the weak, for example, then YOU could get things stolen from you when YOU become weak.

That does not mean that my stealing is not in my self-interest.

There are situations in which I could steal and no one would ever know that a theft had even occurred. So, my theft would not encourage others to steal or others to later steal from me. (Think of a deathbed swindle, for instance.)

So, a rational person, purely self-interested, would not adopt the rule, "Don't steal." He would adopt the rule, "Don't steal in situations which might encourage others to later steal from me." Oh, he'd probably tell people that he followed the simpler rule -- it's in his self-interest to lie like that -- but obviously he would, like every other rational being in similar situations, steal whenever it was purely in his benefit to do so.

To be sure, "Don't steal unless it benefits you," isn't what most of us would call a moral rule. But it's what people would follow if they do what you claim: act out of self-interest, only indirectly benefiting others.

In fact, it seems to me that your theory is drifting toward hedonistic egoism.
 
But what I think is that you haven't demonstrated how science reveals this to be moral in the first place.

I think your opponent will happily agree that science can give us better medicine and teach us how to make more and better crops but will ask you to prove how that is itself moral.

If we accept that "well-being" is the only stable value in which morality can exist, then it follows that utilizing science to improve our well-being would be moral.

You do realize that you didn't even address Soba's question, right?

He asked, "How does science reveal promoting well-being to be moral?"

Your response is, "If we assume promoting well-being is the only stable value, then it's good that science promotes it."

That's rather begging the question.
 
In the second case, it makes sense to not allow people to be pushed off of bridges. Because, someday, YOU could be the one pushed off a bridge.

What nonsense!

How am I more likely to be pushed off the bridge than I am to be saved by the act of pushing someone else off the bridge?

If I'm scared of being sacrificed by being pushed, I should also be scared of being sacrificed in the first scenario.

You have not explained our different emotional reactions in the least. This is an example where our moral intuitions are self-evidently incoherent, but you're trying to pretend that self-interest explains the incoherence. It does not and it cannot.

(There was a pretty good episode of Radiolab which examined how the incoherence in these judgments corresponds to a fight between different parts of the brain.)
 
(There was a pretty good episode of Radiolab which examined how the incoherence in these judgments corresponds to a fight between different parts of the brain.)

There has also apparently been some evidence that certain people with an impairment in one particular part of the brain actually have no difficulty saying they would push the fat man, and see the two questions as essentially asking the same calculation.
 
Yes, Wowbagger, the theory of altruism reducing to some form of perceived self-interest is called psychological egoism.

I really think that the concept is stretched to the point of being untenable in many cases of obvious altruism, and eventually becomes an unfalsifiable hypothesis.

Egoism is also inconsistent with his claim that we are ultimately motivated by well-being on a grand scale. He says that the only stable value for moral rules is one which leads to planet-wide good effects. I can't see how he squares this with egoism.
 
1. Individual or collective rights?
2. Socioeconomic or political rights?
3. What is well-being?
4. What is happiness?
5. What is better: more well-being for fewer people or less well-being for more people?
6. And so on.
Just because these are difficult questions doesn't mean they don't have answers that can be discovered.

But, in most cases, I think those are going to be false choices. For example: A comprehensive moral system would have more well-being for more people. We would not have to choose between "more well-being for fewer people" and "less well-being for more people". Most things in life are not really zero-sum games like that anymore.

Other problems arise when our opponent doesn't accept our criterion for deciding. This happens with people of different cultures and also with some notable intellectuals: Hume, Camus or Dostoevsky, for example.
It's not up to them to decide! Nature decides what objective truths are. We can try to tap into that, or not. People can accept them, or not. If not, then it's to their own peril.

No one "decides" what objective moral truths are. We discover and figure out what objective moral truths are, as accurately as we can.

Wow, did you misunderstand Smith.

He was not writing about "seemingly altruistic acts actually have hidden self-interest motives behind them." He was writing about seemingly self-interested acts which actually have effects beneficial to society as a whole.

Not the same thing at all.
I guess it does sound like I reversed that.

But, I also contend that when you benefit society as a whole, you, in turn, get further rewarded.

It's that much harder to say whether society has improved or not.
Just because it is a difficult question, does not mean it cannot be answered.

I wonder if you can support this claim. It's not obvious to me.
Steven Pinker wrote a really thick book on the subject, with tons of research. And his demonstration that violence has generally gone down a whole lot is largely uncontroversial among social scientists. So, I think it's a reasonable claim.

The two World Wars are rather hideous, and stand out like a sore thumb in all the numbers regarding violence. But, even taking them into consideration, the trend towards lower violence, overall, is apparent.

It's a declining saw-tooth chart, meaning there are setbacks where violence gets a little worse (or, on rare occasions, a LOT worse). But, the general trend is that violence levels lurch downwards over time.

You are equivocating on the use of "care for". I don't care about, say, Iraqi civilians out of any sense of self-interest
If, by allowing Iraqi civilians to be bombed, we were opening ourselves up to be bombed for the same reasons, THEN you WOULD care on a personal level, a LOT more! We cannot have such bombing of civilians any more, for that reason. And, humans, today, are more often going to be smart enough to recognize that, than we were in the days of such bombings.

Look, we agree, I assume, why bombing civilians is militarily useful: if we destroy enough cities, the enemy will capitulate, thereby saving lives of our ground forces.
Yeah, but that turns out to be very short-term, greedy thinking. Yes, we would save the lives of our own ground forces, but at the cost of destroying a good chunk of the world economy! It will end up costing us a LOT MORE, in the long run, in lost opportunities, than we would save in ground troops.

But, even that implies we must choose between one (troops) or the other (bombings). If we can develop a diplomatic solution to resolve our issues, then we don't need either one!

Yes, Wowbagger, the theory of altruism reducing to some form of perceived self-interest is called psychological egoism.
I was looking for that word, thanks!

The concept follows from what we would *expect* from our roots in Natural Selection.

And, one does NOT need to be aware of it happening. It is unlikely for someone to say out loud, "I should give blood because I (or someone I care about) can hope to get blood in return."

But, genes for inducing such altruistic actions tend to stick around longer than those that do not, because it works out better for those genes, in the end. And, we might not even be aware of that happening.

That does not mean that my stealing is not in my self-interest.
Yeah, but that is short-term, greedy thinking. Society found that it is better off not allowing such theft to take place. You, or someone you care about, could be the victim of such a theft, if it were arbitrarily allowed to happen.

Morality puts the brakes on things like that.

There are situations in which I could steal and no one would ever know that a theft had even occurred. So, my theft would not encourage others to steal or others to later steal from me. (Think of a deathbed swindle, for instance.)
Ah, but in this day and age of increased surveillance and information exchange, you are increasingly more likely to have your thievery discovered!

It is true that carefully planned thefts can still go undetected today. But, it is becoming increasingly difficult and risky to do such things.

In fact, it seems to me that your theory is drifting toward hedonistic egoism.
Ah, no. I think you have completely misunderstood my points, if you come to that conclusion about them.

Remember what I said about welfare consequentialism. That, it seems, would naturally overrule attempts at hedonistic egoism.

In other words: Truly hedonistic egoism is... unstable.

That's rather begging the question.
Perhaps I can develop a better defense on that. But, what I communicated seems to be the way morality naturally works. And, it is beyond our control to change that.

How am I more likely to be pushed off the bridge than I am to be saved by the act of pushing someone else off the bridge?
I think the more important point of the trolley problem is that most people have a NATURAL inclination against shoving people, and not one against manipulating inanimate objects. Even if the idea I am communicating is wrong, there are still reasons why that happens, and we can figure them out.

Trolleys are relatively new things in human history. Our morality was forged long before their existence. So, it does make sense that we would detect anomalies in our moral systems, with examples like that.

Egoism is also inconsistent with his claim that we are ultimately motivated by well-being on a grand scale. He says that the only stable value for moral rules is one which leads to planet-wide good effects. I can't see how he squares this with egoism.
Once we recognize that our own self-interests ARE, in fact, tied to the best interests of society, our psychological egoism transforms into caring about welfare consequentialism or "well-being" of a society.

I guess we could better call it "psychological grandscaleism"? Unless you have a better term.

Should that really be a surprise, though?
Our genes figured out, a long time ago, that it is in their own "selfish interests" to work well with other gene combinations. Individuals figured out that it is in their own self-interests (without quotes) to work well with other individuals. Etc.

The whole science of Game Theory demonstrates that, as well!
 
If, by allowing Iraqi civilians to be bombed, we were opening ourselves up to be bombed for the same reasons, THEN you WOULD care on a personal level, a LOT more! We cannot have such bombing of civilians any more, for that reason. And, humans, today, are more often going to be smart enough to recognize that, than we were in the days of such bombings.

Balderdash!

Whether the U.S. carpetbombs other cities has relatively little to do with whether I am likely to be in a city being bombed. Anyone who reasons thus is simply being irrational.

On the contrary, bombing your potential enemies to oblivion rather decreases the odds they will bomb you. This is almost a moot point, since the U.S. has great air superiority currently and will likely have the same superiority (and geographic advantages) for the remainder of my life.

Are you seriously suggesting, by the way, that I don't understand my own reasons for opposing carpet bombing of civilians?

Yeah, but that turns out to be very short-term, greedy thinking. Yes, we would save the lives of our own ground forces, but at the cost of destroying a good chunk of the world economy! It will end up costing us a LOT MORE, in the long run, in lost opportunities, than we would save in ground troops.

But, even that implies we must choose between one (troops) or the other (bombings). If we can develop a diplomatic solution to resolve our issues, then we don't need either one!

The last sentence is irrelevant, since I'm speaking of situations in which diplomacy has failed.

As far as the preceding paragraph, I simply don't see that this is the least bit plausible from my own self-interested perspective. It seems overwhelmingly more likely to me that my interests are best served by obliterating the enemy quickly and decisively so that my nation is no longer endangered (depending, of course, on possible side effects of doing so).

Insofar as I doubt your argument is sound, it cannot be the real, hidden reason I oppose targeting of civilians.
 
Yeah, but that is short-term, greedy thinking. Society found that it is better off not allowing such theft to take place. You, or someone you care about, could be the victim of such a theft, if it were arbitrarily allowed to happen.

Morality puts the brakes on things like that.

Wrong.

It is always in my long-term best interest to steal in situations where the crime will never be discovered and I will therefore never face a greater risk of being the victim of theft. In such (admittedly rare) situations, it is never in my best interest not to steal, because stealing benefits me with no possible loss.

You seem to pretend that if I do it, then others will do it, but in fact, in the situation I have in mind, my actions have no effect on the actions of others.

Ah, but in this day and age of increased surveillance and information exchange, you are increasingly more likely to have your thievery discovered!

So what? That modifies my point not a single bit.

It is true that carefully planned thefts can still go undetected today. But, it is becoming increasingly difficult and risky to do such things.


Ah, no. I think you have completely misunderstood my points, if you come to that conclusion about them.

Remember what I said about welfare consequentialism. That, it seems, would naturally overrule attempts at hedonistic egoism.

In other words: Truly hedonistic egoism is... unstable.

Then why are you taking such pains to pretend that the reason we don't steal or target civilians in war is mere self-interest? Your argument is incoherent, which is all the more bizarre, since you are clinging to an obviously weak position (we don't want to target foreign civilians because that somehow increases the likelihood that Iraq, say, will bomb the cities in which we live -- and if we think there's some other reason, then we're fooling ourselves).

I think the more important point of the trolley problem is that most people have a NATURAL inclination against shoving people, and not one against manipulating inanimate objects. Even if the idea I am communicating is wrong, there are still reasons why that happens, and we can figure them out.

Trolleys are relatively new things in human history. Our morality was forged long before their existence. So, it does make sense that we would detect anomalies in our moral systems, with examples like that.

The fact that the example involves trolleys is really irrelevant -- you know that, right? And, of course, we all agree that the reasoning of the trolley example shows that our moral intuitions are a nasty mess and require sorting out.

My point is that your solution is no solution at all. I no more fear being pushed than having the train shunted onto my path, so this fear cannot explain our different intuitions.

Once we recognize that our own self-interests ARE, in fact, tied to the best interests of society, our psychological egoism transforms into caring about welfare consequentialism or "well-being" of a society.

This is an obviously false claim. My best interests are served when I take advantage of society in ways that I will not be punished or indirectly suffer for. This is a clear and obvious fact.

I guess we could better call it "psychological grandscaleism"? Unless you have a better term.

Should that really be a surprise, though?
Our genes figured out, a long time ago, that it is in their own "selfish interests" to work well with other gene combinations. Individuals figured out that it is in their own self-interests (without quotes) to work well with other individuals. Etc.

The whole science of Game Theory demonstrates that, as well!

Er, no. Maybe you failed to understand the prisoner's dilemma. According to game theorists, the rational person is the one who defects when the game is played -- at least in one-off versions of the game. Game theory makes clear the unfortunate fact that acting in self-interest often ends up making everyone suffer, but the theory never says "We ought not to act thus!"
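To spell the dominance argument out, here is a minimal sketch in Python. The payoff numbers are just the usual textbook convention (T=5 > R=3 > P=1 > S=0), not anything taken from this thread:

```python
# One-shot prisoner's dilemma with the usual illustrative payoffs:
# temptation T=5 > reward R=3 > punishment P=1 > sucker S=0.
# PAYOFF[(mine, theirs)] is MY payoff.
PAYOFF = {
    ("C", "C"): 3,  # both cooperate: reward
    ("C", "D"): 0,  # I cooperate, they defect: sucker's payoff
    ("D", "C"): 5,  # I defect, they cooperate: temptation
    ("D", "D"): 1,  # both defect: punishment
}

for theirs in ("C", "D"):
    print(f"They play {theirs}: I get {PAYOFF[('C', theirs)]} by cooperating, "
          f"{PAYOFF[('D', theirs)]} by defecting")
# Whatever the other player does, defecting pays strictly more, so the
# purely self-interested player defects in the one-off game -- even though
# mutual cooperation (3, 3) beats mutual defection (1, 1).
```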
 
Er, no. Maybe you failed to understand the prisoner's dilemma. According to game theorists, the rational person is the one who defects when the game is played -- at least in one-off versions of the game.
Most things in life are not one-off games. Every interaction we make has an increased chance of further interactions down the road, leading to an (almost) always unknown number of future interactions.

Once we understand that, the rest of your arguments are easier to debunk.
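To illustrate, here is a toy simulation using the same textbook payoffs as above. The numbers and the tit-for-tat strategy are standard illustrations, not a model of any real society:

```python
# Iterated prisoner's dilemma: once games repeat, simple reciprocity
# (tit-for-tat) does far better than relentless defection.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): reciprocators prosper
print(play(always_defect, always_defect))  # (200, 200): defectors stagnate
print(play(tit_for_tat, always_defect))    # (199, 204): exploitation barely pays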

Then why are you taking such pains to pretend that the reason we don't steal or target civilians in war is mere self-interest?
I see we are getting a little muddled, here. I think the introduction of psychological egoism has become an unnecessary distraction. The larger point to consider is, in fact, welfare consequentialism or "well-being of a society".

Egoism is only a part of that, NOT the sole component, nor even the central component.

Whether the U.S. carpetbombs other cities has relatively little to do with whether I am likely to be in a city being bombed. Anyone who reasons thus is simply being irrational.
You might not consciously think in those terms, but as a whole, society has an eerie way of doing so. At least according to theory.

On the contrary, bombing your potential enemies to oblivion rather decreases the odds they will bomb you.
This is short-term thinking. Some OTHER country could come along and bomb you, using justifications similar to the ones you used to bomb the first country to oblivion. Is this REALLY controversial?

Might I suggest you read Steven Pinker's "The Better Angels of Our Nature". He goes into these sorts of things better than I do.

Are you seriously suggesting, by the way, that I don't understand my own reasons for opposing carpet bombing of civilians?
It is natural for humans to not always understand their reasons for opposing things.
The standard example is how people so often claim God opposes something. Since God does not really exist, there must be some other, hidden motivation that they themselves are not even aware of.

The last sentence is irrelevant, since I'm speaking of situations in which diplomacy has failed.
The decision to carpet bomb is often made under the assumption that there would be no diplomatic solution. Today, countries tend to try a LOT harder to find such solutions.

As far as the preceding paragraph, I simply don't see that this is the least bit plausible from my own self-interested perspective. It seems overwhelmingly more likely to me that my interests are best served by obliterating the enemy quickly and decisively so that my nation is no longer endangered (depending, of course, on possible side effects of doing so).
It may seem obvious to someone who has not studied all of the complicated economics involved. But, we have learned a LOT MORE about all of that since WW2, when the people in charge felt that way.

It is always in my long-term best interest to steal in situations where the crime will never be discovered and I will therefore never face a greater risk of being the victim of theft. In such (admittedly rare) situations, it is never in my best interest not to steal, because stealing benefits me with no possible loss.
That is an old-fashioned, outdated way of looking at it. We know A LOT MORE about the impact of theft, on society, than we did in the old days when egoism was more pure.

We know that escalation of 'undiscoverable' thefts will actually, eventually lead to more discovery of such theft!

There could also be a genetic component, though I am not sure if that is confirmed. The theory is that genes for being inclined to steal, if successful, will lead to the spread of more thieves. If those are slowly weeded out, over the natural course of generations, we are left with genes much less likely to steal.

I will grant you that there could even be an ESS (evolutionarily stable strategy) involved: A point at which there is an optimal number of thieves: Too many, and too much gets stolen. Too few, and people are less wary of them, opening themselves up to getting more stolen. There might be a happy medium where thievery is at a minimum level, but that level would not naturally be zero. (Even if we would prefer it to be zero.)

If the ESS theory is true (and there is some good science consistent with it), then your "always in my long-term best interest to steal..." claim is the one that is false. A small number of such crimes might accidentally end up being in our best interest, but not even most of them.
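Here is roughly what I mean, as a toy replicator-dynamics simulation. All the numbers are invented purely for illustration; the only point is the shape of the outcome:

```python
# Frequency-dependent payoffs: stealing pays when thieves are rare
# (victims are unwary) and fails when they are common.
def thief_fitness(x):      # x = fraction of thieves in the population
    return 4.0 - 10.0 * x  # made-up numbers, for illustration only

def honest_fitness(x):
    return 2.0             # baseline payoff, independent of x

x = 0.05  # start with 5% thieves
for generation in range(200):
    w_thief, w_honest = thief_fitness(x), honest_fitness(x)
    mean_fitness = x * w_thief + (1 - x) * w_honest
    x = x * w_thief / mean_fitness  # discrete replicator update

print(round(x, 3))  # settles near 0.2: a stable, nonzero level of thievery
```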

You seem to pretend that if I do it, then others will do it, but in fact, in the situation I have in mind, my actions have no effect on the actions of others.
One of the spookier things we learned about how members of society interact, is that this assumption does not always work out so well. If we find ourselves refraining from committing such crimes, there is a statistical likelihood that other people, facing similar pressures, would do the same. Of course, there is NO GUARANTEE that would be the case, but it appears to usually work out that way, on a statistical basis.

So what? That modifies my point not a single bit.
Surveillance and information exchange are Game Strategy changers! They make the strategy of being a thief less desirable.

The fact that the example involves trolleys is really irrelevant -- you know that, right?
Yes, I know. But, there were very few such equivalent dilemmas throughout most of our early history.

My best interests are served when I take advantage of society in ways that I will not be punished or indirectly suffer for. This is a clear and obvious fact.
And, historically speaking: People who act on those assumptions are usually the ones who, ironically, find themselves getting punished a lot more often! The assumption that you won't get caught carries with it an increasing statistical likelihood that you WILL, eventually, get caught!
 
Most things in life are not one-off games. Every interaction we make has an increased chance of further interactions down the road, leading to an (almost) always unknown number of future interactions.

Once we understand that, the rest of your arguments are easier to debunk.

You've debunked nothing at all. I've never denied that most things in life are not one-off games. But, on those rare occasions where acting in a selfish manner has no negative impact on me, if I am purely self-interested, I will be selfish.

This is my point. I think it's wrong to break deathbed promises, to carpet-bomb civilians, to steal in situations where my theft has no chance of being caught, and so on. And I think that because of the harm done to others, not some imagined risk that if I act like that, others will.

I see we are getting a little muddled, here. I think the introduction of psychological egoism has become an unnecessary distraction. The larger point to consider is, in fact, welfare consequentialism or "well-being of a society".

Egoism is only a part of that, NOT the sole component, nor even the central component.

If you'd like to drop the unlikely claim that every moral reason is fundamentally selfish, then we can move on.

You might not consciously think in those terms, but as a whole, society has an eerie way of doing so. At least according to theory.

This is short-term thinking. Some OTHER country could come along and bomb you, using justifications similar to the ones you used to bomb the first country to oblivion. Is this REALLY controversial?

Your "theory", such as it is, is far too vague to really explain anything.

We shunt the trolley to kill the one man, because we don't want to be one of the five on the other track.

We don't push the man on the track because we don't want to be pushed.

We don't carpet bomb, because we don't want to be bombed.

Bin Laden did attack the U.S., because apparently he would not mind being attacked by passenger planes.

We don't steal because we don't want others to steal from us. But we do push every legal advantage in business negotiations, because apparently we wouldn't mind others trying to legally take advantage of us.

These snippets don't count as a theory. You simply take a moral judgment and find some way -- often, some bit of magical thinking that, if I do this, others will do it to me -- to recast it as self-interest.

Might I suggest you read Steven Pinker's "The Better Angels of Our Nature". He goes into these sorts of things better than I do.

It is natural for humans to not always understand their reasons for opposing things.
The standard example is how people so often claim God opposes something. Since God does not really exist, there must be some other, hidden motivation that they themselves are not even aware of.

Er, no. No, that doesn't follow. Even if God doesn't exist, the presumption that those who invoke God really believe in God and fear eternal justice is enough to explain their behavior.

But, of course, not every moral realist invokes God in any case, so let's not pretend he does.

The decision to carpet bomb is often made under the assumption that there would be no diplomatic solution. Today, countries tend to try a LOT harder to find such solutions.

It may seem obvious to someone who has not studied all of the complicated economics involved. But, we have learned a LOT MORE about all of that since WW2, when the people in charge felt that way.

That is an old-fashioned, outdated way of looking at it. We know A LOT MORE about the impact of theft, on society, than we did in the old days when egoism was more pure.

We know that escalation of 'undiscoverable' thefts will actually, eventually lead to more discovery of such theft!

Weird. I know no such thing. I especially don't know that my rare opportunity to take something without risk statistically increases my chance of being caught.

There could also be a genetic component, though I am not sure if that is confirmed. The theory is that genes for being inclined to steal, if successful, will lead to the spread of more thieves. If those are slowly weeded out, over the natural course of generations, we are left with genes much less likely to steal.

I will grant you that there could even be an ESS (evolutionarily stable strategy) involved: A point at which there is an optimal number of thieves: Too many, and too much gets stolen. Too few, and people are less wary of them, opening themselves up to getting more stolen. There might be a happy medium where thievery is at a minimum level, but that level would not naturally be zero. (Even if we would prefer it to be zero.)

If the ESS theory is true (and there is some good science consistent with it), then your "always in my long-term best interest to steal..." claim is the one that is false. A small number of such crimes might accidentally end up being in our best interest, but not even most of them.

One of the spookier things we learned about how members of society interact, is that this assumption does not always work out so well. If we find ourselves refraining from committing such crimes, there is a statistical likelihood that other people, facing similar pressures, would do the same. Of course, there is NO GUARANTEE that would be the case, but it appears to usually work out that way, on a statistical basis.

NONSENSE!

This is pure magic. The thought that, if I don't do it even in situations where my action is undetectable, then others won't either is simple magical thinking.

Surveillance and information exchange are Game Strategy changers! They make the strategy of being a thief less desirable.

Yes, I know. But, there were very few such equivalent dilemmas throughout most of our early history.

And, historically speaking: People who act on those assumptions are usually the ones who, ironically, find themselves getting punished a lot more often! The assumption that you won't get caught carries with it an increasing statistical likelihood that you WILL, eventually, get caught!

I didn't speak of assumptions that I won't get caught. I'm speaking of those rare situations when I indeed won't be caught.

And, in fact, in other cases, theft is a choice-worthy act from pure prudential reasoning. My actions have negligible effect in increasing others' willingness to steal, so, as long as any negative consequences are outweighed by the reward of theft (if, for instance, the theft is unlikely to lead to my punishment, so that the expected benefit is positive), then I ought to steal.

You really should just drop this pretense that egoism leads to moral behavior. It is implausible in the extreme and not at all central to your claim, which was the (equally unjustified) theory that all morality aims at planet-wide well-being.
 
In fact, no, I don't think you are saying that.

You're claiming that whether a given person has a given "moral preference" can be objectively ascertained. This is a fact about the cognitive states of persons. This claim (which, of course, is far from being clearly demonstrated) is not about "objectively ascertaining the building blocks of morality".

You do see the difference, right?

No, obviously I don't think there is a difference. If there were some kind of unbridgeable gap between the two, my entire argument would fall apart. So it's a bit silly to expect me to say "yes of course" here and move along.

On the contrary: I think it is obvious that preferences are indeed the building blocks of morality. I can't think of any sensible definition of morality that would not be based on "X is preferable over Y", whether X and Y are outcomes, beliefs, or anything else really. I understand that this is seen as a contentious point, not only by you but by most philosophers, but I honestly don't think it should be. Preferences have to be the starting point of morality because they are literally the only thing that motivates us. Knowing what those preferences are tells us what we find morally desirable. It's only the first step, but it seems like a pretty solid first step to me.

There are points where my argument may indeed be wrong, but I don't think this is one of them.

My point is that I do not grant it as obvious that morality is about the consequences of actions. This claim requires argument.

What if I widen it a bit to "morality is about desiring one state of reality over another"? Would you agree that this is obviously true?

(Obviously this should not be taken to mean that the inverse is also true)

Let me refresh your memory. You said, "The consequences of actions, and whether or not they satisfy moral preferences, can be determined objectively." So, I'd like for you to objectively determine whether the NPR interviewee was, given your own personal "moral preferences", morally responsible for this outcome. How do you intend to "objectively determine" that?

I know what I said, and I stand by it. My claim is that moral issues can be answered objectively in principle. It is entirely unreasonable to ask me to prove that by doing it here and now. Your request is equivalent to replying to a claim that "advanced AI may one day be possible" with "Oh yeah? Let's see your source code".

Look, I'm willing to overlook many of your unjustified claims. I'm willing to grant, for the sake of argument, that perhaps some day we can objectively determine the "moral preferences" of an individual just by looking at his brain structure. Is that obviously true? Certainly not, but let's grant it.

But now you want to go further. You want to argue that we can take a certain causal chain and determine whether it "fits" the moral judgment criteria we found in a given brain. But it seems to me that we do not have a complete and coherent set of rules ("moral preferences") that unambiguously apply to every situation. It would be great if we did. There would be far fewer moral dilemmas. But that's not how the average human thinks. We are a jumble of conflicting, ambiguous, vague, incoherent moral rules of thumb. And if you really could objectively determine the moral preferences, that is what you would get -- a mess of so-called preferences that sway one this way or that, depending on how we think of the moral hypothetical.

Ah, now this I find a much better argument. Yes, I agree that human preferences tend to be somewhat contradictory. A problem that. However, let's use common sense here: In practice, humans are perfectly capable of deciding which of two outcomes they desire more if they are sufficiently distinct. Yes, a human may have difficulty deciding which of two similar cars is better, and yes the answer may vary depending on the way the question is framed / the time of day / the mood / their horoscope. But given that we are already capable of working towards a better future, albeit slowly and stumblingly, we already know that this is not an unsolvable issue.

Indeed, this is why moral philosophers aim to find a coherent theory of morality. Our intuitions (yes, yes, "preferences") are not well-developed for the task of determining what ought to be done. We would like, rather, to have an objective set of principles (NOTE: I mean that the principles themselves are objectively true, not that I can objectively determine whether a person accepts those principles) to unambiguously determine what ought to be done.

We can, of course, doubt whether the moral philosophers will be able to do this, but it is what they aim for when they look for an objective basis for morality.

What we would like is a simple manual of life that gives us clear instructions at all times. What we can get, at best, is the ability to logically determine how to satisfy our preferences in an optimal way. I think it makes more sense to settle for what is possible, instead of insisting that something can't be called an "objective moral theory" unless it does something that is logically impossible (i.e. tell us what we should do regardless of what we want in a matter that is desirable to us... in a way that can't be argued with).

(Again, I am very much confused why you would use "intuition" and "preference" interchangeably. They mean very different things. I just hope I'm not misinterpreting you as a result.)

I won't grant you this point, because you have given no reason to think that (1) there is a significant majority who holds nearly the same moral preferences or (2) this set of nigh-universal moral preferences is constant and does not change over time. (And, as before, I balk at calling this an "objective morality".)

I agree that both objections (1) and (2) are valid. (2) I think might not be that big of an issue, since there is no law saying that moral rules can't change, but (1) might be a deal breaker.

If you do not feel I should call this an objective moral theory, I am curious what you would call objective. Bear in mind that "moral is what my dog Sally thinks is moral" constitutes an objective moral theory under at least one definition. (personally I also feel that human morality should be about what humans want, but maybe I'm just crazy like that. :o )

I know several serious attempts at providing a rigorous justification for truly objective morality, including Kantianism and Utilitarianism. We may doubt whether these attempts are as successful as one would like, but they are not well-described by any of the three items above. In particular, philosophical theories of morality do not boil down to, "Morality is out there, somewhere."

I can't be certain about Kant, since I never managed to finish reading his bloody books, but I'm pretty certain Utilitarianism does not make the claim that morality comes from something other than human preferences. In fact the word "utility" rather strongly implies the opposite.

So far as I understand your claims, you're stuck guessing that (1) individuals have a coherent set of moral preferences and (2) there is a single, coherent set of moral preferences that is unchanging (else it would not count as universal) and used by nearly everyone. There are some other issues (such as the implicit claim that moral preferences are always or almost always consequentialist in nature), but these two are surely enough for now.

Actually, one of the main points of my argument was that 90% coherence/universality would still be pretty damn good. Certainly something that a solid ethical system could be based upon. So your objections are not nearly the hammerblows that you make them out to be, valid though they are.
 
[...]
What if I widen it a bit to "morality is about desiring one state of reality over another"? Would you agree that this is obviously true?

No. Deontological theories do not define morality in terms of end states.

(Obviously this should not be taken to mean that the inverse is also true)

I know what I said, and I stand by it. My claim is that moral issues can be answered objectively in principle. It is entirely unreasonable to ask me to prove that by doing it here and now. Your request is equivalent to replying to a claim that "advanced AI may one day be possible" with "Oh yeah? Let's see your source code".

Fair enough, let's look at the argument we both agree is stronger.

Ah, now this I find a much better argument. Yes, I agree that human preferences tend to be somewhat contradictory. A problem that. However, let's use common sense here: In practice, humans are perfectly capable of deciding which of two outcomes they desire more if they are sufficiently distinct. Yes, a human may have difficulty deciding which of two similar cars is better, and yes the answer may vary depending on the way the question is framed / the time of day / the mood / their horoscope. But given that we are already capable of working towards a better future, albeit slowly and stumblingly, we already know that this is not an unsolvable issue.

What we would like is a simple manual of life that gives us clear instructions at all times. What we can get, at best, is the ability to logically determine how to satisfy our preferences in an optimal way. I think it makes more sense to settle for what is possible, instead of insisting that something can't be called an "objective moral theory" unless it does something that is logically impossible (i.e. tell us what we should do regardless of what we want in a matter that is desirable to us... in a way that can't be argued with).

The problem is worse than that. So long as our "preferences" are incoherent, as shown by examples like the trolley problem, we shouldn't pretend that a simple calculation will be able to tell us what to do. It's rather similar to starting with an ad hoc collection of axioms, some inconsistent with others, and expecting a theorem prover to sort it all out in the end.

Evidently, we have a tendency to "prefer" to shunt the trolley onto the other track, to not push the man off the bridge, and to recognize that the two situations are morally equivalent even though we don't want to treat them that way. In simple terms, your approach is bound to suffer from GIGO unless we fix this morass of pre-theoretic "preferences" from the start.

(Again, I am very much confused why you would use "intuition" and "preference" interchangeably. They mean very different things. I just hope I'm not misinterpreting you as a result.)

Because, of course, I think it's a terrible misuse of the term "preference", but I've decided not to get more distracted by it than I have to.

Look, it's not that I "prefer" that rape is immoral or that I prefer, say, stealing to rape and that's where my moral rules come from. It's that I have a deep conviction that it is immoral, regardless of my preferences. "Preference" just doesn't enter into it.

Now, that said, I still don't want to be distracted by this dull semantic sidetrack.

I agree that both objections (1) and (2) are valid. (2) I think might not be that big of an issue, since there is no law saying that moral rules can't change, but (1) might be a deal breaker.

If you do not feel I should call this an objective moral theory, I am curious what you would call objective. Bear in mind that "moral is what my dog Sally thinks is moral" constitutes an objective moral theory under at least one definition. (personally I also feel that human morality should be about what humans want, but maybe I'm just crazy like that. :o )

An "objective moral theory" is one whose judgments can be justified as correct by any rational person suitably educated. This involves two distinct parts: that the fundamental principles of the theory are sufficient to draw clear, unequivocal judgments, at least given sufficient information about the acts in question, and that the principles themselves can be justified as correct in some way.

Now, I won't tell you that any theory I've seen satisfies these conditions, but this is what both the Kantians and the Utilitarians (and, for that matter, the hedonistic egoists) aim for. This is what I'd call an objective theory.

The difference can be illustrated using mathematics as an example. Persons not trained in mathematics have all sorts of ad hoc beliefs about, say, the nature of infinite sets, or numbers with infinite decimal expansions. If we had a machine to determine what ad hoc beliefs they genuinely have, we would be able to draw conclusions from these beliefs. We would not be surprised if the conclusions thus drawn were riddled with inconsistency.

On the other hand, if we start with a well-chosen set of axioms, we can draw conclusions in a systematic manner (hopefully) free from inconsistency. We wouldn't get a theory as beautiful and useful as ZF if we simply read the brains of men-in-the-street and cobbled together the most popular beliefs. Rather, we start with a careful selection of principles we have reason to believe are useful in some way.

I hope you can see the point of the analogy. I wouldn't regard the fact that I can read a jumble of half-thought quasi-mathematical principles from the brains of untrained persons an "objective theory of the infinite". And if I do the same with morality, I surely won't call that result an "objective morality".


I can't be certain about Kant, since I never managed to finish reading his bloody books, but I'm pretty certain Utilitarianism does not make the claim that morality comes from something other than human preferences. In fact the word "utility" rather strongly implies the opposite.

You're equivocating on the use of the word "preference". The so-called "moral preferences" that you've been talking about are not the same as the use of preferences in utilitarianism.

We start with an argument that an act is moral insofar as it increases happiness in the aggregate. This is the principle of utility. We didn't get that principle by imagining that we can read minds of everyone on earth and they mostly agree that this is the right principle. Rather, Mill (for instance) offers a clear argument to the effect that this is the right principle.
 
For example: A comprehensive moral system would have more well-being for more people. We would not have to choose between "more well-being for fewer people" and "less well-being for more people".

You are assuming a utopian situation: a land of milk and honey. That is not the actual human condition, given the limits on consumable goods.

The issues that I have raised go to the root of moral principles. If you maintain that these problems can be solved by a scientific or objective check, you should be able to imagine an experiment to resolve them. As you know, science explains things by the hypothetico-deductive method, based on deducing the implications of a hypothesis and verifying them in experience. I wonder how an experiment could be done to resolve the problems that I have raised.


Nature decides what objective truths are. We can try to tap into that, or not. People can accept them, or not. If not, then it's to their own peril.

"Nature"? What is “Nature”? I have a cancer. “Nature” has decided I have to die in one year. Believe me that I do not intend to obey to “Nature” and I hope to overcome my cancer with artificial means and live much more than one year.

Perhaps we don't speak of the same "Nature". Can you scientifically define "Nature", please?

People can accept them, or not. If not, then it's to their own peril.

Here you are objectively wrong. Not only did Hume, Camus and Dostoevsky practise the alternative moralities of which I spoke above, but they were celebrated people and died without any peril deriving from their beliefs. A lot of people think that their own family, their own freedom and their own person are more important than common morality, and they live without danger. Furthermore, many people are clever egoists who convince others that they are working for the general well-being, and become Founding Fathers or the like.

The idea that Nature punishes the bad guys is illusory and derives from the religious belief in a God of Justice. It is a consolation belief. No. Nature is morally indifferent; cold, frozen I would say. Leave Nature alone and let us search for morality elsewhere.
 
NONSENSE!

This is pure magic. The thought that, if I don't do it even in situations where my action is undetectable, then others won't either is simple magical thinking.

I know this wasn't directed at me, but there's actually a valid argument to be made for this kind of reasoning. Let's say you clone someone and put them and their clone in a prisoner's dilemma type situation. You can safely assume that no matter what they choose, they will both choose the same option (unless they are choosing randomly, or are really undecided, or something happens after cloning that sways their decision...). In this case, it is entirely rational for the original to choose to cooperate, purely based on the reasoning that the other will do so as well, and it is more desirable for both to cooperate than it is for both to defect. The same argument can be made for any two people that are sufficiently similar.
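A quick way to see it, with the usual illustrative payoffs (T=5 > R=3 > P=1 > S=0; the numbers are just the textbook convention, nothing more):

```python
# payoff[(mine, theirs)] is MY payoff in the prisoner's dilemma.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Against an INDEPENDENT opponent, defection dominates:
for theirs in ("C", "D"):
    assert payoff[("D", theirs)] > payoff[("C", theirs)]

# Against a perfect CLONE, my choice fixes theirs, so the only reachable
# outcomes are (C, C) and (D, D) -- and now cooperating strictly wins:
print(payoff[("C", "C")], ">", payoff[("D", "D")])  # 3 > 1
```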

I'm not saying this to get into another separate argument with you, I merely wanted to point out that you shouldn't accuse others of magical thinking so readily. Sometimes there's more to an argument than you think.

No. Deontological theories do not define morality in terms of end states.

And according to the phlogiston theory, phlogiston is necessary to create fire. However, modern theories of physics don't include phlogiston, because it is outdated. Why would a moral theory need to take into account deontological theories, if there is no reason to believe those theories to be correct? There have been plenty of philosophers who go around saying "X is intrinsically good" or "everyone should follow rule Y", but without any compelling argument of why this should be the case, I don't see why I'd have to take those theories seriously.

The problem is worse than that. So long as our "preferences" are incoherent, as shown by examples like the trolley problem, we shouldn't pretend that a simple calculation will be able to tell us what to do. It's rather similar to starting with an ad hoc collection of axioms, some inconsistent with others, and expecting a theorem prover to sort it all out in the end.

This argument seems similar to one I often hear about theoretical limitations of Bayesian methods, such as intractability. "Oh sure, it works in practice, but does it work in theory?" always seemed a bit of a strange objection to me. If our primitive brains can work with our admittedly incoherent preferences right now, why would you think the same could not be done mathematically? Do you think mathematical formulas are somehow more limited than our human brains?

I'll agree that it couldn't be done with a "simple" calculation, though. At least, not without massive simplification.

Evidently, we have a tendency to "prefer" to shunt the trolley onto the other track, to not push the man off the bridge, and to recognize that the two situations are morally equivalent even though we don't want to treat them that way. In simple terms, your approach is bound to suffer from GIGO unless we fix this morass of pre-theoretic "preferences" from the start.

Wait, what? Why would you say that the two situations are morally equivalent? Consequentialism does not say that at all. I'd say people's intuitions here are entirely right to regard the two situations as different.

Perhaps the issue here is that you are taking a narrower view of consequentialism than I am? The way I see it, consequentialism takes into account ALL consequences, including your own feelings of moral disgust, society's reaction of moral disgust, all practical concerns etc. The way I see it, it's perfectly rational for a consequentialist to adhere to a strict set of moral rules, simply because he estimates that such a set of rules would result in better expected consequences overall.

If you assume consequentialism to mean that you are only allowed to maximise lives lost/saved right now, without taking any other factors or common sense into account, then I can see why you would not consider it to be a no-brainer issue.

An "objective moral theory" is one whose judgments can be justified as correct by any rational person suitably educated. This involves two distinct parts: that the fundamental principles of the theory are sufficient to draw clear, unequivocal judgments, at least given sufficient information about the acts in question, and that the principles themselves can be justified as correct in some way.

*snipped for length*

I think you are failing to distinguish between two aspects of objectivity. In your example, objectivity of mathematics always means that the correct answer does not depend on the subject, namely the person you ask the question. Only the output is right/wrong, because the input isn't about people. The moment you ask a question about people, there are two parts of the question that might be called objective/subjective: The input and the output. For example, if I ask "What is the probability that smoking will give you cancer?", then the answer depends on the subject that you are examining. The input is subjective! However, once you decide which person you are examining, the entire process is merely a matter of doing the statistics correctly. The answer may still be incorrect/inaccurate, but the process is objective!

When you ask someone whether statistical research on cancer is objective or subjective, people will usually say "Objective!", because the entire process is handled objectively. However, when the subject is morality, people tend to confuse these two types of objectivity and insist that a moral system isn't objective unless BOTH input AND output are objective. In order to work around this confusion, I distinguish between "universality", which means the degree by which the output is independent of the input (more similarity amongst humans --> more universality of human morality) and "objectivity" which I use in the same sense as statistics on cancer being objective, namely the process being objective.
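To make the cancer analogy concrete, here is a tiny sketch. The data below are invented; only the shape of the procedure matters:

```python
# "Subjective input, objective process": the answer depends on which
# subject you examine, but computing it is a mechanical, checkable step.
records = [  # (group, developed_cancer) -- invented toy data
    ("smoker", True), ("smoker", True), ("smoker", False), ("smoker", False),
    ("nonsmoker", True), ("nonsmoker", False), ("nonsmoker", False),
    ("nonsmoker", False), ("nonsmoker", False), ("nonsmoker", False),
]

def p_cancer(group):
    outcomes = [cancer for g, cancer in records if g == group]
    return sum(outcomes) / len(outcomes)

print(p_cancer("smoker"))     # 0.5   -- the input (whom you examine) varies...
print(p_cancer("nonsmoker"))  # ~0.17 -- ...but the procedure is objective
```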

This is a bit of a long answer, but please please please take the time to fully understand what I mean. There is nothing more frustrating than having an ethics debate run aground because of loose and shifting definitions. This is pretty much what happens to me every time I defend this ethical theory, where people will object that it is not objective(input), and when I show that it is, people argue that I haven't shown that it is really objective, where they mean objective(output), and when I show that it is, people go back to claiming that it's not objective(input) and the whole process repeats itself ad infinitum without anybody noticing they keep using the same word to denote different concepts.

In the future, if you say that my theory is not objective, please tell me in which sense it's not objective. Or just copy the universality vs objectivity distinction I laid out.
 
The moment you ask a question about people, there are two parts of the question that might be called objective/subjective: The input and the output. For example, if I ask "What is the probability that smoking will give you cancer?", then the answer depends on the subject that you are examining. The input is subjective! However, once you decide which person you are examining, the entire process is merely a matter of doing the statistics correctly. The answer may still be incorrect/inaccurate, but the process is objective!

What definition of "subjective" are you using here?

The standard definition is something like "Based on or influenced by personal feelings, tastes, or opinions". You seem to be using a definition more like "Varies between individuals" which tends to be a consequence of subjective judgement, but not all things that vary by individual are subjective. For instance, your smoking example is not subjective. The input is variable. But one's chances of getting cancer aren't based on personal opinions or feelings.

Height varies by individual, but height is in no way subjective.

That an objective answer is universal is a consequence of the definitional fact that it does not depend on opinions and feeling.
 
I know this wasn't directed at me, but there's actually a valid argument to be made for this kind of reasoning. Let's say you clone someone and put them and their clone in a prisoner's dilemma type situation. You can safely assume that no matter what they choose, they will both choose the same option (unless they are choosing randomly, or are really undecided, or something happens after cloning that sways their decision...). In this case, it is entirely rational for the original to choose to cooperate, purely based on the reasoning that the other will do so as well, and it is more desirable for both to cooperate than it is for both to defect. The same argument can be made for any two people that are sufficiently similar.

I'm not saying this to get into another separate argument with you, I merely wanted to point out that you shouldn't accuse others of magical thinking so readily. Sometimes there's more to an argument than you think.

Well, I don't know what I think of this example, aside from the fact that even if it "worked" for clones (who have, presumably, been raised so similarly that they have almost no behavioral differences due to experiences), that hardly makes it relevant here. The situations in which this thinking works are exceedingly remote at best, and even then, I'm not sure there isn't a fundamental error regarding causality.

But, let's let it pass, since it's a somewhat fanciful aside, not at all relevant to calculating whether or not my commission of an undetectable crime literally makes it more likely others will victimize me.

And according to the phlogiston theory, phlogiston is necessary to create fire. However, modern theories of physics don't include phlogiston, because it is outdated. Why would a moral theory need to take into account deontological theories, if there is no reason to believe those theories to be correct? There have been plenty of philosophers who go around saying "X is intrinsically good" or "everyone should follow rule Y", but without any compelling argument of why this should be the case, I don't see why I'd have to take those theories seriously.

I suppose you have to realize that the fact you think morality is consequentialist and that deontologism is stoopid is not an argument. The fact is that deontologism is still taken seriously as a moral school, so you shouldn't take for granted that morality is consequentialist.

This is a skeptical forum. You don't get to simply say, "Obviously, I'm right! So, why should I take other theories seriously?"

More to the point, you haven't shown any reason to believe that, if we "read" the moral "preferences" of others, we'll find that they are uniformly consequentialist. Even if you think morality "ought" to be about consequences, you do not know whether this is how people actually think. So, perhaps you shouldn't presume that you'll exclusively find consequentialist moral preferences in the addled noggins of your hypothetical subjects.


This argument seems similar to one I often hear about theoretical limitations of Bayesian methods, such as intractability. "Oh sure, it works in practice, but does it work in theory?" always seemed a bit of a strange objection to me. If our primitive brains can work with our admittedly incoherent preferences right now, why would you think the same could not be done mathematically? Do you think mathematical formulas are somehow more limited than our human brains?
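
To illustrate with a toy example -- coin bias, nothing to do with moral preferences specifically -- even where exact Bayesian inference is intractable in theory, a crude grid approximation is often perfectly serviceable in practice:

```python
import numpy as np

# Estimate a coin's bias from 7 heads in 10 flips using a grid
# approximation: a flat prior over 99 candidate biases, updated by
# the likelihood of the observed data.
theta = np.linspace(0.01, 0.99, 99)       # candidate biases
prior = np.ones_like(theta) / theta.size  # flat prior

heads, flips = 7, 10
likelihood = theta**heads * (1 - theta)**(flips - heads)

posterior = prior * likelihood
posterior /= posterior.sum()              # normalise

print(theta[posterior.argmax()])          # posterior mode, ~0.7
```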

I'll agree that it couldn't be done with a "simple" calculation, though. At least, not without massive simplification.

If our moral "preferences" are incoherent, then I see no reason to have any faith in the results of such calculations. It's not the same as using Bayesian methods to update a small portion of our beliefs. The issue here is that incoherence in moral "preferences" is a regular feature of our deliberations, and it's precisely when we are torn this way and that that we need a method for choosing what to do.

If your proposal works, it will work in those cases where there is no dilemma at all, because you only aim to apply the moral "preferences" we already have. Dilemmas occur when we have various such "preferences" and cannot choose between them.

Wait, what? Why would you say that the two situations are morally equivalent? Consequentialism does not say that at all. I'd say people's intuitions here are entirely right to regard the two situations as different.


Perhaps the issue here is that you are taking a narrower view of consequentialism than I am? The way I see it, consequentialism takes into account ALL consequences, including your own feelings of moral disgust, society's reaction of moral disgust, all practical concerns, etc. On that view, it's perfectly rational for a consequentialist to adhere to a strict set of moral rules, simply because he estimates that such a set of rules would result in better expected consequences overall.

If you assume consequentialism to mean that you are only allowed to maximise the count of lives saved right now, without taking any other factors or common sense into account, then I can see why you would not consider it to be a no-brainer issue.
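
As a toy tally (every weight below is invented for illustration, with one life as the unit), those extra consequences just enter as additional terms:

```python
# Net value of an act: lives saved minus lives lost, minus invented
# penalty terms for moral disgust and erosion of social trust.
def net_value(saved, lost, disgust=0.0, trust_erosion=0.0):
    return saved - lost - disgust - trust_erosion

lever = net_value(saved=5, lost=1)                                 # 4.0
push = net_value(saved=5, lost=1, disgust=0.5, trust_erosion=1.0)  # 2.5

print(lever, push)  # large enough side effects could even flip the ranking
```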

My own personal feelings of guilt, effects on others' sense of empathy, etc., are inconsequential when we are considering the good done by saving an additional four lives, as far as I can tell, so yes, I think a consequentialist would say that in both cases, one should be sacrificed (whether by pushing or pressing a lever) in order to save five.

I don't think it's even close, from a consequentialist's point of view.

I think you are failing to distinguish between two aspects of objectivity. In your example, the objectivity of mathematics always means that the correct answer does not depend on the subject, namely the person you ask. Only the output can be right or wrong, because the input isn't about people. The moment you ask a question about people, there are two parts of the question that might be called objective or subjective: the input and the output. For example, if I ask, "What is the probability that smoking will give you cancer?", then the answer depends on the subject you are examining. The input is subjective! However, once you decide which person you are examining, the entire process is merely a matter of doing the statistics correctly. The answer may still be incorrect or inaccurate, but the process is objective!

When you ask someone whether statistical research on cancer is objective or subjective, people will usually say "Objective!", because the entire process is handled objectively. However, when the subject is morality, people tend to confuse these two types of objectivity and insist that a moral system isn't objective unless BOTH input AND output are objective. To work around this confusion, I distinguish between "universality", meaning the degree to which the output is independent of the input (more similarity among humans --> more universality of human morality), and "objectivity", which I use in the same sense as statistics on cancer being objective, namely that the process is objective.
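
Here's a minimal sketch of that input/output distinction (the numbers are made up, not real epidemiology): the input, how much the subject smokes, varies from person to person, while the procedure mapping input to risk is fixed and opinion-independent.

```python
# Hypothetical risk function: the input varies per subject, but the
# mapping from input to output never consults anyone's opinions.
def cancer_risk(pack_years: float) -> float:
    baseline = 0.01  # invented base rate
    return min(1.0, baseline + 0.004 * pack_years)

print(cancer_risk(0))    # non-smoker   -> 0.01
print(cancer_risk(30))   # heavy smoker -> 0.13
```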

This is a bit of a long answer, but please please please take the time to fully understand what I mean. There is nothing more frustrating than having an ethics debate run aground because of loose and shifting definitions. This is pretty much what happens to me every time I defend this ethical theory, where people will object that it is not objective(input), and when I show that it is, people argue that I haven't shown that it is really objective, where they mean objective(output), and when I show that it is, people go back to claiming that it's not objective(input) and the whole process repeats itself ad infinitum without anybody noticing they keep using the same word to denote different concepts.

In the future, if you say that my theory is not objective, please tell me in which sense it's not objective. Or just copy the universality vs objectivity distinction I laid out.

Let me point out that the meaning of "objective" in this context is fairly well-settled, and it doesn't mean, "I can objectively determine whatever subjective moral principles you have."

That said, let's not quibble over semantics. My point really is that you've proposed nothing better than a calculating device for each person to determine what his own subjective moral "preferences" require. This is, at heart, an embrace of pure relativism, an attempt to help each person become a bit better at acting consistently with his addled, pre-theoretic "preferences". If you want to call that program "objective", go ahead, but you should be aware that you are inviting confusion, since anyone with even a passing familiarity with actual moral philosophy will think you mean something quite different.
 
What definition of "subjective" are you using here?

I am describing two different definitions of objective/subjective, based on the way I see them being used around here and in other debates. I am saying that I routinely see people say things like "using science to determine what a person's desires are isn't objective, because it only measures what a person wants". I think that this kind of thinking only results in confusion, and so I proposed a distinction between "objective" (the result can't be argued with) and "universal" (applies to everyone).

That an objective answer is universal is a consequence of the definitional fact that it does not depend on opinions and feelings.

Unless I'm misinterpreting you somehow, this flat out contradicts what you just said. For example, you agreed that the smoking example is objective, but it's not universal. Things like health benefits are almost always about average effects, meaning that what may be healthy for one person may be unhealthy for another. No universality there.
 
I am describing two different definitions of objective/subjective, based on the way I see them being used around here and in other debates. I am saying that I routinely see people say things like "using science to determine what a person's desires are isn't objective, because it only measures what a person wants". I think that this kind of thinking only results in confusion, and so I proposed a distinction between "objective" (the result can't be argued with) and "universal" (applies to everyone).

That an objective answer is universal is a consequence of the definitional fact that it does not depend on opinions and feelings.

Unless I'm misinterpreting you somehow, this flat out contradicts what you just said. For example, you agreed that the smoking example is objective, but it's not universal. Things like health benefits are almost always about average effects, meaning that what may be healthy for one person may be unhealthy for another. No universality there.

I'm 5'10". if you measure my height, you will always get that.
My friend Bob is 5'7" a height measurement isn't universal between us, but that doesn't make it subjective, it's just variable between things being measured. The universality of objective measurement doesn't mean that everyone you measure will be the same height, just that everyone measuring Bob or I will pull the same numbers regarless of anyone's feelings.

Variable measurements for different subjects have absolutely nothing to do with objectivity and subjectivity. Averages as predictions vs. individual results likewise have nothing to do with it.

The only question relevant to determining whether something is objective is whether it's dependent on opinions and feelings or not.

What you're proposing is like saying the weight of a rock is subjective because different rocks have different weights.
 
Well, if you call that a genetic basis, then of course everything a human can possibly do (mathematics, ballet, sitcoms, sneezing, suicide, ditch-digging, and so on) has a genetic basis. That's a pretty trivial claim. We won't get much mileage from that, I think.

I disagree. You will get all the mileage possible, because that is exactly right. Everything we do is a result of human genetics, and that includes both moral and immoral behaviour, and that's all there is to it. There are no moral universals. A spider society would have some rules that are very like ours and some that would horrify us -- but they would be fine by spider standards.

But, in context, the claim about morality was supposed to mean something more -- namely, that we can make non-trivial discoveries about morality using what we know about natural selection (if I understood correctly). I don't see that this fact (humans are capable of moral reasoning, and humans are the product of natural selection) leads to a great payoff.

It doesn't, because there is no great payoff. This is why I think the whole question is a bit silly. People do what they do because they are people. There are no grounds whatever for claiming that what constitutes good behaviour for humans is universally good in some way other than good for human survival in a human environment. I think seeking moral universals is hunting for unicorns.

What corresponding deep facts will we learn about ballet?

Look at any athletes -- dancers, divers, sprinters, long-distance runners. Look at the difference in body shape. If you can't see gene selection at work, I don't think you are looking hard enough. If all human society consisted of ballet dancers, divers and runners, we would be speciating like crazy. We're not, of course, because in a gene pool of 7 billion, any variation is slight.

ETA: I should note as well that your description of football didn't have much to do with natural selection. A creationist could also see that humans have hands and feet and come to the same conclusions. This is reasoning based on the form of humans (which, yes, comes from natural selection), but uses nothing about natural selection per se.

Remember that among humans, "natural" selection is often exaggerated by conscious selection. The people who play pro sport are both self-selected (they have the ability and the desire) and extraneously selected by scouts and trainers who know what to look for in a good exemplar of a specific sport.
Team sports are in fact a microcosm of society. Behaving anti-socially (e.g. not trying hard enough, not passing the ball, not "being a team player") will get you kicked out fast. That's selection at the sharp end. The same sort of behaviour in the wider realm of life can have similar, if slower and less obvious, results.
 
I'm 5'10". if you measure my height, you will always get that.
My friend Bob is 5'7" a height measurement isn't universal between us, but that doesn't make it subjective, it's just variable between things being measured. The universality of objective measurement doesn't mean that everyone you measure will be the same height, just that everyone measuring Bob or I will pull the same numbers regarless of anyone's feelings.

Variable measurements for different subjects have absolutely nothing to do with objectivity and subjectivity. Averages as predictions vs. individual results likewise have nothing to do with it.

The only question relevant to determining whether something is objective is whether it's dependent on opinions and feelings or not.

Right, so would you agree that the act of measuring someone's opinions and feelings is objective, even though the answer does technically depend on someone's opinions and feelings?

And would you therefore also agree that measuring people's moral values, and then analysing which course of action best satisfies them, would yield an entirely objective way of determining how to satisfy human moral values?
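
Something like this minimal sketch (people, options, and scores all invented for illustration): the measured values are subjective inputs, but the aggregation procedure itself is fixed and objective.

```python
# Each person's measured support for each policy option (subjective
# inputs); the aggregation rule below is the same for any inputs.
measured = {
    "alice": {"ban": 0.2, "tax": 0.7, "nothing": 0.1},
    "bob":   {"ban": 0.1, "tax": 0.5, "nothing": 0.9},
}

def best_action(values):
    options = next(iter(values.values()))
    totals = {o: sum(person[o] for person in values.values()) for o in options}
    return max(totals, key=totals.get)

print(best_action(measured))  # -> tax (total 1.2 beats ban 0.3, nothing 1.0)
```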
 