Richard T. Garner and "Beyond Morality"

Prove to me that moral values tend towards a single, well-defined limit over time. Show me how we arrive at this conclusion.
The evidence comes in two important forms: Historical and experimental. (There is a third I promoted earlier: Matching theoretical evidence, but we will brush that one aside, for now, since it requires its own set of sub-evidences.)

For historic evidence, we have data that clearly shows throughout human history, these general trends (with occasional setbacks):
1. Violence has gone down
2. Mortality across various ages has gone down, especially among infants
3. Health has gone up
4. Economies are more stable
5. Average Wealth has gone up
6. Intelligence has gone up
7. Altruistic behavior, among people and countries, has gone up
8. Suffering has gone down
Etc.

Every element used to measure well-being seems to have generally improved over time.

For experimental evidence, we have data that:
1. Groups that are more social outcompete groups that are more aggressive when placed in direct competition. And the aggressive ones are more likely to become social, given the opportunity, than the social ones are to become aggressive. This has been shown in animals, at least.
2. Strong in-group/out-group tendencies will form, even when designation to a group is clearly random. But, that these will fade if both groups find themselves trying to solve a common problem, or fight a common enemy.
3. People will formulate rules that are more fair towards everyone, when they don't know what role they will play in that society.
4. Several economic findings about resource sharing working out to everyone's benefit, and how even greedy societies will seek these deals out once they know such deals can exist.
Etc.

None of those directly supports the claim, but they are consistent with what we would expect if the claim were true. And, some of them could be modified to test it more directly.

The second experiment becomes more important, as members of every given group on Earth start to realize that we are all facing common problems.

The third becomes more important as elites start to realize that the elitism of their own descendants is not guaranteed. A wealthy man might secure tremendous resources for his children. But, those fortunes could still be lost. The more generations you go down, the more likely the dynasty will crumble. So it pays elitists to be a little "Rawlsian" as well.

Rather, one requires an argument that they actually are what we should value, independently of what we do value -- or worse, what we value "in the limit".
I can show that it becomes increasingly irresistible for any one person to feel they should value that which seems to be better for their society, over time.

And, it becomes increasingly important to them, that they change their minds, if it turns out they were wrong.

Oh goody! Another claim we ought to accept on your authority!
He admits it. But, he is not claiming to have an objective morality, so it does not matter for his arguments. It matters more to my end of the issue, since I am the one making the Objective claim.

Morality is, apparently, like magic!
It is no more magical than a giant termite mound, or a swarm of bees, or a school of fish, etc.
 
To that end, I would first ask this: IF God existed, and God made decrees about what we should and should not do, would that (hypothetically) count as an Objective Morality? Most people, including error theorists, would seem to say yes. If not, then the word would have no meaning, and it would be pointless to continue on those grounds.

Actually, most philosophers from Socrates or Plato onwards reject that idea.

In one formulation the Euthyphro problem is posed like this:

Is what is morally good commanded by God because it is morally good, or is it morally good because it is commanded by God?

The upshot is that either things are morally good, and therefore God merely reveals them to be so, or what she commands could be anything whatsoever, including things we generally do not consider to be good at all, such as killing one's own family or committing acts of genocide, and is thus arbitrary.
 
To that end, I would first ask this: IF God existed, and God made decrees about what we should and should not do, would that (hypothetically) count as an Objective Morality? Most people, including error theorists, would seem to say yes. If not, then the word would have no meaning, and it would be pointless to continue on those grounds.

Then you are dabbling in pseudo-scientific hippy woo and committing the naturalistic fallacy.

This idea has been rejected since the time of T.H. Huxley who appears in my signature.
 
Though, for the purposes of my debate: I think I can, at the very least, make a convincing case that by investigating moral truths we have a better opportunity to learn more about our values. If we dismiss them as non-existent (and as mere efforts to control people in the past, as error theory is prone to do), then we would be less able to do that.

There is no guarantee we would find objective truths, nor am I claiming error-theory is non-progressive. I am claiming that the hunt for objective truths will make us a lot more progressive than error-theory would.

Why is "progressiveness" necessarily good? Some people might argue that being conservative or traditionalist is good. They might argue that because evolution has made men and women different that implies it is right for men and women to have different roles in society such that men are hunters and promiscuous fornicators and women are left at home bringing up the baby and doing the housework. That, after all, is what sociobiology tells us is the natural, therefore right way of doing things, right?
 
The evidence comes in two important forms: Historical and experimental. (There is a third I promoted earlier: Matching theoretical evidence, but we will brush that one aside, for now, since it requires its own set of sub-evidences.)

For historic evidence, we have data that clearly shows throughout human history, these general trends (with occasional setbacks):
1. Violence has gone down
2. Mortality across various ages has gone down, especially among infants
3. Health has gone up
4. Economies are more stable
5. Average Wealth has gone up
6. Intelligence has gone up
7. Altruistic behavior, among people and countries, has gone up
8. Suffering has gone down
Etc.

Every element used to measure well-being seems to have generally improved over time.

You'll have to tell me what this has to do with morality per se. It is not at all clear to me.

Is truth-telling relevant for social well-being? Do you have any historical evidence that persons are more willing to tell the truth these days, for anything other than purely self-interested reasons?

Is the moral theory to which we are converging a complete theory? What does this moral theory tell us about the proper distribution of resources? What does it tell us about whether we ought to expend limited medical resources on the elderly and frail or on the young and resilient?

What you've said, essentially, is that society is getting better at solving some problems. Decidedly so! Science is a wonderful thing! But the fact that people live longer now gives no evidence at all that we care more about living. Your historical account fails to recognize the growing effectiveness of our technology.

Note as well some other conclusions we can draw. For instance, carbon emissions must be an objective moral value, because they've been increasing for centuries. Also, efficient weapons of mass destruction have improved, so these must be good for the well-being of humanity. And, let's not forget biological diversity -- there was too much of it, apparently, but humans have made great strides in rectifying that. So, these are some of the values that we can see in your objective morality. Kudos!

For experimental evidence, we have data that:
1. Groups that are more social outcompete groups that are more aggressive when placed in direct competition. And the aggressive ones are more likely to become social, given the opportunity, than the social ones are to become aggressive. This has been shown in animals, at least.
2. Strong in-group/out-group tendencies will form, even when designation to a group is clearly random. But, that these will fade if both groups find themselves trying to solve a common problem, or fight a common enemy.
3. People will formulate rules that are more fair towards everyone, when they don't know what role they will play in that society.
4. Several economic findings about resource sharing working out to everyone's benefit, and how even greedy societies will seek these deals out once they know such deals can exist.
Etc.

I note that fairness is not the same as well-being in society. There are (arguably, at least) social arrangements which are decidedly unfair, but efficient.

But, in any case, none of this "experimental" data supports the claim that moral opinions tend towards one coherent limit theory.

None of those directly supports the claim, but they are consistent with what we would expect if the claim were true. And, some of them could be modified to test it more directly.

The second experiment becomes more important, as members of every given group on Earth start to realize that we are all facing common problems.

The third becomes more important as elites start to realize that the elitism of their own descendants is not guaranteed. A wealthy man might secure tremendous resources for his children. But, those fortunes could still be lost. The more generations you go down, the more likely the dynasty will crumble. So it pays elitists to be a little "Rawlsian" as well.

I can show that it becomes increasingly irresistible for any one person to feel they should value that which seems to be better for their society, over time.

I don't think you can show any such thing. Indeed, you seem to suggest that, over time, crime and selfish acts will vanish, and this just seems implausible in the extreme to me. Do you really think this?

Do you think that the increasing income disparity is consistent with your claim about just how enlightened the wealthy are becoming?

This is all just wishful thinking, from where I sit.
 
Why would I do that?

It has already been well established in accepted science that selfish systems can become altruistic, once it is recognized that acting altruistically is in the best interest of the selfish entity. That has been a standard part of evolutionary biology since about the '70s, and has not fundamentally changed since.
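(For anyone who wants to see the standard illustration of that point, here is a minimal Axelrod-style repeated prisoner's dilemma sketch. The payoffs are the usual textbook values; the specific pairings are my own toy example, not a reproduction of any particular study.)

```python
# Minimal Axelrod-style sketch: in a repeated prisoner's dilemma, a reciprocating
# strategy (tit-for-tat) does as well against itself as unconditional defection
# does against itself -- the standard illustration of selfish agents finding
# cooperation worthwhile. Payoffs are the usual textbook values; the pairings
# below are a toy example, not data from any particular study.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Total payoff to each strategy over repeated rounds."""
    hist_a, hist_b = [], []   # each player's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): exploited once, then retaliates
```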

Perhaps you should read up on the subject.

You seem to have misread your Dawkins. Dawkins is often at pains to explain that selfishness in genes is metaphorical and does not mean that people must be selfish. There are lots of kinds of human behaviour that buck the trend of gene selfishness. One of those would be people choosing not to have children. And another is personal benevolence for its own sake.

If you have any sources that argue that human beings can only act in their self-interest then I would like you to direct me to them. If not then your claim about "standard part of evolutionary biology" is likely a smokescreen.
 
Then your thinking is fine, but trivial, and irrelevant to my point.

In the real world, the worry about getting caught is a real one, and an increasingly worrisome one for potential thieves. And, THAT changes the game strategy, and how we approach the morality of the subject.

In the book A Clockwork Orange, the main character, Alex, is given a treatment that makes him sick whenever he tries to commit a crime. The point being made by Anthony Burgess is that Alex has not become more moral by being unable to commit crime, he has just been rendered incapable of criminal actions. And your claim that better crime detection and prevention equates to improved morality seems to be the same as saying that Alex has become more moral because he is incapable of being a criminal. I think that misses an important part of what we think of as morality.
 
I was also recently reminded of John Rawls' Theory of Justice: That the rules of society work out best for everyone, if you don't know what role you will be playing in that society, before you make the rules. (http://en.wikipedia.org/wiki/A_Theory_of_Justice) I think more people are innately realizing this, and that could be part of a natural path towards "objective utopia".

You seem to be helping yourself to the word "innate" in a way that is not meant by almost anyone else.

In other words, whenever you see good things happen you declare those things innate. And the lack of those things happening seems to be an inability to recognize innate things. But if that is the case, then what makes those things innate?

Is everything innate? Is rocket science innate?
 
There's that experiment with two kids and a cake. Adult says "thou shall not eat the cake". Adult leaves the room and... Cake disappears, kids claim to be innocent.
This is probably a good point to bring up: Much of our morality is shaped by society, dependent on development, or both. It would be naïve to read the behavior of children as our finished moral nature; mostly it shows how flexible and context-dependent morality can be. This, however, indicates objective morality is more complicated, not that it does not exist.

You'll have to tell me what this has to do with morality per se. It is not at all clear to me.
Those are all various ways to measure different aspects of well-being: our health, wealth and happiness. Please read more of this thread, so that is clearer.

Is truth-telling relevant for social well-being?
It does appear to be so. I think that we can expect honesty to naturally increase, in part because surveillance and information exchange make it increasingly difficult to get away with lying. Self-interest becomes, perhaps accidentally, social interest, in this case.

Is the moral theory to which we are converging a complete theory? What does this moral theory tell us about the proper distribution of resources?
As noted before, that egalitarianism is more stable than elitism.

We can even add to that, saying: Resources are also generated more effectively in an egalitarian society than an elitist one. Suppose we have 100 units of a valuable, but somewhat renewable, resource. If 99 units go to the elite, and 1 unit goes to others, we might not see much more growth of that. Perhaps there would be a total of 105 units in a decade.
But, if we were more egalitarian: Say 2 units went to the top 1% of the elite, and the other 98 units spread evenly across the rest of the population, we might find MORE of them being grown: So that, at the end of the decade there might be 125 total units. The market would more likely be motivated to find more innovative ways to increase production. So, EVERYONE wins!
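(For anyone who wants to poke at the arithmetic, here is a minimal sketch of that illustration. The split sizes, growth rates, and the ten-year horizon are all made-up assumptions chosen to mirror the numbers above; they are not economic data.)

```python
# Toy illustration of the made-up numbers above: resource growth under an
# "elitist" versus a more "egalitarian" split. The per-share growth rates are
# pure assumptions chosen to reproduce the 105-vs-125 figures; nothing is data.

def grow(total, elite_share, elite_rate, broad_rate, years=10):
    """Compound the elite's share and everyone else's share at different annual rates."""
    elite = total * elite_share
    rest = total * (1 - elite_share)
    for _ in range(years):
        elite *= 1 + elite_rate
        rest *= 1 + broad_rate
    return elite + rest

# Elitist split: 99 units to the elite, 1 unit to everyone else.
elitist = grow(100, elite_share=0.99, elite_rate=0.005, broad_rate=0.023)
# Egalitarian split: 2 units to the elite, 98 spread across everyone else.
egalitarian = grow(100, elite_share=0.02, elite_rate=0.005, broad_rate=0.023)

print(round(elitist), round(egalitarian))  # roughly 105 vs 125 with these assumed rates
```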

What does it tell us about whether we ought to expend limited medical resources on the elderly and frail or on the young and resilient?
Like all resources, medical ones do not need to remain so limited. We continue to improve them and add to them, across the board. However, there does seem to be a bias towards the young and resilient, in most cases. But, since there are more elderly in the hospital than young people, most of the budget ends up going to the elderly, anyway.

It probably says something about prolonged life care. But, I am not entirely sure what that is, yet. Our ability to preserve quantity of life has gone further than our ability to deliver quality of life. Since that is a very recent trend, in human history, we are still largely at the proto-science stage of figuring that out: Trying a combination of different things amongst different families to see what works.
So far, there seems to be no harm in going along with the preferences of the person or their families.

What you've said, essentially, is that society is getting better at solving some problems. Decidedly so! Science is a wonderful thing! But the fact that people live longer now gives no evidence at all that we care more about living. Your historical account fails to recognize the growing effectiveness of our technology.
Not at all! The growing effectiveness of our technology is a KEY GAME STRATEGY CHANGER that impacts our morality.

For instance, carbon emissions must be an objective moral value, because they've been increasing for centuries.
This is an example of recognition of a trend that will probably reverse itself. We now recognize that excess carbon emissions have been bad for the planet, and by extension, our well-being. So, we are finding ways to reduce them. Sometimes, it takes a while to figure these things out, but once we do, we take action!

Also, efficient weapons of mass destruction have improved, so these must be good for the well-being of humanity.
Ironically, the existence of mutually-assured destruction may have had a big impact on how peaceful states are to each other.

But, this example is a poor one for an additional reason: Fewer people are dying at the hands of weapons of mass destruction than there used to be, in spite of the stockpiles. That, I think, is a sign of progress.

And, let's not forget biological diversity -- there was too much of it, apparently, but humans have made great strides in rectifying that.
Another example of a trend that will likely reverse itself, thanks to recent recognition of the dangers.

I note that fairness is not the same as well-being in society. There are (arguably, at least) social arrangements which are decidedly unfair, but efficient.
If they worked, we would probably converge on them, once discovered.

But, in any case, none of this "experimental" data supports the claim that moral opinions tend towards one coherent limit theory.
Not directly. But, it shows us some things that are consistent, at least, with what we would expect. Some of them could be modified to show it more directly. For example: adding a variable to the Robbers Cave Experiment in which someone from the in-group persuades people against something in their long-term best interest, and then seeing how long it takes the group to figure that out.

Indeed, you seem to suggest that, over time, crime and selfish acts will vanish, and this just seems implausible in the extreme to me. Do you really think this?
I do NOT think they will vanish completely. The theory predicts there will always be moments when such crimes increase for a little while, before improving again: In the inclined-saw-tooth fashion.

Secondly, for some crimes, there might be an ESS for an optimal number of criminals: More criminals leads to more crime, of course. But, fewer criminals than the ESS would ALSO lead to more crime, since fewer people would be wary of them.
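(If that sounds hand-wavy, here is a minimal sketch of the textbook hawk-dove calculation that the ESS idea comes from, with "defectors" standing in for criminals. The payoff values V and C are made up for illustration; the analogy to crime is mine, not a published model.)

```python
# Hawk-dove style sketch of a mixed ESS: the stable fraction of "defectors"
# (hawks) at which defecting and cooperating earn the same expected payoff.
# The payoffs V (contested resource) and C (cost of a fight) are made-up numbers.

def hawk_fraction(V, C):
    """Classic hawk-dove result: when C > V, the ESS mixes hawks at frequency V/C."""
    if C <= V:
        return 1.0  # fighting is always worth it; all-hawk is stable
    return V / C

def expected_payoffs(p, V, C):
    """Expected payoff to a hawk and to a dove when a fraction p of the population are hawks."""
    hawk = p * (V - C) / 2 + (1 - p) * V
    dove = (1 - p) * V / 2
    return hawk, dove

p_star = hawk_fraction(V=2, C=10)               # 0.2 with these assumed payoffs
print(p_star, expected_payoffs(p_star, 2, 10))  # at p*, hawk and dove payoffs are equal
```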

Do you think that the increasing income disparity is consistent with your claim about just how enlightened the wealthy are becoming?
The disparity is not really as bad as it used to be, in human history. And, the disparity that exists today, I predict, will eventually dissolve. As if it were a low point on an inclined saw-tooth chart.

Actually, most philosophers from Socrates or Plato onwards reject that idea.
Can you, or phiwum, give me an example of something that WOULD be an objective morality IF it existed?

Why is "progressiveness" necessarily good? Some people might argue that being conservative or traditionalist is good.
First of all, I meant progressive scientifically: As in we will likely make more discoveries about morality under objective frameworks than through error theory.

But, since you brought it up: The world changes. And, our morals need to change with it. Having progressive values allows one to be on the curve or ahead of it. Being too conservative means you fall behind.

That, after all, is what sociobiology tells us is the natural, therefore right way of doing things, right?
For the last time: I am NOT making a naturalistic fallacy! Just because something is natural, does not mean it is right. And, besides, what is "natural" is not always obvious to such moralists, anyway: It is easy for them to be wrong about what is natural.

Instead, I am pointing out that people will tend to converge on ideas that work for the better of their well-being. And, this is about as inevitable as our revolving around the sun, in the long run.

When someone decides that they should do what everyone tends to naturally converge on, then THEY are the ones making the fallacious leap. All I can do is report on that, as something that happens.

I can NOT prove that they should do it. But, since it becomes increasingly irresistible to do that, over time, it does not even matter what I prove or not.

Do you understand the distinction?

You seem to have misread your Dawkins. Dawkins is often at pains to explain that selfishness in genes is metaphorical and does not mean that people must be selfish.
Actually, his chapter on game theory makes it clear that genuinely selfish people will find it in their best interest to be altruistic, since it genuinely would be in their best interests.

But, yes, I am aware that it is metaphorical at the gene level. Welcome to the thread! I can tell you are new here, because I already said that a dozen times.

There are lots of kinds of human behaviour that buck the trend of gene selfishness. One of those would be people choosing not to have children. And another is personal benevolence for its own sake.
Those would have hidden selfish agendas behind them. For example: Someone who cannot, or decides not to, have children can still contribute to the raising of other children.

If you have any sources that argue that human beings can only act in their self-interest then I would like you to direct me to them. If not then your claim about "standard part of evolutionary biology" is likely a smokescreen.
It is not as simple as that: Once societal interest is locked into an aspect of self-interest, it is hard to tease them apart. (Though, not impossible.)

I think societal interest is the more important point to harp on. This talk of self interest is only a distracting tangent.

And your claim that better crime detection and prevention equates to improved morality seems to be the same as saying that Alex has become more moral because he is incapable of being a criminal. I think that misses an important part of what we think of as morality.
It appears as though you hit upon one of the ugly truths about morality: It does take societal intervention to shape itself towards being better for society.

You seem to be helping yourself to the word "innate" in a way that is not meant by almost anyone else.
What I mean is that there might be a strange sense growing among people that something like Rawls' Theory of Justice might be right. But, they cannot place their finger on exactly why. Nor would they know that name until it was taught to them.

In other words, whenever you see good things happen you declare those things innate.
No. Just because I used one example of a good thing that was innate, does NOT imply that I think all good things are innate. Lots of bad things are innate, too, but we can grow out of them: Such as the many biases built into our brains.

Is this debate going to be recorded and available on YouTube?
It will be if I win! ;)

But, it will ALSO be available even if I lose, and get splattered on stage. So, YES, it will be available for all the world to see, either way.



However, I think the direction the debate goes will be different than this thread, which seemed to pick apart ONLY ONE example of HOW objective morality COULD work, in theory. I suspect the debate will be more philosophically oriented.

I was hoping more people, on here, would be willing to defend Error Theory, specifically. Oh well.
 
It does appear to be so. I think that we can expect honesty to naturally increase, in part because surveillance and information exchange make it increasingly difficult to get away with lying. Self-interest becomes, perhaps accidentally, social interest, in this case.

Goodness, what a utopia this sounds like!

But I don't believe that you really understand morality. You're confusing moral behavior with prudent behavior in a dystopian totalitarian state. Not quite the same thing, really.

As noted before, that egalitarianism is more stable than elitism.

Right. So, that's why the current world is tending toward a more equal distribution of resources, right? Except, and correct me if I'm wrong (with cites) it isn't.

We can even add to that, saying: Resources are also generated more effectively in an egalitarian society than an elitist one. Suppose we have 100 units of a valuable, but somewhat renewable, resource. If 99 units go to the elite, and 1 unit goes to others, we might not see much more growth of that. Perhaps there would be a total of 105 units in a decade.
But, if we were more egalitarian: Say 2 units went to the top 1% of the elite, and the other 98 units spread evenly across the rest of the population, we might find MORE of them being grown: So that, at the end of the decade there might be 125 total units. The market would more likely be motivated to find more innovative ways to increase production. So, EVERYONE wins!

Oh, look. If you make up the numbers, they do what you want. How about that?

That really is convincing, that is.

Like all resources, medical ones do not need to remain so limited. We continue to improve them and add to them, across the board. However, there does seem to be a bias towards the young and resilient, in most cases. But, since there are more elderly in the hospital than young people, most of the budget ends up going to the elderly, anyway.

Sorry, I was asking how we ought to distribute them, and how we know that. You can't tell me how we do distribute them in order to tell me how we should. That's irrelevant unless we're in Panglossia.

Indeed, even if you show me a history of how the distribution has gone in the past, that's no evidence of what the limit will be. After all, we could be on a very long saw tooth, as you put it. Finite snippets of history don't determine the limit.

This is an example of recognition of a trend that will probably reverse itself. We now recognize that excess carbon emissions have been bad for the planet, and by extension, our well-being. So, we are finding ways to reduce them. Sometimes, it takes a while to figure these things out, but once we do, we take action!

Oh, so the past is a good predictor of the moral trends, except when it says something you don't like.

Or, to put it differently, you already know what you think our values ought to be. We ought to care about the environment and our progeny. Uh oh! Here's some evidence of short-sighted and selfish behavior, inconsistent with what we ought to do! Well, no matter. It turns out this behavior is an aberration.

So, the behaviors you like are growth towards "objective" morality, the behaviors you don't like are aberrations. I'm not quite sure the word "objective" means what you think it does.
 
Ironically, the existence of mutually-assured destruction may have had a big impact on how peaceful states are to each other.

But, this example is a poor one for an additional reason: Fewer people are dying at the hands of weapons of mass destruction than there used to be, in spite of the stockpiles. That, I think, is a sign of progress.

On what scale? On the century scale, this is just not so. On the decade scale, there's been (I'm guessing) a slight decrease since WWII, but how can we tell that this downward tick isn't an aberration?

Why not cite your sources, so we don't have to take your word for it?

By the way, that still leaves the obvious conclusion: having efficient weapons of mass destruction is a great moral value for humanity. So are surveillance systems and the death of privacy.
 
Part of the problem here is that Wow's thesis is not really a scientific hypothesis at all -- at least not in Popper's sense.

Take the first part of his thesis: over time, human morality converges on a (unique) limit. This claim is not falsifiable, since convergence to (or divergence from) a limit is consistent with any behavior on a finite segment. That is to say, at any point in time, we've only seen the progress of human morality up until now. That's not enough to confirm or refute the hypothesis that it is approaching a limit, since limiting behavior is (quasi-)global, not local.

Think of a function f:R->R and suppose that you know the behavior of f on [0,z]. Clearly, you cannot say whether or not f has a limit as x -> oo. In this sense, the hypothesis that f has a limit is neither provable nor refutable. Since it is not refutable, it is not a Popperian scientific hypothesis.
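To make the point concrete, here is an illustrative pair of functions (z being any fixed cutoff; the example is mine, purely for illustration):

```latex
% Two functions that agree on the whole observed interval [0, z]
% yet have completely different limiting behaviour:
\[
  f_1(x) = \arctan(x), \qquad f_2(x) = \arctan(x) + \max(0,\, x - z)^2 .
\]
% They coincide on [0, z], but
\[
  \lim_{x \to \infty} f_1(x) = \tfrac{\pi}{2},
  \qquad
  \lim_{x \to \infty} f_2(x) = \infty .
\]
```

Any data gathered on [0, z] is equally compatible with both, so no finite history by itself settles what the limit is, or whether there is one.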

NOTE: More complicated hypotheses like these are studied in Kevin Kelly's book, "The Logic of Reliable Inquiry", where he does not buy Popper's criterion. I'm not saying we all have to be Popperians, but if Wow is, then this observation should surely give him pause.

In any case, it is clear that all of the various history he brings up really adds nothing to the claim, since the behavior of a function on a finite domain does not give evidence of its limiting behavior. This is why he feels free to proclaim that the changes he likes are evidence of moral progress, and the changes he doesn't (like the very long-term increase in carbon emissions) are simply aberrations. Finite behavior doesn't tell us a darned thing.
 
I was hoping more people, on here, would be willing to defend Error Theory, specifically. Oh well.

OK I will play devil's advocate, specifically from the view of Moral Scepticism, with particular reference to Error Theory.

Let's take a question that you referred to in your first post as our starting point:

Can science answer normative questions?

First, let us find some agreement as to what that question means. I would say, that for the sake of clarity, we ought to agree these things:

1. That by 'normative', we mean that a question pertains to the 'rightness' or 'wrongness' of behaviour, or the 'goodness' or 'badness' of a state of things, leading to ideas about what we 'ought' to do.

2. If science is going to provide an answer to a normative question, then we expect that science alone should provide that answer. If we instead base our answer on some normative presupposition which we consider to be axiomatic, i.e. 'we ought to do that which promotes well-being', we haven't in fact answered any question using science alone, but merely referred to a normative presupposition that we already have, that is not based on methodological naturalism, and merely begs the question.

That should be an adequate starting point. Your thoughts?
 
Take the first part of his thesis: over time, human morality converges on a (unique) limit. This claim is not falsifiable, since convergence to (or divergence from) a limit is consistent with any behavior on a finite segment.
There are several ways one could falsify the theory.

The most direct way would be if the data was cyclical in nature, instead of inclined. If there was no clear trend in any direction on some measure, such as violence, then it would be hard to justify that we would be moving towards any particular values on that measure.
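(Here is a rough sketch of how one could actually run that test on a time series. The synthetic data and the informal "slope well above the noise" criterion are my own assumptions, just to show the shape of the check.)

```python
# Rough sketch of the "inclined saw-tooth vs. pure cycle" test described above,
# using synthetic data (the series and the threshold are assumptions, not real measures).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)

saw_tooth = 0.05 * t + 2 * np.sin(t / 10) + rng.normal(0, 0.5, t.size)  # trend + wiggle
pure_cycle = 2 * np.sin(t / 10) + rng.normal(0, 0.5, t.size)            # wiggle only

def fitted_slope(series, t):
    """Least-squares slope of the series against time."""
    slope, _ = np.polyfit(t, series, 1)
    return slope

# A persistent slope well above the noise suggests an inclined saw-tooth;
# a slope near zero is what the theory would count as evidence against itself.
print(fitted_slope(saw_tooth, t), fitted_slope(pure_cycle, t))
```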

Also, the theory is highly dependent on the Theory of Evolution by Natural Selection being accurate. If that was scientifically disproven, it would unravel the particular theory I was promoting.

If experiments showed data that would contradict what we would expect from the theory, then that would cast doubt on it, as well. For example, if we found populations of animals raised to be more aggressive or more social were nearly EQUAL in competitions for genetic fitness and such, that would be a surprise.

Since the theory makes predictions about moral authorities, we could falsify it by finding examples that do not match those predictions. It predicts that true moral authorities do not really exist, and that any such claimants can only stay in power if they cede much of their authority to the needs of society's well-being, as that changes; unless perhaps the group is very small and isolated, such as a religious cult.

This is why he feels free to proclaim that the changes he likes are evidence of moral progress, and the changes he doesn't (like the very long-term increase in carbon emissions) are simply aberrations.
We ONLY JUST RECOGNIZED that long-term increase in carbon emissions is bad for us, a few decades ago! Now that we do recognize it, MOST people will eventually take steps to reduce it, even in spite of the current crop of climate denialists.

But I don't believe that you really understand morality. You're confusing moral behavior with prudent behavior in a dystopian totalitarian state. Not quite the same thing, really.
I argue that the nature of morality has a lot more to do with such types of prudence than most people realize.

And, a lot of these deterrent technologies were only put in place because... can you guess why?... We generally valued protection against these sorts of crimes!

Right. So, that's why the current world is tending toward a more equal distribution of resources, right? Except, and correct me if I'm wrong (with cites) it isn't.
I predict this will change. It might even revert in our lifetimes.

I would also like to note the MORE IMPORTANT fact that, in the past, inequality was worse: Especially in a utility sense. Today, the poor have a better chance to have access to more foods, than the poor of the past.

Oh, look. If you make up the numbers, they do what you want. How about that?
Those numbers were made up for illustrative purposes. But, the economics principle is sound enough. If you want to debate it, I suggest starting another thread. (I am trying to remember the name of the principle. I will have to get back to you on that.)

Sorry, I was asking how we ought to distribute them, and how we know that. You can't tell me how we do distribute them in order to tell me how we should.
It might be worth it, at this point, to bring up some other points:

There are some issues that might be morally neutral: It does not matter which way we go, it would have no material impact on well-being.

Sometimes we might assume an issue is morally neutral for a long time, only to discover there are some aspects of it that impact our well-being in a significant manner. After which, there will be increasing pressure to go with what works best for our well-being.
Other times, there were issues we assumed were morally important, only to find out later that they are irrelevant. Or, at least today they are irrelevant, though they might have been in the past.

There are also some issues that might have more than one correct answer, but NOT an infinity of correct answers. We might find well-being can be maximized at a 10/90 split of a resource OR a 30/70 split of that same resource. But, NOT at any other ratio. So, asking which of the two maximizers is best would be largely irrelevant. Though, we might be closer to achieving one than the other: If we are already at a 28/72 split, we would be better off aiming for 30/70, because there would be more turmoil and suffering trying to get to the other one.
(This is like the concept of the Moral Landscape, which implies there could be more than one moral peak.)
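(A toy version of that, with an entirely made-up well-being curve, just to show the "nearer peak" logic:)

```python
# Toy version of the "two moral peaks" idea above: a made-up well-being function
# of the resource split with local maxima near 10/90 and 30/70, and a simple
# hill-climb showing that starting from a 28/72 split you reach the nearer peak.
# Everything here (the curve's shape, the step size) is an assumption for illustration.
import math

def well_being(x):
    """Made-up well-being as a function of the share x going to one group."""
    return math.exp(-((x - 0.10) / 0.05) ** 2) + 1.2 * math.exp(-((x - 0.30) / 0.05) ** 2)

def hill_climb(x, step=0.001, iters=10_000):
    """Greedy local search: move in whichever direction improves well-being."""
    for _ in range(iters):
        up, down = well_being(x + step), well_being(x - step)
        if up > well_being(x) and up >= down:
            x += step
        elif down > well_being(x):
            x -= step
        else:
            break
    return x

print(round(hill_climb(0.28), 2))  # ends near 0.30, the closer of the two peaks
```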

After all, we could be on a very long saw tooth, as you put it. Finite snippets of history don't determine the limit.
One of the ways we can tease out a long saw tooth from no saw tooth at all, is to see the impact various factors have on the issue: such as the impact of formal science, the rise or fall of associated regimes, or natural disasters interfering with what would otherwise be clearer trends.

For example: If we find a small injection of science always tends to make the chart swing in some direction, that could indicate the direction towards better well-being. Though, we also have to test the veracity of the science, to be sure of that, since science is not perfect, and could temporarily take people in the wrong direction.

Oh, so the past is a good predictor of the moral trends, except when it says something you don't like.
Some of these issues we only just discovered to be issues, relatively recently, in the first place.

You are asking the equivalent of this: "If nuclear energy was truly more efficient than coal, then how come they did not use nuclear power in the 1700's?"

It turns out this behavior is an aberration.
If it wasn't one, there would be a lot more of them!

On what scale? On the century scale, this is just not so. On the decade scale, there's been (I'm guessing) a slight decrease since WWII, but how can we tell that this downward tick isn't an aberration?
Pinker's book covers this very well. WWII certainly was an aberration, in a lot of respects, but even taking it into account, the data still shows a very clear decrease, overall, in deaths from weapons of mass destruction.

From what you write, it appears as though you are not acquainted with a LOT of the data on these subjects, which is normal. But, if you wish to participate in these types of debates, you will need insight into a lot of what folks like me are talking about. This book is a good place to start, even though it is rather long.
 
1. That by 'normative', we mean that a question pertains to the 'rightness' or 'wrongness' of behaviour, or the 'goodness' or 'badness' of a state of things, leading to ideas about what we 'ought' to do.
Sounds good.

2. If science is going to provide an answer to a normative question, then we expect that science alone should provide that answer. If we instead base our answer on some normative presupposition which we consider to be axiomatic, i.e. 'we ought to do that which promotes well-being', we haven't in fact answered any question using science alone, but merely referred to a normative presupposition that we already have, that is not based on methodological naturalism, and merely begs the question.
Well, this is the trick, isn't it?

In short: Science cannot force an individual person to act. Though, it does seem to be a reliable way to compel a collective society to act.

If science demonstrated that "If you do X, it will be bad for various things you care about.", someone could ask: "What is morally wrong with doing things that are bad for things I care about?"

Though it is hard to imagine a rational person asking such counter-questions*, I will leave that be, for the sake of philosophy.
(*Even if you have to accuse them of making a naturalistic fallacy, in order to do so.)

What is more interesting is that it becomes harder for a collective society to escape from such ideas, than it is an individual: You can fool all people some of the time, or some of the people all of the time. But, you can't fool all of the people all of the time. (And the first one is becoming increasingly difficult, too!)

So, we end up seeing societies almost inevitably working towards what is in the best interests of their own collective values (such as well-being), in the long run.

In this way, science, alone, could be said to be resolving normative questions. This works out better demonstrated at the society level, rather than the individual level, because morality is really an emergent property of societies, and NOT quite as accurately described as a property of the individual.

How is that, for a start?
 
That is pretty good. But, that is limited to Human Nature.

Actual Nature goes a few layers deeper than that: Into the realm of natural forces. Everything about our morality begins with those, historically. Though, one cannot really reduce morality to such physical forces, we can still make certain kinds of predictions based on our understanding of them.

If you think that is controversial, we should start another thread about it.

I still have not seen any definition of "Nature". Are you saying "Nature" is "physical forces"? I fear that this concept is not operational in ethics. Electrons, force fields or the law of gravity explain almost nothing of human behaviour, and even less of morality. It would be good for you to define "Nature" or "natural" for a better common understanding. Thank you.

When forced to choose between the two, it appears as though egalitarianism is more stable than elitism. If you must know.

No, I don't know. Distances between classes are growing in the globalized world. But this is not our problem. "Stable" is not "good". "Growing" is not "good" either. I can consider something "stable" to be awful. No contradiction.

I think it is a sign of great misinterpretation when you ask me to find contradictions in statements that do not contradict what I am saying very much (except in one spot).

Sorry, but if you are defending a cooperative and egalitarian morality, you cannot say the same thing as I have said. I am Nietzschean (on this occasion) and I'm against rationality, egalitarianism, socialism, democracy, science and progress. I am Zarathustra, the Übermensch Prophet! You don't understand what I'm claiming!

This could be true. Though, I would add that it is a "weapon" we can expect to naturally emerge from evolutionary forces. (…)I would call it more of a "discovery", than an "invention". But, that is a minor point

No. It is the main point. You have forgotten "resentment". Resentment is the sacerdotal instrument against the vital force of the noblemen. Life is struggle and the triumph of the strong over the weak. So the leaders of the weak, the Priests, have invented an unnatural rule: morality and egalitarianism. They try to bring down the great men by means of a subtle preaching of compassion, humility, humanitarianism and so on. This is not life but illness and decline.

It does not matter what Nietzsche denies. I do not know everything about his life, but I am willing to bet that even his sense of moral values tended to converge on what is in the best interest of society, anyway.
(…)
And, those affections, freedoms, will, etc. do not come from nowhere!

And, such things usually seem to act for the good of society, in most people, in the long run, anyway. So, all that ranting and raving against it got him nowhere!

Absolutely not! Do not worry about Nietzsche and watch what I say, for I am as Nietzschean as him, even though I write much worse. And I tell you that "convergent values in the interest of society" must be destroyed. I repeat: they are illness and decline. The Super-Man does not worry about what happens to the herd. Obviously, the Nietzschean Super-Man is not the Superman made in the USA. He is just his opposite.

Personal note: And, though Nietzsche did not notice it, there are a lot of Super-Men among us. At least they think they are Super-Men. But Nietzsche thought of the Super-Man as a proud eagle that flies without hiding his splendour and violence. These eagles are too vulnerable. Our actual Super-Men are less ingenuous. They are a hybrid of noblemen and priests. They do not exhibit their power so confidently; however, they eat many more lambs.

And, sometimes, the "Law of the Herd" could be wrong about what is in their own best interests.

No. What I'm now defending is that the "Law of the Herd" is wrong because it is of the Herd.

I hope I have been clearer, and that you have now noticed that my salon Nietzscheanism is entirely contradictory to your democratic communitarianism.

So...

a. Show me any contradiction in my Nietzschean account or…
b. Show me an experience (an observation or experiment) that shows I am wrong.

I continue to think that you are not able to do it.
 
If science demonstrated that "If you do X, it will be bad for various things you care about.", someone could ask: "What is morally wrong with doing things that are bad for things I care about?"

Though it is hard to imagine a rational person asking such counter-questions*, I will leave that be, for the sake of philosophy.
(*Even if you have to accuse them of making a naturalistic fallacy, in order to do so.)

I guess I'm about to do a hard-to-imagine thing. Or I'm not rational.

By and large, there is absolutely nothing morally wrong with my doing things that are bad for things I care about. I care about, for instance, my pickup truck, but I see nothing morally wrong with taking a box knife to the interior. I like the shade tree in my back yard, but there is nothing wrong with cutting it down (assuming that only my interests are involved here, and not the neighbors').

Once again, you seem not to understand that morality and prudence are different.
 
Hi Wowbagger,

I just recently ordered Mackie's book on ethics. I will try and give a synopsis to let you have some target practice for the debate.

Cheers,

AS
 
