Richard T. Garner and "Beyond Morality"

I hesitated to use the term "group selection" to describe this theory. That is how most people have talked about it in the past, but for various technical reasons that I won't get into right now, the term has been problematic. So I figured I would try a new approach with this "stabilization" wording.

However, in a naïve, non-technical sense, perhaps thinking of this in terms of "group selection" (with scare-quotes) might be helpful in understanding the idea, in approximate terms.

That is: Those moral values that allow a society to survive and thrive better are selected for, and society will more likely continue selecting for them, until something better happens to come along. Any values that detract from the survival of society will be filtered out, as they try to enter. Sometimes it takes longer than others, but filter out, they will.
And, sometimes, there are trade-offs: A value that gets selected may be slightly bad for some small part of society, but it sticks around because it is so much better for the rest of society than the opposite alternative.

Does that help, somewhat?

I continue without seeing any definition of “Nature”. Are you saying “Nature” is “physical forces”? I fear that this concept is not operational in Ethics. Electrons, force fields, or the law of gravity explain almost nothing of human behaviour, and even less of morality. It would be good if you defined “Nature” or “natural” for a better common understanding. Thank you.
We don't have to reduce it that far. Beginning with the process of Natural Selection is good enough. Almost everything else I am saying follows from there.

No. I don’t know. Distances between classes are growing in the globalized world. But this is not our problem. "Stable" is not "good". "Growing" is not "good" either. I can consider something awful to be “stable”. No contradiction.
According to theory, "stable" is inevitable, for some moral value. (Maybe maintaining distances between classes will be part of that, but I doubt it.)

It doesn't matter if YOU think it is "good" or not. It is, objectively, what morality will end up being about, anyway.

That is the point of building an objective morality.

I do not have a choice in the matter, either. If I said "I prefer not to have morality stabilize around well-being", it would be like saying "I prefer not to live on a planet that is revolving around the sun!"

Sorry, but if you are defending a cooperative and equalitarian morality, you cannot say the same thing as I have said. I am Nietzschean (on this occasion) and I’m against rationality, equalitarianism, socialism, democracy, science and progress. I am Zarathustra, the Übermensch Prophet! You don’t understand what I’m claiming!
I suspect cooperation and egalitarianism will likely be part of the objective morality we stabilize on. But, whether I am right or wrong about that, neither I, nor any philosopher you can name, would have a choice about it!

One of the things that happens when you demonstrate an objective morality is that quite a LOT of philosophy gets bypassed! It's like centuries of thought and writings on the subject cried out in terror, and were suddenly silenced.

Philosophy can still be useful for systematizing purposes, though. So, it's not a total loss.

So the leaders of the weak ones, the Priests, have invented an unnatural rule: morality and equalitarianism.
There are different ways, I guess, of describing the same history. The only difference is that I would say this "invention" is more like an "inevitable emergence".

a. Show me any contradiction in my Nietzschean account or…
b. Show me an experience (observation or experiment) that concludes I am wrong.

I continue thinking that you are not able to do it.
Like I said before, it is pointless for me to try to find a contradiction in something that largely does not contradict what I am even saying.

Nietzsche's speech is largely right. But, he equates "conformism" with "approbation or the interest of society", for some reason. Just because one is bad, doesn't mean the other is. It might not be in the interests of society for everyone to be as "conformist" as Nietzsche is saying.

Would Nietzsche claim it is 'conformist' to live on a planet that is revolving around the sun?

Once again, you seem not to understand that morality and prudence are different.

What if I told you morality has... a way... of matching what is prudent? At least in the long run?

I just recently ordered Mackie's book on ethics. I will try and give a synopsis to let you have some target practice for the debate.
Thanks! It takes place December 7th!

I will be posting responses to Garner's book, soon.
 
Here's a taste of responding to Garner:

In chapter 5 of Garner's book, he lists objections to consequentialism.

http://beyondmorality.com/wp-content/uploads/2012/06/CHAPTER_FIVE_June_2012.pdf

He thinks supplying counterexamples is one method of doing so, such as:

(1) There is some act that maximizes the total good but it is one that we are obligated not to do. Suppose we can maximize the total happiness in the world by publicly torturing a universally hated villain. According to the utilitarian, this is what we ought to do, but in fact, says the critic of utilitarianism, it is something that we ought not do.

I say that *IF* publicly torturing a universally hated villain REALLY WAS best for our well-being, then society WOULD end up feeling more obligated to do that!

Maybe not right away, but over time: We currently have a revulsion towards public displays of torture. But, if the case can be made that it would be best, then the trend will reverse!

Our revulsion towards such a thing might have emerged (since medieval times) from nature already establishing that it is a bad thing, perhaps because the value of personal dignity overruled the desire for torture.

If anyone can devise a society in which public torture would clearly, unambiguously benefit society in the long run: I predict it WOULD be something that happens a lot more often, in that society!


Garner doesn't take into account that societies can converge on different ideas being moral, over time, as they are demonstrated to be useful or not. He seems to think that just because "torturing a universally hated villain" is tasteless now, that it will always be so, even if (hypothetically) proven otherwise. And, that seems rather short-sighted to me.
 
Can you, or phiwum, give me an example of something that WOULD be an objective morality IF it existed?

First, what you're asking for is a bit complicated. If, indeed, there is no such thing, then it may be difficult to describe what it would be like. Rather like asking me what a round square would be like, if it existed.

But in this case, I think the issue is not that hard. Let's look at Kant (keeping in mind that I am not an ethicist, so may not get the argument quite right).

Kant argues that a necessary feature of rationality is the recognition that rationality itself is an end in itself, i.e., that each rational being necessarily values his own rationality. Hence, rationality is objectively good in itself, that is, is an end in itself. Since this is an objective value (i.e., one which is necessarily recognized by every rational being), it follows that one ought to act in accordance with this value, that is, one ought always to treat all rational beings as valuable for their own sake, as ends in themselves.

So, Kant argues that this value is objective because it is part of the nature of rationality to value rationality. No rational being would be indifferent to losing their capacity for deliberation, self-examination, self-determination, judgment, etc.

Mill, on the other hand, argues that happiness is the sole end of all human action, and hence the sole criterion by which to judge actions. Any end-directed thing is judged as good or bad according to its appropriateness for securing its end. Now, moral judgments judge actions, and hence the sole criterion for moral judgments must also be in terms of happiness produced. In this sense, Mill gives a plausibility argument to the effect that Utility is the right principle by which to judge morality.
 
Okay, but then what is the difference between moral and immoral behaviour?
Definition.
Which appears to be missing from this thread, as is often the case.

Behaviour is behaviour. That's it. I don't think objective morality exists.
I think there are objectively best rules of behaviour for an individual human in a human society and I think they are grounded in biology.

I have said clearly that this gives no insight into objective morality, because I don't think there is any such thing.
 
There are several ways one could falsify the theory.

The most direct way would be if the data were cyclical rather than trending. If there were no clear trend in any direction on some measure, such as violence, it would be hard to justify the claim that we are moving towards any particular values on that measure.

No. Any finite observation of cycles is consistent with any limiting behavior. After all, there are infinitely many polynomial functions that go through the points (0,1), (1,0),(2,1), (3,0), ..., (50,1), (51,0). The behavior of such a polynomial may look cyclical between 0 and 51, but of course it is not cyclical globally.

You cannot say anything about behavior of a function in the limit just by looking at a finite piece of it. Apparently cyclical behavior in the observed data does not refute the thesis that it approaches a limit.

Also, the theory is highly dependent on the Theory of Evolution by Natural Selection being accurate. If that was scientifically disproven, it would unravel the particular theory I was promoting.

It would not contradict the thesis that human moral values converge in the limit. It may take away the proposed mechanism, but it wouldn't refute the thesis itself.

Indeed, the theory of natural selection is evidently independent of your thesis, since it seems to be you and you alone that thinks natural selection entails a limit point for moral behaviors.

If experiments showed data that would contradict what we would expect from the theory, then that would cast doubt on it, as well. For example, if we found populations of animals raised to be more aggressive or more social were nearly EQUAL in competitions for genetic fitness and such, that would be a surprise.

Yes, let's talk about that. Ants attack other colonies of the same species. Many species "cannibalize" their own kind. Are they necessarily failing at the evolution game? Do we expect ants to eventually stop fighting others of the same species? Is that how evolution will work for them?

Part of what I think is ludicrous in your "theory" is that you seem to be predicting what evolution will do (despite the fact that the moral behaviors we're discussing are more socially based than genetically). But, near as I can tell, real biologists don't predict what mutations will be successful before they happen. They don't try to guess how evolution will turn out, because there is no single outcome for an evolutionary history.

So, why do you insist that evolution will inevitably create the one, true morality? It's not really a pre-determined mechanism.

Since the theory makes predictions about moral authorities, we could falsify it by finding examples that do not match the predictions. True authorities do not really exist, and any such claimants can only stay in power if they yield much of their authority to the needs of society's well-being as it changes; unless perhaps the group is very small and isolated, such as a religious cult.

I suppose Catholicism is a religious cult now?

Anyway, these particular predictions are irrelevant for the main point. You claim that humanity as a whole is progressing towards a single, unique limit of morality. It is this claim I want to examine, since this is central. Other, more specific claims are less useful.

Consider it this way. Suppose I said that God exists and wants us to love one another and this is the true morality. Then I said everyone knows in his heart what God wants for us. This is why we think murder is bad -- because in our heart, we know God wants us not to kill. And look! People do think murder is bad! So that's evidence for my theory!

The proper response would be, of course, that the general agreement of the wrongness of killing can be explained in other ways.

So can the particular predictions of your theory, and these particular predictions do not support the broader claim, namely, that moral progress has a certain limiting behavior.
 
What if I told you morality has... a way... of matching what is prudent? At least in the long run?

I would be unimpressed without evidence that goes beyond your "just-so" stories and ad-hoc claims that we don't want to be shoved, but don't mind getting killed by a trolley purposely shunted onto our tracks.
 
Delete.

I've decided to focus on one simple point -- Wow's theory is not falsifiable.

This post contained replies to other points he's made, but I've decided I don't want to get distracted at present. There are far too many loose threads in this conversation to follow every one of them, so let's just keep to one claim at present.
 
No. Any finite observation of cycles is consistent with any limiting behavior. After all, there are infinitely many polynomial functions that go through the points (0,1), (1,0),(2,1), (3,0), ..., (50,1), (51,0). The behavior of such a polynomial may look cyclical between 0 and 51, but of course it is not cyclical globally.

I'll confess that as I played around with this idea today, I couldn't get the polynomial to look cyclical. My naive idea didn't work.

But then I hit on the Taylor expansion for sine, which nicely illustrates the point. Take ever greater expansions to get polynomials that look cyclical over ever larger ranges. See https://en.wikipedia.org/wiki/File:Sine_GIF.gif for a nice animation to illustrate this point.

Again, my point is that behavior over a finite range does not refute any claim about behavior in the limit.
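The point can be made concrete with a small sketch. The `sin_taylor` function below is my own illustrative helper (not from the thread): a truncated Taylor series for sine. Over a finite range it looks just as "cyclical" as the true sine, yet globally it diverges, so no finite window of data can settle what happens in the limit.

```python
import math

def sin_taylor(x, terms):
    """Taylor polynomial of sine about 0, using the given number of terms."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

# Over a modest range, the degree-21 polynomial (11 terms) hugs the sine
# curve and is indistinguishable from a periodic function...
error_near = abs(sin_taylor(3.0, 11) - math.sin(3.0))

# ...but far outside that range the polynomial blows up, while the true
# sine stays bounded between -1 and 1. Finite-range behavior tells you
# nothing about behavior in the limit.
value_far = sin_taylor(50.0, 11)
```

Running this, `error_near` is vanishingly small while `value_far` is astronomically large, even though both functions agree almost perfectly on the observed window.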
 
If science demonstrated that "If you do X, it will be bad for various things you care about.", someone could ask: "What is morally wrong with doing things that are bad for things I care about?"

Though it is hard to imagine a rational person asking such counter-questions*, I will leave that be, for the sake of philosophy.
(*Even if you have to accuse them of making a naturalistic fallacy, in order to do so.)

What is 'morally' wrong with doing things that are bad for the things that I care about? It is perfectly rational to ask that question. Say, I was a psychopath that cared a great deal about gratifying my desires to cause others pain. Is it therefore 'morally wrong', for me to commit to some action that would limit my ability to do this?

What is more interesting is that it becomes harder for a collective society to escape from such ideas than it is for an individual: You can fool all people some of the time, or some of the people all of the time. But, you can't fool all of the people all of the time. (And the first one is becoming increasingly difficult, too!)

So, we end up seeing societies almost inevitably working towards what is in the best interests of their own collective values (such as well-being), in the long run.

So, it seems that morality in this description of yours is merely the self- interest of following the herd mentality in order to 'get along'? If I find myself in a society that follows Sharia Law, am I then 'morally wrong' to disagree with stoning a woman to death because she has been raped?

In this way, science, alone, could be said to be resolving normative questions. This is better demonstrated at the society level than at the individual level, because morality is really an emergent property of societies, and NOT quite as accurately described as a property of the individual.

When you say that morality is an 'emergent property' of societies, what exactly do you mean by 'emergent property'? Also, why should this emergent property be binding on individuals?

Also, as there are (and have been, and will be) many societies, do their moral views all hold equal weight? If we have conflicting views between societies, which is right? How do you decide without imposing your own moral views, which merely begs the question?

So far you have put forward three ideas which you seem to suggest might be the basis for an objective morality:

1. That it would be 'bad' for a person to do something that conflicts with the good of something that they care about.

2. That it is difficult for an individual to escape the moral norms of the society that they find themselves in.

3. That normative morality (right and wrong, good and bad) are 'emergent properties' of societies, and in some mysterious way, must presumably be binding on the individuals in those societies.

I have rebutted these three views. I would also suggest that they present an incoherent view. For example, in your system, it cannot be bad for an individual to do something that conflicts with their own desires, if that action also conflicts with the desires of society (which are manifest in some kind of 'emergent property'). Yet you state that a rational person would not ask whether doing something that is against one's interests is bad.

Further, I would suggest that if science alone can answer normative questions, it should be possible to give an example of a normative question that science answers definitively, without reference to other normative moral axioms that are already held (such as 'it is good to maximise well-being').
 
What if I told you morality has... a way... of matching what is prudent? At least in the long run?

When I earlier replied to this, I didn't realize precisely what had been snipped.

You said it was irrational to wonder whether it is immoral to act in ways that are bad for things you care about. I said that I don't think it's immoral to, for instance, vandalize my own truck.

Do you disagree? If not, then you should concede that your earlier claim was nonsense. If so, you should tell me why it's immoral for me to vandalize my own truck. (And if this reason goes beyond the fact that I care for the truck, then it still does not confirm your previous claim.)
 
Definition.
Which appears to be missing from this thread, as is often the case.

Behaviour is behaviour. That's it. I don't think objective morality exists.
I think there are objectively best rules of behaviour for an individual human in a human society and I think they are grounded in biology.

I have said clearly that this gives no insight into objective morality, because I don't think there is any such thing.

The parts in bold are the same thing.

You've effectively said:

"I don't think objective morality exists. I think there is an objectively best morality grounded in biology."

How are 'rules of behaviour', not 'moral rules'?
 
Here's a taste of responding to Garner:

In chapter 5 of Garner's book, he lists objections to consequentialism.

http://beyondmorality.com/wp-content/uploads/2012/06/CHAPTER_FIVE_June_2012.pdf

Thanks for providing a link to Garner's book. Having read some of it, I am left with the inescapable conclusion that he is going to have you for breakfast. I am trying not to be patronising, but there are several things that it might be helpful for you to realise, if you haven't already:

1. That you are about to engage in a debate on moral philosophy with someone who has expertise in this field, whereas you seem to be naive about the subject matter that you are engaging in. This comment is a good example of such naivety:

One of the things that happens when you demonstrate an objective morality is that quite a LOT of philosophy gets bypassed! It's like centuries of thought and writings on the subject cried out in terror, and were suddenly silenced.

If you had demonstrated an 'objective morality', it would be a simply stunning demonstration which has surpassed the best thinkers of the last two millennia. You stand to make a substantial career for yourself in philosophy. Yet you seem to lack an appreciation of what such a demonstration would entail. Your position, so far, is incoherent, let alone demonstrating some kind of objective truth.

2. That Garner's position is the sceptical one in the debate. You are arguing for some kind of 'Moral Realism', based on biology or society (it is still not clear to me what your position is). Key to this argument will be, and I'm quoting from Garner's book here:

Since moral realists want to say that moral judgements are sometimes true, I have suggested that it is up to them to explain how they understand this, and to support their claims.

3. That your own position is most definitely more 'woo-ish'. This 'emergent property of morality' which has the special metaphysical quality of being binding on human behaviour sounds suspiciously like the FSM. I can't prove it doesn't exist, but there seems to be little reason to suppose it does.

I would really recommend that you read this paper:

http://people.duke.edu/~alexrose/dditamler.pdf

This is as 'sciency' a view of meta-ethics as you will get. No added woo, I promise.
 
The evidence comes in two important forms: Historical and experimental. (There is a third I promoted earlier: Matching theoretical evidence, but we will brush that one aside, for now, since it requires its own set of sub-evidences.)

For historic evidence, we have data that clearly shows throughout human history, these general trends (with occasional setbacks):
1. Violence has gone down

By what metric? It seems to me that the 20th century was one of the most violent in human history.

2. Mortality across various ages has gone down, especially among infants

Is living longer a moral behavior?

3. Health has gone up

Is being healthy a moral behavior?

4. Economies are more stable

By what metric?

5. Average Wealth has gone up

By what metric?

6. Intelligence has gone up

By what metric?

7. Altruistic behavior, among people and countries, has gone up

By what metric?

8. Suffering has gone down

By what metric?

For experimental evidence, we have data that:
1. Groups that are more social outcompete groups that are more aggressive, when compared. And, the aggressive ones are more likely to become social if given the opportunity, than the social ones to become more aggressive. This was done with animals, at least.

Cite? What kind of animals? And why would the same necessarily apply to humans?

2. Strong in-group/out-group tendencies will form, even when designation to a group is clearly random. But, that these will fade if both groups find themselves trying to solve a common problem, or fight a common enemy.

This has been demonstrated on a small scale, yes.

3. People will formulate rules that are more fair towards everyone, when they don't know what role they will play in that society.

Much as I love Rawls' veil of ignorance, I'm not sure what it has to do with your larger argument. You seem to be appealing to things like natural selection and inevitable convergences. No one who is formulating the rules of a society is actually operating under the veil of ignorance in real life. Indeed, the fact that people formulate rules that are less fair when they DO know their role in society would seem to contradict your idea that selfishness is synonymous with altruism.

4. Several pieces of economic evidence about resource sharing working out for everyone's benefit, and about how even greedy societies will seek these deals out once they know such deals can exist.

Look at current wealth/resource distribution. It's more concentrated than ever. As science and technology have improved, elites have improved their abilities to control a disproportionate amount of the world's wealth/resources.

The third becomes more important as elites start to realize that the elitism of their own descendants is not guaranteed. A wealthy man might secure tremendous resources for his children. But, those fortunes could still be lost. The more generations you go down, the more likely the dynasty will crumble. So it pays elitists to be a little "Rawlsian" as well.

How would they ever have not realized this?
 
Can you, or phiwum, give me an example of something that WOULD be an objective morality IF it existed?

I'm not sure why I need to do that. It is you who is arguing for an objective morality, remember.

However, as phiwum points out, you may wish to look at Kant's Categorical Imperative which is a nice try at least but has a number of flaws. Or Utilitarianism beginning with Bentham, and modified by John Stuart Mill and other philosophers.

First of all, I meant progressive scientifically: As in we will likely make more discoveries about morality under objective frameworks than through error theory.

But, since you brought it up: The world changes. And, our morals need to change with it. Having progressive values allows one to be on the curve or ahead of it. Being too conservative means you fall behind.



Again, I think you are employing fallacious reasoning here. The fact that the world changes is almost trivially true. But it does not follow that our morals need to change with it. In fact, if you are arguing for a truly objective morality, then it is difficult to see how it can be so contingent on the time at which a moral statement is uttered.

For the last time: I am NOT making a naturalistic fallacy! Just because something is natural, does not mean it is right. And, besides, what is "natural" is not always obvious to such moralists, anyway: It is easy for them to be wrong about what is natural.

Instead, I am pointing out that people will tend to converge on ideas that work for the better of their well-being. And, this is about as inevitable as our revolving around the sun, in the long run.

When someone decides that they should do what everyone tends to naturally converge on, then THEY are the ones making the fallacious leap. All I can do is report on that, as something that happens.

I can NOT prove that they should do it. But, since it becomes increasingly irresistible to do that, over time, it does not even matter what I prove or not.

Do you understand the distinction?

Not really. It sounds to me you are dressing up a theory of descriptive ethics as normative ethics but not realizing that the two are distinct.

Actually, his chapter on game theory makes it clear that genuinely selfish people will find it in their best interest to be altruistic, since it genuinely would be in their best interests.

I think you mean pretend to be altruistic. The thing is that usually we can distinguish between acts of genuine altruism and someone acting to get into another person's good books. In the latter case, apparent altruism is self-interest and in the former case it is altruism.

I think that psychological egoism is an unproven position.

But, yes, I am aware that it is metaphorical at the gene level. Welcome to the thread! I can tell you are new here, because I already said that a dozen times.

Those would have hidden selfish agendas behind them. For example: Someone who cannot, or has decided not to, have children can still contribute to the raising of other children.

Again, I think this is a stretch and one that you cannot prove. Your counter-example also appears to be wrong as well given that you are demonstrating that people are more altruistic than they may appear.

It is not as simple as that: Once societal interest is locked into an aspect of self-interest, it is hard to tease them apart. (Though, not impossible.)

I think societal interest is the more important point to harp on. This talk of self interest is only a distracting tangent.

It appears as though you hit upon one of the ugly truths about morality: It does take societal intervention to shape itself towards being better for society.

No. The point I am making is that a person who is forced to give blood or give money to charity is not acting in a manner that is equally moral to the person who gives of her own volition.

What I mean is that there might be a strange sense growing among people that something like Rawls' Theory of Justice might be right. But, they cannot place their finger on exactly why. Nor would they know that name until it was taught to them.

No. Just because I used one example of a good thing that was innate, does NOT imply that I think all good things are innate. Lots of bad things are innate, too, but we can grow out of them: Such as the many biases built into our brains.


It will be if I win! ;)

But, it will ALSO be available even if I lose, and get splattered on stage. So, YES, it will be available for all the world to see, either way.



However, I think the direction the debate goes will be different than this thread, which seemed to pick apart ONLY ONE example of HOW objective morality COULD work, in theory. I suspect the debate will be more philosophically oriented.

I was hoping more people, on here, would be willing to defend Error Theory, specifically. Oh well.

Okay well I look forward to seeing it. Unfortunately, I think that your opponent may bring up a number of the topics we have and also be as convinced as we are that your reasoning is fallacious.
 
Definition.
Which appears to be missing from this thread, as is often the case.

Behaviour is behaviour. That's it. I don't think objective morality exists.
I think there are objectively best rules of behaviour for an individual human in a human society and I think they are grounded in biology.

I have said clearly that this gives no insight into objective morality, because I don't think there is any such thing.

Well then you are taking the same position as Wowbagger's opponent. Which again makes me wonder how you can both think you are arguing the same thing.
 
As noted before: egalitarianism is more stable than elitism.

We can even add to that, saying: Resources are also generated more effectively in an egalitarian society than an elitist one. Suppose we have 100 units of a valuable, but somewhat renewable, resource. If 99 units go to the elite, and 1 unit goes to others, we might not see much more growth of that. Perhaps there would be a total of 105 units in a decade.
But, if we were more egalitarian: Say 2 units went to the top 1% of the elite, and the other 98 units spread evenly across the rest of the population, we might find MORE of them being grown: So that, at the end of the decade there might be 125 total units. The market would more likely be motivated to find more innovative ways to increase production. So, EVERYONE wins!
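The toy arithmetic in that post can be written out as a minimal sketch. Note that `decade_total` and its growth rates are invented assumptions, chosen purely to reproduce the illustrative 105-vs-125 totals; they are not empirical claims about any economy.

```python
# Toy model: 100 units of a renewable resource, compounded over a decade
# under two allocations. Growth rates are made-up assumptions chosen to
# match the illustrative totals in the text (roughly 105 vs. 125 units).

def decade_total(shares, annual_growth, years=10):
    """Compound each share at its own annual rate, then sum the result."""
    for _ in range(years):
        shares = [s * (1 + g) for s, g in zip(shares, annual_growth)]
    return sum(shares)

# Elitist split: 99 units hoarded (slow growth), 1 unit for everyone else.
elitist = decade_total([99, 1], [0.005, 0.01])

# Egalitarian split: 2 units to the top 1%, 98 spread across the rest,
# where broad participation drives faster growth (by assumption).
egalitarian = decade_total([2, 98], [0.005, 0.023])
```

Under these assumed rates, the elitist split ends the decade at roughly 105 units and the egalitarian split at roughly 125, matching the numbers above; the conclusion obviously depends entirely on the assumed growth rates.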

What you are forgetting is that if you gave 99 units to the most productive 1% of members of society and executed the least productive 10% of all people because they are lazy and useless then the top 1% would end up not only doubling the supply of the resource but also provide 1,000s of units of another slightly inferior resource that would be happily used by the remaining members of society thus demonstrating the worth of elitism over egalitarianism. Some people might complain that you cannot execute people just for being lazy but those people don't count because they are too lazy even to exercise their Rawlsian sense of justice and anyway science shows them to be objectively wrong.
 
If science demonstrated that "If you do X, it will be bad for various things you care about.", someone could ask: "What is morally wrong with doing things that are bad for things I care about?"

Though it is hard to imagine a rational person asking such counter-questions*, I will leave that be, for the sake of philosophy.
(*Even if you have to accuse them of making a naturalistic fallacy, in order to do so.)

Actually, you should see what people do and say in real life. You will be surprised at how many counter-examples you will find to what seems so obvious to you, and how often people do what you consider irrational.

In Real Life, many people:

drink alcohol
smoke tobacco and marijuana
snort cocaine
gamble
have unprotected sex

They do these things even when they know that these things are bad for things they care about (their health, their finances, their families, even society as a whole).

Sometimes they will argue that their habits are not immoral and that the state should not be able to prohibit their behaviour. They argue that these are liberties that should be enshrined and protected, regardless of how much demonstrable harm smoking the wacky backy can do to people's bodies. Similarly, they will claim the right to gamble, even though it can be demonstrated that the vast majority of gamblers will lose money.

So do you argue that it is obvious that people should be deprived of the right to damage themselves by engaging in activities they want to do, regardless of how much demonstrable harm those activities cause them?
 
No. Any finite observation of cycles is consistent with any limiting behavior. After all, there are infinitely many polynomial functions that go through the points (0, 1), (1, 0), (2, 1), (3, 0), ..., (50, 1), (51, 0). The behavior of such a polynomial may look cyclical between 0 and 51, but of course it is not cyclical globally.
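To make the interpolation point concrete, here is a minimal sketch in plain Python. It uses exact rational arithmetic, and a scaled-down set of six alternating sample points (standing in for the 52 points above, which are hypothetical anyway). The unique minimal-degree polynomial through these points reproduces every "cyclical" sample exactly, yet one step outside the observed window it is nowhere near the cycle:

```python
from fractions import Fraction


def lagrange_eval(points, x):
    """Evaluate, at x, the unique minimal-degree polynomial through `points`
    (Lagrange interpolation, done exactly with rationals)."""
    x = Fraction(x)
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / Fraction(xi - xj)
        total += term
    return total


# Six points that alternate 1, 0, 1, 0, 1, 0 -- they "look" cyclical.
points = [(0, 1), (1, 0), (2, 1), (3, 0), (4, 1), (5, 0)]

# The interpolating polynomial matches every observed sample exactly...
assert all(lagrange_eval(points, xi) == yi for xi, yi in points)

# ...but one step past the observed window it diverges wildly:
print(lagrange_eval(points, 6))  # -31, not the "expected" 1
```

So a finite run of apparently cyclic observations pins down nothing about the limiting behavior: the curve that fits them perfectly can still shoot off to arbitrary values immediately afterward.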

We do not need to wait an infinite length of time to see whether there is an incline or a cycle. To test the theory, we only need to look at data on the scale of human generations. If any moral progress is to be made, the slowest change would occur between generations.

It would not contradict the thesis that human moral values converge in the limit. It may take away the proposed mechanism, but it wouldn't refute the thesis itself.
It would likely refute the thesis, because the thesis is an extension of what we would expect if Natural Selection is true. If it were not, there would be no reason to assume morality would converge on any values at all, let alone consequentialism.

Yes, let's talk about that. Ants attack other colonies of the same species. Many species "cannibalize" their own kind. Are they necessarily failing at the evolution game? Do we expect ants to eventually stop fighting others of the same species? Is that how evolution will work for them?
The strategy works under the selection pressures they have faced in the past. But there is no reason to assume they will not eventually grow out of it, as humans are doing.

Part of what I think is ludicrous in your "theory" is that you seem to be predicting what evolution will do (despite the fact that the moral behaviors we're discussing are transmitted more socially than genetically).
But, near as I can tell, real biologists don't predict what mutations will be successful before they happen. They don't try to guess how evolution will turn out, because there is no single outcome for an evolutionary history.
Biologists can make predictions on Ultimate patterns we can expect, though not necessarily the proximate details in which they will play out.

For example, we can predict that bacteria will evolve resistance to antibacterial soap. We might not be able to predict the little details of how that will happen. But, given a good understanding of evolution, we can at least predict that it will likely happen.

For morality, my specific predictions might be guesses. But, the Ultimate factors are not: That morality will always tend to revolve around a form of consequentialism.

I suppose Catholicism is a religious cult now?
Catholic doctrine has changed and transformed over the years to align more closely with the realities of real-world consequentialism. They might be slower at it, but if they did not adapt to some degree, Catholicism would either be over, or it would remain convincing only to a very small group of people.


Consider it this way. Suppose I said that God exists and wants us to love one another and this is the true morality. Then I said everyone knows in his heart what God wants for us. This is why we think murder is bad -- because in our heart, we know God wants us not to kill. And look! People do think murder is bad! So that's evidence for my theory!
But we do not need God to do it! Science and natural forces have done a good enough job compelling us to think the same message!

Funny how that happens, isn't it?

I would be unimpressed without evidence that goes beyond your "just-so" stories and ad-hoc claims that we don't want to be shoved, but don't mind getting killed by a trolley purposely shunted onto our tracks.
My theory is that consequentialism is the key, driving-force value. I see "prudence" as almost a synonym of that. What is prudent is almost equivalent to what results in the best consequences. Maybe not exactly, but close enough to not bother arguing over.

If so, you should tell me why it's immoral for me to vandalize my own truck. (And if this reason goes beyond the fact that I care for the truck, then it still does not confirm your previous claim.)
It is hard to moralize about someone doing anything to their own personal property, even destroying it, as long as no one gets hurt in the process.

Vandalizing someone else's truck is easier: people would generally be morally compelled not to vandalize things that do not belong to them, since widely allowing such behaviour would have bad consequences for society in general.

Though, for the sake of argument: the same thing applies to your own property, to a lesser degree. If society raised people who widely vandalized things they cared about, even things that belonged only to them, that would likely lead to bad consequences for society too, though perhaps to a lesser degree than vandalizing other people's property.
 
We do not need to wait an infinite length of time to see whether there is an incline or a cycle. To test the theory, we only need to look at data on the scale of human generations. If any moral progress is to be made, the slowest change would occur between generations.

If I read you right, this is something new.

Every generation, we should see moral "improvement" or else your theory is wrong -- is this what you mean?

Notice that this is much more specific than what I thought you said. I thought you were merely committed to our moral progress approaching a fixed point "in the limit", which places no restriction at all on finite behavior. But now you've said that we should see improvement "between generations" (whatever that means).

I suspect that I misunderstand you, however. So, let's be clear. Suppose I am tracking prevailing moral opinions over time, or social well-being or -- well, or what? Tell me what I should be tracking over time, and tell me which observations are inconsistent with your hypothesis.


It would likely refute the thesis, because the thesis is an extension of what we would expect if Natural Selection is true. If it were not, there would be no reason to assume morality would converge on any values at all, let alone consequentialism.

In fact, I don't see anyone else arguing that this thesis is a consequence of natural selection at all, so I'm not sure I see the close connection you suggest.

But never mind. Let's focus on the data, and not the theoretical underpinning of the theory.

The strategy works under the selection pressures they have faced in the past. But there is no reason to assume they will not eventually grow out of it, as humans are doing.

Stunning! Ants will eventually stop intraspecies conflict.

Well, I appreciate the explicit answer. I can't say I find it at all plausible that natural selection is so predictable and acts in such a species-wide manner, but at least it is clear what you're committed to.

Would it bother you if entomologists, well-read in the theory of natural selection, did not find this prediction persuasive?

But we do not need God to do it! Science and natural forces have done a good enough job compelling us to think the same message!

You rather missed my point. The imaginary theist's argument is similar to your argument. I don't find the imaginary theist's argument persuasive in the least, and for the same reasons, I don't find your argument persuasive in the least.

It is hard to moralize about someone doing anything to their own personal property, even destroying it, as long as no one gets hurt in the process.

So, let me remind you what you said earlier.

If science demonstrated that "if you do X, it will be bad for various things you care about," someone could ask: "What is morally wrong with doing things that are bad for things I care about?"

Though it is hard to imagine a rational person asking such counter-questions*, I will leave that be, for the sake of philosophy.
(*Even if you have to accuse them of making a naturalistic fallacy, in order to do so.)

Now, I value my truck. I care about it. Science can surely demonstrate that if I vandalize my truck, it will be bad for my truck, which I care about. (In particular, it will lower the value of the truck, which is important to me, it will make it less comfortable to drive, which is also important, and so on.)

Now, I asked, "What is morally wrong with doing things that are bad for things I care about?" You have answered: nothing, per se. Nothing at all is morally wrong with doing things that are bad for things I care about, unless someone other than me is affected.

So, you agree you misspoke earlier, yes?


Vandalizing someone else's truck is [not relevant in the least].

Though, for the sake of argument: the same thing applies to your own property, to a lesser degree. If society raised people who widely vandalized things they cared about, even things that belonged only to them, that would likely lead to bad consequences for society too, though perhaps to a lesser degree than vandalizing other people's property.

But not wrong because it damages something I care about. Wrong only insofar as it harms the interests of others. This is not relevant to your claim, which was that everyone who is rational recognizes that it is immoral to damage things that they care about.

If I and only I care about something, and my actions to damage it have no negative impact on others, then there is nothing at all immoral if I damage the things I care about.
 
Wowbagger, this is from the paper I linked to earlier and I would regard it as a pretty good summary of Nihilism/Error Theory. It is likely that your opponent will be making a similar argument:

Nihilism consists in the following claims: a) normative terms – good, bad, right, duty, etc. – do not name real properties of events or things, either natural or non-natural ones; b) all claims about what is good in itself, or about categorical moral rights or duties, are either false or meaningless; c) the almost universal beliefs that there are such properties and that such claims are true can be "explained away" by appropriate scientific theory. Nihilism takes the form of what J. L. Mackie [1977] calls an "error theory." It does not deny that beliefs about norms and values can motivate people's actions. It does not deny the felt "internalism" of moral claims, nor does it deny that normative beliefs confer benefits on the people who hold them. Indeed nihilism is consistent with the claim that such beliefs are necessary for human survival, welfare and flourishing. Nihilism only claims that these beliefs, where they exist, are false. It treats morality as instrumentally useful — instrumentally useful for our nonmoral ends, or perhaps the nonmoral ends of some other biological systems, such as our genes for example.

(My bold)

http://people.duke.edu/~alexrose/dditamler.pdf
 
