Richard T. Garner and "Beyond Morality"

Perhaps we should wait for the evidence that "well-being" is the limit of all moral evolution.

I hope the evidence will come in three forms:

The easiest one to develop is historical: showing that detractions from well-being have only been temporary setbacks in an otherwise upward progression towards it.

The most difficult one is scientific studies. There might not be any that have tested this idea directly, yet. But we can still check whether any existing studies are suggestive of the idea, or otherwise. And we might be able to design a new experiment that tests it more directly, to be conducted in the future. That is what I hope to get some insight on tomorrow evening!

(At the very least, the idea has predictive power for potential fresh insights into morality. I have yet to see anything like that coming from Error Theory. If the predictions prove to be incorrect, that would still be knowledge gained.)

The third is possible theoretical support, from effects already accepted in Natural Selection, and possibly other natural processes. We can state the theory in terms of how it inevitably emerges from them. Though it will take more research and development to describe how the layers of abstraction get built between those processes and our actual morality.

I will admit that "Well Being" is currently easier to define, than it is to prove.

Or, at least, some definition of "well-being", so that the question "Should we value well-being?" is meaningful.
The general health, wealth, and happiness of the collective society.

That's a concise version. We can go into further details of what those terms mean, if you wish.

Happiness, I know, is potentially dangerous to include, because it sounds more subjective. And it yields issues of manipulation. (If someone is hooked up to a machine to stimulate their brain to always feel happy, are they truly happy?)
But, there are some ways we can tease out useful, objective ways to measure aspects of "happiness"; including physiological stress (which could coincide with health), ability to get their responsibilities taken care of, progress towards stated goals (if any), etc.
 
Morality is self-interest writ large. Most laws protect property rather than people. Free speech is fine until you say what your family, your peer group or your government does not want said. Consider Mr. Snowden. Dying for your country is dandy until the people on top are required to do so, which is why it's OK to carpet bomb entire cities, but not to assassinate political leaders, as that might just come back at you.
The people with the gold make the rules. That's human morality.
Gather allies.
Don't challenge the leaders till you are in a position of strength.
Help people who are in a position to help you.
Remember favours.
These are moral behaviours in a communicative, tribal ape.
What's interesting is how "self-interest" tends to get "writ larger" over time.

It is NO LONGER moral to carpet-bomb cities, because if that's allowed, then someone you care about might be in a city that gets carpet-bombed.

People with the gold might have started making the rules, but some of that power gets ceded to those without gold, once their voices become something worth caring about.



I honestly feel philosophers keep chasing this simple notion looking for something complex which really isn't there.
Well, actually, I think philosophers are making naïve oversimplifications of something that is terribly complex. Each different philosophy is picking on, and magnifying the importance of, different aspects of that complexity.

But that's beside the point. I do think we largely agree.
 
I will admit that "Well Being" is currently easier to define, than it is to prove.


The general health, wealth, and happiness of the collective society.

That's a concise version. We can go into further details of what those terms mean, if you wish.

Yes, let's go into further detail.

For one thing, we need a way of measuring how adoption of various moral rules impacts well-being. For this, we need a way to measure well-being which is complete enough that we can make rough estimates of the effects of different moral rules.

We need to be able to compare, for instance, whether there are situations in which a small, servile population can produce greater well-being than a society in which everyone is treated equally.

If this is seriously intended as something like a scientific theory, in principle even if the details are beyond us in practice, we need a clearly defined means of measuring well-being of a society. It will not do to have a vague definition which does not correspond to objective measurement.
 
What's interesting is how "self-interest" tends to get "writ larger" over time.

It is NO LONGER moral to carpet-bomb cities, because if that's allowed, then someone you care about might be in a city that gets carpet-bombed.

Wow. That's not even close to a reason that I might find plausible.

I really don't worry about the slim probabilities an acquaintance is in Syria, Iraq or Iran. (This despite knowing an Iranian citizen or two.)

Rather, it is no longer moral to carpet-bomb cities for the simple reason that we regard it as immoral to attack unarmed non-combatants generally speaking. I don't think that the increased mobility of society enters into it. I'm far more likely to know a soldier who faces increased risk if we don't carpet-bomb a civilian population and break their will to fight than I am to know someone who suffers from the bombing.
 
What's interesting is how "self-interest" tends to get "writ larger" over time.

It is NO LONGER moral to carpet-bomb cities, because if that's allowed, then someone you care about might be in a city that gets carpet-bombed.

What????

There were huge debates about whether or not it was morally justified to carry out "area bombing" or "terror bombing" of civilians during World War Two. That is an empirical fact. But the arguments did not revolve around people saying, "I could have relatives in Hamburg, for all I know!"

There are plenty of ways in which morality could be considered to be not about self-interest. Common ones would be about giving aid money to charity - to feed people we will never meet and who can never reciprocate. Or about giving blood. Most people who do this do not do it out of self-interest as far as can be made out.
 
For one thing, we need a way of measuring how adoption of various moral rules impacts well-being. For this, we need a way to measure well-being which is complete enough that we can make rough estimates of the effects of different moral rules.
If we can devise one, that would be grand!

But, I don't think we need one. We don't have a single, complete way to measure "health", but we have a lot of small ways to measure various aspects that go into "health". And, from that we can state if someone is generally in "good health" or not.

Most measurements of "well being" get packaged together: Those with more wealth, also have more health, also have less suffering, etc. If we pick a few different ways to tap into each of those, we can get a general measure of "well being".

So if we look at violence: That seems to generally go down, by most measures, over time.

Prosperity: That seems to generally improve, over time, even in spite of setbacks. A "bad economy" today is still bad, but NOT quite as bad as one would have been even decades ago!

Medicine is helping us live longer. Mortality rates at various ages get lower over time, especially for infants.

Intelligence: That seems to go up, by most measures, over time.

Freedoms: Those generally increase over time.

Pick any part of well-being you wish, and you should see that it likely improves, over time, if you look at the data. Exceptions will be small in number.

None of it is perfectly linear; rather, it takes the form of an inclined saw-tooth graph.

The UN has a few indexes for measuring these things, as a place to start, if you want something comprehensive. Though, my point is that it probably hardly matters: Making up your own will likely show the same trends, (unless you cherry-pick based on only the small number of exceptions).
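The point that "most reasonable definitions point the same direction" can be illustrated with a toy calculation. All the numbers below are invented for illustration, not real data: if every sub-metric of well-being improves over time, then any composite index built from positive weights of those sub-metrics must agree on the direction of the trend, no matter how you weight them.

```python
# Hypothetical normalized scores (0-1) for two eras; the specific
# values and metric names are made up for the sake of the sketch.
metrics_1950 = {"health": 0.4, "wealth": 0.3, "low_violence": 0.5, "freedom": 0.45}
metrics_2000 = {"health": 0.7, "wealth": 0.6, "low_violence": 0.7, "freedom": 0.65}

def composite(metrics, weights):
    """Weighted average: one of many possible 'well-being' definitions."""
    total = sum(weights.values())
    return sum(metrics[k] * w for k, w in weights.items()) / total

# Three arbitrary weightings stand in for different "reasonable" definitions.
weightings = [
    {"health": 1, "wealth": 1, "low_violence": 1, "freedom": 1},
    {"health": 3, "wealth": 1, "low_violence": 1, "freedom": 1},
    {"health": 1, "wealth": 1, "low_violence": 3, "freedom": 2},
]

# Since every sub-metric improved, every positive weighting agrees.
for w in weightings:
    assert composite(metrics_2000, w) > composite(metrics_1950, w)
print("All weightings rank 2000 above 1950")
```

Of course, real indices disagree on magnitudes; the sketch only shows why they tend to agree on direction when the underlying trends all move the same way.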

It will not do to have a vague definition which does not correspond to objective measurement.

The funny thing is that we don't need to have a single, be-all, end-all definition. Most reasonable ones will suffice (meaning: most people are willing to see "well-being" defined that way), because they would all point in the same direction.

Wow. That's not even close to a reason that I might find plausible.
Really? Then why did you write this:

it is no longer moral to carpet-bomb cities for the simple reason that we regard it as immoral to attack unarmed non-combatants generally speaking.
That's just a general way of saying you "care about people in a city that might be carpet-bombed".

You do NOT need to know someone to care about them, sufficiently enough to not carpet-bomb them.

You, for example, seem to care about "unarmed non-combatants", in general, sufficiently enough to not carpet-bomb them. We have that in common, at least!

It seems our military leaders in the past did not care quite as much as we do! They were willing to sacrifice a bunch of them, to win a war. (One, I might add, that was often pretty petty!) Today, they typically take better care.
 
cornsail said:
The problem with "well being" as a basis of morality is that it raises the question "whose well being"?
Everyone's!

There are very few things in life, any more, that are actually zero-sum games. There are creative ways to turn every competition for limited resources into NON-zero-sum games, where everyone wins! It is often NOT EASY to develop these solutions. Sometimes they are hard to negotiate, because they may require a short-term sacrifice by one or more parties. But, once these strategies are figured out by someone, they have the opportunity to spread to everyone.

In our early history, we had stronger in-group/out-group biases, where we would act in the best interests of everyone in our group. If we were paranoid and suspected the out-group might try to kill us, we would be tempted to launch a pre-emptive strike, and kill them off first.

Today, our "in-group" has become a LOT more inclusive, and even includes almost every stranger we meet on a daily basis. So, we are more inclined to develop non-zero-sum strategies with more people. (And, perhaps, even other animals!)
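The zero-sum vs. non-zero-sum distinction above can be made concrete with a toy pair of two-player games. The payoff numbers are invented for illustration: in a zero-sum game every gain is someone else's loss, while in a trade-like game mutual cooperation grows the total pie, so "everyone wins" outcomes exist.

```python
# Payoffs given as (player_a, player_b) for each pair of actions.
# A pure conflict over a fixed resource: gains and losses cancel.
zero_sum = {
    ("grab", "grab"): (1, -1),
    ("grab", "yield"): (2, -2),
    ("yield", "grab"): (-2, 2),
    ("yield", "yield"): (-1, 1),
}

# A trade-like interaction: cooperation enlarges the joint payoff.
trade_game = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 2),
    ("defect", "cooperate"): (2, 0),
    ("defect", "defect"): (1, 1),
}

def is_zero_sum(game):
    """True if every outcome's payoffs sum to zero."""
    return all(a + b == 0 for a, b in game.values())

assert is_zero_sum(zero_sum)
assert not is_zero_sum(trade_game)

# In the trade game, mutual cooperation maximizes the joint payoff.
best = max(trade_game, key=lambda acts: sum(trade_game[acts]))
assert best == ("cooperate", "cooperate")
```

The "hard to negotiate" part of the argument corresponds to the fact that unilateral defection still pays in the short term; the creative work is finding arrangements that make mutual cooperation stable.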

So is it your contention that that which is moral is that which increases well being?

And in response to the question "whose well being?" you say "everyone's". So this suggests that the morality of an action is determined by its effect on the net average of the well-being of sentient creatures, correct?

If you haven't already, then you should verse yourself on the arguments for and against utilitarianism.

I can also add this:

If God existed, and declared "X is a good thing, and Y is a bad thing!" Those would, no doubt, become facts, because this God is omnipotent, and his word IS reality.
If someone then says "I should support X, and not Y", you could say they are making a naturalistic fallacy.

Replace "God" with "science", and the case is not much different. (Though, science would be provisional and not "law".)

If science declared "X is a good thing, and Y is a bad thing!", it would only be natural for most people to say " I should support X, and not Y", even though that might also be a naturalistic fallacy.

If I am accused of conflating "Is" and "Ought", it is only because everyone naturally does that, anyway! Don't shoot the messenger!

My important contribution would be to demonstrate that there is no other stable manner in which morality can exist. Any attempt to defy facts would not last forever. (According to theory.)

Huh? You seem to be suggesting that everyone either believes that morals are determined by God or believes that science can answer moral questions? Or am I misunderstanding? I don't believe either.

We do NOT need to be aware of it happening, for it to be an influence on us!

Indeed, of course not. As I said, I wasn't sure what you meant by "proto". Apparently you mean something unconscious. I'm not sure what you are basing your claim of "proto values" on, though, or how far in the evolutionary past you are suggesting it was at work.

Much of what we do is guided by hidden influences we are not aware of. It took science centuries to work out the various biases in our brains. There could still be more of them to discover!

Yes, of course there are more. We only have a shallow understanding of biases in the brain at present.

Perhaps this line of reasoning should start from scratch:

I think we can (hopefully) accept the fact that a conscious brain is a prerequisite for most accepted definitions of morality. (If not, we can debate that in another thread!)
But, what happened BEFORE we had such a consciousness? Morality did not just suddenly appear out of nowhere! It was built on top of biological systems that were already in place. Those systems (in hindsight) could be said to be a "proto-morality".

Is that clearer?

I think that's clearer.

But this seems incredibly speculative. And I don't know why pre-morality biological systems would be considered "proto-moral" rather than just "non-moral". I also don't know why morality should be assumed to treat these "proto moral" drives as foundational as opposed to simply being separate systems.

See my "layers" comment, next:

This is a fair point. Perhaps it would be best to explain it in terms of Layers: Systems are built on top of systems, but the workings of higher systems might not necessarily be reduced to the workings of the lower ones. Though, we can still predict aspects of one based on the other.

This line of thinking was inspired by how Antonio Damasio explains how consciousness works, in his book "Self Comes to Mind". He defines layers of consciousness this way: A proto-self, in the form of basic survival systems; a core self, where a 'narrator' forms from maps of the proto-self; and an autobiographical self on top of that, where memories and language contribute to a sustained sense of "being aware". Each one is built on top of the other. But, the autobiographical self cannot, necessarily, be reduced to merely the behavior of the proto-self.
Though, having said that, we can still make some types of predictions about one layer, based on findings of another.

I am still researching and developing this line of argument, so I cannot tell you where good places to draw the lines between layers would be, for morality yet. But, if such layers are found to be practical, that could better explain what I mean.

Damasio's theory also sounds incredibly speculative.

Natural selection would be a proto-morality. But, thanks to the workings of various layers between that and our current, actual morality, we can NOT reduce our morality to just natural selection.

Huh? I don't think it makes any sense at all to describe or define natural selection as a form of morality, proto or otherwise. Natural selection is not a biological system. It is not a physical thing at all. It's just the logical necessity that those organisms that reproduce tend to be more adept at reproducing than those organisms who do not reproduce.

However, that does NOT imply we can't make predictions about morality based on our understanding of natural selection, and vice-versa.

We could, but predicting what people will on average find to be moral or immoral is not the same as answering moral questions.
 
There were huge debates about whether or not it was morally justified to carry out "area bombing" or "terror bombing" of civilians during World War Two. That is an empirical fact. But the arguments did not revolve around people saying, "I could have relatives in Hamburg, for all I know!"
After World War II we DID start to care a lot more about people in Hamburg, even if they weren't relatives. Even if we didn't know them.

My larger point is that the circle of altruism is one that continues to expand.

We started to recognize that: There is a chance you would be bombing someone who could end up being important to you, in the future.
And: All the bombing did was wreck future expansion of the world economy, by taking out a chunk of the current one.

THAT is why carpet-bombing is no longer considered moral.

And, in those ways, it becomes in our own self-interest not to do it!

If the opposite were true: *IF* we lived in an alternative universe, where carpet bombing certain cities actually led to better well-being for everyone, it would NOT be immoral.
But, that's not the universe we happen to live in. We live in one where such bombings have a way of hurting the very bomber doing it!
 
After World War II we DID start to care a lot more about people in Hamburg, even if they weren't relatives. Even if we didn't know them.

My larger point is that the circle of altruism is one that continues to expand.

We started to recognize that: There is a chance you would be bombing someone who could end up being important to you, in the future.
And: All the bombing did was wreck future expansion of the world economy, by taking out a chunk of the current one.

THAT is why carpet-bombing is no longer considered moral.

And, in those ways, it becomes in our own self-interest not to do it!

If the opposite were true: *IF* we lived in an alternative universe, where carpet bombing certain cities actually led to better well-being for everyone, it would NOT be immoral.
But, that's not the universe we happen to live in. We live in one where such bombings have a way of hurting the very bomber doing it!

Sorry, but this is completely fatuous.

Some people give blood to strangers. But not out of self-interest. Others do not do this.

Some people give money to strangers. But not out of self-interest. Others do not do this.

Some people spend long periods of time caring for the elderly, the ill, the homeless, the mentally handicapped. Why is that self-interest?

On the other hand, I could steal from people who are weak, or cheat people to get ahead or tease people for my own amusement and all of these would be in MY self-interest.

Why are these things not moral and why are the other things I mentioned before moral?

Or are they?


Besides, having denied you are Sam Harris, you pretty much give a complete regurgitation of his ideas, right down to the comparison of "well-being" with "health". But you are making an assumption that "well-being" is all that should be cared about. This is a consequentialist ethic.

If you want to convince your opponent of objective morality then you have to demonstrate that consequences are all that matter. Yet, there is another school of thought that suggests that motives matter. This is deontological ethics.

Can you do this?
 
In terms of descriptive vs. normative ethics, we can use the famous trolley experiment, hopefully.

There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

A descriptive ethics might explain how people come to moral judgments. We could do a survey and find out how many people pull the lever.

Apparently, most people say they would pull the lever.

However, in the follow-up question:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Apparently in this case most people say they would NOT push the fat guy.

Now, given that the number of lives saved or lost in both cases is the same, this can tell us something about ethics in a descriptive sense.

But having the survey results does not tell me what I should do, which is the normative question.

If you think there is no distinction between descriptive and normative judgments then you would have to explain why it is morally right to pull the lever in the first case but not push the fat guy in the second.

Potentially there could be an answer but I cannot see what it is.
 
So this suggests that the morality of an action is determined by its effect on the net average of the well-being of sentient creatures, correct?
Well, that's an approximate way to do it.

Though, as morality improves, we can probably cover a lot more people than merely "average" ones.

We could, but predicting what people will on average find to be moral or immoral is not the same as answering moral questions.
No, it must be more objective than that.

We must find what will, on 'average', BE better for well being, regardless of what people think when asked.

If you haven't already, then you should verse yourself on the arguments for and against utilitarianism.
Oh, I have!

And, the arguments against it don't amount to a hill of beans, because being utilitarian, specifically "welfare utilitarian", is something societies are going to do in the long run, and it's beyond our control.

"Wellfare utilitarian" could be seen as almost a synonym of "well being".

Huh? You seem to be suggesting that everyone either believes that morals are determined by God or believes that science can answer moral questions? Or am I misunderstanding? I don't believe either.
Well, not quite.

What I am saying is that the answers to moral questions that science determines will eventually be accepted by everyone. (Or at least most people.)

Even if you don't believe science can answer moral questions, you will most likely find yourself agreeing with the answers science churns out, eventually. Unless, of course, you wish to be one of those Footnotes of History I was talking about earlier.

In the corporal punishment example: Science determined that it is bad for students. If you accept that, then you allowed science to answer the question. If, instead, you held true to the command "spare the rod and spoil the child", then you will be a Footnote of History.

Indeed, of course not. As I said, I wasn't sure what you meant by "proto". Apparently you mean something unconscious.
That is correct.

I'm not sure what you are basing your claim of "proto values" on, though, or how far in the evolutionary past you are suggesting it was at work.
I am suggesting it was "at work" in even the earliest of life forms. Though, it is important to understand that we only recognize it as "proto-morality" in hindsight.

And I don't know why pre-morality biological systems would be considered "proto-moral" rather than just "non-moral".
To emphasize that morality was built from it as a key ingredient.

I also don't know why morality should be assumed to treat these "proto moral" drives as foundational as opposed to simply being separate systems.
It grants us the opportunity to find the origins of our morality. And, how they get enforced.
Or, in other words: It's a framework for finding objective moral truths, at least according to the theory I am describing.

Damasio's theory also sounds incredibly speculative.
His theory is a LOT more developed than any other theory about how consciousness evolved, at the moment. There is a TON of science behind each claim he makes. Though going over the details here would be too much of a tangent.

It's just the logical necessity that those organisms that reproduce tend to be more adept at reproducing than those organisms who do not reproduce.
"Getting more adept at reproducing" could be modeled as "Life forms automatically 'value' adeptness of reproducing".

Yes, I am anthropomorphizing a bit. We do not literally mean they value it. But, we can characterize them that way, the same way we characterize genes as being "selfish".

If that is considered a proto-morality, we can first predict that mothers will care for their children. But, that of course, is hardly news.

We can also predict that moral codes that have an impact on our children will be put under more scrutiny than ones that don't. And, we can start to see the beginnings of where most of the rest of our morals and values come from.

Each moral decision we make can trace its roots back down to those core and "proto" values. Even if one cannot necessarily be "reduced" to the other, especially when collective interests of the society start to get recognized.
 
Well, that's an approximate way to do it.

Though, as morality improves, we can probably cover a lot more people than merely "average" ones.

No, it must be more objective than that.

We must find what will, on 'average', BE better for well being, regardless of what people think when asked.

Oh, I have!

And, the arguments against it don't amount to a hill of beans, because being utilitarian, specifically "welfare utilitarian", is something societies are going to do in the long run, and it's beyond our control.

"Wellfare utilitarian" could be seen as almost a synonym of "well being".

Well, not quite.

What I am saying is that the answers to moral questions that science determines will eventually be accepted by everyone. (Or at least most people.)

Even if you don't believe science can answer moral questions, you will most likely find yourself agreeing with the answers science churns out, eventually. Unless, of course, you wish to be one of those Footnotes of History I was talking about earlier.

In the corporal punishment example: Science determined that it is bad for students. If you accept that, then you allowed science to answer the question. If, instead, you held true to the command "spare the rod and spoil the child", then you will be a Footnote of History.

That is correct.

I am suggesting it was "at work" in even the earliest of life forms. Though, it is important to understand that we only recognize it as "proto-morality" in hindsight.

To emphasize that morality was built from it as a key ingredient.

It grants us the opportunity to find the origins of our morality. And, how they get enforced.
Or, in other words: It's a framework for finding objective moral truths, at least according to the theory I am describing.

His theory is a LOT more developed than any other theory about how consciousness evolved, at the moment. There is a TON of science behind each claim he makes. Though going over the details here would be too much of a tangent.


"Getting more adept at reproducing" could be modeled as "Life forms automatically 'value' adeptness of reproducing".

Yes, I am anthropomorphizing a bit. We do not literally mean they value it. But, we can characterize them that way, the same way we characterize genes as being "selfish".

If that is considered a proto-morality, we can first predict that mothers will care for their children. But, that of course, is hardly news.

We can also predict that moral codes that have an impact on our children will be put under more scrutiny than ones that don't. And, we can start to see the beginnings of where most of the rest of our morals and values come from.

Each moral decision we make can trace its roots back down to those core and "proto" values. Even if one cannot necessarily be "reduced" to the other, especially when collective interests of the society start to get recognized.

This sounds rather similar to the confident predictions made by the Soviet Union and the role individuals would have in it. Those opposed would end up in the Dustbin of History.

But what is so wrong, morally, with being a Footnote in History?
 
Some people give blood to strangers. But not out of self-interest. Others do not do this.

Some people give money to strangers. But not out of self-interest. Others do not do this.

Some people spend long periods of time caring for the elderly, the ill, the homeless, the mentally handicapped. Why is that self-interest?
A lot has been written about how seemingly altruistic acts actually have hidden self-interest motives behind them. Adam Smith talked of the Invisible Hand (which has its limits, but is approximately right). Richard Dawkins wrote about how altruism benefits even "selfish" genes. Some of the other names escape me, at the moment.

On the other hand, I could steal from people who are weak, or cheat people to get ahead or tease people for my own amusement and all of these would be in MY self-interest.
This one is easy: If we allow people to steal from the weak, for example, then YOU could get things stolen from you when YOU become weak.

Besides, having denied you are Sam Harris, you pretty much give a complete regurgitation of his ideas, right down to the comparison of "well-being" with "health".
I am trying to complete the argument Sam Harris floundered on.

When asked "What is the scientific reason we should value well-being?"

Sam would likely respond:

"Anyone who argues otherwise is not going to contribute anything to the conversation, and is not worth talking to!"
(That is not an exact quote, but I believe it sums up his feelings.)

I am trying to do better than that. My response, in short, is:

"Well-being is the only stable manner in which morality can exist. Any deviations from it will not last forever."

It might not be a perfect response. But, I hope you recognize it as an improvement over Sam's.

My end of the debate only needs to demonstrate that having an "objective morality" will grant us better access to understanding our values, than error theory would, even if my stability theory is incorrect.

If it turns out to be wrong, then whatever is discovered to take its place would then become the answer to the question "What is the scientific reason we should value well-being?"

This is a consequentialist ethic.
And, unlike Sam, I do accept that it is a consequentialist ethic*. Specifically "welfare consequentialism".

And, as such, "welfare consequentialism" could be seen as a rough synonym for promoting "well-being".

(*At least now I do. Should you discover some of my writings in the past, where I would vehemently deny this, please disregard them. I learned a lot more since then.)

If you want to convince your opponent of objective morality then you have to demonstrate that consequences are all that matter. Yet, there is another school of thought that suggests that motives matter. This is deontological ethics.

Can you do this?
Yes. If one's motives turn out to lead to bad consequences, we can predict that they will eventually be fought against. And, those motives will not win, in the end.

Of course, someone can also fight against motives that lead to good consequences. But, in that case, the motives will eventually win, and the fighter will become a Footnote of History.

Now, given that the number of lives saved or lost in both cases is the same this can tell us something about ethics in a descriptive sense.
In the second case, it makes sense to not allow people to be pushed off of bridges. Because, someday, YOU could be the one pushed off a bridge.

And, for what it's worth: Those railway workers accepted the risk that they might get hit by a train (as long as it was an extremely rare occurrence!), whereas the guy on the bridge did NOT accept the risk of being pushed off by another person, simply for being in the wrong place at the wrong time.

Welfare utilitarianism is not merely about the raw numbers of lives saved vs. dead in a single event. It's about what rules would likely lead to more and better lives, in the long run, across many events, with many different people, interacting with different motives.

In that context, it makes sense to have a revulsion against pushing a person, vs. flipping a switch. And, against killing someone who did not accept a risk, vs. someone who did.
 
This sounds rather similar to the confident predictions made by the Soviet Union and the role individuals would have in it. Those opposed would end up in the Dustbin of History.
Ironically, the Soviet Union found themselves to be the ones in the dustbin.

It turns out communism is, objectively, wrong. We can go into the reasons why (Marx's starting axioms have been largely debunked by science, for example), but that might be too much of a sidetrack for this thread.

But what is so wrong, morally, with being a Footnote in History?
If God existed, what is so wrong, morally, with getting smitten by Him?

If Hell was a real place, what is so wrong, morally, about going to Hell?

You COULD be a Footnote, if you want to**. You COULD be smitten by God, if you want to (and assuming He existed). You COULD go to Hell, if you want to (assuming it was a real place).

(** please don't!)

But we are engaged in the task of finding objective moral truths. Anyone can try to defy those truths; that doesn't mean they don't exist.
 
A lot has been written about how seemingly altruistic acts actually have hidden self-interest motives behind them. Adam Smith talked of the Invisible Hand (which has its limits, but is approximately right). Richard Dawkins wrote about how altruism benefits even "selfish" genes. Some of the other names escape me, at the moment.


This one is easy: If we allow people to steal from the weak, for example, then YOU could get things stolen from you when YOU become weak.


I am trying to complete the argument Sam Harris floundered on.

When asked "What is the scientific reason we should value well-being?"

Sam would likely respond:

"Anyone who argues otherwise is not going to contribute anything to the conversation, and is not worth talking to!"
(That is not an exact quote, but I believe it sums up his feelings.)

I am trying to do better than that. My response, in short, is:

"Well-being is the only stable manner in which morality can exist. Any deviations from it will not last forever."

It might not be a perfect response. But, I hope you recognize it as an improvement over Sam's.

My end of the debate only needs to demonstrate that having an "objective morality" will grant us better access to understanding our values than error theory would, even if my stability theory is incorrect.

Okay, well it makes sense to talk in terms of welfare utilitarianism, certainly. And it is certainly true that we can improve our well-being by utilizing science. But what I think is that you haven't demonstrated how science reveals this to be moral in the first place.

I think your opponent will happily agree that science can give us better medicine and teach us how to make more and better crops but will ask you to prove how that is itself moral.
 
But what I think is that you haven't demonstrated how science reveals this to be moral in the first place.
If we accept that "well-being" is the only stable value on which morality can rest, then it follows that utilizing science to improve our well-being would be moral.

Or, at the very least: It would be the only stable manner in which science could be considered moral.

I think your opponent will happily agree that science can give us better medicine and teach us how to make more and better crops but will ask you to prove how that is itself moral.
Yes, you are very likely correct about that!

I will try to emphasize, though, that this "Stability Theory" is only a theoretical manner in which we can identify objective moral truths. Even if it turns out to be incorrect, that does NOT necessarily mean there are no moral truths: Only that we were wrong about what those truths were.

Either way: As we investigate such things in this manner, we will learn more about our morality and values, in the process!

Other ideas, such as Error Theory, do not seem to lend themselves towards making new, innovative discoveries about values.
 
There is no big problem in elaborating a list of moral goods provided by consensus in our own society. You have the Declaration of Human Rights as the most obvious example. But there are some problems when we try to demonstrate that these rights are objective or absolute values. These problems arise when we leave the general proclamation and enter the specific moral arena, that is to say, when we have to discuss them with other people with different moral outlooks. Then the universal moral definitions become vague or enter into mutual conflict without a clear solution. Here are some examples:

1. Individual or collective rights?
2. Socioeconomic or political rights?
3. What is well-being?
4. What is happiness?
5. What is better: more well-being for fewer people or less well-being for more people?
6. And so on.
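Point 5 above can be made concrete with a toy calculation (every number below is invented purely for illustration): the two most common ways of aggregating well-being can rank the same pair of populations in opposite ways.

```python
# Toy illustration of the trade-off in point 5: "total" and "average"
# aggregation of well-being can rank the same two populations in
# opposite ways. All numbers here are invented for illustration.

more_people = [0.5] * 100  # more people, less well-being each
fewer_people = [0.9] * 10  # fewer people, more well-being each

# Total well-being favors the larger population...
print(sum(more_people) > sum(fewer_people))             # → True

# ...while average well-being favors the smaller one.
print(sum(more_people) / 100 < sum(fewer_people) / 10)  # → True
```

Neither aggregation rule is self-evidently the right one, which is exactly why the question resists a consensus answer.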

Other problems arise when our opponent doesn't accept our criterion for deciding. This happens with people of different cultures and also with some noted intellectuals: Hume, Camus, or Dostoevsky, for example.
Hume: There is no contradiction (no objective reason) in preferring that the world perish rather than that I have a toothache.
Camus: If I have to choose between my mother and justice, I choose my mother.
Dostoevsky: Imagine a perfect welfare society where people are happy, but not free. Imagine a "conservative dandy" who says: "I prefer to be free rather than happy" and begins to destroy things on a whim. Can you provide an objective moral reason against him?

In these cases the opposition to our consensual good is radical: Hume from logic, Camus from feeling, Dostoevsky from absolute freedom. And we have no objective argument against them.
 
A lot has been written about how seemingly altruistic acts actually have hidden self-interest motives behind them. Adam Smith talked of the Invisible Hand (which has its limits, but is approximately right). Richard Dawkins wrote about how altruism benefits even "selfish" genes. Some of the other names escape me, at the moment.

Wow, did you misunderstand Smith.

He was not writing about "seemingly altruistic acts actually have hidden self-interest motives behind them." He was writing about seemingly self-interested acts which actually have effects beneficial to society as a whole.

Not the same thing at all.
 
If we can devise one, that would be grand!

But, I don't think we need one. We don't have a single, complete way to measure "health", but we have a lot of small ways to measure various aspects that go into "health". And, from that we can state if someone is generally in "good health" or not.

Even for the individual, it is not easy to say whether his health has improved or worsened. Maybe I have more stamina, but I have also developed memory loss. Am I better or worse?

It's that much harder to say whether society has improved or not. Perhaps the wealthy are living longer, while the inner-city poor are falling into more addictions and coal miners are shortening their lives through respiratory illness. Is society healthier or not?

The same issues arise with each of your claims. Is the world wealthier or not? Well, it depends on what you mean. The gap between rich and poor is widening, if I understand correctly. Is this relevant for "well-being"?

Most measurements of "well being" get packaged together: Those with more wealth, also have more health, also have less suffering, etc. If we pick a few different ways to tap into each of those, we can get a general measure of "well being".
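As a sketch of how several partial measures might get "packaged together" into one rough index (the indicators, ranges, and equal weighting below are hypothetical assumptions of mine, not an established metric):

```python
# Hypothetical sketch of packaging several partial measures into one
# rough "well-being" index. The indicators, ranges, and equal weights
# are illustrative assumptions, not an established metric.

def normalize(value, lo, hi):
    """Map a raw measurement onto a 0..1 scale, clamped at the ends."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def wellbeing_index(indicators, ranges):
    """Average the normalized indicators (equal weights assumed)."""
    scores = [normalize(v, *ranges[name]) for name, v in indicators.items()]
    return sum(scores) / len(scores)

# Toy data: life expectancy (years), median income (arbitrary units),
# and self-reported life satisfaction (a 0-10 survey scale).
ranges = {"life_expectancy": (40, 90), "income": (0, 100), "satisfaction": (0, 10)}
society = {"life_expectancy": 75, "income": 40, "satisfaction": 6.5}

print(round(wellbeing_index(society, ranges), 3))  # → 0.583
```

Note that the choice of indicators, ranges, and weights is itself a value judgment, so a sketch like this measures well-being only relative to those assumptions.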

So if we look at violence: that seems to generally go down, by most measures, over time.

I wonder if you can support this claim. It's not obvious to me. In the 20th century, war-making became very efficient, continuing its very long trend. When we take chemical, biological and nuclear weapons into account, as well as other great advances in the art of waging war, will we actually find that violence has decreased over time?

I honestly don't know.

[...]

Really? Then why did you write this:


That's just a general way of saying you "care about people in a city that might be carpet-bombed".

You do NOT need to know someone to care about them, sufficiently enough to not carpet-bomb them.

You, for example, seem to care about "unarmed non-combatants", in general, sufficiently enough to not carpet-bomb them. We have that in common, at least!

It seems our military leaders in the past did not care quite as much as we do! They were willing to sacrifice a bunch of them, to win a war. (One, I might add, that was often pretty petty!) Today, they typically take better care.

You are equivocating on the use of "care for". I don't care about, say, Iraqi civilians out of any sense of self-interest -- certainly, nowhere near as much as U.S. soldiers are of interest to me. But you said that anti-carpet bombing sentiments are motivated by self-interest. This is silly!

Look, we agree, I assume, why bombing civilians is militarily useful: if we destroy enough cities, the enemy will capitulate, thereby saving lives of our ground forces.

Now, any simpleton can see that an American soldier is far, far more likely to have a beneficial effect on my life than a civilian living in a country we are fighting. That soldier's life is worth probably 1000 foreign civilians, in terms of my own personal self-interest. So, it would be silly to think that I should oppose carpet-bombing out of self-interest.

Your argument just doesn't work.
 
Wow, did you misunderstand Smith.

He was not writing about "seemingly altruistic acts actually have hidden self-interest motives behind them." He was writing about seemingly self-interested acts which actually have effects beneficial to society as a whole.

Not the same thing at all.

Yes, Wowbagger, the theory of altruism reducing to some form of perceived self-interest is called psychological egoism.

I really think that the concept is stretched to become untenable in many cases of obvious altruism and eventually becomes an unfalsifiable hypothesis.

"What if I give blood?"
"Ah! That's because you hope to get blood in return."

"What if I give money to the mentally handicapped?"
"Ah! That is because one day you will be weak and require assistance."

"What if I jump on a handgrenade to save others?"
"Ah! That is because you hope for posthumous glory!"

"What if I give all my money to charity except the bare minimum and undergo obvious sacrifices to help others?"
"Ah! That is because you selfishly wish to glow with smug and supercilious self-satisfaction!"

In other words, someone can always dream up, on an ad hoc basis, the basest and most selfish reason why someone might do something apparently altruistic, but it will not prove that they are ultimately being selfish, especially when far clearer examples of selfishness exist.
 
