David Hume vs. Sam Harris

So no serious reply then?

Everyone agrees science can help us get what we value, but not determine what we should value (which Harris claims it can, though sometimes he seems not to).

And no, we did not evolve to be utilitarians. Most of us consider the process, and not just the outcome, to be important. Suppose someone drives a car and runs over another person, killing him. The courts will sentence the driver more harshly if he intentionally tried to kill the victim than if he was merely irresponsible and did so by accident. Yet the outcome in both cases is the same: one man died.



While I agree with almost all of what you wrote above, I think the sentence that I bolded is wrong. We did evolve the machinery to be utilitarians (or rather, it evolved in us) -- that's why we are utilitarians. We also evolved the machinery to be deontologists (why we are deontologists), and the machinery necessary for virtue/character ethics. And the language/cognitive ability that allows us to negotiate these various ways of thinking within a larger language/social community.

I think one of the mistakes we often make is to assume that morality/ethics is one thing. I don't think it is. Much like definitions of words, I think our moral senses share a family resemblance but we are not restricted to one way of thinking.
 
Sorry for the delay.

After putting more thought into this, I think I might have yet a third manner in which to address what I called The First Mile of this debate.

Recall that the Argument Against Science in the First Mile generally went like this:
"You must make a value judgment to use science, and that value judgment is not, itself, science. It is a value judgment."

My first approach was to accept this as a valid point and move on. Though, some of you say this would no longer be defending Sam Harris' views. The second approach was to try to build a case for science in the First Mile: one that I might have the imagination for, but that would be difficult for many of you to swallow.

Although, I should add that it is possible Sam Harris might not be delving into the First Mile with science, himself. As I re-read bits of his book, I run into quotes such as this, on page 37:

Science cannot tell us why, scientifically, we should value health. But once we admit that health is a proper concern of medicine, we can then study and promote it through science.

But, assuming the First Mile is still an issue with other parts of his views, here is...

Approach to First Mile Argument #3: Use a Different Word for It, than 'Science'

Perhaps the heart of the issue most of you are taking up is Harris' decision to define the term Science very broadly. So broadly as to encompass practically anything that, in his view, "works". If we re-orient our wording a little, perhaps we can preserve the essential aspects of Harris' meaning while avoiding issues with Hume's whole is/ought argument.

There are a few sub-approaches I thought of for doing this:

Sub-Approach #3A: Use an Existing Word as a substitute for 'science', whenever the word 'science' interferes with Hume

A friend of mine suggested the term "Critical Thinking" might be suitable. The value of Critical Thinking compels us to use science, not just for technology, engineering, medical research, etc., but also (for a lot of us) for moral decision making.

Another possible word would be "Humanism". Humanism values the health and well-being of conscious creatures (namely humans, though other species tend to be respected, as well). And, Humanism values the tools of science in making moral decisions. Anyone who attends a bio-ethics panel, at a Humanist conference, should no doubt find this is true. (I am not sure if Harris, himself, would like using this word, but who cares about that?)

Sub-Approach #3B: Make Up A New Word

I have not put much thought into what the word should be. But, for now, I shall use "Morlience". What is morlience? It is the value of using science to make moral decisions. Morlience, itself, is not science. It is a value. So, it doesn't break Hume's is/ought distinction. And, it doesn't break the fact/value distinction, either. (assuming such distinctions have value)

To use the framework of my previous posts, morlience could be defined as "the value of using science to cover the Middle Distance and Last Mile of moral decision making." Morlience, itself, covers the First Mile.

Importantly: Morlience also implies that we should value the health and well-being of conscious creatures, when using science to make moral decisions.

All of the essential ingredients in Sam Harris' arguments would be preserved, whether I spelled them out or not, using this word. We are simply using a new word to do it, so that 'Science' can go back to being strictly an "Is" thing.

If you think the word 'morlience' is kinda stupid (and, I probably won't argue with you), perhaps we can come up with something better?

Sub-Approach #3C: Dani-Style Sub-Definitions.

Most of you are going to HATE this. But, I throw it out there, anyway, for the sake of completeness:

We could define the word Science in multiple ways:

Science1: A systematic enterprise for building and organizing knowledge in the form of testable explanations and predictions about the universe. (Everything it comes up with is a fact or an "is").

Science2: The value of using science to make moral decisions. (Everything it comes up with is a value, or an "ought")

(My own, favorite definition of science is: "The discipline of generating new empirical knowledge". But, that is not useful in this discussion, because it does not make the is/ought distinction clear. That is why I went with the Wikipedia definition, instead, for this thread.)

Perhaps, after disliking this third Sub-Approach so very much, you will begin to think one of the other two options is more acceptable.

So, yeah, this is basically an exercise in semantics. So sue me.
 
I would respond to more of you, but I would just be repeating myself much too often, if I did. Hopefully, most of you will read everything I wrote for everyone else, not just responses to your own posts.

A question for the Harrisites: What if science determines that a society attains maximal well-being when it is mono-racial?
A good question! But, here's the thing:

If (hypothetically speaking) it turns out that mono-racial societies truly are more successful at achieving well-being, it would be difficult to argue that they shouldn't be mono-racial. If (again, hypothetically), the Nazis were right, and eliminating Jews really could bring about a utopia for superior beings, it would then be more difficult to claim that what they did was evil. BUT...

It turns out that, in reality, it doesn't work out that way. Relative fitness matters more in evolution than absolute fitness. Folks like Jared Diamond have demonstrated this is true for races as well as species. The Nazis, for example, got that wrong: Jews are perfectly capable of being good, productive members of society, etc.

This looks like a reprehensible position to take, to us, because we already know how and why eugenics was not going to work, from a scientific perspective, and find it hard to imagine a world that is any different being any better.

But, I argue that if nature were different -- if the world were sooooo different that eugenics would actually be an unqualified success at improving overall well-being for everyone -- the folks in that twisted, alternative universe would think that we were the reprehensible ones.

Maximize pleasure? OK, hook everyone up to a pleasure-centre stimulator...

There is no single way to define 'good health'. If you measured only a person's mental capacity, you could overlook an unhealthy body crumbling around that mind. If you measured only physical health, you could overlook a mind that is turning to delusional mush.

For similar reasons, there is no one way to define 'well-being'. If we only measured wealth, we would end up amassing fortunes by generating great suffering and misery. If we only measured happiness, we could continuously stimulate our pleasure centers, ignoring other responsibilities required to maintain our bodies.

Once we decide to act on improving the well-being of a society, we can take a multi-pronged approach that addresses all of the reasonable ways one can be well.

Science is not making moral decisions in this case. It's the tool we use to help us obtain the facts once we've made the moral decision, which is the first step you described.
I am talking about cases where the decision has NOT been made, yet. Science can answer moral questions, or at the very least, help us answer them.

It would be unscientific to make the decision first, and justify it later. The idea is to use science to discover what the best answer is likely to be, before we reach any final conclusions. (Though, we are allowed to make predictions.)

Other than this, yeah, big freakin' deal. I don't think most people are claiming that this is a big deal. Just that it's wrong, plain wrong, to claim that the first step is scientifically arrived at.
Why would it be wrong? What if David Hume is outdated?

Some of the science of consciousness seems to be indicating that all "oughts" start out as "ises" in the brain, anyway. According to that line of research: We just forget the original "is" state, and classify it as an "ought" in a later process. Perhaps it is too early to be confident about these sorts of discoveries. But, I suspect we will see more of these types of arguments, in the future, that erode the very difference between "is" and "ought" in various ways. I think it is less likely that these distinctions will become any stronger in the future.

Indeed, I think the biggest worry of Sam Harris and those who take his side in this debate is that they are concerned with how the Taliban can be condemned if there are no moral facts. They seem to want to say something like "Sorry dudes, but it's a scientific fact that forcing people to wear burkas is wrong". It is perfectly possible to condemn the morality of the Taliban, but it is not possible to do so with the help of moral facts, because there are no such facts.
The Taliban is detrimental to the well-being of the people under its power. They live in a society unable to grow economically and sustain wealth, unable to achieve new heights of health and happiness, unable to express themselves fully. Small businesses and innovation do not thrive in that environment. The list goes on and on. Every way you can reasonably measure well-being is hurt. And, there is no reason for it. Lift people out of that situation, and they can thrive better. The Taliban hurts even its own leaders in these ways. Yes, the upper ranks of the Taliban have lots of power, and perhaps lots of money. But, they do not have the same opportunities as even a middle-class American to live as good and as full a life.

Those are facts that can be empirically deduced, in various ways.

Yes, you CAN condemn the Taliban WITHOUT science. You do NOT need science to see them as rotten.

But, there are also liberal extremists who would defend such existence by saying things like "Well, maybe that is better for them! Who are we to judge?" Science can refute such claims.

And I must say that I'm very, very disappointed that Dawkins and Shermer buy into Harris' idea. They if anyone ought to see just how flawed it is.
I wonder if it takes a certain type of mentality to see this through. Shermer promoted provisional lawmaking in several of his early books. Dawkins is very big on breaking the tyranny of discontinuous minds.

My early hypothesis is that those who think in terms of gray-areas and provisional everything are more likely to see the sense in Sam's arguments. But, those who think more in terms of black-or-white, and definitive versions of everything are less likely to get it through their skulls. But, I could be wrong about that.

And, whatever the reason, this is certainly a non-intuitive concept to get across. I will grant you that!

-It seems to me that both sides agree that science can't decree that people should do things that they do not (at any level) want to do. Am I wrong in saying this?
Assuming the person is sane and responsible. (An insane person could, for example, decide not to ever breathe, even in clean air. Science might say he really ought to breathe anyway.)

-It seems to me that both parties agree that science CAN tell people what they should do given a set of preferences. Am I wrong about this?
I can agree with this. But, those preferences could also, themselves, be potentially scientifically demonstrated, I suppose.

-It seems to me that both sides disagree on what it takes for science to be considered able to "answer a moral question", and that this is the main source of disagreement. That one side defines moral facts in such a way that they cannot exist, and then concludes that science cannot provide them, while the other concludes that if moral facts are defined in a useful way, then science can provide them. Am I mistaken, here?
That seems like a reasonable summary of where most of us disagree. Though, I think some of us disagree in different ways than that.

I think the most useful question in this debate is whether discussions about what morality is can be replaced by using neuroscience to directly measure people's preferences and see what is considered moral (seeing how morality necessarily comes from the preferences/ideals of thinking beings. No one argues anything for another source of morality as far as I can tell). I would say that the answer to this is yes in principle...
I think other lines of investigation, other than neuroscience might be useful when making conclusions. But, on the whole, I agree with this.

Either way, I suggest that the most important thing right now is that we narrow down where people really disagree with one another.
Good idea. My earlier semantic post attempts to do away with the sillier aspects of what we might disagree on. Hopefully, some real meat will emerge.

Let's get this straight at the outset: this argument is only about what you call the "First Mile".
This assumes Sam Harris was really building a First Mile case. After re-reading parts of his book, it does not seem that way. Though, I will re-read the whole thing later in the month, when I finish the book I am currently into.

Why should one care?
It's the way of the future, dude!

From what I've read of Harris' book, his argument is basically "utilitarianism is right, because it's obviously right".
I don't think well-being is strictly utilitarianism. Utilitarianism could be one aspect of well-being, but not the sole factor.

If I recall correctly, Sam Harris argues that there is a lack of satisfaction inherent in strictly sticking to what is utilitarian. I might also add that arts and leisure could be components of well-being not inherently covered by utilitarianism.

One thing I am glad of though, as someone who cares about animals, is that Harris and his fans will surely be moving toward veganism based on their "well being of conscious creatures" moral principle.
Well-being would balance vegetarianism against our nutritional needs, though vegetarianism is still a viable option for someone to choose.

Also, even if it turns out that other creatures have consciousness, they might not have as much consciousness, nor the same type of consciousness, as humans. The decision of how much of them to eat might be weighed against that.

Their opinion is not as important as mine, in my conception of well-being.
For this to work as a science, your own conception must be empirically weighed against others. Can you demonstrate, reliably to independent parties, that getting the most fish, specifically, is the most effective way to achieve health and well-being for most folks in your society?

Good character is a hallmark of well-being defined how? Some argue that good character is shown through hardship when pleasure is denied.
Yeah, I can see that. (Though there might be a subtle difference between good-character-out-of-necessity in a situation of hardship, versus the good-character-out-of-expectation that emerges from a well-off society, it probably does not mean much.)

I do not think "good character", alone, can cut it as a moral objective. It can, as I implied, emerge as one part of well-being. But, it does not cover the health, economic needs, environmental sustainability, happiness, etc., that the word "well-being" brings with it.

The issue we are trying to point out is that it is your first mile that is the issue. Science can intercede at all the other points, and I don't think that is controversial at all.
Lucky for you, I am not going to bother debating the First Mile much, anymore.

Would covering the Middle Distance and Last Mile still contradict Hume, in some way? That is what I would like to know. If not, then the debate has ended. If so, then perhaps Hume is outdated.

Regarding the self-evident nature of well-being -- I was arguing that Sam Harris was wrong in calling it self-evident. He thinks that it is; I disagree. There are many things that different people think are self-evident. When others argue with them, as you point out, it should be obvious that what they think is self-evident simply isn't.
I agree that it might not be self-evident. But, it is still defendable.

I will have to re-read the book to see if Harris truly thinks it must be self-evident, because that might not be accurate. If he thought it was self-evident, why spend so much of the chapters harping on what a good life is, versus a bad one?

The problem with well-being as the ultimate goal is that it sometimes becomes impossible to decide what to do when one person's well-being comes into conflict with another's.
This is going to be an issue, no doubt. But, not an insurmountable one. We can use science to deduce who is right or wrong, or where compromises can be formed, if one is appropriate. We can develop answers. We have the methodology.

Do you stop a rapist who might feel that he is maximizing his well-being in the act? I think we would all say, yes, because he is hurting someone else. How do we decide between the two, though?
The rapist can also be said to be hurting himself, in the long run, even if he does not think that way, or realize it, at the time of the attack. Rape might "work" under certain conditions where society had not yet developed certain expectations of freedoms and rights for its citizens. (And, indeed, for much of human history rape was considered acceptable.)

But, the circumstances change when people expect a certain level of control over their bodies. Then, rape ceases to benefit anyone in the long run, not even the rapists themselves. Not even if they feel it is better for their well-being, at the time.

We can show these trends through history, scientifically. And, after such, we would most likely conclude, scientifically, that the rapist should be stopped.

The harder case, if we want to get into the problems with utilitarianism, is scapegoating -- one can easily maximize the happiness of many people by sacrificing one person in certain situations (the classical case is a pure thought experiment, but you can think of one of those old Star Trek episodes). But that just doesn't seem fair. That is the problem with utilitarianism. It doesn't answer all ethical questions for us. We have more than one way to answer our moral dilemmas.
Historically, scapegoating has NOT been shown to be effective or productive in actually solving problems. It only provides a superficial, temporary sense of relief. (I doubt scapegoating would even count as utilitarian, though that might be a different debate.)

The problems we face, as a society, will often need innovative approaches to solve, which takes more science. Not easy answers or quick, inefficient fixes.

Harris assumes utilitarianism as the ultimate morality, and then proceeds to tell us how science can help us achieve utilitarian goals.
As I stated in an earlier reply: I do not think utilitarianism really covers it. It would only be one facet of well-being, not the whole thing.

What is well-being? If it is happiness, then it seems to encourage us to all be blissfully ignorant, i.e. sated fools. If it encompasses something more than mere happiness, then I would love to hear those other things as long as they are not subjective and unquantifiable...

As I stated at the top of this post, there need to be multiple ways to define 'well-being'.

But, there are some objective, quantifiable, deeper ways we can do it, aside from the simple, obvious ones like wealth and longevity. One could also measure such things as environmental sustainability, disaster response effectiveness, violent crime rates, ability to sustain and generate economic growth, education prospects (ability to generate and spread knowledge), opportunities for innovative enterprises to flourish, effectiveness of healthcare systems, etc.
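To make the idea of combining multiple quantifiable measures a little more concrete, here is a minimal toy sketch in Python. Every metric name, score, and weight below is hypothetical, invented purely for illustration; this is not a measure Harris (or anyone else) has actually proposed.

```python
# Toy sketch: a composite "well-being index" built from several measurable
# dimensions. All metric names, scores, and weights are hypothetical.

def wellbeing_index(metrics, weights):
    """Weighted average of normalized (0-1) metric scores."""
    total_weight = sum(weights[name] for name in metrics)
    return sum(metrics[name] * weights[name] for name in metrics) / total_weight

# Hypothetical normalized scores (0 = worst observed, 1 = best observed)
society = {
    "life_expectancy": 0.8,
    "education_access": 0.7,
    "low_violent_crime": 0.9,
    "economic_growth": 0.6,
    "environmental_sustainability": 0.5,
}

# Equal weighting is itself a value judgment, not an empirical finding.
weights = {name: 1.0 for name in society}

print(round(wellbeing_index(society, weights), 2))  # prints 0.7
```

Note that choosing which metrics to include, and how to weight them, is itself a value judgment rather than an empirical result, which is the First Mile problem showing up all over again.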

People who say that science is all that is needed for ethical decisions seem to regard the philosophical, fundamental underpinnings of ethics as ultimately decided and not very productive.
Those philosophical underpinnings were useful for much of human history. And, to a certain extent, they might still be useful. But, over time, they will become more and more quaint and outdated, as science takes over more and more of the process of thinking about ethics.

Perhaps even more disturbing is that some studies, such as the one reported by the Economist here, suggest that those who can apply utilitarian thinking most consistently appear to score highly on psychopath tests.
Blatant fallacy.

First of all, as I argued above, utilitarianism is only one aspect of well-being. Not the whole thing.

Second of all, just because we might promote utilitarianism does not automatically mean we are raising psychopaths.

If remorse is valued as part of building a good society, psychopaths would be picked out and dealt with.

We know a lot, today, about how psychopaths operate, and what is going on in their brains neurologically. We also know something about how they emerge evolutionarily, as a small percentage of the population. We know so much that we are in the early stages of designing treatments for the disorders associated with them. (Only a few years ago, it was considered completely untreatable.)

What evidence or data can resolve the issue of which one is correct?

Sophronius answered this fairly well:

The correct one, if by "correct" we mean efficacious, I imagine to be consequentialism. I don't know if any studies have been done in this area, but I strongly anticipate that consequentialism could be shown to be more effective at increasing wealth and welfare in a country.

Deontology is more like old-fashioned religious dogma. It implies we value strict adherence to rules, and does not imply that those rules should change. We know that the world changes over time, so the rules must change over time. We also know that no set of rules can cover all situations, even in the short term. So, that goes out the window as a possible gold standard.

Virtue ethics seems naïve, unrealistic, and inherently unstable. Any society built on straight virtue would likely find itself evolving into a consequentialist or deontologist society before long, depending on leadership.

We do things because we ultimately want to, not because they are good in a way that is separate from our desires. There is no "intrinsic good". This doesn't mean that science can't tell us what we should do (given what we want), just that it cannot provide nonsensical "just so" morality. It seems quite impossible to me for science to give 100% objective morality. But it seems silly to me to insinuate that this means that ethics is purely relative and all ethical views are equally good.
In short, you are yielding the First Mile to something other than science.

Fine. Whatever.

The following answers are available...
None of those actually applied. See above.

If evolution has given humanity a tendency toward certain cooperative behaviors, that can provide a basis for morality that isn't objective but isn't relative, either.
Nice of you to join us!

I think this covers a lot of what some of us are trying to say.
 
You may personally think it is the least important thing to debate over, but if we are still discussing Sam Harris' ideas it isn't.
I think the more important parts of Sam Harris' ideas are in the realm of health and well-being, provisional models of rules, recognition that morality exists in a landscape (there can be more than one peak of correct answers, as well as valleys of bad ones), etc.

Getting through the First Mile is the least important part of what he is saying, assuming he is actually saying that. There seems to be evidence emerging that he might not be a First Mile Science guy, after all.

Even more problematic is that Harris wants his scientific morality to prove that even the more moderate religious moralities are wrong. He's especially keen on trying to dress up his anti-Islam bigotry with a "scientific" veneer.
I think this has a lot more to do with living conditions. We should take issue with bad regimes no matter what religion, (or lack thereof) happens to be in charge.

How about this for a compelling reason: on many moral dilemmas science does not have to say anything. Science simply doesn't enter into it.
But, you are wrong. Science, for example, is changing how we view eye-witness testimony, statistics, genetic evidence, lie-detection systems, etc. in the courtroom. As a practical matter, science seems to be able to say a lot about morality, when it matters most: In our justice system.

I think Harris' ideas are pseudo-science.
Example?

Does science imply that? I don't think it does.
You never heard of the Scientific Method?

Or, to a lesser degree, the application of systematics within scientific disciplines?

No, it implies that there is no absolute point of view that determines whether decisions are equally or unequally valid. It implies that from every point of view some decisions will appear equally valid and others not, and that will be different from every point of view.
Science goes beyond "point of view". The findings of science can be verified independently of anyone's point of view. The answers developed by science would be empirical in nature.

Yes, there can be more than one right answer, and the best for one group might be different than for another. But, that does not necessarily imply relativism. That is simply the landscape of morality at play.

No, we don't.
There are several well-respected works on this. The amount of evidence demonstrating the role genes have played in the moral frameworks we build is massive.

The best book was perhaps one of the first: The Selfish Gene by Richard Dawkins, but make sure you read the Second Edition or later. Other books include, but are not limited to:
The Blank Slate by Steven Pinker; Breaking the Spell by Daniel Dennett (mostly about religion, but it goes into the evolution of altruism and such); most of Michael Shermer's books; etc.

I have yet to see a legitimate science book that refutes the influence of our evolutionary heritage in moral decision making. Though, of course, environmental factors also play a role.


No, it implies the exact opposite. Moral systems emerge naturally, but different moral systems develop in different circumstances. What seems wrong in one environment may seem right in another, and vice versa.
Yes, this is correct. But, it is not relativism, because what makes a moral system correct or not depends on empirical facts about genes and environment. Moral systems are not correct merely as a matter of viewpoint.

Good to see that you disagree with one of the most seriously anti-scientific concepts in Harris' philosophy. But if the idea that "morality is about the well-being of conscious creatures" is not self-evident, what is left of his philosophy? It seems you disagree with him on the fundamental underpinning of his ideas.
I still agree with the parts that actually make a difference: health and well-being, provisional models of rules, empirical investigation and experimentation of those rules, recognition that morality exists in the landscape, etc.

If science is used in the Last Mile, I do not think it makes much of a difference whether it was there for the First Mile.

If you are taking the teleological out of morality, then you are taking the morality out of morality.
It might seem that way, if your ideas about morality are outdated. Last I heard, there was no Definitive Authority decreeing that morality must have a teleological component. Perhaps most churches would disagree, but they tend to be outdated on a lot of things.

Sam Harris wants his scientific "moral landscape" to be prescriptive and not just descriptive; and that's where he tries to cram teleological thinking into science.
Not if the "prescription" is "no teleological stuff". I suspect you're still not getting something, here. But, I have yet to figure out what it is.

None of this answers the question: "Is it right to convince people that the world is about to end?"
I am assuming the person who reads those facts is sane. Perhaps to an insane person it would not. But, that is hardly an argument in your favor.

From the older thread:

Take these two phrases:


"Science has no evidence that the world is going to end any time soon."
And
"We ought not go around telling people that the world is going to end soon."

I will argue that the difference between those two statements is largely of relevance only to English professors.

To a sane brain, the two are going to be acted on equivalently.

No, because that is not the topic we are discussing. What you need to explain is: "Can science determine that it is immoral."
I have shown you that it is detrimental to well-being in every reasonable way one can measure it.

We can also see that well-being is a reasonable moral value because it is an extension of our genes' compulsion to reproduce and thrive, but as adapted for conscious creatures.

If they were moral people, their ideas on how to make moral decisions owe themselves to the foundations of their morality, independent of science.
You misunderstand me. As a practical matter, pragmatic people would want their ideas to be as realistic and reliable as possible. And so a lot of what is judged to be practical is based more on the findings of science than just about anything else.

No, it wasn't settled. And fls didn't present anything that indicates that it was science that solved the problem.
Again, we are assuming the reader is sane.

It is not good enough to defend Sam Harris' claims.
I am not Sam Harris. Nor do I claim to be a defender of his claims. Granted, much of what I think about morality has been shaped by his book (though not solely by his book. Folks like Dawkins and Pinker were also big influences). But, that does not mean I must agree with him on everything. I did not agree as much with what he wrote in previous books, in fact.

And, you might be wrong about Sam Harris, anyway. I think his claims might be different from what you are claiming. I will have to re-read his whole book to find out where the discrepancies are. But, as I said in some other posts: He might not be building a First Mile case, anyway.

In short, the problem is an excess of moral realism and a lack of moral relativism.
Moral realism does not imply actual reality. It also encompasses delusions of what reality is. Science has a way of breaking through those delusions, to help us build a more reliable view of reality.
 
Aside: let me just say that although I don't agree with 100% of what you say, I admire your dedication to this discussion. How many people are you debating with at once? Pretty sure I'd go mad trying to keep up with all that.

On topic:

A good question! But, here's the thing:

If (hypothetically speaking) it turns out that mono-racial societies truly are more successful at achieving well-being, it would be difficult to argue that they shouldn't be mono-racial. If (again, hypothetically), the Nazis were right, and eliminating Jews really could bring about a utopia for superior beings, it would then be more difficult to claim that what they did was evil. BUT...

It turns out that, in reality, it doesn't work out that way.

That's true, but I feel you're taking the wrong approach to answering such questions. Of course the answer to questions along the lines of "would you do something evil if it were good instead" is "yes, so?". But there is a reason that there are so many valid-seeming objections to welfare utilitarianism. Of course that's only for unrealistic fringe cases, but still. I think "maximise total amount of welfare in the universe" fails in extreme cases because it doesn't take into account that it is people which we ultimately care about. Extreme examples such as "kill all humans and replace with orgasmium" can cause problems in this case. Likewise, "maximum welfare for the greatest number of people" seems to imply that it is morally right to produce as many children as possible even at the expense of the happiness of existing people, and so on. I think that the ethical system of Desirism deals with all of these objections in a very simple and satisfactory way.


Assuming the person is sane and responsible. (An insane person could, for example, decide not to ever breathe, even when in clean air. Science might say they probably really ought to breathe anyway.)

Ah, no. I specifically said "things that they do not (at any level) want to do" to include any and all desires, not just whimsical wants. Science cannot say that deciding not to breathe is the incorrect course of action IF AND ONLY IF the person ultimately desires not to breathe. I.e. if after measuring and weighing all of a person's preferences the result is that they are better off dead, even if that is not a conscious decision, science cannot say that that person should keep on breathing. (Unless a different perspective is taken, i.e. from the perspective of people who want him alive for whatever reason.)

I can agree with this. But, those preferences could also, themselves, be potentially scientifically demonstrated, I suppose.

If by demonstrated you mean measured, then yes. See the example of neuroscience.

I think other lines of investigation, other than neuroscience might be useful when making conclusions. But, on the whole, I agree with this.

I think the example of neuroscience is critical because it removes that one last bit of nature that people like to view as magical: the human mind. If even the human mind and everything that goes on in it can be measured, determined and controlled, then even the "first mile" can be determined (as in measured, not produced) objectively by science. There is no longer any need for philosophy on the subject at that point.


We do things because we ultimately want to, not because they are good in a way that is separate from our desires. There is no "intrinsic good". This doesn't mean that science can't tell us what we should do (given what we want), just that it cannot provide nonsensical "just so" morality. It seems quite impossible to me for science to give 100% objective morality. But it seems silly to me to insinuate that this means that ethics is purely relative and all ethical views are equally good.

In short, you are yielding the First Mile to something other than science.

Fine. Whatever.

What? No, you don't get it. Let me give you an analogy with something other than morality, to show how that doesn't follow:

Person 1: Science cannot answer questions about the human mind. It is purely in the realm of philosophy.
Person 2: Yes it can! We can measure thoughts, form theories around them, make falsifiable predictions with this knowledge and test them... what can't science do?
Person 1: Well Science cannot produce the thoughts themselves. What thoughts someone has varies from person to person. Therefore it is subjective. Therefore not in the realm of science, which is objective. Therefore science cannot answer questions about the human mind.
Person 2: Uh, science cannot produce thoughts, no. That is not the purpose of science. Science has to observe reality and then provide an understanding of it. You can say that minds are subjective in the sense that they vary from person to person, but questions about the human mind are 100% objective and can be entirely determined by science.
Wowbagger: You're yielding the first mile to something other than science! You're with the enemy!

No, science cannot just pull morality out of thin air. Science has to look at our minds to determine what is considered moral, and to find out which moral views are more or less in accordance with reality. The same holds for any other field of science. You can say that this means I am yielding the first mile to reality... but so what?


Originally Posted by Earthborn
None of this answers the question: "Is it right to convince people that the world is about to end?"

If I may, you didn't address this question to me, but I think/hope I can answer it in a fairly clear and comprehensive manner:

1) A large part of the apparent subjectivity of this question lies in the word "right". Different answers come out when using different definitions of "right". You must be careful that you do not take this to mean that the underlying reality is subjective. I may define hot as cold and cold as hot, but a candle will still burn my fingers.
2) One way of defining "right" is "in accordance with a certain set of moral views". In this case the issue is subjective: it depends on which morality is used. It could be "wrong" from our perspective, but "right" from theirs.
3) I think the only way this question can be handled objectively is if you ask "can it be scientifically determined whether people who convince others that the world is about to end are acting erroneously?" Erroneously in this case means acting counter to their own ultimate desires: That is the only kind of "objectively wrong" that makes sense, seeing how there is no such thing as intrinsic right or wrong, and people necessarily do things because directly or indirectly they desire to do so. Someone who means to do A and does B instead can objectively be said to be in error.

Under this view of "objectively wrong" I will answer yes, but not in all cases. If someone has altruistic intentions, if they truly want to help their fellow man by informing them of their impending doom, then they can be objectively said to be in error since the result is counter to their intentions. Rather than helping, they are harming: They are wrong even by their own standards. If a person has no altruistic intentions whatsoever -i.e. they are actually a robot programmed to inflict maximum harm- then they cannot be said to be wrong in any meaningful sense. Virtually everyone however has both: selfish as well as altruistic desires. In most cases, harm is not done because people are selfish and evil, but because they are shortsighted and/or rationalize their behaviour. If a person would regret their actions if they were fully aware of the consequences, then they can still be said to objectively be in error by their own standards. The kind of person who wants to implement Sharia law can be said to be objectively wrong because they think it will lead to greater well-being, but it doesn't. People do care about the well-being of others - it's in our human nature - which is why we can generally agree on the morality of things like theft and murder.
 
Wowbagger said:
Some of the science of consciousness seems to be indicating that all "oughts" start out as "ises" in the brain, anyway. According to that line of research: We just forget the original "is" state, and classify it as an "ought" in a later process. Perhaps it is too early to be confident about these sorts of discoveries. But, I suspect we will see more of these types of arguments, in the future, that erode the very difference between "is" and "ought" in various ways. I think it is less likely that these distinctions will become any stronger in the future.

Of course a feeling of ought or a belief of ought comes from ises in the brain. However, discovering that Joe's brain state leads him to feel that we ought to nuke Canada does not do anything to "tell us what our morals should be" as Harris puts it.

This assumes Sam Harris was really building a First Mile case. After re-reading parts of his book, it does not seem that way. Though, I will re-read the whole thing later in the month, when I finish the book I am currently reading.

What, in your opinion, is the value of his book, assuming that it does not make a first mile case?

That we should value well-being? We already do. That we should scientifically study how well-being might be increased? We already do.

I wrote some thoughts on the early parts of the book in a different thread:

"I've read the introduction and first chapter of Sam Harris' The Moral Landscape. The book, so far, is much better than the TED talk in the sense that I had no clue what he was on about in his talk. That's not to say the book is good, though.

The book argues that maximizing "the well-being of conscious creatures" must be the fundamental goal of morality. That is, actions that positively impact well-being are (by definition essentially) more moral than actions that negatively impact well-being. Harris argues that this is a valid starting point, because to not value well-being is nonsensical. I'm inclined to agree with this last point -- at a fundamental level, meeting value necessarily increases well-being in some sense. The glaring problem is that Harris does not make the jump from the fact that all people value well-being to the conclusion that people should value other people's (or animals) well-being, nor does he seem to acknowledge that a jump is there to be made.

Let me clarify that last point. Yes, we could say that well-being is necessarily valuable to individuals. But to each individual, the only thing that is fundamentally valuable is their own well-being. Since individuals can feel empathy, the well-being of others can certainly be valuable to us. But it cannot be said to be necessarily or fundamentally valuable to us. It is easy to think of situations in which the well-being of others either has no value to us or in which it has value, but that value is out-weighed by some other value.

The relevance of science, claims Sam Harris, is that it can help us figure out the impact on overall well-being that certain actions, decisions or practices have. This can help us to answer moral questions assuming that we accept Harris' definition of moral value. For example (this is my example, not Harris'), if we could demonstrate that homosexuality does not decrease human well-being, and that discrimination against homosexuals does decrease overall well-being, we could prove that homosexuality is not immoral and that intolerance of it is immoral.

The many counter-intuitive implications of utilitarianism are well-known. One of the most famous "moral dilemmas" is whether you should push one person in front of a train in order to save five people further down the track, where most people tend to say no. I think for most people, their sense of morality takes the utilitarian ideal into account, but certainly also takes non-utilitarian ideals into account as well.

Despite Sam Harris' sometimes embarrassing use of logic and incredulity toward any disagreement, I do agree with him that science seems to me to be undervalued when it comes to questions of how to improve society or whether to engage in certain practices. When faced with a question such as state health care policy or spanking our children people seem to gravitate toward ideological or intuitive reasons rather than scientifically tractable reasons that could be explored with evidence. However, there are certainly already plenty of scientists researching "well-being" or things relevant to it. Psychologists, sociologists, pharmacists, economists, policy researchers... Sam Harris is not (as far as I've gotten in the book) advocating anything be done that isn't already being done by science, he would merely have us re-define morality so that we could insert the label "moral" into the conclusions of some of these scientific studies."

If I recall correctly, Sam Harris argues that there is a lack of satisfaction inherent in strictly sticking to what is utilitarian. I might also add that arts and leisure could be components of well-being not inherently covered by utilitarianism.

Harris' book seems to argue for utilitarianism. Again, not that there is anything wrong with that, except that it is argued for poorly. Some quotes from the book:

"Once we see that a concern for well-being (defined as deeply and as inclusively as possible) is the only intelligible basis for morality and values, we will see that there must be a science of morality, whether or not we ever succeed in developing it: because the well-being of conscious creatures depends upon how the universe is, altogether. Given that changes in the physical universe and in our experience of it can be understood, science should increasingly enable us to answer specific moral questions. For instance, would it be better to spend our next billion dollars eradicating racism or malaria? Which is generally more harmful to our personal relationships, “white” lies or gossip? Such questions may seem impossible to get a hold of at this moment, but they may not stay that way forever. As we come to understand how human beings can best collaborate and thrive in this world, science can help us find a path leading away from the lowest depths of misery and toward the heights of happiness for the greatest number of people. Of course, there will be practical impediments to evaluating the consequences of certain actions, and different paths through life may be morally equivalent (i.e., there may be many peaks on the moral landscape), but I am arguing that there are no obstacles, in principle, to our speaking about moral truth."

"For instance, to say that we ought to treat children with kindness seems identical to saying that everyone will tend to be better off if we do."

"Do pigs suffer more than cows do when being led to slaughter? Would humanity suffer more or less, on balance, if the United States unilaterally gave up all its nuclear weapons? Questions like these are very difficult to answer. But this does not mean that they don’t have answers."

"The fact that it could be difficult or impossible to know exactly how to maximize human well-being does not mean that there are no right or wrong ways to do this—nor does it mean that we cannot exclude certain answers as obviously bad."

Well-being would balance our nutritional needs against vegetarianism, though vegetarianism is still a viable option for someone to choose.

Also, if it turns out that other creatures have a consciousness, they might not have as much of a consciousness nor the same type of consciousness as humans. The decision of how much to eat of them might be weighed against that.

Indeed. However, in my opinion it's not a difficult question at all. There are no nutritional needs that require meat in order to be met and animals have brain activity, nervous systems and behavior that are all indicative of consciousness and capacity for suffering. Furthermore, the amount of corn, soy and whatever else it takes to feed a cow from infancy to slaughter would feed far more people than the amount of meat you get at the end would feed.

I wonder if Harris and supporters will use this "maximizing well-being" (call it what you want, I call it utilitarianism) moral philosophy to change their lives in any difficult ways? Because from his book and TED talk it seemed he was mainly interested in going after easy targets that a secular audience would already agree with. The amount he talks about the Taliban... hell, even the vast majority of Afghans hated the Taliban when they were in power.

I respect Singer, because he asked difficult moral questions, came up with some counter-intuitive answers and significantly changed his own behavior in accordance with his conclusions.

I don't want to come off as completely against Harris though. Like I said, I do think science is often undervalued when it comes to things like education (see the "jigsaw"), behavioral economics (see the book "Nudge"), and raising children.
 
angrysoba said:
Perhaps even more disturbing is that some studies, such as the one reported by the Economist here, suggest that those who can apply utilitarian thinking most consistently appear to score highly in psychopath tests.

Blatant fallacy.

First of all, as I argued above, utilitarianism is only one aspect of well-being. Not the whole thing.

Second of all, just because we might promote utilitarianism does not automatically mean we are raising psychopaths.

If remorse is valued as part of building a good society, psychopaths would be picked out and dealt with.

We know a lot, today, about how psychopaths operate, and what is going on in their brains neurologically. We also know something about how they emerge evolutionarily, as a small percentage of the population. We know so much that we are in the early stages of designing treatments for the disorders associated with them. (Only a few years ago, it was considered completely untreatable.)

I perhaps should have phrased that initially with, "If these results are correct..." or something along those lines because it is merely an observation and that being the case I don't see how it can be a fallacy.

My point here is that it would indeed be ironic if psychopaths tended to be the ones most capable of applying strict utilitarianism. I do understand that Sam Harris is still undecided about certain ethical norms and even wonders aloud in the book why it is that we as people generally have no problem pushing the button on the dashboard to divert the trolley and kill one man instead of five but find that we can't push the fat man off the bridge.

I understand that you have said "utilitarianism is only one aspect of well-being" repeatedly and I also understand that Sam Harris is not advocating Benthamite "strict utilitarianism" (which is sometimes referred to as "swine utilitarianism"), though your own mentioning of arts and leisure would suggest you are thinking more along the lines of the utilitarianism of J.S. Mill, in which he distinguishes between higher-order and lower-order pleasures.

I also realize that Sam Harris and you are not talking about "raising psychopaths" and I am probably quite skeptical that you can raise someone to be a psychopath unless you performed some surgery on their brain. All I meant with my observation is that it would be somewhat ironic, don't ya think, if those best equipped to most consistently apply the principles of increasing human well-being were, themselves, psychopaths?

I'm also pointing to the way that if this were true it would seem to conflict with our intuitive notions of morality.


ETA: I will also say that whether true or false these results and these experiments should be welcomed by you and Sam Harris if only because it provides just a little more data for the project of determining human values scientifically.
 
Harris is simply inconsistent. He claims when he feels like it that he has solved the is/ought problem, which is the equivalent of claiming to have squared the circle or accelerated a hydrogen atom to 1.1c, but other times he drops the pretence. My view is that he knows exactly what he's doing and does it deliberately, but it's possible he's simply a sloppy thinker.

Also you aren't going to solve the is/ought problem with any amount of semantic manoeuvring, any more than you can accelerate a hydrogen atom to 1.1c by redefining the words in a physics textbook to suit yourself.
 
Harris is simply inconsistent.

That may be.
My view is that he knows exactly what he's doing and does it deliberately, but it's possible he's simply a sloppy thinker.
In general, he certainly is not a sloppy thinker. Just watch him in a debate with theists. Even his debate with Craig was not sloppy (maybe strategically poor, but not sloppy).
 
Pretty sure I'd go mad trying to keep up with all that.
I might have gone a little mad, myself.

I think "maximise total amount of welfare in the universe" fails in extreme cases because it doesn't take into account that it is people which we ultimately care about.
This is a good point to consider. I think taking an evolutionary approach would probably do a lot in reminding us of this.

Extreme examples such as "kill all humans and replace with orgasmium" can cause problems in this case.
Of course, what it means to "be human" could even change over time. (Perhaps dramatically so if Kurzweil's technological singularity becomes a reality, and/or transhumanism takes off in the population.)

I think the example of neuroscience is critical because it removes that one last bit of nature that people like to view as magical: the human mind
Oh yes, neuroscience is no doubt critical for that reason, and others.

But, there is also no reason to stick solely with that. If other lines of investigation could be taken, to help confirm the findings or define where they don't apply, I think that would also be helpful.

Wowbagger: You're yielding the first mile to something other than science! You're with the enemy!
No, yielding the first mile to something other than science does NOT make one the enemy. They would be taking my First Approach to the issue.

When you wrote this: "It seems quite impossible to me for science to give 100% objective morality.", it sounded like that is what you were doing.

But, I do agree that ethics is NOT purely relative, and that all ethical views are not equally good.


And, I tend to think the Last Mile is more important for people to accept. Though, it looks like there is little debate on that matter, here.

Under this view of "objectively wrong" I will answer yes, but not in all cases. If someone has altruistic intentions, if they truly want to help their fellow man by informing them of their impending doom, then they can be objectively said to be in error since the result is counter to their intentions. Rather than helping, they are harming: They are wrong even by their own standards. If a person has no altruistic intentions whatsoever -i.e. they are actually a robot programmed to inflict maximum harm- then they cannot be said to be wrong in any meaningful sense.
They are not "wrong" regarding their intentions, but they are being maliciously deceptive, which is usually "wrong" morally.

Also you aren't going to solve the is/ought problem with any amount of semantic manoeuvring, any more than you can accelerate a hydrogen atom to 1.1c by redefining the words in a physics textbook to suit yourself.
That's because it is still a lot more useful to define hydrogen as 1.00.

The problem is that it was Sam Harris who was redefining science. I was simply offering an alternative to reverse that: Science remains defined how you think it should be defined. But, a different word is used to encapsulate the bulk of Sam's points.

If the core meaning of his points has value, perhaps that will be easier to see, once we no longer have to swallow it under the word "Science" as much.

I won't sue, but I agree with you. It's just semantics, it doesn't move the dispute down the road.
If my assumption is correct, that Sam Harris is defining science much too broadly: This semantics approach could move the discussion forward, because it bypasses the whole Hume objection, and extracts the core meaning of his points, which we can then discuss.

What, in your opinion, is the value of his book, assuming that it does not make a first mile case?
There are still plenty of people who would object to science touching morality, in the Middle Distance and Last Mile. Perhaps not on this Forum. But, whenever we run into such folks, this book gives us a good framework for generating arguments against those objections.

Apparently, if Sam Harris is correct, there are still a bunch of "extreme liberals", who are a little more like straight relativists, and do not see the objections to burqas and other aspects of the Taliban that we do. This book shows us some directions we can take to demonstrate they are objectively wrong.
(I have not run into such folks, myself. But, apparently there is a reason Sam had to write much of his book to address their arguments.)

Also, folks like me, who already accepted the idea of science answering moral questions, learned a bit about just how far science has gone, and will continue to go, in these directions.

The book could be more valuable, still, if it went into more details. (Perhaps into the relevant findings of neuroscience, if he did not want to commit to specific answers in morality.) Even the parts after the introduction felt like an introduction to a larger book. I never said the book was perfect.

But, it has now been about a year since I read the whole thing. It is high time for a re-reading, I guess.

That we should value well-being? We already do. That we should scientifically study how well-being might be increased? We already do.
We might already do that. But, apparently, there are some folks who didn't get the message.

There seem to be folks on this very thread who take issue with well-being being something morality should value. We can formulate objective reasons why we should do so, with some help from this book.

The glaring problem is that Harris does not make the jump from the fact that all people value well-being to the conclusion that people should value other people's (or animals) well-being, nor does he seem to acknowledge that a jump is there to be made.
Evolutionary biology shows us how such a jump can be made. Once a survival strategy of altruistic behavior takes off, it will successfully increase well-being for everyone, including those contributing large shares of altruistic behavior.

More specifically: We know that there is a Peter-Singer-style expanding circle of altruism (increasing well-being for 'other people'), in humans, in spite of the fact that our very genes are selfish (inclined to increase well-being only for ourselves and kin).

Perhaps Sam doesn't go into that, very much, in his book. But, I have read others that emphasize these sorts of points.

Harris' book seems to argue for utilitarianism.
As I stated earlier, I do not think 'well-being' is straight utilitarianism. Rather, utility could be one aspect of 'well-being'.

My point here is that it would indeed be ironic if psychopaths tended to be the ones most capable of applying strict utilitarianism.
Yes, it would be ironic if psychopaths took a position to be more objectively moral than others. But, would that be a bad thing?
 
Science, for example, is changing how we view eye-witness testimony, statistics, genetic evidence, lie-detection systems, etc. in the courtroom. As a practical matter, science seems to be able to say a lot about morality, when it matters most: In our justice system.
Justice systems don't solve moral dilemmas. They enforce the prevailing morality in a society.

You never heard of the Scientific Method?
I have heard of the scientific method, but I have never heard it described as a "discipline or framework for making moral decisions"

Science goes beyond "point of view". The findings of science can be verified independently of anyone's point of view.
Yes. That is not true of morality.

Yes, there can be more than one right answer, and the best for one group might be different than for another. But, that does not necessarily imply relativism. That is simply the landscape of morality at play.
What does imply relativism is that what some will see as a peak in the moral landscape, others will see as a valley, and what looks like a valley to some will be a peak to others.

There are several well-respected works on this. The amount of evidence demonstrating the role genes have played, in the moral frameworks we build is massive.
I wasn't disputing the amount of evidence, I was disputing your claim that we know how much of a role our evolutionary heritage plays in the decisions we make. That's a different claim; just showing the amount of evidence for evolutionary influences on our behaviour does not mean you show how much they influence our behaviour.

The best book was perhaps one of the first: The Selfish Gene by Richard Dawkins, but make sure you read the Second Edition or later. Other books include, but are not limited to:
The Blank Slate by Steven Pinker, Breaking the Spell by Daniel Dennett (mostly about religion, but goes into the evolution of altruism and such), Most of Michael Shermer's books, etc.
I don't consider any of those people relevant experts on the claim you made.

I have yet to see a legitimate science book that refutes the influence of our evolutionary heritage in moral decision making. Though, of course, environmental factors also play a role.
Is there a difference between "our evolutionary heritage" and "environmental factors" ?

But, it is not relativism, because what makes a moral system correct or not is dependent on empirical facts about genes and environment.
No, the "genes and environment" are only what have shaped the viewpoint. It does not show whether or not it is correct. Natural selection is not nature's moral judgement; just because a set of moral principles has survived does not prove it is "correct". Behaviours that we may consider immoral might be traced to our evolutionary heritage. And we seem to have inherited several contradicting moralities; so on what basis do we declare which one is "correct"?

It might seem that way, if your ideas about morality are outdated. Last I heard, there was no Definitive Authority that morality must have a teleological component.
Is morality not inherently "how we ought to behave"? How we bring about our goals (telos) and how we avoid the things we ought not to do? I fail to see how one can separate teleology from morality, because every moral question asks "should I do this or that to get closer to my goal?". Sam Harris certainly doesn't remove the teleological component from his argument; rather, he introduces a telos: "well-being of conscious creatures".

I am assuming the person who reads those facts is sane.
A dangerous assumption to make, considering that you haven't defined sanity, and sanity itself is a value judgement so introducing it in your argument means asking us to assume the very thing you are supposed to prove.

Perhaps to an insane person it would not. But, that is hardly an argument in your favor.
Or perhaps the fact that you can declare as "insane" anyone who points out that you haven't presented a logical argument gives you a convenient excuse for not presenting a logical argument.

I have shown you that it is detrimental to well-being in every reasonable way one can measure it.
And I have shown you that is irrelevant as it is not the issue.

We can also see that well-being is a reasonable moral value because it is an extension of our genes' compulsion to reproduce and thrive, but as adapted for conscious creatures.
It is not the only moral value; there are others that contradict it. I also fail to see why it is a more reasonable moral value, and your argument makes it seem even more unreasonable, though it nicely shows how difficult it can be to remove teleology from moral values: you personify genes as being goal oriented instead of being chemicals mindlessly reacting with other chemicals.

But even if we take this personification seriously, your argument doesn't make any sense. Our genes have a "compulsion to reproduce and thrive" often in ways that are to the detriment of other conscious creatures.

You misunderstand me.
I think I understand you just fine. I was mocking your poor argumentation.

And so a lot of what is judged to be practical is based more on the findings of science than just about anything else.
"Judged practical" is not the same thing as "judged moral".

Granted, much of what I think about morality has been shaped by his book (though not solely by his book. Folks like Dawkins and Pinker were also big influences).
If those are the people you look up to as moral role models, I'll pray that the IPU may have mercy on your soul.

I think his claims might be different from what you are claiming.
Could be.

But, as I said in some other posts: He might not be building a First Mile case, anyway.
No, he doesn't seem to be building such a case. He seems to be glossing over it.

Moral realism does not imply actual reality. It also encompasses delusions of what reality is. Science has a way of breaking through those delusions, to help us build a more reliable view of reality.
Moral relativism is a more reliable view of moral reality.
 
Justice systems don't solve moral dilemmas. They enforce the prevailing morality in a society.
Prevailing morality seems to favor science, when it matters most, then.

I have heard of the scientific method, but I have never heard it described as a "discipline or framework for making moral decisions"
It is a discipline or framework for generating empirical knowledge. In this specific case, it would be generating empirical knowledge about moral decisions.

Yes. That is not true of morality.
Not conventional morality, true. But, it will be true of science-based morality.

What does imply relativism is that what some will see as a peak in the moral landscape, others will see as a valley, and what looks like a valley to some will be a peak to others.
From a scientific perspective, peaks and valleys would be objectively the same for everyone. But, to climb from one peak to another would involve visiting a valley, making such a transition undesirable. (Unless a bridge directly between two peaks could be made, which is not impossible though very rare.)

I was disputing your claim that we know how much of a role our evolutionary heritage plays in the decisions we make.
Our estimates of how much of a role it plays improve over time, even if we do not have very precise answers yet.

Is there a difference between "our evolutionary heritage" and "environmental factors" ?
Given the fact that evolutionary heritage was driven by environmental factors, I suppose the difference is moot. I meant to imply genetic heritage.

No, the "genes and environment" are only what have shaped the viewpoint. They do not show whether or not it is correct.

As Paul2 pointed out earlier: Evolution has given humanity a tendency toward certain cooperative behaviors that can provide a basis for morality. That means it is not relative.

Though, I think it could, potentially, be objective. Paul2 doesn't think so.

Natural selection is not nature's moral judgement; just because a set of moral principles has survived does not prove it is "correct". Behaviours that we may consider immoral might be traced to our evolutionary heritage. And we seem to have inherited several contradictory moralities;
I agree with this.

so on what basis do we declare which one is "correct"?
As I stated earlier, there are several objective ways one can measure the well-being of a society. This can help us determine which are correct.

Sam Harris certainly doesn't remove the teleological component from his argument; rather, he introduces a telos: "well-being of conscious creatures".
But, there is no teleological aspect of well-being in his arguments. Our perceptions of well-being could be transformed over time.

A dangerous assumption to make, considering that you haven't defined sanity; and sanity itself is a value judgement, so introducing it into your argument means asking us to assume the very thing you are supposed to prove.
Don't be silly. There are several objective ways to measure the sanity of someone. This is true even if our perceptions of sanity change and transform over time.

Or perhaps the fact that you can declare as "insane" anyone who points out that you haven't presented a logical argument gives you a convenient excuse for not presenting a logical argument.
Are you arguing we should take the moral views of such certified insane people seriously?

It is not the only moral value; there are others that conflict with it.
Pick one and defend it against well-being.

But even if we take this personification seriously, your argument doesn't make any sense. Our genes have a "compulsion to reproduce and thrive" often in ways that are to the detriment of other conscious creatures.
That is why I modified it to take the value of consciousness into consideration.

"Judged practical" is not the same thing as "judged moral".
You were the one talking about pragmatics:
Was it "science" that produced this solution? Or just pragmatic and moral people?
You just argued against yourself.

Moral relativism is a more reliable view of moral reality.
You said it yourself: Relativism wouldn't help anyone know the actual peaks from the actual valleys! It would be up to everyone's point of view!

Science can help us figure that out, objectively and independently of individual observers! That makes it more likely to be reliable and realistic.

Relativism would give more power to human delusions.
 
That's because it is still a lot more useful to define hydrogen as 1.00.

Sorry, but I can't draw any sense out of this sentence in context. It appears to be a total non sequitur.

The problem is that it was Sam Harris who was redefining science. I was simply offering an alternative to reverse that: Science remains defined how you think it should be defined. But, a different word is used to encapsulate the bulk of Sam's points.

If the core meaning of his points has value, perhaps that will be easier to see once we no longer have to swallow it under the word "Science" as much.

Assuming that his points have value and acting on that assumption, when that is in fact the only issue under contention, is a textbook example of begging the question.

However even if that weren't the case, you can't get around the is/ought problem by redefining your terms any more than you can get around the light speed barrier by redefining your terms. At best you could obfuscate the issue by using non-standard terms to try to make it look like you'd solved it.

The cherry on top of this sundae of pointlessness is that we already have a word - "philosophy" - for what you want to discuss. I suspect that the reason you don't want to use it is that if you said "science can't make moral claims without involving philosophy" it would be obvious the is/ought problem has not been solved. If you substitute some other word in for "philosophy" you might be able to make it look, to the unsophisticated, as if you've made some progress on the problem.
 
Yes, it would be ironic if psychopaths took a position to be more objectively moral than others. But, would that be a bad thing?

It wouldn't be bad if psychopaths were more objectively moral than others as such but the point would not be that they are more moral but better equipped to be. Why is this? Because they would be able to suppress sentiment - or wouldn't even understand such things - while coldly calculating which course of action is considered most "moral".

The reason this bothers me is that it conflicts with our intuitive understanding of morality and points to a possible problem with Harris' formula: that actions (and/or intentions?) which promote the greater flourishing of human well-being are more objectively moral than those which fail to reach such heights.
 
Yeah, I can see that. (Though there might be a subtle difference between good-character-out-of-necessity in a situation of hardship, versus the good-character-out-of-expectation that emerges from a well-off society, it probably does not mean much.)

I do not think "good character", alone, can cut it as a moral objective. It can, as I implied, emerge as one part of well-being. But it does not cover health, economic needs, environmental sustainability, happiness, etc., which the word "well-being" can bring into it.


Well, specifically it is one part of how Aristotle defined the goal of moral life -- which can certainly be redefined as well-being. He seemed to mean something more by it, though. His form of value ethics is, in any case, a perfectly valid approach to the subject.


Would covering the Middle Distance and Last Mile still contradict Hume, in some way? That is what I would like to know. If not, then the debate has ended. If so, then perhaps Hume is outdated.


I don't see how.


I agree that it might not be self-evident. But, it is still defendable.

I will have to re-read the book to see if Harris truly thinks it must be self-evident, because that might not be accurate.


I haven't read the book myself, but I don't see how he can argue anything new if he isn't proposing that as his argument. And many of the things I have heard him say in lectures and in some of his other writings do point to a first mile argument.


This is going to be an issue, no doubt. But, not an insurmountable one. We can use science to deduce who is right or wrong, or where compromises can be formed, if one is appropriate. We can develop answers. We have the methodology.

The rapists can also be said to be hurting themselves, in the long run, even if they do not think that way or realize it, at the time of the attack. Rape might work under certain conditions where society had not developed certain expectations of freedoms and rights for its citizens, yet. (And, indeed, for much of human history rape had been acceptable.)

But, the circumstances change when people expect a certain level of control over their bodies. Then, rape ceases to benefit anyone in the long run, not even the rapists themselves. Not even if they feel it is better for their well-being, at the time.

Based on what? Whose well-being? Society's as a whole? Why put the burden there? That is part of the point -- science doesn't answer first mile issues very well. Why not speak of society's artistic achievements as the ultimate goal instead?

Yes, we can always construct meta-ethical systems that deal with the first set of problems.


Here's one from a deontological perspective -- we should always act so as to treat others as an end and not as a mere means to an end. So, you see a guy raping a woman down an alley. What do you do? If you stop him you are not treating him as an end, but as a means to an end in order to stop the woman's pain. If you don't stop him you allow him to treat the woman as a means to an end. Can't resolve the issue in the real world, so we apply meta-ethical thinking and realize that the rapist has broken the 'moral sphere' since he is treating the woman as a means to an end. We are allowed to stop his action of treating her as a means to an end within our meta-ethical framework.

So, why are we talking about well-being and not the ends principle?

We can show these trends through history, scientifically. And, after such, we would most likely conclude, scientifically, that the rapist should be stopped.


We can also rationalize most anything, but that does not mean we are using science to prove it.


Historically, scapegoating has NOT shown to be effective or productive in actually solving problems. It only provides a superficial, temporary sense of relief. (I doubt scapegoating would even be considered utilitarian, though that might be a different debate.)

The problems we face, as a society, will often need innovative approaches to solve, which takes more science. Not easy answers or quick, inefficient fixes.

As I stated in an earlier reply: I do not think utilitarianism really covers it. It would only be one facet of well-being, not the whole thing.

That is my point. Utilitarianism doesn't cover this issue; it can't deal with scapegoating. That is one of its major weaknesses. Deontology deals with scapegoating extremely well, though. And that is the point. There are different ways that we think about moral issues; formalizing them is not so easy, though, because they arise from different types of thinking about the world. I fear that you would have to end up diffusing your definition of well-being to such a degree that it might become useless as a definition of anything if it were to account for consequentialist thinking as well as fairness and good character.



As I stated at the top of this post, there need to be multiple ways to define 'well-being'.


There are. That is why this whole process can't be objective. We have to decide which ways of defining 'well-being' that we use in any particular situation. Unfortunately, as we can see by using any number of the trolley-car examples we seem to use different types of modules for thinking morally about different situations. Some people appear to be very utilitarian and think in terms of consequentialism most of the time; other folks think largely in terms of fairness. Most people use different ways of thinking in different situations based on how it feels to them.

Look into Jonathan Haidt's work for examples of the different ways that folks approach moral problem solving.

There is no question that we could come up with some form of moral arithmetic that someone will say is the end all and be all. Aristotle did it. Kant did it. Mill sort of did it. Peter Singer has done it, updating Bentham and Mill.

The problem is that not everyone is willing to sign on because those constructed rational systems are not how we do morality. We can't even speak of morality without mentioning value. And value is meaningless from a 'purely rational' standpoint; values arise out of emotion, desire, motivation. They are all first mile things, as Hume pointed out.
 
You said it yourself: Relativism wouldn't help anyone know the actual peaks from the actual valleys! It would be up to everyone's point of view!

Science can help us figure that out, objectively and independently of individual observers! That makes it more likely to be reliable and realistic.

Relativism would give more power to human delusions.



Science isn't needed in that situation. We already live in a world of different strokes for different folks (and, by that I don't mean the modern age, but that different people are constructed with different desires and with desires that come into conflict with others). The reason that morality is not based in relativism is because it is not a system for deciding individual meaning, but a system that allows people to interact within a group setting.

Morality is not one thing. The argument against emotivism is just a waste of time because people don't actually argue that emotion is the end all and be all of moral decision making. Hume certainly never offered that as his argument.

Emotion, including empathy, is the origin of moral decision making. The rest depends on a complex set of calculations based on predicting outcomes and interactions with other humans. We don't need science to tell us what works in these situations; we could certainly use it, but it would be superfluous in most situations since all we need do is negotiate with others. "What, you want the coconut too? But if we both want it we can't both have it. Hey, why don't we cut it in half?" Science would be very helpful examining more complex situations in which we have a tendency to fool ourselves, but that isn't controversial. These arguments arise over people saying that science can tell us that we ought to want the coconut in the first place. Sure, science can tell us about the nutritional value, etc. of a coconut, but why is nutritional value important? Why is life important? Ultimately it comes down to a feeling. That is what Hume was on about.
 
Look into Jonathan Haidt's work for examples of the different ways that folks approach moral problem solving.

Sam Harris actually does discuss Haidt, particularly his article "The Emotional Dog and Its Rational Tail". The problem Harris identifies with Haidt's work is that whereas Haidt makes a compelling case that people generally create a moral framework from their emotions etc., rather than from what they believe are rational reasons, Harris thinks this is often no different from the way that some people hold other beliefs about which there are clearly objective facts.

Haidt said:
Both sides present what they take to be excellent arguments in support of their positions. Both sides expect the other side to be responsive to such reasons, each side concludes that the other side must be close-minded or insincere. In this way the culture wars over issues such as homosexuality and abortion can generate morally motivated players on both sides who believe that their opponents are not morally motivated.

Harris simply doesn't think that morality is a special case here, and the example he gives of a disagreement which fits this model is one that almost everyone on JREF is familiar with: 9/11 Truthers. Yet, whereas some people would like to look at Haidt's work as confirmation that there is no such thing as objective morality, we wouldn't say that there is no objective truth to the events of 9/11, despite people holding their own beliefs with huge conviction and apparently immune to reasoning.

Ichneumonwasp said:
The problem is that not everyone is willing to sign on because those constructed rational systems are not how we do morality. We can't even speak of morality without mentioning value. And value is meaningless from a 'purely rational' standpoint; values arise out of emotion, desire, motivation. They are all first mile things, as Hume pointed out.

Well, this is why the psychopath thing worries me a bit, as it would seem counter-intuitive. Maybe it needn't be a problem for objective morality, but I can't help thinking that this is not what people think morality is.

There's an interesting article by Jonathan Bennett called the Conscience of Huckleberry Finn in which he gives some examples of the conflict between (Humean) sympathy and (Kantian) reason. I'm not sure how relevant it is to this debate but you might like it:

http://www.earlymoderntexts.com/jfb/huckfinn.pdf
 
Sam Harris actually does discuss Haidt, particularly his article "The Emotional Dog and Its Rational Tail". The problem Harris identifies with Haidt's work is that whereas Haidt makes a compelling case that people generally create a moral framework from their emotions etc., rather than from what they believe are rational reasons, Harris thinks this is often no different from the way that some people hold other beliefs about which there are clearly objective facts.



Harris simply doesn't think that morality is a special case here, and the example he gives of a disagreement which fits this model is one that almost everyone on JREF is familiar with: 9/11 Truthers. Yet, whereas some people would like to look at Haidt's work as confirmation that there is no such thing as objective morality, we wouldn't say that there is no objective truth to the events of 9/11, despite people holding their own beliefs with huge conviction and apparently immune to reasoning.


Right, but to answer Harris, there is something that happened outside of people's brains on 9/11 that observers can agree upon. They can also disagree, but what occurred is not changed by how anyone thinks about it.

When it comes to differing styles of moral decision making, there is not an objective occurrence to which everyone has access. While it may be objectively true that I feel that murder is wrong, you are not privy to my feeling that murder is wrong. And, as you point out, it is not objectively true that every person feels that murder is wrong necessarily.

When it comes to morality, people don't just say, 'but that's how I feel about it" and that's the end of the debate. We aren't all emotivists. We do, however, begin our moral thinking in emotion, and sometimes the negotiation amongst different people never seems to reach a satisfactory conclusion. Since morality begins in a private area and then becomes a public issue I think it is quite different from a public physical event like 9/11 about which people may decide on private beliefs.



Well, this is why the psychopath thing worries me a bit, as it would seem counter-intuitive. Maybe it needn't be a problem for objective morality, but I can't help thinking that this is not what people think morality is.

There's an interesting article by Jonathan Bennett called the Conscience of Huckleberry Finn in which he gives some examples of the conflict between (Humean) sympathy and (Kantian) reason. I'm not sure how relevant it is to this debate but you might like it:

http://www.earlymoderntexts.com/jfb/huckfinn.pdf



It bothers me too. And thank you very much for the link. I'll get to it soon.
 
I basically think Hume is right that it doesn't follow logically that oughts can be derived from is's. Yet, going back to what I mentioned earlier, I also don't think that the is-ought and fact-value distinctions are identical.

"The Nazis are bad" is a value statement, yet it is clearly an is-statement rather than an ought-statement.

Or to use the type of example that Harris likes to use:

"The Taliban are bad." This, too, is a value statement but not an ought-statement.

In this case, Harris points out that he was once told at an academic seminar that this was merely his opinion. He attempted to find a hypothetical example in which it couldn't possibly be considered merely an opinion by imagining a tribe of people who simply poked out the eyes of every third child. If we were to meet a group like this couldn't we agree that it was bad? The woman he was arguing with denied it would be bad if they were doing it for religious reasons.

However, Sam Harris contends that we could indeed determine that it was objectively bad if, for example, everyone who was blinded like this suffered. And we could objectively determine if they suffered, and if society also suffered as a result of having fewer sighted people, and if the elders also could be shown that the religion they were doing this for had no objective truth (remember that people always disagree about religion, which doesn't prove that religious truths are relative but only ensures that lots of people are simply wrong).

Sam Harris is saying that the more we know about such situations and the level of happiness of certain societies the more we will be able to objectively increase the happiness/well-being of people in general. Even this is still somewhat controversial, it seems, as there are plenty of people who are wont to say, "How could you know what increases the well-being of people?"

In that case, what looked like a value judgment could also be a factual judgment:

It is better for a woman to live in Sweden than for her to live in Afghanistan under the Taliban.

This case may well be a factual statement and a value statement.

It is perhaps this which explains Harris' subtitle, "How Science Can Determine Human Values".

Of course, I realize that this still leaves the Is-Ought problem unresolved and I think Harris has conceded that he is unable to do so.

My own throw-away point to this is: So what? Hume is right to show we cannot logically infer ought from is, but remember that Hume also attempted to dig out the foundations of causation and was distressed to find he couldn't even logically infer that what will happen can be known on the basis of what has happened. His problem of induction couldn't be resolved, and yet we now accept that induction still seems to work if we judge it inductively. Its intractability doesn't stop us believing that causation is real.
 
