Sam Harris: Science can answer moral questions

The comparison to health is a red herring, as there are no extra 'competing axioms' of health, and any 'undecidability' of the healthiness of an action can, in principle, be resolved by better knowledge of the physical world. In contrast, there are many competing moral axioms that cannot be decided by reference to evidence. Indeed, there is great doubt as to whether moral axioms are meaningful or whether moral statements can be truth-bearing, and the contrary positions certainly cannot be proven. There are also plenty of easily cited 'hard cases' that demonstrate the difficulty of deciding between moral axioms. As far as I am aware, there are no 'hard cases' of health that cannot in principle be decided by science alone.

That you are no longer capable of even recognizing that 'health' is analogous is perhaps the best argument in favor of Harris' proposal. Once 'health' is wrested from the grip of teleology, it doesn't even occur to us to question the idea that one can make reference to the physical world in order to answer health questions.

Also, there are particular single actions that are unhealthy for everyone, e.g. being vaporised in a nuclear blast, and therefore there is a clear objective basis for health.

Yet we are unable to tell whether being vaporized in a nuclear blast would be perceived as good for everyone? I find that a bit hard to believe. :)

There is also another massive problem for Harris which magnifies all his other problems. Harris says that the well-being of conscious creatures must be the basis for deciding values, yet he does not seem to provide a working definition of what consciousness entails, or a justification of his definition as a dividing line in terms of well-being. If we assume that Harris has a broad definition in mind, simply 'the capacity to feel well-being or otherwise', we must include the well-being of all conscious creatures in the entire universe in our 'worst possible misery for everyone' formula. As if that weren't bad enough already, if we start removing Harris' anthropocentric arguments and replace the words 'human' and 'everyone' with 'all conscious creatures in the universe', we can see how distorted, truly subjective and meaningless his supposedly objective basis for morality becomes. We have to start wondering what the worst possible misery for individual tadpoles looks like, whether they have the capacity to feel pain, and how much tadpole worst-possible-misery equals one human worst-possible-misery (if we presume that every human's worst possible misery is an equal amount of misery, which is almost certainly either meaningless, undecidable or wrong). As anyone should see, this is only going to lead to truths of the most subjective kind.

I think this is your strongest argument against the idea that Harris is proposing utilitarianism - i.e. you have shown that his proposals fail to be meaningful or provide some of the necessary information if you try to force them into that particular slot.
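
To make the aggregation problem concrete, here is a minimal sketch; every name and number in it is hypothetical, since Harris supplies none of them. Any cross-species total of misery depends entirely on an exchange rate chosen by fiat:

# Hypothetical sketch of the cross-species aggregation problem.
# Nothing here comes from Harris; the weights are the missing piece.
def total_misery(misery_by_species, weights):
    """Sum each species' misery, scaled by an arbitrary exchange rate."""
    return sum(weights[s] * m for s, m in misery_by_species.items())

misery = {"human": 100.0, "tadpole": 100.0}

# Two equally unjustified exchange rates, two different "facts":
print(total_misery(misery, {"human": 1.0, "tadpole": 0.01}))  # 101.0
print(total_misery(misery, {"human": 1.0, "tadpole": 1.0}))   # 200.0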

Linda
 
Imagine if it was the early days of science. We knew we had to do experiments. We knew we had to form hypotheses. We knew they had to be testable. But suppose we hadn't thought about repeatability yet. After all, why do something you've already done?

Then suppose someone says to us: "Hey, I just got bored and repeated an experiment and got a different result. It turned out someone made a mistake the first time, but then I started thinking, what if the results depended on factors we didn't recognize. Our scientific progress is slowing because of this and I suggest we add a new axiom."

Proposed axiom: "Efforts *should* be made to repeat scientific experiments to ensure they were done correctly and have no unknown dependencies. If repeated experiments produce results that conflict with the original results, any conclusions drawn from the original results *should* be rejected or at least re-examined."

Now, I would say that this seems subjectively reasonable. It would allow science to progress better and produce more useful conclusions. And rational scientists did accept essentially this as a scientific axiom.

But what of this objection: "Wait. That's a moral axiom, saying what people *should* do. It's certainly not provable and probably not true, no matter how subjectively reasonable it seems. If we accept that, we're just not doing science any more."

The fact is, when a lack of axioms impedes science from drawing useful conclusions, axioms are added that are fundamentally prescriptive moral ones. So even if we agreed that science could not reach moral conclusions without a prescriptive moral axiom, that wouldn't say much. Science can't reach any conclusions without such axioms.

I would argue, in fact, that science has already accepted enough prescriptive moral axioms that no more are needed. But so what if that's not so? If science can't reach useful conclusions because it's missing an axiom, then there is nothing unscientific about adding one.
 
Imagine if it was the early days of science. <snip>

Proposed axiom: "Efforts *should* be made to repeat scientific experiments to ensure they were done correctly and have no unknown dependencies. If repeated experiments produce results that conflict with the original results, any conclusions drawn from the original results *should* be rejected or at least re-examined."

<snip>

Actually, I think that what your example illustrates is that the idea that we build knowledge using axioms is foolish. Obviously, what you describe above is not how the idea of repeatability entered science; it was simply the natural consequence of wanting red cloth or wanting crops to grow (one-off events are not useful in that regard). Similarly, the idea that we build morals from axioms by positing prescriptive statements obviously does not describe how the idea that there are better and worse ways to live arose. That was simply the natural consequence of finding pleasure and pain in our daily lives.

You will notice that it is far easier to do science than it is to describe what science is, which suggests that the practice is unrelated to philosophers' failed attempts to constrain it a priori.

Linda
 
A dead creature has no emotions. In the case of bacteria and gnats, it has no mourning living relatives either. In the case of humans, the trauma of surviving relatives and friends is significant and long-lasting. In the case of antelopes... I dunno, haven't researched their psychology in the aftermath of losing a friend.
I think you might just have morally justified genocide there (just make sure you kill all the mourners and mourners of the mourners).

Imagine if it was the early days of science. <snip>

Proposed axiom: "Efforts *should* be made to repeat scientific experiments to ensure they were done correctly and have no unknown dependencies. If repeated experiments produce results that conflict with the original results, any conclusions drawn from the original results *should* be rejected or at least re-examined."

<snip>
I'm not seeing how that would be a new axiom. If the original axiom for using science was to be as accurate as possible, the conclusion to repeat experiments would still follow from that same axiom.
 
I'm not seeing how that would be a new axiom. If the original axiom for using science was to be as accurate as possible, the conclusion to repeat experiments would still follow from that same axiom.
Probably so. That doesn't make my point any less valid, but it does show that I don't write very good stories.

I guess you'd have to change my story so they hadn't yet accepted predictive validity as a primary scientific goal and only had isolated concretes like "do experiments". But then a fair argument could be made that what they were doing wasn't really science.

Okay, it's not as good an analogy/story as I thought. Thanks for bumming me out.

But the point is, science has to have at least some root moral axiom like "one should value scientific truths based on their predictive validity". Otherwise, science could never tell us how to do science, which would leave it nothing to do at all.
 
I don't have the energy left for any prolonged discussions on this, but I'll chip in to comment on this:

Proposed axiom: "Efforts *should* be made to repeat scientific experiments to ensure they were done correctly and have no unknown dependencies. If repeated experiments produce results that conflict with the original results, any conclusions drawn from the original results *should* be rejected or at least re-examined."

Now, I would say that this seems subjectively reasonable. It would allow science to progress better and produce more useful conclusions. And rational scientists did accept essentially this as a scientific axiom.

But what of this objection: "Wait. That's a moral axiom, saying what people *should* do. It's certainly not provable and probably not true, no matter how subjectively reasonable it seems. If we accept that, we're just not doing science any more."

This is the same mistake that Harris made. "we should test things to find out more of reality" or "repeatability increases the accuracy of our findings" are NOT moral oughts. They are purely practical measures. Likewise, assuming that there is such a thing as causality (necessary for science) although it cannot be proven is not the same as saying "well, let's say that our ultimate objective is to increase welfare for everyone". One is a reasonable assumption garnered from what we have seen of the world so far; the other is a moral proposition.

It really irks me that Harris doesn't seem to grasp the difference between the two.
 
This is the same mistake that Harris made. "we should test things to find out more of reality" or "repeatability increases the accuracy of our findings" are NOT moral oughts. They are purely practical measures.
I don't see what you think the difference is between these two things. Anything that tells us what we should do is a moral ought. Many oughts achieve practical goals.

Likewise, assuming that there is such a thing as causality (necessary for science) although it cannot be proven is not the same as saying "well, let's say that our ultimate objective is to increase welfare for everyone". One is a reasonable assumption garnered from what we have seen of the world so far; the other is a moral proposition.
I 100% agree. Adding arbitrary, unjustified axioms is not doing science. However, the point is that science can contain axioms under the right conditions. So proving that something requires an axiom does not prove it's outside the realm of science.

It really irks me that Harris doesn't seem to grasp the difference between the two.
I think you have the problem a bit wrong. The issue is that Harris thinks he can pick an axiom because it seems true to him and that he can add it to science. The level of justification that would be required to permit adding a root moral axiom and be able to justify calling the result 'science' is comparable to what's needed to accept causality or predictive validity. That is, a hell of a lot more than we presently have for any proposed moral axiom.

That's one of the reasons I say we can't really make morality scientific yet. If we need an additional axiom, we don't know what it should be yet. If we don't need an additional axiom, we don't know how to proceed without one yet.
 
... Adding arbitrary, unjustified axioms is not doing science. However, the point is that science can contain axioms under the right conditions. So proving that something requires an axiom does not prove it's outside the realm of science. ...
It seems to me that the issue of morality is all about that particular axiom. Once the axiom is established, science can be applied to morality. The problem, though, is that this axiom is at the heart of morality. That's the part that runs into the is/ought problem and that's the part that science cannot be applied to.
 
It seems to me that the issue of morality is all about that particular axiom. Once the axiom is established, science can be applied to morality. The problem, though, is that this axiom is at the heart of morality. That's the part that runs into the is/ought problem and that's the part that science cannot be applied to.

Yes, I think you have succinctly stated the problem with regard to those who disagree that Harris has solved the is/ought problem. No one has argued against the idea that, given an axiom to start the morality ball rolling, science can then be used to help us achieve the various goals that follow from that axiom.
 
It seems to me that the issue of morality is all about that particular axiom. Once the axiom is established, science can be applied to morality. The problem, though, is that this axiom is at the heart of morality. That's the part that runs into the is/ought problem and that's the part that science cannot be applied to.
I agree that this is a rational response to this part of my argument. However, my response would be to point to the other parts of my argument, where I argued that prescriptive moral claims could actually be objectively valid relational facts -- we just don't know exactly how yet. Once we understand how people actually do form prescriptive moral claims, we should be able to figure out how the input produces the output, and objectively validate that process as being error-free or reject it as fraught with error.

Moral claims may appear to be value judgments. But once we understand at least what a value judgment purports to measure, we can equate them to relational facts. For example, since we understand reasonably well what "I *like* ice cream" is purporting to measure (how much pleasure one gets from eating ice cream), it's not hard to make it an objective relational fact: "ice cream and I are so constructed that when I consume ice cream, I experience pleasure."

We can understand, thanks to knowing what "liking" really is, that the latter objective claim is a reasonable equivalent of the former subjective claim, and use the latter in substitution for the former. Once we know what "ought" really is, we should be able to find reasonable objective equivalents to subjective, prescriptive moral claims. We don't yet know what such claims will look like, and only future science can tell us. So that's a significant way science will eventually affect our understanding of moral claims.

You don't need much science to understand descriptive moral claims. They are akin to "it looks green to me". But to understand what "it doesn't just look green to me, it *is* green" really means, you need to understand the science of color vision and optics because that's where "really being green" lives. Similarly, we will need to understand the science of making moral judgments to learn what "really *is* wrong" actually means.
 
I feel that what some here advocate as "scientific morals", namely a set of moral attitudes that we discover to be very popular among humans, would actually be simply a statistical analysis of contemporary human psychology. It would tell us something about humans and their moral thinking, not about the validity or orthodoxy or reasonableness or obligatoriness or goodness or cruelty or selfishness or indifference of this moral thinking.
 
I feel that what some here advocate as "scientific morals", namely a set of moral attitudes that we discover to be very popular among humans, would actually be simply a statistical analysis of contemporary human psychology. It would tell us something about humans and their moral thinking, not about the validity or orthodoxy or reasonableness or obligatoriness or goodness or cruelty or selfishness or indifference of this moral thinking.

Exactly. Although I must have misunderstood you, because I thought this was what you were proposing with your scale of good and bad acts (which would only make sense in the setting of some sort of psychological consensus on good and bad acts).

We see/saw this in the field of health. We use the way we feel to tell us whether we are well. And for thousands of years, it was our intuitions which told us the state of our health. We even built up a field of folk medicine, whereby the presence or absence of mental states was used to indicate the presence or absence of some sort of process interfering with our health. And the idea that our intuitions are tapping into some sort of objective truth is so pervasive that it persists today, even in the face of our knowledge that they are often very wrong. Modern medicine and the substantial progress made in health came about because we began to discard the idea that statistical popularity or a cataloging of human intuitions was revealing truths about the human body. We made the most progress when we found ways to discard these intuitions (double blinding).

We are still trying to practice a folk morality, as philosophy still seems to be hung up on the idea that prescriptions produced by moral intuitions form the basis for thinking about good and bad actions.

Linda
 
<snip>

Moral claims may appear to be value judgments. But once we understand at least what a value judgment purports to measure, we can equate them to relational facts. For example, since we understand reasonably well what "I *like* ice cream" is purporting to measure (how much pleasure one gets from eating ice cream), it's not hard to make it an objective relational fact: "ice cream and I are so constructed that when I consume ice cream, I experience pleasure."

<snip>
"I like ice-cream" is the kind of statement which is either a fact or not, however "ice-cream tastes great!" is a personal value judgement. The same would go for "I believe dying for your country is honourable" vs "dying for your country is honourable". While in future we may get better at discovering if a particular value judgement comes from a poorly functioning brain, I don't see how we could ever use science to discern why one value judgement might be "better" than another.
 
I thought this is what you were proposing with your scale of good and bad acts (which would only make sense in the setting of some sort of psychological consensus on good and bad acts).
My scale is more objective than that; surveying anyone's opinion about anything is not a part of it.

It is a logical scale from the most selfish possible to the most unselfish possible, ranging from illegal destruction ...to... egocentric free competition ...to... pursuit of full equality ...to... self-sacrifice on behalf of others ...to... being the victim of illegal destruction.
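
As a sketch (in Python, purely for illustration), the scale is just an ordering; the positions below are placeholders, not measured quantities:

# A sketch of the scale as a pure ordering, most selfish first.
# The integer positions are placeholders; only their order matters.
MORAL_SCALE = [
    "illegal destruction",                     # most selfish end
    "egocentric free competition",
    "pursuit of full equality",
    "self-sacrifice on behalf of others",
    "being the victim of illegal destruction", # most unselfish end
]

def rank(category):
    """Return a category's position on the selfish-to-unselfish ordering."""
    return MORAL_SCALE.index(category)

# The same input always yields the same rank; no opinion survey involved.
assert rank("pursuit of full equality") > rank("egocentric free competition")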
 
My scale is more objective than that; surveying anyone's opinion about anything is not a part of it.

It is a logical scale from the most selfish possible to the most unselfish possible, ranging from illegal destruction ...to... egocentric free competition ...to... pursuit of full equality ...to... self-sacrifice on behalf of others ...to... being the victim of illegal destruction.

It's not objective. You have simply chosen some characteristics on which you think there is psychological consensus as to 'good' and 'bad', such as equality, selfishness, selflessness, and property rights, and put them onto an arbitrary scale, without any rationalization for the choice of characteristics or for the intervals (maybe you have a rational basis for the intervals, but you didn't offer one at the time).

The actual measurement of those characteristics may be objective, but your choice of characteristics and the form the scale takes is the part which would vary depending upon consensus (and represents an example of the "is/ought problem").

Linda
 
<snip>

We see/saw this in the field of health. We use the way we feel to tell us whether we are well. And for thousands of years, it was our intuitions which told us the state of our health. We even built up a field of folk medicine, whereby the presence or absence of mental states was used to indicate the presence or absence of some sort of process interfering with our health. And the idea that our intuitions are tapping into some sort of objective truth is so pervasive that it persists today, even in the face of our knowledge that they are often very wrong. Modern medicine and the substantial progress made in health came about because we began to discard the idea that statistical popularity or a cataloging of human intuitions was revealing truths about the human body. We made the most progress when we found ways to discard these intuitions (double blinding).

<snip>

That's because the human body is a meat machine with a fairly standard set of operating limits for the harmonious functioning of its sub-systems.

Underlying all of medical practice are value judgements about cost, quality and quantity, which are in the realm of moral philosophy, not science.
 
It's not objective.
As objective as the Celsius system for measuring temperature, for example.

You have simply chosen some characteristics on which you think there is psychological consensus as to 'good' and 'bad'
I have chosen objectively observable characteristics, but not alleged any political or other consensus about them being considered good or bad.

an arbitrary scale
The scale is not more arbitrary than the Celsius system for measuring temperature, for example.

without any rationalization for the choice of characteristics
A lot more than "without any" is available here:
http://www.johnjoemittler.com/ethics/English/ch_04.html

with respect to intervals
It is arguable whether the distance between "illegal destruction" and "legal selfishness" should be equal to the distance between "legal selfishness" and "pursuit of full equality". That problem goes away by using +b (instead of +2) for illegal destruction, and +a (instead of +1) for legal selfishness, where b > a > 0. Other than that, there is no "arbitrariness", or else you are using criteria that make a lot of science "arbitrary".

"an example of the "is/ought problem"
The system does not include any "is/ought", and therefore has no such problem either. It might, though, be used as an information source for making such decisions.
 
I don't see what you think the difference is between these two things. Anything that tells us what we should do is a moral ought. Many oughts achieve practical goals.

You also thought that "the sky is blue" is a value judgement. You simply don't seem to understand what morality is.

I'll give it another shot though:

-A moral ought tells people what they should do, not out of practical considerations but because the act is "good" or "bad" in and of itself. People who say that murder is wrong generally don't mean that murder should be discouraged in civilised society as it decreases general quality of life, but because the notion does not sit well with them on a fundamental level. Their human nature is opposed to it. That makes it a moral ought.
-A practical "ought" is simply a factual claim about the most effective way to achieve a certain thing. For example, "X has the highest chance of achieving Y." Such a claim is either true or false. Someone who says that murder should be discouraged in society if one wants to achieve a higher level of welfare (however this is measured) is making a practical claim.


If you honestly don't think there's a difference between the two, then there is no point in talking about morality in the first place. It certainly does not follow that morality is objective. Rather it would argue in favour of moral nihilism.
 
As objective as the Celsius system for measuring temperature, for example.

I agreed that the measurement itself could be considered objective. The part which isn't is the choice of what to measure. For example, choosing to determine size by measuring temperature doesn't quite get you the answer you are looking for.

I have chosen objectively observable characteristics, but not alleged any political or other consensus about them being considered good or bad.

Of course you have alleged consensus. You have ranked disparate ideas in terms of 'something'. If consensus varies on that 'something', then the rankings would change.

The scale is not more arbitrary than the Celsius system for measuring temperature, for example.

Well, the intervals on a Celsius scale are meaningful, for one thing. But more importantly, if you want to know the size of something, the Celsius scale will give you objective numbers, but they don't mean what you think they mean.

A lot more than "without any" is available here:
http://www.johnjoemittler.com/ethics/English/ch_04.html

I read the link. He also does not provide any justification, other than to simply declare what the intervals represent. It's not a matter of whether a scale can be declared, but rather whether that scale forms a valid and reliable measure of whatever it is that is of interest to us.

It is arguable whether the distance between "illegal destruction" and "legal selfishness" should be equal to the distance between "legal selfishness" and "pursuit of full equality". That problem goes away by using +b (instead of +2) for illegal destruction, and +a (instead of +1) for legal selfishness, where b > a > 0.

Yes, the use of rank does do away with the potential for invalid assumptions. But then it also obviates most of what you claimed for your system, such as:

"Add 1000 people and 10,000 situations and actions, with this mathematical model you can still keep track of the statistical trends of moral behaviour, total moral balance, statistical highs and lows, etc."

Other than that, there is no "arbitrariness", or else you are using criteria that make a lot of science "arbitrary".

That is why we tend to ask for careful study of the reliability and validity of a measure, rather than declarations (there is a huge body of science behind the idea of measuring constructs).

The system does not include any "is/ought", and therefore has no such problem either. It might, though, be used as an information source for making such decisions.

At some point, you have to justify why we ought to measure selfishness if we are interested in whether an action is good or bad.

Linda
 
