Sam Harris' The Moral Landscape Challenge

He seems to have a thesis, but the text circles around it rather than actually supporting it. It's not like reading John Locke.

Sounds very similar to the arguments libertarians make in favor of the existence of "natural rights", as well as to Christian "presuppositional apologetics". In all three cases, the root error turns out to be playing fast and loose with deeming something "axiomatic" and failing to distinguish observation and deduction from assumption.
 
I'm still willing to hold off on a final review until I actually finish, but I'm getting the same wishy-washy feeling I got from his earlier essays, with more disappointment this time, since the length of a book provided enough room to structure a proper argument. In particular, he seems to be consistently advancing pragmatic ethics without calling it that. The only two explanations I can think of are that he is either ignorant of most of contemporary philosophy, or has published a book that elaborates on others' philosophical work without giving them credit.

In any case, I think I should suspend discussing the book itself in this thread, since it seems to be intended for discussing his recent Challenge. Regarding which, my opinion is that he could be sincere, but I feel this is a million dollars of publicity purchased for $1,000.
 
I think that's true (except to say that you needn't start with axioms--you can start with principles understood as something like rules that do not express propositions), but the moral axiom will necessarily be normative. Deducing the normative from the natural is the thing that we don't seem to be able to do.

I haven't read Harris' book, but he strikes me as a lazy thinker on this topic.

Agreed. The irony is that Harris refers to Hume in just this way:

Many of my critics piously cite Hume’s is/ought distinction as though it were well known to be the last word on the subject of morality until the end of time. Indeed, Carroll appears to think that Hume’s lazy analysis of facts and values is so compelling that he elevates it to the status of mathematical truth:

http://www.project-reason.org/newsfeed/item/moral_confusion_in_the_name_of_science3/

Yes, Hume was being 'lazy' when he made his observations about the gap between facts and values. He just couldn't be bothered to solve the problem. Luckily we have Harris here to do the heavy lifting...

Such arrogance betrays Harris' fundamental failure to come to terms with Hume. The is-ought problem completely punctures 'how science can determine human values'. I don't think 1,000 words are necessary; a paragraph ought to do.

Harris wrote a book and gave a lecture in which his language equivocates continually between a strong claim:

Science can determine human values

and a weak claim:

If we accept basic moral values (e.g. suffering = bad), science might be able to help us reach good outcomes.

Not surprisingly, he fails to make a case for the stronger claim, and his contribution to the weaker claim is derivative and unremarkable.
 
It's not a hypothetical situation:

Which way should the balance swing? Assuming that we want to maintain a coherent ethical position on these matters, this appears to be a circumstance of forced choice: if we are willing to drop bombs, or even risk that rifle rounds might go astray, we should be willing to torture a certain class of criminal suspects and military prisoners; if we are unwilling to torture, we should be unwilling to wage modern war

At least there he said "Assuming that we want to maintain a coherent ethical position..."

When it comes to state-sanctioned torture, I'm OK with some degree of internal incoherence. I'd rather be a torture-opposing hypocrite than a torture-supporting utilitarian.


Harris does raise an interesting philosophical question, and it's one that is worth thinking about.

In essence, it seems that he's trying to challenge people about inconsistent reasoning. If we accept that, in waging war, it's OK for innocent people to die - and then try to ignore that reality by referring to them with euphemisms such as "collateral damage" - then why is it not OK to occasionally torture people who we know are bad, particularly if it seems that doing so will yield a benefit? Why is it OK for innocents to die, but not OK to hurt a bad person, particularly if there is a very good reason to think it will help people?

You don't have to agree with such arguments, but I do think that it is worthwhile having those arguments, and worthwhile considering such issues, as they get us to think about why we hold the beliefs that we do.

Peter Singer is another philosopher who, probably more notoriously than Sam Harris, has raised issues about consistency in reasoning by trying to get people to think about why it's OK to kill ("put down") a beloved pet when it is ill, but it's not OK to permit very sick people access to euthanasia. (Of course, Singer has argued about even more contentious issues such as bestiality and killing severely disabled newborns, too.)
 
My discussion with people in this thread, summarized:

Me: I agree with Harris that human morality must necessarily be based on human preferences (or what he calls wellbeing), though not necessarily with his other points. I also agree that you can use science and logic to objectively establish what those preferences are, and how to go from there.
JREF guy 1: So you are saying that you can determine what my preferences ought to be? How dogmatic!
Me: No, I am saying that science can establish what your preferences are, and from there...
JREF guy 2: Why should we care about preferences? You need to establish a moral reason why preferences matter.
Me: Preferences are the only things able to motivate people, so they are what matter by default. You don't need a reason to do something other than your wanting it. Wanting to help others is reason enough.
JREF guy 1: You haven't answered how science can establish what preferences you ought to have!
Me: I'm not saying that science can do that! I am saying that even though science has to start from somewhere, this does not make the whole thing just a matter of opinion! I am saying that even though preferences vary from person to person, they are still part of the physical universe and you can measure them objectively!
JREF guy 3: You have to start with a moral axiom. Moral axioms are true a priori, so you can just make up some random moral rules and call it good.
Me: That's a terrible way to do morality! You can't just make stuff up!
JREF guy 4: But how do you derive an ought from an is?
Me: Moral preferences are a part of reality. All you need to do is examine what people's preferences are, which you can do objectively, and then you can, again objectively, establish how to go from there.
JREF guy 2: But what if someone does not have any moral preferences? How can your dogmatic religion tell them what they ought to do?
Me: .... it can't. Science can't provide a universally compelling morality because there is no such thing. This is not a problem with science. Nothing can do that, philosophy can't do that, logic can't do that, because the notion is stupid. So stop bringing it up as if it is a downside of what I am proposing.
JREF guy 1: Oh, so you are just saying that science can evaluate whether people are dead, and it has nothing to say about morality. Well, anyone would agree with that.
Me: No, listen to me. What I am saying is that science can first objectively measure moral standards, and then, given those moral standards, science can be used to objectively determine the best course of action. So the whole process is objective; opinion does not enter into it once you have established people's preferences. So what I am saying is that people on JREF should not go around saying things like "Oh, morality is all just opinion", or acting like moral claims can never be wrong, things like that.
JREF guy 1: Oh. So where do you get the preferences from?
Me: ... :mad:


This whole thing is infuriating. It is as if just mentioning the word "moral" brings people into this mindset where none of the usual standards of reason and logic apply. If I claim that it's possible to derive moral answers, then I somehow have to prove a universally compelling morality that can convince psychopaths that murder is bad, even though this is impossible and thus an entirely unreasonable standard. Meanwhile, other people can go around claiming that they can just come up with some random axioms and have them be true "a priori" without anyone objecting to this. Someone literally claims that morality is a "separate magisterium" and nobody blinks. Meanwhile I am the unreasonable guy who thinks that moral claims must follow the usual standards of logic and scientific/factual backing. What the hell.

Btw, I agree that I use the word "science" too much when "logic" or "reason" would be better; I guess I got that bad habit from Harris. Not that it changes the point.
 
Sophronius, I'm of the opinion that we all may just be confused by terminology (as you graciously concede above). But let's, for the moment, continue to use 'science' as a term that necessarily includes logic and reason, and set it against those moral codes passed on to communities by the self-appointed representatives of various imaginary 'supreme beings' (codes which, it should be noted, have generally stood the test of time, and in the case of 'do as you would be done by' are nigh universal).

Those Sky Daddy morals are generally expressed in the form of "thou shalt" and "thou shalt not", or at least "thou ought" and "thou oughtn't". They are, in my understanding, answers to moral questions. "Ought I?" is answered by priests and rabbis and so on, without the aid of science (but not always without logic and reasoning).

When someone claims that science can answer moral questions, I want to know how they objectively arrive at those answers, how scientists address 'ought I?' without merely replacing priests as the arbitrary decider of what is good and proper, what is moral. There may well be proponents of such a move, prepared to argue that white coats are better than black robes when it comes to making stuff up, but I can't see that being supported by logic and reasoning, just by self- (and class-) interest.

It might help me, and others, appreciate your position if you could offer up a concrete example of what you consider to be a moral question, along with an explanation of how science (logic, reasoning) can answer it, without depending on a prior answer which has been plucked from the air. The mere absence of pretence that some omnipotent beardy gave you the prior answer is not enough to tempt me to follow the priests of science. If all there is are answers plucked from the air, I shall pluck them myself and science can focus on the things it actually can do.
 
He's pretty OK with the fact that this is only his "in principle" position, and that there are various metrics to judge well-being by.

The point I am making is that some people may put justice ahead of well-being.

How do you objectively demonstrate that they are wrong?
 
(much good criticism snipped)

Meanwhile I am the unreasonable guy who thinks that moral claims must follow the usual standards of logic and scientific/factual backing. What the hell.

I'll even dispute this bit. I don't think there is such a thing as a "moral calculus," at least not one that follows logic and fact. I think the burden should be on the claimant to show it works like that.

My criticism is based on how I actually form moral judgements. I never analyse the pros and cons and sum up using some value system. My real-world moral choices are always pre-formed and only after I know the "right" answer do I then seek to justify it to others by way of reason and logic.

It is precisely like the experience of knowing whether or not I like some food or other. I can, in hindsight, tell you why I like it - the sweetness or the texture or the nutritional content - but in truth, I either like it or not, and I know this unarguably and immediately, without consideration.

Even if you were to come up with some statistical measurement about how I form moral judgements, it would still fail because, until you ask me, you can't know how I've characterized any particular set of circumstances, and each circumstance is a one-off. Even when the situation may seem identical, my internal milieu may not be and, bingo!, a different answer emerges.

Finally, the observable of how I actually act, or what I tell you, may or may not stem from a moral stance. I can lie and I can act immorally. In what sense, then, are morals subject to scientific "observation?"
 
It is as if just mentioning the word "moral" brings people into this mindset where none of the usual standards of reason and logic apply.

Well, we need to be able to define "moral" without using the word "moral."
Seriously.

If I claim that it's possible to derive moral answers, then I somehow have to prove a universally compelling morality that can convince psychopaths that murder is bad, even though this is impossible and thus an entirely unreasonable standard.

No. Nobody here would claim that, because some people can't be convinced that Bigfoot isn't real, the claim "Bigfoot is a myth" is factually incorrect.

The problem with the argument about "morality" involves the circularity of the core premise and trying to derive factual correctness from some people's subjective feelings.

It's like saying that if 55% of people don't like pickles, and this dislike can be measured by science, we can deem pickles "disgusting" in some fundamental way, use science to weed out pickle-containing food products, and thus demonstrate "how science can determine human values."

Meanwhile I am the unreasonable guy who thinks that moral claims must follow the usual standards of logic and scientific/factual backing. What the hell.

Do you think "Pickles are gross" needs scientific/factual/logical backing?
 
If I claim that it's possible to derive moral answers, then I somehow have to prove a universally compelling morality that can convince psychopaths that murder is bad, even though this is impossible and thus an entirely unreasonable standard.
I don't think it is an unreasonable standard. It is what "deriving moral answers" means. Sure you can find people's preferences by fMRI brainscans and detailed behavioural studies -- or by just asking them -- but that will only give you "moral opinions". Using the word "answer" suggests more definitive "answers" to moral questions.

Claiming it is possible to derive moral answers, while claiming it is impossible to prove a universally compelling morality, is a bit like claiming you can fly while at the same time claiming you can't "actually soar through the air".

Meanwhile I am the unreasonable guy who thinks that moral claims must follow the usual standards of logic and scientific/factual backing. What the hell.
Yes, you are the unreasonable guy. :rub:
Claiming that unscientific things must necessarily follow the usual standards of logic and science can do that to a person.
 
When someone claims that science can answer moral questions, I want to know how they objectively arrive at those answers, how scientists address 'ought I?' without merely replacing priests as the arbitrary decider of what is good and proper, what is moral. There may well be proponents of such a move, prepared to argue that white coats are better than black robes when it comes to making stuff up, but I can't see that being supported by logic and reasoning, just by self- (and class-) interest.

It might help me, and others, appreciate your position if you could offer up a concrete example of what you consider to be a moral question, along with an explanation of how science (logic, reasoning) can answer it, without depending on a prior answer which has been plucked from the air. The mere absence of pretence that some omnipotent beardy gave you the prior answer is not enough to tempt me to follow the priests of science. If all there is are answers plucked from the air, I shall pluck them myself and science can focus on the things it actually can do.

Well, every time I try to logically show every step of the way, someone just goes up one level of abstraction again and asks "But how can science answer moral questions?" or "How can you derive an ought from an is?", just as if I've said nothing.

Very well, I can offer a concrete example this time if you think it will help, but I have to start from the beginning (otherwise people will claim that I am skipping an essential step). Let's make things very simple, because in practice moral questions are always unclear and it is hard (though not in principle impossible, as I attempt to show) to conclusively answer a moral question. Let's simplify by saying there are only three people, with simple desires:

Person 1 is a child who only cares about survival.
Person 2 is the mother who only cares about the preferences of the child being satisfied.
Person 3 is the uncle, who cares equally about surviving and the preferences of the mother being satisfied.

We can distinguish here between selfish desires and altruistic, or what you might call moral, desires. The child is purely selfish. The mother is purely altruistic, but only with respect to the child. The uncle is more realistic, as he has both selfish and moral desires, as a real person would. So how could morality flow from this?

Well, the mother could argue that the fairy in the sky dictates that children should always receive most of the food. This is a bogus argument because there is no such fairy, and as such the argument is based on nothing. The child could argue that he has constructed a moral axiom that says children should always receive most of the food because it is true a priori. This is a bogus argument because just saying something is an axiom, or true a priori, proves nothing, and again there is no reason to favour this argument.

The uncle could try to be reasonable, and argue that each person is trying to satisfy their preferences regardless, and so it only makes sense to base the decision of what to do on those preferences. Based on the behaviour of the others, he can logically deduce what the preferences of each person are, in a process that is difficult in practice but not in any way merely a matter of opinion (preferences are facts, they are part of the physical universe like everything else in it). The mother could argue that she is satisfying the preferences of the sky fairy but she would be lying or deceiving herself (at least in this example, given the above). The preferences of each person are objective fact and the uncle can tell what they are. The uncle might therefore suggest that they all do away with pretence and work together to satisfy their desires instead. Once everyone is honest about what they want, the uncle agrees to give half of his food to the mother, who in turn gives all of her food to the child. This might go against the uncle's instincts, since he cares about the mother, but he accepts it since that is what the mother wants. Everyone agrees that this outcome is better at least compared to the alternative where everyone fights each other or argues each other to death.

Now hold on, I hear you cry. This is not a system of morality, this is three people agreeing to satisfy each other's preferences! Well okay, but this is the basis for a moral society. A group of three people doesn't need moral rules because everyone knows each other. But what if more people arrive? Everyone has somewhat different preferences and it is hard to keep track of it all. So now it might be optimal to establish moral rules, basically guidelines or rules of thumb, that everyone agrees help satisfy the preferences of people in the group, even though they can never do so perfectly. A rule might be established that says that children in general should receive more food/care, because there are a lot of mothers who strongly desire this, and a lot of uncles who wish for their sisters to be happy (even though they do not care for their nieces and nephews directly). A rule in general might be established not to hurt others.

But what if psychopaths are introduced? Surely their desire to hurt others is not intrinsically worth less than other preferences? Well, no, because there is no such thing as "intrinsic worth" in the first place. Other people won't value the psychopaths' preferences, and moral rules won't take them into account, as long as the psychopaths are in the minority. In fact many might desire that they be killed outright, given that their preferences pose a threat to the preferences of so many others. But what if those psychopaths have mothers who value their well-being? Well, in that case moral rules might be established that say that no, not even psychopaths may be killed, though their preferences may not otherwise be satisfied. As you can see, moral rules emerge.

So how does science enter into this? Well, if you have neuroscience you can read someone's mind and objectively establish their preferences, which might be really useful in creating more effective moral rules. More to the point, however, the fact that this is possible conclusively shows that preferences are part of the physical universe. Even in today's society, you still have people trying the old trick where a fairy in the sky happens to have the same preferences as them. Or someone will appeal to objective universal moral standards (which happen to be the same as the preferences of the person claiming this). And oftentimes it works, sadly, because people are gullible. But these claims are objectively bogus, since they are based on nothing. The only things that are not bogus are moral rules based on people's preferences, because guess what, preferences are the only thing that motivates people and therefore the only thing a society can be built on. Which means that my whole moral argument is based on what motivates people in reality, not on what I feel ought to motivate people in principle, because I am not one of the sky fairy charlatans trying to have my own preferences unduly maximised (as everyone here seems to accuse me of. I wonder why people evolved to be so suspicious of other people's moral and political arguments? Ahem... :rolleyes:).

So yea, the whole thing I am claiming is that no, you can't just come up with a random moral system and call it as good as any other; it has to be based on people's preferences or else nobody has reason to care. No, an argument based on sky fairies is not as valid as any other. No, morality is not arbitrary. Yes, the fact that you can measure people's preferences scientifically matters. Combine this with the fact that you can objectively determine (using science) which actions deliver which outcomes (deterministic universe and all that), and you arrive at the inevitable conclusion that there is only one best option for society to take, whatever that is. So no, you should not go around saying that it is all just opinion anyway. The whole process is objective, even though it is based on people's preferences, which do vary, because there simply is no bloody alternative.
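
To make that "one best option" claim concrete, here is a toy sketch in Python of the three-person example above. To be clear: the food units, the numeric utilities, and the choice to aggregate by simply summing everyone's preference satisfaction are my own illustrative assumptions, not anything derived. The point is only that once preferences are written down as measurable quantities, finding the best action is ordinary computation rather than opinion:

```python
# Toy model of the child/mother/uncle example. All numbers and the
# summing rule are illustrative assumptions on my part, not derived results.
from itertools import product

FOOD = 4  # total units of food to divide (made-up number)

def utilities(child_food, mother_food, uncle_food):
    child = child_food                       # child: only cares about his own survival
    mother = child                           # mother: only cares that the child is satisfied
    uncle = 0.5 * uncle_food + 0.5 * mother  # uncle: cares equally about himself and the mother
    return child, mother, uncle

# Enumerate every way of splitting the food and pick the allocation
# that maximises total preference satisfaction.
allocations = (a for a in product(range(FOOD + 1), repeat=3) if sum(a) == FOOD)
best = max(allocations, key=lambda a: sum(utilities(*a)))
print(best)  # -> (4, 0, 0): all the food flows to the child, much as in the story
```

Note that the sketch assumes the preferences have already been measured, and that summing is the right way to weigh them against each other; in reality those are the hard parts, which is where the neuroscience and the moral rules above come in.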

So, yea, really long post. Maybe not what you were looking for when you asked for a concrete example. But bloody hell, what else am I supposed to do, when every post I make just results in the same old reply of "but you missed the first step"? In fact, who wants to bet some guy still claims that I have not shown why people ought to care about preferences? Or claims that all morality is subjective because you can't convince a psychopath to care about someone else's preferences? I am not getting my hopes up, anyway. :p



P.S.: In replying to this lengthy post, please do not quote separate paragraphs and address them individually. Quote mining never leads to productive discussion in my experience, and in this case the reply would just get ridiculously long. Please address the whole argument in its entirety instead.
 
1) You have a different definition of "quote mining" than most people
2) "neuroscience" is unlikely to ever be able to read minds, and just asking people what their preferences are would do just as well, so this is not a "yay science" thing
3) people are universally hypocritical in their "moral preferences"
4) google's "personal results" on searches seem to do everything described and more?
 
I don't think it is an unreasonable standard. It is what "deriving moral answers" means.

It is an absurd standard and has nothing to do with what "deriving moral answers" means. You can't convince a crazy person that 2+2=4, so why is convincing a crazy person that murder is wrong the litmus test?

Oh, because you once heard it was and never gave it any thought. :rolleyes:
 
It is an absurd standard and has nothing to do with what "deriving moral answers" means. You can't convince a crazy person that 2+2=4, so why is convincing a crazy person that murder is wrong the litmus test?

Oh, because you once heard it was and never gave it any thought. :rolleyes:

You think all murder is wrong?
 
Well, every time I try to logically show every step of the way, someone just goes up one level of abstraction again and asks "But how can science answer moral questions?" or "How can you derive an ought from an is?", just as if I've said nothing.

Very well, I can offer a concrete example this time if you think it will help, but I have to start from the beginning (otherwise people will claim that I am skipping an essential step). Let's make things very simple, because in practice moral questions are always unclear and it is hard (though not in principle impossible, as I attempt to show) to conclusively answer a moral question. Let's simplify by saying there are only three people, with simple desires:

Person 1 is a child who only cares about survival.
Person 2 is the mother who only cares about the preferences of the child being satisfied.
Person 3 is the uncle, who cares equally about surviving and the preferences of the mother being satisfied.

We can distinguish here between selfish desires and altruistic, or what you might call moral, desires. The child is purely selfish. The mother is purely altruistic, but only with respect to the child. The uncle is more realistic, as he has both selfish and moral desires, as a real person would. So how could morality flow from this?

I don't think it can. Even in this simple setup, you have bound the mother to the child so that they are one entity. The mother would only act toward the survival of the child, since that is the child's only preference.

Where is the "moral" decision? Aren't you just forcing her to choose in a deterministic and hard-coded fashion by defining her as an automaton? Are we talking meat machines here, Asimov's robots?

If you construct a predetermined, programmable universe, you may have scientific morality, but it won't be authentic morality, will it?
 
You think all murder is wrong?

No, is that what you got from my post? Perhaps you should read what I wrote instead of hearing the echo of what you heard in some junior college philosophy class.
 
No, is that what you got from my post? Perhaps you should read what I wrote instead of hearing the echo of what you heard in some junior college philosophy class.

Oh, I read what you wrote. You were very clear. You managed to oversimplify to the point of inaccuracy and burn a strawman all at once.
Can't say I've ever seen that particular combination before. Congrats! :)
 
Oh, I read what you wrote. You were very clear. You managed to oversimplify to the point of inaccuracy and burn a strawman all at once.
Can't say I've ever seen that particular combination before. Congrats! :)

Well then, it's reading comprehension you lack. :)
 
Well then, it's reading comprehension you lack. :)

Well. Maybe. Perhaps you can help me parse this?

Here's what you wrote:

you said:
You can't convince a crazy person that 2+2=4
This is true.
2+2 equalling four is a clear matter of factual correctness, but crazy people don't care about facts, so some cannot be swayed into accepting it.
you said:
so why is convincing a crazy person that murder is wrong the litmus test?
Are you not here implying that "murder is wrong" is a factually correct statement, similar to the way "2+2=4" is a factually correct statement?
If not, why mention crazy people and 2+2=4 at all, especially following it up with the conjunction "so"?
 
I don't think it is an unreasonable standard. It is what "deriving moral answers" means. Sure you can find people's preferences by fMRI brainscans and detailed behavioural studies -- or by just asking them -- but that will only give you "moral opinions". Using the word "answer" suggests more definitive "answers" to moral questions.

Claiming it is possible to derive moral answers, while claiming it is impossible to prove a universally compelling morality, is a bit like claiming you can fly while at the same time claiming you can't "actually soar through the air".

Yes, you are the unreasonable guy. :rub:
Claiming that unscientific things must necessarily follow the usual standards of logic and science can do that to a person.

Then you have defined moral answers away as something that is impossible to prove and declared me to be wrong by (your) definition. That does not seem to me a constructive way to hold a discussion. Why would you define morality or moral answers as something that is necessarily unscientific? Giving morality a free pass makes no more sense to me than doing the same with religion. You know that whole shtick where a creationist will say "everything in the universe must have a cause except for god because he exists outside of reality?" Yeah, that's special pleading. Saying that all claims must follow standards of logic and science except for moral claims, because moral claims are in a separate magisterium and so usual rules don't apply? Exactly the same thing.

1) You have a different definition of "quote mining" than most people
2) "neuroscience" is unlikely to ever be able to read minds, and just asking people what their preferences are would do just as well, so this is not a "yay science" thing
3) people are universally hypocritical in their "moral preferences"
4) google's "personal results" on searches seem to do everything described and more?

By quote mining I meant quoting paragraphs out of context instead of addressing the whole argument; you're right, that's not exactly what the term usually means, but you get the idea. No, asking people what their preferences are is not a very reliable way to find out their preferences. People will always claim to be more altruistic than they really are, for example, which is why economists like revealed preferences (which are also far from ideal). Anyway, the point of the neuroscience example is that it proves that preferences are part of the physical universe, and observing them is a matter of scientific fact, not opinion. Yes, people being hypocritical is an added difficulty. I don't know what you mean by google's "personal results".

marplots said:
I don't think it can. Even in this simple set up, you have bound the mother to the child so that they are one entity. The mother would only act toward the survival of the child, since that is the child's only preference.

Where is the "moral" decision? Aren't you just forcing her to choose in a deterministic and hard-coded fashion by defining her as an automaton? Are we talking meat machines here, Asimov's robots?

If you construct a predetermined, programmable universe, you may have scientific morality, but it won't be authentic morality, will it?

Ah, you don't believe in a deterministic, materialistic universe? Well, that explains why you don't find my argument persuasive. Indeed, everything I said relies on the fact that people are automatons in the sense that you can measure their preferences and predict their behaviour in the same way you might predict the weather. I don't see how I can convince you of determinism, though. Is there any reason you reject it?
 