Sam Harris' The Moral Landscape Challenge

(snipped to address what was addressed to me)

Marplots
I don't think it can. Even in this simple set up, you have bound the mother to the child so that they are one entity. The mother would only act toward the survival of the child, since that is the child's only preference.

Where is the "moral" decision? Aren't you just forcing her to choose in a deterministic and hard-coded fashion by defining her as an automaton? Are we talking meat machines here, Asimov's robots?

If you construct a predetermined, programmable universe, you may have scientific morality, but it won't be authentic morality, will it?

Ah, you don't believe in a deterministic, materialistic universe? Well that explains why you don't find my argument persuasive. Indeed, everything I said relies upon the very fact that people are automatons in the sense that you can measure their preferences and predict their outcomes in the same way as you might predict the weather. I don't see how I can convince you of determinism though. Is there any reason you reject it?

I actually do hold with determinism and materialism. But I would also allow for complexity and emergence. It may very well be that morality is a higher order function that cannot be measured save for building the entire "machine" and seeing what it does.

It is not merely determinism at issue, but predictability. The people you created were defined as predictable along the parameters you created for them. I do not think these would be people at all.

The distinguishing feature that makes them automata isn't based on determinism or materialism, but directed and predictable actions/behaviors (in this case moral feelings). All we would need to do, to see if this were so, would be to find someone who chose differently under the same set of circumstances - is it your opinion that such a thing never happens, or cannot happen? Surely it is possible even under materialism with a biology tuned finely enough.

In other words, are morals fixed? If they are, your thesis can proceed. If they are not, then the enterprise becomes an historical one and suffers the same measurement problems as any chance event in the past.

We can even play around a bit and see how sensitive our moral sense is. By altering certain key points, we may be able to induce a kind of Necker Cube flipping.

Let us ask, "Is it OK to have sex with a young woman you meet in a bar who seems willing to have sex with you?" and then see if we can flip the answer on moral grounds. (Let us also stipulate that you very much wish to have sex with her.)

"You happen to see her ID and it shows she is only 15."

"In her wallet, you find an authentic birth certificate and she's really 18."

"The birth certificate indicates she is your (much) younger sister."

"She tells you you are adopted."

"You are adopted, but DNA reveals you are her father."

"This is all a dream."

We could, if we liked, make this circular and the "measurement" you record would depend on where in the cycle it occurred.

And the question would remain: "Is it OK to have sex with a young woman you meet in a bar who seems willing to have sex with you?"

My contention is that enumeration will never get you an answer to this question in all the circumstances under which it might arise.
 
P.s.: In replying to this lengthy post, please do not quote separate paragraphs and address them individually. Quote mining never leads to productive discussion in my experience, and in this case the reply would just get ridiculously long. Please address the whole argument in its entirety instead.

I also find that snip/snipe style of discussion to be counterproductive most of the time, and it's often the mark of a poster who merely wants a scrap and not an honest attempt to reach agreement. However, if you present (for example) an argument that is peppered with assumptions and errors, it can be impossible to 'address the whole argument in its entirety', and to demand that someone does sounds dishonest. When someone builds an argument in steps that are not all as clear-cut as they'd like to think, addressing the argument in its entirety is impracticable. Again, it's essentially saying "Ignore the bits along the way where my maths doesn't add up, just tell me what's wrong with my solution".

In your 'concrete example' (which isn't, really - I was hoping for an actual question of morality, with an actual answer derived from science), you ask us to consider an artificial world with an immediate assumption that we have read the minds of its three inhabitants. Because neuroscience, apparently. Not my speciality, I grant you, but I don't think it can 'read' minds, can it?

I rankled at the mention of ceteris paribus in an earlier post. I first encountered it as a science undergraduate, in an economics lecture. Economics isn't really a science, is it? Any proposition that is hedged with "all other things being equal" doesn't strike me as being of any real value. Other things don't remain the same, in economics or morality.

Having established your imaginary world on assumed foundations that suit you, you then make a distinction between 'selfish' and 'moral'. I'm tempted to stop there, because there seems little point in addressing 'the whole argument in its entirety' when it's started with an assumption that minds can be 'read' in any meaningful or useful way and an unscientific determination of what is 'moral'. Why is selfishness never the right answer to a moral question? Why is altruism 'good'? Did you arrive at these ideas by reading minds and following logic, or are you constructing an argument to support ideas which have the sole merit of not being passed down to you by the elders of the tribe, as a form of collective wisdom regarding everyone's preferences and the deterministic nature of the world, or by priests of the sky fairy? Why is pulling them out of your white-coated arse any better than having been pulled out of the arse of a black-frocked priest? Do please try to avoid ad hominem in your answer; it's the ideas we're concerned with, not whether the person introducing them believes in god/s or fiddles with choirboys or sacrifices virgins.
 
@Marplots: I am not sure what you are trying to argue. Your whole post kind of seems to be based on magical thinking, to be perfectly honest. Yes, when you add more factors and details, any issue becomes more complex. This does not mean that it becomes impossible to predict the answer. There is no single point where an organism transforms from a simple, predictable automaton (bacteria to start with) to a complex creature that is somehow intrinsically greater than an automaton and therefore cannot be predicted in principle. Indeed, I very firmly hold that the same entity in the exact same situation will always act in the exact same way: Same input, same output, every time.

Of course, the same person in a different situation will act differently. So of course in your example the best action (given selfish and moral preferences, as always) will vary when new information arrives. There is nothing incongruous about this. Humans are complex creatures and social matters tend to be complex as well. But no amount of complexity will bring you to "impossible to find the answer in principle".

@jiggeryqua: I find it a bit strange that you protest about my request (about not snipping) and then proceed to request that I do not engage in ad hominem (why on earth do you think I would?) but I will let it pass.

Yes, neuroscience can read minds. Again, humans are perfectly predictable automatons. Neuroscience is still in its infancy, but from what I understand it can be used to tell what your actions will be before you consciously realize it at least. Science also can be used to directly manipulate your actions, emotions etc, though again I'm not sure how this works exactly. Either way this is clearly possible, as long as you grant that we live in a materialistic deterministic universe, as then the same input will always give the same output.

There is absolutely nothing unscientific about ceteris paribus. Think about what you do when examining medicine: You do everything in your power to ensure that two scenarios are exactly the same, except for the thing you wish to test which is the medicine. All things being equal: ceteris paribus. It is very scientific to measure effects in isolation. (criticism of economics in general is still justified, but not the topic)

I am disappointed to find you going back to exactly the same old "but why are preferences good?" point that I have already spent so much time explaining. Once again, I don't need an outside reason to say that preferences motivate people: it is a fact that they do. There is no reason why I would need a reason to be altruistic other than my desire to be altruistic (and the fact that I desire it does not make it any less altruistic, of course). Making a distinction between selfish and altruistic preferences (or what I call moral preferences in this case) is not in any way unscientific. Insisting that I have to prove the existence of selfish and altruistic desires is outrageous. Insisting that I need to prove that altruistic desires are moral is pretty much a tautology.

You keep conflating universal morality and objective morality, and I am at a loss how to clear up this confusion. I have tried everything I could think of, yet you still insist that I must prove that there is some outside written-in-the-stars reason to act on our moral desires, as if moral desires were somehow special in that they are the only kind of desire that needs an additional reason to act upon it other than wanting to do so. If you see a way to get us past this point, tell me, because I have tried everything I can think of.

Maybe offering definitions could help? I dunno.
1) Objective morality: You decide the matter using maths and logic and science and such, and you don't get a different answer just because someone wants a different answer (reality is what doesn't go away when you stop believing in it.)
2) Universal morality: It applies to everyone, moral rules are a physical fact of the universe that you can tap into and it makes no difference what people's preferences actually are. A psychopath is factually, logically incorrect to want to hurt people, like thinking that 2 + 2 = 5.

I am claiming 1), and every time you answer as if I am claiming 2).
 
@Marplots: I am not sure what you are trying to argue. Your whole post kind of seems to be based on magical thinking, to be perfectly honest. Yes, when you add more factors and details, any issue becomes more complex. This does not mean that it becomes impossible to predict the answer. There is no single point where an organism transforms from a simple, predictable automaton (bacteria to start with) to a complex creature that is somehow intrinsically greater than an automaton and therefore cannot be predicted in principle. Indeed, I very firmly hold that the same entity in the exact same situation will always act in the exact same way: Same input, same output, every time.

I know you don't want me to cut stuff up, but I think it's important to get past this bit.

The part I highlighted is wrong. Can we predict the weather? Can we, in principle, gather enough information to do so?

The answer depends on what you mean by weather and by predict. If you mean with extreme fidelity and over a long period, the answer is no, it cannot be done - save by one way only: let the weather happen and see. Complexity and emergence do have a role to play here. This isn't magical thinking at all, but a fact about the world we find ourselves in.
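For what it's worth, the sensitivity point can be demonstrated with a standard toy model of chaos rather than the real weather. The logistic map below is fully deterministic, yet a starting difference of one part in a billion is amplified until the two trajectories bear no resemblance to each other. All the numbers are conventional chaos-demo choices, not anything measured:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), a textbook chaotic system."""
    return r * x * (1.0 - x)

# Two initial states differing by one part in a billion.
a, b = 0.400000000, 0.400000001

max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny initial difference has been amplified by many orders of magnitude,
# even though every step of the computation was perfectly deterministic.
print(max_gap)
```

Determinism is preserved throughout (the same two starting values always give the same trajectories), which is exactly the distinction being drawn here: deterministic is not the same as predictable over a long horizon.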

This then feeds into the part I bolded above. There doesn't have to be a "single point" of transition between a mound and a hill to know that both are useful, and different, descriptions.

Finally, the same input, same output would only apply, even in principle (which I disagree with anyhow) if the machine doing the processing doesn't change. But we know it does. What a baby thinks is "OK" is probably not what the same kid will think in a few years. Morals do change. The machine changes. So, even if input remains the same, the results can still vary and vary considerably.

We are often reminded that you can turn a conservative into a liberal by sending them to college and change a liberal into a conservative by making them rich. As shallow as that may be, we certainly should agree that people change their views over time.

We should try to nail down this bit before getting to the other stuff. But let me reiterate, none of this puts morality on a pedestal or requires magical thinking, all it requires is accepting a certain amount of random in a sensitive system. (And, I suppose that recursion and a hierarchy help too.)
 
Oh dear, the second time recently that I've written a lengthy post, been pretty damn sure I posted it, then can't see it at all. Ah well, here's one of the more important bits of it, that'll have to do:

Sophronius, here's a moral question for you. Please explain how science will answer it:

Is it morally acceptable for scientists to read people's minds?

As I understand it, your scientific system of morality requires us to read minds in order to determine an answer to that question. With luck, the answer will be yes. It might, however, be no - and the action you took to find the answer turns out to be immoral. Is it that you don't care whether your initial action is moral or immoral? In that case, why search for moral answers at all? They're subordinate to 'what I want', apparently. Or is it that you've pulled a moral standpoint out of the air and arbitrarily decided that it's morally good to read people's minds? In which case, why bother - you can pull morality out of the air and we're back to the ad hominems: my arbitrary morality is better than their arbitrary morality because they believe in sky-daddies and wear black and fiddle with choirboys and sacrifice virgins.
 
If one's moral philosophy is based on the assumption that humans are perfectly predictable automatons that respond to the same input with the same output, then it is based on a flagrant denial of a rather obvious scientific truth.

Humans (as well as other organisms and many of our machines) have memories. At it most basic level that means that inputs can change the internal state of the "machine" so that it changes how it responds next time it gets the same input. It is "learning".

The assumption would make science itself impossible; what is science other than humans trying to learn? To change their internal states so as to behave more knowledgeably in the future. Perfectly predictable automatons don't do science, because they can't respond differently in light of accumulated knowledge.

The assumption would also make morality impossible. Morality concerns itself with questions such as "what ought I do in that situation" and assumes that the situation alone doesn't dictate the behaviour.
 
@marplots: You raise the question of whether it is possible, in principle, to predict the weather, and then conclude that it is too hard. This does not answer the question: predicting the weather only becomes harder and less accurate as the horizon lengthens, but at no point does it become impossible in principle. It simply does not follow. It is true that you can never justify 100% confidence in your predictions, and the accuracy lowers with complexity, but this is not the same thing.

As for your second point, yes people's preferences will change over time. This does add an extra layer of complexity, but it is not deal-breaking. At any point, you are still only interested in measuring and satisfying the preferences of people at that moment. Only if people prefer for their future preferences to be maximised do those preferences become relevant at that point as well, and in that case the calculation just becomes harder.

Note that people already do calculations to decide which action is best at any point: They just tend to be rough estimates, mental calculations using rules of thumb and such. However, the fact that calculations are being done and answers are being derived shows that I am claiming nothing extraordinary here.
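Those rough mental calculations can be caricatured as a preference-weighted scoring of candidate actions. Everything below (the action names, the preference weights, the effect numbers) is invented purely for illustration; it is a sketch of the kind of calculation meant, not anything from Harris or this thread:

```python
# Hypothetical preference weights for a single person (invented numbers).
preferences = {"own_comfort": 0.3, "others_wellbeing": 0.7}

# Hypothetical effects of each candidate action on each preference.
actions = {
    "keep_seat":  {"own_comfort": 1.0, "others_wellbeing": 0.0},
    "offer_seat": {"own_comfort": 0.2, "others_wellbeing": 1.0},
}

def score(effects):
    """Preference-weighted sum of an action's effects."""
    return sum(preferences[k] * v for k, v in effects.items())

best = max(actions, key=lambda a: score(actions[a]))
print(best)  # "offer_seat": 0.3*0.2 + 0.7*1.0 = 0.76 beats 0.3*1.0 = 0.30
```

The point of the sketch is only that such a calculation is mechanical once the weights and effects are measured; whether they can be measured is, of course, the matter under dispute.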

@jiggeryqua: I always do a quick ctrl-A ctrl-C of the text before posting it, just in case.

Your question is somewhat of a trick question. Initially it seems as if you are merely asking whether reading minds is moral, but then you go on to ask whether you are allowed to ask the question using my method in the first place. I see, you want me to prove that my method is good without using my method? Ok, fine, you've shown that you need to be able to answer moral questions to some degree before you can use science to do it better. Well making do with what we've got is exactly what humanity has been doing for eons now, so how is this an argument against using science and logic and reason to do it better?

Edit: gotta catch a train, more later.
 
@jiggeryqua: I always do a quick ctrl-A ctrl-C of the text before posting it, just in case.

I usually do that, especially on a long post, but never mind. I usually preview it as well, so I can catch typos... ;)

Your question is somewhat of a trick question. Initially it seems as if you are merely asking whether reading minds is moral,

I am, and I notice I didn't get an answer as to whether it is, either with or without science.

but then you go on to ask whether you are allowed to ask the question using my method in the first place.

No, I explored the implications of answering the question using your method. You have not addressed those implications.

I see, you want me to prove that my method is good without using my method?

No, I was hoping you'd see that your 'method' does not (and cannot) do what you imagine it does.

Ok, fine, you've shown that you need to be able to answer moral questions to some degree before you can use science to do it better.

No, I believe I've shown that you need to be able to answer moral questions without science, and indeed that any application of science to moral questions is unrelated to questions of morality but acts merely as a measure against which actions can be compared to predetermined moral answers.

Well making do with what we've got is exactly what humanity has been doing for eons now, so how is this an argument against using science and logic and reason to do it better?

It's not. It's an ongoing red herring, this suggestion that denying the validity of your claim is somehow anti-science or anti-progress. I have consistently acknowledged the value of science in establishing facts, which can (and probably should) be used to assess whether a given action is in accordance with a set of moral principles. But those moral principles cannot be arrived at by scientific methods, and so science cannot be said to 'answer' moral questions.

Edit: gotta catch a train, more later.

I look forward to it.
 
In a course on moral justice I took back in the 80s, there was some discussion of psychological research done a few decades prior to that, in an attempt to calibrate appropriate criminal punishments and compensations for torts based on consensus gleaned from people's stated preferences. (Much like Harris appears to be proposing, but in a much more limited and seemingly straightforward domain.)

Typical such studies consisted of asking people how much money they would require to get them to voluntarily submit to various treatments and conditions -- spending certain amounts of time in prison, losing the use of a limb temporarily or permanently, various corporal punishments, forms of public censure or humiliation, and so forth. In other studies, subjects ranked the various adverse events in pairs relative to one another: which would they prefer, A or B?

The consensus that resulted from these studies was that the studies were useless for their intended purpose. Not because subjects' answers were dishonest, but because they were inconsistent, across the board, on all levels, individually and collectively. Things that some people do voluntarily, like live in certain parts of the country, were ranked by a majority as worse than a prison sentence. Even a single individual would often prefer A over B, B over C, and C over A. (Obviously this wouldn't happen when A, B, and C were directly comparable like prison terms of different lengths, but it happened routinely when comparing different types of things, like prison terms, corporal punishments, and lifestyle restrictions.)

Harris's thesis makes a point of the hypothetical measurability of people's moral preferences, while apparently assuming that those moral preferences, once measured, will be logically consistent in some way. But if they're not, how can they be used? Once you discover (as you probably will) that people have moral preferences for A over B, B over C, and C over A, where do you go from there?
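The intransitivity problem can be checked mechanically: once preferences A over B, B over C, and C over A are reported, no ranking of the three items respects every pairwise comparison, so there is nothing for a preference-maximising procedure to work with. A minimal sketch (the item names are invented):

```python
from itertools import permutations

# Pairwise preferences reported by a subject: (x, y) means "x preferred to y".
# This set is cyclic, as in the studies described above.
prefs = [("prison", "flogging"), ("flogging", "exile"), ("exile", "prison")]

def respects(order, prefs):
    """True if the ranking `order` agrees with every pairwise preference."""
    pos = {item: i for i, item in enumerate(order)}
    return all(pos[x] < pos[y] for x, y in prefs)

items = {x for pair in prefs for x in pair}
consistent = [o for o in permutations(items) if respects(o, prefs)]
print(consistent)  # [] - no ranking satisfies a cyclic set of preferences
```

With an acyclic preference set the same search would return at least one valid ranking; the cycle is what makes the measurements unusable, not any lack of measuring precision.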

Respectfully,
Myriad
 
Well. Maybe. Perhaps you can help me parse this?

Here's what you wrote:


This is true.
2+2 equalling four is a clear matter of factual correctness, but crazy people don't care about facts, so some cannot be swayed into accepting it.

Are you not here implying that "murder is wrong" is a factually correct statement, similar to the way "2+2=4" is a factually correct statement?
If not, why mention crazy people and 2+2=4 at all, especially following it up with the conjunction "so"?

No, I am not implying that murder is wrong. People have stated that moral questions are subjective because you can't get a crazy person to see their murders as wrong.

My point is simply that what crazy people do isn't a standard anything is held to, so why use it in a discussion of morality?

In fact, even the views of people who are emotionally charged about a subject are never the deciders. That's how we get lynch mobs.

So, let's not let claims about what those people would do derail the discussion.
 
No, I am not implying that murder is wrong. People have stated that moral questions are subjective because you can't get a crazy person to see their murders as wrong.

My point is simply that what crazy people do isn't a standard anything is held to, so why use it in a discussion of morality?

In fact, even the views of people who are emotionally charged about a subject are never the deciders. That's how we get lynch mobs.

So, let's not let claims about what those people would do derail the discussion.

No, people have stated that what is considered "moral" is subjective because it's not a question of factual correctness. You can't just look at alternative moralities and write them off as "crazy" simply by virtue of the fact that they're held by less than 50% of the population. Not if you want to call it scientific, at least.

And actually, "emotional charge" is possibly the key factor in representative democracy. Everything from "tough on crime" campaigns to "universal health care" support and opposition is fundamentally emotion-based.
 
You can't just look at alternative moralities and write them off as "crazy" simply by virtue of the fact that they're held by less than 50% of the population. Not if you want to call it scientific, at least.

Strawman. It wasn't written off for the reason you cite (highlighted). It was written off because, upon examination, it was shown to be the deluded viewpoint of a crazy person.


And actually, "emotional charge" is possibly the key factor in representative democracy. Everything from "tough on crime" campaigns to "universal health care" support and opposition is fundamentally emotion-based.

We haven't reached that point in the discussion but there is nothing saying that any one of these is morally good or bad, just because it came from a supposed key factor in representational democracy. It isn't the source of the idea, but the idea itself, that decides whether it is moral or not.
 
The consensus that resulted from these studies was that the studies were useless for their intended purpose. Not because subjects' answers were dishonest, but because they were inconsistent, across the board, on all levels, individually and collectively. Things that some people do voluntarily, like live in certain parts of the country, were ranked by a majority as worse than a prison sentence. Even a single individual would often prefer A over B, B over C, and C over A. (Obviously this wouldn't happen when A, B, and C were directly comparable like prison terms of different lengths, but it happened routinely when comparing different types of things, like prison terms, corporal punishments, and lifestyle restrictions.)

Harris's thesis makes a point of the hypothetical measurability of people's moral preferences, while apparently assuming that those moral preferences, once measured, will be logically consistent in some way. But if they're not, how can they be used? Once you discover (as you probably will) that people have moral preferences for A over B, B over C, and C over A, where do you go from there?

Respectfully,
Myriad

Harris is arguing that we SHOULD value internal consistency (above gut responses), and we can use science to stop being moral hypocrites.

But - that's how you get an otherwise liberal atheist supporting state-sanctioned torture. It's internally consistent, if you accept the fact that war is necessary and results in suffering and death! And scientific models need to be internally consistent!

This is what happens when you treat subjective opinions as though they were mathematical facts.
 
Strawman. It wasn't written off for the reason you cite (highlighted). It was written off because, upon examination, it was shown to be the deluded viewpoint of a crazy person.

So science answers moral questions by saying "this is the answer...and if you don't agree, you're mad"?
 
Strawman. It wasn't written off for the reason you cite (highlighted). It was written off because, upon examination, it was shown to be the deluded viewpoint of a crazy person.
So, what's the objective standard by which you can judge someone else's morality as "the deluded viewpoint of a crazy person"?

In actual science, the merit is tested and judged by its results. The mathematical formulas work in physics. They get satellites in orbit and whatnot.




It isn't the source of the idea, but the idea itself, that decides whether it is moral or not.

The idea decides if it's moral?
People make judgments. People come up with their own moralities and definitions of what is "moral".
 
By quote mining I meant quoting paragraphs out of context instead of addressing the whole argument; you're right that that's not exactly what the term usually means, but you get my meaning. No, asking people what their preferences are is not a very reliable way to find out their preferences. People will always claim to be more altruistic than they really are, for example, which is why economists like revealed preferences (which are also far from ideal). Anyway, the point of the neuroscience example is that it proves that preferences are part of the physical universe, and observing them is a matter of scientific fact and not opinion. Yes, people being hypocritical is an added difficulty. I don't know what you mean by Google's personal results.

Asking people is as accurate as you can get, even in theory. A hypothetical mind-reading neuroscience machine is going to run into the same problems that pop up on psychological surveys. A revealed preference isn't necessarily going to be more true than a stated preference. My revealed preference might very well imply that I either "do" or "ought to" support torture, but neither of those are correct. Not in my opinion, at least. But this whole subject is about opinions about opinions, which are based on other opinions.

Also, I doubt everyone sees themselves as more altruistic than they really are; some people probably think they value freedom and liberty and individuality only, but in practice end up behaving quite altruistically.
 
So, what's the objective standard by which you can judge someone else's morality as "the deluded viewpoint of a crazy person"?

In actual science, the merit is tested and judged by its results. The mathematical formulas work in physics. They get satellites in orbit and whatnot.

Well, that could be a good measure . . . not whether a moral system can put satellites into space, that's just silly :) . . . but what is the expected outcome of a better moral system, and does the new moral system give better results? For instance, if one of the outcomes is a kinder, gentler society, does the new system deliver, i.e. is violence down, is hatred lowered, and under the same laws are fewer people going to jail?

One of the issues with using psychopaths as the standard for measuring moral systems is that it relies on the moral view of a small, outlying group to decide what is best for everyone. A better system is to start with what the largest groups hold as moral; for example, "no murdering" is a universal moral dictum, so that would definitely be included. The issue with this moral law/code/guideline is that each group only applies it to its own in-group, i.e. a lot of tribes, societies, etc., see themselves as "the chosen ones", and that is reflected in the names they give themselves. With this in/out-group distinction it becomes easy for injustice to be systematized, where the in-group can do anything it wants to the out-group.

Science would improve on this because it shows that there are no chosen ones. There is only one in group which covers all humanity. That immediately makes for a better moral system.

The idea decides if it's moral?
People make judgments. People come up with their own moralities and definitions of what is "moral".

Reread what I said in context. When deciding on the merits of an idea, the idea is what is judged, not the source of the idea.
 
Well, someone has now won the essay competition that Sam Harris set up. I was pleased to see that Sam Harris entrusted the judging to a critic of his, and the winning essay this critic chose is a very pithy demolition of Harris's book:

http://www.samharris.org/blog/item/the-moral-landscape-challenge

Harris says that he will reply to the post and I predict he will say something along the lines of "Oh yeah, but when I say science I include analytical philosophy obviously."

Yet that would only show that his assumption that morality is equal to maximizing the well-being of conscious creatures is not empirically verifiable.



Sam Harris's theory reminds me of an episode of Bagpuss where the mice claim to make chocolate biscuits out of butterbeans and breadcrumbs in a mill. The chocolate biscuits fly out the front and then they are quickly taken away by the mice. It turns out that they keep feeding the same chocolate biscuit into the mill.

It's as if Harris is saying, "Look, I made a chocolate biscuit!"
And we keep saying, "You didn't make it. You already had it. The mill is a fraud!"
 
I think it would be fair of me to say that it was actually very good of Sam Harris to run a fair competition and to post the winner even though it seems to me to be pretty damning. Fair play to him.
 