David Hume vs. Sam Harris

It depends what you mean by "well-being". If it includes altruism because that's what people want to do, then everyone will act to maximise their well-being by definition. If it doesn't include altruistic acts, then we already know that people perform them anyway.

I don't think so. It seems to me that people make decisions that are quite contrary to their well-being all the time, and I don't have to go into altruism. See learned helplessness, addiction, and ressentiment for examples.

But research into human psychology is poor overall, and having some better research might be nice.
 
I don't think so. It seems to me that people make decisions that are quite contrary to their well-being all the time, and I don't have to go into altruism. See learned helplessness, addiction, and ressentiment for examples.

But research into human psychology is poor overall, and having some better research might be nice.

All of the things that people do that are bad for them long term are usually based on doing things that feel good short-term. People don't become alcoholics because drink makes them feel bad. They do it because drink makes them feel good, or perhaps later on, not drinking makes them feel bad.
 
All of the things that people do that are bad for them long term are usually based on doing things that feel good short-term. People don't become alcoholics because drink makes them feel bad. They do it because drink makes them feel good, or perhaps later on, not drinking makes them feel bad.

Not true of learned helplessness or ressentiment, in both of which cases people do things that are bad for them over the short term.

In any event, saying that people do things that make them feel good (which itself would have to be revised for the case of learned helplessness) is a very different claim from saying that they do it for their well-being.
 
So, I finished re-reading the book. And, I'll say this much one more time: It is very badly written, with good points stuck in the Notes section*, and tirades against small matters in the main body. But, that is all superficial stuff about his writing style. I still largely agree with most of his actual arguments.

I would highly recommend reading the new Afterword in the paperback edition, where he more succinctly responds to criticism. It's not a perfect set of responses, but it is still a measurable improvement over the long-winded rambles of the actual book content.

(* I happened to notice that they didn't update the book's Index, in the new Sept. 2011 printing, to reflect the addition of the afterword. So, all the pages that reference the Notes section are wrong.)

I will have more to say, right soon, about the "death of Hume". But, in the mean time, I felt I was obligated to respond to this comment:

If you can tell me what the definition of utilitarianism is and exactly how Harris' ideas differ from utilitarianism then there's the basis for a conversation, but I feel safe in saying that you can't and won't do that.
The definition of Utilitarianism could vary. Yes, one could define that word to match Harris' points. But, then again, one could also define "Santa Claus" as "Someone who runs a pizza parlor in New York City."

Historically, the use of the word Utilitarianism is not as open-ended or flexible as the science of morality proposed in the book. Perhaps Harris doesn't make this transparent in his writing. But, that's a problem about his horrid writing style, rather than his actual arguments.

The definition of "well-being" is meant to be one that evolves with our understanding of conscious creatures: something utility, alone, does not typically account for. Though, utility could still be a factor to consider, in evaluating the trade-offs associated with well-being.
 
On Deriving "Oughts" from "Ises", and the Death of the David Hume Distinction

Neurologically, there seems to be no distinction between "ought" and "is". All "oughts" derive from states in the brain, based on how we think the world IS, whether that perception happens to be accurate or not. Sam argues that one can improve the quality of their "oughts" by improving the accuracy of their "ises". That is where the Science of Morality takes off, and that is ultimately what renders the David Hume Distinction outdated. Only philosophers with an interest in taxonomizing human thought would care about such a Distinction. It has no bearing on the empirical reality of how moral values are formed. That is Harris' book in a nutshell.

Perhaps it is useful to split "is" into two types: "The Ontological Is" is how we perceive the world before we are informed by science. All "oughts" actually derive from "Ontological Ises". "The Empirical Is" is how we perceive the world, once we have been informed by science. Neurologically, an "Empirical Is" becomes indistinguishable from an "Ontological Is", once the scientific facts are accepted.

My own take distinguishes the "parts" of moral decision making. I admit this is also a model for illustrative purposes, one that has little bearing on the empirical world. But, I hope it gives us some insight into exactly where Hume is getting "killed" in the process.

(I am going to refer to the "Miles" analogy I wrote way back in Post #40, so please read that, if you have not done so, already: http://www.internationalskeptics.com/forums/showthread.php?postid=7645647#post7645647 .)

The Hume Distinction is not getting killed in the First Mile, at least not empirically. We have to start with valuing things: science and an evolving sense of well-being*, to name a few. Science cannot tell us that we SHOULD value these things**. But, it can tell us HOW or WHY we might tend to value these things. However, this makes very little difference, because David Hume is still going to "die" before this all is over, in spite of this fact.

(The First Mile is roughly analogous to the "Value Problem" that Sam describes in his new afterword.)

(* I can reiterate why well-being is a worthy value, if I must, in another post. But, so far, I have not heard any alternatives that even make any sense.)
(** Though, part of me still thinks even this might change. )

The slow death of the David Hume Distinction actually begins in the Middle Distance. As our tools of science sharpen, they will increasingly point towards how we "ought" to act. More of our "Ontological Ises" will have started life as "Empirical Ises".

For example, one can scientifically study the impact of corporal punishment applied to school children. We can also construct deep theories as to why it is, or is not, an effective form of discipline, which could then be tested. Apparently, there are still some schools in the U.S. that allow such punishment to occur, despite the fact that it only makes children MORE violent and LESS productive as students. (Harris summarizes this science in the Notes section: page 214 in the hardcover edition, and page 230 in the paperback; note #88.)

Of course, lots of folks might think it was wrong even before the science came out. But, no one is claiming that other things couldn't go into the middle distance, either. Before science, all of our "ises" were "ontological ises".

The final death blow to this Distinction comes in the Last Mile, where we make our final decision. Compare these two statements:

"Science shows us that corporal punishment discourages obedience, and leads only to more violence, in school children."​

And

"Corporal punishment of school children is wrong."​

I argue that only English professors (and perhaps philosophers with nothing better to do) would care about the distinction, here. To a sane brain, they would be acted on the same way. The "Empirical Is" has become an Ought, and a basis for one aspect of morality!

If the science were different: if, for argument's sake, corporal punishment were consistently proven to encourage good behavior in children, it would become much harder to argue that it was wrong.

And, it is worth reiterating that even those who already "knew" it was wrong, before the science, would still be basing that judgment on how they perceived the nature (the "is") of the discipline. Their view was simply not as sharp on the facts as what came out of the science.

It should go without saying that we can prove, scientifically, that any dissenting opinion on this matter is, in fact, wrong.

Perhaps that example was a little too easy? I think the point becomes even clearer when we consider the more counterintuitive findings related to the science of morality. Consider the unpleasant wrath of the "Peak/End Rule"!

Here, I decided to quote Wikipedia, but only because I am starting to get tired of putting everything into my own words. From http://en.wikipedia.org/wiki/Peak-end_rule :

In one experiment, one group of people were subjected to loud, painful noises. In a second group, subjects were exposed to the same loud, painful noises as the first group, after which were appended somewhat less painful noises. This second group rated the experience of listening to the noises as much less unpleasant than the first group, despite having been subjected to more discomfort than the first group, as they experienced the same initial duration, and then an extended duration of reduced unpleasantness.​

This also applies to exposure to cold and to unpleasant medical procedures, such as colonoscopies. It might be morally prudent to prolong the length of a medical procedure, so that the patient remembers it as less unpleasant than it really was, even though that doesn't sound like it should make sense. Sam talks more about this on page 77 of his book (either edition).
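To make the counterintuitive arithmetic concrete, here is a minimal sketch in Python (my own illustration with made-up numbers, assuming the common approximation that remembered unpleasantness is roughly the average of the peak moment and the final moment):

# Toy illustration of the Peak-End Rule (numbers are invented, not from the study).
# Assumption: remembered unpleasantness ~ average of the peak moment and the last moment,
# largely independent of total duration.

def remembered_unpleasantness(intensities):
    """Approximate the retrospective rating as the mean of the peak and the end."""
    return (max(intensities) + intensities[-1]) / 2

def total_discomfort(intensities):
    """Moment-by-moment sum of discomfort (what a simple tally would track)."""
    return sum(intensities)

group1 = [8] * 10            # loud, painful noise only
group2 = [8] * 10 + [4] * 5  # the same noise, plus an extra stretch of milder noise

print(total_discomfort(group1), remembered_unpleasantness(group1))  # 80  8.0
print(total_discomfort(group2), remembered_unpleasantness(group2))  # 100 6.0
# Group 2 endures MORE total discomfort, yet remembers the experience as LESS unpleasant.

The point is that the remembered "is" (how bad it was) can systematically diverge from the moment-by-moment facts, which is exactly the kind of finding a science of morality would have to take into account.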

I argue that the old-fashioned, Hume-inspired take on "oughts" is woefully inadequate for handling these sorts of discoveries. And, as we march into the future, we will likely run into more of them. Therefore the David Hume Distinction can be safely ignored by everyone, except maybe historians!
 
Neurologically, there seems to be no distinction between "ought" and "is". All "oughts" derive from states in the brain, based on how we think the world IS, whether that perception happens to be accurate or not.
This seems close to saying...
1. oughts are thoughts.
2. thoughts are brain states.
3. brain states are ises.
C. oughts can be derived from ises.

Is that fair? The problem with this approach is that it doesn't account for brain states that lead to oughts we clearly disapprove of. Promoters of this approach tend to handwave away lunatics, psychotics, etc. as if those labels were explanations. They are not. In this context they are just labels and dismissing them is special pleading.

Sam argues that one can improve the quality of their "oughts" by improving the accuracy of their "ises".
Sure we can. Would you or Mr. Harris say we ought to? Do you see the problem here?

If I've misunderstood your opening then I do apologize. I think I'm addressing a more fundamental aspect than your post addresses. Also, I did try to follow the rest of your post but it seems to go off in a different direction. My apologies again if you think it addresses what I wrote above.
 
This seems close to saying...
1. oughts are thoughts.
2. thoughts are brain states.
3. brain states are ises.
C. oughts can be derived from ises.
I think it would be more accurate to say:

1. Ises are thoughts, reflecting our perception of reality (accurate or not).
2. Oughts are thoughts that are based on ises.
C. Oughts are derived from ises.

It would not make sense for someone to say: "Corporal punishment is clearly counterproductive for students, and since I want productive students more than anything, I should allow use of corporal punishment for them." Anyone who does is insane.

In my numbering, "brain state" and "thought" are more interchangeable.

In yours, it sounds like "brain state" is the physical state of neurons, and "thoughts" are implied as the emergent information?

Is that fair?
Fairness has nothing much to do with it. Harris' points depend on how the mind actually works, whether we like it or not.

The problem with this approach is that it doesn't account for brain states that lead to oughts we clearly disapprove of. Promoters of this approach tend to handwave away lunatics, psychotics, etc. as if those labels were explanations. They are not. In this context they are just labels and dismissing them is special pleading.
No legitimate neuroscience endeavor is going to "handwave" anybody away with labels. They will aim to study the brain, figure out exactly what is going on, and improve diagnosis and treatment over time.

This is NOT a matter of special pleading. In the case of lunatics and psychopaths, we can do two things:
1. Demonstrate that their actions are detrimental to the well-being of others (and even themselves, in the long run).
2. Point to those parts of the brain causing their dementia. In the case of psychopaths, neuroscience is identifying where, why, and how, they do not care about the feelings of others.

Secondly, it IS possible for someone to be WRONG about morality. Those pushing corporal punishment would like productive students, but they value the Bible ("Don't spare the rod, lest ye spoil the child.") more than they do their own students' productivity. This detracts from everyone's well-being: teachers, school admins., families, and students alike, in the long run. (Though, in the short run it certainly affects the students and their families a lot more.)


Sure we can. Would you or Mr. Harris say we ought to? Do you see the problem here?
We do not have a choice. As a matter of neurological certainty, our Oughts DO improve as our ises become more accurate. Perhaps there might exist some form of insanity where this does not apply, but we are talking about a rare (if not preposterous) set of cases.

Also, I did try to follow the rest of your post but it seems to go off in a different direction.
The whole second part was a different direction. I was breaking down the moral decision-making process to show how empirical facts ultimately change what we think we ought to do.

The first part ignored the distinction between empirical and ontological ises, to show that, in either case, what we feel we ought to do is derived from what we perceive the world to be. That seems to be how our brains work, whether philosophers like it or not.
 
I think it would be more accurate to say:

1. Ises are thoughts, reflecting our perception of reality (accurate or not).
2. Oughts are thoughts that are based on ises.
C. Oughts are derived from ises.

It would not make sense for someone to say: "Corporal punishment is clearly counterproductive for students, and since I want productive students more than anything, I should allow use of corporal punishment for them." Anyone who does is insane.
Ok. Thank you for that. However, in your example you introduce another ingredient: one's values ("...since I want..."). This can be taken as smuggling in an ought, and from there, deriving an ought from an ought is much less controversial. Also, I suspect your last sentence (re sanity) indicates a problem, as I'm sure we'll get to.

In my numbering, "brain state" and "thought" are more interchangeable.

In yours, it sounds like "brain state" is the physical state of neurons, and "thoughts" are implied as the emergent information?
No that's fine with me. I don't mean to get too rigorous here and I appreciate your providing a broad outline.


This is NOT a matter of special pleading. In the case of lunatics and psychopaths, we can do two things:
1. Demonstrate that their actions are detrimental to the well-being of others (and even themselves, in the long run).
2. Point to those parts of the brain causing their dementia. In the case of psychopaths, neuroscience is identifying where, why, and how, they do not care about the feelings of others.
I'm afraid I just see this as more smuggling-in of oughts. In the first case via "well-being" and in the second via an appeal to emotion. For the outlined formulation to provide a defense against handwaving, it would have to show how the neuro-atypical's oughts derived from their particular ises are qualitatively different from those of neuro-typicals.

We do not have a choice. As a matter of neurological certainty, our Oughts DO improve as our ises become more accurate. Perhaps there might exist some form of insanity where this does not apply, but we are talking about a rare (if not preposterous) set of cases.
I'm not really contending this. Instead I'm pointing to problematic cases where the ises and oughts conflict with the majority. Clearly these cases exist. The difference is in the extra ingredient: one's values. And complicating the picture is the issue of those who either choose not to, or cannot, or simply do not improve their ises or the derivation of their oughts. We cannot say "well, they ought to" without going completely recursive.

The whole second part was a different direction. I was breaking down the moral decision-making process to show how empirical facts ultimately change what we think we ought to do.
I have no problem with this.

In any event, thank you for the excellent reply.
 
Skeptics, scientists, and mathematicians often have atypical neurology, as well.

What matters is NOT the fact that neurology is "atypical". What matters is how accurate the perception of reality is. We can test this accuracy scientifically.

The insane are, by definition, delusional and not seeing the world accurately.

In the brain: The more accurate our "ises" are, the more efficient the oughts will be.

Oughts will ultimately appeal to the general well-being of society. If you have trouble accepting that, then name an alternative that makes sense.

Psychopaths have a severely distorted sense of what constitutes 'well-being', which conflicts with what can be scientifically deduced to be good for 'well-being'.

In a similar way: Someone who believes that eating nothing but packets of sugar all day long is the healthiest diet one can have, has a distorted sense of what constitutes 'health'.


As for 'sneaking in values', that is irrelevant. First of all, those values are also states in the brain.

Secondly, as I stated in some of my previous posts:
When it comes to empirical "ises", David Hume is not getting "killed" in the First Mile. We have to value something, and science probably can't tell us that we should value those things. (at least not yet.)
Even then, The Hume Distinction is still dead by the time we get to the end of the moral journey.
 
Neurologically, there seems to be no distinction between "ought" and "is".

Semantically, there is. When people express propositions, we don't examine them by observing brain states. We examine their meaning.

All "oughts" derive from states in the brain, based on how we think the world IS, whether that perception happens to be accurate or not.
I would say arise instead of derive, just to avoid a possible misinterpretation that could lead to an equivocation in the context of this discussion. Derive, here, doesn't mean the same as in "you can't derive an ought from an is". Apart from this, I have no objection to this point.

Sam argues that one can improve the quality of their "oughts" by improving the accuracy of their "ises".
Yes, I think this is true in a consequentialist approach. The accuracy of our desired outcome depends on the accuracy of the facts in the first place.

That is where the Science of Morality takes off, and that is ultimately what renders the David Hume Distinction outdated.
No.

Only philosophers with an interest in taxonomizing human thought would care about such a Distinction. It has no bearing on the empirical reality of how moral values are formed. That is Harris' book in a nutshell.
Do you really want to play this game? It's pointless, and ad hominem.

If we didn't taxonomize human thought, we wouldn't be able to describe reality in the first place. Why do you bother using expressions such as the First Mile, the Middle Distance and the Last Mile? Why the distinction among them? They're all Miles, and belong to the same group as Miles Davis! Why do you make a distinction between "ontological ises" and "empirical ises"? You're just taxonomizing!

Perhaps it is useful to split "is" into two types: "The Ontological Is" is how we perceive the world before we are informed by science. All "oughts" actually derive from "Ontological Ises". "The Empirical Is" is how we perceive the world, once we have been informed by science. Neurologically, an "Empirical Is" becomes indistinguishable from an "Ontological Is", once the scientific facts are accepted.
I only have the same objection as before. I would use arise instead of derive to avoid a misinterpretation that could potentially lead to an equivocation.

The Hume Distinction is not getting killed in the First Mile, at least not empirically.
The Hume distinction is about the First Mile.

We have to start with valuing things: science and an evolving sense of well-being*, to name a few. Science cannot tell us that we SHOULD value these things**. But, it can tell us HOW or WHY we might tend to value these things. However, this makes very little difference, because David Hume is still going to "die" before this all is over, in spite of this fact.
This, in fact, is consistent with Hume's observation.


(The First Mile is roughly analogous to the "Value Problem" that Sam describes in his new afterword.)

(* I can reiterate why well-being is a worthy value, if I must, in another post. But, so far, I have not heard any alternatives that even make any sense.)
(** Though, part of me still thinks even this might change. )
There is a better alternative: my well-being.

The slow death of the David Hume Distinction actually begins in the Middle Distance. As our tools of science sharpen, they will increasingly point towards how we "ought" to act. More of our "Ontological Ises" will have started life as "Empirical Ises".
The David Hume Distinction is not about the Middle Distance.

The final death blow to this Distinction comes in the Last Mile, where we make our final decision. Compare these two statements:
"Science shows us that corporal punishment discourages obedience, and leads only to more violence, in school children."​
And
"Corporal punishment of school children is wrong."​
The Hume Distinction is not about the Last Mile.

I argue that only English professors (and perhaps philosophers with nothing better to do) would care about the distinction, here. To a sane brain, they would be acted on the same way. The "Empirical Is" has become an Ought, and a basis for one aspect of morality!
Whether you like it or not, there is a difference. If you value accuracy, you'll acknowledge the difference.

In the second sentence, you're saying that it is wrong to discourage obedience and cause more violence in school children. That's a value judgment, and I agree with it, but no one argues that we can't have broad moral consensuses. Try an example with actions that cause more controversial outcomes, and the difference is obvious.

I argue that the old-fashioned, Hume-inspired take on "oughts" is woefully inadequate for handling these sorts of discoveries. And, as we march into the future, we will likely run into more of them. Therefore the David Hume Distinction can be safely ignored by everyone, except maybe historians!

You should first try to understand what David Hume meant because I think you're misinterpreting his observation. I don't particularly like the way Hume expressed it, and my views on this subject are not directly influenced by Hume, but I agree with his observations, and you seem to agree too.
 
On a whim I decided to re-read the opening post of this thread:

"But can there be any difficulty in proving, that vice and virtue are not matters of fact, whose existence we can infer by reason?
Today, we know they are matters of fact. To pretend otherwise is to ignore centuries of discoveries about what makes civilizations thrive or fail.

Take any action allow'd to be vicious: Wilful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In which-ever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case.
There is the little matter of how accurately they are perceiving the world.

It would be insane to claim murder is justified, as long as the murderer is viewing his victim in a completely unrealistic manner. :rolleyes:

The vice entirely escapes you, as long as you consider the object. You never can find it, till you turn your reflexion into your own breast, and find a sentiment of disapprobation, which arises in you, towards this action. Here is a matter of fact; but 'tis the object of feeling, not reason. It lies in yourself, not in the object."
There are other ways to find it than introspection, feelings, and things like that.

The counterintuitive findings of the Peak-End Rule demonstrate this. Sometimes it DOES take external facts, besides your own feelings, to determine what constitutes correct moral action.

David Hume, from A Treatise of Human Nature
... is outdated.
 
Neurologically, there seems to be no distinction between "ought" and "is".

That's like saying "neurologically, there seems to be no distinction between nouns and verbs". It's flawed because nouns and verbs aren't neurological structures. Neither are oughts and ises. So why should there be a neurological distinction between them?

You could instead say "neurologically, there seems to be no distinction between our concepts of 'ought' and 'is'". But this is simply incorrect. If we assume concepts are neurological then of course there is a neurological distinction between them, or they would be perceived as identical concepts. And the nature of the neurological distinction is not something that can be said to be properly understood by cognitive science at this time anyway.

All "oughts" derive from states in the brain, based on how we think the world IS, whether that perception happens to be accurate or not.

"Ought" beliefs or claims arise from brain states that are based, in part, on how we think the world is, yes.

Sam argues that one can improve the quality of their "oughts" by improving the accuracy of their "ises". That is where the Science of Morality takes off, and that is ultimately what renders the David Hume Distinction outdated. Only philosophers with an interest in taxonomizing human thought would care about such a Distinction. It has no bearing on the empirical reality of how moral values are formed. That is Harris' book in a nutshell.

"Society ought to be X and I think people ought to (A) instead of (B), because I think (A) contributes more to X."

In this case if we could prove (B) actually contributes more to X, we could allow this person to improve the quality of their intermediary "oughts", but only in the sense of "given that this is your goal, here's how you can better carry it out". Determining how best to achieve goals improves "oughts" related to those goals, but doesn't necessarily have anything to do with morality. It could, but you would only be determining how to best execute a pre-determined set of morals.

Science already does this all the time. Research on the link between tobacco and lung cancer allowed people to improve on the "should I smoke or not smoke?" question simply because it makes them more aware of the consequences. It's nothing new and I don't see why it needs to be called "moral science".
 
Semantically, there is. When people express propositions, we don't examine them by observing brain states. We examine their meaning.
Semantics is a fine thing to study, but it has no bearing on empirical reality.

I would say arise instead of derive, just to avoid a possible misinterpretation that could lead to an equivocation in the context of this discussion.
Okay, fine. Whatever.

David Hume had no way of knowing neurology would undermine his ideas.

If we didn't taxonomize human thought, we wouldn't be able to describe reality in the first place.
There is nothing wrong with taxonomizing human thought. I am only pointing out that it has no bearing on empirical reality.

In biology: Linnaean taxonomy might be a useful way to sort one's species. But, it has no bearing on the actual natural relationships between those species. The Hillis Plot is better, but still prone to imperfections.
In astronomy: The downgrading of Pluto did not make it suddenly disappear from space. It was merely an act of taxonomy.

Why do you make a distinction between "ontological ises" and "empirical ises"? You're just taxonomizing!
Yes, I said that.

But, you'll notice my point is that 'Ontological Ises' are ultimately indistinguishable from 'Empirical' ones. Even if you try to separate them, they turn into the same thing.

Philosophers have called one group of thoughts "is" and another "oughts", which might be useful for them to some degree. But, it would be a mistake to assume they are empirically distinguishable, in the brain.

The Hume distinction is about the First Mile.
I agree with you, when it comes to pre-scientific, ontological ideas. But, then again, those values are also states in the brain. I made that clear in the first paragraph. So, Hume loses that argument from the get-go.

The more complicated factor is how empirical ideas come into the picture. First, you have to value empirical ideas, and Hume can take that victory home, if he wants to. It is a very small one. At some point, those empirical ideas inform what you ought to do. So, it does apply to all the miles, in that sense.

There is a better alternative: my well-being.
We are highly social animals. Your well-being is irrevocably tied to the well-being of other humans. You may THINK acting selfishly is for your own good, but we can demonstrate that, in most cases, you would very likely be much worse off, in the long run. We could say, scientifically, that such selfishness is symptomatic of a distorted view of the world.

Also: If you are going to try to come up with an alternative to well-being, try not to make it an example of well-being. (Hint: Avoid using the words "well-being".)

Try an example with actions that cause more controversial outcomes, and the difference is obvious.
Stem cell research. Those opposing it have distorted views of what constitutes a conscious entity. We can demonstrate, scientifically, that their views are distorted. And, yet valuable medical science is being held up, because of these distorted views.
 
Skeptics, scientists, and mathematicians often have atypical neurology, as well.
Okay, I regret not defining those terms as I intended them to be understood here in this context. I wanted a non-pejorative term for sociopaths, psychopaths, and the like whose brains produce behaviors generally deemed as grossly undesirable or detrimental. If you have another value-neutral term I am open to suggestion.

What matters is NOT the fact that neurology is "atypical". What matters is how accurate the perception of reality is.
Why does this matter? If you say because it produces "better" oughts then you're inadvertently bringing in the idea of what oughts ought to be. Value-laden terms like "better" smuggle in oughts.

Oughts will ultimately appeal to the general well-being of society. If you have trouble accepting that, then name an alternative that makes sense.
Again I have to object to this vague term "well-being" as well as the qualifier "of society". They smuggle in an ought by introducing assumed objective values.

Psychopaths have a severely distorted sense of what constitutes 'well-being', which conflicts with what can be scientifically deduced to be good for 'well-being'.
Distorted sense of well-being? Are there objective criteria for well-being?

In a similar way: Someone who believes that eating nothing but packets of sugar all day long is the healthiest diet one can have, has a distorted sense of what constitutes 'health'.
That doesn't happen. We get people who eat poorly because they value doing so over long-term consequences. Here is that recursion problem again. Are you saying they ought to consider the long-term consequences with more importance? If so then we're back to the starting line in having to explain where this ought came from. I know this is difficult. Identifying these hidden oughts is tricky but I hope you see why it is necessary.


As for 'sneaking in values', that is irrelevant.
Obviously I disagree. Smuggling in values smuggles in assumed oughts. We're trying to get to an ought from a position without any oughts.

We have to value something, and science probably can't tell us that we should value those things.
I agree, and this is what I see as the fatal flaw in Harris' position. Try your corporal punishment example again, this time without relying on values in any way. You won't be able to do it. No doubt inadvertently, but I assure you that you will smuggle them in if you try.

(at least not yet.)
Not ever I'd guess. At least the case has not been made that it ever can even in principle. Not by Harris. Not by anyone I have encountered.
 
That's like saying "neurologically, there seems to be no distinction between nouns and verbs". It's flawed because nouns and verbs aren't neurological structures. Neither are oughts and ises. So why should there be a neurological distinction between them?
In fMRI studies, we found that the same areas of the brain act on ideas we might consider as "is" or "ought" identically. Sam Harris spells out a lot of this research in his book. Much of it is counterintuitive to how most people rationalize their thoughts, including how David Hume wrote about his own.

To contrast this, there IS a neurological distinction between disgust and attraction. Different parts of the brain are clearly active when each of those emotions is felt. (Though, there is some overlap.)

And the nature of the neurological distinction is not something that can be said to be properly understood by cognitive science at this time anyway.
Your information is outdated. Cognitive science is making use of neurology already. The fMRI is a large reason why this is possible.

Determining how best to achieve goals improves "oughts" related to those goals, but doesn't necessarily have anything to do with morality. It could, but you would only be determining how to best execute a pre-determined set of morals.
That "pre-determined set of morals" is also based on states of the brain, and are also subject to adjustments when new facts are found out.

It's nothing new and I don't see why it needs to be called "moral science".
What is new is that science can ultimately determine human values, with this branch of thinking. Perhaps there is a better way for me to describe how and why, but I will have to think of it, later.
 
Why does this matter? If you say because it produces "better" oughts then you're inadvertently bringing in the idea of what oughts ought to be. Value-laden terms like "better" smuggle in oughts.
That value-laden sense of "better" is also a state in the brain, and one that is not processed any differently than what would be considered an "is".

Distorted sense of well-being? Are there objective criteria for well-being?
I offered some ideas in that direction, in other posts.

Are there objective criteria for health? If you say yes, then why the double-standard? If you say no, then you have nothing useful to say about health.

We get people who eat poorly because they value doing so over long-term consequences.
What if someone was misinformed? What if they joined a Sugar Cult that brainwashed them into eating only sugar? Unrealistic, I suppose, but still possible.

There are cults who believe suicide is best for their health.

I agree, and this is what I see as the fatal flaw in Harris' position. Try your corporal punishment example again, this time without relying on values in any way. You won't be able to do it. No doubt inadvertently, but I assure you that you will smuggle them in if you try.
We are smuggling in nothing more than what is already a set of states in the brain, subject to the same adjustments as any thought "is" or "ought".
 
David Hume had no way of knowing neurology would undermine his ideas.

But it didn't. Neurology might say "This and this will bring about human happiness", but it can't say that we should do so.

Read Bertrand Russell's take on this. It's very easy to read. Russell agrees with Hume (small wonder), and exemplifies:

Bertrand Russell said:
The framing of moral rules, so long as the ultimate Good is supposed known, is matter for science. For example: should capital punishment be inflicted for theft, or only for murder, or not at all? Jeremy Bentham, who considered pleasure to be the Good, devoted himself to working out what criminal code would most promote pleasure, and concluded that it ought to be much less severe than that prevailing in his day. All this, except the proposition that pleasure is the Good, comes within the sphere of science.

It is not really very relevant if morality is objective or not, because:

Bertrand Russell said:
Whatever our definition of the "Good," and whether we believe it to be subjective or objective, those who do not desire the happiness of mankind will not endeavour to further it, while those who do desire it will do what they can to bring it about.
 
But, in the mean time, I felt I was obligated to respond to this comment:

The definition of Utilitarianism could vary. Yes, one could define that word to match Harris' points. But, then again, one could also define "Santa Claus" as "Someone who runs a pizza parlor in New York City."

This shows that you don't actually know what the term means.

Historically, the use of the word Utilitarianism is not as open-ended or flexible as the science of morality proposed in the book. Perhaps Harris doesn't make this transparent in his writing. But, that's a problem about his horrid writing style, rather than his actual arguments.

Let me guess... you got this impression from Harris' book? I thought I warned you that accepting Harris' statements about philosophy is exactly as wise as taking a homeopath's statements about science as guaranteed to be accurate.

The definition of "well-being" is meant to be one that evolves with our understanding of conscious creatures: something utility, alone, does not typically account for. Though, utility could still be a factor to consider, in evaluating the trade-offs associated with well-being.

Utilitarianism is usually defined as a maximalist, universalist, consequentialist moral philosophy that takes the good to be maximising something, and that something varies between philosophers. What Harris calls "well-being" is closest to welfare utilitarianism - the version that takes the good to be maximising the fulfilment of the preferences people would have if they were fully-informed and rational.
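For concreteness, here is a minimal sketch of the structure both positions share (my own illustration; the actions, groups, and scores are invented): score each candidate action by some aggregate measure of welfare or "well-being" across everyone affected, then pick the action that maximises it.

# Toy sketch of the shared structure of welfare utilitarianism and Harris's "well-being"
# framework (illustrative only; actions, groups, and scores are made up).

def aggregate_welfare(scores):
    """Sum welfare across everyone affected (the simplest aggregation rule)."""
    return sum(scores.values())

# Hypothetical welfare scores for each affected group under each candidate action.
welfare_by_action = {
    "ban_corporal_punishment":   {"students": 0.9, "teachers": 0.7, "parents": 0.6},
    "allow_corporal_punishment": {"students": 0.2, "teachers": 0.5, "parents": 0.5},
}

best_action = max(welfare_by_action, key=lambda a: aggregate_welfare(welfare_by_action[a]))
print(best_action)  # -> ban_corporal_punishment, under these invented numbers

Whatever you call the quantity being maximised, the decision procedure is the same; the only open question is the one Hume pointed at, namely why that quantity should be the thing to maximise in the first place.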

So based on what you've said, there is in fact no substantial difference at all between Harris' position and welfare utilitarianism except:

1. Harris misrepresents utilitarianism to make himself seem special.
2. Harris misrepresents his version of welfare utilitarianism as a solution to the is/ought problem.
3. Harris talks about neuroscience a lot in the hope that the halo effect will blind you to the sleight of hand he is engaging in with regard to the is/ought problem.


Neurologically, there seems to be no distinction between "ought" and "is".

Neurologically speaking I think you'd be hard pressed to distinguish between a mathematician thinking about a flawed proof and a mathematician thinking about a correct proof, or a person having a "eureka!" moment and correctly solving a puzzle and a person having one of those false "eureka!" moments where you think you've had a great flash of insight but it turns out you're just Sam Harris.

That doesn't mean there isn't a meaningful difference between logical and illogical thinking.

All "oughts" derive from states in the brain, based on how we think the world IS, whether that perception happens to be accurate or not. Sam argues that one can improve the quality of their "oughts" by improving the accuracy of their "ises".

No ****, Sherlock. What did you think we thought they derived from? Fairies? Immaterial souls? Magical underpants? What did you think we thought improvements in our factual knowledge of the universe did?

As an earlier poster said, Harris talks a mix of the obvious and bollocks. This part is him stating the obvious. Nobody is contesting these issues. Harris needs to pretend these issues are contested so he looks like he's doing something interesting.

I snipped a large bunch of content which is all quite straightforwardly irrelevant. The is/ought problem is the First Mile problem. You cannot solve the is/ought problem at any later stage. All you can do (and all Harris has done) is covertly sneak in a moral value judgment in the First Mile, and then try to pretend in the later miles that he never did that and he is "solving the is/ought problem".

I argue that the old-fashioned, Hume-inspired take on "oughts" is woefully inadequate for handling these sorts of discoveries. And, as we march into the future, we will likely run into more of them. Therefore the David Hume Distinction can be safely ignored by everyone, except maybe historians!

The old-fashioned, Hume-inspired take on "oughts" is solely relevant to the First Mile, in your taxonomy.

Existing utilitarian moral philosophy handles absolutely everything after the First Mile exactly as well as Harris' moral philosophy because it is exactly the same as Harris' moral philosophy. There is no difference, whatever Harris has told you.
 
That value-laden sense of "better" is also a state in the brain, and one that is not processed any differently than what would be considered an "is".


Based on what evidence?

Look, I'm a neurologist. I have a general sense of how much we know about this issue. There is absolutely no way that anyone knows that is the case. Looking at it just in general terms, those statements are not going to be equivalent from a processing viewpoint, or we would never be able to distinguish "is" from "ought", and we would never be able to distinguish between directions of fit in language. But we can.

I fear that Harris is far off into BS land. He really ought to know better.
 
In fMRI studies, we found that the same areas of the brain act on ideas we might consider as "is" or "ought" identically. Sam Harris spells out a lot of this research in his book. Much of it is counterintuitive to how most people rationalize their thoughts, including how David Hume wrote about his own.


OK, I think I see a source for the problem. I am now officially completely embarrassed that Sam Harris is getting a PhD in some aspect of Neuroscience.

If he doesn't know the difference between the same general area being responsible for two different types of processing dealing with a similar input and the same area constituting two different types of processing, there is absolutely no hope. I can't believe he published something with that sort of claim. Now I know I don't want to read this book.
 
