Richard T. Garner and "Beyond Morality"

One problem with the term "normative" is that, by definition, it can never have a truth value at all. Not ever. That seems to be the problem I ran into here.

"Normative" is NOT merely a matter of what we should or should not do. If that were the case, we COULD answer normative questions with science. But, the term goes beyond that: It implies that no one could ever possess the right to declare they found the truth on the matter. Not at all. Not by definition.

That actually makes it a form of fiction. Perhaps it is a useful fiction in academia. But, it is not something that exists in the real world. In nature, all things could be found out: Even what our values should be!
It is only in the realm of philosophical fiction that something could be thought of as having no truth value whatsoever.

So, when asked the question: "Can science answer normative claims?", my current answer is thus:

We can DO BETTER than that! Technically, by that presumptuous definition, questions about values considered "normative" cannot be answered directly by science.

However, science can demonstrate how claims often considered to be normative CAN be answered in an objective manner, instead. And that those answers will be more reliable and accurate than any philosophical discussion about "normative values" would have you believe.

Anyone care to punch holes in that controversial statement?
 
Weird. I know no such thing. I especially don't know that my rare opportunity to take something without risk statistically increases my chance of being caught.
You do not see how an increase in theft causes more police activity, which in turn causes more thieves to be caught?
NONSENSE!

This is pure magic. The thought that, if I don't do it even in situations where my action is undetectable, then others won't either is simple magical thinking.
The problem with your logic is that no action is ever 100% undetectable. And criminal actions which may be undetectable by a police department made lazy and spoiled by a low crime rate may well be detectable by the same police department spurred by a crime wave.
 
One problem with the term "normative" is that, by definition, it can never have a truth value at all. Not ever. That seems to be the problem I ran into here.

"Normative" is NOT merely a matter of what we should or should not do. If that were the case, we COULD answer normative questions with science. But, the term goes beyond that: It implies that no one could ever possess the right to declare they found the truth on the matter. Not at all. Not by definition.

That actually makes it a form of fiction. Perhaps it is a useful fiction in academia. But, it is not something that exists in the real world. In nature, all things could be found out: Even what our values should be!
It is only in the realm of philosophical fiction that something could be thought of as having no truth value whatsoever.

So, when asked the question: "Can science answer normative claims?", my current answer is thus:

We can DO BETTER than that! Technically, by that presumptuous definition, questions about values considered "normative" cannot be answered directly by science.

However, science can demonstrate how claims often considered to be normative CAN be answered in an objective manner, instead. And that those answers will be more reliable and accurate than any philosophical discussion about "normative values" would have you believe.

Anyone care to punch holes in that controversial statement?

Sure. The most obvious shortcoming is in the opening sentence. "By definition" is not a substitute for argument, and there is nothing in the definition of normativity that is obviously contradictory with truth-bearing.

If you think that this is an important and insightful claim, then it requires argument. Why is it that statements of the form, "If you do not want to get hurt, you shouldn't pee on the third rail," are not truth-bearing? How about, "One ought (insofar as he seeks truth and coherence) to accept the entailments of his beliefs, or change his beliefs"?

Or did you mean only some normative statements aren't truth-bearing? If so, which ones?
 
ETA: I see now why we're talking past one another, and the fault is mine. The quote you are replying to completely failed to convey my point, because my brain was obviously a bit scrambled. In context, it's clear that I meant to say, "I especially don't know that my rare opportunity to take something without risk statistically increases my chance of being victimized by another thief."

Sorry for the misunderstanding. I think the sentence is still reasonable as stated, but it's not at all what I meant. I really should read what I write before hitting "Submit".


You do not see how an increase in theft causes more police activity, which in turn causes more thieves to be caught?

We are speaking of the unusual circumstance where I know that the crime itself will not be detected. In that situation, I neither tempt others to commit thefts (since no one knows I did so) nor do I risk being arrested.

The problem with your logic is that no action is ever 100% undetectable. And criminal actions which may be undetectable by a police department made lazy and spoiled by a low crime rate may well be detectable by the same police department spurred by a crime wave.

Er, if my crime was not detected, then it's not causally related to any crime wave.

Anyway, the case I have in mind is the deathbed swindle, or some similar situation, where there is no victim aware of his loss at all. And if you insist that even such cases are not 100% undetectable, no matter! As long as the risk of capture or detection is small enough that the expected value of the crime is well into the positive side of the balance, the same conclusions result.
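
To put rough numbers on that expected-value point (the figures here are purely illustrative, not anything established in this thread): if p is the chance of detection, G the gain from the theft, and C the cost of being caught, then

E[crime] = (1 - p)·G - p·C

which is positive whenever p < G / (G + C). With, say, G = $1,000 and C = $100,000, any detection risk below roughly 1% leaves the expected value well on the positive side of the balance.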

Honestly, let's recall the argument I was reacting to. It was this: the reason you shouldn't steal is because when you steal, others are more likely to steal and they might steal from you. This fear that my own occasional theft is likely to make me a victim of a crime wave is, to my mind, quite silly. The effect I have on crime statistics is utterly negligible and effectively 0 if my crime is, in fact, undetected.

And yet, I do not steal in those situations. Why not? Well, whatever the reason, it is not because I am silly enough to think that this is how I prevent others from taking my stuff.
 
Sure. The most obvious shortcoming is in the opening sentence. "By definition" is not a substitute for argument, and there is nothing in the definition of normativity that is obviously contradictory with truth-bearing.
Apparently, there is. Almost all of the philosophers I am talking to on this issue seem to indicate as much.

Wikipedia claims: "Whether or not a statement is normative is logically independent of whether it is verified, verifiable, or popularly held." http://en.wikipedia.org/wiki/Normative

Etc.

Why is it that statements of the form, "If you do not want to get hurt, you shouldn't pee on the third rail," are not truth-bearing?
Ironically, you sound a lot like me when you say that.

But, the philosopher would ask: "Why should I value not getting hurt?"

How about, "One ought (insofar as he seeks truth and coherence) to accept the entailments of his beliefs, or change his beliefs"?
The philosopher asks: "Why should I value the entailments of my beliefs so much, that I should change them, if I don't accept them?"

I'm just giving you a taste of what I am up against.
 
Apparently, there is. Almost all of the philosophers I am talking to on this issue seem to indicate as much.

Wikipedia claims: "Whether or not a statement is normative is logically independent of whether it is verified, verifiable, or popularly held." http://en.wikipedia.org/wiki/Normative

You understand that WP is not saying normative statements are neither true nor false?

I cannot agree with the nameless philosophers you keep citing. You have to provide some argument to the effect that normative statements are not truth-bearing (i.e., are literally not statements at all, according to one common definition of the term).

Etc.

Ironically, you sound a lot like me when you say that.

But, the philosopher would ask: "Why should I value not getting hurt?"

No, the philosopher would recognize that the conditional statement I uttered is true whether or not you value not getting hurt. There's a reason I stated it as a conditional statement. If you don't want to get hurt, then you shouldn't pee on the third rail. This statement is simply true, whether or not you want to avoid pain.

The philosopher asks: "Why should I value the entailments of my beliefs so much, that I should change them, if I don't accept them?"

I'm just giving you a taste of what I am up against.

You seem to be confused. I didn't say you should value entailments. I said that if you aim for coherence and correctness (and, perhaps I should add, completeness to whatever extent possible) of your beliefs, then you should either believe their consequences or change the beliefs. The reason is simple: you will not gain completeness without extending your beliefs in a consistent fashion. In order to do this, one ought to believe the logical entailments of his beliefs, because this is a way to add new beliefs without adding new inconsistencies (any such inconsistencies would already be implicit in the old set of beliefs, and hence violate our previous stated desire for coherence, necessitating revision).

Maybe you want to ask why we value completeness, coherence, etc., but that question is irrelevant to my statement, since it was, again, a conditional statement.
 
Wowbagger, I think you are very far from correct. But as I am coming in rather late would you rather receive criticism of Error Theory or of your theory?
 
We are speaking of the unusual circumstance where I know that the crime itself will not be detected. In that situation, I neither tempt others to commit thefts (since no one knows I did so) nor do I risk being arrested.



Er, if my crime was not detected, then it's not causally related to any crime wave.

Anyway, the case I have in mind is the deathbed swindle, or some similar situation, where there is no victim aware of his loss at all. And if you insist that even such cases are not 100% undetectable, no matter! As long as the risk of capture or detection is small enough that the expected value of the crime is well into the positive side of the balance, the same conclusions result.

Honestly, let's recall the argument I was reacting to. It was this: the reason you shouldn't steal is because when you steal, others are more likely to steal and they might steal from you. This fear that my own occasional theft is likely to make me a victim of a crime wave is, to my mind, quite silly. The effect I have on crime statistics is utterly negligible and effectively 0 if my crime is, in fact, undetected.

And yet, I do not steal in those situations. Why not? Well, whatever the reason, it is not because I am silly enough to think that this is how I prevent others from taking my stuff.

Yeah, this is an example of why Utilitarianism ultimately can't explain a lot of our ethical rules. The example I use in my courses is photographing somebody while they're changing, without their knowledge. If the impetus is on increasing hedons and reducing harm, the ethical directive is therefore to take as many photos as I need to satisfy myself, and never tell the photographee.

This produces the best hedon sum, but sounds ethically wrong to most people. Utilitarians explain that this proves our instincts are not good at understanding ethics; critics of Utilitarianism think this proves Utilitarianism does not produce correct ethical advice. There is no way to objectively compare ethical models so who knows who's right?

I like that example better than stealing, because it's unusual to steal something and have the victim unaffected. I think that would be an edge case that ignores the common problem associated with stealing: the victim is almost always worse off whether the thief is caught or not.

In contrast, the victim of a peep photo appears to only be affected if they or somebody close to them finds out.
 
If you could please explain to me - in your view:

What features does a moral code necessarily have that a set of rules for behaviour does not have (and/or vice versa)?
None.
Moral behaviour is a subset of behaviour, just as silly behaviour is.

Can you tell me what features silly behaviour has that a set of rules for behaviour does not?

I'm sure you can in specific instances. Walking along a cliff edge is behaviour; hopping, while blindfolded, in the same place is silly.

But how would you generalise to a definitive rule? How would you provide a general test for silly / not silly?

I don't think it's a meaningful question. Context is all.
If, however, you define "silly" as "self destructive" you can generalise easily.
 
Slavery and abolition made no apparent impression on biology or natural selection. Does this mean you cannot understand what the fuss was between the two camps?

I already did.
These are matters of local custom and economics.

If you choose to label one "bad" and one "good", you need to define those terms. Then you need to demonstrate why one is "good" and the other "bad" and whether those categories overlap at all.

If (West African) slavery had never happened, the American Civil War might not have happened. Is that good / bad?
And there might be practically no black people in America, the West Indies or Europe. Bad? Good?
For whom? The natives of Hispaniola? The natives of Africa? The natives of Europe? The world?

If the slave trade had not happened, the world would be a measurably different place, but whether a better or worse place, I leave to you.
 
Yeah, this is an example of why Utilitarianism ultimately can't explain a lot of our ethical rules. The example I use in my courses is photographing somebody while they're changing, without their knowledge. If the impetus is on increasing hedons and reducing harm, the ethical directive is therefore to take as many photos as I need to satisfy myself, and never tell the photographee.

This produces the best hedon sum, but sounds ethically wrong to most people. Utilitarians explain that this proves our instincts are not good at understanding ethics; critics of Utilitarianism think this proves Utilitarianism does not produce correct ethical advice. There is no way to objectively compare ethical models so who knows who's right?

Or else Utilitarians claim that if we regularly do things like this, we will eventually get caught and the result will be worse in general. Of course, this requires rather a lot of assumptions -- what's the likelihood of getting caught, how much harm is done due to the loss of trust because some small number of peepers are caught, and so on?

Alternatively, a Utilitarian will use these sorts of situations to say that Utility should be used to choose between competing rules, not individual acts, but the problem there is that they are guaranteed to generate less happiness than act-based Utilitarianism. The best rule we can use is the principle itself -- at least, that would be the case if we could make reliable calculations of the expected payoff from every act.

In the end, it's very hard to apply the principle of Utility, since it seems to presume rather more knowledge about how our actions affect social welfare than we have at hand.

I like that example better than stealing, because it's unusual to steal something and have the victim unaffected. I think that would be an edge case that ignores the common problem associated with stealing: the victim is almost always worse off whether the thief is caught or not.

Hence my emphasis on deathbed swindles, although you still have a point that someone loses. Your example is probably better than mine.
 
You understand that WP is not saying normative statements are neither true nor false?

It is indicating that there is no "true answer" for normative statements.

Once you add conditions, it is no longer considered a normative statement, apparently. And, on top of that, the philosopher will then ask: "Why should we value those conditions?"

If you explain why, this goes into infinite regression:

"Well, because those conditions yield the best consequences."
"Why should we value consequentialism?"
"Because natural forces will tend to have us do that."
"Why should we value what natural forces have us do?"
"Because if you don't, you will be a one of the depressions in the saw tooth chart, a 'Footnote of History', etc."
"Why should we value not being a 'Footnote of History'?"
Etc. and so on.

I think the cracks in philosophical thinking, on these points, start to show, when things like this happen.
 
Wowbagger, I think you are very far from correct. But as I am coming in rather late would you rather receive criticism of Error Theory or of your theory?
Could you summarize both?

If I had to choose, I would rather hear about Error Theory, in this thread, I suppose. But, some more criticism of the theory I am promoting wouldn't be a bad idea, either.
 
Or else Utilitarians claim that if we regularly do things like this, we will eventually get caught and the result will be worse in general. Of course, this requires rather a lot of assumptions -- what's the likelihood of getting caught, how much harm is done due to the loss of trust because some small number of peepers are caught, and so on?

Probably, but that's also an argument for being careful, particularly if getting caught is considered the rare path. It's also an argument for making a 'system' that guarantees nobody gets caught, i.e., government could invest in undetectable peeping technology... for the social good.

A parallel moral would be that we should arrest and incarcerate criminals, even though we know we might make mistakes. The ability to make the latter path rare means following the rule and making an effort to reduce mistakes is the 'right' approach under Utilitarianism.



Alternatively, a Utilitarian will use these sorts of situations to say that Utility should be used to choose between competing rules, not individual acts, but the problem there is that they are guaranteed to generate less happiness than act-based Utilitarianism. The best rule we can use is the principle itself -- at least, that would be the case if we could make reliable calculations of the expected payoff from every act.

In the end, it's very hard to apply the principle of Utility, since it seems to presume rather more knowledge about how our actions affect social welfare than we have at hand.

Yeah, I've never been very impressed with Utilitarianism's "implementability" - one proposal that came up over half a century ago to accommodate the examples that work out on paper but feel very wrong (such as the peeper photos example and its extreme versions like slavery) was "rule utilitarianism", and I found that even less justifiable and workable.



Hence my emphasis on deathbed swindles, although you still have a point that someone loses. Your example is probably better than mine.

There's a loss for the victim's estate, whether the estate goes to an heir or the community.

The most extreme example that results from the peeper photo case is things like snuff films. Out of the seven billion people on the planet, there may be a million who would derive 'enjoyment' from a snuff film. If the math worked out that killing one victim who is an orphan and has no relatives to grieve caused that person X amount of suffering, but it created X+1 amount of enjoyment, is that a justification for greenlighting Death Becomes Her? Further: is it also morally correct that we should raise people to accept this situation and receive enjoyment from this film, in an effort to decrease unhappiness (moral angst over the killing of an innocent victim) and increase happiness (more enjoyment of the killing)?
 
Yeah, this is an example of why Utilitarianism ultimately can't explain a lot of our ethical rules. The example I use in my courses is photographing somebody while they're changing, without their knowledge. If the impetus is on increasing hedons and reducing harm, the ethical directive is therefore to take as many photos as I need to satisfy myself, and never tell the photographee.

First, you have to think of it in terms of social policy: Would you want to live in a society where people could freely take photos of others changing, without their permission? (Regardless of the reasons, which we will get to.) If most people say "No", this is an indication that it could be immoral to do so. Even though that is not a guarantee (Most people could be wrong), you can get a sense of why it would be immoral to do such a thing.

But, for utilitarianism to answer the question, the reasons need to be established:

The reasons, in this example, boil down to the importance of human dignity and privacy, and the brakes we need to put on allowing ourselves to disrespect either of them. Without sufficient brakes, physical violence could, in fact, increase. Science bears this out, and modern societies happened to hit upon these rules as solutions during their evolution. Pinker calls it the Civilizing Process, and writes several chapters about it in his book.

Once we realize that granting everyone the freedom to disrespect everyone's privacy would actually increase physical violence, which is far worse than merely having perverts collect images of you, it makes sense to enforce privacy rules as a moral issue.

Of course, most people are not aware of that science or history. So, it becomes easy to claim utility is not good for this sort of thing. But I hope we can see how this kind of information emerges from talking about morality in objective terms, rather than subjective ones.
 
None.
Moral behaviour is a subset of behaviour, just as silly behaviour is.

Can you tell me what features silly behaviour has that a set of rules for behaviour does not?

I'm sure you can in specific instances. Walking along a cliff edge is behaviour; hopping, while blindfolded, in the same place is silly.

But how would you generalise to a definitive rule? How would you provide a general test for silly / not silly?

I don't think it's a meaningful question. Context is all.
If, however, you define "silly" as "self destructive" you can generalise easily.

I have a question that might bring clarity:

Did you mean to say there is an 'objectively best' set of rules of behaviour, i.e. that there is an objectively best description of the way that people tend to behave, that has no bearing on right and wrong, or good and bad?

Because when you say rules for behaviour, it makes it sound like those are rules prescribing what we ought to do (which is the same as a moral code).
 
It is indicating that there is no "true answer" for normative statements.

No.

Did you read your own quote?

"Whether or not a statement is normative is logically independent of whether it is verified, verifiable, or popularly held."

Now, I've highlighted the relevant bit. You see it? The issue is whether a statement is normative, not whether a normative statement is true.

You've misread the quoted passage, quite simply.

Once you add conditions, it is no longer considered a normative statement, apparently. And, on top of that, the philosopher will then ask: "Why should we value those conditions?"

You are using these terms in unusual ways. From the point of view of logic, a statement is normative if a normative operator occurs in it. So, for instance, a conditional statement is normative if there is an ought in it somewhere.

If you mean something else by "normative statement", please tell me what you mean. I'm not going to guess.

If you explain why, this goes into infinite regression:

"Well, because those conditions yield the best consequences."
"Why should we value consequentialism?"
"Because natural forces will tend to have us do that."
"Why should we value what natural forces have us do?"
"Because if you don't, you will be a one of the depressions in the saw tooth chart, a 'Footnote of History', etc."
"Why should we value not being a 'Footnote of History'?"
Etc. and so on.

I think the cracks in philosophical thinking, on these points, start to show, when things like this happen.

Since all of this is quite beyond the scope of my point, I will ignore it.

The fact is that (at least some) statements in which normative operators occur are evidently truth-bearing. I gave a couple of examples.

If you want to argue otherwise, please provide a clear refutation of my examples.

If you want to amend your claim so that my examples don't count, then do so explicitly.
 
Did you read your own quote?

"Whether or not a statement is normative is logically independent of whether it is verified, verifiable, or popularly held."

Now, I've highlighted the relevant bit. You see it? The issue is whether a statement is normative, not whether a normative statement is true.
Perhaps you should have highlighted the "logically independent" part, like this:

"Whether or not a statement is normative is logically independent of whether it is verified, verifiable, or popularly held."

That would have focused on where I seemed to be wrong.

This implies that a Normative statement COULD be verified, or it could NOT be verified. Either way, it could still be a normative statement.

The problem, though, is that the normative statements we are dealing with in morality are assumed to be the unverifiable type. So, that is what I was harping on.
 
Probably, but that's also an argument for being careful, particularly if getting caught is considered the rare path. It's also an argument for making a 'system' that guarantees nobody gets caught, i.e., government could invest in undetectable peeping technology... for the social good.

Cute, but unless peepers and only peepers (and perhaps only those who are also exhibitionists) know about the technology, I'm afraid that its availability would have a rather negative effect.

[...]
There's a loss for the victim's estate, whether the estate goes to an heir or the community.

Yes, agreed, and for that reason at least (unless we make some even more unusual circumstances), your peeper example is better.

The most extreme example that results from the peeper photo case is things like snuff films. Out of the seven billion people on the planet, there may be a million who would derive 'enjoyment' from a snuff film. If the math worked out that killing one victim who is an orphan and has no relatives to grieve caused that person X amount of suffering, but it created X+1 amount of enjoyment, is that a justification for greenlighting Death Becomes Her? Further: is it also morally correct that we should raise people to accept this situation and receive enjoyment from this film, in an effort to decrease unhappiness (moral angst over the killing of an innocent victim) and increase happiness (more enjoyment of the killing)?

Arguably, feelings of guilt, anxiety over the callousness with which persons are killed, the desensitization of the public and the corresponding increase in callous treatment of others and so on, seem to make it unlikely that the net result is positive. You'd have, in this case, billions of people who are concerned about how badly people are treated, and you can't neglect that effect.

But, again, it's hard to forecast effects with any certainty. The fact is that, if killing the occasional orphan produces more pleasure than pain for society as a whole, then that's a good thing to do. For most folks, this is a hard pill to swallow, but the committed Utilitarian can always argue that such an act simply is not going to produce good outcomes in the broad picture.
 
First, you have to think of it in terms of social policy: Would you want to live in a society where people could freely take photos of others changing, without their permission? (Regardless of the reasons, which we will get to.) If most people say "No", this is an indication that it could be immoral to do so. Even though that is not a guarantee (Most people could be wrong), you can get a sense of why it would be immoral to do such a thing.

But, for utilitarianism to answer the question, the reasons need to be established:

The reasons, in this example, boil down to the importance of human dignity and privacy, and the brakes we need to put on allowing ourselves to disrespect either of them. Without sufficient brakes, physical violence could, in fact, increase. Science bears this out, and modern societies happened to hit upon these rules as solutions during their evolution. Pinker calls it the Civilizing Process, and writes several chapters about it in his book.

Once we realize that granting everyone the freedom to disrespect everyone's privacy would actually increase physical violence, which is far worse than merely having perverts collect images of you, it makes sense to enforce privacy rules as a moral issue.

Of course, most people are not aware of that science or history. So, it becomes easy to claim utility is not good for this sort of thing. But I hope we can see how this kind of information emerges from talking about morality in objective terms, rather than subjective ones.

But you're invoking the categorical imperative here:

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.


Which clashes with Utilitarianism in this example. If the photographer is going to be greatly gratified by taking the photos and knows that no-one else will be harmed by doing this, the utilitarian argument is for taking the photos.

Here is another example:

Albert is a funeral director. He is also a necrophile who gains great pleasure from having sex with the dead. Every Tuesday night Albert locks up his funeral parlour and has sex with dead bodies.

The utilitarian has to say that Albert should keep doing this as it gives him great pleasure and does not harm anyone else.
 
