Better the illusions that exalt us ......

We stick to reality, not to wishful thinking.

1. Please specify what kind of objective, medical harm the comatose patient will be diagnosed with.
2. Please describe the harm difference to a role play, where the woman voluntarily acts as if she were comatose.

The correct answers are

1. None.
2. None.

which clearly demonstrates how invalid your standpoint is.
That is what you call reality where you come from?

No harm difference between consensual and non-consensual sex?

Why don't you buzz off too?
 
Why don't you buzz off too?
Hey, your arguments are getting stronger. :D

So, why don't we agree you shoot me dead in order to make me buzz off? By some kind of mysterious mental magic my mere consent makes my harm surely vanish into thin air.

Oh, I forgot this only works in your wonder world, but not in reality.
 
Hey, your arguments are getting stronger. :D

So, why don't we agree you shoot me dead in order to make me buzz off? By some kind of mysterious mental magic my mere consent makes my harm surely vanish into thin air.

Oh, I forgot this only works in your wonder world, but not in reality.
Ah, now you are avoiding the point with meaningless babble.

That is probably wise given the impossibility of explaining your bizarre claim that there is no harm difference between consensual and non-consensual sex.
 
Ok. You do not like it if I bow out, so I will try once again.
Ah good. You are going to try to draw attention away from the contradiction between your earlier argument and your own moral system by going back to the beginning again.

And of course you misrepresent everything we have said right after your own false claim of misrepresentation.

Do you actually understand the concept of ethics?
Herzblut proposed a hypothetical situation. This is quite a helpful way to explore an idea, and so that is what I think we were doing. I did not know very much about the utilitarian position then, but what I understood of it seemed to say that one must, as a matter of moral good, pursue happiness and minimise harm. Since harm is defined in terms of pleasure and pain, the harm in the hypothetical situation cannot be demonstrated if the focus is on the two people most concerned. And so the pursuit of the nurse's pleasure is a moral act within the terms of the system.
Simply repeating this nonsensical claim will not make it true. You still have not told us how a person raping a comatose woman would increase the happiness of the community. It is not enough to say that something is true, you have to say why something is true.
To deal with that rather obvious consequence Utilitarianism has been modified to include wider harm than the two people directly concerned, and so we now move to the aggregate happiness of a wider group including the nurse's employers and the patient's family and, for all I know, our lizard overlords on alpha centauri.
No, as Bentham said, "the community". Please cite examples of this prior Utilitarianism that you say Bentham modified. Oh I forgot, you don't do the "justifying your claims" thing.

So if this is a consequence of your own idiolectic Utilitarianism, it is not a consequence of the system set out by the person who coined the term.

But you have not even dealt with the two people. You simply look at the nurse's brief moment of pleasure and ignore the woman, her family, her friends, his colleagues, then say "look - happiness has been maximised!!!".
It is assumed that this will be the case, and so the action is said to be immoral. There is no practical way to prove this and it is not a certain outcome at all.
So you think it a practical possibility that the community will be happy about the rape of an insensate woman? Nonsense, it is a certain outcome.
In physics people learned a lot about the laws of motion by asking themselves "what if there were no friction?". Similarly it is useful in this case to ask oneself "what if there were no harm?" or "what if there was less harm than pleasure?". And so I asked it. I do not think it is legitimate to make either of the moves made here, ie:

1. to deny the premise of the hypothetical on the grounds that the harm is self evident: friction exists, but the hypothetical case in which it does not does teach us things, and that is the point;
2. to expand the scope of the harm till it has no meaning, in order to demonstrate that all things fit with the theory.
And since none of us did these things, the point is irrelevant.

But why do you say it is illegitimate for us to ask "how does any moral system cope with this hypothetical?" How does yours?

After all, you say you will respect any choice that does not harm. And now you are asking yourself "what if there were no harm?", then similarly, by your own words, you must respect the nurse's choice. Or are these hypotheticals only useful for ethical systems you dislike?
I now see that it is the utilitarian point of view that indeed, if there is no harm then there is no moral problem. This is in keeping with what I first thought and I do not think there are any who have denied this is the outcome for a utilitarian point of view.
And your own point of view. You will respect any choice that does not cause harm. So if there is no harm then there is, according to your own ideas, no moral problem.
I do not accept this. I think that even where there was no harm the action would still be wrong.
But that is not what you said. You said you will respect any choice that does not cause harm. Why won't you explain this?
According to him the whole of the utilitarian ethic is summed up in "love thy neighbour" and "do as you would be done by", as propounded by Jesus of Nazareth. In that case he has done nothing new, and all we have is a new big word which does not have a meaning beyond ethical systems which went before.
No, he said the "spirit" of it is summed up in the Golden Rule. He said that this was the ideal perfection of Utilitarianism.

But in case you have failed to notice we don't live in an ideal or perfect world. He goes on to say that Utilitarianism is a means of making the "nearest approach" to this ideal.

Oh, by the way, if it is wrong by definition to steal someone's choice, right or wrong, should I never stop someone who is attempting to commit suicide?

Should I never kidnap someone who is in a cult and attempt to deprogram them?

Should I never throw someone a surprise birthday party?

And finally:
but I must not steal it if that person is an adult of full capacity.
So it is alright to steal the choice of an adult not of full capacity then? In a coma for example?
 
That is probably wise given the impossibility of explaining your bizarre claim that there is no harm difference between consensual and non-consensual sex.
But you know that I talk of a sexual act on a comatose woman, compared to the same physical act by the same man with the same woman who only acts as if she were comatose, don't you?

To save your philosophy you have to introduce some mysterious mental magic of mere consent to reduce harm in the second case (which is physically identical to the first case).

Tell me, just out of curiosity, is it supernatural ..eh.. beings who perform this kind of magic?

Edit: Oh, btw, you haven't even shown there is any harm to be reduced at all.
 
But you are missing an essential piece of the puzzle here -- the moral agent only needs to bring the pleasure or pain of any other entity into the equation if the moral agent thinks they will gain utility from such a consideration. We are not talking about maximizing overall happiness objectively -- we are talking about maximizing it from the standpoint of the moral agent.

It is the ultimate form of selfishness, but leads to good deeds from good people. Why? Because good people typically place great utility in considering the pleasure of others.

You are quite correct, I had not understood this was your point. You are saying that one need not consider the happiness of anyone else unless that benefits you. And that is what Bentham is saying too. I think that the problem lies in the part I have bolded. How do you define "good people"? Bentham tells us that they are moral if they act to maximise their happiness, since that is his definition of moral. You earlier said that the word moral is redundant, and so I take it that you mean people of integrity, as you defined it earlier. That is, people who follow through on their own values, whatever they may be. Which seems to mean that a nazi is doing a good deed when he murders a lot of jews. He does not think he will gain utility from considering the pleasure or the pain of the jew: he considers he will gain utility from ridding the world of jews. I am sorry if that is not what you are arguing, but it is what I understand from what you have said. And if it is correct then I think your position is consistent and it is utilitarian: I just don't happen to agree with it.



We are not expanding the scope of harm until it has no meaning. We are simply letting everyone who is affected by an action define whether or not it harms them, rather than applying some kind of an objective measurement.

How are you doing this?

What is wrong with such an approach? I really don't get why you insist that there is an absolute measure of harm -- what happens when someone claims you are harming them and your measurement says otherwise? What happens when someone insists something is not harming them and your measurement says otherwise? I asked herzblut this question and of course he dodged it, but really it is the heart of the issue here. Shouldn't people get to define what harms them?

I do not think there is an absolute measure of harm, and if you have inferred this from something I said I apologise for not being clear. Part of my objection to utilitarianism as a functional system is that there is no objective measure and there is no scale we can adopt. I do not see how we can do the "calculus", because if A says action X harms him and B says it gives him pleasure, I cannot see how we do the addition. Similarly if A says action X harms him and B and C say it gives them pleasure, then does a mere headcount settle what we should do, unless we have a scale? Does that not follow?

As I understand you, you would say that B and C should do what pleases them unless they see utility in not harming A. But if they say it gives them pleasure to do X they have already done the sums. So A will be harmed and that will be moral. And once again we are in Omelas. It is true that it is possible that in such a situation B and C and all the rest will all decide that the fate of the child bothers them to the extent that consideration of his happiness will be so uncomfortable to them that they will not allow the situation described in Omelas to arise or to persist. But there is nothing in the system which demonstrates that outcome is inevitable, and as I look around the world I have to say I do not think it is all that likely.

But you (like herzblut) refuse to explain how this could be. We have asked you two to give us some examples of actions that cause no harm yet you consider wrong.

Well, the hypothetical was the situation which Herzblut used as an example. I do not really understand how you can say he did not, since that is the point of it (if I am wrong about that, Herzblut, I apologise). On your own reasoning, if the nurse does not perceive that he gets any utility from taking into account the harm he does to anyone else, that settles it. He pursues his own pleasure, and that is perfectly moral. There are such people, as you know. If the action is kept secret then there is no harm to anyone. Nobody is experiencing pain or emotional distress at all, and the nurse is having a great time. It can be argued that it cannot be kept secret, but I do not think that is true. If your man Fritzl can keep a conscious woman and several children in a cellar for 24 years without detection, I think this nurse can quite clearly keep this much more limited action a secret too. I said earlier that, from my perspective, he keeps it a secret because he knows it is morally wrong. But on your reasoning (which is different from Robin's, I think) he keeps it a secret in order to ensure it is morally right. So long as nobody knows, there is no harm, within the terms of this system. Yet for me the action is quite clearly wrong whether there is harm or not.

So far, both of you have failed to do so. What you have done is given us examples of actions that others may consider harmless and you consider wrong. We want examples of actions you consider harmless.

I stuck with Herzblut's example because I thought it was adequate to the case. And I have explained that the point of the hypothetical was to get to what utilitarianism means. I do not consider there is no harm in that case: but on your version of utilitarianism there is none, so far as I can see. It is intrinsic to the position. Yet I can accept the proposition there is no harm and I still consider the action to be wrong. This is because I have a further test: I believe you should not steal another person's choice. The nurse is doing that and that is what makes it wrong for me even where no harm can be shown to exist. If I followed Kant it would also be wrong, because the nurse is using the woman as an object and not as a subject. That may be a variation of the same thought, I am not sure. In both cases the moral wrong is independent of harm because in this situation there is no harm.

I do not mean to imply that you do not think there is harm. I am sure you do believe that there is. All I am trying to say is that there is no demonstrable harm within the terms of the system you espouse. And this is why I think you have to stretch the term "harm" beyond what it will bear to arrive at the conclusion there is something wrong here. I find I need a second principle for the same reason.


As an example, herzblut has said that he thinks it is possible to breach human dignity, thereby making an immoral choice, without harming anyone. Yet, he has consistently ignored all our questions as to how this is possible. He has not given a single example of an action that he considers 1) to be a breach of human dignity and 2) not to harm anyone. If he can't think of any examples, given his penchant for wild hypotheticals, then why is he so sure of the position?

As I said, his hypothetical is just such an example.
 
But you know that I talk of a sexual act on a comatose woman, compared to the same physical act by the same man with the same woman who only acts as if she were comatose, don't you?
You forgot that you stipulated "voluntary".
To save your philosophy you have to introduce some mysterious mental magic of mere consent to reduce harm in the second case (which is physically identical to the first case).
There is no harm in the second case, since the act is voluntary. What nonsensical verbal gymnastics are you now using to state there is harm in the second case? There is none. It is consensual. The sex in the first case is non-consensual. That is the difference between them.

I don't understand why you don't understand that.

I don't understand why you insist that there is no harm difference between consensual and non-consensual sex.

The rest of your post is, as far as I can see, just gibberish.

None of this avoids the fact that to save your philosophy you have to explain how rape of a comatose woman could be construed as increasing the happiness of the community.

(Oh and by the way, you have noticed, haven't you, that Fiona is now claiming the woman in your example is harmed?)
 
As an example, herzblut has said that he thinks it is possible to breach human dignity, thereby making an immoral choice, without harming anyone. Yet, he has consistently ignored all our questions as to how this is possible. He has not given a single example of an action that he considers 1) to be a breach of human dignity and 2) not to harm anyone. If he can't think of any examples, given his penchant for wild hypotheticals, then why is he so sure of the position?
As I said, his hypothetical is just such an example.
Oh dear, I spoke too soon. Now you are back to claiming the woman was not harmed.

Can you make up your mind on this one? Just find a position and stick with it.
 
Ah good. You are going to try to draw attention away from the contradiction between your earlier argument and your own moral system by going back to the beginning again.

I assume that is your honest perception of what I have said and you are entitled to it, of course. I see no contradiction at all, but there you are :)

And of course you misrepresent everything we have said right after your own false claim of misrepresentation.

I do not think I have misrepresented you. It is true that earlier I conflated your view with rocketdodger's, and I should not have done so. I did not at first see that you were arguing two different versions of utilitarianism, and I have acknowledged that.

Do you actually understand the concept of ethics?

I think I do. Do you?

Simply repeating this nonsensical claim will not make it true. You still have not told us how a person raping a comatose woman would increase the happiness of the community. It is not enough to say that something is true, you have to say why something is true.

Well I think you misunderstand Bentham, and you think I do. I have shown why I think it is reasonable to read it the way I do. You disagree. Rocketdodger apparently does not, if I read his last post correctly. Bentham states that we are under the control of "pleasure and pain". Increasing our pleasure is moral behaviour by definition. Increasing our own pleasure will increase the sum of human happiness. I see nothing at all in Bentham to suggest that the actions of the individual should increase the happiness of the community: or indeed should take any account of it at all. Where in his list of pleasures and pains do you find this? It is true that he includes the pleasures and pains of benevolence and malevolence as "extra-regarding": but only insofar as the contemplation of the pleasure or pain of another gives utility to the agent himself.

No, as Bentham said, "the community". Please cite examples of this prior Utilitarianism that you say Bentham modified. Oh I forgot, you don't do the "justifying your claims" thing.

I did not say Bentham modified it, because I do not think he did. I think Mill did. We have already discussed Bentham's use of the word community, and I found it a bit ambiguous. I have thought about this again and I have a better understanding now, I think.

Bentham said that the community is a "fictitious body" and it means nothing more than "the sum of the interests of the several members who compose it". I mentioned earlier that it was possible to resolve what seemed to me to be a confusion, if we assumed that "fictitious body" meant something akin to "corporate person". I am now fairly convinced that he did mean this. As I see it, Bentham means exactly what he says when he says the individual must maximise his own happiness, and that is the definition of morality. Happiness is equated with pleasure, and I do not see anything which suggests that means more than the ordinary use of the word implies. For the individual that is the end of the matter.

But as you rightly said, he was largely concerned with law and government. In order to apply this insight to the development of a legal system he assumes that the "community" can be characterised as a "fictitious body". I base this on this passage: "It is in vain to talk of the interest of the community, without understanding what is the interest of the individual. A thing is said to promote the interest, or to be for the interest, of an individual, when it tends to add to the sum total of his pleasures: or, what comes to the same thing, to diminish the sum total of his pains. An action then may be said to be conformable to the principle of utility, or, for shortness sake, to utility, (meaning with respect to the community at large) when the tendency it has to augment the happiness of the community is greater than any it has to diminish it." That is the same kind of concept as a "corporate person", and he presumes that this "body" can pursue its own "happiness" in exactly the same way as an individual can. In effect he is anthropomorphising the community.

He argues that the right way to construct a legal system is to do what the individual does, and to maximise the happiness of the community. Here he has a problem, because the community is not able to experience pleasure nor is it able to act: only people can do those things. So in order to give effect to this idea he needs the calculus, and an account of the various sanctions and different sensibilities which have a bearing on how acts should be judged and sanctions imposed. But none of this has any bearing on the actions of individuals at all; it is to do with the actions of individuals in their capacity as legislators or judges or that kind of thing.


So if this is a consequence of your own idiolectic Utilitarianism, it is not a consequence of the system set out by the person who coined the term.

We will just have to differ about that, I am afraid.

But you have not even dealt with the two people.

Yes, I have.

You simply look at the nurse's brief moment of pleasure and ignore the woman,

No, I have taken the woman into account. I have to really, in order to get to "two", you see. She is not harmed because she is not conscious. So when we confine it to two, he has increased his pleasure and he has not diminished hers. Happiness has, indeed, been maximised :) She has, of course, been wronged, in my own view. But she has not been harmed.

her family, her friends, his colleagues then say "look - happiness has been maximised!!!".

No this is where we move on to the wider group. I do not think that has anything to do with Bentham's utilitarianism as it relates to the conduct of individuals, for the reasons I have given. It does have a bearing on the construction of a legal system.

So you think it a practical possibility that the community will be happy about the rape of an insensate woman? Nonsense, it is a certain outcome.

I think they will be indifferent if they do not know about it. I think it is not possible to derive the inevitability of the harm from the principles laid down, because harm is said to be subjective and only accessible by asking the person concerned. I think in order to assert it is a certain outcome you must abandon that subjectivity in favour of some other insight. This is precisely what thinking about the hypothetical helped me to understand.

And since none of us did these things, the point is irrelevant.

Well we will have to disagree about that.

But why do you say it is illegitimate for us to ask "how does any moral system cope with this hypothetical?" How does yours?

I do not understand what you are asking me here. I cannot find the bit you quote. But if you are asking how my own view copes with the situation of the nurse and the patient, it does so by using a second principle beside the first.

After all, you say you will respect any choice that does not harm. And now you are asking yourself "what if there were no harm?", then similarly, by your own words, you must respect the nurse's choice. Or are these hypotheticals only useful for ethical systems you dislike?

No. I have two principles, not one. I will respect any choice which does not harm, so long as it is in accord with what I am presently describing as the necessity not to steal another person's choice. The nurse clearly steals the patient's choice, and that is the important factor here. Or in Kant's terms, he treats the patient as an object, which is wrong. I do not pretend there are no problems with my formulation. I have said that my ideas are not a fully developed system: but I find them useful in a variety of situations: this is one of them.

And your own point of view. You will respect any choice that does not cause harm. So if there is no harm then there is, according to your own ideas, no moral problem.

But that is not what you said. You said you will respect any choice that does not cause harm. Why won't you explain this?

I have explained this.

No, he said the "spirit" of it is summed up on the Golden Rule. He said that this was the ideal perfection of Utilitarianism.

But in case you have failed to notice we don't live in an ideal or perfect world. He goes on to say that Utilitarianism is a means of making the "nearest approach" to this ideal.

How?

Oh, by the way, if it is wrong by definition to steal someone's choice, right or wrong, should I never stop someone who is attempting to commit suicide?

Correct. It is perfectly legitimate to try to persuade them they are making the wrong choice: and in certain circumstances it might be legitimate to prevent them from committing suicide (for example if you are in a state where it is illegal): and it is reasonable to openly take action to stop them because of your perception that they are not of sound mind, or for some other reason. But if, for example, you appeared to go along with their choice yet substituted some placebo for the sleeping pills they thought they were taking, then that would be wrong. It would not completely steal their choice, since they could still do it later, but I think it would be wrong to deceive them.

Should I never kidnap someone who is in a cult and attempt to deprogram them?

This is admittedly a difficult one for me, but on balance no, I do not think you should. If you have reason to believe that person is not making a true choice then you might be justified, because you cannot steal a choice they are not making. But in practice I think that is very difficult to establish.

Should I never throw someone a surprise birthday party?

I am inconsistent in this: I do not see it is wrong to do this, though you have certainly stolen their choice. Though I would hate it if someone did it to me, and so I reserve the right to be furious if someone steals my own choice in that way :)

And finally:

So it is alright to steal the choice of an adult not of full capacity then? In a coma for example?

It is not automatically ok to steal the choice of a child or an adult without full capacity. It is almost always wrong (and thank you for the birthday example which led to that qualification) to steal the choice of a fully capable adult, and it is usually wrong to steal the choice of someone not of full capacity. If, as I have suggested, there is reason to believe the person is not of sound mind, and there is no open way to intervene, that might be one example. Such circumstances are very rare indeed, and being in a coma is not one of them. Another instance where I think it is ok to steal choice sometimes is in the case of a child. For example it is no good putting a 2 year old into a sweet shop and asking him to choose: it only causes him grief. It is better to reduce the range of choice by offering one of two or three things he likes. That is the kind of example I had in mind. It is certainly not my view that a decision to steal a choice does not need very strong justification.
 
Oh dear, I spoke too soon. Now you are back to claiming the woman was not harmed.

Can you make up your mind on this one? Just find a position and stick with it.

I am not able to understand what you find difficult in this. Within the terms of utilitarianism it is not demonstrable that the woman is harmed. That has no bearing on my own view at all: I am merely exploring the system. The system says that such harm is subjective, and we cannot ask her, so we cannot know the answer. If you assume harm then you must be basing that on something other than subjective report.
 
There is no harm in the second case, since the act is voluntary.
Prove that every voluntary act is harmless.

The sex in the first case is non-consensual. That is the difference between them.
Prove that every non-consensual act is harmful.

Prove that every act is less harmful if voluntary. Explain why shooting me to death is less harmful to me just because I gave my consent.
I don't understand why you don't understand that.
Try harder.
 
You are quite correct, I had not understood this was your point. You are saying that one need not consider the happiness of anyone else unless that benefits you. And that is what Bentham is saying too. I think that the problem lies in the part I have bolded. How do you define "good people"? Bentham tells us that they are moral if they act to maximise their happiness, since that is his definition of moral. You earlier said that the word moral is redundant, and so I take it that you mean people of integrity, as you defined it earlier. That is, people who follow through on their own values, whatever they may be. Which seems to mean that a nazi is doing a good deed when he murders a lot of jews. He does not think he will gain utility from considering the pleasure or the pain of the jew: he considers he will gain utility from ridding the world of jews. I am sorry if that is not what you are arguing, but it is what I understand from what you have said. And if it is correct then I think your position is consistent and it is utilitarian: I just don't happen to agree with it.

Err, sorry, when I said "good" I meant "what most people consider good," as in someone you wouldn't mind living next door to. Yes the nazis might have been people of integrity (although I doubt that very much) but that doesn't mean I would want them as friends!

That, really, is the whole point of my position. I don't think labeling people as moral or immoral really gets us anywhere -- what matters is whether or not you want to interact with them.

How are you doing this?

Well.. if something I do harms them, and they speak up, then I will consider what they have to say seriously rather than dismiss it as if I know better than them. It is that simple.

I do not think there is an absolute measure of harm, and if you have inferred this from something I said I apologise for not being clear. Part of my objection to utilitarianism as a functional system is that there is no objective measure and there is no scale we can adopt. I do not see how we can do the "calculus", because if A says action X harms him and B says it gives him pleasure, I cannot see how we do the addition. Similarly if A says action X harms him and B and C say it gives them pleasure, then does a mere headcount settle what we should do, unless we have a scale? Does that not follow?

Only if you think the results of X have an equal effect on everyone. But why would they? I don't pretend to think that this calculation is simple at all. Luckily, people have developed template solutions to the most common and important calculations -- such as whether you should kill someone, whether you should help someone in need, whether you should beat or love your child, whether you should rape comatose patients, etc.

As I understand you, you would say that B and C should do what pleases them unless they see utility in not harming A. But if they say it gives them pleasure to do X they have already done the sums. So A will be harmed and that will be moral. And once again we are in Omelas. It is true that it is possible that in such a situation B and C and all the rest will all decide that the fate of the child bothers them to the extent that consideration of his happiness will be so uncomfortable to them that they will not allow the situation described in Omelas to arise or to persist. But there is nothing in the system which demonstrates that outcome is inevitable, and as I look around the world I have to say I do not think it is all that likely.

Well, it is pretty common in developed countries actually. Take the U.S., for instance -- a significant portion of the population opposes the rape of comatose patients, even if they will never know the victim or anybody who knows the victim, because just the thought that something like that is going on in their environment makes them very uncomfortable, i.e. has much negative utility for them. This is what herzblut is incapable of recognizing, and I am glad to see that you at least finally understand, even if you don't agree.

But actually I agree with you in that I wouldn't want to take my chances by just "assuming" the neighboring tribe will see the utility in not bombing my tribe -- at least until I learn to trust them.



for me the action is quite clearly wrong whether there is harm or not.

Well my original argument in this thread was that you probably do perceive harm, and that is why you think it is wrong. Why do you think stealing a choice is wrong? I bet if you reduce your answer sufficiently it will boil down to some kind of harm being caused.


I do not mean to imply that you do not think there is harm. I am sure you do believe that there is. All I am trying to say is that there is no demonstrable harm within the terms of the system you espouse. And this is why I think you have to stretch the term "harm" beyond what it will bear to arrive at the conclusion there is something wrong here. I find I need a second principle for the same reason

Well I see what you are saying, but don't forget that to a utilitarian "harm" is merely negative utility, so really we aren't stretching our definition.
 
I am not able to understand what you find difficult in this. Within the terms of utilitarianism it is not demonstrable that the woman is harmed. That has no bearing on my own view at all: I am merely exploring the system. The system says that such harm is subjective, and we cannot ask her, so we cannot know the answer. If you assume harm, then you must be basing that on something other than subjective report.

Yeah but don't forget that others can be harmed regardless of whether the victim is harmed -- you just said you understood that concept!

There are many reasons the rape would generate negative utility for me even if I would never meet the victim. Namely, the fear that such a thing might happen to the people I care about should they fall into a coma. So the victim's utility change is irrelevant to my utility change (unless the nurse could somehow be magically constrained to only do this to that one comatose woman and nobody else, etc.)

It is analogous to the scientific conundrum where it is impossible to observe something without changing it. As soon as someone else learns about the "system" (which is initially just the nurse and patient here) they become part of the "system", and you can no longer say with any confidence that no harm is done.
 
I see nothing at all in Bentham to suggest that the actions of the individual should increase the happiness of the community
You mean, apart from the fact that he clearly says so?
IX. A man may be said to be a partizan of the principle of utility, when the approbation or disapprobation he annexes to any action, or to any measure, is determined by and proportioned to the tendency which he conceives it to have to augment or to diminish the happiness of the community: or in other words, to its conformity or unconformity to the laws or dictates of utility.

Jeremy Bentham - Principles of Morals and Legislation
You quoted it yourself, so you have no excuse not to see it.

Bentham is not being unclear or ambiguous here. So it is clearly you who have misunderstood Bentham.

I think that pretty much invalidates the rest of what you said.
 
Proof each voluntary act is harmless.
A "want" is by definition something that will increase the person's utility, so if the action involves no physical harm then clearly no harm is involved.
Proof each non-consensual act is harming.
Conversely something we don't "want" is something that will decrease our utility. So a non-consensual act will clearly involve a greater decrease in utility beyond any physical harm that it may give.
Proof each act is less harming if voluntary. Explain why shooting me to death is less harmful to me just because I gave my consent.
Because if being shot to death was genuinely a "want", then continued life had no further utility to you. So if I shoot to death someone whose life was a utility to them, then clearly the harm to them would be greater than the harm to you, since I have deprived them of something that would have provided continued happiness, on top of the obvious physical harm.

An example of someone who genuinely wanted to be shot to death would be a person about to burn to death, with no possibility of escape.

Just think about why you have never asked someone to shoot you to death and you will get the idea.
Try harder.
Trying...trying...trying... Nope. Still rape - harmful, consensual sex - harmless.

Checking with other people ... checking...checking... Nope. They still all agree with me.

It still seems to be only you who thinks there is no harm difference between consensual sex and rape.
 
Err, sorry, when I said "good" I meant "what most people consider good," as in someone you wouldn't mind living next door to. Yes, the Nazis might have been people of integrity (although I doubt that very much), but that doesn't mean I would want them as friends!

That, really, is the whole point of my position. I don't think labeling people as moral or immoral really gets us anywhere -- what matters is whether or not you want to interact with them.


I think it is OK to define moral as "people I wouldn't mind interacting with" if it pleases you. As I said, it is a stipulative definition, and it has the potential to make things less clear, but now that I know what it is, it is fine. I do not really see why you wish to ditch the term "good" because, as you have just shown, this is the ordinary word we use to describe what we otherwise call moral, and it is easier to use it than not to. I imagine you have some good reason for wishing to change the language, but I am generally reluctant to do this unless the reasons are compelling. I think the way we use words tells us a lot about how we think, if we let it. If we do not show the compelling reasons, or explain the stipulative definition, we are in danger of "semantics" in the pejorative sense of that word (semantics is not always a bad thing, but if the definition hides assumptions or reduces clarity then it is not helpful). But it is always good to reach a common understanding when you have a peculiar definition.

Well.. if something I do harms them, and they speak up, then I will consider what they have to say seriously rather than dismiss it as if I know better than them. It is that simple.

If all you mean is that you ask the person, it does not help us if they cannot answer, does it? That is the point of employing a hypothetical - it lets us explore the limits of a position. I think we are agreed about the respect for others which is implicit in your position. But I wonder if that is the end of the matter. If a young teenager says it will harm her if she is not allowed to go to a nightclub next Saturday, does that settle it? So far as the harm goes, yes it does. She will be unhappy: that is harmful by utilitarian definition. But most of us would not respect the parent who allowed that to be the final answer in every case. It is easy to say that the harm to the teenager is less than the harm to the parent of an evening of worry. But I do not think you can prove it, because I do not think there is a measure we can use. Indeed, a lot of arguments with teenagers end up just like that, and it is not easy to win if you found your case on harm, in my experience. I think there is more to it. It is that simple.


Only if you think the results of X have an equal effect on everyone. But why would it? I don't pretend to think that this calculation is simple at all. Luckally, people have developed template solutions to the most common and important calculations -- such as whether you should kill someone, whether you should help someone in need, whether you should beat or love your child, whether you should rape comatose patients, etc.

No. Bentham specifically states that X will have differential effects on people, so I do not assume the effects are equal on everyone. But that is my point. If you cannot do the sums, then you adopt a "template". This is the point of law, and also of Mill's secondary principles, I think. What I am contesting is the idea that those templates exist because someone, somewhere has done the sums. I do not think they have, because I do not think it is possible. So the templates are based on something else. Bentham would like those templates (law, for example) to be derived from doing those sums, and he thinks that would lead to better results. Perhaps it would. Since it is impossible, the point is moot.

Well, it is pretty common in developed countries actually. Take the U.S., for instance -- a significant portion of the population opposes the rape of comatose patients, even if they will never know the victim or anybody who knows the victim, because just the thought that something like that is going on in their environment makes them very uncomfortable I.E. has much negative utility for them. This is what herzblut is incapable of recognizing and I am glad to see that you at least finally understand even if you don't agree.

But actually I agree with you in that I wouldn't want to take my chances by just "assuming" the neighboring tribe will see the utility in not bombing my tribe -- at least until I learn to trust them.

It is pretty common in all countries, actually. That is part of the way human beings are made, IMO. But it is only part of how we are made, and so it is also pretty common to behave as if we do not give a toss. And this is because we often don't. That is true both within countries and between them.

Well my original argument in this thread was that you probably do perceive harm, and that is why you think it is wrong. Why do you think stealing a choice is wrong? I bet if you reduce your answer sufficiently it will boil down to some kind of harm being caused.

I understand that you are assuming that I am really basing my view on some kind of harm: this is why I say you are stretching the definition beyond what I can accept to make your theory work. I can only ask you to accept what I say. I do not think stealing someone's choice necessarily causes them harm. Bentham would wave away this objection by saying it is because it causes me harm. I disagree, but I cannot prove this any more than he can prove the reverse. For me this is about a second principle which is separate from the concept of harm. I find I need it to build a moral rule of thumb. I need it in the hypothetical case we have been discussing because, if there is no demonstrable harm, I still think the rape is wrong, and I need a principle beyond harm to get there. You say you don't, and that is fine. But you have not shown me where the harm is if the rape is secret and the nurse is fine with it. I do not think you can show it is there. Yet I still think the woman has been wronged. I am repeating myself, sorry. But I do not understand how your position works, truly.

Well I see what you are saying, but don't forget that to a utilitarian "harm" is merely negative utility, so really we aren't stretching our definition.

Where is the "negative utility" in the situation of the secret rape of a comatose woman by a nurse who is not made unhappy at all by the act? That is the problem.
 
@ Robin. You quote out of context and I have already explained why I disagree. We will have to leave it at that, I think :)
 
Ah Fiona... once again playing with hypotheticals to make utilitarians and skeptics seem like something they're not so she can feel like her own different poorly communicated values trump.

Predictable.

Unconvincing to anyone but herself... but predictable. So what are you using to determine harm? And why would you take away the nurse's choice, since you respect things that cause no harm? What is the harm... how have you determined it... what do you do in that situation, and why, based on what, and why do you think it's different from what Robin or Rocketdodger would do? Why does it cause harm when you are the person considering morality, and why don't you allow it to "cause harm" when utilitarians are considering the situation? That's a mighty bizarre hypothetical to change depending on who is viewing it, don't you think?

Why don't you just say what you would do in the situation and why... how does that mesh with your statement that you must respect something that causes no harm and that it's wrong to take away another's choice? A comatose woman doesn't have the brain to make a choice. A sexually aroused man does.

How do you think what you do is different or better than what a utilitarian or uber skeptic would do... and based on what criteria-- don't use a straw man... use the actual criteria Robin has repeatedly gone out of his way to give you.
 
Sorry about the slow response, I was away at a conference.

Off the top, it seems like you respond to any concrete objection to utilitarianism by redefining utility to fit whatever moral intuition we have at the moment. In essence, you've defined utility theory as maximizing good or even going so far as maximizing ethical behavior. This doesn't serve as a defense of any theory insofar as it is a vacuous tautological definition.

Generally, utility theory is treated as maximizing happiness or pleasure. The aim is that overall utility is maximized, but the theory (as it is used in philosophical circles) normally allows the trade-off of the utility of one person for the greater utility of another. If you intend something different from this, I would appreciate it if you would clearly define what you mean by maximizing utility.

I should hardly think it necessary to give the first type anything; they will probably seek it out or create it themselves.
A utility monster is someone who has greater marginal utility than everyone else, under all circumstances. Even if the utility monster is already acquiring sufficient resources to survive, even if they want for nothing, their psychology is such that they would get more happiness from any unit of resources than everyone else put together. So even if I'm starving and without food I'll die, the utility monster would do better to get that food, because they'll get more utility from it than I could ever get over the course of my entire lifetime.

And yes, I think it is good to devote disproportionate resources to those in incredible pain.
We can actually think of incurable depression as a type of negative utility monster. No matter how many resources we give them, they are still unhappy, but because they are unhappy, utility theory predicts (and you agree) that we give them a disproportionate share of resources. This seems to lead to a contradiction, insofar as those resources are wasted. Now we're not maximizing any utilitarian metric. We're following a rule tailored to a specific moral instance.

Firstly, this would cause lifelong unhappiness to the people who have to kill an innocent man.
Let a machine do it.
Secondly, it would cause unhappiness to his family.
Don't let the family know, or only kill the incurably depressed without family.
Thirdly, it would cause unhappiness to the general community who generally have the feeling that we should never give up on a person in pain.

I would think that a truly utilitarian community would want to minimize pain, thus they would be pleased by the death of the depressed, insofar as it minimized the overall unhappiness of the community and freed up resources for more promising individuals.

If you disagree, then it seems to me that this is another example of what I mention at the top: folding ethics into your definition of utilitarianism. You say we maximize utility, but also that maximizing utility entails living in a community that has specific rules (like never giving up). It is those rules that constitute a system of ethics in your explanation, not utility theory. Which leaves you not with a coherent theory of ethics, but a hodgepodge of different rules to fit different situations.

Fourthly incurable doesn't mean always incurable. But if we simply bump off everybody who suffers from this illness we would never find a cure and we would be guaranteeing an endless continuation of unhappiness.
Not always, but this is a thought experiment, so we stipulate that it is incurable as part of the experiment. Generally thought experiments are explained in basic college philosophy classes. I would have expected you to be familiar with the technique.

Moreover, I'm not sure why a utilitarian wouldn't calculate the probability of a cure and multiply that by the expected utility. If, after weighting by probability, not killing leads to less net happiness than killing, I would expect a utilitarian to support the execution. (At the very least they would support it as long as they are unaware of the specifics.) In other words, if the cure is very unlikely and the utility gained if the cure exists is small, then we should expect that they support the execution.

(I.e., they support the execution in theory but are unaware of the specifics of any particular execution.)
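The probability-weighting step described above can be sketched numerically. Every figure here is invented solely to illustrate the shape of the argument; nothing in the thread assigns real values.

```python
# Hypothetical expected-utility comparison for the "incurable doesn't mean
# always incurable" objection. All numbers are invented for illustration.

def expected_utility(p_cure, u_cured, u_uncured):
    """Expected utility of continued life, given some chance of a cure."""
    return p_cure * u_cured + (1 - p_cure) * u_uncured

p_cure = 0.01        # cure is very unlikely
u_cured = 100.0      # utility if a cure arrives
u_uncured = -50.0    # ongoing suffering if it never does
u_execution = -10.0  # one-off disutility of the execution itself

keep_alive = expected_utility(p_cure, u_cured, u_uncured)  # about -48.5
print(keep_alive < u_execution)  # prints True: on these numbers the
                                 # calculus favours the execution
```

The point is structural, not numerical: once the cure is unlikely enough, a straightforward expected-utility calculation can endorse the execution, which is exactly the conclusion being pressed.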

So killing him would reduce the utility, not increase it.
As I noted above, only if your definition of utility is a non-definition.

Again I cannot comment until someone will explain how an act that brings temporary pleasure to one individual at the expense of widespread and lasting unhappiness could ever at any stretch at all be called "maximising utility".

He is particularising pleasure, not maximising utility.

This is an example of something that is clearly and unambiguously wrong under any version of Utilitarianism.
How do you distinguish pleasure from happiness?

Also, if some person is wired so that raping a comatose woman gives them a sublime and lasting happiness, I would think that a society filled with utilitarians would be in support of the rapist performing their rape. It maximizes the rapist's utility at no expense to the comatose woman's. I'm not sure why other people would have a problem with letting the rapist maximize their personal utility, and thus the community total. Unless you are adding another piecemeal rule that excludes rape from allowable behaviors under your system of ethics.

In this example you are moving utility around. You are lessening the utility of the shopkeeper to increase the utility of your child. So this is not maximising utility, merely rearranging it. If you are supposed to care for another's happiness as much as your own, as J.S. Mill says, then this is not Utilitarianism.
Again, this is a thought experiment. Thus the example stipulates that more net happiness occurs from breaking the law, agreement, etc. than not. The candy bar may be one of the most meaningful experiences that child has the whole month, whereas the shopkeeper may not even know he was robbed. So we should expect that even under J.S. Mill's definition you steal. You value your utility just as much as another's, but in this case you simply get a lot more utility from your action than the other loses. Are you saying that you can't make quantitative comparisons/evaluations in utility theory? Or are you duct-taping another rule onto your system of ethics that says you should honor laws, promises, and agreements even if not doing so would increase utility?

Since none of these suggestions would maximise utility I would have to say we do not lock up innocent people, we do not sterilise criminals and we do provide educational opportunities for under-achievers.
How do they not maximize utility? Locking up any person that has a reasonable probability of committing a crime and preventing the birth of criminals would certainly increase the utility of our society, insofar as it drastically lowers crime. Should I add no prior restraint as an additional rule tacked on to your definition of utility?

Also, it seems to me that if we give our educational resources to people that are most likely to succeed, then we will have more successful people. More successful people, producing more, and just being happy in general seems to be in coherence with my understanding of utility theory.

I am not sure how any moral system would handle that? Which course of action are you suggesting is the dutiful or honourable one?

Well, a system that values duty like Kantian ethics, would say that you honor your commitment to your children and you don't work overtime. But something you seem to miss is that what is dutiful or honorable will vary depending on the system of ethics one adopts. It almost seems like you want me to tell you what I think is the correct answer so that you can make up another rule to plug this hole in your theory.

Maybe he should find another doctor to work with.
Again, I think you fundamentally misunderstand what a thought experiment is.

No, of course not. I am just puzzled as to why you think Utilitarianism would suggest this.

I don't know of any ethical system that can turn us into omniscient super-heroes, we are humans, we do our best.
Utilitarianism is ambiguous in this respect. If I define utility from the perspective of an individual's actions, then they are not immoral for decisions made on limited information, but neither are they immoral for maximizing their utility at the expense of society's. It is not uncommon for individuals to value their personal utility more than that of others; they can't see from another's perspective and thus have limited information. Whereas utility viewed from an omniscient view of society can avoid personal bias as to what constitutes utility, but it leads us to the conclusion that if an individual made the wrong decision from limited information, they made an immoral decision.

This is a double bind with utilitarianism. Either sociopaths that are not aware they are hurting other people when they hurt and kill are moral because they didn't know they were making a mistake, or people who make decisions that lead to negative consequences are immoral because they aren't maximizing global utility. You can't have it both ways.

And in turn I am very curious to see how you propose to deal with my replies.

It seems to me that you've defined utilitarianism using the following rules:
#1 Maximize the utility of yourself and others.
#2 Never give up on people with very large marginal negative utility
#3 Don't allocate too many resources to people with very large positive marginal utility.
#4 Decrease your personal utility upon the execution of people if that execution would otherwise increase overall social utility.
#5 Decrease your personal utility upon the rape of any comatose women if that rape would otherwise increase overall social utility.
#6 If lying, cheating, or breaking laws will increase overall social utility decrease your personal utility to offset any global gains in utility.
#7 If policies are instituted that will prevent people from committing crimes in the future decrease your personal utility to offset any gains from those policies.
#8 If educational resources are distributed to the people most likely to benefit from them decrease your personal utility to compensate for any global gains in utility.
#9 Gain maximal utility from performing whatever action is most dutiful in any particular set of circumstances.
#10 Actions made that increase global utility are ethical, but individual decisions that decrease global utility are not unethical.
#11 Additional rules may be added as additional objections are fielded.

Do you see the problem with this approach? You completely dodge the issue of what is ethical and just define it to justify your theory as necessary. If we are going to have a coherent discussion on this topic you are going to need to come up with a clear, consistent, general, and simple definition of utilitarianism. Moreover, it seems likely that any well defined system will have problems. It is far more honest to clearly define the system and admit its difficulties than to vaguely define the system and obscure its difficulties.
 
@ Robin. You quote out of context and I have already explained why I disagree. We will have to leave it at that, I think :)
What have I quoted out of context? You? Or Bentham? I can't see how the context helps you; you clearly said that you couldn't see anything at all in Bentham to suggest that the actions of the individual should increase the happiness of the community, or indeed should take any account of it at all.

I demonstrated that Bentham explicitly said so. The reason you gave before was SEP's flawed conjecture that Bentham's definition of sympathetic sensibility somehow made him a proponent of psychological egoism.

I pointed out why this was invalid before (my policeman example) and you never responded to this.

I am puzzled as to why you insist on this point when it is so clearly contradicted by Bentham's own clear statement.
 