
Robin, potentially valid answers to the question are:

1. Yes.
2. No.
3. I don't know.

What's your choice?
 
Edit: What if the comatose patient isn't comatose at all, but is actually the wife of the male nurse? They both get their kicks from a certain role-play, in which she acts as if she were comatose. Their consensual sexual acts might have consequences identical to the case of the comatose patient. Please explain why the couple should be punished for their role-play as if it were not a role-play, just because you fantasize that harm is all that matters.
*sigh* Please point out where I have suggested that fantasising harm is all that matters, or that consensual sexual games should be punished.

Or anything even remotely like that.

Maybe you can tell me what "ignorant idiocy" it is for me to have said the thing you pretended I said.

Remember, ablutions to loo'ard.
 
*sigh* Please point out where I have suggested that fantasising harm is all that matters, or that consensual sexual games should be punished.
*sigh* still not understanding your own standpoint?

The general point is: actions might lead to the same consequences whether the moral agent had bad intentions or good intentions.

Leaning only on consequences to judge an action leads to the absurd conclusion that intentions are irrelevant; for an example, see above.

Once you have understood the problem, you might want to try to think of a resolution.
 
The attempt is punishable because it is an attempt to cause harm. We punish any attempt to cause harm because that is a good way of preventing harm.
So an action, a failed attempt, is bad although it does not produce any harm. Such an action is even bad if it actually produces happiness, you say.

This contradicts your original stance that an action is bad if and only if it produces harm.

You should not hold self-refuting stances.
 
*sigh* still not understanding your own standpoint?
Oh, I see. I don't understand what I am saying.

The general point is: actions might lead to the same consequences whether the moral agent had bad intentions or good intentions.

Leaning only on consequences to judge an action leads to the absurd conclusion that intentions are irrelevant
Explain exactly why you think that a person who wished to maximise happiness would consider an intention to reduce happiness as irrelevant.

How could you maximise happiness by ignoring intentions to minimise it? That does not make a lick of sense.

The commonsense and rational conclusion is that if you want to maximise happiness then intentions are very relevant.
; for an example, see above.
What example, do you imagine, illustrates this point?
Once you have understood the problem, you might want to try to think of a resolution.
You have yet to produce a problem. If I want to maximise happiness then clearly intentions are crucial.
 
The commonsense and rational conclusion is that if you want to maximise happiness then intentions are very relevant.
Of course. That's why you have to praise the male nurse for his good intentions to maximize happiness. And he actually delivers. Each sexual act on the comatose patient increases his own happiness and leaves the patient's unchanged.
 
If you claim utilitarianism respects human dignity as an indicator of the morality of decisions, then you have the burden of proof. That's because, by definition, utilitarianism respects only one indicator, namely utility.

And what is utility, herzblut? Apparently the only definition you are aware of is "the people I pay my monthly water and electric bill to."

Allow me to educate you: according to http://www.merriam-webster.com/dictionary/utility, utility is

1: fitness for some purpose or worth to some end
2: something useful or designed for use

So you are now saying the concept of human dignity 1) has no fitness or purpose in the human world and 2) is not useful. You already admitted that you think breaching dignity causes no harm. It would seem as if your opinion of human dignity is very low indeed. So why do you champion it so?
 
So an action, a failed attempt, is bad although it does not produce any harm. Such an action is even bad if it actually produces happiness, you say.

This contradicts your original stance that an action is bad if and only if it produces harm.
Look, here is a hint. Next time you claim that I have said something, quote the part where I say it.

Because this constant misrepresentation is becoming tedious.
You should not hold self-refuting stances.
You should not lie.
 
You guys do realize that this is rather off topic from the OP, right?

Perhaps you'd want to start another thread to continue the debate?
 
it explicitly is a Crime Against Sexual Self-determination.

Why is the attempt punishable? Because the mere attempt causes harm? How so? What harm?

Edit: What if the comatose patient isn't comatose at all, but is actually the wife of the male nurse? They both get their kicks from a certain role-play, in which she acts as if she were comatose. Their consensual sexual acts might have consequences identical to the case of the comatose patient. Please explain why the couple should be punished for their role-play as if it were not a role-play, just because you fantasize that harm is all that matters.

If you cannot infer the logical connection between acting to minimize harm and acting to prevent future harm then this forum really isn't for you. But don't let that stop you from posting here -- it hasn't stopped the others like you.
 
Of course. That's why you have to praise the male nurse for his good intentions to maximize happiness. And he actually delivers. Each sexual act on the comatose patient increases his own happiness and leaves the patient's unchanged.

And an orangutan would stop there.

An intelligent human, on the other hand, would continue the hypothetical scenario, and come to the following conclusions:

a) Many people would infer that allowing such behavior increases the probability of harm to their comatose relatives.

b) Many people would infer that allowing such behavior increases the probability of harm to themselves, should they lapse into a coma.

c) Many people would infer that allowing such behavior increases the probability of harm to people they care for, even the least bit, should any of them lapse into a coma.

d) Because of a, b, and c, many people would decide that overall harm would be minimized by prohibiting such behavior.

Edit: Oh, and just to head you off -- the rape does harm to the victim, in the eyes of observers, because they say it does. Who are you to tell them otherwise? Who are you to dictate what does and does not harm someone?
 
Look, here is a hint. Next time you claim that I have said something, quote the part where I say it.
Quote the part where I said what you claim I said.

Regarding the silly self-contradictions and continuous modifications of your stance, please explain your stance in clear, precise, unambiguous words. E.g.

The moral worth of a moral agent's act is exclusively determined by <complete list of all determiners and a description of how the moral agent knows what to do concretely>
 
And an orangutan would stop there.
Regarding the silly self-contradictions and continuous modifications of your stance, please explain your stance in clear, precise, unambiguous words. E.g.

The moral worth of a moral agent's act is exclusively determined by <complete list of all determiners and a description of how the moral agent knows what to do concretely>
 
*sigh* still not understanding your own standpoint?

The general point is: actions might lead to the same consequences whether the moral agent had bad intentions or good intentions.

Leaning only on consequences to judge an action leads to the absurd conclusion that intentions are irrelevant; for an example, see above.

The general point is: repeated future actions most likely will not lead to the same consequences when the agent has bad intentions as when the agent has good intentions.

Leaning on consequences of all possible future actions, or even a representative subset of future actions, leads to the useful conclusion that intentions are important because they affect the probability distribution of future outcomes.
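
To illustrate with a toy model (all probabilities and harm values below are invented, purely for illustration): intention shifts the probability distribution over the outcomes of future repetitions of an action, and with it the expected harm.

# Toy model in Python: intention shifts the distribution of future
# outcomes of a repeated action. All numbers are invented.
outcomes = {
    "good intentions": [(0.95, 0), (0.05, 10)],  # (probability, harm)
    "bad intentions":  [(0.30, 0), (0.70, 10)],
}

def expected_harm(dist):
    # Expected harm of one future repetition of the action.
    return sum(p * harm for p, harm in dist)

for intent, dist in outcomes.items():
    print(intent, expected_harm(dist))
# prints: good intentions 0.5, then: bad intentions 7.0

Identical past consequences, very different expected future harm: a judge who cares about consequences, present and future, cannot treat intentions as irrelevant.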
 
Regarding the silly self-contradictions and continuous modifications of your stance, please explain your stance in clear, precise, unambiguous words. E.g.

The moral worth of a moral agent's act is exclusively determined by <complete list of all determiners and a description of how the moral agent knows what to do concretely>

Um... the only contradictions are between what you think we said and what we actually said (common on this forum), and the only continuous modifications around here are of what you think our stance is (also common on this forum).

I can't speak for Robin (although I assume her answer will be something similar), but it is quite simple for me: The moral worth of a moral agent's act is exclusively determined by the net amount of happiness created or prevented (which is equivalent to the net amount of harm prevented or created) by the act. The moral agent knows what to do concretely by attempting such a calculation to the best of their ability given the situation.

What you seem to be hung up on are the assumptions that 1) happiness or harm levels will only be changed for those immediately affected by the act, when in reality billions of people can eventually be affected, and 2) the act is limited to the immediate slice of time after the decision, when in reality an act can affect the environment of the agent (and the agent themselves) forever after the act.

Your argument that if the patient isn't harmed then nobody is harmed, as well as your argument that a failed attempt at rape causes no harm, is indicative of these incorrect assumptions.
 
The moral worth of a moral agent's act is exclusively determined by the net amount of happiness created or prevented (which is equivalent to the net amount of harm prevented or created) by the act.
Define happiness and harm and provide an algorithm to calculate their net amount. Prove that defined happiness exists at all. Prove that such a calculation is always possible and always leads to a unique result. Provide the time scale for running the calculation. Provide contact details of the authority from which I can get the latest calculation results.

The moral agent knows what to do concretely by attempting such a calculation to the best of their ability given the situation.
He knows nothing.

Please explain to him concretely (1) whose 'happiness' and 'harm' to detect and how to measure them, (2) how to calculate an estimated net sum, and (3) how to estimate the error of this calculation. Also specify the maximum error allowed. Prove the result is always possible and unique. Prove that humans can normally perform this calculation, and specify the required education (mathematics?).

Specify how a valid calculation result then points to one particular action in any given concrete situation and how he shall act if his result is invalid because it exceeds the maximum error allowed.

Provide concrete examples of calculating the morally best action in different situations.
 
But the problem is that we cannot walk away from Omelas, nor can we guarantee that no child will, as a consequence of our society, live in suffering.

I do not see the relevance of that, I am afraid. We are talking about the adequacy of Utilitarianism as a moral system. There is no doubt that in Omelas the happiness of the greatest number is achieved: and that the harm is minimised. It follows from Utilitarianism that this is a good society based on excellent moral principles. And for me it serves to demonstrate that Utilitarianism is not good enough.

Also, Omelas is not really an instance of Utilitarianism because it assumes that people could be happy after they have seen the child. If not, then the system would not deliver happiness.

It is certainly an instance of Utilitarianism. It is a given that the people are happy, and to propose some other scenario does nothing at all to answer the point actually raised. The people in Omelas are happy. Those who leave are not, and must be added to the sum of the misery: but they are few.

Finally it is no good walking away from Omelas unless you can walk to some circumstance that would help the child.

I agree, and that is why I mentioned that the story does not deal with resistance. But this follows from a different moral intuition. Utilitarianism has no problem here, so there is no reason to walk away in the first place. Sorry, Robin, but for me your response here demonstrates that you are not in fact a Utilitarian.

That is the messy circumstance we find ourselves in - Omelas everywhere.

Agreed. But again this has no bearing on whether this is a good thing. And utilitarianism says that it is. I cannot agree.

Not at all, the two are inextricably entwined. If there is harm then happiness cannot be maximised unless the harm is reduced.

This does not really follow, as Omelas shows. The trouble is that the harm and the happiness are not located in the same person. Harm to you does not necessarily entail any reduction in my happiness at all. And for some folk it would actually increase their happiness in certain circumstances - for example if you were executed because you had killed someone close to them.

Would not the family of the comatose patient be happy knowing that strict procedures were followed within the nursing home to ensure that abuse of patients can never take place?

Yes. But what would they choose if the only way they could get care for the comatose patient was by consenting to the abuse? They might decide that death was better than dishonour, or they might not. It is not obvious to me how you do this arithmetic. And that is a fundamental problem, is it not?

This is plain wrong, Utilitarianism focusses on society as a collection of individuals, Bentham's shopkeeper for example.

I think you would need to elaborate this for me a little. As I understand it, it is essential to utilitarianism to do the arithmetic beyond the individual: indeed you bring in the family of the comatose patient as an essential component of the calculus, else it does not work at all, as you admit. So I cannot agree that the individual is central except in the trivial sense. This is like a "hive" conception of the individual: and that is very different from true individual morality. Again I do not know much about this, so if you can summarise how you see the individual as central to utilitarianism it would help me.

How is asking "what would the world be like?" focussing morality at an individual level???? Is the world an individual?

No of course not. But the difference between this kind of thought experiment and the utilitarian conception is the second half of that sentence: "if everybody did that". It is universalisability that is the crux. Utilitarianism cannot address such a question because, as you seem to strongly defend in this thread, it denies that actions can be harmful or happiness-inducing independent of the person: in this sense it is a relative concept, and so you cannot intelligibly ask the question in those terms (though this is a little at odds with the relational elements which you also bring in; that is part of my confusion, actually).

If you say we should focus on individuals and not focus on consequences then we should certainly never ask "what would the world be like?"

Outcomes matter for all philosophical systems. That is why I brought in Omelas. You seem to be conflating two things. Consequences are part of the way we compare moralities. Consequences for the self (who we want to be) and also for the other. That does not change the fact that a difference between Kant's kind of morality and Utilitarianism is that in the first the moral consequences are located within the moral agent: and in utilitarianism they are placed outside. This is not a clear distinction since it is more a spectrum than a division: but I am simplifying in a way that makes sense to me. I am sorry if that is not very clear. I have never studied philosophy and I am thinking this through as I go.

I wish people would stop saying that sexually abusing a comatose patient does not result in harm.

Or at least I wish they would back up this statement with some kind of rational argument.

The harm is not demonstrable through a utilitarian approach. I do think it is morally wrong. So do you. What I am suggesting is that this moral insight came first, and you are stretching a utilitarian analysis in order to bring this example within its scope. I do not think you can do it. Even if I give you this example by allowing you to bring in all the people who can conceivably be harmed by this nurse's action, in any definition of harm you like, I think it is clear that the arithmetic of harm is a post hoc justification. It is inherent in the system that the harm identified will have to stretch and twist to deal with individual cases. Eventually we lose sight of the word harm altogether, as it is used in ordinary English. I really do not mean to insult, but I do not honestly believe you did the sums and then came to the conclusion: I think it was the other way around.

As I have pointed out, the nurse could only get away with his actions by going to extreme measures to keep it secret. So he knows that objective harm would be revealed if the act were revealed. If revealing the act would reveal objective harm, then, clearly and unambiguously, objective harm would be done.

This is circular. He keeps it secret, yes. I think he does so because he knows it is morally wrong (or perhaps he knows that other people think it is, though he himself disagrees, and he is therefore unwilling to accept the sanctions which would be imposed... but leave that aside to keep it simple). That is not the same at all as saying he knows objective harm would be revealed, however. I think that he concludes it is wrong for the same reasons you do: and that may have nothing to do with utilitarianism. You say that he knows objective harm would be revealed: that is the question, not the answer. You cannot assume it, as you have done, and so support your case.

As I have also pointed out a society that was trying to maximise happiness would have this man doing hard time. It would enforce procedures in nursing homes that would prevent this from happening.

This is all clear and unambiguous.

I am afraid it is not clear to me at all.

So I am utterly bewildered as to how anybody could suggest that any version of Utilitarianism would condone his actions in any way.

It is because the harm is not demonstrable in utilitarian terms. It is morally wrong, we are agreed: but it is not obviously morally wrong because of the harm, and this is shown by the fact that to demonstrate the harm you have to cast the net very wide, and cast it after the case is presented, not before. As with Omelas.

I have run out of time. I think much of what I have said suggests answers to some of your other points but if not then perhaps we can continue this discussion later. And maybe in another thread, as Lonewulf suggests.
 
Define happiness and harm.
For me, something along the lines of being pleased with something and wanting it to continue, versus being displeased and wanting it to stop. For anyone else, it is up to them to define it.

and provide an algorithm to calculate their net amount.
1) Observe how happy or harmed a person is.
2) Add the value you come up with to the running total.
3) Repeat with another person. (A toy sketch of this tally follows below.)
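
A purely illustrative toy sketch of that tally in Python (the scale and the sample scores are invented, not a real measurement procedure):

# Running tally of net happiness from self-reported scores on an
# invented scale: positive = happy, negative = harmed.
reports = {"Alice": 3, "Bob": -2, "Carol": 1}  # invented sample data

net = 0
for person, score in reports.items():
    # 1) Observe how happy or harmed the person is (here: ask them).
    # 2) Add the value to the running total.
    net += score
    # 3) Repeat with another person.

print(net)  # 2 -> on balance, more happiness than harm in this sample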

Prove that defined happiness exists at all.
I define happiness to be the condition of being happy. People say they are happy. Thus, happiness exists.

Prove that such a calculation is always possible
I don't think it is always possible -- many times one can't get much data, many times people are not forthcoming about their happiness, and many times it doesn't even matter because, as I said, people are not supercomputers.

What is the point of this question?

and always leads to a unique result.
All events in the universe are unique. The results of all decisions are events. Thus all results are unique.

What is the point of this question?

Provide the time scale for running the calculation.
When my fiancee asks me to get her a glass of water, less than a second.

When a government decides whether or not to fund a mission to space, probably on the order of months or years.

What is the point of this question?

Provide contact details of the authority from which I can get the latest calculation results.

Whoever is making the calculation.

What is the point of this question?

Please explain to him concretely (1) whose 'happiness' and 'harm' to detect and how to measure them, (2) how to calculate an estimated net sum, and (3) how to estimate the error of this calculation. Also specify the maximum error allowed. Prove the result is always possible and unique. Prove that humans can normally perform this calculation, and specify the required education (mathematics?).

Specify how a valid calculation result then points to one particular action in any given concrete situation and how he shall act if his result is invalid because it exceeds the maximum error allowed.

Consider a person trying to recognize a face. Or, a person trying to walk home in a city. Or, a person writing a poem. Or, a person doing anything people do.

Please explain to them concretely (1) how to detect and measure the utility of each feature that may contribute to the overall utility of any decision during the act, (2) how to calculate a mathematical representation of the sum of those utilities, and (3) how to estimate the error of this calculation. Also specify the maximum error allowed. Prove the result is always possible and unique. Prove that humans can normally perform this calculation, and specify the required education (mathematics?).

Specify how a valid calculation result then points to one particular action in any given concrete situation and how they shall act if their result is invalid because it exceeds the maximum error allowed.

Provide concrete examples of calculating the morally best action in different situations.

My fiancee is sick and wants some water. For her to get up and get it would be painful and tiring. For me to get it is trivial. The reward she will give me in love and companionship, and the reward I get from knowing I helped someone I care about, outweigh the trivial pain and energy of my walk to the refrigerator. I decide to get her water. Note that the template of this decision is most likely hard-coded into my neural system by now and I will probably automatically help my fiancee without thinking.
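
Purely as illustration, here is that weighing with invented numbers (nothing below is a real measurement, just the shape of the comparison):

# Invented utilities for the glass-of-water example; positive values
# are happiness gained, negative values are cost or harm.
effects = {
    "her relief and comfort": +5,
    "my satisfaction from helping": +2,
    "my effort walking to the refrigerator": -1,
}

net = sum(effects.values())  # 5 + 2 - 1 = 6
print("get the water" if net > 0 else "stay put")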

Do you claim that there is an absolute moral rule for helping your fiancee get water?

A man is injured on the road. If I do not help him, his condition may worsen. If I do help him, I will be late to work. The reward I am likely to get in gratitude from him, and the reward I get from knowing that I helped someone, outweigh the harm caused by me being late to work, since my job can wait. I decide to help the man. Note that the template of this decision is most likely hard-coded into my neural system by now and I will probably automatically help injured people on the road without thinking.

Do you claim that there is an absolute moral rule for helping injured people on the road?

Many Americans are losing jobs to foreign workers. If I prevent jobs from going to foreign workers, some people will be helped and some will be harmed. If I allow jobs to go to foreign workers, some people will be helped and some will be harmed. I decide that I can't make this decision in a split second and should research the issue until I feel I know enough to make a correct decision. At that point, I will probably have many meetings with many different people who will advise me. After that, I will reach a decision I feel maximizes happiness. If there are others like me, we will combine our decisions in a way we feel maximizes happiness given the constraint of democracy.

Do you claim that there is an absolute moral rule for international economics?
 
