Off the top, it seems like you respond to any concrete objection to utilitarianism by redefining utility to fit whatever moral intuition we have at the moment.
On the contrary, I can easily demonstrate that I have consistently applied the classic Benthamite definition of utility and utilitarianism throughout. Similarly, I have always used "happy" in its normal sense of a human emotion.
You, on the other hand, have consistently suggested that Utilitarians must be happy at things that normal human psychology would make them deeply unhappy, and so you are using some non-standard definition of "happy". Similarly, you are using a definition of "maximise" that is more consistent with "particularise".
A utility monster is someone who has greater marginal utility than everyone else, under all circumstances. Even if the utility monster is already acquiring sufficient resources to survive, even if they want for nothing, their psychology is such that they would get more happiness from any unit of resources than everyone else put together. So even if I'm starving and will die without food, the utility monster would do better to get that food, because they'll get more utility from it than I could ever get over the course of my entire lifetime.
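To make the stipulation concrete, here is a minimal sketch of a naive utilitarian allocator; the utility functions are invented purely to satisfy the monster's defining property, not drawn from anywhere:

```python
# Hypothetical greedy allocator: each unit of a resource goes to whoever
# gains the most marginal utility from it. All functions and numbers are
# illustrative assumptions.

def marginal_utility_person(units_held: int) -> float:
    # Ordinary diminishing returns: each extra unit matters less.
    return 1.0 / (1 + units_held)

def marginal_utility_monster(units_held: int) -> float:
    # The monster's defining stipulation: its marginal utility exceeds
    # everyone else's, no matter how much it already holds.
    return 10.0

holdings = {"monster": 0, "starving person": 0}

# Allocate 10 units of food, one at a time, to the highest marginal gain.
for _ in range(10):
    gains = {
        "monster": marginal_utility_monster(holdings["monster"]),
        "starving person": marginal_utility_person(holdings["starving person"]),
    }
    winner = max(gains, key=gains.get)
    holdings[winner] += 1

print(holdings)  # {'monster': 10, 'starving person': 0}
```

On these stipulated functions the starving person never receives a single unit, which is the whole force of the objection.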
So surely then they will use fewer resources than everybody else put together? So no problem.
We can actually think of incurable depression as a type of negative utility monster. No matter how many resources we give them, they are still unhappy; but because they are unhappy, utility theory predicts (and you agree) that we give them a disproportionate share of resources. This seems to lead to a contradiction, insofar as those resources are wasted. Now we're not maximizing any utilitarian metric. We're following a rule tailored to a specific moral instance.
No, you have simply altered the original example. The original example merely took more resources to achieve happiness. In your new example, infinite resources would provide the same utility as no resources, so utility theory would not expend resources on a project that would not increase happiness. But then again, neither would any other ethical system.
You have also failed to take into account that each person is a producer as well as a consumer of resources.
Because we all know it is not really killing when you do it with a machine, eh?
Don't let the family know? They might become suspicious when he stops breathing and rots away to a skeleton.
... or only kill the incurably depressed without family.
In which case you have abandoned your original thought experiment and started a new one.
I would think that a truly utilitarian community would want to minimize pain; thus they would be pleased by the death of the depressed, insofar as it minimized the overall unhappiness of the community and freed up resources for more promising individuals.
Do you see what I mean? You are saying utilitarians would be pleased at something that normal human psychology would make them extremely displeased by. Thus you are using a different definition of unhappiness (or a different definition of utilitarian).
Using the normal definition of happiness and the Benthamite definition of Utilitarianism, as I have consistently done, killing an innocent man would cause widespread unhappiness and must be rejected.
If you disagree, then it seems to me that this is another example of what I mentioned at the top: your definition of utilitarianism failing to define an ethics at all.
On the contrary, it is, as I have pointed out, an example of you using your private definition of happiness. I have merely consistently applied Benthamite Utilitarianism and the normal definition of happiness.
Not always, but this is a thought experiment, so we stipulate that it is incurable as part of the experiment. Generally thought experiments are explained in basic college philosophy classes. I would have expected you to be familiar with the technique.
You do understand the difference between a thought experiment and a hypothetical, don't you?
Moreover, I'm not sure why a utilitarian wouldn't calculate the probability of a cure and weight the utility of a cure by that probability. If not killing led to a smaller balance of happiness over unhappiness than killing, after weighting by probability, I would expect a utilitarian to support the execution. (At the very least they would support it as long as they are unaware of the specifics.) In other words, if the cure was very unlikely and the utility gained if the cure exists is small, then we should expect them to support the execution.
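Here is a rough sketch of that calculation; every number in it is an illustrative assumption, not a claim about real values:

```python
# Hypothetical expected-utility comparison for the "incurable" patient.
# All figures are invented placeholders for illustration only.

p_cure = 0.01            # assumed probability a cure is ever found
u_cured = 100.0          # happiness gained if the patient is cured
u_uncured = -50.0        # ongoing unhappiness while waiting, uncured
u_execution = -10.0      # net unhappiness caused by the execution itself

# Expected utility of sparing the patient: weight each outcome by its probability.
eu_spare = p_cure * u_cured + (1 - p_cure) * u_uncured

# The outcome of executing is certain, so no weighting is needed.
eu_execute = u_execution

print(f"spare:   {eu_spare:.2f}")    # 0.01*100 + 0.99*(-50) = -48.50
print(f"execute: {eu_execute:.2f}")  # -10.00
```

On these stipulated numbers the weighted calculation favours the execution, which is exactly the conclusion the paragraph above anticipates.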
However, if you apply the classic definition of Utilitarianism and happiness to the problem, as I have consistently done throughout, you will find that you are suggesting we should pursue all the things that cause unhappiness in human psychology - pessimism, defeat, cowardice, killing the innocent. I am not sure how you think those things will produce any happiness at all, never mind maximise it.
And with no cure the root unhappiness you were trying to eliminate will simply persist.
But by looking for a cure you pursue those things that human psychology normally associates with happiness - optimism, courage, curiosity, inquiry, quest, knowledge. Searches for cures usually produce utility of their own - medical insights and technological spin-offs. Even if a cure is never found, this approach will clearly produce more happiness and must be adopted by classic Utilitarianism.
As I noted above, only if your definition of utility is a non-definition.
As noted above, I have consistently applied the Benthamite definition. You can find it in his "Principles of Morals and Legislation".
How do you distinguish pleasure from happiness?
The key distinction is between maximise and particularise.
Also, if some person is wired so that raping a comatose woman gives them a sublime and lasting happiness, I would think that a society filled with utilitarians would support the rapist performing their rape.
Unless of course they were human beings on planet Earth, in which case the majority are wired so that the very idea of rape gives them deep and lasting unhappiness. And naturally, the more sublime and lasting the rapist's happiness, the deeper and more unbearable would be the community's unhappiness.
It maximizes the rapist's utility at no expense to the comatose woman's. I'm not sure why other people would have a problem with letting the rapist maximize their personal utility and thus the community total.
Unless of course there were an alternative that would result in even more happiness, like the detection, arrest and imprisonment of a rapist. Or medical institutions immediately adopting procedures that would make the abuse of patients impossible, so that relatives of coma patients and recovered coma patients would feel reassured that nothing of this sort could ever occur.
Clearly and unambiguously this would produce much more happiness among more people; using the normal definition of utilitarianism (as I have consistently done throughout), we must clearly take the prevention and detection path.
Unless you are adding another piecemeal rule that excludes rape from allowable behaviors under your system of ethics.
We would only need a piecemeal rule if rape produced net happiness. But since rape clearly produces net unhappiness, it is by definition disallowed under the classic definition of Utilitarianism (which, as I may have forgotten to mention, I have used throughout).
Or are you duct taping another rule onto your system of ethics that also says you should honor laws, promises, and agreements even if not doing so would increase utility?
Please feel free to quote the part where I have even remotely implied such a thing. Otherwise stick to what I say, not what you pretend I say.
How do they not maximize utility? Locking up any person who has a reasonable probability of committing a crime and preventing the birth of criminals would certainly increase the utility of our society, insofar as it drastically lowers crime.
So you are proposing that when a person is found guilty of a crime, we put them and their entire family into prison for their entire life, or a sizeable proportion of it, thus boosting the prison population to many, many times its current level and requiring a tax hike of massive proportions.
Tax hikes increase happiness under your definition?
Also, since we are talking about the normal definition of happiness (as I have used throughout), human psychology normally produces unhappiness at the incarceration of guiltless people, and so you are proposing a massive increase of unhappiness in this area too.
Don't forget the unhappiness of all the guiltless people you intend to lock away. Do you have any research to indicate that locking innocent people away just in case will result in a significant lowering of crime?
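To make the bookkeeping explicit, here is a rough sketch of the ledger such a proposal has to balance; every figure is an invented placeholder, since neither of us has offered real numbers:

```python
# Hypothetical utility ledger for preventive incarceration.
# Every figure below is a placeholder assumption for illustration only.

benefits = {
    "crime reduction": +500.0,   # the claimed gain
}

costs = {
    "unhappiness of the guiltless imprisoned": -800.0,
    "unhappiness of the wider public at the injustice": -400.0,
    "tax hike to fund a vastly larger prison system": -300.0,
}

net = sum(benefits.values()) + sum(costs.values())
print(f"net utility: {net:+.1f}")  # -1000.0 on these made-up numbers
```

The point is structural: the single claimed benefit has to outweigh every cost line at once, and your proposal counts only the first entry.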
And Herzblut mentioned one particular regime that tried the sterilising route and produced an era we do not normally associate with happiness.
Also, it seems to me that if we give our educational resources to the people who are most likely to succeed, then we will have more successful people. More successful people, producing more and just being happier in general, seems consistent with my understanding of utility theory.
You have evidence that more success produces more happiness? Isn't it strange that so many successful people are now downsizing?
But there is plenty of evidence that the massive underclass you are proposing to create would be massively unhappy.
Well, a system that values duty, like Kantian ethics, would say that you honor your commitment to your children and don't work overtime.
Kant says a doctor has no duty to his patients, does he? Or that we have no duty to our fellow man? If you say so; I have not read that bit.
But something you seem to miss is that what is dutiful or honorable will vary depending on the system of ethics one adopts.
Precisely; a Utilitarian, for example, would say the opposite: that the doctor must save the patients if he is their only hope. Thousands of lives saved increase utility more than a couple of kids who don't see their father.
I wonder how your Kantian doctor's children would feel once they grew up and found what a terrible price they paid for their bedtime stories.
It almost seems like you want me to tell you what I think is the correct answer so that you can make up another rule to plug this hole in your theory.
How quaintly arrogant of you to assume you have the correct answer or that anyone else would think so.
Again, I think you fundamentally misunderstand what a thought experiment is.
You feel justified in arbitrarily changing the conditions of your thought experiments, so why shouldn't I?
Utilitarianism is ambiguous in this respect. If I define utility from the perspective of an individual's actions, then they are not immoral for actions made on limited information, but neither are they immoral for maximizing their utility at the expense of society's. It is not uncommon for individuals to value their personal utility more than that of others; they can't see from another's perspective and thus have limited information. Whereas utility viewed from an omniscient view of society can avoid personal bias as to what constitutes utility, but leads us to the conclusion that if an individual made the wrong decision from limited information, they made an immoral decision.
On the contrary, Mill is very specific and unambiguous on this point: individuals can only make a decision upon the information that they have available and are not expected to take responsibility for global utility.
This is a double bind for utilitarianism. Either sociopaths who are not aware they are hurting other people when they hurt and kill are moral, because they didn't know they were making a mistake, or people who make decisions that lead to negative consequences are immoral, because they aren't maximizing global utility. You can't have it both ways.
There is no double bind here: sociopaths are neither moral nor immoral - they are sociopaths. Every ethical system reaches the same conclusion.
It is far more honest to clearly define the system and admit its difficulties than to vaguely define the system and obscure its difficulties.
I have openly discussed the limitations of Utilitarianism elsewhere, but nothing you have said here is even remotely a difficulty peculiar to Utilitarianism.