Sorry about the slow response; I was away at a conference.
Off the top, it seems like you respond to any concrete objection to utilitarianism by redefining utility to fit whatever moral intuition we have at the moment. In essence, you've defined utility theory as maximizing good, or even as maximizing ethical behavior. That doesn't defend any theory, because it is a vacuous, tautological definition.
Generally, utility theory is treated as maximizing happiness or pleasure. To be clear, it is overall utility that is maximized, but the theory (as it is used in philosophical circles) normally allows trading off one person's utility for another's greater utility. If you intend something different from this, I would appreciate it if you would clearly define what you mean by maximizing utility.
I should hardly think it necessary to give the first type anything; they will probably seek it out or create it themselves.
A utility monster is someone who has greater marginal utility than everyone else, under all circumstances. Even if the utility monster is already acquiring sufficient resources to survive, even if they want for nothing, their psychology is such that they would get more happiness from any unit of resources than everyone else put together. So even if I am starving and will die without that food, the food should still go to the utility monster, because they will get more utility from it than I could ever get over the course of my entire lifetime.
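If it helps, here is a toy sketch (in Python) of what "greater marginal utility under all circumstances" amounts to. The functions and multipliers are made up solely to illustrate the definition, not taken from anything either of us has claimed.

    # Toy illustration of a utility monster (all numbers invented for illustration).
    def monster_utility(units):
        return 1000 * units   # the monster gains enormous utility from every unit, always

    def my_utility(units):
        return 10 * units     # I gain far less, even when I am starving

    # For the last unit of food, the utilitarian calculus hands it to the monster,
    # even though I will die without it.
    print(monster_utility(1), my_utility(1))  # 1000 versus 10

Because the monster's gain per unit always dominates, the calculus awards it every contested unit.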
And yes, I think it is good to devote disproportionate resources to those in incredible pain.
We can actually think of someone with incurable depression as a kind of negative utility monster. No matter how many resources we give them, they remain unhappy, yet because they are unhappy, utility theory predicts (and you agree) that we should give them a disproportionate share of resources. This seems to lead to a contradiction, insofar as those resources are wasted. Now we're not maximizing any utilitarian metric; we're following a rule tailored to a specific moral case.
Firstly, this would cause lifelong unhappiness to the people who have to kill an innocent man.
Let a machine do it.
Secondly, it would cause unhappiness to his family.
Don't let the family know, or only kill the incurably depressed without family.
Thirdly, it would cause unhappiness to the general community who generally have the feeling that we should never give up on a person in pain.
I would think that a truly utilitarian community would want to minimize pain; thus they would be pleased by the death of the depressed, insofar as it minimized the overall unhappiness of the community and freed up resources for more promising individuals.
If you disagree, then it seems to me that this is another example of what I mention at the top: your definition of utilitarianism fails to actually define ethics. You say we maximize utility, but also that maximizing utility entails living in a community that has specific rules (like never giving up). It is those rules that constitute a system of ethics in your explanation, not utility theory, which leaves you not with a coherent theory of ethics but with a hodgepodge of different rules to fit different situations.
Fourthly, incurable doesn't mean always incurable. But if we simply bump off everybody who suffers from this illness, we would never find a cure and we would be guaranteeing an endless continuation of unhappiness.
Not always, but this is a thought experiment, so we stipulate that the illness is incurable as part of the experiment. Generally, thought experiments are explained in basic college philosophy classes; I would have expected you to be familiar with the technique.
Moreover, I'm not sure why a utilitarian wouldn't calculate the probability of a cure and multiply it by the utility a cure would bring, i.e. compute an expected utility. If, after weighting by probability, not killing yields less net happiness (happiness minus unhappiness) than killing, I would expect a utilitarian to support the execution. (At the very least, they would support it as long as they are unaware of the specifics.) In other words, if a cure is very unlikely and the utility gained if it does exist is small, then we should expect them to support the execution.
(I.e., they support the execution in theory but are unaware of the specifics of any particular execution.)
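To make the arithmetic concrete, here is a minimal sketch (in Python) of the kind of expected-utility comparison I have in mind. Every name and number in it is invented purely for illustration; nothing here is drawn from a claim either of us has made.

    # Hypothetical expected-utility comparison (all figures invented for illustration).
    p_cure = 0.01            # assumed probability that a cure is ever found
    u_cured_life = 100.0     # happiness the patient gains if a cure arrives
    u_uncured_life = -80.0   # net unhappiness of living on without a cure
    u_execution = -5.0       # unhappiness caused by the execution itself

    # Expected utility of keeping the patient alive versus executing them.
    eu_keep_alive = p_cure * u_cured_life + (1 - p_cure) * u_uncured_life
    eu_execute = u_execution

    print(eu_keep_alive, eu_execute)  # -78.2 versus -5.0

On numbers like these the calculus favors the execution, which is exactly my point; plug in a likelier cure or a smaller ongoing unhappiness and it can come out the other way.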
So killing him would reduce the utility, not increase it.
As I noted above, only if your definition of utility is a non-definition.
Again, I cannot comment until someone explains how an act that brings temporary pleasure to one individual at the expense of widespread and lasting unhappiness could ever, by any stretch, be called "maximising utility".
He is particularising pleasure, not maximising utility.
This is an example of something that is clearly and unambiguously wrong under any version of Utilitarianism.
How do you distinguish pleasure from happiness?
Also, if some person is wired so that raping a comatose woman gives them a sublime and lasting happiness, I would think that a society filled with utilitarians would support the rapist performing the rape. It maximizes the rapist's utility at no expense to the comatose woman's. I'm not sure why other people would have a problem with letting the rapist maximize their personal utility, and thus the community total, unless you are adding another piecemeal rule that excludes rape from the allowable behaviors under your system of ethics.
In this example you are moving utility around. You are lessening the utility of the shopkeeper to increase the utility of your child. So this is not maximising utility, merely rearranging it. If you are supposed to care for another's happiness as much as your own, as J.S. Mill says, then this is not Utilitarianism.
Again, this is a thought experiment. Thus the example stipulates that more net happiness occurs from breaking the law, agreement, etc. than from not breaking it. The candy bar may be one of the most meaningful experiences that child has the whole month, whereas the shopkeeper may not even know he was robbed. So we should expect that even under J.S. Mill's definition you steal: you value your utility just as much as another's, but in this case you simply gain a lot more utility from your action than the other person loses. Are you saying that you can't make quantitative comparisons or evaluations in utility theory? Or are you duct-taping another rule onto your system of ethics, one that says you should honor laws, promises, and agreements even if breaking them would increase utility?
Since none of these suggestions would maximise utility, I would have to say we do not lock up innocent people, we do not sterilise criminals, and we do provide educational opportunities for under-achievers.
How do they not maximize utility? Locking up any person who has a reasonable probability of committing a crime and preventing the birth of criminals would certainly increase the utility of our society, insofar as it drastically lowers crime. Should I add "no prior restraint" as an additional rule tacked onto your definition of utility?
Also, it seems to me that if we give our educational resources to the people who are most likely to succeed, then we will have more successful people. More successful people producing more and being happier in general seems entirely consistent with my understanding of utility theory.
I am not sure how any moral system would handle that. Which course of action are you suggesting is the dutiful or honourable one?
Well, a system that values duty, like Kantian ethics, would say that you honor your commitment to your children and don't work overtime. But something you seem to miss is that what is dutiful or honorable will vary depending on the system of ethics one adopts. It almost seems like you want me to tell you what I think the correct answer is so that you can make up another rule to plug this hole in your theory.
Maybe he should find another doctor to work with.
Again, I think you fundamentally misunderstand what a thought experiment is.
No, of course not. I am just puzzled as to why you think Utilitarianism would suggest this.
I don't know of any ethical system that can turn us into omniscient super-heroes; we are humans, we do our best.
Utilitarianism is ambiguous in this respect. If I judge utility from the perspective of the individual actor, then they are not immoral for acting on limited information, but neither are they immoral for maximizing their own utility at society's expense. It is not uncommon for individuals to value their personal utility more than that of others; they cannot see from another's perspective and thus have limited information. Utility judged from an omniscient, society-wide view, on the other hand, avoids personal bias about what constitutes utility, but it leads to the conclusion that an individual who made the wrong decision on limited information made an immoral decision.
This is a double bind for utilitarianism. Either sociopaths who are not aware they are hurting other people when they hurt and kill are moral, because they didn't know they were making a mistake, or people whose decisions lead to negative consequences are immoral, because they aren't maximizing global utility. You can't have it both ways.
And in turn I am very curious to see how you propose to deal with my replies.
It seems to me that you've defined utilitarianism using the following rules:
#1 Maximize the utility of yourself and others.
#2 Never give up on people with very large negative marginal utility.
#3 Don't allocate too many resources to people with very large positive marginal utility.
#4 Decrease your personal utility upon the execution of people if that execution would otherwise increase overall social utility.
#5 Decrease your personal utility upon the rape of any comatose woman if that rape would otherwise increase overall social utility.
#6 If lying, cheating, or breaking laws will increase overall social utility, decrease your personal utility to offset any global gains in utility.
#7 If policies are instituted that will prevent people from committing crimes in the future, decrease your personal utility to offset any gains from those policies.
#8 If educational resources are distributed to the people most likely to benefit from them, decrease your personal utility to compensate for any global gains in utility.
#9 Gain maximal utility from performing whatever action is most dutiful in any particular set of circumstances.
#10 Actions that increase global utility are ethical, but individual decisions that decrease global utility are not unethical.
#11 Additional rules may be added as additional objections are fielded.
Do you see the problem with this approach? You completely dodge the issue of what is ethical and just redefine it as needed to justify your theory. If we are going to have a coherent discussion on this topic, you are going to need to come up with a clear, consistent, general, and simple definition of utilitarianism. Moreover, it seems likely that any well-defined system will have problems. It is far more honest to clearly define the system and admit its difficulties than to vaguely define the system and obscure its difficulties.