Better the illusions that exalt us ......

I do not really see why you wish to ditch the term "good" because, as you have just shown, this is the ordinary word we use to describe what we otherwise call moral and it is easier to use it than not to. I imagine you have some good reason for wishing to change the language, but I am generally reluctant to do this unless the reasons are compelling.

The problem with "good" is that humans are very quick to take up arms and harm each other in the name of "good." I think this is a result of the absolutist standpoint that fundamentalist ideologies have beaten into the population over time.

And it wouldn't be so bad if indeed the world was black and white, but it isn't -- it is shades of grey. So while clearly a society that rapes all medical patients would be "bad" according to us, what about a society that helps the elderly and terminally ill commit suicide? What about a society that encourages their women to cover up? What about a society that doesn't embrace a democratic government system? These are not inherently bad things, but if your morality is a dichotomy between good and bad, it's really the only choice you have.

You see this all over the place -- people get it in their heads that their way is "good" and all of a sudden anything that people do differently is "bad" to them.

If a young teenager says it will harm her if she is not allowed to go to a nightclub next Saturday does that settle it? So far as the harm goes yes it does. She will be unhappy: that is harmful by utilitarian definition.

Yes, it is -- and the moral decision according to the teenager would be to allow them to go to the club. But the moral agent in question is the parent, and they need to sum up the relative utilities of letting the child be autonomous versus protecting them from potential harm (that the child themselves doesn't realize). Then, they need to consider the utility offered by either forcing the child to obey, or trying to reach a compromise, or giving in, etc. The process is endless. Luckily our brains are capable machines.
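For what it's worth, the weighing described here can be caricatured in a few lines of code. The options and utility numbers below are entirely made up; the point is only the structure of the comparison, not the values.

```python
# Toy model of the parent's decision above.
# Each option gets a single invented net-utility score
# (child's autonomy gained minus risk of harm, roughly).

options = {
    "forbid": -2.0,      # child unhappy, but protected
    "compromise": 3.0,   # partial autonomy, reduced risk
    "give in": 1.0,      # full autonomy, full risk
}

# "Choose what it thinks is best" is just an argmax over the scores.
choice = max(options, key=options.get)
print(choice)  # prints: compromise
```

Of course, the hard part in real life is producing the numbers in the first place, not comparing them.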

No. Bentham specifically states that X will have differential effects on people, so I do not assume the effects are equal on everyone. But that is my point. If you cannot do the sums then you adopt a "template". This is the point of law and also of Mill's secondary principles, I think. What I am contesting is the idea that those templates exist because someone, somewhere has done the sums. I do not think they have, because I do not think it is possible. So the templates are based on something else. Bentham would like those templates (law, for example) to be derived from doing those sums and he thinks that would lead to better results. Perhaps it would. Since it is impossible, the point is moot.

My argument this entire thread has been that everyone does this implicitly when they make a moral decision. You do it yourself -- you assign a utility to possible actions based upon those two rules in your own brand of morality, and then choose the higher. That is what I meant when I said every system is reducible to utilitarianism given a proper definition of utility.

I am not arguing that everyone performs a robotic mathematical calculation of utilities in their heads. I am arguing that our mind automatically weighs the costs and benefits of any action as best it can and then chooses what it thinks is best. Animals do it. Worms do it. The fact that you think you don't do it is indicative of how subconscious the process is. But, it is a fact -- for an agent to make a decision, it must have some way to choose, and whatever method they use to choose can be grouped under the wonderfully robust term utility.

Where is the "negative utility" in the situation of the secret rape of a comatose woman by a nurse who is not made unhappy at all by the act? That is the problem.

It can't be a truly secret rape because we know about it. Don't you see? Even though it is hypothetical, the fact that we are observers to the hypothetical means it won't be secret. For it to be secret, we would have to have no knowledge of it, which would mean you couldn't use it as a hypothetical in a discussion about morality!

Think about what you are saying -- that the only people affected by the rape are the nurse and the patient. But this is impossible, because you, I, Robin, and anyone else already know about the hypothetical situation and hence we are also affected by it. It occupies a place in my mind now, until I forget about this thread some time in the future. And since it affects me, it must give me some utility, and once we have established that then I can claim it in fact gives me negative utility because just imagining a woman being raped is uncomfortable for me. In other words, your hypothetical gives me real world negative utility, and that is where the harm is done.
 
Off the top, it seems like you respond to any concrete objection to utilitarianism by redefining utility to fit whatever moral intuition we have at the moment.
On the contrary I can easily demonstrate that I have consistently applied the classic Benthamite definition of utility and utilitarianism throughout. Similarly I have always used "happy" in its normal sense of a human emotion.

You, on the other hand, have consistently suggested that Utilitarians must be happy at things that normal human psychology would make them deeply unhappy and so are using some non-standard definition of "happy". Similarly you are using a definition of "maximise" that is more consistent with "particularise".
A utility monster is someone who has greater marginal utility than everyone else, under all circumstances. Even if the utility monster is already acquiring sufficient resources to survive, even if they want for nothing, their psychology is such that they would get more happiness from any unit of resources than everyone else put together. So even if I'm starving and without food I'll die, the utility monster would do better to get that food because they'll get more utility from that food than I could ever get over the course of my entire lifetime.
So surely then they will use fewer resources than everybody else put together? So no problem.
We can actually think of incurable depression as a type of negative utility monster. No matter how many resources we give them, they are still unhappy, but because they are unhappy, utility theory predicts (and you agree) that we give them a disproportionate segment of resources. This seems to lead to a contradiction, insofar as those resources are wasted. Now we're not maximizing any utilitarian metric. We're following a rule tailored to a specific moral instance.
No, you have simply altered the original example. The original example merely took more resources to achieve happiness. In your new example infinite resources would provide the same utility as no resources, so utility theory would not expend resources on a project that would not increase happiness. But then again, neither would any other ethical system.

You have also failed to take into account that each person is a producer as well as a consumer of resources.
Let a machine do it.
Because we all know it is not really killing when you do it with a machine, eh?
Don't let the family know,
They might become suspicious when he stopped breathing and rotted away to a skeleton.
... or only kill the incurably depressed without family.
In which case you have abandoned your original thought experiment and started a new one.
I would think that a truly utilitarian community would want to minimize pain, thus they would be pleased by the death of the depressed, insofar as it minimized the overall unhappiness of the community and freed up resources for more promising individuals.
Do you see what I mean? You are saying utilitarians would be pleased at something normal human psychology would make them extremely displeased. Thus you are using a different definition of unhappiness (or a different definition of utilitarian).

Using the normal definition of happiness and the Benthamite definition of Utilitarian, as I have consistently done, killing an innocent man would cause widespread unhappiness and must be rejected.
If you disagree then it seems to me that this is another example of what I mention at the top: failing to define ethics with your definition of utilitarianism.
On the contrary it is, as I have pointed out, an example of you using your private definition of happiness. I have merely consistently applied Benthamite Utilitarianism and the normal definition of happiness.
Not always, but this is a thought experiment, so we stipulate that it is incurable as part of the experiment. Generally thought experiments are explained in basic college philosophy classes. I would have expected you to be familiar with the technique.
You do understand the difference between a thought experiment and a hypothetical, don't you?
Moreover, I'm not sure why a utilitarian wouldn't calculate the probability of a cure and multiply that by the expected utility. If, after weighting by probability, not killing led to less happiness minus unhappiness than killing, I would expect a utilitarian to support the execution. (At the very least they would support it as long as they are unaware of the specifics.) In other words, if the cure was very unlikely and the utility gained if the cure exists is small, then we should expect that they support the execution.
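The probability-weighting just described can be sketched concretely. All probabilities and utility values below are invented purely for illustration; nothing hangs on the specific numbers, only on the comparison of probability-weighted sums.

```python
# Illustrative expected-utility comparison for the "incurable depression"
# thought experiment. Every number here is made up.

def expected_utility(outcomes):
    """outcomes: list of (probability, net_happiness) pairs."""
    return sum(p * u for p, u in outcomes)

# Option A: keep the patient alive and hope for a cure.
keep_alive = expected_utility([
    (0.05, 100.0),   # unlikely cure, large happiness gain
    (0.95, -20.0),   # no cure: the suffering persists
])

# Option B: the execution in the thought experiment.
execute = expected_utility([
    (1.0, -5.0),     # suffering ends, at some cost in community unhappiness
])

best = max([("keep alive", keep_alive), ("execute", execute)],
           key=lambda pair: pair[1])
print(best[0])  # prints: execute
```

With these invented numbers the execution wins; make the cure slightly more probable or its payoff larger and the conclusion flips, which is exactly the sensitivity being argued about.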
However if you apply the classic definition of Utilitarianism and happiness to the problem, as I have consistently done throughout, you will find that you are suggesting that we should pursue all the things that cause unhappiness in human psychology - pessimism, defeat, cowardice, killing the innocent. I am not sure how you think those things will produce any happiness at all, never mind maximise it.

And with no cure the root unhappiness you were trying to eliminate will simply persist.

But by looking for a cure you pursue those things that human psychology normally associates with happiness - optimism, courage, curiosity, inquiry, quest, knowledge. Searches for cures usually produce utility of their own - medical insights and technological spin-offs. Even if a cure is never found then this approach will clearly produce more happiness and must be adopted by classic Utilitarianism.
As I noted above, only if your definition of utility is a non-definition.
As noted above, I have consistently applied the Benthamite definition. You can find it in his "Principles of Morals and Legislation"
How do you distinguish pleasure from happiness?
The key distinction is between maximise and particularise.
Also, if some person is wired so that raping a comatose woman gives them a sublime and lasting happiness, I would think that a society filled with utilitarians would be in support of the rapist performing their rape.
Unless of course they were human beings on planet earth in which case the majority are wired so that the very idea of rape gives them deep and lasting unhappiness. And naturally, the more sublime and lasting the rapist's happiness, the deeper and more unbearable would be the community's unhappiness.
It maximizes the rapist's utility at no expense to the comatose woman's. I'm not sure why other people would have a problem with letting the rapist maximize their personal utility and thus the community total.
Unless of course there were an alternative that would result in even more happiness, like the detection, arrest and imprisonment of a rapist. Or that medical institutions would immediately adopt procedures that would make the abuse of patients impossible. Thus relatives of coma patients and recovered coma patients would feel reassured that nothing of this sort would ever occur.

Clearly and unambiguously this would produce much more happiness among more people. Using the normal definition of utilitarianism (as I have consistently done throughout), we must clearly take the prevention and detection path.
Unless you are adding another piecemeal rule that excludes rape from allowable behaviors under your system of ethics.
We would only need a piecemeal rule if rape produced net happiness. But since rape clearly produces net unhappiness then it is by definition disallowed under the classic definition of Utilitarianism (that, as I may have forgotten to mention, I have used throughout).
Or are you duct taping another rule onto your system of ethics that also says you should honor laws, promises, and agreements even if not doing so would increase utility?
Please feel free to quote the part where I have even remotely implied such a thing. Otherwise stick to what I say, not what you pretend I say.
How do they not maximize utility? Locking up any person that has a reasonable probability of committing a crime and preventing the birth of criminals would certainly increase the utility of our society, insofar as it drastically lowers crime.
So you are proposing that when a person is found guilty of a crime, we put them and their entire family into prison for their entire life or a sizeable proportion of it, thus boosting the prison population to many, many times its current level, requiring a tax hike of massive proportions.

Tax hikes increase happiness under your definition?

Also since we are talking about the normal definition of happiness (as I have used throughout) human psychology normally produces unhappiness at the incarceration of guiltless people and so you are proposing a massive increase of unhappiness in this area too.

Don't forget the unhappiness of all the guiltless people you intend to lock away. Do you have any research to indicate that locking innocent people away just in case will result in a significant lowering of crime?

And Herzblut mentioned one particular regime that tried the sterilising route and produced an era we do not normally associate with happiness.
Also, it seems to me that if we give our educational resources to the people that are most likely to succeed, then we will have more successful people. More successful people, producing more, and just being happy in general seems consistent with my understanding of utility theory.
You have evidence that more success produces more happiness? Isn't it strange that so many successful people are now downsizing?

But there is plenty of evidence that the massive underclass you are proposing to create would be massively unhappy.
Well, a system that values duty, like Kantian ethics, would say that you honor your commitment to your children and you don't work overtime.
Kant says a doctor has no duty to his patients does he? Or that we have no duty to our fellow man? If you say so, I have not read that bit.
But something you seem to miss is that what is dutiful or honorable will vary depending on the system of ethics one adopts.
Precisely; for example, a Utilitarian would say the opposite: that the doctor must save the patients if he is their only hope. Thousands of lives saved increase utility more than a couple of kids who don't see their father.

I wonder how your Kantian doctor's children would feel once they grew up and found what a terrible price they paid for their bedtime stories.
It almost seems like you want me to tell you what I think is the correct answer so that you can make up another rule to plug this hole in your theory.
How quaintly arrogant of you to assume you have the correct answer or that anyone else would think so.
Again, I think you fundamentally misunderstand what a thought experiment is.
You feel justified in arbitrarily changing the conditions of your thought experiments so why shouldn't I?
Utilitarianism is ambiguous in this respect. If I define utility from the perspective of an individual's actions then they are not immoral by actions made on limited information, but they are also not immoral by maximizing their utility at the expense of society's. It is not uncommon for individuals to value their personal utility more than that of others; they can't see from another's perspective and thus have limited information. Whereas utility viewed from an omniscient view of society can avoid personal bias as to what constitutes utility, but leads us to the conclusion that if an individual made the wrong decision from limited information, they made an immoral decision.
On the contrary, Mill is very specific and unambiguous on this point: individuals can only make a decision upon the information that they have available and are not expected to take responsibility for global utility.
This is a double bind with utilitarianism. Either sociopaths that are not aware they are hurting other people when they hurt and kill are moral because they didn't know they were making a mistake, or people who make decisions that lead to negative consequences are immoral because they aren't maximizing global utility. You can't have it both ways.
There is no double bind here, sociopaths are neither moral nor immoral - they are sociopaths. Every ethical system reaches the same conclusion.
It is far more honest to clearly define the system and admit its difficulties than to vaguely define the system and obscure its difficulties.
I have openly discussed the limitations of Utilitarianism elsewhere, but nothing you have said here is even remotely a difficulty peculiar to Utilitarianism.
 
If you aren't using instinct to make a moral choice, you are using some degree of utilitarianism... you are weighing the costs and benefits based on what you believe they are... not just to you, but to society and your view of yourself, etc... what you think some god wants.

And if you are just following instincts, that too, is utilitarian... we evolved to be social animals... moral animals-- we are instinctively averse and automatically revolted by some behavior and seek to shun or punish those who engage in it.

No matter how you cut it, all moral choices are utilitarian. It's just that some people use illusions of gods and the wrath of those gods or the punishment of those gods in weighing whether to do something or not. Others have a more evolved sense of empathy or understanding of long term goals and don't need illusions to behave morally. Many times the illusions allow people to behave very immorally while imagining themselves to be having superior morals to those they cause suffering to.

Just because people think their morals are coming from some "special place" -- doesn't mean that they aren't making decisions using the same tools as everyone else and behaving just as morally or immorally by any actual measure on the subject. All morality is, in a sense, utilitarian and subjective... governments, religions, social orders, philosophies, rules, etc. are just attempts to codify and guide and hone this to the purported benefit of the majority without causing more harm as perceived by the members of that group.

None of these codifications need to ever be based on illusions. In fact, it can be very dangerous when they are. Those that base their morality on illusions appear to have a tendency to imagine themselves morally superior to those who don't-- yet they never demonstrate that moral superiority.

This thread is about the OP -- not utilitarianism. Utilitarianism became a straw man in an attempt to define those who derive their morality from "non illusions". It has been used to make skeptics that the OP author disagrees with look less moral than him in his head.

Zosima, you have bought into a straw man view of what utilitarianism is.... besides, it was only mentioned to take the topic off the straw man in the OP. Robin is correct. And I have no doubt that you are brilliant enough to see this if you read through this thread in its entirety. Your understanding of utilitarianism is not correct... and it distracts from the discussion in the OP as it was meant to do.
 
On the contrary I can easily demonstrate that I have consistently applied the classic Benthamite definition of utility and utilitarianism throughout. Similarly I have always used "happy" in its normal sense of a human emotion.

Before I respond to anything else I'd appreciate it if you concisely identify your "classic Benthamite definition of utility and utilitarianism". It appears to me that you deviate from Bentham's definition from one sentence to the next. For example, sometimes you defend Bentham's conception of Utility, at other times you defend J.S. Mill's conception of Utility, and at other times it seems you defend Kantian ethics and call it Utility. As I understand it, all three of these theories are distinct, very different, and often contradictory.

So could you explain exactly what utilitarian position you advocate?
 
Zosima, you have bought into a straw man view of what utilitarianism is.... besides, it was only mentioned to take the topic off the straw man in the OP. Robin is correct. And I have no doubt that you are brilliant enough to see this if you read through this thread in its entirety. Your understanding of utilitarianism is not correct... and it distracts from the discussion in the OP as it was meant to do.

You may believe that, but we'll see. I've never seen anyone clearly and consistently define utilitarianism, in such a way that it does not entail awful contradictions.

But following an argument to its logical conclusions does not necessarily make a straw man. A bad argument and a straw man may seem similar on the surface because they both are terribly flawed, but it is important not to confuse the two.

I've given Robin a chance to define exactly what (s)he means. So either we'll quickly see which of the two it is, or we'll see someone trying to be as vague as possible.

Generally utilitarianism in its purest form fails all the moral scenarios I've mentioned, and a modified version of it succeeds at some but in a way that makes it impossible to pass the others.

ETA: Utilitarianism is distinct from Cost Benefit Analysis. Technically, utilitarianism is a specific type of CBA. For example, we could imagine a system of ethics that uses CBA to maximize unhappiness and minimize happiness. This would clearly not be utilitarianism. There are also other philosophical theories that don't even operate on the happiness/unhappiness dimension. They look at maximizing other virtues to generate ethical conclusions. Historically, they might be the children of utility theory, but these theories are distinct.
 
Okay... but what do you think of the OP? And how do you think it applies to utilitarianism if at all...
 
Okay... but what do you think of the OP? And how do you think it applies to utilitarianism if at all...

Well, personally I'd rather talk about ethics than exaltation.

But insofar as the OP is concerned, I completely agree with the first couple of responses to the OP, there is plenty of 'exaltation' to be gained from the beauty derived from study of the natural world.

I'd say the discussion of utilitarianism is, at best, tangentially related and that doesn't really bother me. But as I understand it, utilitarianism was put forth as a sort of science of ethics that could exalt us by leading to a better way to live. I strongly disagree with this idea.
 
ETA: Utilitarianism is distinct from Cost Benefit Analysis. Technically, utilitarianism is a specific type of CBA. For example, we could imagine a system of ethics that uses CBA to maximize unhappiness and minimize happiness. This would clearly not be utilitarianism. There are also other philosophical theories that don't even operate on the happiness/unhappiness dimension. They look at maximizing other virtues to generate ethical conclusions. Historically, they might be the children of utility theory, but these theories are distinct.

Exactly. This is why I was careful to stipulate that what I am talking about is not the commonly held notion of what "utilitarianism" is.

Henceforth, I suggest we all abandon the term "utilitarianism" since nobody here agrees with the common notion -- let's just all use "utility theory." Nobody can argue with that.
 
What about Unitarians...
shall we bicker about what they are?

Lol, I don't know much about Unitarians and I fear what I don't understand. Thus I contend Unitarians are evil!

rocketdodger said:
Henceforth, I suggest we all abandon the term "utilitarianism" since nobody here agrees with the common notion -- let's just all use "utility theory." Nobody can argue with that.

A noble goal, I wish you luck. I've yet to see a statement that nobody can argue with on JREF.
 
The problem with "good" is that humans are very quick to take up arms and harm each other in the name of "good." I think this is a result of the absolutist standpoint that fundamentalist ideologies have beaten into the population over time.

And it wouldn't be so bad if indeed the world was black and white, but it isn't -- it is shades of grey. So while clearly a society that rapes all medical patients would be "bad" according to us, what about a society that helps the elderly and terminally ill commit suicide? What about a society that encourages their women to cover up? What about a society that doesn't embrace a democratic government system? These are not inherently bad things, but if your morality is a dichotomy between good and bad, it's really the only choice you have.

You see this all over the place -- people get it in their heads that their way is "good" and all of a sudden anything that people do differently is "bad" to them.

As I said, I am fine so long as I understand how you are using the word (or rather not using it :)): it was just that to some extent I agree with your sig and we needed a definition. As a side issue though, it seems to follow that you will have to have stipulative definitions of a whole lot of other words as well, on this reasoning. What about "patriotism", for example?

Yes, it is -- and the moral decision according to the teenager would be to allow them to go to the club. But the moral agent in question is the parent, and they need to sum up the relative utilities of letting the child be autonomous versus protecting them from potential harm (that the child themselves doesn't realize). Then, they need to consider the utility offered by either forcing the child to obey, or trying to reach a compromise, or giving in, etc. The process is endless. Luckily our brains are capable machines.

Both are moral agents, surely? How do you measure?

My argument this entire thread has been that everyone does this implicitly when they make a moral decision. You do it yourself -- you assign a utility to possible actions based upon those two rules in your own brand of morality, and then choose the higher. That is what I meant when I said every system is reducible to utilitarianism given a proper definition of utility.

Yes, I understand that is what you believe. It is what Bentham says and I accept that you consistently take his view. But I do not agree, is all. :)

I am not arguing that everyone performs a robotic mathematical calculation of utilities in their heads. I am arguing that our mind automatically weighs the costs and benefits of any action as best it can and then chooses what it thinks is best. Animals do it. Worms do it. The fact that you think you don't do it is indicative of how subconscious the process is. But, it is a fact -- for an agent to make a decision, it must have some way to choose, and whatever method they use to choose can be grouped under the wonderfully robust term utility.

The problem I have is encapsulated in the bit I bolded. As I said, it is exactly like Freud - every instance of something which does not fit the theory is "subconsciously" in accord with it. Well maybe: but it makes the theory unfalsifiable and therefore it is useless. If you say that any decision an agent makes increases utility, then the term has been widened to meaninglessness; if you say that some do and some don't, then the agent need not be basing it on utility.

It can't be a truly secret rape because we know about it. Don't you see? Even though it is hypothetical, the fact that we are observers to the hypothetical means it won't be secret. For it to be secret, we would have to have no knowledge of it, which would mean you couldn't use it as a hypothetical in a discussion about morality!

Think about what you are saying -- that the only people affected by the rape are the nurse and the patient. But this is impossible, because you, I, Robin, and anyone else already know about the hypothetical situation and hence we are also affected by it. It occupies a place in my mind now, until I forget about this thread some time in the future. And since it affects me, it must give me some utility, and once we have established that then I can claim it in fact gives me negative utility because just imagining a woman being raped is uncomfortable for me. In other words, your hypothetical gives me real world negative utility, and that is where the harm is done.

The secrecy is a given. That is the point of such hypothetical examples - it lets us consider the implications and limits of an idea. Robin has drawn a distinction between a "hypothetical" and a "thought experiment". That is a distinction I have not met and don't understand, so it is possible this is the source of confusion. So call it a thought experiment if you need that word in order to accept the stipulation. Now if it is secret, as stipulated, can you answer the question?
 
Before I respond to anything else I'd appreciate it if you concisely identify your "classic Benthamite definition of utility and utilitarianism". It appears to me that you deviate from Bentham's definition from one sentence to the next. For example, sometimes you defend Bentham's conception of Utility, at other times you defend J.S. Mill's conception of Utility, and at other times it seems you defend Kantian ethics and call it Utility. As I understand it, all three of these theories are distinct, very different, and often contradictory.

So could you explain exactly what utilitarian position you advocate?
They are almost identical. Bentham came at it from a legislative point of view, Mill from a perspective of personal morals. He expanded upon Bentham and pointed out a couple of errors he made, moved away from his rigid calculus, but that is about it.

My definitions come (as I have been pointing out for a while now) from the introduction to Bentham's "Principles of Morals and Legislation", which you can easily find online with a bit of googling. Mill did not move away from these definitions so I also use them.

Feel free to point out these alleged awful contradictions and try not to be too vague about it this time!
 
Let me short cut the debate by suggesting one example where I regard Utilitarianism as indicating an unacceptable result, or at least one that strongly contradicts my own moral intuition.

If a society has a strong taboo against homosexuality then openly gay couples in that community would cause widespread unhappiness. While I don't regard the unhappiness as justified, I nevertheless recognise that it is real. So happiness in such a society might be maximised if gays simply stayed in the closet.

I think this indicates a position where decreasing the community's net happiness would be no bad thing.

That is why I regard the maximisation of freedom as the better primary rule. If we have the most freedom that is compatible with sharing a limited and imperfect world with so many others, then people are in the best position to maximise their own happiness anyway. I nevertheless regard the utility principle as a valuable moral guideline.
 
My argument this entire thread has been that everyone does this implicitly when they make a moral decision. You do it yourself -- you assign a utility to possible actions based upon those two rules in your own brand of morality, and then choose the higher. That is what I meant when I said every system is reducible to utilitarianism given a proper definition of utility.
Bentham said this too. He said that all his detractors end up arguing in favour of the utility principle.
It can't be a truly secret rape because we know about it. Don't you see? Even though it is hypothetical, the fact that we are observers to the hypothetical means it won't be secret.
Yes, the hypothetical seems to say "Utilitarians ought to praise this act as long as they are completely oblivious to it".
 
As a side issue though, it seems to follow that you will have to have stipulative definitions of a whole lot of other words as well, on this reasoning. What about "patriotism", for example?

Of course. "Patriotism" makes me sick.

Both are moral agents, surely? How do you measure?

Look, I am saying that it is nonsense to ask "was action X moral in an objective/global/overall sense?" The only question that has meaning is "was action X moral from the standpoint of <whoever>?" Because really, when you say something is bad, all you are really saying is that you think it is bad.

You seem to be asking "is the action moral according to myself," which is something only you can answer!

Now if it is secret, as stipulated, can you answer the question?

If what is a secret?
 
They are almost identical. Bentham came at it from a legislative point of view, Mill from a perspective of personal morals. Mill expanded upon Bentham and pointed out a couple of errors he made, moved away from his rigid calculus, but that is about it.

Feel free to point out these alleged awful contradictions and try not to be too vague about it this time!

Note that you did not previously ask me to define the difference. Now that you have, I will make a good-faith attempt to be as clear as possible:

Bentham supported a social-good-based utilitarianism, whereas Mill supported a rights-based libertarianism. In fact, most modern philosophers only consider Mill a utilitarian from a historical perspective; if you look at which theory his is most similar to, it is libertarianism.
http://www.mises.org/reasonpapers/pdf/09/rp_9_1.pdf

Also, there is a good overview on the Wikipedia page for J.S. Mill, if you need to learn the basics.

"This philosophy has a long tradition, although Mill's account is primarily influenced by Jeremy Bentham, and Mill's father James Mill. However his conception of utilitarianism was so different from Bentham's that some modern thinkers have argued that he demonstrated libertarian ideals, and that he was not as much a consequentialist as was Bentham, though he did not reject consequentialism as Kant did."

"Mill defines the difference between higher and lower forms of happiness on the principle that those who have experienced both tend to prefer one over the other. This is, perhaps, in direct opposition to Bentham's statement that "Pushpin is as good as an Opera," that if a simple child's game like hopscotch causes more pleasure to more people than a night at the opera house, it is more imperative upon a society to devote more resources to propagating hopscotch than running opera houses."

To claim that these are just minor details is to express a misunderstanding of how philosophical enterprises work.

My definitions come (as I have been pointing out for a while now) from the introduction to Bentham's "Principles of Morals and Legislation," which you can easily find online with a bit of googling. Mill did not move away from these definitions, so I use them as well.

This is the sort of vagueness I was talking about. It:
#1 Opts not to make any attempt to define the usage of utilitarianism.
#2 Actively tries to conflate the philosophies of two very different philosophers.

Clearly, Robin wishes to maintain a position that gives as much wiggle room as possible.
 
Let me short cut the debate by suggesting one example where I regard Utilitarianism as indicating an unacceptable result, or at least one that strongly contradicts my own moral intuition.

If a society has a strong taboo against homosexuality then openly gay couples in that community would cause widespread unhappiness. While I don't regard the unhappiness as justified, I nevertheless recognise that it is real. So happiness in such a society might be maximised if gays simply stayed in the closet.

I think this indicates a position where decreasing the community's net happiness would be no bad thing.

Why admit this example but not the numerous other examples?

1. If we kill unhappy people in secret, this maximizes overall utility.
2. If we steal without getting caught, this maximizes overall utility.
3. If a rapist rapes someone who doesn't know and doesn't get caught this maximizes overall utility.

Do you see how this is just like closeting gays? Utility theory classifies ethical actions according to harms to happiness; if people do not know about a terribly unethical action, then there is no harm. Thus doing intuitively unethical things in secret becomes ethical.
 
Why admit this example but not the numerous other examples?

1. If we kill unhappy people in secret, this maximizes overall utility.
2. If we steal without getting caught, this maximizes overall utility.
3. If a rapist rapes someone who doesn't know and doesn't get caught this maximizes overall utility.
*sigh* For the reasons I am now tired of repeating.
 