Better the illusions that exalt us ...

I have some questions for Rocket & Robin.

1. How do you deal with utility monsters? A positive utility monster is a person who gets more pleasure from an action than anyone else; a negative utility monster is constantly in incredible pain. Doesn't utility theory predict that if such people exist we should give them a disproportionately large percentage of the resources?

2. Let's imagine a person has an incurable form of depression. They even admit that they're incredibly unhappy. Why don't we kill this person under utility theory?

3. You seem to claim that violations of dignity are prevented because we feel some sort of harm when dignity is violated. How do sociopaths fit into this? Is it moral for people who don't value dignity to violate it? If one doesn't get pleasure from preserving dignity and doesn't feel pain from violating it, wouldn't they be morally justified in raping a comatose woman? She doesn't feel any pleasure or pain, so utility is maximized.

4. What about promises? Contracts? Laws? Do I violate a contract, promise, or law as soon as the utility of it becomes negative? If I don't have any money in my pocket, but I know it will make my son happy if I bring home a candy bar and I know that the store I steal it from won't even notice it is gone, do I steal it anyway?

5. How do you propose a utilitarian society deals with prior restraint? If someone comes from a family of child molesters, do we lock this person up? What about parents that commit crimes, should we sterilize them? On the flip-side, do we not bother providing educational opportunities for people who come from communities that historically under-perform academically?

6. What about duty and honor? Let's say I'm a doctor who can save 10 people a day if I work alone, but can only save 7 per day if I come home to spend time with my children. If I neglect my duty to my children, I can save thousands of people before they grow up, but my children grow up without a father. What if 100 people will be saved if I kill 10 innocent people?

7. How does your conception of utilitarianism work under limited information? Suppose 100 people will be saved with 10% probability if I perform action A, and 10 people will be saved with 100% probability if I perform action B. Are actions A and B equally moral? What if I pick B and it turns out I would have actually saved 100 people, but I just didn't know. Did I make the morally wrong decision?

These are generally the sorts of problems that people have with utilitarianism in the philosophical community. I'm very curious to see how y'all propose to deal with them.
 
Because you ignore the undeniable fact that violating somebody's dignity, e.g. by lying, does not necessarily cause objective harm.

First, I don't believe there is such a thing as objective harm -- you should have gathered as much from the discussion thus far. So I will just revise your statement to use "subjective" harm as addressed to me.

Second, what are you saying here? Are you saying that lying is always wrong, because lying is always a violation of human dignity? How is lying a violation of human dignity in every case?

How is euthanasia involved?

What?

I am talking about the simple facts of your example case:

1) You think shooting down the plane with at least one innocent is a violation of dignity.
2) The only other option is allowing the plane to crash and kill 4001 innocents.
3) You contend that because human dignity must not be violated, the plane must not be shot down.
4) Therefore you do not think allowing the plane to crash violates anyone's dignity -- otherwise the solution would not be so simple.

How is this? Why is shooting down the plane and killing 1 innocent a violation of dignity while allowing it to crash and kill 4001 innocents is not?
 
It may be that the author is operating from a different definition of 'respect' than you.
Which is exactly what I said. I just added that I was unfamiliar with his definition.
I can imagine an expression of respect that I would call 'warrior-respect' (I am not saying it is your position). There you would essentially express respect by asserting that a person is strong enough to take anything you would hurl at them, and that you would not insult them by holding back and implying weakness. With this understanding of respect you could easily give honesty and frankness primacy.

Another possible expression of respect, and one I assume to be more prevalent, would be to assert that the person has certain limits that you won't cross without good reasons.
That is not called respect, that is called diplomacy.

I am pretty certain that most people would not agree that you can have respect without honesty.
For example, you would not come within intimate standing distance without expressed consent, or give an unsolicited value judgement of religious beliefs.
He has already given an unsolicited value judgement of their religious belief, just not to their face.

But if two people are willingly engaged in a conversation about a subject, I don't think you can meaningfully call any opinion expressed there "unsolicited". The art of conversation would be lost if all we could do was stare at one another, each waiting for the other to say "Tell me what you think about ..."

And if you are going to translate "I find your belief ridiculous" as "Your belief is ridiculous" then you must be consistent and translate "I don't share your belief" to "Your belief is wrong" and so it is a value judgement in any case.
Of course. This practically means that you mostly stay non-committal and that you expect the same courtesy regarding your beliefs from the other party.
Assuming that you expect any such "courtesy", I don't think it courteous for someone to discuss a subject with me and hold back what they really thought about it.
Affirming that you share their beliefs would indeed be lying to them, and asserting the superiority of your belief by calling theirs ridiculous would be condescending.
How is it condescending to treat someone as an intellectual equal?

Let's face it, if someone says "I believe in God" and I say "I don't believe in God", we are asserting the superiority of our respective beliefs and it is useless to pretend otherwise.
Actually he was pretty unspecific about what he finds or finds not ridiculous. The actual phrasing was 'Even those of us who sympathize intellectually have good reasons to wish that the New Atheists continue to seem absurd. If we reject their polemics, if we continue to have respectful conversations even about things we find ridiculous...'
He is rather specific that the things he finds ridiculous are the same things he is intending to conduct "respectful" conversations about. Whether or not he completely agrees with Dennet and Dawkins is beside the point we are currently debating.
Your conversations with your brother probably have a different social dynamic compared to conversations with mere acquaintances. There the mutual respect is likely asserted by other means.
It is asserted in a number of ways. In this case our respect is asserted by the fact that we can make honest statements about our attitude to our respective beliefs.

But if he pretended simply not to share my opinion, and I later found out that he had found it ridiculous but had not told me, then I should have held that as a mark of disrespect.
There is nothing to say against this if it happens in the course of a discussion on this topic or a setting that is prone to philosophical discussions. If the offer to debate is in a neutral setting and unsolicited, I would see it as similar to a religious group (for example Jehovah's Witnesses) approaching someone to talk about their beliefs.
Can you give an example of Dawkins or Dennett offering unsolicited debate in a neutral setting? I am not sure what you mean. I am pretty sure that they don't stop strangers in the street or knock on their doors and say "God doesn't exist".
 
What a bizarre kind of ultra-libertarianism. What I regard as more dangerous nowadays is the other end of stupidity, clowns like Chavez, Correa, Ortega who frantically nationalize all that generates money, just to run the national economy full speed against the brick wall.

They're both evils. If some people got their way over here we'd be paying for goods with Walmart dollars (or gold) and getting our justice from JP Morgan.

Already, a lot of things are privatized in the US. We vote on Diebold voting machines, our war is being fought by Blackwater and Halliburton, our prisons are run by the Corrections Corporation of America. It's easier to see the harms of too much privatization when the top officials in the Environmental Protection Agency used to work for the worst polluters.

I guess we react to the extremes we see.
 
What a bizarre kind of ultra-libertarianism. What I regard as more dangerous nowadays is the other end of stupidity, clowns like Chavez, Correa, Ortega who frantically nationalize all that generates money, just to run the national economy full speed against the brick wall.
I never thought I would say these words to you - I totally agree.
 
I have some questions for Rocket & Robin.

1. How do you deal with utility monsters? A positive utility monster is a person who gets more pleasure from an action than anyone else; a negative utility monster is constantly in incredible pain. Doesn't utility theory predict that if such people exist we should give them a disproportionately large percentage of the resources?
I should hardly think it necessary to give the first type anything; they will probably seek it out or create it themselves.

And yes, I think it is good to devote disproportionate resources to those in incredible pain.
2. Let's imagine a person has an incurable form of depression. They even admit that they're incredibly unhappy. Why don't we kill this person under utility theory?
Firstly, this would cause lifelong unhappiness to the people who have to kill an innocent man.

Secondly, it would cause unhappiness to his family.

Thirdly, it would cause unhappiness to the general community who generally have the feeling that we should never give up on a person in pain.

Fourthly, incurable doesn't mean always incurable. But if we simply bump off everybody who suffers from this illness we would never find a cure and we would be guaranteeing an endless continuation of unhappiness.

So killing him would reduce the utility, not increase it.
3. You seem to claim that violations of dignity are prevented because we feel some sort of harm when dignity is violated. How do sociopaths fit into this? Is it moral for people who don't value dignity to violate it? If one doesn't get pleasure from preserving dignity and doesn't feel pain from violating it, wouldn't they be morally justified in raping a comatose woman? She doesn't feel any pleasure or pain, so utility is maximized.
Again I cannot comment until someone explains how an act that brings temporary pleasure to one individual at the expense of widespread and lasting unhappiness could ever, by any stretch, be called "maximising utility".

He is particularising pleasure, not maximising utility.

This is an example of something that is clearly and unambiguously wrong under any version of Utilitarianism.
4. What about promises? Contracts? Laws? Do I violate a contract, promise, or law as soon as the utility of it becomes negative? If I don't have any money in my pocket, but I know it will make my son happy if I bring home a candy bar and I know that the store I steal it from won't even notice it is gone, do I steal it anyway?
In this example you are moving utility around. You are lessening the utility of the shopkeeper to increase the utility of your child. So this is not maximising utility, merely rearranging it. If you are supposed to care for another's happiness as much as your own, as J.S. Mill says, then this is not Utilitarianism.
5. How do you propose a utilitarian society deals with prior restraint? If someone comes from a family of child molesters, do we lock this person up? What about parents that commit crimes, should we sterilize them? On the flip-side, do we not bother providing educational opportunities for people who come from communities that historically under-perform academically?
Since none of these suggestions would maximise utility I would have to say we do not lock up innocent people, we do not sterilise criminals and we do provide educational opportunities for under-achievers.
6. What about duty and honor? Let's say I'm a doctor who can save 10 people a day if I work alone, but can only save 7 per day if I come home to spend time with my children. If I neglect my duty to my children, I can save thousands of people before they grow up, but my children grow up without a father. What if 100 people will be saved if I kill 10 innocent people?
I am not sure how any moral system would handle that. Which course of action are you suggesting is the dutiful or honourable one?

Maybe he should find another doctor to work with.
7. How does your conception of utilitarianism work under limited information? Suppose 100 people will be saved with 10% probability if I perform action A, and 10 people will be saved with 100% probability if I perform action B. Are actions A and B equally moral? What if I pick B and it turns out I would have actually saved 100 people, but I just didn't know. Did I make the morally wrong decision?
No, of course not. I am just puzzled as to why you think Utilitarianism would suggest this.

I don't know of any ethical system that can turn us into omniscient super-heroes; we are humans, and we do our best.
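As an aside, the expected-value arithmetic behind question 7 is easy to check. A minimal sketch, assuming "utility" is simply the number of lives saved (a simplification; nothing in the question fixes that definition):

```python
# Expected-utility comparison for question 7, assuming utility = lives saved.
def expected_lives(probability, lives_saved):
    """Expected number of lives saved by an action."""
    return probability * lives_saved

action_a = expected_lives(0.10, 100)  # 10% chance of saving 100 people
action_b = expected_lives(1.00, 10)   # certainty of saving 10 people

print(action_a, action_b)  # 10.0 10.0 -- identical expected utility
```

On this reading the two actions tie exactly, so the interesting disagreement is not in the arithmetic but in one's attitude to risk and to acting under uncertainty.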

These are generally the sorts of problems that people have with utilitarianism in the philosophical community. I'm very curious to see how y'all propose to deal with them.
And in turn I am very curious to see how you propose to deal with my replies.
 
Firstly, this would cause lifelong unhappiness to the people who have to kill an innocent man.

Secondly, it would cause unhappiness to his family.

Thirdly, it would cause unhappiness to the general community who generally have the feeling that we should never give up on a person in pain.

Wishful thinking:
Action T4 (German: Aktion T4) was a program in Nazi Germany officially between 1939 and 1941, during which the regime of Adolf Hitler systematically killed between 200,000 and 250,000[1] people with intellectual or physical disabilities.

The T4 program developed from the Nazi Party's policy of "racial hygiene," the belief that the German people needed to be "cleansed" of "racially unsound" elements, which included people with disabilities.

http://en.wikipedia.org/wiki/Action_T4


Since none of these suggestions would maximise utility I would have to say we do not lock up innocent people, we do not sterilise criminals and we do provide educational opportunities for under-achievers.

Again, wishful thinking.
Eugenics is a social philosophy which advocates the improvement of human hereditary traits through various forms of intervention. Throughout history, eugenics has been regarded by its various advocates as a social responsibility, an altruistic stance of a society, meant to create healthier and more intelligent people, to save resources, and lessen human suffering.
....
Since the postwar period, both the public and the scientific communities have associated eugenics with Nazi abuses, such as enforced racial hygiene, human experimentation, and the extermination of undesired population groups. However, developments in genetic, genomic, and reproductive technologies at the end of the 20th century have raised many new questions and concerns about what exactly constitutes the meaning of eugenics and what its ethical and moral status is in the modern era.

http://en.wikipedia.org/wiki/Eugenics
 
Wishful thinking:

...


Again, wishful thinking.
I think you illustrate my point nicely.

There have been terrible times in our history when people not only contemplated, but did these things, and the particularly bad one you mention.

During that time happiness was maximised nowhere. There is, in most countries and especially Germany, a quiet conviction that these things should never be done or even contemplated in the future.

I said that sterilising criminals and locking up innocent people related to criminals does not maximise utility. You prove my point.
 
What I don't get is why herzblut keeps accusing us of "wishful thinking."

How is it not also "wishful thinking" that humans would obey some kind of absolute moral law?

It seems like moral absolutists have this pressing need to identify every behavior with "right" and "wrong." Great. More power to you. But then what?

Does labeling something as wrong prevent people from doing it?

EDIT: ...and herzblut has not answered the questions... still...
 
What I realize, and what moral absolutists don't, is that the ice crystals in the rings of Saturn don't really give a hoot one way or the other.

I do not think subjective contrasts with absolutist: it contrasts with objective, and that is not the same thing. So I do not understand the point you are making.

Yes. It also follows from my position that being "moral" doesn't suggest anything other than following one's morality -- a useless tautology.

Your misunderstanding must be from the fact that you interpret "moral" to mean somehow "good" or "just" or any other positive connotation. I do not interpret it that way.

I am now hopelessly lost. I have checked the dictionary and can find no sense of the word moral which does not refer to right and wrong/good and bad. A dictionary does not prescribe what words mean, however: people do that. In my limited experience the word always carries implications of good and bad/right and wrong. Obviously this is not the case where you come from. What things are included in one's morality as you conceive of the term? What things are excluded? How do you decide what is in and what is out?
 
I think he's quite clearly saying that morality is a subjective conclusion... not an objective fact... there is no outside determinant of morality... it is a subjective collective human opinion as to what is right or wrong. And it's better to have a rational basis behind making such decisions rather than some illusion about what some god wants or some illusion that some poet thinks is better than 10,000 truths.
 
On the contrary, Utilitarianism, as defined by J.S. Mill, would regard Omelas as a bad and immoral place.

You are not getting it because you are still insisting that your straw version of Utilitarianism is correct. But it is not.

Mill said specifically that you should value another person's good as much as you value your own. If you valued another person's good as much as your own then you could never, not even in principle, be happy in Omelas.

So by Mill's definition a Utilitarian should walk away from Omelas.

I do not think I am attacking a strawman, Robin. This seems to be a contradiction within Mill's thought whereby he parts company with utilitarianism. The utilitarian says that the criterion by which an action is judged as moral is the outcome of maximising happiness. Mill does not abandon that. And as I have read, he is particularly concerned with the level of the family/group/society. That seems to imply he is concerned with the aggregate. So it follows that if he values everyone's happiness as much as he values his own, then the overall utility is what counts. The situation in Omelas does just that, and therefore it must be moral. The child only gets one vote, the same as everybody else.

If, on the other hand, you argue that the child is in some way special, then it follows that sometimes the moral course is to diminish aggregate happiness: and that is entirely at odds with the system. How do you reconcile these?

As a general rule Utilitarianism would say it is wrong because our overwhelming weight of experience and evidence is that it would cause unhappiness. Even if the family agreed to it, even if all the friends agreed to it it could not be considered good because the co-workers and the owners of the medical establishment are all parties.

I don't think so. It is true that the utilitarian will fall back on this where the utility cannot be known: this is because it is often really hard for human beings to foresee the real consequences of their actions. However in this case we know the outcomes for all the concerned parties and there is no need to guess. For a utilitarian real outcomes are trumps, and it is their position that it is right to break the normally accepted moral rules if utility is increased thereby. Mill also mentioned that while wider groups can be considered, it is normally sufficient to look at outcomes for family and perhaps friends. We are not able to investigate effects on the whole world and utilitarianism does not require this of us. So I do not think it is legitimate to do as you suggest to get to the conclusion you prefer. Once again I get the impression that your moral intuition came first and was not derived from utilitarianism. But then you go on to say:

If nobody involved at all had any problem with it whatsoever then we could not establish that the nurse had done anything morally wrong until the patient herself woke up.

If the nurse checks with his employer and his co-worker and ensures that the woman's friends and family are on board with his little plan and everybody agreed that it was morally OK for the nurse to do what he did and the patient herself said, "that's fine, I wasn't using my body anyway", then on what basis was the nurse's action wrong? I think it a slightly implausible example.

So that impression is wrong and you are consistent in your position.

But how do you get to the conclusion that it was immoral?

My own morality is quite inchoate, but it is best summarised as "you must not steal another person's choice". This works for me in quite a lot of situations. It may be this approach has a name in moral philosophy but I do not know it. It gives me a working rule of thumb in lots of different cases, however and so I think it is quite a useful principle. It works in this case because it does not matter how many people think it is ok to rape the comatose patient: the nurse is stealing her choice and for me that is morally wrong by definition.
 
I do not think subjective contrasts with absolutist: it contrasts with objective, and that is not the same thing. So I do not understand the point you are making.

Never mind, I don't think we disagree about anything in this regard. We just have a different way of describing things.

What things are included in one's morality as you conceive of the term? What things are excluded. How do you decide what is in and what is out?

A "morally right" decision is simply a decision that is made in accordance with the values one holds.

A "moral" person is one who makes morally right decisions.

If Nazis held the value that Jews should be eliminated, then they were morally right to set up the Holocaust. If the rest of the world held the value that systematically killing innocent people is a bad thing, then they were morally right to wage war on the Nazis.

Who do I think is morally wrong? People who claim to hold one set of values but act according to a different set. Ayn Rand would call this a lack of integrity. So for example a christian who murders is morally wrong since they claim to hold the value that murder is wrong. A psychopath who murders is not morally wrong since they do not claim to hold such a value.
 
I think he's quite clearly saying that morality is a subjective conclusion... not an objective fact... there is no outside determinant of morality... it is a subjective collective human opinion as to what is right or wrong. And it's better to have a rational basis behind making such decisions rather than some illusion about what some god wants or some illusion that some poet thinks is better than 10,000 truths.

Yes.

Furthermore, even if it were an objective fact, the only way we could reach a conclusion about the truth of it would be through subjective human opinion!

So any way you slice it, subjectivity is inescapable. Might as well deal with it rather than run from it.
 
Just a quick note. There is a field of philosophical study called analytic ethics (which is concerned with what it is we are doing when we use words like right/wrong, good/evil) as opposed to normative ethics (which is concerned with what is right/wrong, good/evil). Rephrasing the question about the meaning of morality might help solve the confusion (and help open whole new vistas of confusion as well).

Many of us here, it seems, view morality as a human construct - or artifact, if you will. I certainly do. It is, it seems, ridiculous to try to point to a property "right/wrong, good/evil" in a thing or act - such properties do not exist. In this way of thinking, to speak of objective morality is meaningless. However, it is possible, as many are suggesting here, to construct our morality with regard to rational considerations. An example might be rules of conduct seemingly necessary for large numbers of people to live peacefully and cooperatively in relatively confined spaces such as cities. Another might be understandings of genetic predispositions. Another might be environmentally stable survival strategies (a kind of game theory perhaps). There are, it seems, many more possible foundations for ethics/morality than some law laid down by a father figure - which is pretty much what theistic religion offers us.

Two key concepts to analytic ethics include: universalizability and prescriptivity. Universalizability describes the "objective" or "intersubjective" aspect of a moral code - essentially how a moral code applies to everyone. Prescriptivity describes the obligatory force of moral codes (the "should/ought" function). These are oversimplifications, but they help illuminate the functions of morality. It is possible to have universalizability without absolutism - this is something many do not understand. Hence the inevitable question, "Where do you get your morals from?" which really just represents a fallacy of false alternatives in the ways moral codes can be universalizable.
 
Agreed Dglas... to me this whole thread is about people trying to make skepticism sound like something it's not... and "illusions" sound like a basis for morality. The usual stereotype.

I cannot think of a single illusion that is better than the truth, unless by better you mean better at making the believer feel better. And I'd prefer ignorance over an illusion--but I'm using the dictionary definition of illusion (a false perception). Morality is not based on illusions... Humans get their morality from similar sources... it's just that some people imagine them coming from something divine. They don't have "more" or "better" morality than anyone else... but they seem to imagine that they do because of their illusions about where it comes from.
 
I've been avoiding posting to this thread - I couldn't find my waders.

For now, I very much agree with articulett's last post (#336) in all respects.

I may retro-chime-in later.
 
I've been avoiding posting to this thread - I couldn't find my waders.

For now, I very much agree with articulett's last post (#336) in all respects.

I may retro-chime-in later.

Thanks Complexity (my favorite woo whisperer)...
to me these threads always sound like apologist propaganda...
they pretend to be discussions... but they are really tsk tsking skeptics -- skeptics that don't exist even... a straw man stereotype of skeptics (or atheists). I suspect that the people doing so, are doing it to prop up their own preferred "illusions" of themselves as "reasoned diplomats" unlike those immoral unexalted skeptics/atheists that they imagine others to be.

It is dishonest really... they pretend to want a discussion, but they really want to inflict their opinion on others in a backhanded way because it makes them "feel" superior or nicer or better than the stereotyped persona they view others through. It gets old, and it's always the same. I know that people don't like being called an apologist... but I think that's because they'd rather have the belief that they are "fair minded" than find out the truth about their own biases. To them, that illusion is better than any objective test that would reveal otherwise. For me, I'd rather know where I am mistaken... through evidence and the opinion of people I trust -- than to go on imagining myself as being right or unbiased or communicating clearly when I'm not.
 
I have some questions for Rocket & Robin.

1. How do you deal with utility monsters? A positive utility monster is a person who gets more pleasure from an action than anyone else; a negative utility monster is constantly in incredible pain. Doesn't utility theory predict that if such people exist we should give them a disproportionately large percentage of the resources?

No, because there is no rule that states an individual must agree with the utility value anyone else comes up with. So if you say doing X gives you great utility, I can say you doing X gives me very little utility, and we are both right.

2. Let's imagine a person has an incurable form of depression. They even admit that they're incredibly unhappy. Why don't we kill this person under utility theory?

It depends on how much utility the act of killing them will generate.

3. You seem to claim that violations of dignity are prevented because we feel some sort of harm when dignity is violated. How do sociopaths fit into this? Is it moral for people who don't value dignity to violate it? If one doesn't get pleasure from preserving dignity and doesn't feel pain from violating it, wouldn't they be morally justified in raping a comatose woman? She doesn't feel any pleasure or pain, so utility is maximized.

Yes they would be morally justified in doing so, if it maximizes their utility.

I would also be morally justified in castrating him, if it maximizes my utility.

The rest of the world would be morally justified in putting me in jail for doing so, if it maximizes their utility.

4. What about promises? Contracts? Laws? Do I violate a contract, promise, or law as soon as the utility of it becomes negative? If I don't have any money in my pocket, but I know it will make my son happy if I bring home a candy bar and I know that the store I steal it from won't even notice it is gone, do I steal it anyway?

Does doing so maximize your utility? Personally, I feel that stealing represents the failure to be able to earn my own way in life. Thus if I stole, I would feel like a failure. Also, I feel like stealing is cheating a fellow individual, and I don't like cheating. Thus stealing has negative utility for me even without the prospect of being caught, and I never do it.

5. How do you propose a utilitarian society deals with prior restraint? If someone comes from a family of child molesters, do we lock this person up? What about parents that commit crimes, should we sterilize them? On the flip-side, do we not bother providing educational opportunities for people who come from communities that historically under-perform academically?

They can deal with it however they want! What is more important, personal freedom and potential or public safety and order? I dunno the answer to these questions on a mass scale. What I DO know is that they are not nearly as trivial as any moral absolutists contend.

6. What about duty and honor? Let's say I'm a doctor who can save 10 people a day if I work alone, but can only save 7 per day if I come home to spend time with my children. If I neglect my duty to my children, I can save thousands of people before they grow up, but my children grow up without a father. What if 100 people will be saved if I kill 10 innocent people?

Yeah, what if? Give me a case by case question about it, concerning what I personally would do, and I will be happy to answer. But I refuse to dictate what choices people should make as if there is some absolute law that makes me right and them wrong.

I can honestly tell you that I think a doctor should help me when I need help, because I am a selfish organism. But because I realize this is out of selfishness, I also do not hold it against them when they choose differently.

7. How does your conception of utilitarianism work under limited information? Suppose 100 people will be saved with 10% probability if I perform action A, and 10 people will be saved with 100% probability if I perform action B. Are actions A and B equally moral? What if I pick B and it turns out I would have actually saved 100 people, but I just didn't know. Did I make the morally wrong decision?

It works the same way all other moral systems work under limited information.

The only way you can make a morally wrong decision is to act against the values you hold at the time you make the decision.

These are generally the sorts of problems that people have with utilitarianism in the philosophical community. I'm very curious to see how y'all propose to deal with them.

That's because they refuse to accept the fact that utility can be defined in any way one wants. As I have said before, every moral system can be reduced to utilitarianism given a suitable definition of "utility."
 
Thanks Complexity (my favorite woo whisperer)...


:o

I'd have posted earlier, but doing so properly would require a lot more time and patience than I've had.

Anyway, I've been reading the posts with interest, much frustration, some anger, some disappointment, and some pleasure. Thank you for some of the latter.

I have been a bull in a china shop in a few other threads recently (I like to think that several of my posts were surgical and witty, but I'm sure some others took a different view).

You'd think I'd get it out of my system, but it turns out it is my system.
 
