
Prescriptive, descriptive, and motivated morality

You know, Saizai, thinking about it a bit more, I think what you may be getting at -- more than either a descriptive or a prescriptive morality (you do tend toward the prescriptive, though) -- is a genealogy of morals: the how and why and where of morality's origins. The danger here is moving from the genealogy itself to a prescriptive program, which frequently follows what has been called the naturalistic fallacy (though I'm not certain how much of a real fallacy it is). Personally I find this a fascinating topic for discussion.
 
How do I tend toward prescriptive? It seems to me that if anything I'm more on the very situational / to-each-their-own end. (I'm even willing to not condemn neo-Nazis!)

I'm not trying for a historical genealogy of morals so much as a psychological one (if you can call it that). I doubt that what I've given here is particularly novel in that regard; it seems to me practically self-evident given current neuroscience.


How do you see a fallacy coming into this? I'm not familiar with the one you are citing; please explain.


My argument, summarized, is simple:
1. People act so as to maximize a utility function over all the entities they consider, each weighted by how strongly the actor empathizes with that entity.
2. Therefore, if one wants (as social policy) to increase cooperation and decrease strife in general, one need only teach people to understand the consequences of their actions, and teach empathy with all social groups the society deems worthy of cooperating with.

That last clause (which ones are 'worthy') is somewhat circular I admit; thus the resort to utilitarianism to say which ones (all) it "should" be purely from the perspective of maximizing win-win.
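As a toy sketch only (the entities, weights, and payoffs below are hypothetical illustrations, not anything from this thread), step 1 of the argument amounts to a weighted-sum decision rule:

```python
# Toy model of the empathy-weighted utility claim in step 1.
# All entities, weights, and payoffs here are made up for illustration.

def choose_action(actions, empathy_weights):
    """Pick the action maximizing the sum of each affected entity's
    payoff, weighted by how much the actor empathizes with it."""
    def weighted_utility(action):
        return sum(empathy_weights.get(entity, 0.0) * payoff
                   for entity, payoff in action["payoffs"].items())
    return max(actions, key=weighted_utility)

# An actor who empathizes with 'other' more than 'self' can
# rationally prefer self-sacrifice under this rule:
actions = [
    {"name": "selfish",   "payoffs": {"self": 10, "other": -5}},
    {"name": "sacrifice", "payoffs": {"self": -2, "other": 8}},
]
weights = {"self": 0.5, "other": 1.0}
print(choose_action(actions, weights)["name"])  # -> sacrifice
```

Note that an entity with weight zero (no empathy at all) simply drops out of the sum, which is the point made later about psychopathy collapsing the scheme back to self-interest.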
 
To which I would ask, "So what?"

So I find it objectionable.

I wonder if you were stuck in a concentration camp in WWII if you'd think differently?

I can't predict what my reaction to being in a concentration camp would be (though I have relatives who died in them); I can point to Buddhists imprisoned in China for an example of how one can come out of such situations still retaining empathy.

Is it objectively wrong to hack a two-year-old up with a knife? Would this practice ever be subjectively acceptable?

Presumably you don't include surgery, autopsy, organ donation, etc.

No, it's not "objectively" wrong. Moreover, for some setups that strongly de-empathize the child and strongly empathize some other entity that would gain as a result of this action, it might be considered acceptable or even admirable.



The utility of this is pretty simple: it gives a rational, research-supported way to a) describe what pretty much everyone is REALLY basing their moral decisions on (possibly with the exception of people who have cognitive dissonance resulting from e.g. religious mandates); and b) thereby show how to change someone's moral perception without having to resort to arguing about what they claim on the surface to be the justifications.

So instead of just warring ideas that talk past each other (like we have routinely in public 'discussions' on any moral issue, viz. abortions) we can address the real issues at hand (empathy & with whom).

I would expect that equal usage of this technique would tend to make people move towards moderate, balanced positions overall, rather than favoring any particular ideology.
 
I can't predict what my reaction to being in a concentration camp would be (though I have relatives who died in them); I can point to Buddhists imprisoned in China for an example of how one can come out of such situations still retaining empathy.
I don't care whether they came out empathetic, though it may be noble. I can assure you, though, that all came out realizing they'd seen evil incarnate and wouldn't hesitate to call it such...I am equally sure they would be insulted by your equivocation about labeling it objectively evil. There is no set of facts, subjective or objective, that justifies the mass killing of innocent Jews, or of any other group for that matter.

It seems as if we are able to speak of good and evil in such abstract terms because the world we live in has been, for the most part, sanitized of such horrid acts for a sufficient period of time that we have had the luxury of forgetting that true evil does exist and is most certainly real. This was the main point of my statement. I think you are suffering greatly from a purely theoretical mindset. If you truly saw the face of evil as did those poor people in the concentration camps, I think you'd be singing quite a different tune.

It's one thing to deal with these issues abstractly as you have, but it's entirely different to be staring evil and death in the face while someone like you is waving from the sidelines saying "it's only a figment of your imagination, it's just an idea."

Presumably you don't include surgery, autopsy, organ donation, etc.
Cute. Assuming you were joking, red herring averted...

No, it's not "objectively" wrong. Moreover, for some setups that strongly de-empathize the child and strongly empathize some other entity that would gain as a result of this action, it might be considered acceptable or even admirable.
You do realize adopting a position like this puts you dangerously close to being rightfully described as a raving lunatic...I think your "ethics" are beginning to unravel as we speak.

Honestly, it's almost depressing that there are people who can, straight-faced, make statements like the one you just made. If this is the best representation of reality the rational, scientific mindset can get us, I think I've seen enough...
 
I don't care whether they came out empathetic, though it may be noble. I can assure you, though, that all came out realizing they'd seen evil incarnate and wouldn't hesitate to call it such...I am equally sure they would be insulted by your equivocation about labeling it objectively evil. There is no set of facts, subjective or objective, that justifies the mass killing of innocent Jews, or of any other group for that matter.

Actually, the interviews I've heard do not have them considering their captors "evil incarnate".

As for innocent Jews et al - I happen to strongly empathize with them, so to me that is immoral.

But to Nazis at the time, Jews were merely a sort of pest problem, and killing them no more 'immoral' than killing ants. Clearly, I have rather more empathy for them than the Nazis did, but I cannot rationally say that it is for any "objective" reason.

If you truly saw the face of evil as did those poor people in concentration camps I think you'd be singing quite a different tune.

*shrug* Perhaps. At this point that's groundless speculation.

It's one thing to deal with these issues abstractly as you have, but it's entirely different to be staring evil and death in the face while someone like you is waving from the sidelines saying "it's only a figment of your imagination, it's just an idea."

I didn't say that it is a "figment of your imagination". I said that it is subjective. There's a pretty major difference IMO.


You do realize adopting a position like this puts you dangerously close to being rightfully described as a raving lunatic...I think your "ethics" are beginning to unravel as we speak.

Honestly, it's almost depressing that there are people who can, straight-faced, make statements like the one you just made. If this is the best representation of reality the rational, scientific mindset can get us, I think I've seen enough...

This seems like just a pointless ad hominem.

Do you have a real argument behind it? Perhaps a medical definition of lunacy that you believe I fall into?
 
Actually, the interviews I've heard do not have them considering their captors "evil incarnate".
Hmmmm. I wonder if you've cherry-picked interviews that fit your mindset...I can't help but wonder, because I can't find any to support your point.

It's pointless anyway. I understand where you are coming from and have no interest in your "moral" framework as it stands.

A person who is unable to label the butchering of an innocent two-year-old as an objectively evil act is not someone to be taken seriously in the first place. Have fun with your theories though...but you're right, I think delusional is a better description of the condition.
 
How do I tend toward prescriptive? It seems to me that if anything I'm more on the very situational / to-each-their-own end. (I'm even willing to not condemn neo-Nazis!)

Um, O.K. Then why bother with any discussion? You are either saying, "this is a good idea and I think we should follow it" in which case you are proposing a moral code; or you mean to say, "eh, take it or leave it, who really cares?" in which case, who really cares? I don't see much sense in "take it or leave it". That isn't morality. That's wishy-washy to the point of ridiculousness.

I'm not trying for a historical genealogy of morals so much as a psychological one (if you can call it that). I doubt that what I've given here is particularly novel in that regard; it seems to me practically self-evident given current neuroscience.

All genealogies of morals are psychological and/or rational by necessity (a historical review would be a description of morals through history, not a genealogy). Utilitarian ethics is clearly based on an examination of our underlying psyche and the role that pleasure and the avoidance of pain play in valuation. Kant's deontology rests on hidden valuations and pretends to be purely rational while succeeding in being only purely vacuous. Many people like his formulations of the categorical imperative because they are fancy reformulations of the golden rule and sound impressive. But his attempt to create morality from a purely rational basis was demolished within a generation and shown to be the sham that it is.

Your formulation is a restatement of the golden rule that simply uses the word "empathy" -- the idea encoded by the golden rule. Essentially what you have done is say that we are naturally empathic, so we should teach more empathy. That is prescriptive, whether or not you realize the fact. Trying to avoid the word "should" and say "we'd be better off if we taught more empathy" is simply passing the buck -- being prescriptive while trying to hide behind words. Morality is about actions. You can describe actions or you can promote actions. There is no other ground. If you really wish to say that neo-Nazis are just doing their thing and I have no right to diss them, then that falls into the realm of vulgar relativism -- all moralities are equally valid. I don't think anyone really believes that, even when they say they do.


How do you see a fallacy coming into this? I'm not familiar with the one you are citing; please explain.

Well, I didn't say that I saw fallacy definitely coming into it. I said that it teetered on the brink of what has been called the naturalistic fallacy (which, again, I'm not sure should really be called a fallacy, per se). The naturalistic fallacy is supposed to be the fallacy of deeming something good simply because it is natural to us. We are compassionate naturally, so compassion is good. This ignores the fact that we are also competitive and mean at times -- naturally -- so should we decide that competition and greed are good? Our minds consist of numerous in-built tendencies toward certain types of action. Simply because we have those tendencies does not make them good. Why choose empathy? Why not choose competition as your starting place to build an ethic? It's just as natural as empathy. From a naturalistic standpoint I don't see any clear rationale for choice amongst the possibilities. We really need to adopt some other type of stance than "natural" to decide on an ethic, and that is one of the arenas in which the utilitarians have a leg up (though this requires a particular way of interpreting "utility"). We act ethically because there is utility in acting ethically. It is the means by which social life becomes possible and we may continue living (since we are not the swiftest nor the strongest).


My argument, summarized, is simple:
1. People act so as to maximize a utility function over all the entities they consider, each weighted by how strongly the actor empathizes with that entity.
2. Therefore, if one wants (as social policy) to increase cooperation and decrease strife in general, one need only teach people to understand the consequences of their actions, and teach empathy with all social groups the society deems worthy of cooperating with.

That last clause (which ones are 'worthy') is somewhat circular I admit; thus the resort to utilitarianism to say which ones (all) it "should" be purely from the perspective of maximizing win-win.


And I think this formulation demonstrates that no system works in and of itself. Is it possible that what you dislike is not "prescriptive morality" but the previous systems that don't seem to work by themselves and seem to produce weird or overly restrictive moral decisions? You seem to rely on the idea of utility. Why not just become a utilitarian? Part of the reason is that you seem to see the need to introduce a new term -- empathy -- that doesn't easily fit into the common misperception of utilitarian thought (utilitarianism has been vulgarized to some extent in most philosophic discussions of it). But J.S. Mill (I'm going to ignore Bentham and Sidgwick for ease of discussion) saw the same need, which is why he spent nearly 20 pages of his 70 or so page essay on utilitarianism writing about "justice". He tried to re-formulate the idea of utility to view justice as one of the higher forms of utility so that it could be incorporated into his schema. Whether or not he was successful, I suppose, depends on everyone's own opinion, but he clearly did not mean to propose a system that did not incorporate ideas of justice into it -- as his ideas are frequently portrayed.

In other words, Mill saw that the easy way of thinking about utility (naively thinking the greatest happiness for the greatest number, without explicating what that means) didn't work (in fact he seems to be quite explicit about this early on in his essay). We are much more complex creatures than that. Kant wanted one idea -- duty -- to rule the roost. You seem to be saying (or so it seems to me and I would agree, though I may be reading into what you have written) that Mill was more correct and that we need more than one way of looking at the way we think ethically -- ideas of empathy and utility. You have tried to formulate it in a single idea -- empathy. But when you discuss that idea you fall back on other ethical ideas to bolster it. Why not give up the simple formulation and admit what biology tells us -- the first rule of biology is variation? The way that some individuals survive calamity is that sexual reproduction creates immense variation. We are necessarily variable creatures, and our minds are not monoliths. We don't have a single value, like empathy, that can explain all of our ethical life. I think that value pluralism makes the most sense. Unfortunately this means that there is no rock solid "objective" ethical system out there that can possibly account for all of human morality, short of us simply choosing one idea and sticking to it -- but then that wouldn't precisely be "objective" either (this all depends on one's definition of "objective" but that is another can of worms).
 
About the issue of "objective" morals or ethics, I think we need to be very careful about what we mean in such discussions. I see two different meanings of the word "objective" in this situation. There is a sense in which "objective" means "true even in the absence of human beings" -- as in it is an objective fact that the earth revolves around the sun. That would continue to be the case whether or not humans were here to observe it. The other sense, and what we seem to mean most when discussing ethical ideas, is "inter-subjective". In other words, ethical ideas are "objective" only in the sense that we agree to them. We seem to have built into us the idea that murder is wrong, so we agree that murder is wrong (this aside from the fact that the definition of murder contains the idea of its being wrong).

The problem I see with this is -- just how many of us need to agree for the idea to be "objective"? Is this not just fiat by majority? Do we all agree on anything, even that killing is wrong? On what grounds should we decide that those who disagree with the idea of killing being wrong are simply wrong? I think Charlie Manson is nuts. But the only grounds I can use to decide this, it seems, are my internal sense that what he did was very, very wrong and the fact that almost everyone agrees. But how does this make the idea of Charlie Manson being wrong "objective"? It is clearly inter-subjective for a huge number of people, but not for everyone. The best I think we can do is arrive at rules that almost everyone agrees upon. Personally I do not like to call those rules "objective", but that is just my personal preference. I hold this opinion, in part, because I fear what we commonly do in philosophical discussions -- subtly switching from the second definition of "objective" (inter-subjective) to the first (true even in the radical absence of human beings).
 
But to Nazis at the time, Jews were merely a sort of pest problem, and killing them no more 'immoral' than killing ants. Clearly, I have rather more empathy for them than the Nazis did, but I cannot rationally say that it is for any "objective" reason.


No objective reason? The Nazis falsely classified the Jews as sub-human. It seems rather simple to me to objectively describe their actions as an error of the most horrific type.

I have to agree with Blutarski at this point.
 
I think this may be my last post in this thread. This discussion brings up two very good observations from two well known men. I always have these thoughts in the back of my mind as I discuss these topics:



If we present man with a concept of man which is not true, we may well corrupt him. When we present him as an automaton of reflexes, as a mind machine, as a bundle of instincts, as a pawn of drives and reactions, as mere product of heredity and environment, we feed the nihilism to which modern man is, in any case, prone. I became acquainted with the last stage of corruption in my second concentration camp, Auschwitz. The gas chambers of Auschwitz were the ultimate consequence of the theory that man is nothing but the product of heredity and environment--or, as the Nazis liked to say, "of blood and soil." I am absolutely convinced that the gas chambers of Auschwitz, Treblinka, and Maidanek were ultimately prepared not in some ministry or other in Berlin, but rather at the desks and in lecture halls of nihilistic scientists and philosophers. –Viktor Frankl, The Doctor and the Soul: Introduction to Logotherapy




And the fact that [the revolutionist] doubts everything really gets in his way when he wants to denounce anything. For all denunciation implies a moral doctrine of some kind; and the modern revolutionist doubts not only the institution he denounces, but the doctrine by which he denounces it. Thus he writes one book complaining that imperial oppression insults the purity of women, and then he writes another book (about the sex problem) in which he insults it himself. He curses the Sultan because Christian girls lose their virginity, and then curses Mrs. Grundy because they keep it. As a politician he will cry out that war is a waste of life, and then, as a philosopher, that all life is a waste of time. A Russian pessimist will denounce a policeman for killing a peasant, and then prove by the highest philosophical principles that the peasant ought to have killed himself. A man denounces marriage as a lie, and then denounces aristocratic profligates for treating it as a lie. He calls a flag a bauble, and then blames the oppressors of Poland or Ireland because they take away that bauble. The man of this school goes first to a political meeting, where he complains that savages are treated as if they were beasts; then he takes his hat and umbrella and goes on to a scientific meeting, where he proves that they practically are beasts. In short, the modern revolutionist, being an infinite sceptic, is always engaged in undermining his own mines. In his book on politics he attacks men for trampling on morality; in his book on ethics he attacks morality for trampling on men. Therefore the modern man in revolt has become practically useless for all purposes of revolt. By rebelling against everything he has lost his right to rebel against anything. –G. K. Chesterton, "The Suicide of Thought," Orthodoxy
 
And excellent observations they are.

Like many folks here I quickly tire of "philosophy" when it binds itself up in its own words. Most 'philosophic' discussions tend to lose themselves when words are twisted -- one meaning used in one part of a discussion and another in a different part of the same discussion -- under the guise of 'progress in thought'.

Take the issue of "objective" and its counter "relativism". I think most people are disgusted with the idea of vulgar relativism -- again, the idea that all values, all ideas are equally valid. But I don't see any way out of some form of relativism. Our ethics and our ideas are relative to what we are. We bring values to the table because of who and what we are, and to this extent I agree with Saizai. But I cannot accept the idea that all values or all ways of viewing the world are equal. Some are better than others. Because we cannot escape some form of relativism (unless we wish to posit an absolute morality in the laws of God), I don't think we can pretend that we have a rock solid defense in "objective" reality or valuation. But there are fairly rock solid traits that we almost all share that I think we can use to arrive at near moral consensus. I think it is a mistake to think dualistically -- either there is an objective morality or there is vulgar relativism. Our moral world is necessarily relative to us, but this does not mean that we must agree that Nazis are righteous dudes too.
 
You could not be more correct.

I have always thought of objective ethics as the melody in a song. We are free to improvise over the melody (relativism), but some core tenets cannot be altered lest the song become incoherent.

It's also like building a home without a foundation...and at the risk of boring people with quotes here's my last one:



I remember lecturing at Ohio State University, one of the largest universities in this country. I was minutes away from beginning my lecture, and my host was driving me past a new building called the Wexner Center for the Performing Arts. He said, “This is America’s first postmodern building.” I was startled for a moment and I said, “What is a postmodern building?” He said, “Well, the architect said that he designed this building with no design in mind. When the architect was asked, ‘Why?’ he said, ‘If life itself is capricious, why should our buildings have any design and any meaning?’ So he has pillars that have no purpose. He has stairways that go nowhere. He has a senseless building built and somebody has paid for it.” I said, “So his argument was that if life has no purpose and design, why should the building have any design?” He said, “That is correct.” I said, “Did he do the same with the foundation?” All of a sudden there was silence. You see, you and I can fool with the infrastructure as much as we would like, but we dare not fool with the foundation because it will call our bluff in a hurry. --Ravi Zacharias, Address to the United Nations 10 September 2002

Emphasis mine...
 
To quickly reply:

The reason I do not like utilitarianism is that it lacks a real concept of benefit-to-other; it cannot justify self-sacrifice. Whereas my version can (if one empathizes with others more than with oneself).

The other reason I like my version is because it is neurologically sufficient; mirror neurons plus internal modeling plus emotional reactions are all you need to explain it. It doesn't resort to any external things like "god said to".

In mine, those neural phenomena are the foundation.

It does of course also mean that for a psychopath or an autistic person who lacks understanding of or empathy for others, it dissolves back to straight utilitarianism.


The "prescriptive" morals I refer to are not based on these neurological facts, but rather on pure axioms about platonic ideals, deific doctrine, or the like. Thus I think they are less reliable - as can be shown, e.g., by the cognitive dissonance of a strict Jew trying to act morally and within the commands of their religion. Or a fundamentalist Christian doing the same.


If you think that the descriptive basis of my theory is inaccurate, please show me a good counterexample.

The "motivated" aspect of it is, essentially, applying utilitarianism at the social/cultural level to that description of how people operate at the individual level.
 
The reason I do not like utilitarianism is that it lacks a real concept of benefit-to-other; it cannot justify self-sacrifice. Whereas my version can (if one empathizes with others more than with oneself).

I'm sorry, but that is simply false. The whole idea of utilitarianism is benefit to others. Self-sacrifice that benefits the entire community and creates greater happiness for all is considered noble to utilitarians (well, to the community; the sacrificer is left out of the consequentialist bargain). Utilitarianism is not hedonism. Mill, for instance, considered it yet another example of the golden rule in action -- the greatest happiness principle is based on treating others with respect in order to increase general happiness and decrease general pain.

The other reason I like my version is because it is neurologically sufficient; mirror neurons plus internal modeling plus emotional reactions are all you need to explain it. It doesn't resort to any external things like "god said to".

I haven't heard anything in it that is neurologically sufficient for a complete ethic. First, you are putting far too much explanatory power on mirror neurons (which is OK in my book since I have done the same). While these neurons undoubtedly form the basis for what we know of as empathy, they do not in and of themselves account for the full range of empathy that would do what you want them to do -- like self-sacrifice. They are primarily important in understanding social interaction and comprehension. They do not provide fellow-feeling unless they are linked through other systems involved with pleasure, fear, etc. Since you seem to realize that we also need emotional reactions, etc., you have really already answered your own question. Empathy is not all there is to human interaction, and not all there is to human ethical behavior. If all that mattered were empathy, then my empathy for what a killer might have to go through would stop me from wanting any retribution against him. But that is not the human response.

Now, with that said, our brains are neurologically sufficient for ethics. But the functioning of our brains includes both emotions and reasoning. Ethics includes both emotion and reasoning. I have yet to hear of one single idea that explains ethics in a way that works for most people. Empathy is an absolutely key ingredient. But it is not the whole story.

A trucker doesn't fix his brakes one year. He has trouble stopping suddenly when a squirrel jumps out, and he flattens it. Later he has trouble stopping when a kid jumps out, and he flattens him. Are these situations equal? Why, or why not? Is the only issue empathy for the parents of the dead kid? What about the squirrel? What about the horrible anguish of the trucker? How much empathy for him? How do we assign empathy in each situation? What if one person thinks the poor trucker is getting a raw deal because he didn't really mean it?

Or look at it from one other view. How have you really added much to the discussion? By mentioning mirror neurons? The rest of your "neurology" is a black box. That's also fine by me as far as explanations go because most of neurology (really neuropsychology since neurology proper is a medical subspecialty) is a black box. But if you want to pretend that you have any sort of neurological explanation, then I don't buy it.

It does of course also mean that for a psychopath or an autistic person who lacks understanding of or empathy for others, it dissolves back to straight utilitarianism.

Why should they rely on utilitarianism? Why shouldn't everyone? Why shouldn't everyone rely on deontology? Why not virtue ethics?


The "prescriptive" morals I refer to are not based on these neurological facts, but rather on pure axioms about platonic ideals, deific doctrine, or the like. Thus I think they are less reliable - as can be shown, e.g., by the cognitive dissonance of a strict Jew trying to act morally and within the commands of their religion. Or a fundamentalist Christian doing the same.

Then you are using the term incorrectly. Prescriptive means telling people what to do. Your morality is prescriptive. What you object to in older moral systems is that you seem to think they are too intellectualized. That's fine. I agree to some extent. I think all ethical systems that claim to rely purely on reason are either flat out wrong or are fooling themselves -- that is why I brought Kant into the discussion. However, more traditional ethical analyses have been based on the ultimate good for mankind and have viewed ethics from what can be called a positive and a negative direction -- not only what not to do, but also what to do in order for us to lead a good life. So Aristotle examined what is good (what else are you going to base an ethic on?) and decided that it was some form of happiness or human flourishing. He concentrated on the virtues more than anything else but ultimately seems to have pinned his ethic on basic human psychology -- when I read the Nicomachean Ethics it sounded, in a vague way, like a recapitulation of Abraham Maslow's hierarchy of human needs. Aristotle finally seems to alight on a form of self-actualization as the best human good. So it seems to me that he bases his ethics on human psychology, which is based on human neurology.

The Utilitarians take a slightly different slant, though they follow Aristotle's lead in analyzing the "good". The only thing we do for itself is some form of happiness or pleasure or avoidance of pain. They just decided that a proper ethic consisted of everyone sharing this -- universalizing happiness. This is based in human psychology and hence human neurology.

Kant searched for the rational basis of morals and thought he found it. He didn't, but he still arrived at a pretty darn good ethics. The whole idea was based on rationality and duty. It was based in human psychology and therefore human neurology.

You have decided that the ultimate good is empathy. OK. I don't see any difference between the above systems and yours except that the others went into much greater detail to rationalize their choice.

All human morals are necessarily based in our neurology. There is no other option unless there is a non-material portion to our minds. There is nothing special about invoking mirror neurons. That will not make anything you say any more correct or believable, especially because we know so very little about the dang things. And since they were discovered in non-human primates whose ethics we do not know I think it is very dangerous to think that mirror neurons explain anything better than "well it just happens up there in the brain somehow".

If you think that the descriptive basis of my theory is inaccurate, please show me a good counterexample.

See above.

The "motivated" aspect of it is, essentially, applying utilitarianism at the social/cultural level to that description of how people operate at the individual level.

Utilitarianism is applying utilitarianism at the social level to the description of how people operate at the individual level. So basically you are telling me that you are proposing utilitarianism?


What I see you doing really is trying to return ethics to its foundation in feelings. I think that is commendable. Hume did the same thing. The problem is that there are huge numbers of folks out there that don't want to believe that ethics ultimately boils down to basic human motivations. Our basic human motivations are pleasure and pain. So some philosophers try to show that we don't just want pleasure and to avoid pain, but we have other drives that they think prove that there are true "objective" moral truths.

One supposed demonstration runs like this -- suppose that in the future we have the ability to convert everyone to a brain in a vat and give everyone every pleasure that they would desire -- "high pleasures", "low pleasures", the whole gamut. All you have to do is press this little red button and you and everyone else in the world will be suffused with joy. Would you do it? I'm sure there are some people out there who would say "yes" but most folks would say "no". The reason? Because it wouldn't be true. It wouldn't be real. So we are supposed to have this "objective" drive toward the real.

But if you subject this finding to the analysis that Aristotle did then I don't think we could call the will to truth a basic drive. Why care about truth, why care about the real? To avoid others controlling us? To avoid pain? We seem to feel pain when we think someone is fooling us. Even this example may show that at the basis of all human motivation is the search for pleasure and avoidance of pain. That may be what the word motivation really means.

So, yes, I agree with the program to return ethical thinking to its roots in basic human motivations and any and all attempts to describe ethics from a neurological perspective. I just don't think we are even near the ballpark yet.
 
I'm sorry, but that is simply false. The whole idea of utilitarianism is benefit to others. Self-sacrifice that benefits the entire community and creates greater happiness for all is considered noble to utilitarians (well, to the community; the sacrificer is left out of the consequentialist bargain). Utilitarianism is not hedonism. Mill, for instance, considered it yet another example of the golden rule in action -- the greatest happiness principle is based on treating others with respect to increase general happiness and decrease general pain.

Aah, I see what you mean then.

When I was referring to 'utilitarianism', I mean on the individual level.

The only way I think that one can have a morality that shows how people behave is to have it on the individual level. I agree that on the societal level your argument works perfectly; it's just that it is quite difficult (in general) to show how societal-level 'desires' work on individuals.


I haven't heard anything in it that is neurologically sufficient for a complete ethic. First, you are putting far too much explanatory power on mirror neurons (which is OK in my book since I have done the same). While these neurons undoubtedly form the basis for what we know of as empathy, they do not in and of themselves account for the full range of empathy that would account for what you want them to do -- like self-sacrifice. They are primarily important in understanding social interaction and comprehension. They do not provide fellow-feeling unless they are linked through other systems involved with pleasure, fear, etc. Since you seem to realize that we also need emotional reactions, etc., you have really already answered your own question. Empathy is not all there is to human interaction and not all there is to human ethical behavior. If all that mattered was empathy, then my empathy for what a killer might have to go through would stop me from wanting any retribution against him. But that is not the human response.

Oh clearly. I thought it was clear that when I refer to 'empathy' I am using the full-stack approach, i.e. that mirror neurons themselves are only one step of the full thing.

See my term paper here for the fully annotated neurological explanation: http://community.livejournal.com/academic_empath/3568.html

Should be understandable without requiring significant neuroscience background; it's intended for an intro audience.

Now, with that said, our brains are neurologically sufficient for ethics.

One hopes. :)

But the functioning of our brains includes both emotions and reasoning. Ethics includes both emotion and reasoning. I have yet to hear of one single idea that explains ethics in a way that works for most people. Empathy is an absolutely key ingredient. But it is not the whole story.

Approximately agreed. FWIW I do not make a very strong distinction between these things - emotion is, essentially, a sort of inferential 'logic' that very strongly (over)emphasizes association.

There are valid ways to apply this - e.g. a Bayes net does so - but obviously it needs to be tempered by logic so as not to reach the wrong conclusion just because something is associated.
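To make that contrast concrete, here is a minimal sketch (entirely my own illustration; the scenario and all numbers are hypothetical): an "emotional" inference that reacts to the raw strength of an association, versus a Bayesian update that tempers the same association with the base rate, as a Bayes net would.

```python
# Illustrative sketch: association-driven inference vs. a Bayesian update.
# All names and numbers are hypothetical, chosen only to show the contrast.

p_danger = 0.01            # prior: danger is rare
p_cue_given_danger = 0.9   # the cue is strongly associated with danger
p_cue_given_safe = 0.1     # but the cue also occurs when things are safe

# "Emotional" shortcut: react to the strength of the association alone.
associative_score = p_cue_given_danger  # 0.9 -- feels like near-certainty

# Bayesian update: temper the association with the prior (base rate).
p_cue = (p_cue_given_danger * p_danger
         + p_cue_given_safe * (1 - p_danger))
p_danger_given_cue = p_cue_given_danger * p_danger / p_cue

print(associative_score)             # 0.9
print(round(p_danger_given_cue, 3))  # 0.083 -- far lower once the prior counts
```

The gap between 0.9 and roughly 0.08 is the "tempered by logic" step: the association is real, but acting on it alone overweights it.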

A trucker doesn't fix his brakes one year. He has trouble stopping suddenly when a squirrel jumps out and he flattens it. Later he has trouble stopping when a kid jumps out and he flattens him. Are these situations equal? If not, why not; if so, why? Is the only issue empathy for the parents of the dead kid? What about the squirrel? What about the horrible anguish of the trucker? How much empathy for him? How do we assign empathy for each situation? What if one person thinks the poor trucker is getting a raw deal because he didn't really mean it?

It's not a matter of "assigning" empathy to each of these. It's a matter of what the trucker has to start with.

However, it's unclear from what I quoted what the question is exactly. Which action(s) are you asking the morality of and from whose perspective?

(My system does require that you specify perspective.)


Why should they rely on utilitarianism? Why shouldn't everyone? Why shouldn't everyone rely on deontology? Why not virtue ethics?

That isn't a "should"; it's just an observation that if empathy doesn't function then the empathy-weighted (personal) utilitarianism I gave turns into non-weighted (i.e. straight) utilitarianism.
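A toy sketch of that observation (my own formalization, not anything standard -- the entity names and numbers are invented): personal utility as an empathy-weighted sum of benefits over the entities one considers; set every weight equal and the same formula is just plain utilitarianism.

```python
# Toy formalization (mine, purely illustrative): an agent's utility for an
# action is each affected entity's benefit, weighted by how much the agent
# empathizes with that entity.

def personal_utility(benefits, empathy_weights):
    """benefits: entity -> benefit of the action to that entity.
       empathy_weights: entity -> how strongly the agent empathizes with it.
       Entities the agent doesn't consider get weight 0."""
    return sum(empathy_weights.get(e, 0.0) * b for e, b in benefits.items())

benefits = {"self": 2.0, "friend": 3.0, "stranger": 5.0}

# Typical agent: strong empathy for self, weaker for strangers.
weighted = personal_utility(
    benefits, {"self": 1.0, "friend": 0.8, "stranger": 0.2})

# With every weight equal, the weighting does no work: the result is the
# plain sum of benefits, i.e. straight (unweighted) utilitarianism.
flat = personal_utility(
    benefits, {"self": 1.0, "friend": 1.0, "stranger": 1.0})

print(weighted)  # 5.4
print(flat)      # 10.0 -- plain sum of the benefits
```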


Then you are using the term incorrectly. Prescriptive means telling people what to do. Your morality is prescriptive. What you object to is older moral systems because you seem to think they are too intellectualized.

No, because they aren't based on accurate description of how people work, and because they aren't based on real-world information.

Yes, I am equivocating a bit on 'prescriptive'; consider it perhaps "a priori (theological) prescriptive" vs "a posteriori prescriptive". And that I don't prescribe what to *do* (unlike them) in any particular situation; rather, I prescribe how to take advantage of how people do act in particular situations to change their behavior and the social world in general.

Hopefully that double distinction makes sense.

You have decided that the ultimate good is empathy.

No. I said that that is how people operate.

What should be done with that information - i.e. how to shape social policy - is something I punt to the more general, impersonal 'ethics' that you cited several examples of.


As for your brain-in-a-vat point... that seems to me to be very different. And I would separately dismiss the claimed "real"ness of emotion as meaningless. But that's a whole 'nother conversation. ;)
 
Saizai, I mostly agree with those statements, but a couple of quibbles....

The only way I think that one can have a morality that shows how people behave is to have it on the individual level. I agree that on the societal level your argument works perfectly; it's just that it is quite difficult (in general) to show how societal-level 'desires' work on individuals.

Just as with "private language" I don't think "private morality" makes any real sense. The whole point of morality is to devise some means by which we can interact as a group.

I think I would rephrase your approach to say that the neurological underpinnings of empathy are the beginnings of morality. But that cannot comprise the whole of morality.

I used the previous examples to show what I find irritating in many discussions of ethics and morality -- this idea that there is some "objective" truth in the universe that "is" morality and that our valuations cannot be the true bedrock. In a way, then, I have the same objections that you have. While I would love for the words "objective morality" to have some meaning so that I could always say the Nazis are evil, I don't see any way around some form of relativism, as I previously mentioned (our morality is relative to who and what we are and the ways that we think). Almost all of us can agree that murder and genocide and baby slashing are wrong; and I think that is firmly based in our biology. Unfortunately, variation being the rule, there are always exceptions -- you have pointed out two of the big ones in autism and sociopathy (so, we can't even use biology as the absolute firm bedrock in any objective sense, if that means something that everyone can agree upon).

My view of what morality is -- we begin in our biology. We have particular drives that include selfish motivations as well as social motivations. We can view every drive as ultimately depending on pleasure and pain (that is what many philosophers have done through the ages, but I am not completely convinced that each and every drive devolves down to that level). If pleasure and pain are the bedrock (just to play devil's advocate), that does not mean that they explain everything about morality. We obviously have very distinct ways of experiencing pleasure and pain. Empathy is a positive drive and we feel pain when we see someone being mistreated. So this is one of those bedrock entities that drive moral action. But for us to arrive at an "ethic" we seem to need to negotiate with others. That is where reason seems to me to come into play. Our morality, therefore, does not seem to me to be one thing. It has an origin -- in our emotions/motivations. And it has further elaboration in our reasoning ability. Part of this negotiation process involves emphasis on the more social aspects of our drives -- empathy -- and de-emphasizing the "selfish" drives -- greed. But those are both natural drives. That is why I tried to caution about the naturalistic fallacy and emphasize that other factors must play a vital role in the full fruit of what we call morality.

I find the arguments that try to remove ethical behavior from the bedrock of pleasure/avoidance of pain to be very irritating. Most of the time we seem to be able to demolish the arguments against that view by further examination. Most of the arguments that I have heard (I just put one of them in above -- the magic button argument) seem to devolve back to avoidance of pain in some guise. We must begin in some human motivation -- that is what valuation means, after all. Even empathy can be viewed as one form of avoidance of pain -- we empathize because not to do so creates social pain for us.

I don't know if you would find this perspective interesting or not, but I also think the "morality must come from religion" idea is an inversion of what is really going on. It seems to me that religion is the product of our morality, rather than the reverse. As hunter-gatherers, at some point, we must have developed the sense that killing is wrong and that we should extend this idea beyond our immediate family unit -- that is the whole basis of social systems and the only means by which we could have survived our hostile environment. But the ability to symbolize -- don't kill others -- is just as easily applied to animals as other humans. Since we are omnivores we are stuck killing others. But we see this action as wrong. So, how to reconcile the two? Create a world behind the world in which the animal spirit can survive and somehow rejoin the herd. Then generalize this to humans. Religion.
 
I think that putting it this way would resolve the issue that you're poking at:

Empathy is the force that drives us to be moral. Logic (with introspection) is (occasionally) the means by which we figure out how to optimize that force.

Without that logic, we neurotypicals still feel the drive to optimize that force, but don't necessarily know how best to do so.

I think pain-aversion is insufficient; is, e.g., lacking joy while merely having happiness a "pain"? Probably not, but we still have a (perhaps lesser) drive to reach that higher state, ne? So I would rather treat it as trying to maximize that function.

Some people do it through drugs (the 'magic button') until it doesn't work well for them and the problems outweigh the benefits; some through dissociation; some through empathy and good works; some through self-editing through meditation; etc etc etc. But everyone tries.

FWIW I am a vegetarian in very large part because I do feel empathy for animals (though not as much as for other humans or as for myself), and therefore cannot justify their death merely for my culinary benefit when it's not necessary for my survival.

If it is an issue of I kill something else or I die, then that something else had better start running - because I have most empathy for myself. But outside that, I will stay a vegetarian.

Hopefully that gets to the point of what you were saying.
 
I think pain aversion is insufficient too. But part of what I am saying is that I think empathy is also insufficient. It is one of the primary drives (but, then again, when we look back at why it is a drive does it devolve into pain aversion and pleasure or is that a fiction that we create when we ask the question, "why empathy?"). Empathy is one of the starting places. Another starting place is pleasure and avoidance of pain. Another issue is consequentialism. Another might be the issue of duty. Empathy is clearly a starting place. There is no morality without it, that's for sure. But I'm not so sure that it is THE starting place. Personally, I think there are many others that mix into the soup we call ethics. That is why, I think, the field becomes so very murky whenever anyone tries to introduce a "system that explains it all".

I'm a little unsure if the whole "pleasure and pain aversion" thing constitutes the whole of motivation or if drives exist independent of those motivating powers. Whenever we examine the "drives" -- toward control, power, knowledge, truth, hunger, etc. and ask the question "Why eat?" or "Why truth?" we seem to end in some version of pain avoidance or pleasure. I just don't know if they are really a bedrock or an epiphenomenon of the way we ask questions. I tend toward the epiphenomenon side and think that pain avoidance and pleasure help those motivating drives along.

But empathy cannot be the sole basis for morality because it does not provide a mechanism for valuation. Valuation is key to morality. Valuation seems to depend critically on the issue of pleasure and pain avoidance -- that is how we come to value certain things over others. But there again, pleasure and pain avoidance are not sufficient because we need to ask the question -- "why is that painful?", "why is that pleasurable?". There are clearly other factors at play in valuation, just as I think there are other factors at play in the whole process of empathy -- "why empathy for the bleeding dude and not the one holding the gun?". We have a whole nexus of assumptions underneath what we tend to see as the beginning places for morality and which seem to show us something about the very nature of emotions. I think many folks tend to view emotions as feelings that happen to us. But that isn't what they are -- they are forms of engagement with the world. There is no such thing as empathy by itself. There is empathy for this or that thing with a set of rules that accompany it. This is why I think the mirror neuron approach is fine and all, but it doesn't explain very much about how we really seem to work. Yes, they have to be linked to emotional systems, etc. -- but that is where the real work happens, the direction of fit, the real creation of morality. I think that is what we need to explain.

OK, I'll stop rambling now.
 
Sorry, just a bit more rambling.......

Perhaps this might help clarify my position by way of analogy......

I don't see our nervous system as ONE THING. Whether or not the nature of reality is ONE (strings, whatever), at the level of human description we are not one thing. Down to our very core (at the biological level) we are colonies. Every cell in our body is a colony. Even above that level we are a colony depending critically on inhabiting microorganisms on our skin, in our gut, etc. They die, we die.

I think our nervous system is analogous. We have many different computational modules, some of which are involved in those processes we call emotions, some of which are involved in those processes we call reason, some of which are involved in those processes we call perception, etc. (and many of which are multimodal). In order for these computational units to work and make sense of the blustering array of data out there, they require a certain architecture (which gives them a direction of action). During the evolutionary process many different forms of these computational units arose and have persisted -- when they helped organisms survive. In and of themselves they are not integrated. "We" try to integrate them in our functioning since we work best by seeing ourselves as a single unit (or more properly, they integrate to some degree in multi-modal units). And because these individual units can come into conflict with one another, they sometimes simply do -- so we have drives for x action and y action. We have numerous different drives that comprise what I think we call morality, just as each of our cells is composed not of one unit but of a colony of prokaryotes.

That is why, I think, any "ethical system" is doomed to failure. Variety is the first lesson of biology when discussing groups. But it also seems to be the first lesson of biology when it comes to discussing individuals. This, incidentally, is why I think we are good general problem solvers -- because we are endowed with several competing problem-solving units that try to work together.
 
I think duty devolves to empathy.

Empathy causes actions via emotions as it were. Most people don't have much in the way of Buddhist non-attachment, so for them empathy is effectively sufficient to change behavior because that tie is very strong and will go right into the complex pleasure/pain thing we all know and love. For some folk (like me) that gets more ... complicated, but I think it's still basically accurate.

Some people do often have more empathy for the guy with the gun than the bleeding dude. Cops, for example, assuming the guy with the gun is a cop. Guy on the ground is going to get cuffed (none too gently), searched, and left there while the ambulance comes. Ain't too empathic, hm?

Dunno what you mean by consequentialism; explain/integrate?

Your point re neural plurality is only partially accurate. It is self-integrating; that is in fact one of the most important features about it as a system. Cases where it isn't - like split-brain patients (with a severed corpus callosum), blindsight (being able to locate things without consciously seeing them), etc. - are fascinating for just that reason: they violate the rule. It is extremely difficult to address the question of how much of it we "consciously" regulate; in my informed opinion the field isn't anywhere near having the data to answer that. Perhaps in another hundred years.
 
