The reason I do not like utilitarianism is that it lacks a real concept of benefit-to-other; it cannot justify self-sacrifice. My version can (if one empathizes with others more than with oneself).
I'm sorry, but that is simply false. The whole idea of utilitarianism is benefit to other. Self-sacrifice that benefits the entire community and creates greater happiness for all is considered noble by utilitarians (well, to the community; the sacrificer is left out of the consequentialist bargain). Utilitarianism is not hedonism. Mill, for instance, considered it yet another example of the golden rule in action -- the greatest happiness principle is based on treating others with respect in order to increase general happiness and decrease general pain.
The other reason I like my version is that it is neurologically sufficient; mirror neurons plus internal modeling plus emotional reactions are all you need to explain it. It doesn't resort to anything external like "god said to."
I haven't heard anything in it that is neurologically sufficient for a complete ethic. First, you are putting far too much explanatory power on mirror neurons (which is OK in my book since I have done the same). While these neurons undoubtedly form the basis for what we know of as empathy, they do not in and of themselves account for the full range of empathy needed to do what you want them to do -- like motivate self-sacrifice. They are primarily important for understanding social interaction and comprehension. They do not provide fellow-feeling unless they are linked through other systems involved with pleasure, fear, etc. Since you seem to realize that we also need emotional reactions, etc., you have really already answered your own question. Empathy is not all there is to human interaction, and not all there is to human ethical behavior. If all that mattered were empathy, then my empathy for what a killer might have to go through would stop me from wanting any retribution against him. But that is not the human response.
Now, with that said, our brains are neurologically sufficient for ethics. But the functioning of our brains includes both emotions and reasoning. Ethics includes both emotion and reasoning. I have yet to hear of one single idea that explains ethics in a way that works for most people. Empathy is an absolutely key ingredient. But it is not the whole story.
A trucker doesn't fix his brakes one year. He has trouble stopping suddenly when a squirrel jumps out, and he flattens it. Later he has trouble stopping when a kid jumps out, and he flattens him. Are these situations equal? If not, why not; if so, why? Is the only issue empathy for the parents of the dead kid? What about the squirrel? What about the horrible anguish of the trucker? How much empathy for him? How do we assign empathy in each situation? What if one person thinks the poor trucker is getting a raw deal because he didn't really mean it?
Or look at it from one other view. How have you really added much to the discussion? By mentioning mirror neurons? The rest of your "neurology" is a black box. That's also fine by me as far as explanations go because most of neurology (really neuropsychology since neurology proper is a medical subspecialty) is a black box. But if you want to pretend that you have any sort of neurological explanation, then I don't buy it.
It does of course also mean that for a psychopath or autistic person who lacks understanding of or empathy for others, it dissolves back to straight utilitarianism.
Why should they rely on utilitarianism? Why shouldn't everyone? Why shouldn't everyone rely on deontology? Why not virtue ethics?
"Prescriptive" morals that I refer to are not based on these neurological facts, but rather on pure axioms about platonic ideals, deific doctrine, or the like. Thus I think they are less reliable - as can been shown e.g. by the cognitive dissonance of a strict Jew trying to act morally and within the commands of their religion. Or a fundamentalist Christian doing the same.
Then you are using the term incorrectly. Prescriptive means telling people what to do. Your morality is prescriptive. What you object to are older moral systems, because you seem to think they are too intellectualized. That's fine. I agree to some extent. I think all ethical systems that claim to rely purely on reason are either flat out wrong or are fooling themselves -- that is why I brought Kant into the discussion. However, more traditional ethical analyses have been based on the ultimate good for mankind and have viewed ethics from what can be called a positive and negative direction -- not only what not to do, but also what to do in order for us to lead a good life. So Aristotle examined what is good (what else are you going to base an ethic on?) and decided that it was some form of happiness or human flourishing. He concentrated on the virtues more than anything else but ultimately seems to have pinned his ethic on basic human psychology -- when I read the Nicomachean Ethics it sounded, in a vague way, like a recapitulation of Abraham Maslow's Hierarchy of Human Needs. Aristotle finally seems to alight on a form of self-actualization as the best human good. So it seems to me that he bases his ethics on human psychology, which is based on human neurology.
The utilitarians take a slightly different slant, though they follow Aristotle's lead in analyzing the "good." The only thing we pursue for its own sake is some form of happiness or pleasure or avoidance of pain. They just decided that a proper ethic consisted of everyone sharing this -- universalizing happiness. This is based in human psychology and hence human neurology.
Kant searched for the rational basis of morals and thought he found it. He didn't, but he still arrived at a pretty darn good ethics. The whole idea was based on rationality and duty. It was based in human psychology and therefore human neurology.
You have decided that the ultimate good is empathy. OK. I don't see any difference between the above systems and yours except that the others went into much greater detail to rationalize their choice.
All human morals are necessarily based in our neurology. There is no other option unless there is a non-material portion to our minds. There is nothing special about invoking mirror neurons. That will not make anything you say any more correct or believable, especially because we know so very little about the dang things. And since they were discovered in non-human primates, whose ethics we do not know, I think it is very dangerous to think that mirror neurons explain anything better than "well, it just happens up there in the brain somehow."
If you think that the descriptive basis of my theory is inaccurate, please show me a good counterexample.
See above.
The "motivated" aspect of it is, essentially, applying utilitarianism at the social/cultural level to that description of how people operate at the individual level.
Utilitarianism is applying utilitarianism at the social level to the description of how people operate at the individual level. So basically you are telling me that you are proposing utilitarianism?
What I see you really doing is trying to return ethics to its foundation in feelings. I think that is commendable. Hume did the same thing. The problem is that there are huge numbers of folks out there who don't want to believe that ethics ultimately boils down to basic human motivations. Our basic human motivations are pleasure and pain. So some philosophers try to show that we don't just want pleasure and to avoid pain, but that we have other drives that they think prove there are true "objective" moral truths. One supposed demonstration runs like this -- suppose that in the future we have the ability to convert everyone to a brain in a vat and give everyone every pleasure that they would desire -- "high pleasures," "low pleasures," the whole gamut. All you have to do is press this little red button and you and everyone else in the world will be suffused with joy. Would you do it? I'm sure there are some people out there who would say "yes," but most folks would say "no." The reason? Because it wouldn't be true. It wouldn't be real. So we are supposed to have this "objective" drive toward the real. But if you subject this finding to the analysis that Aristotle did, then I don't think we could call the will to truth a basic drive. Why care about truth, why care about the real? To avoid others controlling us? To avoid pain? We seem to feel pain when we think someone is fooling us. Even this example may show that at the basis of all human motivation is the search for pleasure and avoidance of pain. That may be what the word motivation really means.
So, yes, I agree with the program of returning ethical thinking to its roots in basic human motivations, and with any and all attempts to describe ethics from a neurological perspective. I just don't think we are even near the ballpark yet.