I was of the opinion that science can't answer moral questions, but I might be changing my mind. I think it might be possible to devise a universal moral framework using science, even if it's worthless for answering any detailed specifics.
See, I think you have this exactly backwards.
Ethics is rules for applying morality. Morality is based on values. And values are, pretty much by definition, subjective.
There's no way to set up a "universal values" system, because values are subjective. You can't objectively determine the "best" value without already assuming a value judgement about what's best.
That being said, once you get down to the basics of what should be valued, science can definitely help with the specifics. Once you have your values (axioms, e.g. "minimize human suffering"), science can help you determine and weigh further values (minimize it at the expense of everything else? Do we value animal life, and how much? How much suffering is acceptable if it results in less in the long term?) and apply those values to situations (well, this law/practice/custom/solution sounds good, but in the past it's had these consequences, so we should try a different option).
As has already been discussed, a good scientific case can be made that feeling good/bad, pleasant/unpleasant is a universal state of evolved neural networks, with the function of encouraging behaviour leading to reproductive success and avoiding harmful situations.
Since the universal goal of life is to reproduce, maximizing happiness and minimizing suffering would be a universal goal for any animal with a brain complex enough to experience them.
So that's where we start.
Exactly, that is why we have it. It has had the evolutionary function of bolstering cooperation in close-knit groups, leading to reproductive success.
It also has a dark side: in a system with limited resources, reproductive success depends on out-competing other close-knit groups.
Within a social group the cooperative, empathetic side of behaviour has always been seen as virtuous and the combative, selfish side as evil. The reverse is true for interactions between competing groups. Why this is so should be self-evident.
Game theory, and the fact that cooperation evolved in the first place, both point to cooperation being the better strategy.
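This is worth making concrete. In the iterated prisoner's dilemma, reciprocating cooperators vastly out-earn mutual defectors over repeated interactions, even though a defector can exploit a cooperator in any single encounter. Here's a minimal sketch using the textbook payoffs (T=5, R=3, P=1, S=0); the strategies, round count, and function names are illustrative choices, not anything from this thread:

```python
# Standard prisoner's dilemma payoffs for (my_move, their_move).
# C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_defect(my_moves, their_moves):
    return "D"

def tit_for_tat(my_moves, their_moves):
    # Cooperate first, then mirror the opponent's previous move.
    return their_moves[-1] if their_moves else "C"

def play(strat_a, strat_b, rounds=200):
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(moves_a, moves_b)
        b = strat_b(moves_b, moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation
print(play(always_defect, always_defect))  # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))    # (199, 204): defector barely edges out the cooperator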
Since we are all stuck on this planet together, our social group has basically expanded to include the whole globe.
I propose that the overall function of a moral code stay the same: the continued prosperity of the group.
Since our concept of morality has recently expanded to include such "outlandish" groups as other races, women, children and even animals, I propose to expand it to include all life.
I think all living things have value.
Just as your prosperity depends on the prosperity of your society, the prosperity of society depends on the prosperity of the system it inhabits.
For the foreseeable future our prosperity depends on the prosperity of the planetary ecosystem.
It boils down to:
Maximizing happiness/minimizing suffering by striving to sustain a balanced, stable ecosystem where all life has value.
More complex life, with more complex brains, has more individual personal value; less complex life has more aggregate/ecological value (a toy sketch of this weighting follows below).
Something like that, I'm not sure. Does that make sense?
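Purely as a toy illustration of that weighting, here's one way it could be parameterized; every number, scale, and function here is invented for the example, nothing in the thread specifies them:

```python
def per_individual_value(complexity):
    # Toy rule: the moral weight of a single organism rises steeply
    # with a made-up 0-to-1 "brain complexity" score.
    return complexity ** 3

def aggregate_value(complexity, population):
    # Toy rule: simpler life draws its weight from sheer numbers and
    # ecological role, with diminishing returns on raw count.
    return (1 - complexity) * population ** 0.5

# A hypothetical ape species vs. a hypothetical ant species: the ape
# counts for vastly more per individual, while the ants still carry
# substantial weight in aggregate.
print(per_individual_value(0.8), aggregate_value(0.8, 200_000))
print(per_individual_value(0.01), aggregate_value(0.01, 10**16))
```

The exact curves don't matter; the point is just that "value" splits into a per-individual term tracking complexity and an aggregate term tracking ecological role.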
And everything above is a subjective value judgement, based on the goals of minimizing suffering and maximizing the reproductive success of the species. You set those values, and then everything else follows from them. But the values (life is important, suffering is always bad) are subjective, as they are with any morality.
In fact, there have been moral codes throughout history where suffering was a good thing: various cultures where the suffering and/or destruction of your enemies was a positive good, or some of the Christian sects that believe in self-mortification. Even in modern society we accept some suffering for what's considered the greater good (criminal laws with punishments for offenders), and most religions incorporate at least that much (sinners go to Hell, etc.). Personally, I can't see any way that any concept of Hell fits in with minimizing suffering, so that value is obviously NOT universal.
Now, all that said, I agree that minimizing suffering and maximizing happiness is probably a good basis to start from, but that's not even a framework to build a moral and ethical system on: it's at best the plan for a foundation.
