Ethically, our moral obligations or duties stem from our common human condition:
1. Mortality
2. Ability to suffer both psychological and physical pain
3. Our ability to reason
4. Our need for social cooperation in order to enhance survival.
Whoa. Where does all this come from? I thought the original question was "A rational definition of morality". This is just a shopping list of what you think is important.
Let's break this down. "Morality" consists of rules of thumb used to distinguish between 'good' and 'bad' behavior. These rules are arrived at by a process of social evolution in which a society discovers empirically which rules work and which ones don't; "thou shalt not steal" is a good example. Over time, some of these rules are encoded into religious texts and become God's word.
So that's how morality arises and how we can start to get an empirical handle on it.
Rationally, we can also start to formulate a 'theory of morality' and try to understand the mechanisms involved. Do moral rules follow patterns? Do the most successful sets of rules conform to some general principles? Taking a top-down approach, we can start to develop the field of ethics. Ethics is the process by which moral rules are arrived at through the application of reason and values. Ethics sits at a higher level of sophistication than morality because it attempts to explain
why certain moral rules are 'good' or 'bad'. Where a moral rule is pretty simple ("Thou shalt not kill"), ethics can be used to resolve the 'moral dilemmas' that arise when that rule is naively applied ("... but it's ok to kill if someone is trying to kill you").
But to apply ethics, you have to have a criterion by which you can compare outcomes. A previous poster mentioned the train track dilemma, in which you must decide whether to switch a train to one track, which would kill all the children in a schoolbus stuck on the tracks, or to another track, which would kill all the passengers in the train because, say, a bridge is out. Which is the better outcome? To decide this, you need to appeal to
values ("I value children more than I value adults", or "I value the integrity of the family group maintained by the adult"). Some values can have truly nasty outcomes ("I value the purity of the racial group"). So how do we rationally decide which are 'good' values and which are 'bad' values?
One way is to appeal to principles, which is not as arbitrary as it sounds; it works pretty well in physics, for example Occam's principle, the principle of least action, or the principle of equivalence. Basically, you select principles that in some way distinguish themselves, but that also (obviously) have the virtue of producing moral rules empirically demonstrated to be of great value. For example, the ethical principle of equivalence (everybody is treated the same way) is rather special because it is the only such principle that does not contain some additional and arbitrary way of partitioning the victims. Having selected a principle, you can ask whether a particular value leads to a violation of that principle. Obviously, "I value the purity of the racial group" does, because it would lead to laws that treat people differently depending on their race.
Bang the whole caboodle together and you can start to perform systematic experiments. For example, the classic contest run by Robert Axelrod was based around the iterated prisoner's dilemma payoff matrix ("goodness function"); it treated all the contestants the same ("principle of equivalence") but allowed each contestant to choose its own behavior ("moral behavior"). The non-obvious outcome was that Anatol Rapoport's tit-for-tat strategy was the most robust across environments. Expressed in human terms, it's "I'll scratch your back if you scratch mine, but if you kick me in the shin I'll kick you back in the 'nads." Sound familiar?
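To make the experiment concrete, here's a minimal sketch of such a tournament in Python. The payoff matrix is the standard prisoner's dilemma, but the four-strategy field, the round count, and the inclusion of self-play matches are my own illustrative choices, not the original contest's entries. Even in this toy field, tit-for-tat ties for the top total score despite never outscoring a single opponent head-to-head:

```python
# A toy Axelrod-style round-robin tournament. The strategy field and
# parameters here are illustrative, not the original contest's.

# Payoffs as (my_points, their_points) for (my_move, their_move).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def always_cooperate(my_hist, their_hist):
    return "C"

def always_defect(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    # Cooperate on the first move, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def grudger(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_hist else "C"

def play_match(strat_a, strat_b, rounds):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pts_a, pts_b = PAYOFF[(move_a, move_b)]
        score_a += pts_a
        score_b += pts_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(strategies, rounds=200):
    # Round-robin: every strategy plays every other, plus one self-play match.
    totals = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i:]:
            score_a, score_b = play_match(strategies[a], strategies[b], rounds)
            totals[a] += score_a
            if a != b:  # don't double-count the self-play score
                totals[b] += score_b
    return totals

results = tournament({
    "always_cooperate": always_cooperate,
    "always_defect": always_defect,
    "tit_for_tat": tit_for_tat,
    "grudger": grudger,
})
print(results)  # tit_for_tat ties grudger for the top score in this field
```

Note how "nice but retaliatory" pays off: always_defect wins every individual encounter by a few points but bleeds value in mutual defection, while tit-for-tat accumulates the rewards of sustained cooperation.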
So that's the long answer.
The short answer is yes, there is a rational basis for morality, and those who have trouble grasping this are probably victims of what a logician might describe as a 'category error': the belief that 'morality' contains some super-special ingredient or property that it does not. For example, suppose you believe that human intelligence is somehow unique; if someone were ever to show you a computer that was abundantly intelligent, you would end up objecting: "Yeah, ok, that's a really good
simulation of intelligence, but it's still not really intelligent." Ditto morality, if you think it is somehow derived from God.