AkuManiMani
Illuminator
Joined: Jan 19, 2008
Messages: 3,089
Well consider the system proposed by Isaac Asimov in Foundation - a system where a 'higher intelligence' has created a set of rules to live by. The higher intelligence intends only good in the long run for those following the Foundation, so it becomes moral to follow the dictates of the higher intelligence, and immoral to oppose them. Foundation is really entirely about religion and the nature of higher intelligence.
I agree with you there. Asimov's Foundation is a good hypothetical example of a non-theistic religion.
Morality is extended personal ethics? An interesting idea, but not necessarily one I'd endorse. For instance, I wouldn't particularly like to be cheated, and I don't cheat others, but where do you draw the line? Most of us would be willing to take a person's money for something they don't really need if they insisted on giving it to us. How far a stretch is it from taking someone's money for a product or service they may or may not need, because they insist on buying it, to televangelism? And how far from there to the shell game on the street?
The extension of personal ethics to other people ["do unto others as you would have them do unto you"] is just a good starting rule of thumb. The more basic premises are:
(a)"I am alive; My life has inherent value."
(b)"Other people are alive; their lives must also have inherent value."
(c)"It is preferable to act in consideration of this value to increase collective benefit".
From there one can come up with many differing systems for determining [to varying degrees of success] what maximizes benefit for the self and other equivalent "selves". It is within this realm of equivalents [i.e. empathy] that a social group is formed. Outside the realm of sentient entities, morality has no meaning; there is no morality without subjective experience. While the basis of what is considered "benefit" varies subjectively, that subjective experience still has an objective reality to it. Therefore, one can logically and objectively determine how successful a method/system is at maximizing it.
Because conditions vary and change over time, these systems and methodologies are provisional and tentative. For instance, a "thou shalt not" of ages past may not provide the same [or any] benefit today as it once did in its original context. Like science and technology, moral and ethical systems must evolve over time, but the philosophical goal at their core should remain relatively constant.
I don't believe in any morality. I don't think there's a morality gene, or hardwired instincts that certain things are immoral. There are probably various genes and traits we have hardwired that enhance survival, but morality can't simply be about propagation of your personal genetic code. If it were hardwired into me, wouldn't I lack a choice about believing it?
I disagree with the statement that you don't believe in morality. Clearly your social actions aren't made in an ethical vacuum, and I assume that you aren't a sociopath. Judging from your statements so far, I think that you merely do not believe in a transcendent/metaphysical basis for morality. I would argue that morality does not need one to be valid and that it has very real practical benefit regardless. I would also argue that, given the aforementioned practical value and the universality of morality within complex social organisms [with the rule-proving exception of sociopaths], morality must also have some biological basis.