Richard T. Garner and "Beyond Morality"

Perhaps you should have highlighted the "logically independent" part, like this:

"Whether or not a statement is normative is logically independent of whether it is verified, verifiable, or popularly held."

That would have focused on where I seemed to be wrong.

This implies that a Normative statement COULD be verified, or it could NOT be verified. Either way, it could still be a normative statement.

The problem, though, is that the normative statements we are dealing with in morality are assumed to be the unverifiable type. So, that is what I was harping on.

You know, it's traditional to say, "Sorry, you were right, I misread it," rather than to tell me that my explanations don't live up to your standard, but never mind.

If you want to amend the claim to deal only with morality, then I may grant that these are not verifiable -- certainly not empirically so. To be sure, even then we have to be careful on what we mean, because it's not clear what a "normative statement in morality" is.

But you should realize that there's a world of difference[1] between "is neither true nor false" and "is not decidable" -- especially when we restrict decidability to empirical methods.

[1] At least, many folks think so and so it is dangerous to presume otherwise without argument.
 
It is indicating that there is no "true answer" for normative statements.

Once you add conditions, it is no longer considered a normative statement, apparently. And, on top of that, the philosopher will then ask: "Why should we value those conditions?"

If you explain why, this leads to an infinite regress:

"Well, because those conditions yield the best consequences."
"Why should we value consequentialism?"
"Because natural forces will tend to have us do that."
"Why should we value what natural forces have us do?"
"Because if you don't, you will be a one of the depressions in the saw tooth chart, a 'Footnote of History', etc."
"Why should we value not being a 'Footnote of History'?"
Etc. and so on.

I think the cracks in philosophical thinking, on these points, start to show, when things like this happen.

Firstly, 'philosophical thinking' is not limited to moral scepticism. There have been philosophers making arguments similar to yours for centuries, and they are very much philosophical arguments. You're already using their terms, 'consequentialism', 'utilitarianism', etc.

There seems to be a tendency to say, 'move over philosophy, it's time for science to answer these questions', which is informing your whole argument.

This tendency belies a mistaken assumption which only serves to display naivety of the subject matter, and smacks of arrogance. It is extremely reminiscent of the Creationist tone when interjecting into debates on evolution. I'm sorry if these words sound harsh, but that is how I see it.

Whether you like it or not, moral scepticism simply is the more 'scientific' and less 'wooish' position in moral philosophy. Science can help us to accurately describe the world, and it can help us to get what we want, but when all the facts are known, there will still be moral disagreements. Therefore, the Moral Error theorist says that normative statements are all false. Ultimately, there is no objective property of the universe that can answer these questions.

Yes, it is perfectly possible to start with a tautological moral axiom such as, 'well-being is good and should be maximised', and then use science to help us work out how we can maximise well-being, but that claim is neither novel, nor one that is based on logic or evidence. It is 'science after the (assumed) fact'. Science after the fact cannot lend credibility to the claim that the 'fact' in dispute (in this case, the founding moral axiom) is 'scientific' knowledge.

It is a preference, among competing preferences.

Yes, a lot of people hold that preference, but the fact that a lot of people hold the same opinion does not make it scientific truth without some underpinning in reality which is knowable or measurable in some way. What moral realists have consistently failed to do is describe the way in which that 'moral truth' is knowable or verifiable.

If I assume that 'fairies are real' is a true statement, that doesn't make a 'science of fairies' respectable, does it? That doesn't make 'fairy belief' a more scientific view than 'fairy scepticism'? Even if a lot of people believe in fairies and act as if they were real?

So, the default position for sceptics should be some form of moral scepticism, on the basis of consistency. If we look at moral claims through the lens of reason and evidence alone, we will find that all of them are false.

The moral sceptic would say that murder is not inherently 'wrong'. Yes, it might be a behaviour that many people (including the moral sceptic) find abhorrent. Yes, there might be an easily made argument against murder that many people find compelling. Yet, the 'wrongness' in this case refers to a preference, or a group of preferences, in other words, subjective opinion. Yet in our everyday discourse, we refer to moral statements such as 'murder is wrong' as if they hold some undeniable truth about the matter, which must in some way be discoverable by reason or evidence. Yet, if I asked you to prove to me that murder was wrong, using reason and evidence only, and without reference to a tautological moral axiom that you have just assumed, because it is your preference, you would be unable to do that.

That is the essence of Moral Error theory. The 'error' is the idea that these ideas of right and wrong and good and bad, refer to some kind of truth, which is knowable or verifiable in some way, some kind of truth that is not derived from our preferences, which are the product of our genes, our environment and human invention.

And I'm not a Moral Error theorist... so that was heavy lifting devil's advocacy and I hope you appreciate it.
 
First, you have to think of it in terms of social policy: Would you want to live in a society where people could freely take photos of others changing, without their permission? (Regardless of the reasons, which we will get to.) If most people say "No", this is an indication that it could be immoral to do so. Even though that is not a guarantee (Most people could be wrong), you can get a sense of why it would be immoral to do such a thing.

Yes, that's possible. I'm not sure it's 'true' - As I pointed out, the alternative is that we could be wonky and everybody could be behaving immorally. We accuse entire cultures of this today.

But, for utilitarianism to answer the question, the reasons need to be established:

The reasons, in this example, boil down to the importance of human dignity and privacy, and the brakes we need to put on allowing ourselves to disrespect either of them. Without sufficient brakes, physical violence could, in fact, increase. Science bears this out, and modern societies happened to hit upon these rules as solutions during their evolution. Pinker calls it the Civilizing Process, and writes several chapters about it in his book.

Once we realize that granting everyone the freedom to disrespect everyone's privacy would actually increase physical violence, which is far worse than merely having perverts collecting images of you, it makes sense to enforce privacy rules as a moral issue.

Of course, most people are not aware of that science or history. So, it becomes easy to claim utility is not good for this sort of thing. But, I hope we can see how this kind of information emerges more readily from talking about morality in objective terms than in subjective ones.

I think you'd have to connect these photos to increased violence more soundly. Pinker is not talking about this example (I assume you're referring to The Better Angels of Our Nature). I don't see the connection, which is the relevant point.

It's also drifting from Utilitarianism to Rule Utilitarianism, as I explained. The latter being a 'fixer-upper' to paper over the major problems in Utilitarianism and still call it valid.

What I mean by this is that 'privacy' has different aspects, and it may not be valid to say that protecting a person's secrets has the same value as protecting an anonymized image.

The alternate example cases are as I described: we're not saying that everybody's privacy should be violated - just one or two people. The rest of us can be assured we're safe. It still 'sounds wrong'.
 
But, for utilitarianism to answer the question, the reasons need to be established:
Oops. I actually meant Consequentialism instead of Utilitarianism, in this post! Sorry about that.

But you're invoking the categorical imperative here:

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.


Which clashes with Utilitarianism in this example. If the photographer is going to be greatly gratified by taking the photos and knows that no-one else will be harmed by doing this, the utilitarian argument is for taking the photos.

Actually, I meant Consequentialism. Does that change things?

If you want to amend the claim to deal only with morality, then I may grant that these are not verifiable -- certainly not empirically so.
And, I claim they can be empirically verifiable. Once you take into consideration the theory that morality can only exist in one way, any other consideration is fictional, even if you want to disagree with it.
 
Actually, I meant Consequentialism. Does that change things?

No, the Categorical Imperative is Kantian ethics, and his ethical theory was deontological, rather than consequentialist (or in other words, we begin with a rule, rather than with the effects).

Having said that, there is an idea called rule-utilitarianism which attempts to get the best of both worlds, by saying for example that we should all stop at red lights whether or not we need to because following the rules ultimately results in better consequences, but it strains for coherency because we can think of many cases where we would end up better off by breaking the rules.

http://en.wikipedia.org/wiki/Rule_utilitarianism

In the case of the categorical imperative, we would have to accept that it is correct sometimes to lie even though societies where everyone goes round lying at all times would be difficult to maintain.
 
What moral realists have consistently failed to do is describe the way in which that 'moral truth' is knowable or verifiable.
I assume you accept the Theory of Evolution, right? If so, can you answer this:

"How do we REALLY KNOW what is 'most fit' for a species?!"

The short answer is that we typically know, from hindsight, what WAS the most fit, of the available gene variations, over time.

But, I imagine philosophers could break into the same arguments about it, as we are for morality:

"Oh sure, survival and reproduction is good and should be maximized. But, that is merely the axiom you are starting with. Can we REALLY claim that Natural Selection should be selecting for those?”

Even though most of our knowledge about Natural Selection came from hindsight, we can STILL make large-scale predictions about its future. For example: We can predict that bacterial agents will adapt to become resistant to antibacterial soap, once too many people are using it too often.

The proximate details might be harder to unravel: We might NOT be able to figure out which specific adaptations those bacteria will take. But, that does not make the larger statement any less accurate.

In the same way, moral truth can be knowable and verifiable. So far, most of our knowledge has to come from hindsight: What worked for the best consequences of society, in the past. But, we might be able to make large predictions about our future morals based on what we figure out about it.

That is the essence of Moral Error theory. The 'error' is the idea that these ideas of right and wrong and good and bad, refer to some kind of truth, which is knowable or verifiable in some way, some kind of truth that is not derived from our preferences, which are the product of our genes, our environment and human invention.
That is an assumption they are making, that is not backed by anything.

Moral truth happens to emerge as a property of human societies, but once emerged, it acts independently of our genes and human intervention. And, it is probably a couple of degrees of separation from the environment, as well.

Yet, if I asked you to prove to me that murder was wrong, using reason and evidence only, and without reference to a tautological moral axiom that you have just assumed, because it is your preference, you would be unable to do that.
Natural forces, beyond our control, tend to bend our morals towards making murder wrong. This is not a preference. This is not an assumption. This, apparently, is an objective, empirical truth. And, it can be verified across multiple lines of investigation.

This tendency belies a mistaken assumption which only serves to display naivety of the subject matter, and smacks of arrogance. It is extremely reminiscent of the Creationist tone when interjecting into debates on evolution. I'm sorry if these words sound harsh, but that is how I see it.
Funnily enough I see the arrogant insertion of philosophical fictions, into scientific realms, as reminiscent of the Creationist tone.

Yes, it is perfectly possible to start with a tautological moral axiom such as, 'well-being is good and should be maximised', and then use science to help us work out how we can maximise well-being,
Morality does not really work that way. Well-being is WHAT ALL moral questions end up becoming about. There is no other stable manner in which morality can exist. (according to theory)

It is NOT like we can just decide "'well-being is good and should be maximized". The decision was made for us, long ago, since before we were even humans. That is my point.

but that claim is neither novel,
I did not claim it was novel. I know this is an old idea. Though, I am experimenting in how to convey the idea, using different words.

nor one that is based on logic or evidence.
It is based on the evidence that well-being seems to improve, over time, in an inclined saw-tooth manner; and from scientific experiments that are consistent with the notion (even if it was not directly tested, yet). And, logically, it follows from Natural Selection.

And I'm not a Moral Error theorist... so that was heavy lifting devil's advocacy and I hope you appreciate it.
I do! :)

I hope you don't take my responses personally.

Yes, that's possible. I'm not sure it's 'true' - As I pointed out, the alternative is that we could be wonky and everybody could be behaving immorally. We accuse entire cultures of this today.
That was only a starting point. We can approximate what might be morally correct, based on what seemed to work in hindsight. And, we can acquire that perspective, in part, by examining current social policy.

The real trick comes in when we investigate that starting information further: Can we identify how it led to better consequences? Can we identify any sources of deception, such as hidden information as to why it actually led to bad consequences? And, either way, can we improve upon that for the future?

I think you'd have to connect these photos to increased violence more soundly. Pinker is not talking about this example (I assume you're referring to The Better Angels of Our Nature). I don't see the connection, which is the relevant point.
Pinker did not go into that example.

But, if I recall, he and others go into the value that the concept of 'human dignity' brings to us. Too much, and we are wasting resources dignifying things to a level they do not really need. Too little, and we are too tempted to treat each other badly, and everyone suffers more.

There might be a minimal level of dignity we could apply to all humans, so that we are less likely to fall into the trap of treating each other like crap, which makes all of society suffer.

Though, there is also an advantage to be had by exceeding that level a little bit: Err on the side of slightly too much dignity, rather than too little, so that we have a margin against dipping too far into the bad end.

It sounds like these anonymized photo examples could be part of that.

The alternate example cases are as I described: we're not saying that everybody's privacy should be violated - just one or two people. The rest of us can be assured we're safe. It still 'sounds wrong'.
It would 'sound wrong' if the rules were arbitrary. Why do one or two people get different rules from everyone else? Can those rules leak over to anyone else, including myself and other people I care about?
 
And, I claim they can be empirically verifiable. Once you take into consideration the theory that morality can only exist in one way, any other consideration is fictional, even if you want to disagree with it.

While you may claim this, of course, others (including me) have not yet been convinced by your arguments.
 
Well-being is WHAT ALL moral questions end up becoming about. There is no other stable manner in which morality can exist. (according to theory)

Two problems with this.

1.) It may not be true that the ultimate goal is well-being, if you prefer other ends such as "justice", for example.

2.) Well-being is not only vague, as has been pointed out, but "maximizing well-being" is itself a difficult concept. Maximizing it for whom? And should it be equally distributed? And should we have an obligation to produce more people who can appreciate this well-being thus maximizing it in moral tokens (or individual conscious beings) rather than maximizing moral type (i.e the quality within each being).

In fact, now that I think of it there are many, many more problems involved, but I have not enough time right now to go into them.
 
Having said that, there is an idea called rule-utilitarianism which attempts to get the best of both worlds, by saying for example that we should all stop at red lights whether or not we need to because following the rules ultimately results in better consequences, but it strains for coherency because we can think of many cases where we would end up better off by breaking the rules.
Morality cannot be strictly rule-based, because there are too many variables creating too many exceptions for too many things in the environment, especially as that environment changes over time.

"Guideline-based" is the more realistic approach: These are the general rules to adhere to, but you can break them if you have a sufficient reason to do so.

"Sufficient" can be calculated in several ways, include risk/reward analysis.

It might be interesting to note that sometimes the cost of calculations could exceed the savings you would get from the answer. If someone knows they could probably save $5.00 on some purchase, but it would cost them $80.00 to find out how, they would probably not bother with it (assuming it would only apply to them, this once, and not knowledge they can sell to other people, later).

Moral calculations could sometimes end up the same way: There might be a way to figure out how to improve everyone's consequences by 3%, but the costs of finding out how might make everyone worse off by 10%, before the answer is found.

Consequentialism can take into account the costs of its own discoveries.

In the case of the categorical imperative, we would have to accept that it is correct sometimes to lie even though societies where everyone goes round lying at all times would be difficult to maintain.
Ah, the reasons where lying is allowed cannot be arbitrary!

It is arbitrariness of exceptions that is the issue, not the mere existence of exceptions.

While you may claim this, of course, others (including me) have not yet been convinced by your arguments.
I am curious to know how you think morality works (or doesn't).
 
I am curious to know how you think morality works (or doesn't).

I don't think that I have any deep insights.

I wish I did.

I tend to be a moral realist, but don't pretend to have a convincing argument that realism is correct. I like much of Kant, but can't say that he has convincingly shown the categorical imperative (in any form) is the right moral principle. I tend not to care much for Utilitarianism, though I can see why some people find it attractive. Certainly, Mill's justification seems plainly fallacious to me.

So, I'm afraid you won't get any firm answers from me. Ethics is hard.
 
Morality cannot be strictly rule-based, because there are too many variables creating too many exceptions for too many things in the environment, especially as that environment changes over time.

"Guideline-based" is the more realistic approach: These are the general rules to adhere to, but you can break them if you have a sufficient reason to do so.

"Sufficient" can be calculated in several ways, include risk/reward analysis.

It might be interesting to note that sometimes the cost of calculations could exceed the savings you would get from the answer. If someone knows they could probably save $5.00 on some purchase, but it would cost them $80.00 to find out how, they would probably not bother with it (assuming it would only apply to them, this once, and not knowledge they can sell to other people, later).

Moral calculations could sometimes end up the same way: There might be a way to figure out how to improve everyone's consequences by 3%, but the costs of finding out how might make everyone worse off by 10%, before the answer is found.

Consequentialism can take into account the costs of its own discoveries.

I could sum that up as "Do be good, don't be bad!" and when asked what is good or bad, reply that "Good is when good things happen as a result, and bad is when bad things happen as a result".

So how about Democracy Simulator's example, only with the further embellishment:

Can it be said to be morally good if I take peeping-Tom videos of my neighbour, so long as I make sure that no one ever finds out about it except me, and I use my pictures of her for personal gratification, thus increasing the happiness in the world?
 
I could sum that up as "Do be good, don't be bad!" and when asked what is good or bad, reply that "Good is when good things happen as a result, and bad is when bad things happen as a result".

So how about Democracy Simulator's example, only with the further embellishment:

Can it be said to be morally good if I take peeping-Tom videos of my neighbour, so long as I make sure that no one ever finds out about it except me, and I use my pictures of her for personal gratification, thus increasing the happiness in the world?

What's she look like?

Because, after all, you might be an even better person if you shared them with me. But you would be a much, much worse person if my wife ever found out.
 
What's she look like?

Because, after all, you might be an even better person if you shared them with me. But you would be a much, much worse person if my wife ever found out.

Well, if I showed them to you and you never disclosed this, then presumably I have doubled or tripled the amount of happiness in the world. Yet there is some nagging feeling that it might be wrong to do this. Perhaps it is merely cultural conditioning, and in the future, when scientists have worked it out for us, we will find nothing wrong with the idea.
 
I've been over these, before. But, okay:
1.) It may not be true that the ultimate goal is well-being, if you prefer other ends such as "justice", for example.
Your preference does not matter. Neither does mine.

All moral goals towards "justice" end up naturally turning into goals towards consequentialism.

2.) Well-being is not only vague,
I do like the term Welfare Consequentialism better. But, I can use "well-being" as a short hand.

And, it is no more vague than the "fitness" that Natural Selection selects for.

Maximizing it for whom? And should it be equally distributed?
Morality stabilizes on maximizing the consequences of society across several different measures, roughly divided between health, wealth, and happiness.

Most of those measures tend to converge on the same answers, so that something that is better for our health is also often better for our wealth, etc.

It is rare to have trade-offs between most of the measures. But, when they do occur, there can be more than one correct answer: Perhaps a 20/80 ratio works just as well, consequence-wise, as a 5/95 ratio. So, in those cases, we can NOT say one is more moral than the other, all else being equal. Though, it might still cost less to move towards the one we are already closer to.

And should we have an obligation to produce more people who can appreciate this well-being thus maximizing it in moral tokens (or individual conscious beings) rather than maximizing moral type (i.e the quality within each being).
I think population size would be more dependent on environmental and survival strategy factors. Our 'obligation' to produce more people or not would probably be naturally guided by that (even if we don't realize it). After that problem is solved, emergent morality works on improving quality among that population.
 
Can it be said to be morally good if I take peeping-Tom videos of my neighbour, so long as I make sure that no one ever finds out about it except me, and I use my pictures of her for personal gratification, thus increasing the happiness in the world?
The same answer applies. If that was outright allowed to happen, even if no one would ever find out about it*, it would still erode the notion of human dignity enough to potentially increase violence.

(*and the more it happens, the harder it is to hide.)


Our revulsion towards Peeping-Tom videos (in a typical person) does not come from nowhere! It is not some arbitrary preference we just happened to have, for the sake of having it.

No, that revulsion evolved within our society, for some reason.

The reason I offered is theoretical, and could be wrong. But, if it is, then whatever takes its place would be the reason our morality veers away from allowing it.
 
I've been over these, before. But, okay:
Your preference does not matter. Neither does mine.

All moral goals towards "justice" end up naturally turning into goals towards consequentialism.


I do like the term Welfare Consequentialism better. But, I can use "well-being" as a short hand.

And, it is no more vague than the "fitness" that Natural Selection selects for.

Morality stabilizes on maximizing the consequences of society across several different measures, roughly divided between health, wealth, and happiness.

Most of those measures tend to converge on the same answers, so that something that is better for our health is also often better for our wealth, etc.

It is rare to have trade-offs between most of the measures. But, when they do occur, there can be more than one correct answer: Perhaps a 20/80 ratio works just as well, consequence-wise, as a 5/95 ratio. So, in those cases, we can NOT say one is more moral than the other, all else being equal. Though, it might still cost less to move towards the one we are already closer to.


I think population size would be more dependent on environmental and survival strategy factors. Our 'obligation' to produce more people or not would probably be naturally guided by that (even if we don't realize it). After that problem is solved, emergent morality works on improving quality among that population.

The same answer applies. If that was outright allowed to happen, even if no one would ever find out about it*, it would still erode the notion of human dignity enough to potentially increase violence.

(*and the more it happens, the harder it is to hide.)


Our revulsion towards Peeping-Tom videos (in a typical person) does not come from nowhere! It is not some arbitrary preference we just happened to have, for the sake of having it.

No, that revulsion evolved within our society, for some reason.

The reason I offered is theoretical, and could be wrong. But, if it is, then whatever takes its place would be the reason our morality veers away from allowing it.

If I understand you correctly, you seem to be advocating a naturalistic fallacy (I know you have denied it) which is coupled to a Panglossian fantasy with a Whiggish historiography: that what is right is determined by evolution, that it will inevitably show us a better world, if not the best of all possible worlds, and that we will be drawn towards it.

And there's no need for any conscious deliberation.

I find that to be wishful thinking in the extreme.

Most of your arguments to objections of the theory seem to be ad hoc, as if you are just making things up on the fly.
 
If I understand you correctly, you seem to be advocating a naturalistic fallacy (I know you have denied it)
To review my points in that regard:

1. Morality naturally works on society as a collective. If morals are considered to be "binding" in any way, it would be more to that collective, than to individuals.

Claiming that The Society is committing a naturalistic fallacy, at that level, would be weird. It would be like claiming bees are committing a naturalistic fallacy by forming swarms. This is something that emerges from the collective, not something the collective specifically sets out to do.

2. If we can demonstrate that morality stabilizes on a particular value (which seems to be consequentialism), and cannot exist in any other form, then asking "Should I value consequentialism?" would be as strange as asking "Should bees swarm?" or even: "Should the Earth revolve around the Sun?"
This is a property of morality that emerges from the collective, not something the members of the collective directly decide upon.

3. When an individual decides to go along with what seems moral, you COULD accuse that person of committing a naturalistic fallacy.

However, I am not the one committing one for merely pointing that out. Nor am I advocating that everyone should do that. I am merely making the observation that morality works that way.


which is coupled to a Panglossian fantasy with a Whiggish historiography: that what is right is determined by evolution, that it will inevitably show us a better world, if not the best of all possible worlds, and that we will be drawn towards it.
I think that's exaggerating my points.

But, it's not like we have much choice in the matter. If YOU don't like the way evolution is going for us: Good, bad, or indifferent, then YOU can live in a different universe, I guess.

And there's no need for any conscious deliberation.
There is PLENTY of room for deliberation. Most of these ideas are theoretical, and better science could probably improve what we know about what they attempt to describe.

The problem is that philosophers like to make useless points based on fiction. But, it is hard to blame them, since morality IS a tough nut to crack, objectively. It's just easier for some to give up the ship and claim either "morality is subjective" or "morality doesn't really exist!"
 
Most of your arguments to objections of the theory seem to be ad hoc, as if you are just making things up on the fly.
I should also reiterate the greater point in doing all of this:

These questions CAN be answered by science.

Even if my answers are wrong: Someone, somewhere can figure out what the right answer is.

Some of the answers are complicated.

But, assuming they can't have answers is silly.
 
To review my points in that regard:

1. Morality naturally works on society as a collective. If morals are considered to be "binding" in any way, it would be more to that collective, than to individuals.

Claiming that The Society is committing a naturalistic fallacy, at that level, would be weird. It would be like claiming bees are committing a naturalistic fallacy by forming swarms. This is something that emerges from the collective, not something the collective specifically sets out to do.

2. If we can demonstrate that morality stabilizes on a particular value (which seems to be consequentialism) and cannot exist in any other form, then asking "Should I value consequentialism?" would be as strange as asking "Should bees swarm?" or even "Should the Earth revolve around the Sun?"
This is a property of morality that emerges from the collective, not something the members of the collective directly decide upon.

3. When an individual decides to go along with what seems moral, you COULD accuse that person of committing a naturalistic fallacy.

However, I am not the one committing one merely by pointing that out. Nor am I advocating that everyone should do that. I am merely making the observation that morality works that way.

The problem here is that you cannot demonstrate that "morality stabilizes on a particular value". Well, I suppose you could take a vote or do a survey, but even then you haven't demonstrated what you set out to demonstrate, which is an objective morality, and that presumably requires an objective value.

I've received J. L. Mackie's book, Ethics: Inventing Right and Wrong, in which he asserts in the first line:

There are no objective values.

If you agree with that, then it is difficult to see how you can arrive at an objective morality. But it is not clear that you can demonstrate any objective values.
 
Firstly, 'philosophical thinking' is not limited to moral scepticism. There have been philosophers making arguments similar to yours for centuries, and they are very much philosophical arguments. You're already using their terms: 'consequentialism', 'utilitarianism', etc.

There seems to be a tendency to say, 'move over philosophy, it's time for science to answer these questions', which is informing your whole argument.

This tendency belies a mistaken assumption which only serves to display naivety about the subject matter, and it smacks of arrogance. It is extremely reminiscent of the Creationist tone when interjecting into debates on evolution. I'm sorry if these words sound harsh, but that is how I see it.

Whether you like it or not, moral scepticism simply is the more 'scientific' and less 'wooish' position in moral philosophy. Science can help us to accurately describe the world, and it can help us to get what we want, but when all the facts are known, there will still be moral disagreements. That is why the Moral Error theorist says that normative statements are all false: ultimately, there is no objective property of the universe that can answer these questions.

Yes, it is perfectly possible to start with a tautological moral axiom such as 'well-being is good and should be maximised', and then use science to help us work out how to maximise well-being, but that axiom is neither novel nor based on logic or evidence. It is 'science after the (assumed) fact'. Science after the fact cannot lend credibility to the claim that the 'fact' in dispute (in this case, the founding moral axiom) is 'scientific' knowledge.

It is a preference, among competing preferences.

Yes, a lot of people hold that preference, but the fact that a lot of people hold the same opinion does not make it scientific truth without some underpinning in reality which is knowable or measurable in some way. What moral realists have consistently failed to do is describe the way in which that 'moral truth' is knowable or verifiable.

If I assume that 'fairies are real' is a true statement, that doesn't make a 'science of fairies' respectable, does it? It doesn't make 'fairy belief' a more scientific view than 'fairy scepticism', even if a lot of people believe in fairies and act as if they were real.

So, the default position for sceptics should be some form of moral scepticism, on the basis of consistency. If we look at moral claims through the lens of reason and evidence alone, we will find that all of them are false.

The moral sceptic would say that murder is not inherently 'wrong'. Yes, it might be a behaviour that many people (including the moral sceptic) find abhorrent. Yes, there might be an easily made argument against murder that many people find compelling. But the 'wrongness' in this case refers to a preference, or a group of preferences; in other words, subjective opinion. In our everyday discourse, however, we treat moral statements such as 'murder is wrong' as if they held some undeniable truth about the matter, which must in some way be discoverable by reason or evidence. Yet if I asked you to prove to me that murder was wrong, using reason and evidence only, and without reference to a tautological moral axiom that you have simply assumed because it is your preference, you would be unable to do so.

That is the essence of Moral Error theory. The 'error' is the idea that these notions of right and wrong, good and bad, refer to some kind of truth that is knowable or verifiable in some way: a truth not derived from our preferences, which are the products of our genes, our environment, and human invention.

And I'm not a Moral Error theorist... so that was some heavy-lifting devil's advocacy, and I hope you appreciate it.

This was a great post!
 
