What should Morals and Ethics be?

Every discussion could have the same problem if one side wanted it to.

Again, we could be having the exact same discussion about building bridges, where every discussion about whether to build a suspension bridge or a truss bridge over the river gets intertwined with someone wanting to talk about metaphysical proof of the goodness of bridges... we just don't.

It's the "Talk About God" problem again. "This topic is different so we have to talk about it differently and my proof of this is that I'm demanding we talk about it differently" and then it becomes self feeding, we talk about it differently for so long that we can't get back.

To use your chess analogy, this discussion is akin to "what game should we play?" Some are suggesting that different people have different answers to that question and no one has an objective way to figure out who is right.
 
To use your chess analogy, this discussion is akin to "what game should we play?" Some are suggesting that different people have different answers to that question and no one has an objective way to figure out who is right.

Yeah, except some people are going "Hey, here's a game: I bash your head in with a rock," and when you go "No, I don't want to play that game," you still get "BUT THAT'S NOT OBJECTIVE! NOBODY CAN SAY WHAT GAME TO PLAY!"

Again, as in all discussions of morality/ethics, we get to the purely argumentative "Now give me a reason that will satisfy me not to be a psychopath, and that your pain/suffering is something I should care about at all" and just... go nowhere.

If the questions have real world consequences, the questions have to have answers.
 
To use your chess analogy, this discussion is akin to "what game should we play?" Some are suggesting that different people have different answers to that question and no one has an objective way to figure out who is right.

And others come right back and say that there still must be one truly best game and that can be determined using science. (Obviously, it must be a game where everyone wins and no one loses since losing leads to suffering.)
 
Joe, you're still missing the point.

Your bridge building analogy is entirely unlike what we're trying to say. At the point you're building a bridge, you've already made the subjective decision (the value judgement).

The more analogous question is should the bridge be built? And in order to answer that, you have to define what you value. Obviously you don't just build a bridge in a random location for no reason, it's built for a purpose. And that purpose has to be defined.

Let me try a better example. It's greatly simplified, but I'm hoping it will get across the point we're trying to make.

You're a judge in Exampleland. The King has decreed that one convicted criminal will be executed, and tasks you with choosing which one. Criminal A is a murderer who killed 8 victims. Criminal B is a child molester who molested 15 children. Criminal C is a contractor who used substandard materials and cut corners, resulting in the collapse of a building that killed 50 people. Which one should be executed?

There's no way to answer that without first setting values on the different things. Which is worth more, a life or a child's innocence? Is taking life intentionally the same as taking it through neglect? Or personally versus accidentally, even if you should have known better? Science can't answer these questions until you decide on what the goal will be.
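To put the same point in more mechanical terms, here's a toy sketch in TypeScript. Every name, number, and weight below is made up purely for illustration; the point is just that the "answer" falls straight out of whichever weights you choose, and the facts alone never pick the weights for you.

```typescript
// Toy sketch only: a "value system" here is nothing but a set of weights.

type Criminal = {
  name: string;
  livesTaken: number;
  childrenHarmed: number;
  intentional: boolean;
};

const criminals: Criminal[] = [
  { name: "A (murderer)",       livesTaken: 8,  childrenHarmed: 0,  intentional: true },
  { name: "B (child molester)", livesTaken: 0,  childrenHarmed: 15, intentional: true },
  { name: "C (contractor)",     livesTaken: 50, childrenHarmed: 0,  intentional: false },
];

type Values = { perLife: number; perChild: number; intentMultiplier: number };

// Returns whichever criminal scores "worst" under the chosen weights.
function worst(values: Values): string {
  const score = (c: Criminal) =>
    (c.livesTaken * values.perLife + c.childrenHarmed * values.perChild) *
    (c.intentional ? values.intentMultiplier : 1);
  return criminals.reduce((a, b) => (score(b) > score(a) ? b : a)).name;
}

// Same facts, different weights, different "right answer":
console.log(worst({ perLife: 1, perChild: 1,   intentMultiplier: 1 }));  // C (contractor)
console.log(worst({ perLife: 1, perChild: 1,   intentMultiplier: 10 })); // B (child molester)
console.log(worst({ perLife: 1, perChild: 0.5, intentMultiplier: 10 })); // A (murderer)
```

Nothing in the data tells you which of those three weightings is the correct one; that's the part science can't supply.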

Just like you can't answer whether or not you should build a bridge until you decide what your values are, so you can then evaluate whether the bridge will support those values or not.

And that isn't hair-splitting, that is the fundamental problem with moral debate and moral arguments. There is no objective answer. You have to choose a value system to be able to answer any question that asks whether one should or should not.

And the questions do have answers, they just aren't objective answers. And if someone has a value system different from yours, then they will never agree, because there isn't just one answer. You can argue consequences, you can try to appeal to the values they do have, but you can't scientifically prove that their values are wrong in a moral sense. You can only show the consequences. And the consequences only matter if they either support or detract from the values one starts with.

Your game example is a perfect one, frankly. No one can say what game you should play, except those involved. No one is disputing that you don't want to play that game; no one is saying you can't refuse. What we ARE saying is that you can't provide a logical or scientific argument to prove the game is wrong, without starting from a set of values.
 
Every discussion could have the same problem if one side wanted it to.

Again, we could be having the exact same discussion about building bridges, where every discussion about whether to build a suspension bridge or a truss bridge over the river gets intertwined with someone wanting to talk about metaphysical proof of the goodness of bridges... we just don't.

Again, because the two are fundamentally different. The bridge's existence doesn't depend on your perspective.

The more analogous question is should the bridge be built?

Yes. Yes. Absolutely this.
 
Joe, you're still missing the point.

I have a different opinion. Don't act like that means I'm not listening.

Your bridge building analogy is entirely unlike what we're trying to say. At the point you're building a bridge, you've already made the subjective decision (the value judgement).

Well yeah if I don't agree the question is different, I'm not allowed in the discussion, and if I do agree the question is different, I'm trapped Jabba-style into already agreeing you're correct before the conversation even starts.

The more analogous question is should the bridge be built? And in order to answer that, you have to define what you value. Obviously you don't just build a bridge in a random location for no reason, it's built for a purpose. And that purpose has to be defined.

Yeah. We do that. All the time. And nobody "and why do that? Okay then why do that? Okay what's your reason for doing that?" us into nothingness when we do.

Again for all the "But it's different" special pleading it's not.

Here, I'll show you.

"Let's build a bridge."
"Why?"
"Because the the shopping is one side of the river and housing district is on the other side."
"Why does that matter?"
"Because building a bridge will allow people to travel from their houses to the stores in much less time."
"Why is that better?"
"Because it will save time and resources"
"And why is that better?"
"Because it will free up time and resources to spend on other things."
"And why is that better?"

And so on and so forth. That discussion is absolutely as intellectually valid. It's not any more or less absurd.

You're a judge in Exampleland. The King has decreed that one convicted criminal will be executed, and tasks you with choosing which one. Criminal A is a murderer who killed 8 victims. Criminal B is a child molester who molested 15 children. Criminal C is a contractor who used substandard materials and cut corners, resulting in the collapse of a building that killed 50 people. Which one should be executed?

I don't do trolley problems.

There's no way to answer that without first setting values on the different things.

But NOBODY is setting these mythical values they start from. Since evidence and logic and reason aren't allowed when discussing them, because "they're subjective," everyone just apparently starts at some random point and goes from there.

Just calling them "values" doesn't make "totally random context-less non-falsefiable intellectual starting points" a valid concept.

Just like you can't answer whether or not you should build a bridge until you decide what your values are, so you can then evaluate whether the bridge will support those values or not.

Then how did all these bridges get built? If we can't do X until we do Y, but can't do Y because Y is defined as having no acceptable answer, did all these bridges I see in reality just spring from the Earth?

How do we decide to do anything? Why don't we spend all day locked in this same "Turtles All the Way Down" recursive question overload?

For all the "We can't define values" we seem to get along just fine whenever the question is artificially crowbarred into the discussion.
 
Again, because the two are fundamentally different. The bridge's existence doesn't depend on your perspective.


Yes. Yes. Absolutely this.

Okay then how do bridges get built? How do we ever decide to build a bridge?

If the "Why build a bridge" question is the same/equivalent as a moral/ethical one then just use whatever mental process or framework we use to decide to build bridges then.
 
I have a different opinion. Don't act like that means I'm not listening.

That's not what "missing the point" means.

Well yeah if I don't agree the question is different

You don't think that whether or not you should build a bridge is a different question to which kind of bridge you should build?

Again for all the "But it's different" special pleading it's not.

Again for all the "it's not", it is.

"Let's build a bridge."
"Why?"
"Because the the shopping is one side of the river and housing district is on the other side."
"Why does that matter?"
"Because building a bridge will allow people to travel from their houses to the stores in much less time."
"Why is that better?"
"Because it will save time and resources"
"And why is that better?"
"Because it will free up time and resources to spend on other things."
"And why is that better?"

More on that later.

Just calling them "values" doesn't make "totally random context-less non-falsefiable intellectual starting points" a valid concept.

Nobody said that.

Then how did all these bridges get built?

Because most people will be convinced by the exchange you posted above, way before it gets to the end.
 
If the "Why build a bridge" question is the same/equivalent as a moral/ethical one then just use whatever mental process or framework we use to decide to build bridges then.

The problem is that, like morality, there is no one objective decision making process. Sometimes it is because the citizens vote in someone who promises to get a bridge built. Sometimes it is because a lobbyist's brother-in-law is a bridge builder.
 
The problem is that, like morality, there is no one objective decision making process. Sometimes it is because the citizens vote in someone who promises to get a bridge built. Sometimes it is because a lobbyist's brother-in-law is a bridge builder.

Okay and?

A process doesn't have to be perfect to be useful. This quest for some perfect "morality" that works in every possible situation, including hypothetical ones specifically designed to "trip up" the process, confounds me.

Again there might not be "one objective decision making process" but that doesn't stop us from... usually building the bridges we need/want built.
 
But this exact same kind of hair-splitting is what stalls out real-world discussions as well.

Okay and?

A process doesn't have to be perfect to be useful. This quest for some perfect "morality" that works in every possible situation, including hypothetical ones specifically designed to "trip up" the process, confounds me.

Again there might not be "one objective decision making process" but that doesn't stop us from... usually building the bridges we need/want built.

There! That is what we've been trying to say.

There's no objective answer; science can't prove "should". That doesn't mean there isn't an answer.

What trips up the discussions is when someone insists that their view is objectively right; that others must obviously value what they value in the same way. If you approach it as if there is an objective answer, you will fail, because you can't prove someone should value something that they don't. You can appeal to the values they do hold, though, to show why your way is better.

On your bridge example, let's expand a bit further. Group A wants to build the bridge between the housing area and the shopping center. Group B wants to build the bridge between the housing area and the city center, where the majority of residents work. Which one is "better" can't be answered without deciding how much you value the increased ease of shopping versus how much you value the reduced time to get to work. And neither is necessarily wrong.

So to get things done, a compromise has to be reached, somehow, between the two groups. Perhaps group A can show data that the increased financial transactions at the shopping center will produce enough extra revenue to build a bridge to the city center in three years. OR group B can show data proving that they could build two smaller bridges, one in each location, and both get some of what they want. Those aren't proving anything about which value is better, or morally right. That's appealing to the values the other side has.

Now the bridge example is fairly concrete, and easily answered even with compromise. What makes morality trickier is not only do people have different values, but they put different weights on what they value. You can prove a certain moral or ethical system will reduce violence, or increase trade, or promote knowledge, or whatever. But if Person A values knowledge over wealth, and Person B values wealth over peace, and Person C values peace over everything, there's no scientific way to say who's right or wrong.

It's not hair-splitting, it's the nature of the beast.

And no one has said they're "totally random context-less non-falsifiable intellectual starting points", any more than the axioms of mathematics are. Arbitrary does not mean random. Neither does subjective. No one said they're context-less. Those are all your misunderstandings, which we've been trying to explain.

They are non-falsifiable, though, because the basic values are assumptions. But that doesn't make them random, or useless, or anything else.
 
There! That is what we've been trying to say.

There's no objective answer; science can't prove "should". That doesn't mean there isn't an answer.

What trips up the discussions is when someone insists that their view is objectively right; that others must obviously value what they value in the same way. If you approach it as if there is an objective answer, you will fail, because you can't prove someone should value something that they don't. You can appeal to the values they do hold, though, to show why your way is better.

On your bridge example, let's expand a bit further. Group A wants to build the bridge between the housing area and the shopping center. Group B wants to build the bridge between the housing area and the city center, where the majority of residents work. Which one is "better" can't be answered without deciding how much you value the increased ease of shopping versus how much you value the reduced time to get to work. And neither is necessarily wrong.

So to get things done, a compromise has to be reached, somehow, between the two groups. Perhaps group A can show data that the increased financial transactions at the shopping center will produce enough extra revenue to build a bridge to the city center in three years. OR group B can show data proving that they could build two smaller bridges, one in each location, and both get some of what they want. Those aren't proving anything about which value is better, or morally right. That's appealing to the values the other side has.

Now the bridge example is fairly concrete, and easily answered even with compromise. What makes morality trickier is not only do people have different values, but they put different weights on what they value. You can prove a certain moral or ethical system will reduce violence, or increase trade, or promote knowledge, or whatever. But if Person A values knowledge over wealth, and Person B values wealth over peace, and Person C values peace over everything, there's no scientific way to say who's right or wrong.

It's not hair-splitting, it's the nature of the beast.

And no one has said they're "totally random context-less non-falsifiable intellectual starting points", any more than the axioms of mathematics are. Arbitrary does not mean random. Neither does subjective. No one said they're context-less. Those are all your misunderstandings, which we've been trying to explain.

They are non-falsifiable, though, because the basic values are assumptions. But that doesn't make them random, or useless, or anything else.

Morals are always a product of information and values. Science is a source of information and nothing more. I'm in favor of promoting science as it offers our best source of accurate information.

I think we SHOULD or OUGHT to value increasing overall well-being and personal freedom. Well-being doesn't necessarily mean a reduction in suffering or an increase in happiness, and personal freedom doesn't mean you can do whatever you please. And there is no question that at times these values can conflict. So it's not always easy to solve that dilemma.

But of course, this is a reflection of my values.
 
You want me not to kill millions of people, fine. But you want me to refrain, without you having to come over here and physically prevent me. You want me to refrain, without you having to figure out some way to make it unprofitable for me. You want some rational argument, that will convince me to refrain without coercion, and to my own detriment. But you have no such argument. All you have, in the end, is a weak-ass attempt to shame me with your expressions of frustration.

But I am a moral superman. I am not concerned with your "shame". Your frustration is your problem, not mine. If you have no rational argument, then you have no standing. If you cannot at least appeal to practicality and profitability, then you aren't even trying.

Exactly. Grève générale, insurrectionnelle et expropriatrice! (A general, insurrectionary, expropriating strike!)
 
Yeah. We do that. All the time. And nobody "and why do that? Okay then why do that? Okay what's your reason for doing that?" us into nothingness when we do.

Again for all the "But it's different" special pleading it's not.

Here, I'll show you.

"Let's build a bridge."
"Why?"
"Because the the shopping is one side of the river and housing district is on the other side."
"Why does that matter?"
"Because building a bridge will allow people to travel from their houses to the stores in much less time."
"Why is that better?"
"Because it will save time and resources"
"And why is that better?"
"Because it will free up time and resources to spend on other things."
"And why is that better?"

And so on and so forth. That discussion is absolutely as intellectually valid. It's not any more or less absurd.

I don't think it's absurd: we actually follow that logic every time we build a bridge. We just come to a point where there's some base value that we all agree on and so don't need to ask why anymore. If you and I both agree that saving time is good, there's no reason to ask why it's better, but if we don't yet agree, you might ask me why I want to do that.

I don't need to ask why I want to have access to food and shelter, but if I did, it would probably be because everything that I want is predicated on having access to those things before I can start to work toward my other goals. The next question, "Why do your goals matter to you?", is nonsensical: there's no need to justify the blatant fact that my goals matter to me; that's what makes them my goals.

Moral questions, though, are faced with a further and more difficult question: "Why do your goals or preferences matter to me?" And the answer isn't tautological this time. It's not necessarily true that your goals should or do matter to me. If they do, then okay, we don't need to worry about this question and we can move on. If they don't, at least not for their own sake, then we are stuck with negotiation.

Okay then how do bridges get built? How do we ever decide to build a bridge?

If the "Why build a bridge" question is the same/equivalent as a moral/ethical one then just use whatever mental process or framework we use to decide to build bridges then.

To the extent that the questions are the same, it's solvable because people have overlapping goals/preferences.
 
Someone who answers "Why did Rome fall?" with "Because it existed. You see, it was necessary for Rome to exist before it could fall" is just full of ****. That's not what we're asking when we ask "Why did Rome fall?" We are looking for specific causes, not any old necessary condition.

Similarly, someone who answers "What is the basis of morality?" with "Empathy is the basis of morality. You see, without empathy, morality could not exist" is full of ****. When we ask for a basis for morality, we are not asking for any old necessary condition. We are specifically looking for a logical and philosophical foundation.

I have no idea why you keep quoting my words back at me, given that I am trying to make it clear that these are totally different things that you are trying to conflate. I did get a laugh out of "sic", though.


Well, that's just wrong. Compassion, guilt, shame, etc. are all emotions that can motivate action. You're also trying to smuggle in morality by calling empathy a "moral feeling". What makes it so? And it does not "surpass the test of Hume's guillotine". You need an account for why we ought to be empathetic before you can do that.


It's clear that you plucked two paragraphs from totally different sections of that entry and dishonestly presented them as if one were an answer to the other.

I am familiar with Hume's ethics. He does not think that empathy does what you think it does.


This is not true. Ought implies an obligation, not merely "this is good." When I eat ice cream I might say "this is good", I might feel a positive emotion, but that is not equivalent to saying "I ought to eat ice cream".

In addition to waving in the direction of "positive emotions" as if they were synonymous with morality, you want to say that empathy engenders "disinterested action". If the reason I act is because I am feeling your emotional state, obviously my action is not disinterested, but self-serving. You are elaborating an especially bad version of hedonistic egoism, and at the same time declaring victory over the is-ought problem. If only you knew what you were talking about, you could find this embarrassing (another emotion that can motivate action, action like reading a book).


He does not give a solution to the is-ought problem.

He does develop an ethics.

To develop an ethics is not to provide a solution to the is-ought problem. To resolve the is-ought problem you need an account of normative truth. You do not need an account of normative truth to develop an ethics.

Hume solves the is-ought problem by turning it into an emotional impulse. The obligation would be nothing more than a confused formulation of that impulse. It is a false problem that dissolves with a correct analysis of language.

To pretend that a selfless action is not selfless because I find pleasure in it is absurd. It makes it impossible to distinguish between actions that focus on helping others and actions that only seek personal satisfaction. If you cannot distinguish between these two manifestly different facts, you have a problem with the way you speak.

If you want to revive the concept of ought, you should show what it can be based on other than empathy. I look forward to hearing from you.

I would also like to know which of Hume's texts contradicts what I say.

Thank you.
 
That is tribalistic. For example, there's no reason to exclude non-human animals from consideration that doesn't amount to tribalism.

But empathy is quite bad at the job of turning our concern to other men. Instead, we feel a great deal of empathy for people who are socially proximate, and very little for some poor beggar on the other side of the world. It's almost like it's something we developed when we were living in small kinship groups.


Well, yes, but you will then be saying very different things.

There are a million and one necessary conditions for engaging in moral reasoning. For example, the universe has to exist. And you have to be alive. But it would be foolish to say "The universe is the basis of morality" or "Being alive is the basis of morality." You are intentionally conflating different ideas in order to try to rescue a failed argument.


Neither is empathy. Earlier you intimated that we might be too empathic or not empathic enough. In what terms would you make that argument? What is the good you seek in hoping we will be ideally empathic? The answer can't be "Empathy!" which means there is some more fundamental value at work here, and empathy is therefore not the basis of your morality. Or you can keep insisting that it is, and flail around in the dark forever.


What you need is a normative basis that will make sense out of any of this. You can say "Empathy! Science! Serial killers!" but none of that does or can amount to morality.


No, you don't. You're still failing to appreciate what Hume means. You need an ought before you can get anywhere. Feelings are not normative. If I feel someone else's pain, that's just a declarative fact about the world. It does not imply that I ought to feel their pain.


This is gobbledygook. Hume does not present a solution to the is-ought problem.


This is you disingenuously retreating from what you initially claimed to a triviality (without even doing any work to establish that it's true).

I was explaining that you cannot use a remote condition as a cause ("generate" is your word). Causes must be conditions specific to a fact. Therefore, saying that "you have to be alive" might be a moral condition does not make sense. The conditions used as causes in an explanation must be specific. Empathy is specific; being alive is not.

Hume solves the is-ought problem by turning it into an emotional impulse. The obligation would be nothing more than a confused formulation of that impulse. It is a false problem that dissolves with a correct analysis of language.

To pretend that a selfless action is not selfless because I find pleasure in it is absurd. It makes it impossible to distinguish between actions that focus on helping others and actions that only seek personal satisfaction. If you cannot distinguish between these two manifestly different facts, you have a problem with the way you speak.

If you want to revive the concept of ought, you should show what it can be based on other than empathy. I look forward to hearing from you.

I would also like to know which of Hume's texts contradicts what I say.

Thank you.
 
Okay and?

A process doesn't have to be perfect to be useful. This quest for some perfect "morality" that works in every possible situation, including hypothetical ones specifically designed to "trip up" the process, confounds me.

Again there might not be "one objective decision making process" but that doesn't stop us from... usually building the bridges we need/want built.

Morality isn't all that different from other areas in this though.

Newtonian physics is useful. But through a process of exploring hypotheticals and creating experiments that are waaay outside of what we'd encounter naturally, we build a much better physics.

And it's important to note that the ways in which moral systems are imperfect DO have huge and terrible consequences. Sure, it gets tangled up in mistakes of fact and dishonesty, but at the root of most wars, political conflicts, and a fair share of couples' arguments are moral systems which are okay for many day-to-day things but clash horribly where there is disagreement.

Just like the Higgs boson (even though we never seem to encounter one in the wild) can tell us about how matter works and what to expect in the universe, so far-fetched thought experiments can be a window into what's really going on in a moral system. Just because you use something extreme to reveal and highlight a flaw doesn't mean that flaw is free of consequences on a simpler level.

I use a similar technique debugging code. I used to make simple games, written in ActionScript. I'm no professional, but I enjoyed it. When I made a new chunk of code and it didn't work right, or I didn't know if it would work correctly, I had a testing method. I'd set it up to show all the relevant variable outputs numerically on screen, and I'd hit it with a bunch of inputs in the expected range, then at the edges, then a bunch of inputs WAAAY outside of the expected range. It was fairly common that when a piece of code wasn't working right, that last group showed me why.

"Oh, I thought this was going generate random numbers for the placement on the X axis, but now that I'm seeing a million more iterations than I would use naturally, I see it's following this specific pattern. Oh I forgot a parenthesis around the random function!"

That **** happened ALL the time.
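Here's a minimal sketch of that kind of harness, in TypeScript rather than ActionScript. The wrap-around function and its bug are made up for illustration; they're not the actual parenthesis bug from the story.

```typescript
// Hypothetical example: wrap a sprite's x position around the screen width.
// Intended behaviour: always return a value in [0, width).
function wrapXBuggy(x: number, width: number): number {
  return x % width; // bug: % returns negative results for negative x
}

function wrapXFixed(x: number, width: number): number {
  return ((x % width) + width) % width;
}

const width = 800;

// The three batches described above: in the expected range, at the edges,
// and WAAAY outside the expected range.
const inputs = [10, 400, 799, 0, 800, -1, 5000, -5000, 1000000];

for (const x of inputs) {
  console.log(`x=${x}\tbuggy=${wrapXBuggy(x, width)}\tfixed=${wrapXFixed(x, width)}`);
}
// The in-range inputs all look fine; the bug only shows up at -1 and -5000,
// which is why the out-of-range batch is usually the one that reveals
// what's actually going on.
```

Same move as the far-fetched thought experiments: the extreme case isn't the case you care about, it's the case that makes the hidden pattern visible.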
 
TL;DR: Going back to the OP's question of what the rules should be, I think there is absolute morality; codes of ethics (including societal laws) overlap with, but do not perfectly correspond with, morality.

Non-TL;DR:

We may also not always know what is moral. Slavery of innocents (slavery as punishment for a crime is more complex) is and was immoral, but ethically, was/is considered proper in many societies, and many people thought it was moral (or convinced themselves it was: dictated biblically, warranted because might makes right, whatever) and didn't think it was immoral. I think they were wrong, that there is an absolute morality, and that I might wrongly think some things that are ethical, and that I believe to be moral, are moral - when in actuality they're not. I'm not omniscient.

Note, while I am religious, I do think that some principles of morality may be found without religious belief. The golden rule and/or the Kantian ethical equivalent, etc. But how it is applied may be problematic. Some would say e.g.:

I am [insert religion], therefore everyone is free to follow the same religion. Or I am in a happy heterosexual marriage, therefore everyone else is free to be in a heterosexual marriage.

Others would see it more as, I am [insert religion], therefore the golden rule means everyone should be free to follow their choice of religion, or none, and change, as they wish. Or I am in a happy heterosexual marriage, therefore everyone else should be free to structure their relationships as they wish and try to be happy as I am.

That is, while I think some behaviour is immoral, my morality and the golden rule to me means that people should be free to pursue activities that I think are morally (absolutely) wrong. For instance, my values of free speech mean that I think the better moral view is to allow offensive speech to occur rather than punish it (with some limits e.g. defamation, direct immediate calls for specific violence, etc.) - even though I consider the offensive speech morally (absolutely) wrong, I think free expression an absolute moral good that requires the possibility of some immoral speech. And I want ethical rules (laws) to support that. Some others would argue that hate speech etc. should be punished, and maybe if there weren't such a history of punishing speech that violated societal norms - including at various times and places preaching religious freedom and tolerance, gender equality, anti-slavery, etc. - I'd be more open to such arguments.
 
TL;DR: Going back to the OP's question of what the rules should be, I think there is absolute morality; codes of ethics (including societal laws) overlap with, but do not perfectly correspond with, morality.

Non-TL;DR:

We may also not always know what is moral. Slavery of innocents (slavery as punishment for a crime is more complex) is and was immoral, but ethically, was/is considered proper in many societies, and many people thought it was moral (or convinced themselves it was: dictated biblically, warranted because might makes right, whatever) and didn't think it was immoral. I think they were wrong, that there is an absolute morality, and that I might wrongly think some things that are ethical, and that I believe to be moral, are moral - when in actuality they're not. I'm not omniscient.

Note, while I am religious, I do think that some principles of morality may be found without religious belief. The golden rule and/or the Kantian ethical equivalent, etc. But how it is applied may be problematic. Some would say e.g.:

I am [insert religion], therefore everyone is free to follow the same religion. Or I am in a happy heterosexual marriage, therefore everyone else is free to be in a heterosexual marriage.

Others would see it more as, I am [insert religion], therefore the golden rule means everyone should be free to follow their choice of religion, or none, and change, as they wish. Or I am in a happy heterosexual marriage, therefore everyone else should be free to structure their relationships as they wish and try to be happy as I am.

That is, while I think some behaviour is immoral, my morality and the golden rule to me means that people should be free to pursue activities that I think are morally (absolutely) wrong. For instance, my values of free speech mean that I think the better moral view is to allow offensive speech to occur rather than punish it (with some limits e.g. defamation, direct immediate calls for specific violence, etc.) - even though I consider the offensive speech morally (absolutely) wrong, I think free expression an absolute moral good that requires the possibility of some immoral speech. And I want ethical rules (laws) to support that. Some others would argue that hate speech etc. should be punished, and maybe if there weren't such a history of punishing speech that violated societal norms - including at various times and places preaching religious freedom and tolerance, gender equality, anti-slavery, etc. - I'd be more open to such arguments.

There's an objective morality but some people may hold another? How does that work? How do you determine what's good or bad, exactly?
 
