
Materialism and morality

Are you guys familiar with Alonzo Fyfe, the Atheist Ethicist? (Desire utilitarianism.) Some day I might go buy his book. He used to have those essays on his blog, but even so, you can get his overall approach through many of the essays that are there.


ACK! Natural Law! It burns!

He's arguing that rights exist independent of the ability to defend/assert those rights.

I doubt he could give a list.
 
I'm partial to logic and philosophy myself. One of the things I've found myself interested in lately is the logic of moral systems, and the question of whether it's possible to have a system of moral values based on pure logic, one that makes sense and has no contradictions or flaws. In other words, "is there a reason to be good?" This topic seems to keep coming up in the threads I participate in. Of course it's a purely philosophical topic, and not a scientific one.
This is the equivalent of the Holy Grail for philosophers, it supposedly exists but is somehow never found.

It should be obvious that there are no universal moral truths. Morals are based on individual opinions, and since we don't have a single collective human mind, moral truths are going to vary by individuals and cultures. The evolution of psychological, sociological, and cultural norms better explains moral positions than magical beliefs about the mind or the concept that there exists some universal morality.

I'm not much of a scientist, but I like to analyze and understand the logic behind people's ideas and beliefs. I guess I am a materialist, because I have trouble actively believing in something without evidence... but at the same time, I'm not sure if dualism is that extraordinary of a claim either. There is a certain logic to it, it's just that there is no real evidence, as far as I know. And I don't find it to be a very useful concept. But it's not crazy, either... it's an understandable belief.
Dualism is an outdated belief. It is a remnant of the belief that conscious thought was somehow separate from the physical brain. We know now that it is not. But a lot of people cannot get over the fact that, while it sometimes feels like thought processes exist outside the physical brain, they actually do not.
 
ACK! Natural Law! It burns!

He's arguing that rights exist independent of the ability to defend/assert those rights.

I doubt he could give a list.
'Rights' and moral truths exist. They exist in our genes and our brains. They just are not universal truths. But people have genetic instructions which produce certain behaviors that are apparent without being taught. Just as my dogs know the other dogs they imprinted on in their first year of life are 'family' and all the dogs they met after that year are 'not family', and just as they know people are not prey but little things that run away are, people have similar behavioral components that are built into our brains.

Normal healthy brains perceive empathy. They perceive fairness. We do not have to be taught these 'feelings'.

To that, social and cultural influences are added through nurture/experiences and a normal healthy brain ends up with a sense of right and wrong. It is neither mysterious nor magical. It is interesting, even fascinating, but not inexplicable.
 
'Rights' and moral truths exist. They exist in our genes and our brains. They just are not universal truths. But people have genetic instructions which produce certain behaviors that are apparent without being taught. Just as my dogs know the other dogs they imprinted on in their first year of life are 'family' and all the dogs they met after that year are 'not family', and just as they know people are not prey but little things that run away are, people have similar behavioral components that are built into our brains.

Normal healthy brains perceive empathy. They perceive fairness. We do not have to be taught these 'feelings'.

To that, social and cultural influences are added through nurture/experiences and a normal healthy brain ends up with a sense of right and wrong. It is neither mysterious nor magical. It is interesting, even fascinating, but not inexplicable.

You can argue for a common morality based on empathy; I don't dispute that.

But morality is not rights and rights are not morality.

Fyfe is arguing that rights are things that cannot be taken away. Even if the enforced law of the land is to take dissidents, cut out their tongues and sew closed their mouths, they still have the right to free speech.

I think that's nonsense.

If we are to use your formulation, the right to free speech still exists within the dissidents' genes and brains.

The idea of the right might exist in the brain, but the right itself does not, unless you choose a truly odd definition of 'right'.

The predisposition towards desiring rights might exist in the genes, but rights themselves can only exist in practice.

If you are shot for worshiping your god, you don't have the right to practice your religion, regardless of your ideas of worship or your genetic disposition towards religion.

Your dogs know you are family. Does that give them any rights or do you choose which rights to give them?

eta:

Further, if rights are something inherent and inalienable to the person, what exactly are they?

I say a right is a permission or privilege whose importance society has acknowledged and which it has pledged to defend in most circumstances. It's easy for me to list rights and defend distinctions of rights/not-rights.

Free speech is a right in the US, society accepts as normative that I should be able to perform that action and defends me in most cases against attempts to prevent me.

The selection of drugs is not a right. Society does not permit me to choose which medical and recreational drugs I can purchase.

Fortunately, many more things are rights than not-rights in USA society. Just walking down the street is probably a right. If someone tries to detain me and prevent me from walking, he/she is probably violating some ordinance about blocking a public way. Only things explicitly prohibited or restricted tend to be not-rights.
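As an aside, that "only things explicitly prohibited tend to be not-rights" rule is basically a default-permit policy, and a toy sketch makes it concrete (the action names and the restricted list below are invented purely for illustration):

```python
# Toy sketch of the "default-permit" conception of rights described above:
# an action counts as a right unless society has explicitly prohibited or
# restricted it. The action names here are invented for illustration.

EXPLICITLY_RESTRICTED = {
    "choose any recreational drug",  # restricted by drug laws
    "block a public way",            # restricted by local ordinance
}

def is_right(action: str) -> bool:
    """Default-permit: an action is a right unless explicitly restricted."""
    return action not in EXPLICITLY_RESTRICTED

for action in ["speak freely", "walk down the street",
               "choose any recreational drug"]:
    print(f"{action!r}: {'right' if is_right(action) else 'not a right'}")
```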

There are grey areas, certainly, but I can make an argument based on something.

If rights are instead based on the mind and genes, how do we know what are real rights? Is it anything we feel like we should have? Anything that X% of the population feels they should have? What does it mean to have a right if it cannot be expressed? I think these are more problematic than any issues with my conception.
 
...
Fyfe is arguing that rights are things that cannot be taken away. Even if the enforced law of the land is to take dissidents, cut out their tongues and sew closed their mouths, they still have the right to free speech.

I think that's nonsense.
Obviously, since rights are taken away all the time.

...If we are to use your formulation, the right to free speech still exists within the dissidents' genes and brains.

The idea of the right might exist in the brain, but the right itself does not, unless you choose a truly odd definition of 'right'.
I don't disagree, but I also don't find the philosophical version of 'rights' to be of any use. Just as one has an emotional reaction to 'fairness' (and it can be demonstrated that non-human primates react to fairness or unfairness), one either feels they have a right or feels they grant a right. These are emotional reactions. Emotions are brain functions.

People who have specific kinds of brain damage can cry inappropriately. Anger can be generated because something is perceived as unfair. A person might react with anger if a perceived right is not granted, and so on.

When I hear people having discussions about whether things like a right or love or other supposedly intangible things exist or do not exist, I view the discussion as unnecessarily contrived. What is the point? It doesn't tell us anything about these intangible things.

OTOH, I do find it useful to describe how and why humans and other animals experience love or fairness or a sense of entitlement. I find it useful to explore just what is going on in our brains that makes us want fairness or revenge. It's fascinating that a dog might expect fairness. I await replication of the one study showing dogs do expect fairness before concluding this is correct, but I see no reason it would not be.



...The predisposition towards desiring rights might exist in the genes, but rights themselves can only exist in practice.
Which in turn results from human actions, not from some magical thing that exists or does not exist outside ourselves.

...
Further, if rights are something inherent and inalienable to the person, what exactly are they?
Outside of the emotional reaction to the perception of rights, they don't exist.

...
If rights are instead based on the mind and genes, how do we know what are real rights? Is it anything we feel like we should have? Anything that X% of the population feels they should have? What does it mean to have a right if it cannot be expressed? I think these are more problematic than any issues with my conception.
You are saying the brain-emotion function, which results in our perceiving that a right is deserved or that we are obligated to observe it, should follow specific rules. But the function is a lot more complex than that.

What should make you sad? Should we all experience sadness over the same things? Suppose I'm bothered by a puppy being killed and someone else lives in a culture where dogs are commonly eaten, so it doesn't make them sad at all?

Philosophizing about what a right is makes as much sense as trying to define what sadness is. You can easily describe the qualities or things which cause sadness, such as a sense of loss. But the specific things which make one feel loss are individual.

What can I say? I am a material girl.
 
Materialism cannot provide a logical foundation for morality.

This is because for morality to happen, one needs to be able to draw a line between people, who should be treated morally, and inanimate objects, which shouldn't be.

Since materialism is a monistic system, it cannot draw this line.


The criterion of complexity is ridiculous: a human being is more complex than a stone, true. But a robot, in theory, can be as complex as a human being. Still, that won't make it an object of morality. (Remember Asimov's first law?) The universe as a whole can be said to be as complex as a human being. Yet it is not a moral object. (Moral object = an object that should be treated morally.)

Replication is also a bad criterion: crystals are not objects of morality. Neither are computer viruses. Or memes. Or robots that can build other robots.


In fairness, any monistic system faces this problem, not just materialism. Systems that believe that "only spirit exists", or "everything is one", or "everything is god" have exactly the same problem. They are unable to draw the line.


----
To clarify, that was a philosophical point. I do not think that monists are less moral than non-monists, or that materialists are less moral than non-materialists. I do not know why this is so, but the facts show it is.


All I can say is you're so silly!

Why would ontology have anything to do with it? Morals are about economic reciprocity.

Souls and other dualist concepts are unneeded, but you have five pages of responses! Wow.
 
Originally Posted by JetLeg
I don't think that mind or feelings are independent, like ghosts. I think they are a different substance. It might be that the "substance" of subjective experience cannot exist without brains. But it is still a different type of substance.

I think that angels and demons are dependent upon the brain. I think they are a different substance. It might be that the "substance" of subjective experience cannot exist without brains. But it is still a different type of substance.


Or
I think that speed and handling are dependent upon a car. I think they are a different substance. It might be that the "substance" of speed and handling cannot exist without cars. But it is still a different type of substance.
 
AkuManiMani said:
There are very many questions that arise from all of these observations. What is it about these particular chemical interactions that causes them to have this property of "feeling"?

The way they interact with the whole system. A brain is just a computer, and a feeling is a program running on it.

I think that's a valid hypothesis. The only real problem is that, as of now, we can't really test it conclusively. At this point, there is no one who can definitively say they know how to create a "feeling" program. It will probably be a long time before that can be done, precisely because the problem isn't as simple as some AI researchers would like to believe.

AkuManiMani said:
Do they have this quality outside of the context of the brain -- say in an animal without a brain?

[...]

Does a single cell have any kind of subjective experience? For what is the brain, but a huge collection of such cells working in concert?


Of course not. "Within a brain". The definition of "brain", however, can be quite loose (computer, network, etc).

[...]

No, unless it consists of the required number of connections, feedback loops, and computational components.

That's just the thing. Your criteria of computational elements and feedback loops are all met in single-celled organisms -- in fact, they are essential to all life processes. Philosophically, what you're proposing is considerably more radical than you seem to realize. If one wants to accept the hypothesis that feeling is synonymous with computational feedback, then they must also take seriously the possibility that a plant or even a bacterium has feeling and is therefore conscious.


AkuManiMani said:
Does it come from them simply being organized in a certain way (i.e. context dependent)?

Please clarify.

Does feeling require "wetware" of some kind? Can the qualitative experience of specific feelings (say, sweetness, nausea, dizziness, etc.) be reproduced on a different medium than the organic molecules we are made of, or does it simply arise "magically" from a particular computational process on any medium? In other words, is feeling a special kind of physical phenomenon (i.e. based on some deep physical principle we don't know of yet), a purely organizational phenomenon (i.e. software), or some of both?

You mentioned that emotion is a kind of computation in a "brain-like" system. The only problem is that all the criteria you mentioned are met in every living organism. Either we must broaden our definition of consciousness to include every living thing and/or figure out some more specific criteria.

If it contains a brain-like structure, sure. But then it isn't really inanimate anymore, is it?

That's an interesting possibility. Of course, then we would have to rethink what it means to be alive. Living organisms, as we know them, have special thermodynamic properties that inanimate objects and dead organisms don't have; so anyone calling anything "animate" should probably take this into account.

As far as a brain-like structure goes, my guess is that in order to effectively simulate the brain [unless we can physically recreate biological structure] such a brain would have to be a software construct to give it sufficient plasticity.


AkuManiMani said:
What is the subjective experience of feeling, anyway?

An emergent phenomenon wherein a brain modifies behaviour within itself. Sentience is an emergent phenomenon, as are 'feelings'.

Ah, we know that we consciously modify our behavior based on what we feel, but is it possible to have an intelligent system that doesn't actually feel sensory input? If so, what is it that makes systems that do differ from ones that don't?


AkuManiMani said:
The understanding obtained from current neuroscience, while invaluable, is only surface knowledge of a much deeper mystery; it's barely scratched the surface. It's one thing to say that X class of chemical is associated with Y feeling, but quite another to explain why that is so, or even to help understand how there is such a thing as feeling in the first place. With that in mind, it's more than premature to conclusively say that feeling is identical with a certain class of chemical reactions. All we know is that there is a correlation; we don't know if it is a necessary correlation or even if it is a causal relation.

Of course it has only scratched the surface. The universe is a complex place. But at least we have knowledge, instead of arm-waving claims that "it's a special thing!".

Also, there is plenty of research which explains why specific chemicals have the reaction they do.

Feeling is special. In all the vastness of space, the only entities that we conclusively know possess it exist on this infinitesimal blue speck of dust we call the earth. There is not a single person alive today who can explain how neurophysiology produces subjective experiences like sweetness, bitterness, nausea, dizziness, redness, greenness, ugliness, silliness, or the whole panoply of subjective "nesses". There is also not a single person alive today who knows how to artificially recreate any of those qualitative experiences. The only arm-waving being done is by those claiming that they actually do know how to reproduce such things. All we have are the barest beginnings of some working knowledge of how it happens, and even that may be a generous estimate.

I think our attempts to recreate such things are comparable to the efforts of alchemists of ages past. They were essentially right in their belief that it's possible to transmute other elements into gold -- they just didn't know that such a thing is only accomplished in super-massive stars, or why this is the case. In hindsight, we know that their efforts were futile but, in principle, not necessarily impossible. That's pretty much the situation that AI researchers are in now, methinks. Our knowledge base simply isn't deep enough yet to allow us to accomplish the feat we wish to; namely, to create a feeling, conscious entity.

Consciousness is probably one of the deepest scientific mysteries facing us today -- it ranks up there with the origin of the big bang and the genesis of life. To wave it off as a simple technical feat that any current technician knows how to reproduce is beyond hubris -- it's downright absurd.


Wrong. We do know that it is a causal relationship. And I never said a feeling is identical with a certain class of chemical reactions, did I?

Yup.

Here:
Taffer said:
Or it could be that, after the scientific discovery, it was found that feelings are different interactions of chemicals within the brain.

and Here:
AkuManiMani said:
Is it just a very specific class of organic reactions that give rise to subjective experience? Does it really matter what chemicals are interacting?

Yes.

Edit: BTW, sorry for contributing to the derail, Jet.
 
I think that's a valid hypothesis. The only real problem is that, as of now, we can't really test it conclusively. At this point, there is no one who can definitively say they know how to create a "feeling" program. It will probably be a long time before that can be done, precisely because the problem isn't as simple as some AI researchers would like to believe.

Why do we need to create one to know something? Chemicals modify feelings. This is a fact. Further, we know how some of these chemicals interact with the physical brain. Moreover, we can reproduce our experiments and obtain consistent results. Either a) 'emotions' and 'feelings' and other such things are a direct product of the brain, or b) they are indistinguishable from being direct products of the brain.

That's just the thing. Your criteria of computational elements and feedback loops are all met in single-celled organisms -- in fact, they are essential to all life processes.

Perhaps. But notice I did not give specifics as to these elements? That is because we do not know them yet. Clearly a single-celled organism doesn't have feelings, as the brain is a multi-cellular structure. These computational elements are obviously going to be vastly more complex than what is possible in a single cell.

Philosophically, what you're proposing is considerably more radical than you seem to realize. If one wants to accept the hypothesis that feeling is synonymous with computational feedback, then they must also take seriously the possibility that a plant or even a bacterium has feeling and is therefore conscious.

It is not radical, and you do not have to take seriously those possibilities. All "my" hypothesis states is that computational elements and feedback loops are required. It says nothing of the necessary complexity of these elements. A single-celled organism is vastly less complex than a complex of many single-celled organisms acting as a network. To assume that one is synonymous with the other is silly.
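For what it's worth, here is a deliberately trivial sketch of what "computational elements and feedback loops" can mean at the lowest level (a thermostat-style toy of my own invention, nothing brain-specific): the output of the computation feeds back into the state it measures. The hypothesis turns on the complexity of the network, not the mere presence of a loop.

```python
# A deliberately trivial computational feedback loop: the decision
# (heating on/off) feeds back into the very quantity being measured.
# A brain would involve billions of such interacting loops, not one.

temperature = 15.0
SETPOINT = 20.0

for step in range(10):
    heating_on = temperature < SETPOINT          # the "computational element"
    temperature += 0.8 if heating_on else -0.3   # the decision feeds back
    print(f"step {step}: temp={temperature:.1f}, "
          f"heating={'on' if heating_on else 'off'}")
```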

Does feeling require "wetware" of some kind? Can the qualitative experience of specific feelings (say, sweetness, nausea, dizziness, etc.) be reproduced on a different medium than the organic molecules we are made of, or does it simply arise "magically" from a particular computational process on any medium? In other words, is feeling a special kind of physical phenomenon (i.e. based on some deep physical principle we don't know of yet), a purely organizational phenomenon (i.e. software), or some of both?

Based on current knowledge, I would say it is purely an emergent property, i.e. software. Naturally we don't know enough to say this with certainty, but there is no reason to suspect that consciousness, and all that comes with it, is anything other than physical and material in nature and cause.

You mentioned that emotion is a kind of computation in a "brain-like" system. The only problem is that all the criteria you mentioned are met in every living organism. Either we must broaden our definition of consciousness to include every living thing and/or figure out some more specific criteria.

Are they? I disagree. A brain is markedly more complex than a single-celled organism. Yes, we don't know nearly enough about this field, but why do you think that a brain and a single-celled organism have the same level of complexity?

That's an interesting possibility. Of course, then we would have to rethink what it means to be alive. Living organisms, as we know them, have special thermodynamic properties that inanimate objects and dead organisms don't have; so anyone calling anything "animate" should probably take this into account.

Accepted. Perhaps we should talk about sentience and non-sentience instead of animate and inanimate. This isn't really the topic of this discussion, though.

As far as a brain-like structure goes, my guess is that in order to effectively simulate the brain [unless we can physically recreate biological structure] such a brain would have to be a software construct to give it sufficient plasticity.

Perhaps. Perhaps not.

Ah, we know that we consciously modify our behavior based on what we feel, but is it possible to have an intelligent system that doesn't actually feel sensory input? If so, what is it that makes systems that do differ from ones that don't?

Define "feel" for me.

Feeling is special. In all the vastness of space, the only entities that we conclusively know possess it exist on this infinitesimal blue speck of dust we call the earth. The only arm-waving is the rather unjustified claim that, based on the very limited knowledge we have now, we can artificially re-create it.

I never said we could. I said we could affect feelings using understood physical interactions of chemicals. I said it is possible to create a machine that "feels". Where did I claim we already could do so?

There is not a single person alive today who can explain how neurophysiology produces subjective experiences like sweetness, bitterness, nausea, dizziness, redness, greenness, ugliness, silliness, or the whole panoply of subjective "nesses". There is also not a single person alive today who knows how to artificially recreate any of those qualitative experiences. The only arm-waving being done is by those claiming that they actually do know how to reproduce such things. All we have are the barest beginnings of some working knowledge of how it happens, and even that may be a generous estimate.

You are putting words in my mouth.

I think our attempts to recreate such things are comparable to the efforts of alchemists of ages past. They were essentially right in their belief that it's possible to transmute other elements into gold -- they just didn't know that such a thing is only accomplished in super-massive stars, or why this is the case. In hindsight, we know that their efforts were futile but not necessarily impossible. That's pretty much the situation that AI researchers are in now, methinks. Our knowledge base simply isn't deep enough yet to allow us to accomplish the feat we wish to; namely, to create a feeling, conscious entity.

Why are you discussing AI researchers? The possibility of artificially creating something which can 'feel' is unrelated to the material nature of those feelings.

Consciousness is probably one of the deepest scientific mysteries facing us today -- it ranks up there with the origin of the big bang and the genesis of life. To wave it off as a simple technical feat that any current technician knows how to reproduce is beyond hubris -- it's downright absurd.

And again, did I do that?

Yup.

Here:

Taffer said:
Or it could be that, after the scientific discovery, it was found that feelings are different interactions of chemicals within the brain.

and Here:

AkuManiMani said:
Is it just a very specific class of organic reactions that give rise to subjective experience? Does it really matter what chemicals are interacting?

Yes.

Edit: BTW, sorry for contributing to the derail, Jet.

You misunderstand my statements. Chemicals affect feelings. They can create feelings, modify feelings, etc. Antidepressants are the example I have given. This is evidence that feelings are the result of physical process within the brain. Also, what I meant was that I never claimed specific knowledge of which chemicals create which specific feelings, as you seemed to be suggesting.

Please understand. I was asked to provide a definition of "feeling", and I gave one that is supported by evidence. I never claimed perfect knowledge. I never claimed perfect understanding. However, there is no reason to think that consciousness and feelings are a special phenomenon that requires a 'magical' explanation. There is no reason to think they are not simply emergent properties of computing matter.
 
Are you guys familiar with Alonzo Fyfe, the Atheist Ethicist? (Desire utilitarianism.) Some day I might go buy his book. He used to have those essays on his blog, but even so, you can get his overall approach through many of the essays that are there.

Nope. I'll definitely make a note of it, thanks.

It should be obvious that there are no universal moral truths. Morals are based on individual opinions, and since we don't have a single collective human mind, moral truths are going to vary by individuals and cultures. The evolution of psychological, sociological, and cultural norms better explains moral positions than magical beliefs about the mind or the concept that there exists some universal morality.

Well, I basically agree. I don't think a universal morality exists either, nor do I think magic exists. But even if it is all based on opinion, surely we can say that some opinions are more logical than others, can't we?

Dualism is an outdated belief. It is a remnant of the belief that conscious thought was somehow separate from the physical brain. We know now that it is not. But a lot of people cannot get over the fact that, while it sometimes feels like thought processes exist outside the physical brain, they actually do not.

I guess you're right. Certainly, it seems like everything that defines me as "me" is probably accounted for by processes of the brain, including my emotions and thoughts. But I'm not sure how much we know about consciousness itself, other than that it is apparently an emergent property of the brain.
 
Define "feelings"

It's an interesting question - how are things to be defined? Invariably, in terms of other things. And how are those things to be defined? We are in the paradoxical situation where all the words in the dictionary are defined by other words in the dictionary. There is no inherent meaning to be found.

What do we do when we try to define feelings? We can't do it. We can make reference to what happens in a physiological or behavioural sense, but we can't define what a feeling is. But funnily enough, we don't need to define it. We know what it is. Feelings are at a level where definition isn't relevant. If we could define them, we'd have to define them in terms of something more fundamental.

And where do feelings lie in scientific terms? We seem to know more about them. We know they are tied to the physical world. But then, we knew that already - every time we stubbed our toe we felt something.

But we've never observed a feeling in a laboratory. In fundamental science, they don't exist. There's never yet been a physics paper which deals with how feelings are produced. And for all the discussion about "emergent properties" we still know absolutely nothing.
 
Now we're on to the subjective experience of consciousness. Neuroscience can tell us quite a bit about this. (Really, it can--we can stimulate a spot on the sensory cortex and make you subjectively experience hallucinations. It can explain proprioception--and even problems with it that lead to the subjective experience of OBEs.)

The problem that comes up with approaching this stuff from a philosophical point of view is that when we look at the correlation of all these physical things, drugs, MRIs, EEGs, etc. with a person's self-reporting of subjective experience...we're relying on that reporting as being largely accurate and truthful.

Philosophical dualists can then claim that a relatively simple computer program can be made that will self-report the subjective experience of consciousness. It can say, "I am conscious" and "I have feelings", but how can we know it does?
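That dualist point is easy to make concrete: a few lines suffice to produce the self-reports (a deliberately trivial sketch, invented purely for illustration), which is exactly why the reports alone settle nothing.

```python
# A deliberately trivial program that self-reports consciousness.
# It produces the same verbal behaviour as a sincere reporter, which is
# precisely why self-reports alone cannot settle the question.

CANNED_RESPONSES = {
    "are you conscious?": "I am conscious.",
    "do you have feelings?": "I have feelings.",
}

def respond(question: str) -> str:
    return CANNED_RESPONSES.get(question.lower(), "I would rather not say.")

print(respond("Are you conscious?"))     # -> I am conscious.
print(respond("Do you have feelings?"))  # -> I have feelings.
```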

And before long, you're off on an absurd conversation about zombies...

So here's my take: the fact that you can't make something that is ONLY a subjective experience any more objective than we have through science does NOTHING to support dualism.

Even if you hypothesize a soul (or whatever it is that distinguishes a zombie that looks and acts in all ways just like a normal person from a normal person), you've done no better than science (in fact much worse) at explaining the phenomenon of the subjective experience of consciousness or feelings. I say "much worse" because with the zombie-dualism approach, you've got a correlation between drugs, MRIs, EEGs, etc. and reported subjective experiences that you either deny exists or explain by inventing entities. (That is, the soul, or whatever, causes both the change in MRIs and the change in reported subjective experience. I don't know what they do with the fact that you can change the subjective experience with drugs. Presumably the soul, or whatever, just coincidentally experiences a change at the same time, but in a consistent and predictably repeatable way.) That fails Occam's Razor at the least.
 
What do we do when we try to define feelings? We can't do it. We can make reference to what happens in a physiological or behavioural sense, but we can't define what a feeling is.

But what's the difference?

I could define specific feelings as being action potentials in specific neurons, and that works very well. It always corresponds to the person's reporting of subjective experience. Why isn't that a valid definition?

In fact, we can even infer feelings that aren't being reported. If someone has a fever (by objective measurement), but they're unconscious, we have a good idea of how that would feel if they were awake. In this case, the objective measurement is more accurate than the subjective reporting anyway.

When I had surgery on my vocal cords about a year ago, I had general anaesthesia (for the first time since I had ether as a 5-year-old for a tonsillectomy). My subjective experience was that they shot stuff into my IV, I felt euphoric for about 10 seconds, and then "I" switched off. I woke up 3 or 4 hours later with no sense whatsoever of the passage of time. It wasn't at all like being asleep.

The reason they did this, is because the surgery would have caused me a lot of pain if "I" were there to experience it. We have such a practical knowledge of the subjective experience of feelings, that we can routinely and relatively safely do this to people.

So, from a utilitarian point of view, the idea that subjective experience of "feeling" is wholly the result of things that happen to the material structures of the brain and body is very useful.

Dualism is not. At best it can accommodate religious ideas of the soul and explain the difference between a normal person and a zombie that is identical to a normal person in every way except that it lacks a soul (or whatever). That's not useful at all, since, by definition, there's no way to distinguish between them (except for the case of my own self, the only subjective experience of consciousness I know about without relying on correlations).
 
...
Well, I basically agree. I don't think a universal morality exists either, nor do I think magic exists. But even if it is all based on opinion, surely we can say that some opinions are more logical than others, can't we?....
Absolutely. All you need to do is feed the criteria you want to judge by into the problem.

What I think people who can't quite grasp the idea that morality is a material thing are missing is that a person makes a moral judgment (or a beauty judgment) based on criteria the brain processes almost instantaneously.

Consider what you judge good flavor by. Chocolate or sugar or a good steak, if you are so inclined, hits those taste buds and you recognize a pleasurable sensation. It tastes good. But why does it taste good? Purely a neuro-chemical pleasure stimulation.

What happens when you see a breed of dog you've never seen before? You recognize instantaneously it is a dog. How is that possible when you've never seen that breed before? Your brain has pre-established the criteria which defines a dog, or a tree, or an insect, or a soldier, or a fast food restaurant. Your brain processes that criteria instantaneously and you recognize the thing you are looking at even though you've never seen anything exactly like it before.

What happens when you see something beautiful or ugly? The same thing. Only it is not recognized as easily as the taste sensation or the intellectual assessment of an object. The brain has pre-established criteria which it judges beauty by.

Now think about moral judgments. You can define the criteria you are using to judge if you think about it. Or you can just "know" what is right. But how do you know? Your brain has pre-established criteria it weighs nearly instantaneously.

Those "logical" opinions depend entirely on the criteria one establishes to determine the best moral decision by. If that is survival of the group, logic will be different than if your criteria is survival of the individual. Since we evolved as a gregarious species, chances are the majority of individuals will have evolved moral criteria that favor survival of the group, and existence within the group.

There are reasons individual survival is increased if group survival is supported. You can probably discover why a sense of fairness was naturally selected. That may lead you to determine the logic behind such criteria being involved in our moral decisions. Understanding the basis for natural selection of particular moral emotional reactions won't give you the ideal criteria. But understanding the development of the criteria will give you a better understanding of how logic applies to individual moral decisions.
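To make "feed the criteria into the problem" concrete, here is a minimal sketch (the criteria, weights, and scores below are invented for illustration): once the criteria and their weights are explicit, the "logic" of the judgment is just a weighted comparison, and changing the weights changes the conclusion.

```python
# Minimal sketch of a moral judgment as a weighted comparison over
# explicit criteria. Criteria, weights, and scores are invented for
# illustration; shift weight from group to individual survival and
# the "logical" conclusion shifts with it.

weights = {"group survival": 0.6, "individual survival": 0.3, "fairness": 0.1}

options = {
    "share the food": {"group survival": 0.9, "individual survival": 0.4, "fairness": 0.9},
    "hoard the food": {"group survival": 0.2, "individual survival": 0.9, "fairness": 0.1},
}

def score(option):
    return sum(weights[c] * option[c] for c in weights)

for name, criteria in options.items():
    print(f"{name}: {score(criteria):.2f}")
print("judged best:", max(options, key=lambda n: score(options[n])))
```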
 
It's an interesting question - how are things to be defined? Invariably, in terms of other things. And how are those things to be defined? We are in the paradoxical situation where all the words in the dictionary are defined by other words in the dictionary. There is no inherent meaning to be found.

What do we do when we try to define feelings? We can't do it. We can make reference to what happens in a physiological or behavioural sense, but we can't define what a feeling is. But funnily enough, we don't need to define it. We know what it is. Feelings are at a level where definition isn't relevant. If we could define them, we'd have to define them in terms of something more fundamental.

And where do feelings lie in scientific terms? We seem to know more about them. We know they are tied to the physical world. But then, we knew that already - every time we stubbed our toe we felt something.

But we've never observed a feeling in a laboratory. In fundamental science, they don't exist. There's never yet been a physics paper which deals with how feelings are produced. And for all the discussion about "emergent properties" we still know absolutely nothing.
I beg to differ. That "physiological or behavioural sense" is the definition of a feeling.

I learned this first hand when I began to recognize I had a sense of 'worry' as a PMS symptom. Hormones caused me to physically experience 'worry'. There was nothing external causing that sensation. Once I learned that, it was easier to ignore, but the sensation still occurred on a regular basis.

Euphoria and depression are 'feelings' that can occur pathologically. These are brain chemistry abnormalities, pure and simple. They do not result from the usual external stimuli. This allows us to see that those 'feelings' are a biological process. That is what they are; that is the definition. And it can be observed in a lab. We can look at MRIs, PET scans, EEGs, measure neurotransmitters, stimulate areas of the brain to recreate the feelings, and so on. Just because the researcher is relying on the subject to reveal the sensation does not mean we cannot, by observing many subjects, correlate the objective measurements with the subjective reports.
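That correlation step is ordinary statistics. A sketch with invented numbers (nothing here is real data): given one objective measurement and one subjective rating per subject, a correlation coefficient says how reliably the measurement tracks the report.

```python
# Sketch of correlating an objective measurement with subjective reports
# across subjects. All numbers are invented for illustration.
import statistics

# Hypothetical per-subject data: an objective measure (say, a measured
# neurotransmitter level) and a self-reported mood rating (1-10).
objective = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 1.5, 3.1]
reported  = [3,   6,   2,   8,   5,   7,   2,   6]

# Pearson's r (statistics.correlation requires Python 3.10+).
r = statistics.correlation(objective, reported)
print(f"objective measurement vs. subjective report: r = {r:.2f}")
```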
 
Now we're on to the subjective experience of consciousness. Neuroscience can tell us quite a bit about this. (Really, it can--we can stimulate a spot on the sensory cortex and make you subjectively experience hallucinations. It can explain proprioception--and even problems with it that lead to the subjective experience of OBEs.)

The problem that comes up with approaching this stuff from a philosophical point of view is that when we look at the correlation of all these physical things, drugs, MRIs, EEGs, etc. with a person's self-reporting of subjective experience...we're relying on that reporting as being largely accurate and truthful.
Not if you test lots of subjects and establish the objective measurement reliably reflects the subjective report.

Philosophical dualists can then claim that a relatively simple computer program can be made that will self-report the subjective experience of consciousness. It can say, "I am conscious" and "I have feelings", but how can we know it does?
If one can explain that you simply programmed the machine to parrot the words, you may not be able to prove it isn't conscious but you can say it is better explained as not conscious. Which I think from the rest of what you said you agree with.
 
Not if you test lots of subjects and establish the objective measurement reliably reflects the subjective report.
Yes--that was the point I was trying to make, though I was sort of making the dualists' case first.

If one can explain that you simply programmed the machine to parrot the words, you may not be able to prove it isn't conscious but you can say it is better explained as not conscious. Which I think from the rest of what you said you agree with.
Yep.

That's why I deny the existence of "zombies". If I'm presented with someone who has all the physiological correlates of feeling and he reports the subjective experience of feeling, how can I assert that he is a zombie (somehow different from other people)? To me, it's absurd.

In the biological world, we see the capacity for consciousness or feeling (and for morality and language too) existing in a smooth continuum that always corresponds to the physiology. No magic "substance" necessary.
 
