
“Materialism” is an empty word. An incendiary article.

Well, personally, I also have to say that I don't see a reason why we couldn't in theory quantify emotion neurologically. This may even be easier than working out how emotion arises from neural activity.

It may be possible, but I don't know of any method to do it.

I have not understood the rest of your comment about information.

For the ladies, diamonds are usually sufficient.
 
My comment was an answer to the concept of matter proposed by Dancing David. Fair or unfair I don't know. But the mathematical concept of matter is a long-standing interesting issue (Descartes).

Still, asking what units love is measured in, when it had already been explicitly claimed that we do not currently have the information to actually measure it, has no value as a way of disputing the claim that it could be measured if we had sufficient knowledge.
 
Still, asking what units love is measured in, when it had already been explicitly claimed that we do not currently have the information to actually measure it, has no value as a way of disputing the claim that it could be measured if we had sufficient knowledge.

Cupids.

One Cupid = puppy love

Two Cupids = fraternal love

Three Cupids = maternal love

Four Cupids = undying passionate love
 
Humans feel pain. Animals feel pain. Even microscopic rotifers have a brain and nervous system. They may only have 230 neurons, but they can still feel.

Well, I said I was being usefully pedantic. Or trying to be.

Because, what I'm pointing out is that, usually, only a system which also constructs a sense of personal selfhood can make the statement "I feel pain." From there we easily say things like "the body feels pain," or "animals feel pain." But a line is being crossed in the latter two cases. And you don't necessarily have to take any notice of it, but it's good to recognise it, lest you get dragged into murkier waters.

When one says, for example, "I experience," one is not saying "the brain experiences." Thus to say "does the computer experience?" is valid in a certain sense, but it also crosses this line.

I'm just trying to be usefully pedantic.


But how could a computer - which is just a collection of wires and transistors - get the kind of thought patterns that make human brains 'conscious'? The answer is that we make them that way. We make computers to mimic our own brains. When a programmer writes code to do a certain task, they are embedding into it their own logical thought processes - and perhaps even a part of their own personality.

OK, fair enough. I'm not an expert on these things. I know there are things like TrueNorth, neuromorphic chips. I guess we'll soon need a test to work out if a machine has conscious experience. No idea how that would look.
 
To run with such extended meanings of words like sense or feel is to unjustifiably anthropomorphise thermostats, some biomolecules, contemporary computers, toasters, or probably insects. They may map and model their environment, but their brains/circuits/mechanisms don't have the complexity needed to posit or produce an experiencing self. We understand how thermostats work, and there is no room for such a thing, nor any other evidence for believing it occurs.

I think it's also good to be aware that the human brain evolved a highly sensitive Agency Detection System, very useful for detecting potential threats. But it also means that we easily imagine agency where none exists: monsters under the bed, etc. We're hard-wired to do so.

Thus, believing the room thermostat is conscious may have more to do with this than with any genuine interpretation of what it's doing.
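
To make the quoted point about thermostats concrete, here is a minimal sketch in Python (every name in it is invented for illustration) of essentially everything a room thermostat does:

```python
# A minimal sketch of a bang-bang thermostat (all names invented), to make
# the quoted point concrete: the device's entire behaviour fits in a few
# transparent lines, with no remainder left over in which an experiencing
# self could hide.

SETPOINT = 20.0   # target room temperature, degrees Celsius
HYSTERESIS = 0.5  # dead band, to stop the heater rapidly cycling

def control_step(current_temp: float, heater_on: bool) -> bool:
    """Return the heater's next state given the current reading."""
    if current_temp < SETPOINT - HYSTERESIS:
        return True        # too cold: switch the heater on
    if current_temp > SETPOINT + HYSTERESIS:
        return False       # too warm: switch it off
    return heater_on       # inside the dead band: leave it alone
```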
 
How does a human create more information than the 'sum of its parts'? Or do we?

Well, via emergentism, essentially.

The idea is from the Integrated Information Theory of consciousness, currently all the rage in neuroscience. It asserts that you can measure the degree of consciousness a system has (denoted by the Greek letter Phi) by measuring the amount of information it creates, compared with the amount created by its two strongest subsystems. Systems with a large Phi value are deemed to be highly conscious.

If the overall system reduces the level of uncertainty more than its component subsystems do, then presumably all that extra information must go somewhere... so I guess the system is driven to create conscious experience to uphold the second law of thermodynamics. It's a long time since I did maths or physics, I'm afraid.
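
To give a feel for the 'whole beyond the parts' idea, here is a toy Python sketch. It is emphatically not the real IIT calculus (which searches over partitions of a system's cause-effect structure); it just computes the total correlation of two binary subsystems, i.e. the information the joint system carries beyond its parts taken separately, with all names invented for illustration:

```python
import math
from collections import Counter

# A toy illustration only: real IIT computes Phi by searching over all
# partitions of a system's cause-effect structure. This just measures
# "integration" (total correlation) for two binary subsystems: the
# information the joint system carries beyond its two parts considered
# separately.

def entropy(samples) -> float:
    """Shannon entropy, in bits, of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def integration(pairs) -> float:
    """H(A) + H(B) - H(A, B): information beyond the separate parts."""
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]
    return entropy(a) + entropy(b) - entropy(pairs)

# Two perfectly coupled bits integrate 1 bit; two independent bits, 0.
coupled = [(0, 0), (1, 1)] * 50
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
print(integration(coupled))      # 1.0
print(integration(independent))  # 0.0
```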

Anyway, the theory is gaining a lot of attention and respect. And it helps us deal with notions of conscious thermostats and the like when Hyperactive Agency Detection can't!
 
Thank you, but I cannot read a whole book right now. I was looking for an article only.
It seems that Beck's work is aimed at the measurement of depression, not emotions in general.

Anyway, I only know of some studies on emotions that measure subjects' responses, either behavioural or introspective.
They are not very exact.
There is no consensus in psychology about the different methods of measurement.

Therefore, the psychological measurements of some emotions (usually the strongest emotions) refer only to some partial responses to those emotions. These responses may measure something: but what kind of thing, mental or neuronal?

I don't know of any method for the objective measurement of all emotions, even the softer feelings.


Hi,
I did link to a mood-tracking sheet; you can always look at a pain-ranking sheet as well.

For myself, following the biopsychosocial model, emotions are interpretations of physical states that are partly learned, partly conditioned, and shaped by a lot of interpersonal, social, and cultural learning.

This comes from working with individuals to help them identify their emotional states, and from hanging out with children.
 
To run with such extended meanings of words like sense or feel is to unjustifiably anthropomorphise thermostats, some biomolecules, contemporary computers, toasters, or probably insects. They may map and model their environment, but their brains/circuits/mechanisms don't have the complexity needed to posit or produce an experiencing self. We understand how thermostats work, and there is no room for such a thing, nor any other evidence for believing it occurs.

I certainly don't rule this out for machines at some point, or shades of grey in the biological world.

These things seem to be provable so I'd say that your intuition is pretty good. :thumbsup:
 
Because, what I'm pointing out is that, usually, only
So which is it:- usually, or only?

a system which also constructs a sense of personal selfhood can make the statement "I feel pain."
A computer can make the statement "I feel pain", so that means computers have a 'sense of personal selfhood', right?

From there we easily say things like "the body feels pain," or "animals feel pain." But a line is being crossed in the latter two cases. And you don't necessarily have to take any notice of it, but it's good to recognise it, lest you get dragged into murkier waters.
Where is this line, and what does it cross?

When one says, for example, "I experience," one is not saying "the brain experiences."
Correct. The brain is not an isolated organ that can function without a body. In order to experience something it must have input, which is sent to it via nerves from various sensors throughout the body - eyes, ears, skin etc. Without them the brain cannot experience anything.

So when I say that "I" experience something hot or cold, I mean that nerve endings in my skin experience the sensation of hot or cold, then transmit this information via nerves to my brain for analysis. My conscious mind then observes my brain's interpretation of the event after it happens. The mind, brain, nerves and nerve endings are really a continuous whole. Take away any part of this system and "I" do not experience the event.

A computer has analogous functions. Its 'consciousness' is the executive program which prints messages like "I feel hot" when it observes the over-temperature flag being set. This flag is set by a 'subconscious' interrupt routine which reads raw temperature data from a remote sensor, analyzes it, and sets the flag when the temperature exceeds a critical value.

In this analogy the computer's 'mind' is its operating software - pure information with no physical form. The 'brain' is the CPU, RAM and hard drive. 'Nerves' are the wires connecting the CPU and memory to I/O devices, and 'nerve endings' are sensors such as the thermistor which measures temperature. For the computer to 'experience' some event, everything must work as a whole. Without a CPU the software is just a collection of bits. Without wires the CPU cannot receive any information, and without a sensor it cannot measure the temperature.
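
For what it's worth, this analogy can be sketched in a few lines of Python. Every name here is invented for illustration, and a random number generator stands in for the thermistor:

```python
import random
import threading
import time

# A sketch of the analogy above: a background 'subconscious' routine polls
# a (simulated) sensor and sets a flag; the 'executive program' only ever
# looks at the flag, never the raw data.

CRITICAL_TEMP = 70.0                  # the critical value, in Celsius
over_temperature = threading.Event()  # the 'over-temperature flag'

def read_sensor() -> float:
    """Stand-in for reading raw data from a remote temperature sensor."""
    return random.uniform(60.0, 80.0)

def subconscious() -> None:
    """The 'interrupt routine': read raw data, set the flag when needed."""
    while True:
        if read_sensor() > CRITICAL_TEMP:
            over_temperature.set()
        time.sleep(0.1)

threading.Thread(target=subconscious, daemon=True).start()

# The 'executive program' reports its 'feeling' when it sees the flag.
for _ in range(10):
    if over_temperature.is_set():
        print("I feel hot")
        over_temperature.clear()
    time.sleep(0.2)
```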

Of course nobody is suggesting that current computers are 'conscious' in a human sense, but is the difference fundamental or just one of scale? If the process is the same then perhaps the computer is conscious, just at a much lower intensity than a human. Take away 99.9% of your brain cells and you too might have the same amount of consciousness as a computer. Make a computer powerful enough and it may magically become 'conscious' simply by having enough consciousness to match our own.
 
Well, you know your own subjective experiences exist. But with regards to others, no. The existence of anyone else's subjective experiences can only be inferred, and the content of those experiences will always be a mystery.

No! That perspective is pure dualism. In a monist system there cannot exist a point of observation.
 
To run with such extended meanings of words like sense or feel is to unjustifiably anthropomorphise...
And yet in electronics, thermostats are classed as sensors. How can engineers justify anthropomorphizing inanimate objects? Perhaps because the concept should not be restricted to humans.

The word 'anthropomorphize' means to talk about a thing or animal as if it were human. But the charge only properly applies to properties that only humans have. There is no fundamental difference between the nervous systems of a human, a chimpanzee, a dolphin or a mouse - so there is no reason to believe that they don't feel just like we do. The only difference is that humans can vocalize their feelings.

Going further, why insist that machines can't be like us in some ways if they act the same? After all, we build them precisely to have those attributes!
 
No! That perspective is pure dualism.
Nonsense - it has nothing to do with dualism. The reason that you can only 'infer' someone else's subjective experience is quite mundane. Unlike a computer, every human brain becomes 'randomly' wired as it grows and gathers information. Therefore it is not possible to 'jack into' another person's brain and make sense of their thoughts. That is why we have developed language - to communicate ideas and experiences in a common format that can be understood between minds.

There may come a time when computers have to do the same. We are already having problems getting various operating systems to work together. Programs are often built to run on virtual machines that have a different translator for each platform. Flash drives have internal memory maps that only they know about, each chip mapping out bad bits and moving sectors around to even out wear. To get higher performance we may have to let computers rewire themselves to avoid breaking down when a bit of their microscopic circuitry fails. Eventually we may end up 'growing' computer 'brains' that have to be 'taught', rather than just making exact clones that we can shovel identical programs into.
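
As a toy illustration of the flash-drive point, here is a Python sketch (every detail invented) of a controller keeping a private logical-to-physical map and rotating writes across its least-worn blocks; the host never sees where its data physically lives:

```python
# A toy sketch of the flash-drive point above; all details are invented
# for illustration. The controller keeps a private logical-to-physical
# map, steers each write to its least-worn block, and quietly retires
# blocks that hit their wear limit. The host only ever sees logical
# sector numbers, so the raw chip layout is meaningful to no one else.

MAX_WRITES = 3  # absurdly low wear limit, so retirement shows up quickly

class ToyFlash:
    def __init__(self, physical_blocks: int):
        self.mapping = {}                  # logical sector -> physical block
        self.wear = [0] * physical_blocks  # write count per physical block
        self.free = list(range(physical_blocks))

    def write(self, logical: int) -> None:
        # Pick the least-worn free block for this write, then remap.
        self.free.sort(key=lambda b: self.wear[b])
        block = self.free.pop(0)
        self.wear[block] += 1
        old = self.mapping.get(logical)
        if old is not None and self.wear[old] < MAX_WRITES:
            self.free.append(old)          # recycle; worn-out blocks retire
        self.mapping[logical] = block

drive = ToyFlash(4)
for _ in range(6):
    drive.write(0)   # one logical sector rotates across physical blocks
print(drive.mapping, drive.wear)
```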

In a monist system there cannot exist a point of observation.
Nonsense - the system is monist, and we do have 'points of observation'.
 
My conscious mind then observes my brain's interpretation of the event after it happens.
How are you not creating an infinite regression? My brain interprets my brain's interpretation of my brain's interpretation? Homunculus theory has the same problem.

A computer has analogous functions. Its 'consciousness' is the executive program
No, all computers today are non-cognitive systems.

Of course nobody is suggesting that current computers are 'conscious' in a human sense, but is the difference fundamental or just one of scale?
It's fundamental. You can't make a computer or neural network cognitive by making it faster or increasing its processing power.

Make a computer powerful enough and it may magically become 'conscious' simply by having enough consciousness to match our own.
I would say that it won't.
 
Correct. The brain is not an isolated organ that can function without a body. In order to experience something it must have input, which is sent to it via nerves from various sensors throughout the body - eyes, ears, skin etc. Without them the brain cannot experience anything.


…so basically you’re saying that dreams don’t happen. Imagination is just imaginary. Self-reflection, thought, intuition...all a fantasy!
 
Well, my sister's company sells these devices from some guys called HeartMath that claim to measure how open your heart is.

For myself, following the biopsychosocial model, emotions are interpretations of physical states that are partly learned, partly conditioned, and shaped by a lot of interpersonal, social, and cultural learning.

This comes from working with individuals to help them identify their emotional states, and from hanging out with children.

Sigmund Freud thought that he was speaking of mental components (Ego, Superego, Id) that in reality were brain functions, but he recognized that science in his time was not able to provide an exact translation of these mental states in terms of biological concepts. The neurological studies of the brain have advanced a lot since Freud but the situation continues to be similar on this particular point. We can locate brain areas for many mental activities, but we still cannot discriminate between particular states of mind. This limitation is particularly visible in the measurement of emotions.

The studies on the measurement of emotions have two main resources: questionnaires and the observation of behaviour.

The tests by questionnaire measure the subject's beliefs about his own emotions. The introspective answers do not measure the emotions themselves, but the subject's linguistic responses, which are subjective; that is to say, what the subject thinks about his emotions. (This is very apparent in the case of moral emotions, where the cultural determinant is stronger.) The observation of behaviour is only valid for standard situations and in cases of strong emotion. You can “see” the shame of a person when she is blushing. But in many circumstances the subject is able to control his bodily reactions, and then the observation (and the measurement!) is impossible.

This is why the physicalist (strictly materialist) program is not currently feasible. I am a materialist because I think all mental states are caused by brain alterations. But I cannot say that all mental states are brain alterations.
 
(This is very apparent in the case of moral emotions, where the cultural determinant is stronger.) The observation of behaviour is only valid for standard situations and in cases of strong emotion.
I don't know what a moral emotion is. Most emotions have representative facial expressions which are recognizable both to pre-verbal babies and to people of widely varying cultures. This suggests that facial expressions and emotions are part of the basic brain structure.

This is why the physicalist (strictly materialist) program is not currently feasible. I am a materialist because I think all mental states are caused by brain alterations. But I cannot say that all mental states are brain alterations.
What is a physicalist program? And are you suggesting a dualist model?
 
Sigmund Freud thought that he was speaking of mental components (Ego, Superego, Id) that in reality were brain functions, but he recognized that science in his time was not able to provide an exact translation of these mental states in terms of biological concepts. The neurological studies of the brain have advanced a lot since Freud but the situation continues to be similar on this particular point. We can locate brain areas for many mental activities, but we still cannot discriminate between particular states of mind. This limitation is particularly visible in the measurement of emotions.

The studies on the measurement of emotions have two main resources: questionnaires and the observation of behaviour.

The tests by questionnaire measure the subject's beliefs about his own emotions. The introspective answers do not measure the emotions themselves, but the subject's linguistic responses, which are subjective; that is to say, what the subject thinks about his emotions. (This is very apparent in the case of moral emotions, where the cultural determinant is stronger.) The observation of behaviour is only valid for standard situations and in cases of strong emotion. You can “see” the shame of a person when she is blushing. But in many circumstances the subject is able to control his bodily reactions, and then the observation (and the measurement!) is impossible.

This is why the physicalist (strictly materialist) program is not currently feasible. I am a materialist because I think all mental states are caused by brain alterations. But I cannot say that all mental states are brain alterations.

I agree; however, we only learn how to label our emotions. They don't seem to be intrinsically understood behaviors. Hang out with children: adults model emotions, provide emotional cues, and basically teach children how to interpret emotion.

IMO
 
Correct. The brain is not an isolated organ that can function without a body. In order to experience something it must have input, which is sent to it via nerves from various sensors throughout the body - eyes, ears, skin etc. Without them the brain cannot experience anything.

I wouldn't personally call the brain an experiencer. I would call it a processor.

So when I say that "I" experience something hot or cold, I mean that nerve endings in my skin experience the sensation of hot or cold, then transmit this information via nerves to my brain for analysis. My conscious mind then observes my brain's interpretation of the event after it happens. The mind, brain, nerves and nerve endings are really a continuous whole. Take away any part of this system and "I" do not experience the event.

You're asserting three levels of neural transduction in this process of experiencing, then? Outer reality to neural spike trains; neural spike trains to a representation for this "conscious mind" to look at; and then back to neural spike trains to integrate any changes this "conscious mind" might like to make?

And where is this "conscious mind" that is apparently observing? Does it have a location?
 
