
“Materialism” is an empty word. An incendiary article.

The researchers successfully measured the neural correlates of emotion, not the emotions themselves. These are different things.
'The researchers successfully measured the electrical impulses correlating with subatomic particles hitting the sensors, not the particles themselves'.

They're not actually measuring qualia.
When an animal experiences fear, hate or love, it is a measurable physiological effect - but when humans do it, we insist that it is something else. To that argument I ask: what does it feel like to be a computer? Do they experience 'qualia' like us? Of course they do, just like humans do when specific parts of the brain are stimulated with electrical impulses. We only think we are different because of how our conscious mind interprets these 'feelings'.
 
Yes, I'm very aware that philosophical discussions are mostly semantic word games that have no practical use in reality. That is precisely why I made the comment.

Speaking of "semantic word games," I wonder if that phrase means anything other than "arguments that do not interest me."

In any case, your question was obviously silly and failed to make any real point at all, since no one in this discussion has pretended that the topic has anything to do with practical matters like moving snow.

But, you know, aside from that, good point and all.
 
Speaking of "semantic word games," I wonder if that phrase means anything other than "arguments that do not interest me."

Uh, no. That is not what it means. Which word are you having trouble with? "Semantic"? "Word"? "Game"? In any case, just look up those words in the dictionary and you will have a very clear idea of what I mean.
 
If I may use the same simplified example, of snow being white and cold. We have the thing or content of snow; and its properties, cold and white.
Snow is whatever we decide it is. Cold, white, wet, slushy, dirty, melted - whatever. We call some 'thing' with those properties 'snow' because our puny brains are unable to take in the totality of the phenomena. But we must not forget that in reality it is not a separate object, but merely a particular configuration of a small part of the universe that looks different to us. 'Snow' is a grossly simplified model with just enough attributes for us to distinguish 'snow' from 'not snow' - and has no meaning beyond that.

Imagine a computer so powerful that it could store the position and energy of every atom in a snowball, even keep track of all the subatomic interactions down to the limits of quantum uncertainty. Imagine if it could then process that information to interact with the snowball in ways we would never have thought of. That computer has a much better concept of 'snow' than we do.

At the other end of the scale, consider a low-powered microcontroller that merely measures the temperature and albedo of whatever its sensors are pointing at, compares the data to a predefined matrix, and then declares it to be 'snow' (because that's what it was told those numbers mean). That second computer is us.
 
Snow is whatever we decide it is. Cold, white, wet, slushy, dirty, melted - whatever. We call some 'thing' with those properties 'snow' because our puny brains are unable to take in the totality of the phenomena.

I agree with you - we call 'snow' that thing that has a particular set of properties - but snow is not a set of properties.
 
Snow is whatever we decide it is. Cold, white, wet, slushy, dirty, melted - whatever. We call some 'thing' with those properties 'snow' because our puny brains are unable to take in the totality of the phenomena. But we must not forget that in reality it is not a separate object, but merely a particular configuration of a small part of the universe that looks different to us. 'Snow' is a grossly simplified model with just enough attributes for us to distinguish 'snow' from 'not snow' - and has no meaning beyond that.

Imagine a computer so powerful that it could store the position and energy of every atom in a snowball, even keep track of all the subatomic interactions down to the limits of quantum uncertainty. Imagine if it could then process that information to interact with the snowball in ways we would never have thought of. That computer has a much better concept of 'snow' than we do.

At the other end of the scale, consider a low-powered microcontroller that merely measures the temperature and albedo of whatever its sensors are pointing at, compares the data to a predefined matrix, and then declares it to be 'snow' (because that's what it was told those numbers mean). That second computer is us.

That computer is also imaginary.
 
Yes, they are. They are chemical reactions taking place within the brain. Given a sufficient knowledge of neurochemistry and so forth, as well as a detailed definition of the emotion, they can be measured.

Note that this is not currently practical, but that does not change the fact that, given sufficient information, it can be done.

What is the unit of measure for love?
Don't speak of the future, please. We are not writing science fiction, I hope.


And what is that matter defined as, then, if not as the sum of its properties?

Definition: A minetable is any object on my table.
There is now on my table my laptop, a bottle of water, some pencils and pens, four books, the scanner, an earphone, etc. Do you think that you can construct the concept of minetable by putting together the features of books, pencils, scanners and bottles?
There is nothing on my table that is a thing called “minetable”, because “minetable” is not a category but just an artificial name for very disparate things. In the traditional way of speaking, “to be on the table” is an accidental property of the things that are on my table at a given moment (it is not a defining property). At this moment my cat jumps on the table, and we have to add further properties to the concept of minetable, which becomes unstable.


This would also be the case for "matter" as the sum of the properties of particles, energy, black holes, etc.
 
First, and importantly, Nonpareil said nothing about atomic propositions and I don't think that they are necessarily relevant, since the sort of propositions we had in mind didn't seem to be atomic.

Sorry, I should have been clearer, but it was getting late at night. I automatically skipped over a few steps that are obvious to me, because I take them almost without thinking, and forgot that they are not obvious to others. This is getting quite off-topic, but I'll try to expand things a bit here.

There are two aspects to default reasoning: theoretical and practical. The theoretical basis comes from the observation that there are concepts that occur in everyday reasoning but that you can't represent in classical logic in any usable way. Things like: 'Today is a work day, so Jack is likely to be at his workplace. I need to meet him soon, so I should go there.' If we want to capture reasoning like that, we need to add some sort of defaults to the logic. The precise form the defaults take is not that important, just that we need some.

From the practical side, having defaults available often makes it simpler to formalize things that you could also represent with classical logic. Classical logic is a fragile and highly unintuitive system; it is easy to make simple mistakes that render the formalization inconsistent or otherwise incorrect. Defaults can be used to eliminate some common sources of error.

As an almost on-topic side note, I'm not a fan of trying to use symbolic logic to argue about the nature of reality. With logic, you always start by doing an abstract formalization of whatever you try to reason about. All the reasoning that you do is about properties that the formalization has. If the formalization is correct, then the results apply also to the thing that you modeled. If it is incorrect, you may get interesting results that are completely wrong. When the subject gets too philosophical, you have absolutely no way of knowing whether you have a correct formalization or not, so the results are completely useless.

Second, while my familiarity with default logics is only superficial, I don't recall the principle you invoke here and it doesn't strike me as a particularly good principle. It makes the logic very sensitive to the selection of atomic predicates vs. defined predicates.

I personally like default negation of atoms because I find that it helps quite a lot in formalizing practical problems. Why it does so needs a somewhat longer explanation.

When you want to solve some real-world problem with symbolic logic, there are two basic approaches that you can use:

* theorem proving, where you express the problem as a theorem and create the solution as part of proving it.

* model-theoretic, where you construct a model of a set of sentences and that model contains your answer.

In both cases you will be using a computer, because the problem will almost always be far too large for anything but an automated solver. In very general terms, theorem proving is the kind of thing that traditional logic programming languages like Prolog do, while the model-theoretic approach is SAT solvers and the like. (SAT = the satisfiability problem of propositional logic.)
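As a toy illustration of the model-theoretic route (the atoms and sentences below are invented for the example; a real SAT solver is vastly more sophisticated than this brute-force search), "finding a model" means finding a truth assignment that satisfies every sentence:

```python
from itertools import product

# Brute-force model search: enumerate all truth assignments over the
# atoms and keep those satisfying every sentence. The "answer" to the
# encoded problem is then read directly off a satisfying model.
atoms = ["p", "q", "r"]
sentences = [
    lambda m: (not m["p"]) or m["q"],  # p implies q
    lambda m: m["p"] or m["r"],        # p or r
]

models = []
for values in product([False, True], repeat=len(atoms)):
    m = dict(zip(atoms, values))
    if all(holds(m) for holds in sentences):
        models.append(m)

print(len(models))  # 4 of the 8 assignments are models
```

A real solver does the same job with clause learning and search heuristics rather than enumeration, which is why it scales to millions of atoms while this sketch dies at a few dozen.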

When you have a problem that's large enough, you need to go the model-theoretic route. This is because theorem proving is much more expensive computationally: with model finding you can solve problems that are a few orders of magnitude larger than you can with theorem proving. SAT and its ilk are computationally difficult problems, but theorem proving is even more difficult.

Finding a model for a set of sentences of predicate logic is, in practice, more difficult than finding one for a set of sentences of propositional logic, so you want to go the propositional route. However, formalizing a large problem directly in propositional logic is very difficult. So difficult, in fact, that I'm willing to say that no living person could create a correct propositional formalization of a problem with as few as 1000 propositional atoms in it.

The way to create a propositional formalization is to first encode the rules of the problem domain as sentences of predicate logic, and then give the parameters of the specific problem instance as a set of facts. The next step is to use an automated tool to translate the rules into propositional logic with respect to the facts that define the instance. This gives you a propositional instance that you can feed into a SAT solver to get the answer.

For example, if we have a rule 'forall x . P(x) implies Q(x)' and the universe contains two constants, a and b, then we get four propositional atoms P_a, P_b, Q_a, Q_b and two rules: 'P_a implies Q_a' and 'P_b implies Q_b'. (Usually we keep using the predicate notation when speaking about the propositional atoms and write P(a) instead of P_a, because the predicate notation tends to be clearer and there is little danger of confusion.)
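That grounding step can be sketched mechanically. This is a deliberately simplified, string-based illustration of instantiating 'forall x . P(x) implies Q(x)' over the constants a and b; no real grounder works on strings like this:

```python
# Ground 'forall x . P(x) implies Q(x)' over a finite universe:
# one propositional implication per constant, with the propositional
# atoms written in predicate-style notation, P(a) instead of P_a.
constants = ["a", "b"]

ground_rules = [(f"P({c})", f"Q({c})") for c in constants]
print(ground_rules)  # [('P(a)', 'Q(a)'), ('P(b)', 'Q(b)')]
```

With n-ary predicates the same idea applies, except each rule is instantiated once per tuple of constants, which is where the blow-up in instance size comes from.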

From a knowledge representation point of view, the biggest problem with using the model-theoretic approach with classical logic is that we have to add a large number of rules to weed out obviously incorrect answers. If we want to solve a logistics problem where we transport packages from one location to another, the vocabulary will include atoms like 'at(p, l, t)', meaning 'package p is at location l at time t', and 'transport(p, l_1, l_2, t)', meaning 'transport package p from l_1 to l_2 at time t'. We would have rules that look something like:

forall p, l_1, l_2, t ( at(p, l_1, t) and transport(p, l_1, l_2, t) implies at(p, l_2, t+1))

The intuitive meaning of this is: 'when we move a package from one place to another, it will be at the destination'.

The problem is that when we have implications like this, there is a trivial model where all atoms 'at(p, l, t)' are true, which is not very useful for logistics planning - everything is everywhere all the time. We have to add a number of rules (called frame axioms) enforcing that a package may not be in two places at the same time, that it moves when it is transported, and that it stays in the same place when it is not.
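To see the trivial model concretely, here is a toy check (one package, two locations, two time steps; all names are invented for the example). Setting every at(...) atom to true satisfies each ground instance of the transport rule, because an implication with a true consequent always holds:

```python
# Toy universe for the logistics rule
#   at(p, l1, t) and transport(p, l1, l2, t) implies at(p, l2, t+1)
packages, locations, times = ["p1"], ["l1", "l2"], [0, 1]

# The degenerate assignment: every package is "everywhere, always".
at = {(p, l, t): True for p in packages for l in locations for t in times}
transport = {(p, a, b, t): False for p in packages
             for a in locations for b in locations for t in times}

# Check every ground instance of the implication.
satisfied = all(
    (not (at[(p, a, t)] and transport[(p, a, b, t)])) or at[(p, b, t + 1)]
    for p in packages for a in locations for b in locations
    for t in times[:-1]
)
print(satisfied)  # True - a model, yet useless as a plan
```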

Creating the frame axioms is, in practice, much easier with default negation, because it is much closer to the way people actually reason. You don't have to remember to prevent spontaneous teleportation, because at(p, l, t) is always false unless you force it to be true with an inference rule. You can't escape writing some frame axioms; for example, you still need to prevent a package from being in two locations at the same time. But you can often cut the number of axioms by half or more, and you usually get to keep the simpler half and eliminate the more complex rules.
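The inertia-by-default idea can be caricatured procedurally. The sketch below is only an analogy under invented names, not actual default-logic semantics: a package's next position defaults to its current one, so nothing has to rule out spontaneous teleportation explicitly.

```python
# Procedural analogy for default negation in the frame problem:
# positions persist by default; only an explicit transport changes them.
def step(position, moves):
    """position: {package: location}; moves: {package: (src, dst)}."""
    nxt = {}
    for p, loc in position.items():
        if p in moves and moves[p][0] == loc:
            nxt[p] = moves[p][1]  # rule fires: transported package moves
        else:
            nxt[p] = loc          # default (inertia): it stays put
    return nxt

state = {"p1": "l1", "p2": "l2"}
state = step(state, {"p1": ("l1", "l2")})
print(state)  # {'p1': 'l2', 'p2': 'l2'}
```

Note how the `else` branch plays the role of half the frame axioms: untouched packages stay where they are without any rule saying so for each package and location.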

As a final step you may want to translate the encoding that uses default reasoning back into pure propositional logic, using an automatic translation that encodes the rules of the default semantics in classical logic. The reason is that SAT solvers tend to be better optimized than solvers for other semantics, so you may be better off using them even with the overhead of the final translation. (And the reason you can do that, even though I wrote earlier that classical logic can't represent defaults, is that here we are looking at one single specific case: we take the closed-world assumption that all relevant data is included in the encoding, add a number of extra propositional atoms for default handling, and end up with a SAT instance that no one can understand without knowing the details of the transformation.)

Is this "simplest" form of default logic actually defended? I don't recall running across it when I briefly looked into the topic a decade ago.

I used the term 'default logics' a bit imprecisely by clumping all the other forms of non-monotonic reasoning together with it. My reason is that they all aim to solve the same problem, and 'default' is the term with the most human-readable meaning (for example, in my experience no one can guess what 'circumscription' means in logic, and every time I write it I have to be extra careful not to make unfortunate typos). The standard, though sorely dated, reference for non-monotonic reasoning is Marek & Truszczynski's Nonmonotonic Logic, 1993. Or it was the standard back when I used logic actively; it has been some years since then.
 
What is the unit of measure for love?
Don't speak of the future, please. We are not writing science fiction, I hope.

Not a particularly fair or meaningful question, really. How often have units of measure been named before the thing in question could be measured in any notable fashion? And why would being able to produce such a name mean anything for either your position or anyone else's on the general topic?
 
'The researchers successfully measured the electrical impulses correlating with subatomic particles hitting the sensors, not the particles themselves'.

This is a really poor analogy. There are models, both classical and quantum mechanical, that explain how particles interact with sensors. There are no models that explain how love, fear, lust, etc. interact with an fMRI instrument or an electrode. (Unless, of course, you define an emotion to equal a set of firing neurons without any understanding of how firing neurons = qualia. But that would be ridiculous.) This is the hard problem of consciousness.

But you do bring up a good point. Measurement of anything involves instrumentation, and it is incredibly important to understand how the measuring apparatus interacts with the system under study to generate data. And our interpretation of the data depends on our models. We don't actually ever "see" particles. But we at least have a quantitative model that describes them, which is completely different from the situation with emotional states.

When an animal experiences fear, hate or love, it is a measurable physiological effect - but when humans do it, we insist that it is something else. To that argument I ask: what does it feel like to be a computer? Do they experience 'qualia' like us? Of course they do, just like humans do when specific parts of the brain are stimulated with electrical impulses. We only think we are different because of how our conscious mind interprets these 'feelings'.

I never said emotional states are unique to humans so I don't know where you got that idea. I feel it is likely that most animals with brains like humans experience emotions. Probably even animals with brains unlike humans.

I don't know what it feels like to be a computer and I have no idea why you think they experience qualia like us. They don't appear to be conscious, so when you say "of course they do [experience qualia]", it seems like you are reaching.
 
What is the unit of measure for love?
Don't speak of the future, please. We are not writing science fiction, I hope.

Well, personally, I also have to say that I don't see a reason why we couldn't, in theory, quantify emotion neurologically. This may even be easier than working out how emotion arises from neural activity.

In Integrated Information Theory, for example, the proposal is that the measured amount of integrated information in a system represents the degree of consciousness it possesses. Even some critics of IIT accept this may well be the case, even though the theory doesn't really attempt to explain how II becomes conscious awareness.

Wanting to know how much you love her, the lady may in future be able to use a meter!
 
Well, personally, I also have to say that I don't see a reason why we couldn't, in theory, quantify emotion neurologically. This may even be easier than working out how emotion arises from neural activity.

In Integrated Information Theory, for example, the proposal is that the measured amount of integrated information in a system represents the degree of consciousness it possesses. Even some critics of IIT accept this may well be the case, even though the theory doesn't really attempt to explain how II becomes conscious awareness.

Wanting to know how much you love her, the lady may in future be able to use a meter!

Oh lord, another memeplex!
 
There are no models that explain how love, fear, lust, etc. interact with an fMRI instrument or an electrode. (Unless, of course, you define an emotion to equal a set of firing neurons without any understanding of how firing neurons = qualia. But that would be ridiculous.) This is the hard problem of consciousness.

Well, in behavioural psychology emotions are simply seen as short-cuts to useful behaviour, short-cuts that have been engineered into us during our evolutionary history. Emotions hard-wire the majority of humans to respond in ways that help to fulfil basic needs. That's the idea anyway. Of course, life is much more complex these days than in hunter-gatherer times.

But, anyway, you don't necessarily need to know how neural activity gives rise to consciousness in order to locate neural correlates of consciousness. Or to quantify activity in those correlates. We know the brain forms multiple representations of what's going on, and that command-structures decide which of these drafts should be propagated about the brain.

A lot of scientists work on these "easy problems."
 
 
Oh lord, another memeplex!

Well, it is a collection of memes that usually finds itself hosted by a brain more inclined towards math or information theory! This type of person seems to like IIT, presumably thinking... "It's my go now! You neurologists, you spiritualists, you idealists, you've all had your turn. Now I'm going to do the job properly for you!"

We shall see
 
Thanks for your patient explanation, most of which I have snipped, since I have no questions or comments. I'll reply to a few things below.

As an almost on-topic side note, I'm not a fan of trying to use symbolic logic to argue about the nature of reality. With logic, you always start by doing an abstract formalization of whatever you try to reason about. All the reasoning that you do is about properties that the formalization has. If the formalization is correct, then the results apply also to the thing that you modeled. If it is incorrect, you may get interesting results that are completely wrong. When the subject gets too philosophical, you have absolutely no way of knowing whether you have a correct formalization or not, so the results are completely useless.

I more or less agree. As far as I'm concerned, the role of formalization in philosophy is to make one's proposals as clear and unambiguous as possible. In part, this is useful because it allows one to draw clear consequences from philosophical positions. But of course, the real philosophical meat comes from informal reasoning, with some additional possibility of error coming from the formalization itself.

Formal logic is thus a useful tool for making explicit philosophical claims. Of course, it doesn't work well in every domain, but it can be useful in certain cases.


I personally like default negation of atoms because I find it that it helps quite a lot in formalizing practical problems. Why it does so needs a bit longer explanation.

[...]

As a final step you may want to translate the encoding that uses default reasoning back to pure propositional logic using an automatic translation that encodes the rules of the default semantics into classical logic. The reason why you do that is that SAT solvers tend to have better optimizations than solvers for other semantics and you may be better off using them even with the overhead of the final translation.

Thanks for the thorough explanation of why, in certain settings, it is sensible to set atomic propositions to false "by default".

One minor question: you said that humans can't solve a SAT problem with 1000 variables. Is this currently feasible with automated SAT solvers?

I used the term 'default logics' a bit imprecisely by clumping all the other forms of non-monotonic reasoning together with it. My reason is that they all aim to solve the same problem, and 'default' is the term with the most human-readable meaning (for example, in my experience no one can guess what 'circumscription' means in logic, and every time I write it I have to be extra careful not to make unfortunate typos). The standard, though sorely dated, reference for non-monotonic reasoning is Marek & Truszczynski's Nonmonotonic Logic, 1993. Or it was the standard back when I used logic actively; it has been some years since then.

I think that non-monotonic reasoning is an approximation of a large chunk of human reasoning, and also an interesting subject in itself. But, as I said, it is not a topic that I've looked at carefully.

Thanks for the reference.
 
The researchers successfully measured the neural correlates of emotion, not the emotions themselves. These are different things.

An example, if this isn't clear: emotions also correlate with facial expressions, body language, and behavior. If I were allowed to observe someone over a period of time, I could most likely infer their emotional state. This does not mean I am measuring emotions.

Similarly, the researchers are observing neural activity over time, and training their model with known emotions of subjects. They can then infer emotions of new subjects based on fMRI data. They're not actually measuring qualia.

Qualia can't be shown to exist?

How do you know they exist?

:D
 
I agree with you - we call 'snow' that thing that has a particular set of properties - but snow is not a set of properties.

Correct, because snow is an aggregate of frozen ice crystals that is an external referent in the idiomatic self-referencing system of symbols used in language between communicants.
 
