Do you believe in mental causation?

hammegk said:
The problem is that if "the fact that the only means you have to explain the circularity of cognizance is through cognizance" doesn't have meaning to you, I really don't know what else to say. :(
When I boil that sentence down, I get solipsism. Are you a solipsist, hammegk? (I ask in all seriousness.) If not, how do you reconcile that sentence with anything other than solipsism?

Er, "what-is" ... the whole enchilada .. the universe ... everything ...

Sorry not to be of help, M.
Hey, that does help, actually. I was unclear as to whether you meant "people", or universe, or something even more, or less, or in between.

I will return to the original sentence, plug this in, and try to see if I can respond.
 
Mercutio said:
When I boil that sentence down, I get solipsism. Are you a solipsist, hammegk? (I ask in all seriousness.) If not, how do you reconcile that sentence with anything other than solipsism?
We are conscious, that much -- nothing more, nothing less, by the way -- is blatantly obvious.
 
Iacchus said:
We are conscious, that much -- nothing more, nothing less, by the way -- is blatantly obvious.
And until you define "conscious" in a non-circular manner, Iacchus, your statement holds no more meaning than "we are, that much--nothing more, nothing less, by the way--is blatantly obvious." "Consciousness", undefined or circularly defined, adds a whole lot of nothing to your statement.

Indeed, on the face of it, your statement is doubly false. First, by most definitions of consciousness, we are not always conscious. Unless and until you provide an adequate definition of consciousness to compete with the commonly used meanings, you are wrong. Secondly, until you are able to define consciousness meaningfully (which you have not done in any thread here yet), it cannot be "blatantly obvious". Any "we are X", where X remains undefined, cannot be evaluated at all, let alone be "blatantly obvious".
 
Mercutio said:
And until you define "conscious" in a non-circular manner, Iacchus, your statement holds no more meaning than "we are, that much--nothing more, nothing less, by the way--is blatantly obvious." "Consciousness", undefined or circularly defined, adds a whole lot of nothing to your statement.
And do you realize that you couldn't even attempt to make such a statement unless you were "conscious?" It's very clear to me what I mean.
 
Iacchus said:
And do you realize that you couldn't even attempt to make such a statement unless you were "conscious?" It's very clear to me what I mean.
No, Iacchus, I do not realize that. I know that I could not make such a statement unless I was breathing (measured over time--I could type while holding my breath). I know I could not make such a statement unless my heart was beating. I could not make such a statement without several other conditions being true...each of which I can objectively define, measure, distinguish from other conditions...if you are saying that "consciousness" is an absolute necessity for me to be able to make such a statement, then why is it so difficult for you to actually define it?

If all you can do is spout these circularities, I am afraid that "consciousness" will join "phlogiston" in the pile labeled "nice idea, but doesn't work".
 
Mercutio said:
If all you can do is spout these circularities, I am afraid that "consciousness" will join "phlogiston" in the pile labeled "nice idea, but doesn't work".

Is that why my Aethometric Temporal Retensioner isn't working? The phlogiston is faulty?
 
Mercutio said:
When I boil that sentence down, I get solipsism. Are you a solipsist, hammegk? (I ask in all seriousness.) If not, how do you reconcile that sentence with anything other than solipsism?
Solipsism, the first hurdle. I aver that *I* (thought) do not believe that it, or *me* (the baggage perceived as and perceived via the wetware, ego & all) is The Solipsist, nor do I aver, or deny, The Solipsist exists per se. To facilitate discussion, I'm willing to take on faith that you are not The Solipsist either, but rather find yourself in the same position I (and everything that thinks) is in.

How do you reconcile that you are not, in fact, The Solipsist?


Hey, that does help, actually. I was unclear as to whether you meant "people", or universe, or something even more, or less, or in between.

I will return to the original sentence, plug this in, and try to see if I can respond.
:)



eta: I just noticed the poll as it stands has 13 votes for 'believe in mental causation', 8 'idunnos'; where are all these idealists & proto-idealists hiding? ;)
 
hammegk said:
... I just noticed the poll as it stands has 13 votes for 'believe in mental causation', 8 'idunnos'; where are all these idealists & proto-idealists hiding? ;)
I'm hiding here in the 'mental' ward. I rarely vote in the polls and didn't in this one.

It seems obvious that the idea of the symphony, cathedral, or poem precedes the creation of those things in the physical world. When I code a computer program, I know what I want to write, and as I write I check whether I'm following my plan and spelling things correctly. I know I will the body to act even when it might wish to take a nap instead.

But I'm pretty sure I never know what Ian is talking about and Hamme, you are often equally obscure.

Still, you are both interesting characters to me. I'd like to define myself as Merc does, a pragmatist. I think that's the only way out of this dualist tendency I exhibit.

The physical world exists. I know it. But the idealist position of existence as conceptualization is too anti-commonsensical to entertain. It leads nowhere. The predictable physical world leads to brighter tomorrows. (Whoops - that's how my idealism slips out.)
 
hammegk,

I just noticed the poll as it stands has 13 votes for 'believe in mental causation', 8 'idunnos'; where are all these idealists & proto-idealists hiding?
Well, Ian seems hell bent on defining people into his way of thinking, so what other results would you expect?
 
I did ask earlier if I was understanding, but we haven't heard from Ian for a while. I don't know if I agree or not until I know more.

:)
 
Interesting Ian said:
I am making no assumptions about the origin of mental causation, I simply want to know if people believe it exists.

Thus why do I drink a glass of pineapple squash? I might be very thirsty. But that doesn't make me drink, even though the fact that I will get a drink might be 100% predictable. Thus if I'm incredibly thirsty this will have such a huge influence on my behaviour that I am bound to get a drink (although this need not be the only scenario where my behaviour could in principle be 100% predictable). But the thirstiness in itself does not compel me to drink. It's my decision which does that. Nor does the thirstiness compel my decision, although it may render my decision as 100% predictable.

So that's what I'm asking.

Joseph Levine (Professor of Philosophy, Ohio State University) has an interesting view on the problem of mental causation, namely that materialists should probably be content with a 'weaker' degree of mental causation than they would want. I think it is worth posting (learning purposes only of course!) a relevant excerpt on the topic from his book 'Purple Haze' (for those who have an account it can be found here)

quote:


1.5 More on Causal Relevance

I have argued so far that materialism, as embodied in thesis M, is required if we are to make sense of the causal efficacy of the mental. However, Jaegwon Kim has forcefully argued that if we accept something like thesis M, we lose the right to attribute causal efficacy to the mental. As he puts it in a recent work, "If mind-body supervenience fails, mental causation is unintelligible; if it holds, mental causation is again unintelligible. Hence mental causation is unintelligible. That then is. . . Descartes's revenge against the physicalists" (1998, 46).

We have already investigated the first horn of the dilemma. But why accept the second one? Why think that if mind-body supervenience holds then mental causation is unintelligible? The basic argument is what Kim calls the "causal exclusion" argument, and it goes like this. Consider again the pain's causing my hand to withdraw from the fire. My instantiating the mental property, being in pain, is supposed to be causally relevant to the subsequent motion of my hand. We know that a certain brain state, call it B, set in motion the nerve impulses which ultimately moved the muscles in my hand.

My instantiating B was clearly causally relevant. B also realizes the pain. It's supposed to be because the pain is realized in B, which causes my hand to move, that we get to say that the pain caused my hand to move. However, from the description we just gave, it seems that my (or my brain's) instantiating pain adds nothing to the causal power relevant to producing a hand motion. All the causal work is done by the neurological property B. So it looks as if being physically realized can't help to secure causal efficacy for the mental.

One way out of this predicament is to identify the mental property with its realizer. If we say pain isn't just realized in state B, but is identical to state B, then of course there is no problem about the pain's being the cause of the hand motion. However, as we saw above, identifying pain with state B is inconsistent with the claim that pain can be realized in different ways, as in Martians or robots. So a straightforward identification of pain with a brain state seems wrong.
If one wanted to secure the causal efficacy of the mental through an identity theory, and one also wanted to allow for the possibility of multiple realization, one could of course just identify the mental property with the disjunction of its realizers.

Pain may not be identical to B, or to C, or to D, but it may be identical to (B or C or D). This is a move Kim (1993, 210) is sympathetic to. So long as we allow all metaphysically possible realizers into the disjunction, the mental property and its correlated disjunction will be necessarily coextensive. True, the disjunction will (most likely) be infinitely long, but it's not clear that this should matter. Being infinitely long prohibits a representation from being entertained by a finite mind, but it's not clear that being describable by an infinitely long representation does anything to undermine a property's metaphysical status. After all, if it is identical to the mental property, then we have the short, finite term "pain" (or whatever mental term is in question) by which to refer to the property.
Another possibility for securing the causal efficacy of the mental is to employ the notion of a "trope." Tropes are token instantiations of properties, like the redness in the diskette case, or this particular instance of pain.28 We can then argue as follows. Pain, as a universal, is multiply realizable.

Each instance of pain, however—each of its tropes—is identical to some trope of the relevant property universal that in that instance serves as the realizer. So this trope of pain is identical to this trope of neurological property B. To say that the property of pain is causally efficacious is just to say that its tropes are, which, since they are identical to tropes of physical properties, they will be.29

It's possible that either the disjunctive identity or the trope identity move will work, and thereby secure the causal efficacy of the mental. But I am not happy with them. For one thing, I just don't believe that mental properties are identical to disjunctions of their realizers. In order to refute the identity claim I would need a well-worked-out theory of property identity, which I don't have. It's not easy to say what, over and above necessary coextensivity, is required for property identity. It wouldn't be so difficult if one went all the way and endorsed the view that for each predicate there is a distinct property. However, as I mentioned at the beginning of this chapter, I am a property realist and don't think this principle is consistent with a robust property realism. So what criterion of property identity is consistent with some pairs of distinct but necessarily coextensive predicates picking out the same property and others picking out distinct properties? I think there must be one, but I can't say what it is.

Still, despite lacking a formulation of the requisite criterion, I think there are considerations that one can bring to bear on the question. It seems to me that there is a strong analogy between the relation of realized properties and their realizers, on the one hand, and properties in general and the objects that instantiate them, on the other. Corresponding to each property is the set of individuals across all possible worlds that instantiate it. Some would identify the property with that set, but this isn't really a version of property realism. To be a property realist is to endorse the view that there is something, the property, that all the members of this set have in common, and by virtue of which they are gathered together into this set. If this much is correct, it would seem perverse to say that we can identify the property in question with the disjunctive property of being this member of the set or that member of the set, or so on ad infinitum. The property isn't merely being this or that; rather, it's what this and that have in common.

If this is convincing when discussing properties and the individuals that instantiate them, I think the same considerations apply to properties and their realizers. There are many ways to realize a pain, but in each case what grounds the inclusion of a realizer in the set of realizers is the fact that it is realizing pain. Being a pain is what binds all the realizers together, and therefore isn't merely reducible to being one or the other of the realizers.

But even if one did identify pains and other mental properties with disjunctions of their realizers, it's not at all clear one has overcome the causal exclusion argument. The neurophysiological property B is clearly distinct from the huge disjunctive property of which it is a disjunctive component. To explain what caused my hand to move, appeal to B seems to be sufficient. So what do we need the disjunctive property for? It doesn't seem to do any work. If pain is identical to a disjunction of its realizers, then, it still appears to be out of the causal loop.

As for trope identity, I don't think it really solves the problem. A trope is a kind of particular. As such, it can partake of many universals. So this trope of pain is also a trope of neurological property B. Now it's supposed to be the case that the pain trope derives its causal efficacy from the fact that it is identical to a B trope. But it still seems as if the original question remains. By virtue of being a trope of which property does it cause the hand to move? The causal exclusion argument seems to force us to say it's by virtue of being a trope of property B. We still lack a way to bring the property of being a pain into the causal picture.

I don't take the considerations just adduced to be definitive. Perhaps the disjunctive or trope move can be made to work. I don't find them promising, however. If they don't work, then how do we secure the causal efficacy of the mental? The answer I favor includes two elements. First, we have to be satisfied with perhaps a lesser grade of causal efficacy than we might want. There is no way around it. If materialism is true, then all causal efficacy is constituted ultimately by the basic physical properties. No other property can play this role. So if by "causal efficacy" one means the kind of role that, according to materialism, only basic physical properties can play—and I won't deny that one can plausibly use the phrase that way—then of course it will turn out that mental properties, along with all other non-basic physical properties, are not causally efficacious. But so long as we also recognize another sense of "causal efficacy," a sense that applies not only by virtue of being the ultimate ground of all causal transactions, then there will be a sense in which mental properties are causally efficacious.

When we say that believing it's going to rain and wanting to stay dry cause one to take an umbrella, I don't think we intend that this is a case of basic causation, obtaining without realizing mechanisms. Rather, what makes it a genuine case of causation is the fact that there is a lawful regularity that holds between beliefs and desires with certain contents and behaviors of the relevant kinds.30

True, there are lower-level physical mechanisms that sustain the regularity, but this doesn't itself take away from the regularity's status as a lawful regularity. It supports counterfactuals, is confirmed by instances, and, I believe, grounds singular causal claims. I want to make two points about the (lawful)31 regularity view. First, part of what supports the regularity view of causal efficacy is the belief that what we really care about when making causal claims is providing explanations and affording control over the relevant phenomena. If I want to know why you took an umbrella, then it is arguably a much better explanation to be told that you believed it was going to rain than to be told what brain state you occupied. There is a rational relation between the antecedent state and the behavior that is manifest on the one causal account—the belief/desire account—that is invisible on the other.

Similarly, if I want you to take your umbrella, I'm pretty confident I can get you to do it by telling you it's raining.
My second point goes back to the question of whether to identify the mental property with the disjunction of its realizers. It seems to me that in order to make the regularity view plausible, we have to deny the identity of pain with the disjunction of its realizers. The reason is this.32 Regularities, in one sense, are extremely cheap. Consider a set of arbitrarily chosen event pairs (c_n, e_n), where c_n is the cause of e_n. Now we can construct a pair of "properties," C and E, such that C is the property had by all and only events c_i, and E is the property had by all and only events e_i (i ≤ n). Now we have guaranteed that the generalization "C's cause E's" is true. But do we really want to say that it's being a C that's responsible for some event's bringing about an E? To say this seems to trivialize causal relevance.

However, it is plausible, especially given that our interest in attributing causal relevance is so closely related to explanation and control, that we restrict the sorts of regularities we allow to ground claims of causal relevance to those that are not indefinite and arbitrary in the way that our trumped-up C and E properties were. I don't know whether the appropriate criterion can be rigorously formulated. I will not try to do it here. But for this sort of move to make any sense we must at least distinguish the properties we do care about, like pains and beliefs, from their correlated disjunctive properties. For if we don't, then we can't distinguish the privileged class of regularities from the trumped-up ones, since all non-basic regularities will involve disjunctive properties.33

The regularity view may not give us all that we want, intuitively, by way of mental causation, but it is all that materialism allows. Is it enough? I think so, but I will not attempt to provide any further defense here.34
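To make Levine's "trumped-up" C and E construction concrete, here is a minimal sketch of my own (not from the book; the event names and sets are invented purely for illustration). It shows how "properties" defined by brute enumeration from arbitrarily chosen cause/effect pairs make the generalization "C's cause E's" true by construction, which is exactly why such gerrymandered regularities shouldn't ground claims of causal relevance.

```python
# Arbitrarily chosen (cause, effect) event pairs -- nothing unites them.
pairs = [
    ("match struck", "flame appears"),
    ("glass dropped", "glass shatters"),
    ("button pressed", "bell rings"),
]

# "Property" C: being one of the chosen causes.
# "Property" E: being one of the chosen effects.
# Both are just disjunctions, i.e. set membership.
C = {cause for cause, _ in pairs}
E = {effect for _, effect in pairs}

def is_C(event): return event in C
def is_E(event): return event in E

# The generalization "C's cause E's" is guaranteed by how C and E were built:
# every pair whose cause is a C has an effect that is an E.
assert all(is_E(effect) for cause, effect in pairs if is_C(cause))
print("Trivially true 'regularity':", sorted(C), "->", sorted(E))
```

The point of the toy example is only that the regularity costs nothing to manufacture, so the regularities that matter for causal relevance have to be distinguished from these trumped-up, disjunctive ones.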
 
Ian (hi again, Ian!) seems somewhat reluctant to accept that any sort of thought has a physical and finite form and exists somewhere in the structure of the brain. It's perhaps easier to think of memories like that, as molecules or electrical charges sitting in the crevices of neurons, until some external stimulus sends a signal through to retrieve and reprocess them.

But can a purely internal stimulus, an 'intention' that had no previous existence except perhaps as a bunch of free-floating charged molecules, spontaneously assemble itself and retrieve and reprocess memories 'at will', or initiate activity in motor neurons that, but for the 'intention', would not have moved the little finger (or whatever)?

I answer this by way of a further question: is what we sense as an intention, merely an afterthought - a consequence, no less than memories or the movement of the little finger, of electrochemical events that would happen regardless of how we sense the causal stream of events? Some brain research appears to suggest that this is indeed what occurs - that we sense (are conscious of) intention a split second after the control signals have been sent out, not before.

Nevertheless, once an ‘intention’ has arisen, it becomes itself a potential active agent in the causal network of neurological events that, in a mechanical sense, direct our thoughts and actions.

Therefore, Yes (because intentions have causal influence) and No (because intentions are only the afterglow of things that happen regardless of what we think we want to happen).

I think. Or maybe. Or this is just too frustrating to keep thinking about, even if I can't help myself.
 
A feeling of intention is just another kind of thought. It arises from a current action and goes on to influence subsequent actions. Why did evolution bother? Because a "sense of the self having will" is a useful component of planning for self-protection and survival.

Intention is both a by-product and a participant.

~~ Paul
 
Pangloss said:
Ian (hi again, Ian!) seems somewhat reluctant to accept that any sort of thought has a physical and finite form and exists somewhere in the structure of the brain. It's perhaps easier to think of memories like that, as molecules or electrical charges sitting in the crevices of neurons, until some external stimulus sends a signal through to retrieve and reprocess them.

But can a purely internal stimulus, an 'intention' that had no previous existence except perhaps as a bunch of free-floating charged molecules, spontaneously assemble itself and retrieve and reprocess memories 'at will', or initiate activity in motor neurons that, but for the 'intention', would not have moved the little finger (or whatever)?
Metacristi's article made reference to a discussion (not shown) having to do with pain and a robot's "mind". The mention of this did make me think of a robot whose reactions and behaviors mimicked those of a human. Certainly such a creation could be talked about as behaving according to "mental directives" such that it would pull its "hand" out of a fire when a hand sensor registered high temperature, but we would attribute all activity to mechanistic cause, wouldn't we?
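Just to make the "mechanistic cause" point concrete, here's a minimal sketch of my own (the threshold and sensor readings are invented for illustration) of such a reflex loop. Everything the robot "decides" reduces to a comparison against a threshold, with no inner intention anywhere in the loop:

```python
# Hypothetical damage threshold for the hand sensor, in degrees Celsius.
WITHDRAW_THRESHOLD_C = 60.0

def reflex_controller(sensor_temp_c: float) -> str:
    """Map a sensor reading directly to a motor command:
    pure stimulus -> response, nothing 'mental' in between."""
    if sensor_temp_c > WITHDRAW_THRESHOLD_C:
        return "withdraw hand"
    return "hold position"

# Simulated stream of sensor readings as the hand drifts over a fire.
for reading in (22.0, 35.0, 58.0, 75.0):
    print(f"{reading:5.1f} C -> {reflex_controller(reading)}")
```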

Pangloss, I ran across this short (1 page) article about a different type of quantum computer being built. It seems to be able (in theory) to spontaneously assemble answers to complex problems. I'm not promoting it as a model for brain activity, only as another case for mechanistic models being able to reproduce activities of a brain/mind.
 
Mercutio said:
...if you are saying that "consciousness" is an absolute necessity for me to be able to make such a statement, then why is it so difficult for you to actually define it?
Who turned off the lights! ...
 
Pangloss said:
But can a purely internal stimulus, an 'intention' that had no previous existence except perhaps as a bunch of free-floating charged molecules, spontaneously assemble itself and retrieve and reprocess memories 'at will', or initiate activity in motor neurons that, but for the 'intention', would not have moved the little finger (or whatever)?

I answer this by way of a further question: is what we sense as an intention, merely an afterthought - a consequence, no less than memories or the movement of the little finger, of electrochemical events that would happen regardless of how we sense the causal stream of events? Some brain research appears to suggest that this is indeed what occurs - that we sense (are conscious of) intention a split second after the control signals have been sent out, not before.
Doesn't the microprocessor rely on what's stored in memory before determining how to react to a new stimulus? Why shouldn't that "lag" be there? People don't automatically leap into action without first "processing" what it is they're looking at.
 
Iacchus said:
... Why shouldn't that "lag" be there? People don't automatically leap into action without first "processing" what it is they're looking at.
You missed his point. He's saying people often do leap into action and the conscious awareness of the situation lags the action.

Like pulling your hand off the stove and then saying "That's hot!"

It's not saying there's no mental causation, but it does suggest that there are actions the brain takes with its voluntary muscles without being told to do so consciously. Sort of like how it regulates its involuntary systems like heartbeat and breathing.

The odd thing to me is that if the conscious mind has been darkened - that is when we're knocked out - the hand that falls on the stove doesn't mind being burned crispy.

Mechanistically speaking, the sensor pathways seem to go through the conscious mind. If it's "off," the senses cannot inform the part of the brain that might react. Otherwise it would react and inform the conscious mind of its action once it came back.
 
Atlas said:
You missed his point. He's saying people often do leap into action and the conscious awareness of the situation lags the action.
Well, what I'm saying is everything is sort of like on "instant replay," and our intent wells up from those things which have happened previously ... subconsciously in other words. In fact I see myself doing this all the time, almost as if a "precursory" signal was being sent out, and quite often I will override my intent and not "follow through." And, while I agree much of this is "reflexive," that isn't to say we don't participate in things consciously, and that we don't engage our will, otherwise we wouldn't have a mental record of it.
 
