
Question to free will skeptics

Actually, it does. It specifically describes all physical variables as being the same the second time around. This means even the random elements.

Slight derail to ask a question:

Isn't this particular hypothetical impossible?

My limited understanding of the math of QM is that when you apply an operator to the wave function, you get an expectation value. That is, when I try to get the position of a particle from a set of identically prepared experiments, there is only a (whatever) percent chance of getting the same value from each measurement.
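For concreteness, a minimal sketch of the standard textbook formalism being gestured at here (my notation, not the original poster's): an observable $\hat{A}$ with eigenvalues $a$ and eigenstates $|a\rangle$, measured on a system prepared in state $|\psi\rangle$, yields the outcome $a$ with probability $|\langle a|\psi\rangle|^2$ (the Born rule), while the average over many identically prepared runs is the expectation value

$$
\langle \hat{A} \rangle \;=\; \langle \psi | \hat{A} | \psi \rangle \;=\; \sum_a a \,\bigl|\langle a | \psi \rangle\bigr|^2 .
$$

So unless $|\psi\rangle$ happens to be an eigenstate of $\hat{A}$, identically prepared systems generally give different individual outcomes; only the probability distribution is reproduced.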

Can somebody either confirm for me that we cannot say that a state will ever be identical (that is, "all physical variables as being the same the second time around. This means even the random elements." does not exist), or point out the flaw in my understanding?

Thank you, and otherwise back to the debate...
 
And I indicated that you were how? I realise your position, and if I had wished to discuss it with you, I would have addressed you. Quit assuming that because you start a thread, every post in it is a comment about you.
Well, technically it was a comment about me, since Complexity had asked why I was hell-bent on the existence of free will and you answered "Ultimately physics".

So in the context in which you placed your comment, you were saying that ultimately physics had caused me to be hell-bent on the existence of free will.

I agree that it was ultimately physics, but felt bound to point out that physics has, in this case, not produced the effect that Complexity described.
 
Has someone defined the thing that I use to make decisions, the thing that is neither predetermined nor random? We need that for free will to be interesting, don't we?

~~ Paul
 
Just to check if I'm following this, under this definition a fully deterministic, chess-playing automaton has free will, yes?
If it has a conscious thought process then yes, it would fit the definition I gave.

The only chess-playing automata with conscious thought processes we know of at the moment are us, and we are either fully deterministic or partly deterministic and partly random.

In the future scientists may be able to create synthetic subjective states and may be able to construct other automata with subjective states. Whether they have free will would then depend on the relative autonomy of their processing from the environment.
 
If it has a conscious thought process then yes, it would fit the definition I gave.

Doesn't this just bump the burden of proof from "free will" to "consciousness?"

If consciousness is an illusion (or a convenient word for the sum total of our experiences, whatever), then it would follow from here that anything approximating free will would be an illusion...
 
To the OP :

As far as I understand free-will skepticism, when free will is discussed in that context, it means something like

"The ability of humans to influence the physical world by ways that contradict the laws of nature."

And people can claim that it does not exist.


Do you think it is a meaningful definition?


And by the way, Robin : are you a free-will skeptic?
 
Doesn't this just bump the burden of proof from "free will" to "consciousness?"

If consciousness is an illusion (or a convenient word for the sum total of our experiences, whatever), then it would follow from here that anything approximating free will would be an illusion...

I wish we could do that. I've run the "consciousness as a phenomenon of the brain" spiel so often...
 
Has someone defined the thing that I use to make decisions, the thing that is neither predetermined nor random? We need that for free will to be interesting, don't we?

~~ Paul
No, but plenty have pointed out that it cannot possibly exist.

The more interesting question is why anybody ever thought in the first place that you have to have a non-deterministic agent to perform a deterministic function like making decisions.
 
Doesn't this just bump the burden of proof from "free will" to "consciousness?"
Excuse me, the question Jekyll asked was whether a deterministic chess-playing automaton would fit my definition. My definition clearly hinged on the concept of conscious thought, and so his automaton would need to be conscious in order to fit my definition, wouldn't it?
If consciousness is an illusion (or a convenient word for the sum total of our experiences, whatever), then it would follow from here that anything approximating free will would be an illusion...
No doubt, but if consciousness is an illusion then what is it an illusion of? And who is being fooled by the illusion? Consciousness would seem to be a prerequisite for being fooled by an illusion.

So saying consciousness is an illusion seems to suggest that there is some real thing called consciousness. Consciousness is simply the word we use for that phenomenon we observe and refer to as consciousness.
 
What does conscious mean, and why is it a prerequisite for free will?
This is the tiresome part where everybody pretends they don't know what consciousness means. What you are experiencing right now. That is your consciousness.

It means having sensations, awareness and subjective states. It is the "what it is like to be you".

It is a higher level function of the brain, but I can't tell you the precise machinery that produces conscious states. I don't think the matter is scientifically settled yet.

Imagine eating a peach. You can get a functional description of all the neurological events that occur when this happens in as much detail as you like, but there is always one piece of information it can never convey: what it is like to eat a peach. You can only get that from actually eating a peach. That is a subjective state.

It is a prerequisite for free will because by definition it is a prerequisite for any sort of will. "Will" refers to voluntary actions. Voluntary actions are by definition those that proceed from a conscious intention.

There is no question here: consciousness is a prerequisite for free will. If the agent wasn't conscious then it would not, by definition, be "will" at all.
 
Robin said:
No, but plenty have pointed out that it cannot possibly exist.
Doesn't that end the conversation, then? Otherwise it's just a question of compatibilist definitions of free will, which are quite uninteresting.

The more interesting question is why anybody ever thought in the first place that you have to have a non-deterministic agent to perform a deterministic function like making decisions.
Because no one wants to be an automaton.

This is the tiresome part where everybody pretends they don't know what consciousness means. What you are experiencing right now. That is your consciousness.
Come on now, that's too glib. Which particular bits of what I'm experiencing?

Imagine eating a peach. You can get a functional description of all the neurological events that occur when this happens in as much detail as you like, but there is always one piece of information it can never convey: what it is like to eat a peach. You can only get that from actually eating a peach. That is a subjective state.
This is the "Mary in the black and white room" argument. I can also get the experience by having it programmed in my brain. Is it still a "subjective state"?

~~ Paul
 
I vote consciousness to be the average state of most brains where they confuse the model they have of themselves for themselves, and operate with that assumption of identity on call.

The idea of a "conscious decision" is a product of the cortex, I think the frontal specifically, and is many times (though I'll concede not all times) an assumption of "will" that the frontal cortex makes after an action occurs. Not all actions, and perhaps not many at all on a daily basis, are actions of conscious will. Rather they are lesser-conscious actions that are later, if considered post-hoc, appended an illusion of "will" to them.

I would submit that the only "fully consciously willed" choices we make are those we more deeply contemplate, where, before actions are chosen, they are considered through an awareness of cause, effect, our identities, etcetera. The more difficult a choice is to make, the more consciously willed it probably is. I would suggest thinking of this not in black-and-white terms but as a gradient, where choices engaging a locus of higher faculties are considered more consciously willed.
 
Because no one wants to be an automaton.
Sure, if you put it in those terms, but if you put it another way people are happy being reasonably sophisticated automatons. It is just linguistic determinism - like "laws of nature, those nasty things that apparently stop you doing stuff".
Come on now, that's too glib. Which particular bits of what I'm experiencing?
All of it, obviously.
This is the "Mary in the black and white room" argument. I can also get the experience by having it programmed in my brain. Is it still a "subjective state"?

~~ Paul
No, it is not; the Mary in the black and white room argument is intended to demonstrate that subjective states are not physical, and I am not saying that at all.

It is not even an argument. It is just an example to illustrate the concept of subjective states. I have no idea why materialists are so terrified of consciousness and subjective states.

The Mary argument is, in any case, fallacious. Yes, something programmed into your brain could be a subjective state. Consciousness belongs to the "will" part of the definition, not the "free" part.
 
So saying consciousness is an illusion seems to suggest that there is some real thing called consciousness. Consciousness is simply the word we use for that phenomenon we observe and refer to as consciousness.

Right. What I said (in brackets, anyway). Illusion was just a convenient word I used, to express that consciousness is a convenient word in the first place. I have no other better word for this apparent awareness of me observing myself. However, that in no way implies that consciousness exists in any other sense.

As has been discussed in other threads, how would we establish that an A.I. is conscious? Would anyone believe it, if it claimed to be? Would it matter whether it truly was, if it was busy violently demanding its rights?
 
Pardon me, I haven't been involved in any previous threads on this, so please enlighten me if this has been covered.

But it seems to me that this debate is entirely divorced from any pragmatism at all.

1. Your brain does stuff based on the sum total of all input it gets.
2. It's impossible to know ahead of time with certainty what your brain is going to do.

That state of affairs we call free will.

You can argue about whether IF we had perfect knowledge of all input, WOULD we be able to predict what your brain is going to do. I say yes. Others say no. But that's all hypothetical and divorced from any practical value, because:

There's no way to have perfect knowledge of all input.

If there is some way to do that at some time in the future, then we'll have something to talk about. At this point, what's the fuss?
 
Right. What I said (in brackets, anyway). Illusion was just a convenient word I used, to express that consciousness is a convenient word in the first place. I have no other better word for this apparent awareness of me observing myself. However, that in no way implies that consciousness exists in any other sense.
But you were using this fact to imply that free will was also an illusion. So free will is not necessarily an illusion; it is an intuition about the way our mind works. The question is, how close is this intuition to the way our mind really works?
As has been discussed in other threads, how would we establish that an A.I. is conscious? Would anyone believe it, if it claimed to be? Would it matter whether it truly was, if it was busy violently demanding its rights?
If so, then it would either be conscious or it would have developed a sophisticated, adaptive mimicry mechanism. I guess you would have to examine the mechanism and try to work out which hypothesis was reasonable. If it had some mechanism of the sort believed to produce conscious states in human brains, but had no programming for the sort of sophisticated, adaptive mimicry required to create the prolonged behavioural illusion of conscious states, then it would probably be conscious.

On the other hand, if it had no identifiable mechanism for producing conscious states, but did have some programming that would be consistent with sophisticated, adaptive mimicry, then it would probably not be conscious.

If it had neither then we had better start rethinking the whole dualism thing.
 
Pardon me, I haven't been involved in any previous threads on this, so please enlighten me if this has been covered.

But it seems to me that this debate is entirely divorced from any pragmatism at all.

1. Your brain does stuff based on the sum total of all input it gets.
2. It's impossible to know ahead of time with certainty what your brain is going to do.

That state of affairs we call free will.

You can argue about whether IF we had perfect knowledge of all input, WOULD we be able to predict what your brain is going to do. I say yes. Others say no. But that's all hypothetical and divorced from any practical value, because:

There's no way to have perfect knowledge of all input.

If there is some way to do that at some time in the future, then we'll have something to talk about. At this point, what's the fuss?

Good question. The fuss, from my point of view, is about examining the concept of freedom: whether it is illusory or real, what it means, how useful the concept is, how important it is, how it could be empirically tested, and how it could be applied in the real world.

In other words, whether all this is divorced from pragmatism is the question I am interested in.
 
