Bri said:
I haven't read anything that would indicate that free will occurs in degrees (that some things are more free than others). In fact, I would argue that even if you could find only one single case where anything met the two models of free will within a deterministic environment, you would have proven compatibilism.
Dennett's position is clearly that free will occurs in degrees. I am not certain whether my position is completely identical with his; I remember having some issues with his position because it seemed too instrumentalist to me.
Instrumentalism would mean something like this: whether heliocentrism is true or not, we mere mortals can never know, since it will always be impossible to fly up and see for ourselves, God having given the ability to fly only to birds, insects and bats. We can nevertheless concede that no other model beats heliocentrism when it comes to making predictions about the future positions of the planets in the sky.
In a similar vein, Dennett seems to say: although we know nothing about a libertarian free will, the notion of free will is useful for predicting the behavior of our fellow human deterministic machines. So all this talk about "choice", "will", "intention" and so forth is justified. It can, and, for practical reasons, must be preserved.
When I was reading Dennett, I was more of a follower of Churchland, who suggests abandoning folk psychology and its concepts and terms: throwing away the notion of free will and similar concepts, and replacing them with concepts from advanced neuroscience.
My current position is: certain things show a property that we may call "free will". This free will is not the libertarian free will, since determinism (modulo QM) is true. But it is nevertheless the property that justifies our feeling of having a free will, our concept of responsibility, and so on. And this property comes in degrees; it is not a binary property that you either have or you don't.
If I am able to show that the usual concept of free will is one of degree, then I grant I still haven't proved compatibilism. You might still argue that this common free will is not the real deal. But I will have shown (i) that your position is in trouble, since the common notion of free will seems to be incongruent with libertarian free will, and (ii) that I am justified in calling my concept "free will".
There are computer programs that can alter their own programming by substituting functions for other functions, either randomly or based on some criteria.
Basically, I would say, they are just programs receiving input, aren't they? All programs that need to make choices must be confronted with some kind of input.
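To make concrete what such a program might look like, here is a toy sketch in Python (my own invention, not Sims's program or any real system): it "alters its own programming" by swapping one of its functions for another, either randomly or based on a feedback criterion, and everything it does remains a function of its input.

```python
import random

def polite(name):
    return "Hello, " + name + "!"

def rude(name):
    return "What do you want, " + name + "?"

# the program's current "behavior" is just a slot holding a function
behavior = {"greet": polite}

def mutate_randomly():
    # substitute one function for another at random
    behavior["greet"] = random.choice([polite, rude])

def mutate_by_criterion(feedback):
    # substitute based on a criterion: negative feedback flips the function
    if feedback < 0:
        behavior["greet"] = rude if behavior["greet"] is polite else polite

print(behavior["greet"]("Bri"))   # Hello, Bri!
mutate_by_criterion(feedback=-1)  # input arrives...
print(behavior["greet"]("Bri"))   # ...and the program has rewritten itself
```

The self-modification is real enough, but which function ends up in the slot is still entirely fixed by the input the program has received.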
One that I particularly like is written in Scheme, and produces pretty pictures.
http://www.genarts.com/karl/genetic-images.html
Indeed, pretty pictures, and once again thanks for the link. It's a shame that there are no bigger pictures, and I think no static picture can tell you what it's like to interact with the program.
However, the program isn't truly unpredictable according to either definition, because running it again with the same random seed would produce the same result as long as the user selected the same images each time. Therefore the results can be predicted with perfect accuracy by using an identical computer, both in theory and in practice.
Agreed; for really disturbing effects, you would need some "Blade Runner"-like scenario, where the amount of input given to the machine is of the same magnitude as the amount of input we humans receive during our lifetimes (if Sims's program is just confronted with a short sequence of choices, each one out of some small number n, the amount of input is quite modest).
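The seed point above can even be shown in a few lines of Python. The `run` function here is a made-up stand-in for the image-evolving loop, not Sims's actual code; the only claim is that once seed and user selections are fixed, the output is fixed too:

```python
import random

def run(seed, selections):
    # a toy stand-in for the image-evolving loop: it mixes seeded
    # randomness with the user's selections
    rng = random.Random(seed)
    return [rng.random() * s for s in selections]

# same seed, same selections: the very same result, every time
assert run(42, [1, 2, 3]) == run(42, [1, 2, 3])
# change the seed (or the selections) and the run differs
assert run(7, [1, 2, 3]) != run(42, [1, 2, 3])
```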
You give it the Voight-Kampff test, of course!
The premise of this scenario is that such a test wouldn't work.
Until artificial intelligence becomes a reality, a computer is simply obeying commands, wiring, and programming. It's no different than a blender or a microwave oven. It can be as complex as you want it to be, and it's still not going to have free will, even if humans do have free will. However, with enough complexity, it might become very difficult to tell it from a real person simply from its actions. But free will, even from an ethics standpoint, is a lot more than just actions.
What else would you do to determine whether or not it has a free will? Dissect it to see if there are wires inside?
I concede that there is a problem with my argument, since those androids don't exist outside of theory, and there are convincing arguments why it might be impossible to construct them in the foreseeable future. On the other hand, we already see such computers at work: our human brains. Is there any possibility, besides a metaphysical variant of Dualism, that would allow you to deny that human brains are some sort of computer?
If metaphysical Dualism (as opposed to the view that there is something besides matter only as an epiphenomenon of matter) is the only loophole, I would say that the evidence for this kind of Dualism is so underwhelming that I wouldn't bet on it.
Can the robot form intent? Can the robot be responsible for its actions, or is the programmer responsible? Answer: that depends. If the robot is "defective" then it's the programmer's responsibility (just like if the head of a hammer flies off because it wasn't attached correctly and hits someone on the head). If the robot is "misused" it is the user's responsibility (like a hammer swung at another person's head on purpose). It is never the hammer's responsibility.
My position, again, is that even the dumbest robot (like a thermostat) has some tiny, tiny little bit of intent, and that there is a continuous scale between the thermostat and us.
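To be concrete, here is the thermostat reduced to a few lines of Python. Whether one wants to call this "intent" is exactly the question at issue; the sketch only shows the mechanism the claim is about:

```python
def thermostat_step(current_temp, target=21.0, tolerance=0.5):
    # the thermostat's entire "mental life": a goal and a comparison
    if current_temp < target - tolerance:
        return "heat on"
    if current_temp > target + tolerance:
        return "heat off"
    return "idle"

print(thermostat_step(18.0))  # "heat on": it acts so as to approach its goal
```

On my view, everything further up the scale, all the way to us, differs from this loop in complexity, not in kind.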
Without some sort of true artificial intelligence (which many doubt is even possible), punishing the robot wouldn't do any good (it would do the same thing again under the same circumstances). It also wouldn't prevent other identical robots from doing the exact same thing under the same circumstances.
If the punishment is part of their input (how else would it be possible to punish them?), it could be used to change their behavior. Of course other robots would have to have the opportunity to learn about the punishment of their comrade.
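As a sketch of what punishment-as-input could look like, here is a toy learner in Python (my own illustration, not a claim about any real robot): punishment is just a negative reward signal, and it demonstrably changes the robot's future choices.

```python
import random

actions = ["steal", "work"]
value = {"steal": 0.0, "work": 0.0}  # learned preference for each action

def choose():
    # pick the currently most preferred action, breaking ties at random
    best = max(value.values())
    return random.choice([a for a in actions if value[a] == best])

def receive(action, reward):
    # punishment is simply a negative reward delivered as input
    value[action] += 0.5 * (reward - value[action])

receive("steal", -1.0)  # the robot is "punished" for stealing
print(choose())         # from now on it prefers "work"
```

An onlooking robot could apply the same update to its own table after merely observing its comrade's punishment; that is all the deterrence amounts to here.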
The answer is that I don't know. Do we know where such concepts as "consciousness" kick in? Does a bee, a chimpanzee, or a dog possess consciousness? I just don't know.
I think that consciousness sneaked in gradually. I find the image of an animal with zero consciousness giving birth to an animal with full consciousness quite absurd. Similarly, I would expect free will to sneak in. Therefore, I would say that different animals have this ability to different degrees (it is obviously closely tied to consciousness). It might be that there is a large gap between a typical human and the closest other animal now; but historically speaking, I would expect to observe a smooth transition.
Oh, you are way over-thinking this! When the lady behind the counter asks if you want to "super-size" your fries, do you feel as though you are free to answer either way?
Straight answer: no.
I remember a waitress asking me if I would like some mayonnaise with my French fries, and I answered "no". Thinking about it later, I realized that I would have preferred to have the mayonnaise.
Another incident, this time with some moral consequences:
I was standing at a stop with some other people. I heard someone screaming and turned around: a huge dog was chasing a woman on the other side of the street, and she was obviously scared to death. I stood there for a few seconds, totally petrified, and kept watching, until finally a man crossed the street (there was no traffic, so this was not a problem), threatened the dog and shooed it away; later, he yelled at the owner of the dog, who had the nerve to casually appear.
What would have been the right thing for me to do? I should have done what this other guy did: tried to help the scared woman. I believe that this would have been the right thing to do. But that's not what I did.
Do I feel guilty? I think a more appropriate expression to describe what I felt (and, in retrospect, still feel) would be ashamed. I am ashamed that I am not the brave and helpful person I would like to be.
One might argue that I was innocent, since I didn't have a choice to act differently, since I was shocked and unable to react. Maybe; but that doesn't alter my feeling of shame one bit. It is still true that I am not the brave and helpful person I would like to be.
Similarly, I am proud of a lot of things for which I don't bear any responsibility. Many people are proud of their nationality, although they were born with it and had little choice about it (unlike, say, James Randi, who deliberately chose his nationality). That's not something very important to me (well, if your nation has killed six million Jews just because they were Jews, the incentive to be proud of Goethe or Beckenbauer is small), but I find that I am proud of the works of my favorite artists. I am even proud that the anonymous woman who wrote the story of Prince Genji wrote such a fantastic book, although she died long before I was born.
So "could have done different" or "ultimate source within" are not necessarily what describes most accurately what I feel when I make a decision, or when I am ashamed or proud of something.
Ahhh...but you are acting as though they have free will then. All of those factors (hypnosis, drugs, enslavement by aliens) are things that might take away a person's free will in a given instance. In other words, the person no longer has a choice between multiple actions. So you don't hold them accountable.
I conceded that I would be angry, and I conceded that I would like to see some action taken, like putting them in jail. Why do you think that I therefore think some people are accountable while others are not? That doesn't follow.
Indeed, I do think some people are accountable while others are not. But once again, this concept of accountability is not based on libertarian free will. It is based on my "continuous free will", and as a consequence accountability itself becomes a matter of degree.
But with determinism, the person has no choice between multiple actions ever. There is only one possible action that they could take, regardless of the circumstances. So in "real life" you don't consider other people as though their actions are determined.
As I explained, I ask what determined their behavior: drugs, aliens, or just the firing of their neurons according to the laws of physics after years of education to become a responsible member of our society? Depending on what (deterministically) caused their behavior, I hold them more or less accountable.
Let me ask you this. If you wrong someone else (and let's say it's not due to aliens or drugs or hypnosis) would they be justified in being mad at you?
Since I have tried to explain why I am justified in being mad at other people although their behavior is ultimately determined, this question is already answered.
I think you may have inadvertently answered the previous question about whether you subjectively "feel" as though you have free will. If you don't feel that you have free will, then you wouldn't expect your friend to be mad at you when you stole her watch, because the choice wasn't yours -- it was made for you well before you were born.
I don't agree. If I had stolen your watch and explained to you that I don't feel as though I have free will, I guess you would still be mad at me.
I like that answer! But what exactly is your aim then, if it's not to punish the guilty and it's not to prevent crime?
As I explained in a previous post (if I remember correctly, my first post mentioning the judicial system; too lazy to look it up), one possible aim could be to maximize happiness.
That is not my aim, by the way (at least not the only one). I think an important point to observe is the narrowness of our knowledge. For me, liberalism is a consequence of the limits of our knowledge.
Therefore, even if I came to the conclusion that forcing one of my fellow humans to endure this or that treatment would probably increase his happiness, I would hesitate with my plans to make him happy, since I don't think it would be right for me to force his happiness down his throat; after all, what do I know?
As a consequence, I ask for limited interference. I think it is sufficient to put some people in jail to ensure a stable community (and the legal systems of most countries seem to be in accord with this view), so I don't think it would be justified to execute a criminal, even if we had some evidence that this might decrease the crime rate a bit: given the narrowness of our knowledge, such an argument about probable or likely effects is insufficient to justify such a drastic measure.
Now, wait! I never said anyone was in error.
No, you didn't, but that's not necessary since it is a consequence of what you did say.
Just because there is no scientific evidence of it doesn't mean it's necessarily wrong either.
And that's exactly the situation where "benefit of the doubt" kicks in.
If the majority of people thought too hard about it and came to just the conclusion that there is no free will, our justice system might be in trouble. Or at least the idea that we can't punish people who aren't ultimately responsible for their crimes might have to be rethought. I personally hope that the compatibilists are right, or that determinism is wrong. Or that we never find out the truth.
I think the latter alternative (reconsidering the concept of responsibility) is much more probable than the former (abandoning the justice system completely), so I don't think there is much to worry about. I think it is very unlikely that philosophy alone will have much of an impact on how the justice system works; maybe some terms and phrases will change.
The results of any discussions are themselves determined, so even if we came to the "correct" conclusion that determinism was true, that conclusion wouldn't actually be due to the fact that it really was true, but only that it was determined that we would come to that conclusion.
I don't see how this follows. Assume that the theory of evolution is true. As a consequence, there is abundant evidence for the theory of evolution. Now two machines emulate a discussion about evolution, weighing the evidence. It is determined that the outcome of this discussion will be that the theory of evolution should be accepted.
I would say that even if the end of the discussion was predetermined, it nevertheless tells us something about the truth of the theory of evolution.
Now replace "evolution" with "determinism".
If you buy that argument (and I'm still pondering it), you're kind of stuck between a rock and a hard place philosophically as to what you believe.
Shouldn't I try to favor what is favored by the evidence? Oh, yes, I see, I can't make a "choice" about what I favor, and it is predetermined what I will believe. So what?
Some people who believe in free will don't do those things because they believe they are going to hell.
You could build a machine that tries to avoid "pain" (of course, it is not real pain, since only we humans can feel that), give that machine the information that hell is "painful", and list some actions that would send the machine to hell. You could then predict that the machine will avoid the "sinful" actions.
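A toy version of such a machine, in Python (the "theology" table is of course invented input data, not a claim about anything real):

```python
PAIN = {"hell": 1000.0, "nowhere": 0.0}

# information given to the machine: which actions lead where
leads_to = {"lie": "hell", "steal": "hell", "work": "nowhere", "pray": "nowhere"}

def choose_action(options):
    # pick whatever minimizes the expected "pain"
    return min(options, key=lambda action: PAIN[leads_to[action]])

print(choose_action(["lie", "work"]))  # "work": the "sinful" action is avoided
```

Predicting that it avoids the "sinful" actions requires no assumption of free will, only its pain table and its action list.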
I don't see a connection between the concept of hell and the concept of a free will. Maybe you are thinking that God is supposed to be just, and free will is needed for a just punishment. But who told you that God is just?
Others do it because they believe they have a choice between good and bad, and they would rather be good.
And why should anybody prefer to be good, given a free will?
Some would argue that even people who claim to not believe in free will actually do believe in free will (which is also supported by the scientific evidence that we are "hardwired" to believe in free will).
The science you have quoted so far seems better suited to indicate that we don't have as much free will as we would like to think.
Given my concept of free will as a matter of degree, those findings are no threat to me.
Nonetheless, my understanding is that the Bible is pretty clear about us humans having free will.
Now I would reply that "the Bible" is too broad a term to discuss this seriously. I would be very surprised if the theological and philosophical concepts of the authors of the first chapters of Genesis and the theological concepts of Paul were completely the same (well, it would be less surprising if we assumed that the Bible is the word of God; but why should we?). I don't see how the story of Adam and Eve presupposes a free will.
The force might come from within us. Perhaps it is even a part of us. Maybe we're a part of it.
But it is not us, is it?
Or perhaps the outside force provides us the ability to make the choices ourselves (but the choices themselves originate from us).
How about the "ultimate source test" then?
Or maybe part of us exists in some other dimension where free will exists.
Is this other dimension deterministic?
Generally, the court makes a determination about the nature of the crime after considering whether the person is guilty or not. The nature of crime, not whether the person is "partially guilty," determines the punishment. That's why criminal cases are broken into two parts: establishing guilt, and sentencing.
You live in an Anglo-Saxon country, don't you? It seems to me you are confusing a peculiarity of your local judicial system with a universal trait of judicial systems. This splitting of trials into two parts doesn't exist in other traditions.
Yes, there can be extenuating circumstances that would make the difference between, say, a homicide and manslaughter. Those arguments are nearly always based on intent, which requires a person to have control of his or her actions. If a person killed someone by "accident" then it is manslaughter.
Once again, what seems to be your native language confuses "manslaughter" and "involuntary manslaughter". In my country, "murder" ("Mord") entails a "base motive" (an interlinear translation; I don't know the correct English term) and a minimal amount of planning. If those traits are lacking, the act is considered "manslaughter" ("Totschlag"). If it happens by accident, it is "involuntary manslaughter" ("fahrlässige Tötung").
A person who is insane and unable to form intent may be deemed "innocent by reason of insanity," and isn't punished for the crime (but is often "helped" by being put in a mental facility where they can be treated, and then generally released if and when they are cured). Strictly speaking, that person didn't have a choice as to whether or not to commit the crime in that circumstance (in effect, had no free will).
I am well aware that your theory of free will can handle my cases of person A and person C. That's no surprise. My argument is that there are intermediate cases.
Yes, and that is often controversial. I'm not a criminal scientist, so I'm not sure what the thought process is behind that. I believe that not having the life experience to know right from wrong is considered a mitigating factor, but I don't think it makes the child any "less guilty" but rather just guilty of a different sort of crime.
It seems to me that this "different sort of crime" construction is just some kind of verbal decoration to avoid having to admit that accountability is a concept of degree. After all, the actions that constitute the crime are the same, aren't they?