jan said:
Dennett's position is clearly a position that free will occurs in degrees.
Please forgive me for not replying sooner. I became busy, and I wanted to give your post the consideration it deserved.
I don't think Dennett's position is that free will occurs in degrees. Instead, he simply redefines free will. His position seems to be that any object that it is "useful" to consider an "intentional creature" should be considered to have free will. Basically, he ignores the definitions of "free will" that have been proposed, and along with them any lines that are commonly drawn between things that have free will and things that don't. In effect, he is saying that we should be allowed to say that anything we want has "free will" if it is useful to define it that way.
I don't see anywhere where Dennett claims that some things have more free will than others.
Incidentally, Dennett's logic could easily be used to justify the existence of God. According to Dennett's logic, as long as whatever definition of "God" you're using is "useful," then it is valid.
My current position is: certain things show a certain property that we may call "free will". This free will is not libertarian free will, since determinism (modulo QM) is true. But it is nevertheless the property that justifies our feeling of having free will, our concept of responsibility, and so on. And this property comes in different degrees; it is not a binary property that you either have or you don't.
OK, well unless I am misunderstanding, you are following Dennett's idea of simply redefining free will, but justifying this position by saying that your definition is just as useful as any other definition. If that's what you're saying, then you're conceding that Webster's definition of free will (what we're calling "libertarian free will") isn't compatible with determinism, but that some other (presumably equally useful) definition of free will might be. So, for the record, are you conceding my original proposition, that "libertarian free will" is incompatible with determinism?
Even if I am able to show that the usual concept of free will is one of degree, I guess I haven't proved compatibilism. You might still argue that this common free will is not the real deal. But I have shown i) that your position is in trouble, since the common notion of free will seems to be incongruent with libertarian free will, and ii) that I am justified in calling my concept "free will".
I don't follow your logic here, or how you have shown my position to be in trouble, since it is my position that my definition of free will (libertarian free will) is incompatible with determinism, and your position is that there are other (presumably equally "useful") definitions of "free will" that are compatible with determinism. These two positions seem to be in agreement, actually, as long as your position can be proven to be true.
That said, you and Dennett may actually have a point. Whether one is justified in calling a different concept "free will" is debatable, but for the sake of argument, let's say that IF you can come up with a definition of free will that is equally useful to the notion of "libertarian free will" in a particular context (for instance, ethics), then for that context you are justified in calling it "free will." To prove that another definition of free will is just as useful as the "forking paths" and "source" models of free will is quite a burden to overcome. On top of that, you need to show that your new definition is also compatible with determinism.
Whether or not your definition of free will is more "common" than "libertarian free will" is in no way an argument as to which is more useful, nor does it make your definition compatible with determinism. There are many "common" ideas that aren't very useful, and some that are downright wrong. That said, I don't think that your definition of free will is at all "common." I would say that the "forking paths" and "source" definitions are far more common and are based on thousands of years of debate on the matter (Dennett's ideas are relatively recent). Not to mention that libertarian free will forms the basis for Webster's definition of free will. In fact, the word "libertarian" means "one who believes in free will" according to Webster. More importantly, the common notion of "free will" is closely tied to the concepts of "could have done otherwise" and "ultimate source of one's actions."
The article refers to the definitions of free will and intent that Dennett uses as "folk psychological notions in the explanation of intentional action." I disagree with this characterization, because it seems to me that the "folk" notions of free will and intent are very different from Dennett's. For example, Dennett seems to argue that it is valid to consider a thermostat to literally "desire" the room to be a certain temperature, and to intentionally change its own behavior to achieve its desired result. That a thermostat actually desires or intends anything seems a little silly to me, and I wouldn't say that it is a "folk" notion to consider an inanimate object to have free will (unless by "folk" you mean "silly").
We certainly don't hold thermostats ethically responsible if they fail to keep the room a comfortable temperature. A thermostat certainly violates both the "forking paths" and the "source" models for free will, and I'm not sure what model Dennett is offering as a substitute in order to provide a basis for a system of ethics, and in order to distinguish between the type of "free will" exhibited by a thermostat and "free will" exhibited by people who are morally responsible for their actions.
If you still hold the opinion that some things can possess "more" free will than others, please give an example and a more concrete definition of what you mean by "free will" in that context. Also please explain why you consider that definition to be as "useful" as the common models of free will, and how you would use your definition of free will to distinguish between circumstances when we commonly hold a person responsible for their actions and those when we don't hold the person responsible.
Basically, I would say, they are just programs receiving input, aren't they? All programs that need to make choices must be confronted with some kind of input.
Computers receive input, process that input based on wiring and programming, and then produce some kind of output. There are no choices involved at all. Given the same wiring and programming (neither of which the computer is the "ultimate source" of) the same input (causes) will produce the same results (effects).
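To make that concrete, here's a minimal sketch (purely hypothetical; the thermostat stands in for any deterministic device, and all the names are made up):

```python
# A computer's "choice" is purely a function of its wiring/program and its
# input: identical causes always produce identical effects.

def thermostat(setpoint: float, room_temp: float) -> str:
    """A deterministic 'decision': same program + same input -> same output."""
    return "heat on" if room_temp < setpoint else "heat off"

# No matter how many times we ask, the "choice" never varies:
assert all(thermostat(20.0, 18.5) == "heat on" for _ in range(1000))
```

There is no point in that computation where anything like a "choice" could sneak in.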
Indeed, pretty pictures, and once again thanks for the link. It's a shame that there are not bigger pictures, and I think no static picture can tell you what it's like to interact with the program.
The interaction is very simple, in fact, and the setup can be seen at the top of the page that was previously referenced. Interaction was done through a row of monitors, each showing a variation generated by the computer. The user would then stand next to the monitor of the picture she liked best, and the computer would generate more pictures based on variations of the program that was used to generate the picture the user liked. Bigger pictures are here (see section 5). If you haven't already, check out the Evolving Creatures movie here, which is based on a similar premise of computer-based evolution, but is much cooler.
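For what it's worth, the select-and-vary loop described above can be sketched in a few lines (a toy sketch only: the actual system evolved image-generating programs, and every name below is made up for illustration):

```python
import random

def mutate(genome: list[float]) -> list[float]:
    """Produce a variation of the parent 'program' with small random changes."""
    return [g + random.gauss(0.0, 0.1) for g in genome]

def interactive_evolution(parent, pick_favorite, generations=10, monitors=8):
    """Each round: show one variation per monitor, let the viewer pick one,
    and breed the next generation from the pick."""
    for _ in range(generations):
        population = [mutate(parent) for _ in range(monitors)]
        parent = pick_favorite(population)  # the user standing at a monitor
    return parent

# A stand-in 'user' who always favors the brightest picture:
result = interactive_evolution([0.5, 0.5, 0.5], lambda pop: max(pop, key=sum))
```

The interesting part is that the fitness function is a human being; the computer supplies only the variation.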
What else would you do to determine whether or not it has a free will? Dissect it to see if there are wires inside?
You can't tell if someone is exercising free will only from their actions. Like I said, I don't know that our system of ethics is even equipped to consider the possibility that there is no libertarian free will, and therefore it assumes that there is. It then uses this assumption of free will to determine guilt. It is impossible to determine if someone is behaving in an ethical manner only from their actions. You have to consider their intent as well. In order to be guilty of a crime, you had to have had control over your actions (you could have done otherwise) and you had to be the ultimate source of your actions (you are responsible for the crime). In other words, you had to have committed the crime intentionally -- as an exercise of your own free will.
I concede that there is a problem with my argument, since those androids don't exist outside theory, and there are convincing arguments why it might be impossible to construct them in the foreseeable future. On the other hand, we already see those computers at work: our human brains. Is there any possibility besides a metaphysical variant of Dualism that would allow you to deny that human brains are some sort of computers?
Good question. There are only two reasons I can think of that we wouldn't be able to create artificially intelligent computers: that artificial intelligence simply requires too much complexity, and that there is something about the human brain that is more than the sum of its parts and cannot be duplicated. It's unlikely that the first reason will hold because if nothing else, computers are very good at dealing with (and helping to create) complexity. The second reason is pretty much dualism in a nutshell, although there may be variations on the theme. I don't know if this version of dualism is "metaphysical" or not, but I'm not sure I can think of any theories that couldn't be considered dualistic that would distinguish the human brain from the microchip inside your toaster that keeps your bread from burning.
If metaphysical Dualism (as opposed to the view that there is something besides matter, as an epiphenomenon of matter) is the only loophole, I would say that the evidence for this kind of Dualism is so underwhelming I wouldn't bet on it.
If our brains are dualistic in some way, that doesn't mean that the exact nature of this dualism can't someday be explained by science. However, as you pointed out, the scientific evidence for your brain being more than the sum of its parts is currently underwhelming to say the least. Perhaps there are other "loopholes" that might explain free will, but they would probably currently have equally little scientific evidence.
My position, again, is that even the dumbest robot (like a thermostat) has some tiny, tiny little bit of intent, and that there is a continuous scale between the thermostat and us.
Well, even a "tiny, tiny little bit of intent" is intent. It seems to me that you either have the capacity to form intent or you don't, and you're saying that a thermostat does. Therefore, a thermostat possesses free will by your definition of free will. If that's true, the burden of proof would be on you to demonstrate how this view is "useful" and thereby justifies calling it "free will." You would also have to show it to be compatible with determinism. In order to prove it useful, you would have to examine why we hold that people are capable of forming intent, but not thermostats, and provide a meaningful alternative that fits within both your definition of free will and a "real world" view of ethics. For your definition to be compatible with determinism, you would have to show that your definitions hold true even in a deterministic world where "could have done otherwise" and "ultimate source" are both nonexistent.
If the punishment is part of their input (how else would it be possible to punish them?), it could be used to change their behavior. Of course other robots would have to have the opportunity to learn about the punishment of their comrade.
Robots might be easy to "punish," especially if your definition of "punishment" is simply to change their behavior. We could simply change their programming. We could also change other robots' programming to prevent them from behaving the same way. Of course, this is different from how we punish human beings, because this sort of thing would be akin to a frontal lobotomy, and would be unethical. Punishing other people for the crime of one person would also be unethical.
If you're talking about punishing robots in the same way that we punish human beings (which I believe you are), it would likely have no effect at all unless the robots possessed true artificial intelligence (and we're not sure what that means exactly). I would argue that putting a robot in jail would have no more effect on its behavior once released than putting your microwave oven in jail would. Nor would jailing a robot affect the behavior of other identical robots any more than destroying a defective car would convince other identical cars not to exhibit the same defect.
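To illustrate the difference (a toy sketch, all names hypothetical): punishment can only change a robot's behavior if its program contains an update rule that reads the punishment as input.

```python
# 'Punishing' a robot works only if punishment is an input that its program
# actually learns from. All names here are made up for illustration.

action_weights = {"steal_battery": 1.0, "recharge_legally": 1.0}

def punish(action: str, penalty: float = 0.5) -> None:
    """Treat punishment as input: lower the weight of the punished action."""
    action_weights[action] = max(0.0, action_weights[action] - penalty)

punish("steal_battery")
# The punished action is now less preferred than the unpunished one:
assert action_weights["steal_battery"] < action_weights["recharge_legally"]

# A device with no such update rule (a microwave oven, say) is completely
# unaffected by jail time, because nothing in its program reads the punishment.
```

And note that this kind of "punishment" is exactly the reprogramming I described above, not anything like jail.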
Straight answer: no.
I remember a waitress asking me if I would like to have some mayonnaise with my French fries, and I answered "no". After thinking about it later, I realized that I would have preferred to have mayonnaise.
Oh, that's right, "you people" like mayonnaise on your french fries. Yuck! Obviously, if you had any free will you would have ordered ketchup instead!
OK, so in that one instance you felt as though you regretted your decision, but that doesn't mean that you weren't free to answer the waitress differently. Did you feel that you couldn't then decide to call the waitress over and explain that you changed your mind? I suppose you also felt this morning that whether you first put on your right shoe or the left one was decided for you millions of years ago too.
Another incident, this time with some moral consequences:
<story of not helping someone being attacked by a dog removed>
What would have been the right thing for me to do? I should have done what this other guy did: try to help the scared woman. I believe that this would have been the right thing to do. But that's not what I did.
Again, the fact that fear prevented you from making certain decisions in this instance doesn't mean that you don't feel that you have the ability to choose in other instances. Are you really saying that there is no instance in your entire life when you felt that you actually had the ability to choose between two actions?
Do I feel guilty? I think a more appropriate expression to describe what I felt (and, in retrospect, still feel) would be ashamed. I am ashamed that I am not the brave and helpful person I would like to be.
Most people would agree that it wasn't your fault, and that you're not morally or ethically responsible for doing something (or in this case not doing something) when you were prevented by fear. In other words, in that circumstance, you didn't have the freedom to choose (your choices were limited by fear). But that argument assumes that under other circumstances (where you do have a choice) you would be morally responsible for your actions. For example, in a similar situation in which there was no chance that you would have been hurt, but you chose not to act and simply watched a person being harmed because you didn't feel like helping, your choice not to act might not be considered moral or ethical.
If determinism is true, you couldn't have acted any other way in either circumstance, and therefore your action (the only one you could possibly have taken) should never be considered either moral or immoral. The fact that people's actions are deemed moral and immoral based on their ability to act indicates that people do believe in free will.
One might argue that I was innocent, since I didn't have a choice to act differently, being shocked and unable to react. Maybe; but that doesn't alter my feeling of shame one bit. It is still true that I am not the brave and helpful person I would like to be.
So do you think that if you hadn't been afraid of dogs, or if it wasn't a dog but something that you're not afraid of, you would have been able to do otherwise? If you don't feel that you can ever choose your actions, then your feeling of shame at not having helped the woman is completely irrational. I would argue that the fact that people feel shame at choosing what they consider to be an improper action, and don't consider it irrational, is evidence that people feel as though they have free will. If they felt that their actions were determined millions of years before they were born and that they couldn't have acted any other way, then it would be unlikely that they would feel shame because of it.
Similarly, I am proud of a lot of things for which I don't have any responsibility...
So "could have done otherwise" or "ultimate source within" do not necessarily describe most accurately what I feel when I make a decision, or when I am ashamed or proud of something.
That's a flawed argument. Granted, there is no choice involved in determining one's own nationality, heritage, etc., yet people do describe being "proud" or "ashamed" of these things. But that is only because you identify with your ancestors, and you are ashamed or proud of the actions you feel they chose to do! If you didn't feel that your ancestors had any choice but to act exactly as they did, then you wouldn't feel shame or pride over it!
I conceded that I would be angry, and I conceded that I would like to see some actions taken, like putting them in jail. Why do you think I think some people are accountable, while others are not? That doesn't follow.
The conversation to which I was responding was:
BRI: ...if someone wrongs you in some way, the natural reaction is to get angry at them, but if they truly have no choice but to do exactly as they did, is that reaction irrational?
JAN: For me, it boils down to: had they had a choice to do otherwise, in a rather modest interpretation. That is, I would ask if they had been hypnotized, drugged, enslaved by aliens, or something along those lines. If I found out that they hadn't had a choice to act differently because the neurons firing in their heads according to the laws of physics forced them to act as they did, I would still be angry.
BRI: Ahhh...but you are acting as though they have free will then. All of those factors (hypnosis, drugs, enslavement by aliens) are things that might take away a person's free will in a given instance. In other words, the person no longer has a choice between multiple actions. So you don't hold them accountable.
JAN: I conceded that I would be angry, and I conceded that I would like to see some actions taken, like putting them in jail. Why do you think I think some people are accountable, while others are not? That doesn't follow.
You stated that you would be angry at a person and expect them to be jailed only if the person had a choice to do otherwise, but you wouldn't be angry at a person or expect them to be jailed if the person didn't have a choice to do otherwise (due to being hypnotized, drugged, abducted by aliens, etc.). The action that the person took is the same in both cases; the only difference is the person's ability to do otherwise. Therefore, you hold the person who has a choice to do otherwise accountable, while you hold the person who doesn't have a choice to do otherwise unaccountable. The person who cannot do otherwise isn't responsible for their actions. Whether or not you get angry at someone, and whether or not you feel that they deserve to be punished, is entirely dependent on whether or not they could have done otherwise, which is one definition of libertarian free will (the "forking paths" model).
Your argument fails miserably in a deterministic world, though, because neither of the people could have done otherwise. You don't explain what the actual difference is between the person who is drugged and the person who is a slave to their neurons.
Indeed, I think some people are accountable, while others are not.
There, see? I was right!
But once again, this concept of accountability is not based on libertarian free will. It is based on my "continuous free will", which yields a concept of accountability that is a matter of degree.
Your concept of accountability as you described it in the above exchange is completely based on libertarian free will (in fact, it's the "could have done otherwise" argument exactly). Your argument had nothing at all to do with whether the person had "more" or "less" free will (you still haven't explained what you mean by that). Furthermore, you don't hold the person who was hypnotized or drugged accountable at all for their actions, even though the actions are exactly the same as those of the person who isn't hypnotized or drugged. You also don't explain how your argument is compatible with determinism, because with determinism neither person could have done otherwise.
You have not even come close to explaining what your definition of free will actually is, how it differs from libertarian free will, how it is as "useful" as libertarian free will, how it relates to ethics, or how it is compatible with determinism.
It seems as though your example is simply an example of libertarian free will, and your example demonstrates that without libertarian free will you cannot distinguish from an ethics standpoint between the person who is hypnotized or drugged and the person who is simply acting in the only way they can possibly act due to determinism. If determinism is true, neither of them could have done otherwise, and you never explain the actual difference between the two people that would justify treating them differently.
As I explained, I ask what determined their behavior: drugs, aliens, or just the firing of their neurons according to the laws of physics after years of education to become a responsible member of our society? Depending on what (deterministically) caused their behavior, I hold them more or less accountable.
Actually, you said it boiled down to "had they had a choice to do otherwise." With determinism, a person never, ever has the choice to do otherwise. There is also no evidence of "degrees" of free will in your argument at all. Neither of the people could have done otherwise, and neither has free will. Even if your argument did take into account what determined their behavior (I don't think it does at all), you don't explain why you would treat behavior determined by drugs differently than behavior determined by the firing of neurons, except to say that in one case the person "could have done otherwise." Show me how, in a deterministic world, the person whose behavior is determined by the firing of neurons could have done otherwise.
Since I tried to explain why I am justified in being mad at other people, although their behavior is ultimately determined, this question is already answered.
I don't think you justified being mad at one person and not at the other, and you never showed how either case is different in any way.
As I explained in a previous post (if I remember correctly (too lazy to look it up), my first post mentioning the judicial system), one possible aim could be to maximize happiness.
That's an interesting take on it. I don't think our judicial system does maximize happiness. Killing anyone who is convicted of a crime might maximize happiness because there would be far less crime. Sure, a few people who might be put to death by mistake (and their families) might not be too happy, but everyone else would be far happier.
That is not my aim, by the way (at least not the only one). I think an important point to observe is the narrowness of our knowledge. For me, liberalism is a consequence of the limits of our knowledge.
Therefore, even if I came to the conclusion that it would probably increase the happiness of one of my fellow humans if I forced him to endure this or that treatment, I would hesitate with my plans to make him happy, since I don't think it would be right for me to force his happiness down his throat; after all, what do I know?
OK, so what if science determined, by a preponderance of evidence, that one thing or another would make a person happy? Should we then force that person to do that thing to make them happier? For that matter, should we only be concerned about the happiness of the majority at the expense of the happiness of the few? And while we are maximizing happiness, why not just eliminate the most unhappy segment of the population?
As a consequence, I ask for limited interferences. I think it is sufficient to put some people in jail to ensure a stable community (and the law systems of most countries seem to be in accord with this view), so I don't think it would be justified to execute a criminal — even if we had some evidence that this might decrease the crime rate a bit, since, given the narrowness of our knowledge, such an argument about probable or likely effects is insufficient to justify such a drastic measure.
This doesn't sound like maximizing happiness to me.
No, you didn't, but that's not necessary since it is a consequence of what you did say.
Not at all. I don't know whether free will exists or not. If it doesn't, then yes, we are punishing a lot of people who have absolutely no control over their actions, which most ethics systems deem unethical.
And that's exactly the situation where "benefit of doubt" kicks in.
Explain what you mean by this.
I think the latter alternative (reconsidering the concept of responsibility) is much more probable than the former (abandoning the justice system completely), so I don't think there is much to worry about. I think it is very unlikely that philosophy alone will have much of an impact on how the justice system works; maybe some terms and phrases will change.
I disagree. I think that if science were to prove tomorrow that we have no free will, a lot of things would change. For one thing, every criminal would have an air-tight defense and there would be little that anyone could do about it according to the law. I can't think of how the law would be patched in order to provide a distinction between a crime and an unintended act.
I don't see how this follows. Assume that the theory of evolution is true. As a consequence, there is abundant evidence for the theory of evolution. Now two machines are emulating a discussion about evolution, weighing the evidence. It is determined that the outcome of this discussion will be that the theory of evolution should be accepted.
I would say that even if the end of the discussion was predeterminated, it nevertheless tells us something about the truth of the theory of evolution.
Predeterminated. I like that word!
Whether it tells us something about the truth of the theory of evolution depends on whether we are determined to believe that evolution is true or false. The truth of the machines' argument would have no effect on our actions one way or the other, since those actions are completely determined.
Shouldn't I try to favor what is favored by the evidence? Oh, yes, I see, I can't make a "choice" about what I favor, and it is predeterminated what I will believe. So what?
Well, worse than that. Many skeptics believe that if something is unfalsifiable, then it's not to be believed. If determinism is unfalsifiable, then you shouldn't believe it, even if it seems to fit reality and there is no example of anything that doesn't fit it.
Take God for example. If you believe that God created the natural universe, and that science only deals with the natural universe, then science itself wouldn't even pertain to God, and there is no possibility of scientific evidence against God. This theory fits perfectly within current scientific knowledge, and in fact there has never been an example of anything that doesn't fit this theory. Even things that cannot currently be explained by science can be explained by God. Even those things that science might never be able to explain can be explained by God.
If determinism is also unfalsifiable, it should also be rejected.
You could build a machine that tries to avoid "pain" (of course, it is not real pain, since only we humans can feel that), give that machine the information that hell is "painful", and list some actions that would send the machine to hell. You could then predict that the machine will avoid "sinful" actions.
Or you could simply program the computer to avoid certain actions. No different. Pain doesn't factor into it, nor does hell.
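Here is that point as a toy sketch (hypothetical names, nothing more than an illustration): the "pain"-avoiding machine and the directly programmed one are behaviorally indistinguishable.

```python
# jan's machine: avoid any action whose predicted 'pain' (hell) is high.
PAIN = {"lie": 9.0, "steal": 9.5, "tell_truth": 0.0}

def pain_avoiding_machine(action: str) -> bool:
    """Permit an action only if its predicted 'pain' is low."""
    return PAIN[action] < 1.0

# The same behavior with the 'pain' label dropped: a plain blacklist.
FORBIDDEN = {"lie", "steal"}

def directly_programmed_machine(action: str) -> bool:
    return action not in FORBIDDEN

# The two designs are indistinguishable from the outside:
for a in ("lie", "steal", "tell_truth"):
    assert pain_avoiding_machine(a) == directly_programmed_machine(a)
```

The "pain" is just a label on a lookup table; nothing about it requires free will.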
I don't see a connection between the concept of hell and the concept of a free will. Maybe you are thinking that God is supposed to be just, and free will is needed for a just punishment. But who told you that God is just?
My comment about some people not "sinning" because they feel they are going to hell was simply in response to this question by you:
I don't see how this follows. Assume you have libertarian free will. So what? Why don't you start stealing, raping and killing? What holds you back? Your free will? But why does your free will decide not to engage in those actions? Why shouldn't your free will decide to do them, if you can be sure not to get caught?
No, your free will doesn't keep you from doing anything. In fact, just the opposite. It makes you responsible for your actions. Someone who doesn't have free will can't be responsible, and therefore someone who doesn't believe they have free will would have no real reason not to do bad things if they knew they wouldn't get caught. Even if they believe in God, they could argue that God couldn't possibly hold them accountable for something that is beyond their control. Someone who does believe in free will generally believes that they are responsible for their actions.
And why should anybody prefer to be good, given a free will?
Perhaps because of a feeling of having responsibility for one's actions. Perhaps because they have a desire for others to also choose to be good, and know that others have a desire for them to choose to be good. Perhaps because they believe in God.
The science you quoted so far seems more suited to indicate that we don't have as much free will as we would like to think.
The science quoted indicates that we have no free will. Even a little free will would be free will, and there is no scientific evidence of any free will whatsoever.
Given my concept of free will as a matter of degree, those findings are not a threat to me.
I don't think those findings are a threat to your argument, but not because of your concept of free will as a matter of degree. Those findings are simply evidence that people think they have free will whether or not they actually do.
Now I would reply that "the Bible" is too broad a term to discuss this seriously. I would be very surprised if the theological and philosophical concepts of the authors of the first chapters of Genesis and the theological concepts of Paul were completely the same (well, it would be less surprising if we assumed that the Bible is the word of God — but why should we?). I don't see how the story of Adam and Eve presupposes free will.
My understanding is that the story of Adam and Eve is all about humans receiving free will. That's what the Tree of Knowledge was (the knowledge of good and evil). The implication is that after eating from the tree, Adam and Eve "knew" about good and evil, and therefore could choose between them.
If I knew the nature of what "it" is that might give us free will, then I would be able to prove that we have free will. I don't, and I can't. But perhaps "it" is a part of us. So, yes, I suppose that would make it "us," in the same sense that your mind is "you." I think the analogy that I used before might be a good one, that our brain is more than the sum of its parts. In other words, if you believe we have a physical "brain" but something undefined which is our "mind" or "soul," then that "mind" or "soul" might very well be what allows us to have free will. That part of us simply might not follow the laws of physics as we currently understand them.
How about the "ultimate source test" then?
With the above definition, I think it easily meets the "ultimate source" standard. I see what you're getting at though.
Is this other dimension deterministic?
I suppose it wouldn't be.
You live in an Anglo-Saxon country, don't you? It seems to me you are confusing some peculiarity of your local judicial system with a universal trait of judicial systems. This splitting of trials doesn't exist in other traditions.
My apologies. I should have stated that a little differently. I meant that this is why the criminal trial in this country (the United States) is broken into two parts. I didn't mean to say that this is how it is everywhere. I would still say that in most judicial systems (probably yours as well) there is no notion of "degrees" of guilt, but rather mitigating factors which might affect the punishment. If you're not fully guilty of the crime with which you're being charged, then you're being charged with the wrong crime and you should be found innocent of that crime. Of course, you people like mayonnaise on your fries, so who knows what sort of deviant system of justice you have!
Once again, the language that seems to be your native one confuses manslaughter and "involuntary manslaughter". In my country, "murder" ("Mord") entails a "mean motive" (a word-for-word translation; I don't know what the correct term is) and a minimal amount of planning. If those traits are lacking, it is considered "manslaughter" ("Totschlag"). If it happens by accident, it is "involuntary manslaughter" ("fahrlässige Tötung").
We also distinguish between voluntary and involuntary manslaughter. My point was that the difference between "murder" and either type of "manslaughter" is largely one of intent (motive and planning both indicate malicious intent). I was making this point to show that action by itself isn't enough to determine the crime committed or the appropriate punishment.
I am well aware that your theory about free will is well able to handle my cases of person A and person C. That's no surprise. My argument is that there are intermediate cases.
There are mitigating circumstances, yes. But at least in the United States, if a person is able to know the difference between right and wrong, if the person knew that they were committing a crime, and if the person had the ability to choose not to commit the crime, they are considered fully guilty, even if they are mentally handicapped in some way. Mitigating circumstances can be used to determine the appropriate punishment for the crime, but that doesn't make the person any more or less guilty. Your country might be different, though.
It seems to me that this "different sort of crime" construction is just some kind of verbal decoration to avoid having to admit that the concept of accountability is just a concept of degree. After all, the actions that constitute the crime are the same, aren't they?
You're absolutely right. You have fully convinced me that it's not a "different sort of crime." I believe the thought process is the same as described above. If there are mitigating factors (in this case, the fact that the brain chemistry of a young person often prevents them from considering the consequences of their actions), then the punishment for the crime might be different than for an adult. That doesn't make the person less guilty, though, nor does it change the crime of which the person is guilty.
-Bri