Why Malerin is Wrong About Bayes' Theorem

How did you reach the conclusion that I thought that? Did you assign a 50/50 chance and then walk away?

Well, I didn't single you out by name. But if you insist on putting the shoe on and loudly proclaiming that it fits, I will happily agree with you. From your post (#142):

But my point is that even if you have information you cannot assign a probability of 0.5 just because you can write a "yes/no" question about it.

Certainly you can.

And if you don't know in advance whether the random event generator will be a coin or a die, then you can assign 0.5 to the probability that it will be either a coin or a die, but it would be invalid in this case to assign 0.5 to the probability that you will get a "1".

Or --- and this is what you would do if you understood Bayes' theorem properly --- you would codify the idea that the random event generator would be a coin or a die and use that idea, in conjunction with your observations, to adjust your posterior distribution, not as part of your prior.

You seem to think that if it's not part of your prior distribution, you don't know about it. Which is equivalent to guessing 50/50 and walking away.
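For concreteness, here's a minimal toy sketch (my own example, with made-up conventions such as the coin having a "1" on one face) of how the coin-or-die knowledge drives the update through Bayes' theorem instead of being collapsed into a bare 0.5 for "you will get a 1":

```python
# Toy model: the unknown generator is either a fair coin (faces "1" and "2")
# or a fair six-sided die. We are 50/50 about *which device* it is, and we
# let that uncertainty, plus an observation, drive the update via Bayes.

prior = {"coin": 0.5, "die": 0.5}            # 50/50 over the device, not over the outcome
likelihood_of_1 = {"coin": 1/2, "die": 1/6}  # P(shows "1" | device)

# Marginal (predictive) probability of seeing a "1" before any observation:
p_one = sum(prior[d] * likelihood_of_1[d] for d in prior)
print(p_one)  # 1/3, not 0.5

# Now suppose we press the button once and a "1" comes up.
# Bayes' theorem: P(device | "1") is proportional to P("1" | device) * P(device)
unnorm = {d: likelihood_of_1[d] * prior[d] for d in prior}
total = sum(unnorm.values())
posterior = {d: unnorm[d] / total for d in unnorm}
print(posterior)  # roughly 0.75 for the coin, 0.25 for the die -- the coin hypothesis gains ground
```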
 
Now, after all of that, at the end of the day if we take into account the not not nots we only increase the number of A's that are not B.

Er, no. For every property P such that "Object O has property P", there is another property P-bar, defined as "... does not have property P," such that "Object O does not have property P-bar."

And similarly, for every property that O does not have, there is a corresponding property that it does have.

So the chance of a statement "A has property B" being true, without further restriction on A/B, is best modelled by a coin flip since you don't know whether B is a P or a P-bar.

Now, of course the number of things that do NOT have any given property is typically larger than the number of things that do... but at that point you're putting restrictions on A and B. You're no longer in a position of maximal uncertainty. If you know enough about closets and dollars to know that the set of closets that do not hold a million dollars is larger than the set of closets that do, then you are by definition not completely ignorant.
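To make the P / P-bar pairing concrete, here's a toy sketch (an invented mini-universe, nothing from the thread) showing that the true and false property statements pair off exactly:

```python
# Toy universe: a few objects, each with a small set of "positive" properties.
objects = {
    "polar_bear": {"white", "mammal"},
    "frog":       {"green", "amphibian"},
}
base_properties = {"white", "green", "mammal", "amphibian"}

# For every property P, define its complement P_bar ("does not have P").
# An object has P_bar exactly when it lacks P.
def has(obj, prop):
    if prop.endswith("_bar"):
        return prop[:-4] not in objects[obj]
    return prop in objects[obj]

all_properties = base_properties | {p + "_bar" for p in base_properties}

true_count = sum(has(o, p) for o in objects for p in all_properties)
false_count = sum(not has(o, p) for o in objects for p in all_properties)
print(true_count, false_count)  # equal: every "has P" pairs with a "lacks P_bar"
```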
 
This is only potentially correct if this assumption is made before the word 'God' is even meant to apply to something - that is, if the assumption is made before someone indicates that it is the world with the invisible sky daddy, rather than the world without the invisible sky daddy, that has 'God'. Clearly we are way beyond that point in any of our discussions here, so it isn't really appropriate to remain maximally uncertain.

Actually, I think you're arguing my side of the case for me. Malerin is trying to establish a "level" (scare quotes deliberate) playing field so that we can discuss this issue without prejudice to his claims. A lawyer would recognize this as a request for a de novo review of a case that has already gone against him, but he wants the appeals court to pretend not to know that the case has already been heard and decided adversely.

And therefore, he wants to control the evidence that can and cannot be submitted; we're supposed to forget, for example, all of the associations we have with the word "God" (allowing him to later slip in the rest of the metaphysical and theological baggage via rhetorical sleight of hand) as well as all of the other demonstrations we know of the universe's lack of regard for life if not positive hostility.

Do you really think that Malerin is saying, "I am maximally uncertain that God exists and after I consider the fine-tuning argument I am ever so slightly less maximally uncertain that God exists"? The purpose of his argument is not really to advance our certainty. It's to grossly misrepresent the degree of precision available to her/him.

Of course. But that doesn't mean that his argument is incorrect, as a lot of people seem to think --- especially the people who are disputing his use of a maximally uninformative prior of 0.5.

Because that particular aspect of his argument is, fortunately or not, technically correct.

Of course, it sets up a number of lies by omission as well as outright lies of commission. But the counter-arguments are focusing on the wrong spot, the one spot in his entire argument that can be justified down to the last jot and tittle.

I think that by presenting the agnostic position as 'p=0.5', instead of 'maximally uncertain', you are playing right into her/his hand.

I'm simply pragmatic enough to recognize that, in this case, p=0.5 is the maximally uncertain position. Accept it, and move on. Attack where his line is weak, not at the single point where it can be successfully defended.
 
(rocketdodger actually said "1", not "123". I changed it to a number that can't be a probability, to make the discussion below perhaps clearer. Nothing essential depends on which number is used.)

There are two 'levels' of probability here, which should be distinguished.

One is what you're calling "the unconditional probability that you would receive a 123". This is supposed to be a real property of the machine, with a single definite value, between 0 and 1, which we happen not to know. Let's call this number "X". (The value of X determines, for example, how often, over the long term, the machine will display 123 rather than any other number if we repeatedly press its button.)

Now, according to the Bayesian approach to probability, whenever we are uncertain about something, we can use probability to describe the nature of our uncertainty. Let's forget for the moment that X is a (different kind of) probability, and just treat it as a number about which we are uncertain. Then, we can say things like, for example, "P(0.3 < X < 0.4) = 0.2" (translation: "there's a 20% probability that X is between 0.3 and 0.4"), where this probability is not an objective property of the machine, but only a way of characterizing our limited knowledge of the machine. X doesn't 'really' have a 20% probability of being between 0.3 and 0.4, or any other probability either. It is a single number, which either is in that range or not. We just don't know which. But we might have some ideas about which, and we quantify these ideas by saying that it has a 20% probability of being there.

Of course, 0.3-to-0.4 is just an example, and we can talk about the various probabilities that X has of being in various other ranges as well. All these probabilities together constitute a 'probability distribution for X'. Again, a probability distribution for X is not an objective property of the machine, but only a way for us to express, as precisely as we can, our more or less vague ideas about the machine. ("We don't know exactly which number X is, but here's a summary of what we do know about X.")

A probability distribution for X contains a lot more data than a single 'estimate' of X (to quote rocketdodger's question). From a distribution, we can, if we wish, derive various single estimates---for example, the mean (or the median, etc.). But we can't forget the entire distribution, and just remember the mean, if we want to be able to change our ideas about X appropriately when we get new information relevant to X---for example, when we press the machine's button and see what number it displays.

The appropriate way to change our ideas about X is to use Bayes's theorem to produce a new distribution for X from the old distribution. The old mean alone is not enough to enable us to produce even a new mean, let alone an entire new distribution.
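As a concrete sketch of why the mean alone isn't enough (the Beta distributions here are an illustrative choice of mine, not something the thread specifies):

```python
# Two different priors over X with the *same* mean (0.5), updated on the same
# observation (one button press showing "123", treated as one "success").
# Beta(a, b) has mean a / (a + b); conditioning on one success gives Beta(a+1, b).

priors = {
    "flat Beta(1, 1)":     (1.0, 1.0),   # uniform over [0, 1]
    "peaked Beta(50, 50)": (50.0, 50.0), # tightly concentrated near 0.5
}

for name, (a, b) in priors.items():
    prior_mean = a / (a + b)
    post_mean = (a + 1) / (a + 1 + b)   # posterior mean after one success
    print(f"{name}: prior mean {prior_mean:.3f} -> posterior mean {post_mean:.3f}")

# flat Beta(1, 1):     prior mean 0.500 -> posterior mean 0.667
# peaked Beta(50, 50): prior mean 0.500 -> posterior mean 0.505
# Same prior mean, very different posterior means: the mean alone doesn't
# determine how our ideas about X should move when new data arrives.
```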

Good point.

So my question, then, is really "what kind of a probability distribution can you produce from a single sample?"
 
No, it's just .5. In the absence of evidence for or against, a logically possible proposition is assigned an agnostic value.

If you knew a snargle can land quarg, you should assign a .5 value to the claim "this snargle will land quarg." Maybe it lands quarg 90% of the time, maybe 1% of the time. You don't know. To assign a value other than .5 would require some type of evidence.
But again, the only reason there is an absence of evidence against it is that you willfully ignore it.

As I pointed out already, we KNOW for a fact that the God defined by billions of theists contains internal self-contradictions. We know the scriptures many of them base their God-belief on are seriously flawed and contradictory. We KNOW that many things claimed to be the action of God are not.

0.5 is not a reasonable agnostic value. It is the value you put on it by willfully ignoring what people mean when they use the term "God" and instead arguing the position as if the term "God" were undefined.

And then you might as well be arguing about the existence of snargles and quargs. They are just as meaningful as the undefined term God.

Trouble is, the term God does have meaning, by convention, for billions of human beings. Why would you willfully ignore that information and claim you need to assign 0.5 probability because there is no other information?

It would be like me buying a lottery ticket and claiming my chance of winning is 0.5 because I don't know anything else about the lottery. The information is there. If I don't know it, it's willful ignorance, and that doesn't make it reasonable to claim my chance of winning is 0.5.
 
I'm simply pragmatic enough to recognize that, in this case, p=0.5 is the maximally uncertain position. Accept it, and move on. Attack where his line is weak, not at the single point where it can be successfully defended.
While I agree there are plenty of other problems with the argument, I disagree that 0.5 is reasonable.

It would be like arguing that the probability of winning the lottery with my ticket is 0.5 prior to considering the definition of the lottery (that is, how many numbers are drawn within what range, and how many must be matched to win). It's absurd to say the prior probability is 0.5. That's willfully ignorant, not maximally uncertain.
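For a sense of scale, here's a quick sketch with a hypothetical 6-of-49 lottery (the format is assumed purely for illustration):

```python
from math import comb

# Hypothetical 6-of-49 lottery: 6 numbers are drawn from 49, and all 6 must match to win.
ways_to_draw = comb(49, 6)
p_win = 1 / ways_to_draw
print(ways_to_draw)  # 13983816 possible draws
print(p_win)         # about 7.15e-08 -- nowhere near 0.5
```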

The definition of the lottery can't be considered new evidence. Neither can the existence of a universe with life or the definition of the term God. It's really a silly mocking of Bayes' Theorem to do so. As yyy2bggggs has cleverly called it, it's like "cargo mathematics". It's just pretend.

As I've said, it's the same way QM is invoked to make PSI theories or quack magic water sound scientific.
 
Er, no. For every property P such that "Object O has property P", there is another property P-bar, defined as "... does not have property P," such that "Object O does not have property P-bar."
It doesn't obviate my argument. Clearly you can talk rings around me, but that won't change the fact that there are more properties that O doesn't have than ones it does have. End of story. P-bars won't magically change that fact.

And similarly, for every property that O does not have, there is a corresponding property that it does have.
Only by sleight of hand. Polar bears are not green, and no amount of semantics or logical games will change that or make it more likely. You can't create two extra dollars where there are none. Not not not green is still not green.

There are a finite number of things and a finite number of properties anything can possess, so only a small minority of statements of the form "A is B" are true, whereas most statements of the form "A is not B" are true. That is a logical consequence of the universe we live in.

"A is not not B" is just an illusion, a sleight of hand, and it violates parsimony. You don't create a new property by throwing in a P-bar.

Now, of course the number of things that do NOT have any given property is typically larger than the number of things that do... but at that point you're putting restrictions on A and B.
At that point we are simply stating an existential fact. The restrictions are not artificial but are a consequence of existence.

You're no longer in a position of maximal uncertainty.
Maximal uncertainty is an artifice. It's a conceit. It doesn't exist in the real world; it only exists on game shows and in theory. It can be a valuable statistical tool, and I'm not dismissing it out of hand, but we can know from the outset that, given the constraints of the material world, the possible properties of any object are not unlimited.

If you know enough about closets and dollars to know that the set of closets that do not hold a million dollars is larger than the set of closets that do, then you are by definition not completely ignorant.
And I've agreed with this time and time again. If I say "polar bear" it automatically restricts the properties of O. And we know enough about the meaning of the word "god" that by invoking its existence you are by definition not completely ignorant.
 
Actually, I think you're arguing my side of the case for me.

I agree - my concern was to make more explicit the issue which you explain below.

Malerin is trying to establish a "level" (scare quotes deliberate) playing field so that we can discuss this issue without prejudice to his claims. A lawyer would recognize this as a request for a de novo review of a case that has already gone against him, but he wants the appeals court to pretend not to know that the case has already been heard and decided adversely.

And therefore, he wants to control the evidence that can and cannot be submitted; we're supposed to forget, for example, all of the associations we have with the word "God" (allowing him to later slip in the rest of the metaphysical and theological baggage via rhetorical sleight of hand) as well as all of the other demonstrations we know of the universe's lack of regard for life if not positive hostility.

Yes, I think that is well said.

Of course. But that doesn't mean that his argument is incorrect, as a lot of people seem to think --- especially the people who are disputing his use of a maximally uninformative prior of 0.5.

Because that particular aspect of his argument is, fortunately or not, technically correct.

Of course, it sets up a number of lies by omission as well as outright lies of commission. But the counter-arguments are focusing on the wrong spot, the one spot in his entire argument that can be justified down to the last jot and tittle.

I'm simply pragmatic enough to recognize that, in this case, p=0.5 is the maximally uncertain position. Accept it, and move on. Attack where his line is weak, not at the single point where it can be successfully defended.

Just like it's defensible to claim that the average person in the US is 49% male and 51% female. But my interest in this is really to reach a better understanding among those people who are actually interested in understanding the topic. I thought 69dodge did a good job of explaining that it wasn't a mean (p=0.5) that was the prior, but rather the distribution of probabilities (and I also mentioned this in the thread that spawned this thread). You can't represent that distribution with just the mean. After all, the most uninformative prior is likely to be a distribution where p=0.5 is the least likely value, while the most likely values are at p=0 and 1.
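To illustrate that last point, here's a sketch using the Jeffreys prior for a probability, Beta(1/2, 1/2), as one standard example (my choice of example; the post doesn't commit to a particular distribution):

```python
# A prior can have mean 0.5 while making p = 0.5 the *least* likely region.
# The Jeffreys prior Beta(1/2, 1/2) is one standard example of this shape.
from math import pi, sqrt

def jeffreys_density(p):
    # Beta(1/2, 1/2) density: 1 / (pi * sqrt(p * (1 - p)))
    return 1.0 / (pi * sqrt(p * (1.0 - p)))

for p in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(p, round(jeffreys_density(p), 3))
# The density is lowest at p = 0.5 and climbs toward the endpoints,
# yet by symmetry the mean of the distribution is still 0.5.
```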

Also, the other points in her/his line have already been attacked. Her/his argument has already been routed. We're just divvying up the spoils.

Linda
 
<snip>

Not not not green is still not green.

<snip>

Agreed, but it's not green with two not-gates' worth of propagation delay. Feeding not green and not not not green into an exclusive-or gate is going to give you glitches on the output which, if you're an analogue engineer, might be exactly what you want. :)
 
Agreed, but it's not green with two not-gates' worth of propagation delay. Feeding not green and not not not green into an exclusive-or gate is going to give you glitches on the output which, if you're an analogue engineer, might be exactly what you want. :)
:) But I think most of us would call it noise.
 
While I agree there are plenty of other problems with the argument, I disagree that 0.5 is reasonable.

It would be like arguing that the probability of winning the lottery with my ticket is 0.5 prior to considering the definition of the lottery (that is, how many numbers are drawn within what range, and how many must be matched to win). It's absurd to say the prior probability is 0.5. That's willfully ignorant, not maximally uncertain.

You say "willfully ignorant" like it's a bad thing. Perhaps a better description would be "willfully maximally ignorant"; Malerin is asking us to deliberately suspend our preexisting knowledge in order to analyze in isolation the observations and facts that he presents.

There's nothing wrong with that, any more than there is something wrong with a medical statistician asking what the effect of living in a major city is on lung cancer rates, ignoring (more often phrased as "controlling for") the difference in cigarette smoking rates. Because only by suppressing the information we already know about cigarettes can we find out new information such as the effects of prolonged exposure to bus exhaust.

The definition of the lottery can't be considered new evidence.

Sure it can, in the carefully controlled environment of a study where we are looking only at the effects of how lotteries are defined. (And this could be quite useful, for example, if we were trying to figure out what qualifies as a "lottery" for purposes of public policy.) It would be ludicrous, and probably dishonest, for us to discuss the definition of a "lottery" in detail in the context of a financial planner trying to figure out what the best way for me to retire wealthy is, but it would be quite appropriate in the context of a study of gambling, risk-taking, and social behavior.


Neither can the existence of a universe with life or the definition of the term God.

Again, on the contrary, if what we're trying to do is to look in great detail at the relationship between a life-filled universe and a hypothetical God, then it makes perfect sense to try to do it step by step and figure out what the relevant attributes of God are, or what the relevant attributes of the universe are, or how to put them together.

Of course, that doesn't make what he's doing correct in detail.

As I've said, it's the same way QM is invoked to make PSI theories or quack magic water sound scientific.

Yes, but the problem here is the same. If I invoke the Uncertainty Principle to justify my quack magic water, the problem is not with the Uncertainty Principle. It only makes you look the fool if you deny the Uncertainty Principle as part of your response to my magic water, because the Uncertainty Principle does exist and has a well-defined meaning that any snake-oil salesman can copy out of Wikipedia.

The problem is not with the existence of uninformative priors. They exist.
The problem is not with the use of 0.5 in this context as an uninformative prior. 0.5 is appropriate.
The problem is not with the truth or statement of Bayes' theorem. Bayes' theorem exists as stated in the OP and is provably true.
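For reference, here is the standard textbook statement of the theorem (a restatement, not a quotation of the OP):

```latex
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},
  \qquad
  P(E) \;=\; P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H).
\]
```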

The problem is with the manufacture of evidence and the controlled presentation of evidence to omit anything negative to the hypothesis. It's the equivalent of an out-of-context quotation; the words were said, but taken out of the context in which they were said, they are misleading.
 
So doesn't that devastate the fine-tuning argument, since we only have a single sample and no knowledge of the mechanism used to generate that sample?

Yes. The 'optimum' guess at the underlying process, given a single sample point and no other information, is the value of that single sample point.

I.e., when a universe is created, it always comes out with its fundamental constants set just right for lifeforms like us.
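One way to cash that out is the sketch below, which assumes a flat Beta(1, 1) prior over the unknown chance purely for illustration (the flat prior is my assumption, not the poster's):

```python
# "Best guess from a single sample", assuming a flat Beta(1, 1) prior over the
# unknown chance q that a freshly created universe is life-permitting.
# Observing one life-permitting universe gives the posterior Beta(2, 1).

a, b = 1 + 1, 1 + 0   # Beta(2, 1): one "success", zero "failures"

posterior_mode = (a - 1) / (a + b - 2)   # 1.0 -- the maximum-likelihood answer:
                                         # "universes always come out life-permitting"
posterior_mean = a / (a + b)             # 2/3 -- Laplace's rule of succession
print(posterior_mode, posterior_mean)
```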
 
"A is not not B" is just an illusion, a sleight of hand, and it violates parsimony. You don't create a new property by throwing in a P-bar.
Yes you do. For any property, the negation of that property is also a property. Saying something has the property "is green" is just saying something about which wavelengths of light it reflects. Similarly, saying something has the property "is not green" is just saying something about which wavelengths of light it reflects.

You can argue that "is green" is more restrictive than "is not green", and that's fine, but it doesn't change the fact that there are equal numbers of properties held and not held. There isn't a magic complexity boundary beyond which properties aren't allowed to be held or not held by things. Parsimony and Occam's Razor dictate that we care more about simpler things, sure, but that's just saying what I said above, which is that simpler things come up more often. That doesn't mean more complex things (like the property of not being green) don't come up or don't exist, it just means they're weighted less in the distribution.
 
I just had a thought... Malerin claims to be able to use subjective spiritual experiences to qualify as evidence for the existence of God, right?

So wouldn't it be fair to say that an atheist could then use subjective spiritual experiences to qualify as evidence against the existence of God? For example, Sam Harris believes this to be the case.

I'm sure that Malerin will attempt to wiggle out of this one, just as he/she has attempted to use weasel words to get out of the entire "agnostic" probability question concerning the FSM's existence (which is, of course, 0.5).

Sorry Malerin, no Beer Volcano for you. RA-men!!!
 
You say "willfully ignorant" like it's a bad thing. Perhaps a better description would be "willfully maximally ignorant"; Malerin is asking us to deliberately suspend our preexisting knowledge in order to analyze in isolation the observations and facts that he presents.

There's nothing wrong with that, any more than there is something wrong with a medical statistician asking what the effect of living in a major city is on lung cancer rates, ignoring (more often phrased as "controlling for") the difference in cigarette smoking rates. Because only by suppressing the information we already know about cigarettes can we find out new information such as the effects of prolonged exposure to bus exhaust.



Sure it can, in the carefully controlled environment of a study where we are looking only at the effects of how lotteries are defined. (And this could be quite useful, for example, if we were trying to figure out what qualifies as a "lottery" for purposes of public policy.) It would be ludicrous, and probably dishonest, for us to discuss the definition of a "lottery" in detail in the context of a financial planner trying to figure out what the best way for me to retire wealthy is, but it would be quite appropriate in the context of a study of gambling, risk-taking, and social behavior.




Again, on the contrary, if what we're trying to do is to look in great detail at the relationship between a life-filled universe and a hypothetical God, then it makes perfect sense to try to do it step by step and figure out what the relevant attributes of God are, or what the relevant attributes of the universe are, or how to put them together.

Of course, that doesn't make what he's doing correct in detail.



Yes, but the problem here is the same. If I invoke the Uncertainty Principle to justify my quack magic water, the problem is not with the Uncertainty Principle. It only makes you look the fool if you deny the Uncertainty Principle as part of your response to my magic water, because the Uncertainty Principle does exist and has a well-defined meaning that any snake-oil salesman can copy out of Wikipedia.

The problem is not with the existence of uninformative priors. They exist.
The problem is not with the use of 0.5 in this context as an uninformative prior. 0.5 is appropriate.
The problem is not with the truth or statement of Bayes' theorem. Bayes' theorem exists as stated in the OP and is provably true.

The problem is with the manufacture of evidence and the controlled presentation of evidence to omit anything negative to the hypothesis. It's the equivalent of an out-of-context quotation; the words were said, but taken out of the context in which they were said, they are misleading.

Now I'm being called a shyster. I can't win.

Anyway, what evidence do you think I'm omitting, Dr? The Multiverse theory? Oscillating universe? The possibility of exotic lifeforms existing in a universe with different physical constants?
 
The possibility of exotic lifeforms existing in a universe with different physical constants?

That's a big one, yes.

More accurately, an observed universe supports life by definition. When you factor that in, you find that the question of fine-tuning becomes irrelevant, because that fact trumps all other questions about the likelihood of the universe supporting life.
 
I just had a thought... Malerin claims to be able to use subjective spiritual experiences to qualify as evidence for the existence of God, right?

So wouldn't it be fair to say that an atheist could then use subjective spiritual experiences to qualify as evidence against the existence of God? For example, Sam Harris believes this to be the case.

I'm sure that Malerin will attempt to wiggle out of this one, just as he/she has attempted to use weasel words to get out of the entire "agnostic" probability question concerning the FSM's existence (which is, of course, 0.5).

Sorry Malerin, no Beer Volcano for you. RA-men!!!


You mean, like evidence of 'no-God' from Buddhist meditation? Yes, that would be evidence of the same sort.
 
That's a big one, yes.

More accurately, an observed universe supports life by definition. When you factor that in, you find that the question of fine-tuning becomes irrelevant, because that fact trumps all other questions about the likelihood of the universe supporting life.

I'm not clear on what you're saying. Are you talking about the problem of old evidence? Pr(E) is 1 if E is "life exists", but Pr(E) is also 1 if E is "The coin landed heads". Bayes would get nowhere fast if there wasn't a counterfactual way to look at evidence you already know exists.
 
