Why Malerin is Wrong About Bayes' Theorem

The FSM GOD contradicts biological and evolutionary truths, and is a long conjunction of hypotheses which may or may not be true. Per parsimony, in the absence of any evidence, the more bizarrely detailed a creature is, the less likely its existence.
Let me seeeeeee....hypocrite anyone?
Sheesh, don't you know anything?

God exists because he gives people brain damage that puts them on the verge of death, just to show how much he loves them.
 
But isn't the point of all this that if you don't know, then you assign a probability of 0.5 (it either is or is not a 1) and then perform some further modification so that you end up with a better idea of it being a 1 or not? If there is no information at all, then can you even use Bayes' theorem?
Well some here have suggested that 0.5 is the probability you should assign even if there is no information at all.

But my point is that even if you have information you cannot assign a probability of 0.5 just because you can write a "yes/no" question about it.

If you have a coin, but you don't know if it is fair, then it is valid to use 0.5 because you know that there are two possibilities.

And if you don't know in advance whether the random event generator will be a coin or a die, then you can assign 0.5 to the probability that it will be either a coin or a die, but it would be invalid in this case to assign 0.5 to the probability that you will get a "1".

It is invalid because I am ignoring the information that I don't know whether there might be 2 or 6 possibilities.

In other words, what you don't know can be information in itself.
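To put a number on it, here's a minimal sketch (plain Python; the 50/50 split over devices is just the example above): if all I know is that the device is equally likely to be a coin tosser or a die, the probability of seeing a "1" comes out around a third, not 0.5.

```python
# Hypothetical numbers: 50/50 uncertainty over WHICH device it is,
# then marginalize to get the probability of the outcome "1".
p_device = {"coin": 0.5, "die": 0.5}            # prior over the two devices
p_one_given = {"coin": 0.5, "die": 1.0 / 6.0}   # P("1" | device)

# Law of total probability: P("1") = sum over devices of P(device) * P("1" | device)
p_one = sum(p_device[d] * p_one_given[d] for d in p_device)
print(p_one)  # 0.333..., not 0.5
```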
 
But I can also use this structure to keep out information that I know beforehand will be fatal to my argument, which is how Malerin is trying to use it. For example, he assumes (correctly) that the agnostic position on "God exists" is a 50/50 chance, then applies only information that he believes will increase the likelihood that God exists. By telling only half the story, he can raise the probability as high as he likes.

I admitted this in an earlier post. Pr(E|~H) should really be written Pr(E|~H & k), where "k" is your background knowledge. If, for example, you think something like the multiverse is likely, then the astronomical odds against life are simply a matter of random chance: given enough universes, a few will be life-permitting ones.
 
But I can also use this structure to keep out information that I know beforehand will be fatal to my argument, which is how Malerin is trying to use it. For example, he assumes (correctly) that the agnostic position on "God exists" is a 50/50 chance, then applies only information that he believes will increase the likelihood that God exists. By telling only half the story, he can raise the probability as high as he likes.
:) As an auditor I can tell you that this can be applied to pro-forma statements as well (at least those that don't strictly adhere to GAAP.)

In other words, accentuating the positive while ignoring the negative can make a sow's ear look like a silk purse.
 
Yup. It's astonishing how many people in this thread seem to think that you simply say "yes, it's a 50/50 chance of being a 1" and then walk away.
How did you reach the conclusion that I thought that? Did you assign a 50/50 chance and then walk away?
 
Yeah, this is incorrect. There's no basis for concluding that P(E|H1) + ... + P(E|Hn) = 1. You could be dealing with an event that was highly likely to happen regardless of which hypothesis is true, or one that was highly unlikely to happen regardless of which hypothesis is true (my "child named Sue" example).

Ugh. Not sure how I didn't see this before!

So yeah I was totally wrong about this.

Now we are left to argue whether setting P(E) to 1 is valid or not, since that is what I was really getting at in my OP, apparently (I just didn't know it).
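For the record, here's a toy sketch (made-up numbers) of the corrected picture: P(E|H) and P(E|~H) are free to take any values and needn't sum to 1; P(E) comes from weighting each likelihood by its prior, not from setting it to 1.

```python
# Made-up numbers: E = "the child is named Sue", H = some hypothesis.
p_H = 0.5               # agnostic prior
p_E_given_H = 0.02      # E is unlikely whether or not H is true,
p_E_given_notH = 0.01   # so the two likelihoods sum to 0.03, not 1.

# Law of total probability: each likelihood is weighted by its prior.
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)

# Bayes' theorem then gives the posterior for H.
p_H_given_E = p_E_given_H * p_H / p_E
print(p_E, p_H_given_E)  # 0.015 0.666...
```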
 
Pretty ironic considering the reaction to the OP's claim about how wrong I was (Pr(E|H) + Pr(E|~H) = 1). :rolleyes:

It has been agreed only that I was partially wrong.

The debate continues as to whether it is valid to estimate P(E) at 1.0 given we only have a single trial to gather data on.

Why don't you tell me what you would do, Malerin?

Assuming a magic number generator like in Robin's example or my example, and assuming you press the button and receive a value of 1, what is the unconditional probability that you would receive a 1? What value would you estimate it as, and why?
 
Could you answer the question, "should I assign a probability of 0.5?"

Linda
I should first have to answer the question "what do I assign the probability to?".

If I knew in advance that it was an electronic coin tosser or an electronic die then it would not be valid to assign 0.5 to the probability of the number being "1", instead I should assign 0.5 to the probability of it being a coin tosser or die.

But if I did not know whether it was a coin tosser or die then I guess I could calculate the number of pixels on the screen, work out the largest number that could legibly be displayed on that screen and then my agnostic probability about the number of possibilities would be the number of subsets within that range.

And even then I haven't even gotten near the question of the probability that it is a "1". The probability that it is a "1" might still be 0.5, because I might still have an electronic coin tosser after all.
 
But I can also use this structure to keep out information that I know beforehand will be fatal to my argument, which is how Malerin is trying to use it. For example, he assumes (correctly)

This is only potentially correct if this assumption is made before the word 'God' is even meant to apply to something - that is, if the assumption is made before someone indicates that it is the world with the invisible sky daddy, rather than the world without the invisible sky daddy, that has 'God'. Clearly we are way beyond that point in any of our discussions here, so it isn't really appropriate to remain maximally uncertain. Do you really think that Malerin is saying, "I am maximally uncertain that God exists and after I consider the fine-tuning argument I am ever so slightly less maximally uncertain that God exists"? The purpose of his argument is not really to advance our certainty. It's to grossly misrepresent the degree of precision available to her/him. I think that by presenting the agnostic position as 'p=0.5', instead of 'maximally uncertain', you are playing right into her/his hand.

ETA: You will notice that when people wish to represent true agnosticism, they tend to use nonsense words.

Linda
 
Anyhow - God is a necessary being and so has a probability of 0 or 1.

If you identify an entity with any other probability of existing then it ain't God.
 
It has been agreed only that I was partially wrong.

The debate continues as to whether it is valid to estimate P(E) at 1.0 given we only have a single trial to gather data on.

Why don't you tell me what you would do, Malerin?

Assuming a magic number generator like in Robin's example or my example, and assuming you press the button and receive a value of 1, what is the unconditional probability that you would receive a 1? What value would you estimate it as, and why?

It's a tough issue, which is why it's called "the problem of old evidence". I've referenced it at least a dozen times in the other thread and included a paper arguing that the old evidence of life existing is ultimately disastrous to the FT argument.

We KNOW life exists, so how can Pr(E) be anything but 1? Then again, we knew about Mercury's eccentric orbit before Einstein, but relativity's prediction of Mercury's orbit was one of the most powerful pieces of confirming evidence, even though it was "old evidence". There are various solutions to this, all with problems attached. I like the counterfactual one the most: try to imagine how surprised you would be had you just learned about the evidence.

In the case of the FT argument, you can try the following thought experiment. Suppose you are about to be placed into a universe. All you know about this universe is that the chances of it being the kind of universe you can survive in are 1 in ten trillion trillion. Bad news, you think. You are placed in the universe and discover you can survive in it. Very surprising! Did you beat the odds or did someone rig it for you?

This assumes:
1. You were only placed in one universe. If you were put in universe after universe, you would eventually be placed in one you could survive in.
2. You have correctly estimated the kind of universe you can survive in. If the odds were 1 in 5 instead of trillions, it wouldn't be that surprising.
3. The person calculating the odds for you is correct. If their calculation is off and the odds are 1 in 3 instead of trillions, you wouldn't be surprised.

How this relates to the FT argument:
1. How likely is a multiverse?
2. How likely is it life could flourish in conditions we think are inhospitable (e.g., a universe with no stars)?
3. How likely is it the universe would really be that different if the constants had different values? Maybe the difference in one value would be compensated for by the difference in another value.
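If it helps, here's the bare arithmetic behind that thought experiment (all of these numbers are placeholders, not estimates I'm defending): with a 50/50 prior the posterior is driven entirely by the ratio of the two likelihoods, which is exactly where the three questions above bite.

```python
# Placeholder numbers only -- the point is the structure, not the values.
p_design = 0.5                  # agnostic prior for "someone rigged it"
p_chance = 0.5
p_life_given_chance = 1e-25     # "1 in ten trillion trillion"
p_life_given_design = 1.0       # assumed: a rigger guarantees a survivable universe

p_life = p_life_given_design * p_design + p_life_given_chance * p_chance
posterior_design = p_life_given_design * p_design / p_life
print(posterior_design)         # ~1.0 under these assumptions

# A multiverse (question 1) effectively replaces p_life_given_chance with the
# chance that SOME universe is survivable, which can be close to 1 -- and the
# posterior then falls back toward the 0.5 prior.
```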
 
It's a tough issue, which is why it's called "the problem of old evidence". I've referenced it at least a dozen times in the other thread and included a paper arguing that the old evidence of life existing is ultimately disastrous to the FT argument.

We KNOW life exists, so how can Pr(E) be anything but 1? Then again, we knew about Mercury's eccentric orbit before Einstein, but relativity's prediction of Mercury's orbit was one of the most powerful pieces of confirming evidence, even though it was "old evidence". There are various solutions to this, all with problems attached. I like the counterfactual one the most: try to imagine how surprised you would be had you just learned about the evidence.

In the case of the FT argument, you can try the following thought experiment. Suppose you are about to be placed into a universe. All you know about this universe is that the chances of it being the kind of universe you can survive in are 1 in ten trillion trillion. Bad news, you think. You are placed in the universe and discover you can survive in it. Very surprising! Did you beat the odds or did someone rig it for you?

This assumes:
1. You were only placed in one universe. If you were put in universe after universe, you would eventually be placed in one you could survive in.
2. You have correctly estimated the kind of universe you can survive in. If the odds were 1 in 5 instead of trillions, it wouldn't be that surprising.
3. The person calculating the odds for you is correct. If their calculation is off and the odds are 1 in 3 instead of trillions, you wouldn't be surprised.

How this relates to the FT argument:
1. How likely is a multiverse?
2. How likely is it life could flourish in conditions we think are inhospitable (e.g., a universe with no stars)?
3. How likely is it the universe would really be that different if the constants had different values? Maybe the difference in one value would be compensated for by the difference in another value.

None of these scenarios are analogous to Robin's example or my example.

What would your estimate for the unconditional probability of getting a 1 be in such an example?
 
Well some here have suggested that 0.5 is the probability you should assign even if there is no information at all.

But my point is that even if you have information you cannot assign a probability of 0.5 just because you can write a "yes/no" question about it.

If you have a coin, but you don't know if it is fair, then it is valid to use 0.5 because you know that there are two possibilities.

And if you don't know in advance whether the random event generator will be a coin or a die, then you can assign 0.5 to the probability that it will be either a coin or a die, but it would be invalid in this case to assign 0.5 to the probability that you will get a "1".

It is invalid because I am ignoring the information that I don't know whether there might be 2 or 6 possibilities.

In other words, what you don't know can be information in itself.


Right, I agree with you. I think the point, though, is that we can create a non-realistic situation to start at "ground zero" if we aren't all that sure or if we want to pretend that we have no information.

This is, of course, a lie, as we all tried to say from the outset.

But, we can certainly start there and build a more 'realistic' probability with repeated "runs" through the equations, each posterior probability serving as a new prior probability.

I don't see any particular problem doing it as long as we are clear what we are doing. It's completely artificial, but how else are we going to assign a prior probability to something like God, I mean, for people who want to waste their time on it?
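A bare-bones sketch of those repeated "runs" (the likelihoods are made up; Python only for concreteness): each pass through Bayes' theorem turns the previous posterior into the next prior.

```python
# Start from the artificial 0.5 prior and update on a series of pieces of
# evidence, each with its own (made-up) likelihoods (P(E_i|H), P(E_i|~H)).
prior = 0.5
evidence = [(0.8, 0.3), (0.6, 0.5), (0.2, 0.7)]

for p_e_h, p_e_not_h in evidence:
    p_e = p_e_h * prior + p_e_not_h * (1 - prior)
    prior = p_e_h * prior / p_e    # posterior becomes the new prior
    print(round(prior, 3))         # 0.727, 0.762, 0.478
```

(This treats the pieces of evidence as conditionally independent given H; if they aren't, the later likelihoods have to be conditioned on what has already been used.)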

I completely agree that the whole exercise is stupid, though figuring out the mistakes was fun. God either is or is not. For us there is no proof. There is a decision.

Bart Ehrman, commenting on his opponent's use of a similar argument (it was Swinburne's argument for the likelihood of the resurrection), once said, "You can't be serious, using a math formula to prove God? If anyone tried to do that in my institution they'd be laughed off the stage."
 
I don't see any particular problem doing it as long as we are clear what we are doing. It's completely artificial, but how else are we going to assign a prior probability to something like God, I mean, for people who want to waste their time on it?
Agreed. Back to my closet example. If instead of saying there is a "closet" I said there are two doors and behind one is a million dollars and behind the other nothing, we could, for argument's sake, assume a 50/50. Hey, this was the basis of Let's Make a Deal.

So I accept that if we, for argument's sake, agree on a 50/50 proposition to analyze the probability of any binary decision, then it's a good assumption. The problem is, as Dr.Kitten points out, we know something about people and closets. Conversely, we know something about "god", or at least what people think god is. To apply Bayes' theorem to god we need to strip away our preconceptions and assume no knowledge, just as we would have to do with the closet. And any attempt to define god would likely change the .5 probability just as assuredly as if you knew that the door with the money behind it in the previous paragraph was my closet door.

FTR: I'm still not convinced I'm wrong in my argument that, since there are an infinite number of existential statements that are false and only a finite number of existential statements that are true, by sheer numbers alone the probability of the existence of anything about which nothing is known is significantly less than .5 by default. It has to be (it seems to me). Distribution or not.

That said, again, I can see the value of taking as a premise a .5 assumption as a starting point.

However: A is either B or it is not B.

There are more true statements of the form "A is not B" than there are of the form "A is B".

Polar bears are white.
Polar bears are not green.
Polar bears are not blue.
Polar bears are not chartreuse.
 
FTR: I'm still not convinced I'm wrong in my argument that, since there are an infinite number of existential statements that are false and only a finite number of existential statements that are true, by sheer numbers alone the probability of the existence of anything about which nothing is known is significantly less than .5 by default. It has to be (it seems to me). Distribution or not.
If I ask a random person on the forum to give me "any integer, any integer at all, more than zero" I would say there is a much higher than 0.5 probability they will name a number between 0 and Graham's number (except maybe now that I've made that statement :D). And yet there are infinitely many numbers larger than Graham's number, and only finitely many smaller. What gives? Simple - smaller numbers are more likely to be chosen than larger numbers.

There could be a similarly skewed distribution for "existential statements", where "less complex" statements stand in for smaller integers in being more probable to be true and also more likely to come up.
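A toy illustration of that (the halving rule below is just a stand-in for any small-number-favouring distribution): even though there are infinitely many integers above any cutoff, a distribution that favours small numbers can put almost all of its probability below it.

```python
# Stand-in assumption: the probability of naming the integer n is (1/2)**n.
# Infinitely many integers exceed 40, yet the total probability of naming
# any of them is vanishingly small.
p_at_most_40 = sum(0.5 ** n for n in range(1, 41))
print(p_at_most_40)      # 0.9999999999990905
print(1 - p_at_most_40)  # ~9.1e-13 -- the combined weight of ALL larger integers
```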

However: A is either B or it is not B.

There are more true statements of the form "A is not B" than there are of the form "A is B".

Polar bears are white.
Polar bears are not green.
Polar bears are not blue.
Polar bears are not chartreuse.
That is simply not true. You are neglecting Bs of the form "not green".

Polar bears are not white.
Polar bears are not not green.
Polar bears are not not blue.
Polar bears are not not chartreuse.
 
If I ask a random person on the forum to give me "any integer, any integer at all, more than zero" I would say there is a much higher than 0.5 probability they will name a number between 0 and Graham's number (except maybe now that I've made that statement :D). And yet there are infinitely many numbers larger than Graham's number, and only finitely many smaller. What gives? Simple - smaller numbers are more likely to be chosen than larger numbers.
So, if I understand you right, your argument is one of psychology?

There could be a similarly skewed distribution for "existential statements", where "less complex" statements stand in for smaller integers in being more probable to be true and also more likely to come up.
There could be a skewed distribution in any question regarding probability. Regarding the flipping of a non-biased coin: "If one were to keep track of the proportion of the time that the number of heads exceeded the number of tails, one might be surprised that it is rarely close to half."[1]

That is simply not true. You are neglecting Bs of the form "not green".

First, you got it wrong: it's "not not not green".

Not green = True.
Not not green = False.
Not not not green = True.

I honestly hadn't neglected it. That's just sleight of hand. It doesn't change the truth value of whether or not a polar bear is green. If you are consistent with your logic then a polar bear is also not not white.

The point is that there are no green polar bears. (Not not) = is. The salient point isn't the "not" but the truth value of the color of polar bears. Your argument is specious.

BTW: You could use the same sleight of hand with dice. Not 7. Not not 7. Not not not 7. It really doesn't work that way though.

But just for fun.

There are 30 not-7's (out of the 36 two-dice outcomes)
There are 6 7's

Now, how exactly do we take into account the not not 7's? Do you honestly suggest that to calculate probability we need to account for the not not 7's in addition to the 6 7's? Hint: see missing dollar puzzle.

Now, after all of that, at the end of the day if we take into account the not not nots we only increase the number of A's that are not B.
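For what it's worth, the straight enumeration for two dice (no not-nots required):

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # the 36 equally likely two-dice outcomes
sevens = [r for r in rolls if sum(r) == 7]     # (1,6), (2,5), ..., (6,1)
print(len(sevens), len(rolls) - len(sevens))   # 6 sevens, 30 not-sevens
print(len(sevens) / len(rolls))                # 1/6 = 0.1666...
```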

GA: One more thing, don't get frustrated with me. I can be maddening as my rhetoric can give me an air of authority that I don't deserve. I concede that. I might be thick but I usually catch on eventually. Some just give up on me and that's fine also. I really do appreciate your good nature and tone. :)


[1] Paulos, Innumeracy, p. 57.
 
<snip>

In the case of the FT argument, you can try the following thought experiment. Suppose you are about to be placed into a universe. All you know about this universe is that the chances of it being the kind of universe you can survive in are 1 in ten trillion trillion. Bad news, you think. You are placed in the universe and discover you can survive in it. Very surprising! Did you beat the odds or did someone rig it for you?

<snip>

In the case of the FT argument, you can try the following thought experiment. Suppose you are held at gunpoint and your only chance of survival is to deal 52 playing cards. After dealing the cards, someone (not you - you understand how to apply probability theory) comes along and calculates the probability of the cards being dealt in the order they were. It's a very, very, very small number. This person then starts asking if someone rigged the deck.

ETA: ...at which point you grab the gun and attempt to pistol whip some sense into them.
 
No, it's just .5. In the absence of evidence for or against, a logically possible proposition is assigned an agnostic value.

Another way to put it is that your maximum uncertainty is symmetrical. But realistically, Malerin, are you really attempting to go with maximum uncertainty here? Are you really trying to smear the fine-tuning argument to cover all the possibilities or are you trying to refine the possibilities?

And it has already been pointed out that your observation that your idea has been "confirmed" is simply a consequence of how you set up the scenario. Specifically, you always assume that life is more likely in the presence of a fine-tuner, rather than less likely. This is a decidedly unagnostic position to take. If you were actually honest about your agnosticism, you would set it up so that life could be less likely in the presence of a fine-tuner as well.

If you knew a snargle can land quarg, you should assign a .5 value to the claim "this snargle will land quarg." Maybe it lands quarg 90% of the time, maybe 1% of the time. You don't know. To assign a value other than .5 would require some type of evidence.

This would get back to what rocketdodger is referring to - what effect does an observation have on our agnosticism? What you are suggesting is that your knowledge that a snargle can land quarg should be ignored in order to remain maximally uncertain.

Linda
 
Assuming a magic number generator like in Robin's example or my example, and assuming you press the button and receive a value of 123, what is the unconditional probability that you would receive a 123? What value would you estimate it as, and why?

(rocketdodger actually said "1", not "123". I changed it to a number that can't be a probability, to make the discussion below perhaps clearer. Nothing essential depends on which number is used.)

There are two 'levels' of probability here, which should be distinguished.

One is what you're calling "the unconditional probability that you would receive a 123". This is supposed to be a real property of the machine, with a single definite value, between 0 and 1, which we happen not to know. Let's call this number "X". (The value of X determines, for example, how often, over the long term, the machine will display 123 rather than any other number if we repeatedly press its button.)

Now, according to the Bayesian approach to probability, whenever we are uncertain about something, we can use probability to describe the nature of our uncertainty. Let's forget for the moment that X is a (different kind of) probability, and just treat it as a number about which we are uncertain. Then, we can say things like, for example, "P(0.3 < X < 0.4) = 0.2" (translation: "there's a 20% probability that X is between 0.3 and 0.4"), where this probability is not an objective property of the machine, but only a way of characterizing our limited knowledge of the machine. X doesn't 'really' have a 20% probability of being between 0.3 and 0.4, or any other probability either. It is a single number, which either is in that range or not. We just don't know which. But we might have some ideas about which, and we quantify these ideas by saying that it has a 20% probability of being there.

Of course, 0.3-to-0.4 is just an example, and we can talk about the various probabilities that X has of being in various other ranges as well. All these probabilities together constitute a 'probability distribution for X'. Again, a probability distribution for X is not an objective property of the machine, but only a way for us to express, as precisely as we can, our more or less vague ideas about the machine. ("We don't know exactly which number X is, but here's a summary of what we do know about X.")

A probability distribution for X contains a lot more data than a single 'estimate' of X (to quote rocketdodger's question). From a distribution, we can, if we wish, derive various single estimates---for example, the mean (or the median, etc.). But we can't forget the entire distribution, and just remember the mean, if we want to be able to change our ideas about X appropriately when we get new information relevant to X---for example, when we press the machine's button and see what number it displays.

The appropriate way to change our ideas about X is to use Bayes's theorem to produce a new distribution for X from the old distribution. The old mean alone is not enough to enable us to produce even a new mean, let alone an entire new distribution.
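To make that concrete, here is a rough sketch (Python; X is discretized onto a grid, and the uniform starting distribution is only an example) of updating an entire distribution for X after one press of the button, rather than carrying forward a single point estimate.

```python
# Discretize X (the machine's long-run frequency of showing 123) onto a grid.
grid = [i / 100 for i in range(101)]       # candidate values of X
prior = [1 / len(grid)] * len(grid)        # example prior: uniform over the grid

# One press of the button, and the machine shows 123.
# The likelihood of that observation under each candidate value is just X itself.
likelihood = grid

# Bayes's theorem, applied pointwise and renormalized:
unnorm = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]

prior_mean = sum(x * p for x, p in zip(grid, prior))          # 0.5
posterior_mean = sum(x * p for x, p in zip(grid, posterior))  # ~0.67
print(prior_mean, posterior_mean)
```

A different prior distribution with the same mean of 0.5 would give a different posterior mean after the same observation, which is why the mean alone can't be carried forward.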
 
Good post.

The appropriate way to change our ideas about X is to use Bayes's theorem to produce a new distribution for X from the old distribution. The old mean alone is not enough to enable us to produce even a new mean, let alone an entire new distribution.

I think most people tend to think of the probability as a uniform distribution (the probability of 0.3<x<0.4 is 0.1, of 0.4<x<0.5 is 0.1, etc.). Or maybe as a normal distribution. But the least informative prior may be a U-shaped distribution with maximum density at p=0 and p=1 and a minimum at p=0.5.
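A quick sketch of that comparison, using the Beta family as a stand-in: Beta(1,1) is the uniform prior, and Beta(1/2,1/2) (the Jeffreys prior) is the U-shaped one with its mass piled up near 0 and 1.

```python
def beta_posterior_mean(a, b, successes, failures):
    """Posterior mean of a Beta(a, b) prior after the observed successes/failures."""
    return (a + successes) / (a + b + successes + failures)

# One press of the button, one "success" (the event of interest occurred).
print(beta_posterior_mean(1.0, 1.0, 1, 0))  # uniform prior  -> 0.666...
print(beta_posterior_mean(0.5, 0.5, 1, 0))  # U-shaped prior -> 0.75
```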

Linda
 
