
An Impossible Coin Flip?

Re: An Impossible Coin Flip?

69dodge said:
We're still not talking the same language. :)

What do you mean, "is it OK to assume..."? What do you think that assumption entails? What difference does it make whether you assume it or not? Will you act differently in some circumstances than if you don't assume it? If so, how?

Well, wait a sec. Before the barrage of questions for me, let's get back to what you assume, or rather, to how you justify your assumption. :)

You admitted to having no information about the proportion. Then you say that the distribution you assume is uniform. I'm still waiting for a coherent justification of how total ignorance equates to a uniform distribution.
 

Originally posted by SGT
No, what I'm saying is that the result of an experiment is always a fixed value. It's only before performing the experiment that we don't know what this exact value is.
What I don't understand is why you think it important to say that theta has a fixed value, as if that were a special property of this variable.
Ok, I see. Yes, you're right. The same thing can be said about any single experimental result. The reason I stressed it here is that theta did not result from an experiment that can be repeated independently many times. So, saying that theta has a uniform distribution has no implications for the relative frequencies of the various possible outcomes of such a hypothetical set of experiments.
 
jzs, my questions are simply my attempt to find out what you think I'm assuming. I keep saying that I'm not assuming anything except that I have no information about theta. Assigning a uniform distribution to theta is just a Bayesian's way of saying that. You seem to think that I'm assuming more than that. What, exactly, do you think I'm assuming, that needs justification?

Let me put it this way. Suppose I answered, "yes, if you don't know what time your bus will arrive, it's OK to assume all times are equally likely." I guess you would then say, "Well, that can't be right, because it would imply such-and-such, which is obviously nonsense." What is the nonsensical such-and-such that you think it would imply?

I suspect it wouldn't imply it without additional assumptions such as the independence of different random variables, but obviously I am not claiming independence of anything from anything if I'm talking about a single variable.
 
69dodge said:
jzs, my questions are simply my attempt to find out what you think I'm assuming. I keep saying that I'm not assuming anything except that I have no information about theta. Assigning a uniform distribution to theta is just a Bayesian's way of saying that. You seem to think that I'm assuming more than that. What, exactly, do you think I'm assuming, that needs justification?


You're assigning a uniform distribution to theta (or, as you would say, to your knowledge about theta) when you know nothing about theta. So if you know nothing about theta, how do you know to assign one particular distribution, one that implies equal probability on all values of theta?

You go from admitting to knowing nothing about theta to specifying how theta takes, or probably takes, its values!
 
Originally posted by jzs
You're assigning a uniform distribution to theta (or, as you would say, to your knowledge about theta) when you know nothing about theta. So if you know nothing about theta, how do you know to assign one particular distribution, one that implies equal probability on all values of theta?
Because that's the distribution that represents the lack of any specific knowledge favoring one value over another. Since I haven't got any such knowledge, that's the distribution I use.
You go from admitting to knowing nothing about theta to specifying how theta takes, or probably takes, its values!
Theta doesn't take values. It is a single value.

If I were trying to characterize my imperfect knowledge about an entire unknown distribution instead of simply a single unknown number, then I'd assign probability densities to distributions, according to how likely I believed each distribution to be. But there is no need here for such infinite-dimensional stuff. Theta is not a random process, with a "true" distribution that I'm trying to estimate; it is a single number. A uniform distribution, or any other distribution, could therefore not possibly be about theta; I use a uniform distribution purely as a description of what I know about theta the number.
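To make concrete how a uniform distribution can describe knowledge about a single fixed number, here is a minimal sketch of the standard conjugate Beta-Binomial update; the uniform prior on [0, 1] is Beta(1, 1), and the flip counts below are made up purely for illustration:

```python
def posterior_beta_params(alpha, beta, heads, tails):
    """Conjugate update: a Beta(alpha, beta) prior on theta plus
    observed coin-flip counts gives a Beta posterior."""
    return alpha + heads, beta + tails

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform prior on theta is Beta(1, 1): every value in [0, 1]
# is treated as equally plausible before any flips are seen.
prior = (1, 1)

# Hypothetical data: 7 heads and 3 tails.
a, b = posterior_beta_params(*prior, heads=7, tails=3)

print((a, b))           # (8, 4)
print(beta_mean(a, b))  # 8/12 ≈ 0.667
```

Note that nothing about theta itself changes during the update; only the distribution describing what is known about it does.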
 
69dodge said:
I use a uniform distribution purely as a description of what I know about theta the number.

But the point is that you know nothing about it, and the uniform distribution doesn't represent knowing nothing about it; it represents a hypothesis of equal probability.

At this point I'll agree to disagree. :)
 
jzs said:
But the point is that you know nothing about it, and the uniform distribution doesn't represent knowing nothing about it; it represents a hypothesis of equal probability.


The uniform distribution is the "least informative" in a strictly technical sense: among all distributions on the same space, it has maximum entropy, encoding no preference for any value over any other.

One aspect of Bayesian math is that there's no way to conjure a posterior out of thin air; the math requires an initial distribution to start working with. So the initial distribution is chosen to be as bias-free a hypothesis as possible.
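The "least informative" character of the uniform distribution can be made concrete with Shannon entropy: on a fixed support, the uniform distribution maximizes entropy. A minimal sketch on a discrete support of 10 outcomes (the alternative distributions are arbitrary examples chosen for comparison):

```python
from math import log

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(x * log(x) for x in p if x > 0)

n = 10
uniform = [1 / n] * n
# Two arbitrary alternatives on the same support, for comparison.
skewed = [0.5] + [0.5 / (n - 1)] * (n - 1)
peaked = [0.91] + [0.01] * (n - 1)

print(entropy(uniform))  # log(10) ≈ 2.30, the maximum for 10 outcomes
print(entropy(skewed))   # lower
print(entropy(peaked))   # lower still
```

Any departure from uniformity, i.e. any preference for some outcomes over others, strictly lowers the entropy, which is the technical sense in which the uniform prior carries the least information.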
 
Is any event that can be assigned a before-the-fact probability of occurrence "random" rather than "deterministic" in the mathematical sense?

If so, why?
 
hammegk said:
Is any event that can be assigned a before-the-fact probability of occurrence "random" rather than "deterministic" in the mathematical sense?

This is not a meaningful question.

Assigning a probability is an intellectual act on the part of the assigner and has no effect on the object/event to which the probability is assigned. Basically, it's a description that may or may not be correct (and in the case of the Bayesian formalism, is widely acknowledged not to be correct).

It's rather like answering a forced-choice question. If I hand you a sealed box, and force you at gunpoint to "tell" me what's in it, you will probably babble something rather than take a bullet. But what you tell me has no effect on the properties of what's in the box.
 
hammegk said:
Is any event that can be assigned a before-the-fact probability of occurrence "random" rather than "deterministic" in the mathematical sense?

If so, why?
What is your definition for "random" and "deterministic" in the mathematical sense?
 
Good point.

How do you like "an inevitable consequence of antecedent sufficient causes"?

Hmm, I suppose we can debate whether that's 'the' mathematical definition.
 
