
When to "stop" doing science?

That definition only holds for equally likely outcomes.
 
Yes, I like what it says here:

Classical probability suffers from a serious limitation. The definition of probability implicitly defines all outcomes to be equiprobable. While this might be useful for drawing cards, rolling dice, or pulling balls from urns, it offers no method for dealing with outcomes with unequal probabilities.

This limitation can even lead to mistaken statements about probabilities. An often given example goes like this:

I could be hit by a meteor tomorrow. There are two possible outcomes: I will be hit, or I will not be hit. Therefore, the probability I will be hit by a meteor tomorrow is 50%. Of course, the problem here is not with the classical theory, merely the attempted application of the theory to a situation to which it is not well adapted.
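(As a minimal sketch, not part of the quoted text: the classical rule only produces a number once you have fixed a set of equally likely outcomes, which is exactly what the meteor example never does. The function name here is just for illustration.)

Code:
# Classical (equiprobable) rule: P(A) = favorable outcomes / total outcomes.
# Only meaningful when every outcome in the sample space is equally likely.
def classical_probability(favorable, sample_space):
    favorable = set(favorable) & set(sample_space)
    return len(favorable) / len(sample_space)

# Fair coin: the two outcomes are (assumed to be) equally likely.
print(classical_probability({"heads"}, {"heads", "tails"}))  # 0.5

# "Hit by a meteor" vs. "not hit" are NOT equally likely, so feeding them
# into the same ratio yields 0.5 but is a misuse of the rule, not a probability.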
 
It seems a little strong to say "not at all" to that particular definition, don't you think?

I have no problem with saying "not at all" to that particular definition. It's simply wrong.

You should be able to see that for yourself from your own examples.
 

I can see that the classical theory of probability does not explain the (multiple outcome) coin situation because each event is not equally likely. I agree. It was wrong of me to apply that particular theory to those situations...

...and that is exactly what T'ai Chi is doing.
 

Nope, I'm not seeing it.

T.C. is applying the frequentist definition of probability to the event of a coin being tossed heads. The more times you throw the coin, the more certain you can be that your ratio of heads to tails is representative of the underlying distribution.

He's asking at what point you "know" that heads has a probability of 0.5 of occurring. The obvious answer is that it depends: on your state of prior knowledge, on how accurately you need to know, and on how certain you need to be that you're correct.
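(To make "it depends" concrete, here is a rough sketch of my own, using the standard normal approximation to the binomial and the worst case p = 0.5; the function and its numbers are illustrative, not anyone's claim in the thread.)

Code:
import math

# Rough number of flips needed to pin down P(heads) to within +/- eps
# at the stated confidence level, worst case p = 0.5.
def flips_needed(eps, confidence=0.95):
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    return math.ceil((z / (2 * eps)) ** 2)

print(flips_needed(0.05))  # ~385 flips for +/- 0.05 at 95% confidence
print(flips_needed(0.01))  # ~9604 flips for +/- 0.01 at 95% confidence

Demanding tighter accuracy or higher confidence drives the required number of flips up quickly, which is the sense in which the answer "depends".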
 

If he is asking how you know that heads has a probability of .5: if both sides are equally likely, and there are only two possible outcomes, then the classical theory applies here; either heads or tails will show up. The probability of heads is the one way heads can occur divided by the two possible outcomes: 1/2.

Each coin flip is a separate event. For each event you will either have heads or tails.
 

But we don't know whether they're equally likely; that's what he's testing. Or pretending to test, or something.
 
Is it a recent development where people think any picture with some lines, some points, and some words makes that picture a graph? :boggled:
Nope, but it is one (you prove it) where people don't read, or forget they've read, the post where I already addressed this silly sidetrack complaint in full.
Oh, you thought I was speaking about you? It is formulated as a general question. :rolleyes:
 
How did you come to be assured that P(heads) = .5?

This is directly from the classical theory of probability (the number of ways event A can occur divided by the number of possible outcomes).

Notice that the classical theory doesn't say, "If you flip a 'head' the first time, you're gonna get 'tails' the next cuz it's 50-50."

It just says for each event (a coin flip) you are either going to get heads or tails (if heads and tails are equally likely and the only two possible outcomes).
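(A quick simulation, added here as an aside, of the "no memory" point in the post above: conditioning on the previous flip being heads doesn't change the next one. The code is illustrative only.)

Code:
import random

random.seed(0)
flips = [random.choice("HT") for _ in range(100_000)]

# Fraction of heads among flips that immediately follow a head.
after_head = [b for a, b in zip(flips, flips[1:]) if a == "H"]
print(after_head.count("H") / len(after_head))  # close to 0.5: the coin has no memory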
 
Not at all. A colleague of mine in the statistics department has a set of loaded dice that almost always roll sixes. And, for that matter, there are fundamentally three different outcomes for a baseball game -- win, lose, or tie -- but the probability of tying isn't one in three.



... and the probability of a coin landing on its side is one in three?

Just to be clear:

When you say "not at all", it is because you are trying to apply the classical theory (which I gave an accurate definition for) to your colleague's dice and the theory doesn't fit. It is an incorrect application of the theory.

T'ai Chi is most certainly referencing the classical theory when he says P(heads) = .5

I also incorrectly applied the theory to the multiple possibilities of a coin landing on its side, etc. Because the outcomes are not all equally likely.
 
Try spinning the coin instead, on a flat surface. Nickels (and perhaps pennies, and I have no idea for non-US coins) do not have a 50/50 chance of heads and tails when spun.

TC's question might be better asked using spun coins: How many spins must we make before we say that p(heads) is .60? Or might it be .61? Or any of the infinite possibilities between? We do not have an a priori answer for this one, so we cannot have an arbitrary "truth" for it to converge on. A posteriori probability is as good as we will ever get.

(related--in one of my stats books years ago it mentioned that older dice, with the pips carved out as they were, were accidentally "loaded" ever so slightly; sixes were more common than ones because of the greater amount of die removed on the 6 side. How was this discovered? Vegas, baby. Casinos machine-tossed dice for thousands of trials and compared their a posteriori probabilities to the a priori 1/6.)

The answer? How exact a number do you need? Practically speaking, how close do you need to be? Different needs will give rise to different answers.
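(A sketch of the Vegas-style check described above, with a made-up bias since the real frequencies aren't given: roll a slightly loaded die many times and compare the observed frequency of sixes with the a priori 1/6. The weights below are hypothetical.)

Code:
import random

random.seed(1)
faces = [1, 2, 3, 4, 5, 6]
# Hypothetical slightly "loaded" die: six a bit more likely than one.
weights = [0.160, 0.1665, 0.1665, 0.1665, 0.1665, 0.174]

rolls = random.choices(faces, weights=weights, k=1_000_000)
observed = rolls.count(6) / len(rolls)
print(f"a priori 1/6 = {1/6:.4f}, observed = {observed:.4f}")
# With enough machine-tossed trials the small bias becomes visible,
# which is how the carved-pip effect was noticed.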
 
Say the following graph (attached) represents the relationship between science and its convergence to the Truth.

For what distance between science and Truth are we satisfied that science is describing/predicting/modelling Truth well?

That is, for what tolerance do we feel good that

|science - Truth| < tolerance

holds, and how do we know that we've attained such a tolerance?
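(Purely as an illustration of the tolerance question, and my own analogy rather than the poster's: in numerical work the "Truth" is also unknown in advance, so the usual dodge is to stop when successive estimates move by less than a tolerance that the user, not the method, has to choose.)

Code:
# Estimate sqrt(2) by Newton's method, stopping when successive estimates
# differ by less than a chosen tolerance. Nothing in the method itself
# says which tolerance is "good enough"; that is the user's decision.
def newton_sqrt2(tolerance):
    x = 1.0
    while True:
        x_next = 0.5 * (x + 2.0 / x)
        if abs(x_next - x) < tolerance:
            return x_next
        x = x_next

print(newton_sqrt2(1e-3))
print(newton_sqrt2(1e-12))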

The whole coin thing is distracting - as the answer to this question is:

It is the nature of the scientific method that one should never be satisfied.

That is what I was trying to convey by saying that we still refine well established theories and laws.

Your question also seems to ask when something becomes a law or theory and when and how that changes; this quote answers that question:


A scientific theory or law represents an hypothesis, or a group of related hypotheses, which has been confirmed through repeated experimental tests. Theories in physics are often formulated in terms of a few concepts and equations, which are identified with "laws of nature," suggesting their universal applicability. Accepted scientific theories and laws become part of our understanding of the universe and the basis for exploring less well-understood areas of knowledge.

Theories are not easily discarded; new discoveries are first assumed to fit into the existing theoretical framework. It is only when, after repeated experimental tests, the new phenomenon cannot be accommodated that scientists seriously question the theory and attempt to modify it. The validity that we attach to scientific theories as representing realities of the physical world is to be contrasted with the facile invalidation implied by the expression, "It's only a theory." For example, it is unlikely that a person will step off a tall building on the assumption that they will not fall, because "Gravity is only a theory."

Changes in scientific thought and theories occur, of course, sometimes revolutionizing our view of the world (Kuhn, 1962). Again, the key force for change is the scientific method, and its emphasis on experiment.
 
When you say "not at all", it is because you are trying to apply the classical theory (which I gave an accurate definition for) to your colleague's dice and the theory doesn't fit. It is an incorrect application of the theory.

The "classical theory" to which you refer is gibberish. No self-respecting probabilist or statistician would give it the time of day.

There's nothing "classical" about it. It's simply WRONG. The textbook that references it is WRONG.
 
The "classical theory" to which you refer is gibberish. No self-respecting probabilist or statistician would give it the time of day.

There's nothing "classical" about it. It's simply WRONG. The textbook that references it is WRONG.

Wow. How do you figure? It's just a definition. Simply asserting that it is wrong doesn't really make a valid argument. What other information can you offer?
 

She's offered as much evidence against it as you've offered for it being anything accurate, or accepted within the field.

I'd first worry about supporting your own claims, before demanding drk support a rebuttal of them.

Frankly, I'm betting on drk.
 
Wow. How do you figure? It's just a definition. Simply asserting that it is wrong doesn't really make a valid argument. What other information can you offer?

Er, the historical definition of probability?

The basis of probability theory began with the work of Pascal, specifically in trying to determine the outcome of an interrupted wager. One of the gamblers was holding to what you call the "classical theory of probability," although it wasn't called that. Pascal's main contribution was to point out the underlying assumption of equal likelihood and to reject that.

The outcome of the Fermat-Pascal correspondence was the statement, which you are misinterpreting as a definition, that if the outcomes of an event are all equally likely, then the probability of a specific outcome is the number of ways that outcome can happen divided by the number of possible events.

That's not, however, a definition. Specifically, it doesn't provide any measure of probability in the case of non-equiprobable events.

Here's Laplace's phrasing on this question:
The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.

Again, note the theoretical primacy -- first we obtain a set of equiprobable events and then calculate the ratio. If we do not have equiprobable events, then the theory of probability as developed by Pascal does not give us a probability answer at all.
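(In symbols, Laplace's measure reads P(A) = (number of cases favorable to A) / (number of equally possible cases), and the ratio is only defined once the set of equally possible cases has been fixed.)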
 

You have just verified what I wrote earlier. I was wrong by leaving out "if the outcomes of an event are all equally probable" and I have admitted that in previous posts.

I'm not sure why it matters to you that I called it a definition. You called it "gibberish" and said "no self-respecting probabilist or statistician would give it the time of day" in one post and then went on to show how a respectable scientist gave it the time of day and it became theory (unless you think Pascal is not respectable).

I agree with this statement/definition/postulate/theory/whateverwillpleaseyou below, that you seem to agree with as well:

"if the outcomes of an event are all equally likely, then the probability of a specific outcome are the number of ways that outcome can happen divided by the number of possible events."
 
You have just verified what I wrote earlier. I was wrong by leaving out "if the outcomes of an event are all equally probable" and I have admitted that in previous posts.

Similarly, if you say that "lightning is a kind of insect," you're wrong. Stating that you left out the word "bug" doesn't make you less wrong.


I'm not sure why it matters to you that I called it a definition. You called it "gibberish" and said "no self-respecting probabilist or statistician would give it the time of day" in one post and then went on to show how a respectable scientist gave it the time of day and it became theory (unless you think Pascal is not respectable).

Pascal didn't give the theory as you expressed it the time of day.

He rejected it outright as being utterly and completely wrong.

You completely misrepresented Pascal's theories.
 

Do you agree or disagree with this?

"If the outcomes of an event are all equally likely, then the probability of a specific outcome are the number of ways that outcome can happen divided by the number of possible events."

I continually acknowledge where I went wrong, but it doesn't make the theory wrong by itself. The coin example is a classic introduction to probability.

I must have misunderstood that you were calling the right theory wrong. I am sorry for misunderstanding your intent. I was not aware that you wished to point out the absence of the words "If the outcomes of an event are all equally likely".

It would have been better to correct me as Wollery did:
"That definition only holds for equally likely outcomes."

Then I would have immediately seen my error.
 
Do you agree or disagree with this?

"If the outcomes of an event are all equally likely, then the probability of a specific outcome are the number of ways that outcome can happen divided by the number of possible events."

Yes. Do you agree or disagree that that statement is substantially different from your original statement that "The very definition of probability is: The probability of event A is the number of ways event A can occur divided by the total number of possible outcomes."

First, the statement with which I agree is not a definition.

Second, the statement with which I agree has a key proviso that you neglected.

I must have misunderstood that you were calling the right theory wrong.

Without those words it is not "the right theory" at all.
 
