Do you believe in Luck?

Does luck exist?

  • Yes, luck exists. Some people just seem to have better or worse luck than others.
    Votes: 20 (15.2%)
  • No, there's no such thing as luck.
    Votes: 102 (77.3%)
  • On planet X, everybody's lucky all the time.
    Votes: 10 (7.6%)
  • Total voters: 132
  • Poll closed.
Regarding the question of whether the percentage of hands with probabilities of winning greater than 50% is a valid estimate of the expected percentage of hands won, in general it is not. I've worked up an explanation and posted it here, since it is off-topic and probably not of much interest to anyone except Beth.

Jay
 
Poker Hands are now approaching Normal Probability Levels

If anyone is still interested, I have new data. We had some life events happen last year, but my husband started tracking all of his all-in poker hands again in January.

He has played a total of 379 hands, both on-line and at garage poker, with 203 wins and 11 chopped pots, giving him a 0.5356 proportion of wins, a 0.0290 proportion of ties, and a 0.5646 proportion of wins and ties combined. His average probability of winning was 0.5452. The average probability of a tie was 0.0397.

The probability of getting 203 or fewer wins out of 379 hands with a probability of winning of 0.5452 is 0.37. The probability of getting 11 or fewer ties out of 379 hands with a probability of a tie of 0.0397 is 0.18. The probability of getting 214 or fewer wins or ties out of 379 hands, with the average probability of a win or tie being 0.5849 (0.5452 + 0.0397), is 0.23.
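For anyone who wants to check these figures, here is a minimal sketch of the same cumulative binomial calculations in Python; scipy's binom.cdf plays the role of Excel's built-in cumulative binomial function, and the inputs are the counts and average probabilities quoted above.

```python
from scipy.stats import binom

n = 379                          # all-in hands tracked
p_win, p_tie = 0.5452, 0.0397    # average per-hand probabilities

print(binom.cdf(203, n, p_win))          # P(203 or fewer wins)          ~ 0.37
print(binom.cdf(11, n, p_tie))           # P(11 or fewer ties)           ~ 0.18
print(binom.cdf(214, n, p_win + p_tie))  # P(214 or fewer wins or ties)  ~ 0.23
```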

If you're interested, I've recently started a blog and posted my Excel analysis file there.
http://bethclarkson.com/?p=390


The file has a simulation of Mark’s games. For some reason, the simulation has a lot more results below average than above average. I’ll have to ponder that and see if I can figure out why. I suspect it's an error in my programming.
 

I can't download the spreadsheet. Your server gives me a "bad request" error.
 
I know this is not what the thread is referring to, but I believe in luck... the harder you work, the luckier you get! :)
 
I believe in some sort of causal determinism, but I also believe in infinity. So that's a cryptic no and yes, in that order.

Also, don't ask me to defend it to the bone, especially the part about infinity. Take it as an opinion you heard in an episode of Two and a Half Men or something.
 
Thanks for letting me know. I've updated the file. Hopefully you can download it now.

No "luck." Still getting "Bad request." Just a guess, but the problem might be the percent sign (%) in the file name. You might want to try renaming the file to something without any non-alphanumeric characters.
 
No "luck." Still getting "Bad request." Just a guess, but the problem might be the percent sign (%) in the file name. You might want to try renaming the file to something without any non-alphanumeric characters.

Thanks. I've renamed the files without the % sign. I appreciate the feedback. I'm still figuring out the blogging controls.
 
What about the possibility of cheating? Perhaps this is controlled for somehow (though I can't think of how). But if the other players at the table are sharing their hole cards through another communication channel, they can have a great advantage over the lone player.

IXP
 
That fixed it. I was able to download it.

Thanks. I did not know that about file names. I'll be more careful in the future.

What about the possibility of cheating? Perhaps this is controlled for somehow (though I can't think of how). But if the other players at the table are sharing their hole cards through another communication channel, they can have a great advantage over the lone player.

IXP

This was discussed earlier in the thread.
 
Luck is simply having random good fortune, and because there is no doubt some people experience random good fortune, there is obviously luck. Does that mean there is some supernatural way of imparting luck on someone? If there were, it would mean that the good fortune wasn't random and therefore it would no longer be luck, so the answer always ends up being "yes" there is luck, but you can't do anything to make anyone or anything any "luckier" than they happen to be at any given time.
 
A little over three years ago, I started this thread on luck to get some advice about better ways to separate out the factors of luck and skill in my husband's poker playing.

While I am hesitant to bump such an old thread, I did receive some good input about how to test the idea that my husband's poker results were due to bad luck or bad play, and I thought some of the contributors might appreciate hearing about the results.

My husband collected data on all of his poker games. I've done one analysis for all games during 2013. Starting in January of 2014, I limited my analysis to the live games rather than internet games. The results may make for some interesting discussion.

I've posted my analysis at my blog. To put it briefly, the results do support his contention that his luck is worse than that expected by random chance alone. I've no idea why, but it is a consistent result across a relatively long spread of time and different data analyses.

http://bethclarkson.com/?page_id=46
 
Thanks for this! A very well presented study.
 
The p-value of .0672 for your first test shows no evidence of bad luck. In fact, if you input the t-value into the appropriate Bayes factor calculator at http://pcl.missouri.edu/?q=bayesfactor, I think you'll find that the Bayes factor actually favors the null hypothesis over the alternative.

The p-value of .0086 from your second test is less suggestive of bad luck than it appears. Again, I suggest you enter the appropriate data into Rouder's on-line calculator and observe the Bayes factor. My guess is that it will only modestly favor the alternative hypothesis.
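For readers who want to reproduce the Bayes factors without the web calculator, here is a minimal sketch of the JZS Bayes factor for a one-sample t test, following the integral form in Rouder et al. (2009); the default Cauchy prior scale r = 0.707 and the one-sample degrees of freedom are assumptions on my part, so the on-line calculator's defaults may differ slightly.

```python
import numpy as np
from scipy.integrate import quad

def jzs_bf10(t, n, r=0.707):
    """JZS Bayes factor BF10 for a one-sample t test with n observations
    (Rouder et al., 2009). Values below 1 favor the null hypothesis."""
    df = n - 1
    def integrand(g):
        # Marginal likelihood under H1, integrating over the mixing
        # parameter g of the Cauchy prior on the standardized effect size.
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * df)) ** (-(df + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    m1 = quad(integrand, 0, np.inf)[0]
    m0 = (1 + t**2 / df) ** (-(df + 1) / 2)  # likelihood under H0 (same constants dropped)
    return m1 / m0
```

Plug in the t statistic and sample size from each test; a BF10 near or below 1 means the data do not favor the "bad luck" alternative.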

In your third analysis, dataset 2014-2, the binomial analysis has multiplicity problems which you did not correct for. The chi-squared tests don't have this problem, and are the more appropriate analysis. The p-value for the test on the raw data, .0108, is suggestive, but if we were to convert it to a Bayes factor, I think again we'd see that it is only weak evidence against the null. Unfortunately, I don't think Rouder has a calculator on line for chi-squared tests. As to your chi-squared test on the values of the hands, the p-value is meaningless, because the values do not follow a chi-squared distribution under the null hypothesis.

Additionally, your analysis overall suffers from uncorrected multiplicity, as you've conducted three sets of tests on the same "luck" hypothesis, and any one of them resulting in statistical significance would have allowed you to claim "bad luck."

Furthermore, your analysis raises questions about selection bias.* Why were these specific datasets chosen for analysis and not others that could have been chosen instead? Were the starting and stopping criteria decided in advance, and unrelated to the results? Were there other tests that were run whose results you have not shown? Or were there other data that you looked at but which did not seem promising and hence were not formally analyzed?

Finally, whether you admit it or not, the hypothesis that people have an attribute called "luck" is a supernatural hypothesis, and no matter how statistically significant your results may be, the probability that those results are due to errors in the experiment is overwhelmingly greater than the probability that they are due to the supernatural hypothesis. If not due to random error, then it is almost certain that significant results are due to systematic error, and the smaller the p-values, the more that systematic error is the favored explanation. Convincingly small p-values of supernatural hypotheses give us an opportunity to learn how a well-intentioned experiment can go wrong.

*See my next post for one possible selection bias.
 
I found the following in an earlier post on Beth's blog:

Recently he’s started recording three particular hands: {5,2}, {Q, 8}, and {A, K}. He started after noticing what seems like an awful lot of five/deuce hands.


The question is, did he include the hands that brought the supposed anomaly to his attention in the dataset? If so, then the dataset is biased.
 
Depends on how you define 'luck'. If you define it as predictive, then no. If you define it as descriptive of what has occurred, then yes, it exists.

Wanted to say something like this. Then found out that something like this has been said already.

Documentable luck exists in a retrospective sense, but we cannot very meaningfully document or evaluate luck in a predictive sense. Unless anyone can give an example of that.
 
jt512 - Thanks for taking the time to read over my analysis and provide me with feedback.

The p-value of .0672 for your first test shows no evidence of bad luck. In fact, if you input the t-value into the appropriate Bayes factor calculator at http://pcl.missouri.edu/?q=bayesfactor, I think you'll find that the Bayes factor actually favors the null hypothesis over the alternative.

The p-value of .0086 from your second test is less suggestive of bad luck than it appears. Again, I suggest you enter the appropriate data into Rouder's on-line calculator and observe the Bayes factor. My guess is that it will only modestly favor the alternative hypothesis.

I'm not sure why you think a Bayes computation is suitable for this. Could you explain? The point of the data collection we used was to provide a sample that had a known probability distribution due solely to random chance, so I'm not sure why we need to include a subjective prior probability with respect to the result.

However, use of Bayes is not something that comes up for me in my work, so I'm a bit rusty on it. What is the subjective prior being used? What does the 'scale r on effect size' represent? Does the calculator you linked to have a method to include results from multiple experiments to adjust the final result to include all known information?

In your third analysis, dataset 2014-2, the binomial analysis has multiplicity problems which you did not correct for.
It's true I didn't correct for multiplicity with the binomial tests. A trinomial distribution would be most appropriate, but I was using Excel and the built-in binomial function is much easier than programming in an exact trinomial probability distribution. I don't think the multiplicity issue would cause much difference in the results (only three binomial comparisons were made), but feel free to do those computations yourself and let me know if I'm mistaken on that point.
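For what it's worth, the exact trinomial tail probability is not hard outside of Excel; here is a minimal sketch in Python using scipy's multinomial distribution, shown with the all-in counts posted earlier in the thread (379 hands, 203 wins, 11 ties) purely as an example.

```python
from scipy.stats import multinomial

def p_wins_and_ties_at_most(n, p_win, p_tie, max_wins, max_ties):
    """Exact trinomial P(wins <= max_wins and ties <= max_ties) when each of
    the n hands is independently a win, tie, or loss with fixed probabilities."""
    p_loss = 1.0 - p_win - p_tie
    total = 0.0
    for w in range(max_wins + 1):
        for t in range(min(max_ties, n - w) + 1):
            total += multinomial.pmf([w, t, n - w - t], n, [p_win, p_tie, p_loss])
    return total

# Example with the all-in counts posted earlier in the thread
print(p_wins_and_ties_at_most(379, 0.5452, 0.0397, 203, 11))
```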
The chi-squared tests don't have this problem, and are the more appropriate analysis.
I agree.
The p-value for the test on the raw data, .0108, is suggestive, but if we were to convert it to a Bayes factor, I think again we'd see that it is only weak evidence against the null. Unfortunately, I don't think Rouder has a calculator on line for chi-squared tests.
Again, I'm not sure what the Bayes factor is supposed to represent here, but feel free to educate me about it.
As to your chi-squared test on the values of the hands, the p-value is meaningless, because the values do not follow a chi-squared distribution under the null hypothesis.
I agree that this is a questionable approach. The main point of it is that the alignment of the frequency results with the values of the different hands is consistent with the hypothesis of bad luck. If you have an analysis suggestion for looking at not just the frequency of the hands, but also their values, I would be open to suggestions. With only three possible outcomes, I don't feel regression is appropriate.
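One option that sidesteps the distributional question entirely is a Monte Carlo trend test: score each hand type by its value, compute the observed value-weighted sum, and compare it to the same statistic simulated under the null that all three hand types are equally likely to be dealt (under a fair deal each specific unpaired two-rank hand has the same 16-in-1,326 chance). This is only a sketch of the idea; the -1/0/+1 scores for the weak, average, and strong hands are a hypothetical weighting, not something from the analysis above.

```python
import numpy as np

rng = np.random.default_rng(0)

def trend_p_value(counts, scores, n_sim=100_000):
    """Monte Carlo p-value for a value-weighted trend in dealt-hand counts.
    counts: observed counts for each hand type, e.g. [n_52, n_Q8, n_AK].
    scores: a value assigned to each hand type, e.g. [-1, 0, 1].
    Under the null each type is equally likely, so the counts are multinomial
    with equal cell probabilities.  Returns P(simulated trend <= observed),
    i.e. how often chance alone produces a result at least this 'unlucky'."""
    counts = np.asarray(counts)
    scores = np.asarray(scores, dtype=float)
    n = counts.sum()
    observed = counts @ scores
    sims = rng.multinomial(n, np.full(len(counts), 1 / len(counts)), size=n_sim)
    return np.mean(sims @ scores <= observed)
```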

Additionally, your analysis overall suffers from uncorrected multiplicity, as you've conducted three sets of tests on the same "luck" hypothesis, and any one of them resulting in statistical significance would have allowed you to claim "bad luck."

I disagree that this is an issue of uncorrected multiplicity, as the datasets can be treated as independent experiments. There is some overlap of the session data with the 2013 data, but it's a small enough number of hands that I'm comfortable with the assumption of independence of the datasets.

Further, whereas multiplicity corrections are designed to account for some positive results occurring through random chance (i.e., one out of 20 at 95% confidence), since we have three out of three independent datasets (actually more, if you include the 2011 and 2012 datasets documented earlier in this thread), I think the results can be considered robust. I'm not sure why or how it's occurring, but it does seem to be a consistent finding.

However, as with the above, if you want to look at what effect multiplicity corrections would have, please feel free to do so and share the results.
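Along the same lines, if the three datasets are treated as independent experiments, one standard way to summarize them jointly is Fisher's method for combining p-values. A minimal sketch, using the p-values quoted above and assuming all three test the same one-sided "bad luck" direction:

```python
from scipy.stats import combine_pvalues

# p-values quoted earlier in the thread for the three datasets
stat, p_combined = combine_pvalues([0.0672, 0.0086, 0.0108], method='fisher')
print(p_combined)
```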

Furthermore, your analysis raises questions about selection bias.* Why were these specific datasets chosen for analysis and not others that could have been chosen instead?
You can read through the thread for answers to most of these questions. The all-in hands were chosen after discussion and a conclusion that they represented a true picture of random results uncontaminated by issues of skill during play. The {A,K}, {Q,8}, and {5,2} data was selected by my husband for similar reasons, albeit without the additional discussion, as it's clear the cards dealt at the beginning of the hand are unaffected by skill. Personally, as I mentioned in my write-up (footnote 2), I tried to persuade him to write down every hand he was dealt, but that slows down the game (I tried it; it does) and he did not want to impose that on his fellow poker players.

Were the starting and stopping criteria decided in advance, and unrelated to the results? Were there other tests that were run whose results you have not shown? Or were there other data that you looked at but which did not seem promising and hence were not formally analyzed?

Starting and stopping criteria were decided in advance for the 2013 dataset. Other tests were run prior to 2013 analysis and discussed earlier in this thread. There is no other data that we looked at that was not included in the analysis. The session data collection (which started in 2013) is ongoing. I want to test some hypotheses about what he can change and whether or not it will have an impact on the results. Theoretically, nothing should impact the random chance of the cards dealt. But theoretically, these results have a low probability of occurring under the null while they are consistent with the alternative hypothesis.
Finally, whether you admit it or not, the hypothesis that people have an attribute called "luck" is a supernatural hypothesis, and no matter how statistically significant your results may be, the probability that those results are due to errors in the experiment is overwhelmingly greater than the probability that they are due to the supernatural hypothesis. If not due to random error, then it is almost certain that significant results are due to systematic error, and the smaller the p-values, the more that systematic error is the favored explanation. Convincingly small p-values of supernatural hypotheses give us an opportunity to learn how a well-intentioned experiment can go wrong.

Yes, I'm well aware of this issue as is my dh. However, all he can do is collect the data to the best of his ability. How would you suggest he improve his data collection to test the hypothesis that his observation is correct, that he actually has results worse than random chance predicts?

The question is, did he include the hands that brought the supposed anomaly to his attention in the dataset? If so, then the dataset is biased.

No, he did not include that session, as he didn't start collecting that data until after he noticed he seemed to be getting a lot of {5,2} hands dealt to him. Since he didn't know if it was just observational bias, he included a 'good' hand {A,K} and an 'average' hand {Q,8} for comparison purposes to determine if that was the case. It appears to be so.
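For reference, under a fair deal every specific unpaired two-rank starting hand such as {5,2}, {Q,8}, or {A,K} is equally likely, so a quick baseline for how often any one of them should turn up is:

```python
from math import comb

total_starting_hands = comb(52, 2)   # 1,326 possible two-card hands
combos_per_unpaired_hand = 4 * 4     # e.g. any of four 5s with any of four 2s
print(combos_per_unpaired_hand / total_starting_hands)  # ~0.0121, about 1 hand in 83
```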
 
Wanted to say something like this. Then found out that something like this has been said already.

Documentable luck exists in a retrospective sense, but we cannot very meaningfully document or evaluate luck in a predictive sense. Unless anyone can give an example of that.

The data collection and analysis I've done are an attempt to do just that.
 
What do we mean by luck? Happy coincidences? Sure, those exist. A mystical force of some sort? Of course not. So I vote X. Oh, wait. I can't vote anymore. Whoa, 2011!
 
