Moderated Coin Flipper

Thank you for admitting that.


That is what an honest ethical skeptic does when proven wrong... not sextuple and septuple on errors and false statements.


But I think you can determine that just by analyzing your code.


Your analysis... is WRONG... and it proves that you have no idea what random means.


I think if you were to amend your code, so that you can run, say, a billion flips with one click, you would be able to see whether the app produces biased results or not.
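That bulk-test idea is easy to sketch. Below is a minimal Python stand-in (the app's actual flip source is not shown in the thread, so a fair software RNG substitutes for it; `flip_run` is an illustrative name):

```python
import random

# Sketch of the "many flips in one run" test suggested above.
# A fair software RNG stands in for the app's own flip source.
def flip_run(k: int, seed: int = 0) -> float:
    """Flip k times and return the observed head fraction."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(k) if rng.random() < 0.5)
    return heads / k

for k in (10_000, 1_000_000):
    f = flip_run(k)
    print(f"{k:>9} flips: head fraction {f:.5f}")
```

With a genuinely fair source the head fraction drifts toward 0.5 as k grows; feeding the app's flips through the same counter would show any bias once k is large enough.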


Thanks yet again for your indefatigable and incessant CONCERN.

I appreciate your proof of the falsity of the very evidently untrue statement below... and I am glad that my app has added to your knowledge.

... The fact is your app is trivial. Not only does it do nothing more than illustrate a statistical fact that has been well understood for hundreds of years, it merely implements a rudimentary random number function, which in R (the programming language I know best) requires exactly one line of code.

Your app contributes exactly nothing to our knowledge of anything. And everybody here—including you—knows it.
 
Last edited:
Apparently not when proven wrong about his coding.

You did not prove me wrong about my coding... you slandered me and made false claims about me and my code.

And thanks for all this emotional slander... QED!!!
 
Yes, you are absolutely right... here is a vividly glaring example of it...

Your equivocation on the meaning of the word "bias" wasn't funny the first time, never mind any subsequent time.

The issue is whether your app is biased. Please explain how your code, which defines a head as an integer greater than 127, but draws its flips from a dataset in which only 49.94% of the integers are greater than 127, does not result in a 0.06 percentage-point bias in favor of tails. If you can do that, you win, I will admit I'm wrong, and we're done here.
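For what it's worth, the split in question can be measured directly. A minimal Python sketch; the byte values below are synthetic stand-ins chosen to reproduce the 49.94% figure, since the actual dataset is not shown in the thread:

```python
# Sketch: measure the head/tail split of a byte dataset, where a "head"
# is a byte value greater than 127 (per the definition quoted above).
def head_bias(data: bytes) -> float:
    """Return the fraction of bytes that count as heads (> 127)."""
    heads = sum(1 for b in data if b > 127)
    return heads / len(data)

# Synthetic stand-in: 8,182 "head" bytes and 8,202 "tail" bytes,
# matching the 20-byte excess of tails discussed later in the thread.
data = bytes([200] * 8182 + [50] * 8202)
print(round(head_bias(data), 4))  # → 0.4994
```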
 
I don't do that... I use a random chunk selected randomly... so it is not the same set of numbers ordered differently.

It is a randomly selected SUBSET of the numbers ordered differently.

But because your master set is very slightly biased toward the low numbers, over time the average of the subsets will lean the same way. It's a very small bias, smaller still because you are using a subset, but it's going to be there over enough trials.

Besides... even if I were to download a different set of numbers for each run... there is no guarantee that the numbers will be perfectly spread.

Of course they wouldn't, and I specifically said that. But with a newly generated set of numbers each time, sometimes the set would be biased slightly to low numbers and sometimes to high numbers. That would average out over time.

In the end however I think the bias toward tails is real, but also really really small.
 
...
So there is no way that you will detect a bias with only 10,000 tosses.

What about 100x10,000 ... as I showed here?

What about 10x10,000 as I showed here?

What about 502x10,000 as I showed here?


If you cannot show bias for all practical purposes then there is no bias for all practical purposes.

And sorry dear fellow... your analysis below is wrong

If a Bernoulli trial with a probability of an individual success (heads) of 0.4994 is run 10,000 times then the mean number of heads would be 10,000 * 0.4994 = 4,994 and the standard deviation would be sqrt(10,000 x 0.4994 x 0.5006) = 50.


The highlighted part is wrong.

Why?

Because the SUBSET of the data is chosen randomly... so the RESULTING SET of numbers that is used has a totally random probability of heads or tails being favored.

And since it is shuffled randomly and the chunk is chosen randomly, then there is no way to know what the value is in your highlighted statement above.

It could be 0.4994 for one run... or 0.5994 for another or 0.4004 for another or 0.5010 for another.

Which is observed by actually RUNNING THE APP.

But thanks for adding to my knowledge... yet again you are an upstanding chap.... :thumbsup:
 
Last edited:
But because your master set is very slightly biased toward the low numbers, over time the average of the subsets will lean the same way. It's a very small bias, smaller still because you are using a subset, but it's going to be there over enough trials.


How many? Infinity?

It did not show over 10x10,000 nor 100x10,000 nor 502x10,000

And if some concerned party is going to say that it showed up at 1024x10,000

Then try another time or another....

That is what RANDOM means... If you flip a coin for real you will get the very same randomness too.

But the mere fact that one can do 1024x10,000 coin flips using the app is in itself all that one needs to realize the absolute maliciousness and false slander of the statement below.

... The fact is your app is trivial. Not only does it do nothing more than illustrate a statistical fact that has been well understood for hundreds of years, it merely implements a rudimentary random number function, which in R (the programming language I know best) requires exactly one line of code.

Your app contributes exactly nothing to our knowledge of anything. And everybody here—including you—knows it.



In the end however I think the bias toward tails is real, but also really really small.


You think... I do not...

BUT... even if it were there... as you say it is really really small and it does not show by using the app over enormous runs...

So all this CONCERN for nitpicking about it is nothing but a very egregious and malicious red herring.
 
Last edited:
Because the SUBSET of the data is chosen randomly... so the RESULTING SET of numbers that is used has a totally random probability of heads or tails being favored.

And since it is shuffled randomly and the chunk is chosen randomly, there is no way to know what the value is in your highlighted statement above.

It could be 0.4994 for one run... or 0.5994 for another or 0.4004 for another or 0.5010 for another.

If half your master set of numbers was the number 2 would you think that shuffling and selecting a random chunk would eliminate the excess 2s? Of course not.

You've got the same problem just on a much smaller scale. It's going to take a bunch of trials to bring that up out of the noise. But it's in there.
 
How many? Infinity?

No, but a bunch. I'd have to break out some old books to even remember how to calculate it. But maybe psion can work it out.

So all this CONCERN for nitpicking about it is nothing but a very egregious and malicious red herring.

Oh I don't know, I think it's what programmers do. In our shop we spent many, many afternoons and god knows how many cans of Mountain Dew arguing over more trivial points, and had a great time doing it.

If you put the code up on the whiteboard it was just your turn in the barrel.
:)
 
Last edited:
If half your master set of numbers was the number 2 would you think that shuffling and selecting a random chunk would eliminate the excess 2s? Of course not.

You've got the same problem just on a much smaller scale. It's going to take a bunch of trials to bring that up out of the noise. But it's in there.


The data has 20 more tail bytes than head bytes out of 16,384 bytes.

Not half.

Let's put it this way: say you have 16,384 black and white marbles with 20 more blacks than whites.

So when shuffled and a SUBSET is randomly selected (10 to 10,000) out of the 16,384... what is the probability of the bias remaining at 20 more blacks than whites in this SUBSET??


And what is the probability that this will continue to be so every single time you do that???

And what is the probability that the next chunk you pick will not have more whites than blacks... and the next time and the next time? And so on oscillating???

Do you know what RANDOM means???

All this stochastic mathematics is descriptive not prescriptive.

Just because over infinite runs the probability mathematically will average out does not mean that IN REALITY it does.... all you have to do to bust the math is do one more random try.

That is why it is RANDOM and consequently indeterministic.
 
Last edited:
If you put the code up on the whiteboard it was just your turn in the barrel.
:)


I did not... the app was maliciously and deliberately hacked and THE CONCERNED WE put it up on the whiteboard not I.

And I was slandered and called a liar for not genuflecting to their commands that I show them my code.

And again ... mathematics is descriptive not decreeing or prescriptive.

All you have to do to upset the whole math thing is do one more run of a random process and you will get results that flip the whole cart.

That is what random means... it is not decreed by formulas and not deterministic despite the stochastic guessing using the DESCRIPTIVE mathematics... the GUESSING might be right one time but it takes another run and it will be wrong.
 
Last edited:
Some Binomial Distribution theory:


I too have to go through the self-storage mound of boxes to locate my brains.

So please psion10, can you use your evidently excellent knowledge in this field and calculate the following:

Say you have 16,384 black and white marbles with 20 more blacks than whites.

So when shuffled and a SUBSET is randomly selected (10 to 10,000) out of the 16,384... what is the probability of the bias remaining at 20 or 10 or 1 more blacks than whites in this SUBSET??

What is the probability of the chunk you picked having more whites than blacks???

And if you repeat this process say 1000 times... what is the probability that more than 500 chunks had more whites than blacks or vice versa?

And what is the probability for runs 1001 to 2000 that the chunk you pick will not have more whites than blacks???
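These questions can at least be explored numerically rather than rhetorically. A Monte Carlo sketch (not an exact derivation), assuming 8,202 black and 8,182 white marbles and sampling without replacement:

```python
import random

# Monte Carlo sketch of the marble questions above, assuming a pool of
# 8,202 black and 8,182 white marbles (20 more black than white).
def prob_more_white(sample_size: int, trials: int = 500, seed: int = 1) -> float:
    """Estimate P(a random sample holds more whites than blacks)."""
    rng = random.Random(seed)
    marbles = [1] * 8202 + [0] * 8182   # 1 = black, 0 = white
    more_white = 0
    for _ in range(trials):
        blacks = sum(rng.sample(marbles, sample_size))  # without replacement
        if blacks < sample_size - blacks:
            more_white += 1
    return more_white / trials

# Even with a 10,000-marble sample, the 20-marble excess barely shows:
# "more whites than blacks" still happens a bit under half the time.
print(prob_more_white(10_000))
```

The exact answer is hypergeometric, but the simulation makes the qualitative point: the 20-marble excess only nudges the odds slightly away from 50/50.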

Thanks yet again :thumbsup:
 
Last edited:
The data has 20 more tail bytes than head bytes out of 16,384 bytes.

Not half.
As I said, you have a much smaller problem.

Let's put it this way: say you have 16,384 black and white marbles with 20 more blacks than whites.

So when shuffled and a SUBSET is randomly selected (10 to 10,000) out of the 16,384... what is the probability of the bias remaining at 20 more blacks than whites in this SUBSET??


And what is the probability that this will continue to be so every single time you do that???
It doesn't need to be every single time, just often enough to skew the results over time. The odds of that are something very low but still greater than zero.

Do you know what RANDOM means???


I know it's very difficult to get true randomness out of deterministic systems in useful ways. Your app does what you intended to a very small margin of error. Cool. The fix would be prohibitive for this use. Also cool, no problem. For your purposes I think you nailed it. But there is that small thing that could be fixed in a perfect world. (if you ever find the route to that perfect world let me know, I'd love to ride along!)
 
Your equivocation on ....


Your passionate and emotional and indefatigable doubling and sextupling down on your already amply and clearly demonstrated false slander of me and my app is well appreciated... so many thanks.... QED!!!


And all this relentless incessant concern of yours for my app being PERFECT to your decreed standards fills my heart with warm satisfaction to say even more ... QED!!!

:th:


Now... seeing that you bragged about the superiority of your knowledge in this field...

As a biostatistician, I'm pretty sure I understand the concept as well as anyone here, probably better than most.


Why don't you give it a shot to calculate these probabilities... maybe you can use the R/S amazing app you are so dexterous with...

Say you have 16,384 black and white marbles with 20 more blacks than whites.

So when shuffled and a SUBSET is randomly selected (10 to 10,000) out of the 16,384... what is the probability of the bias remaining at 20 or 10 or 1 more blacks than whites in this SUBSET??

What is the probability of the chunk you picked having more whites than blacks???

And if you repeat this process say 1000 times... what is the probability that more than 500 chunks had more whites than blacks or vice versa?

And what is the probability for runs 1001 to 2000 that the chunk you pick will not have more whites than blacks???
 
Last edited:
As I said, you have a much smaller problem.
It doesn't need to be every single time, just often enough to skew the results over time. The odds of that are something very low but still greater than zero.


Yes... reminds me of asking how many angels can fit on the tip of a sewing needle... I usually answer Zero.


I know it's very difficult to get true randomness out of deterministic systems in useful ways. Your app does what you intended to a very small margin of error. Cool. The fix would be prohibitive for this use. Also cool, no problem. For your purposes I think you nailed it.


I appreciate that a lot
:th:


But there is that small thing that could be fixed in a perfect world. (if you ever find the route to that perfect world let me know, I'd love to ride along!)


Only when one is CONCERNED to spread red herrings is one so CONCERNED to command and decree PERFECTION... when they cannot ever even come within a mile of it let alone a very small margin of error.

I used to be a civil engineer and later an electronic engineer... so I never cared about attaining perfection... just really really good.... so I missed that train, and at my age my knees will not let me get up onto the platform even if it waited for me.


Many thanks again :thumbsup:
 
Last edited:
What about 100x10,000 ... as I showed here?

What about 10x10,000 as I showed here?

What about 502x10,000 as I showed here?
The Binomial Distribution formulae are quite simple: mean = np, std-dev = sqrt(npq) and the 95% confidence interval is approximately within 2 standard deviations of the mean.

Using n = 1,000,000 and p = 0.4994 we get a mean of 499,400 heads, a standard deviation of 500 and a 95% confidence interval of 498,400 to 500,400.

So 1,000,000 tosses seems sufficient to form an opinion about the bias.
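Those figures follow directly from the formulas; a quick sketch of the same arithmetic (in Python rather than R):

```python
import math

# Sketch of the binomial calculation quoted above: n tosses with
# P(heads) = p, using the normal approximation for the 95% interval.
def binomial_summary(n: int, p: float):
    """Return (mean, std-dev, approximate 95% CI) for Binomial(n, p)."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    return mean, sd, (mean - 2 * sd, mean + 2 * sd)

mean, sd, ci = binomial_summary(1_000_000, 0.4994)
print(round(mean), round(sd), round(ci[0]), round(ci[1]))
# → 499400 500 498400 500400
```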

The highlighted part is wrong.

Why?

Because the SUBSET of the data is chosen randomly... so the RESULTING SET of numbers that is used has a totally random probability of heads or tails being favored.

And since it is shuffled randomly and the chunk is chosen randomly, then there is no way to know what the value is in your highlighted statement above.
Probabilities don't change unless you are able to add new information to the question.

If I drew 27 (a tail) from the array of numbers then I would know that the bias in favour of tails has decreased from 8202 / 16384 to 8201 / 16383. But if I don't know which number I have drawn then I have no new information and can't calculate a new probability.

You are right that this isn't strictly a Bernoulli trial situation because the numbers drawn from the array are not replaced. I could use a more precise formula in this situation but it wouldn't radically change the answer I got by just assuming that this is a Bernoulli trial.
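The size of that without-replacement correction is easy to check. A sketch assuming the 16,384-number pool with 8,182 "heads" and a 10,000-number chunk:

```python
import math

# Compare the Bernoulli (with-replacement) approximation used above
# with the exact hypergeometric (without-replacement) spread.
# Pool of N numbers, K of them "heads", chunk of size n.
N, K, n = 16_384, 8_182, 10_000
p = K / N  # ~0.4994

sd_binomial = math.sqrt(n * p * (1 - p))
# Finite-population correction factor sqrt((N - n) / (N - 1)).
sd_hypergeom = sd_binomial * math.sqrt((N - n) / (N - 1))

print(round(sd_binomial, 1), round(sd_hypergeom, 1))  # → 50.0 31.2
```

The correction shrinks the spread of a 10,000-draw chunk but leaves the expected head count unchanged, which is why treating it as a Bernoulli trial doesn't radically change the answer.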

Say you have 16,384 black and white marbles with 20 blacks more than whites.

So when shuffled and a SUBSET is randomly selected (10 to 10,000) out of the 16,384 ... what is the probability of the bias remaining to be 20 or 10 or 1 more blacks than whites in this SUBSET??
Without necessarily doing the maths, the most probable result would be that the relative imbalance between black and white marbles would be the same for both the sample and the remaining marbles (with adjustments for round off error).

You would have to draw a sample in excess of 1,684 in order for your sample to have one more black marble than white marbles - which is what I mean about the round off error.
 
Unbiased by Concerns Proper Software Testing

...
You would have to draw a sample in excess of 1,684 in order for your sample to have one more black marble than white marbles - which is what I mean about the round off error.


Exactly... how many angels can stand on the tip of a sewing needle?

Here, have a look at these results... if only the people so concerned about the app being PERFECT to their decreed wishes knew anything about how to actually do software testing, they would have seen that their concerns are arrantly baseless.

[IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias5A.png[/IMGW] [IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias5B.png[/IMGW]

[IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias5C.png[/IMGW] [IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias5D.png[/IMGW]

[IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias5E.png[/IMGW] [IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias5F.png[/IMGW]​
 
Unbiased by Concerns Proper Software Testing

Here are some more unconcerned and unbiased tests...

[IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias6A.png[/IMGW] [IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias6B.png[/IMGW]

[IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias6C.png[/IMGW] [IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias6D.png[/IMGW]
 
The Binomial Distribution formulae are quite simple: mean = np, std-dev = sqrt(npq) and the 95% confidence interval is approximately within 2 standard deviations of the mean.

Using n = 1,000,000 and p = 0.4994 we get a mean of 499,400 heads, a standard deviation of 500 and a 95% confidence interval of 498,400 to 500,400.

So 1,000,000 tosses seems sufficient to form an opinion about the bias.


That 95% CI doesn't exclude 500,000, so how can a sample size of 1,000,000 be sufficient to draw a conclusion about a bias of .0006?

A quick simulation I ran of 100 replications of 1,000,000 random draws from a Bernoulli(.4994) distribution found that only 57 of the 95% CIs of the 100 means excluded 0.5. In contrast, using a sample size of 10,000,000 resulted in 96% of the CIs excluding 0.5. So, I would say the number of samples you need to draw a reasonably confident conclusion about whether there is a bias on the order of .0006 is closer to 10,000,000 than 1,000,000.

That said, said programming was written after consumption of the better part of a bottle of chianti. I'll check it again in the morning.
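The simulation described can be sketched in pure Python (the original was presumably run in R; the function name and the normal-approximation CI here are illustrative):

```python
import math
import random

# Sketch of the power check described above: draw n Bernoulli(p) flips,
# build a normal-approximation 95% CI for the head fraction, and count
# how many of `reps` replications exclude 0.5.
def cis_excluding_half(n: int, reps: int = 100, p: float = 0.4994,
                       seed: int = 7) -> int:
    rng = random.Random(seed)
    excluded = 0
    for _ in range(reps):
        heads = sum(1 for _ in range(n) if rng.random() < p)
        mean = heads / n
        half_width = 1.96 * math.sqrt(mean * (1 - mean) / n)
        if not (mean - half_width <= 0.5 <= mean + half_width):
            excluded += 1
    return excluded

# At n = 1,000,000 only around half of the intervals exclude 0.5;
# pushing n toward 10,000,000 makes nearly all of them exclude it.
print(cis_excluding_half(1_000_000, reps=5))
```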
 
Last edited:
