
Moderated Coin Flipper

This episode of Sesame Street has been brought to you today by the words

CONCERNED

and

SLANDER

and by the number

16,384
 
Why... if the bytes are random... and I shuffle the array before using it... then any draw of numbers from the array of random numbers is a random number.... no?
If the array is shuffled before each run then that should avoid a repetition of results. But if the array has a bias (every new array will be either slightly top heavy or bottom heavy) then that would eventually show up in the results.

Why? I think the arrangement of bits in a byte that makes a random Unsigned Integer which is randomly positioned in an array from which a random chunk of numbers is drawn... is as random as anything to be expected from an actual real world source of randomness... no?
What I meant was that 65,536 bits can give you a bigger trial run than 16,384 bytes.

Can you now see the extent of CONCERN RED HERRINGS?

  • First concern about not being a real coin inside the computer
  • Then concern about RNG's quality
  • Then concern about Edge landings... and slandering me because it is not what they want
  • Then concern about BIAS towards Tails
  • Then concern about the bytes in the TRNG's data
  • Then back to more concern about the PRNG
These are not entirely unreasonable concerns. You should be able to address them quite easily.
 
...
A good programmer would welcome third party examination of their code as an opportunity to find bugs and make improvements.


You claim you know about quality control.... right?

Have you ever heard of something called software testing?

Hint: it does not mean a third party looking at the code... it means USING the software to test it.
 
Hahahaha.... amazing!!!

It was an obvious typo. Geez. 127, okay?

ETA: oh, wait. It is worse. You weren't commenting on the typo. You are now implying randomness implies bias. Well done.
 
If the array is shuffled before each run then that should avoid a repetition of results.


Yes...

But if the array has a bias (every new array will be either slightly top heavy or bottom heavy) then that would eventually show up in the results.


Whether random data is biased or not is part and parcel of the randomness.

The randomness is achieved by
  • random byte values
  • shuffling before each run
  • random offset within the random bytes for a random chunk of these bytes
  • repeating this before each run

The randomness is also PROVEN by actually RUNNING THE APP... and seeing the results.... all they have to do is RUN THE APP.
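
For readers who want to see what those four steps amount to, here is a minimal sketch, assuming the data file is a flat array of random bytes, that the file holds more bytes than one run uses, and that a byte greater than 127 counts as Heads. The file name, chunk size, and the use of Python's standard random module are illustrative stand-ins, not the app's actual source.
Code:
# Illustrative sketch of the pipeline described above (not the app's code).
import random

def flip_run(path="random_bytes.bin", n_flips=10_000):
    data = bytearray(open(path, "rb").read())        # random byte values
    random.shuffle(data)                             # shuffle before each run
    # assumes len(data) > n_flips, e.g. 16,384 bytes for a 10,000-flip run
    offset = random.randrange(len(data) - n_flips)   # random offset into the bytes
    chunk = data[offset:offset + n_flips]            # random chunk of the bytes
    heads = sum(1 for b in chunk if b > 127)         # byte > 127 counts as Heads
    return heads, n_flips - heads

heads, tails = flip_run()
print(f"Heads: {heads}  Tails: {tails}")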


What I meant was that 65,536 bits can give you a bigger trial run than 16,384 bytes.


Yes... that would be better... but I think 10,000 is good enough anyways.
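
For what it is worth, the bits-versus-bytes point is easy to illustrate: if each flip consumes one whole byte, 16,384 bytes yield only 16,384 flips, whereas treating every bit as a flip turns even a 65,536-bit download into 65,536 flips. A rough sketch, assuming the download is a flat byte array (the helper below is hypothetical, not the app's code):
Code:
def bits_from_bytes(data):
    for byte in data:
        for i in range(8):
            yield (byte >> i) & 1          # 1 = Heads, 0 = Tails

data = bytes(8_192)                        # stand-in: 8,192 bytes = 65,536 bits
print(len(data), "bytes ->", sum(1 for _ in bits_from_bytes(data)), "flips")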


These are not entirely unreasonable concerns. You should be able to address them quite easily.


There is nothing to address... the original red herring of bias was debunked here in this post...

The new red herring was also debunked here in this post... and see below for yet another sample...

And this latest red herring about the random data file is due to lack of understanding of even the most basic of principles about what random means.

Moreover... not a single one of those concerned guys ran the app even once... had they run the app they would have seen that their concerns are baseless.

[IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias2.png[/IMGW] [IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/PRNGNoBias2.png[/IMGW]
[IMGW=400]http://godisadeadbeatdad.com/CoinFlipperImages/CRYPTONoBias2.png[/IMGW]​
 
You claim you know about quality control.... right?

Have you ever heard of something called software testing?

Hint: it does not mean a third party looking at the code... it means USING the software to test it.

QA is a grueling, thankless job. I'm not going to test your code for you, when all you have to offer in exchange is vituperation and invective.
 
The randomness is also PROVEN by actually RUNNING THE APP... and seeing the results.... all they have to do is RUN THE APP.

No. Absolutely not. All that produces is anecdotal evidence.

...
the original red herring of bias was debunked here in this post...

No. Absolutely not. All that provides is anecdotal evidence.

Your "random" data is provably biased. Shuffling it, and such, does not remove the bias. Bias does not rear its head in every example (i.e., anecdotal evidence), but overall all, it is there.
 
QA is a grueling, thankless job. I'm not going to test your code for you, when all you have to offer in exchange is vituperation and invective.


Hahaha... of course... you are not going to run the app even once... but you are CONCERNED to keep incessantly and repeatedly INVENTING red herrings about it.

Of course... all in the name of defending no randomness in the natural world.

Thanks for admitting.... QED!!!

Had you done any real Software Testing by actually running the app you would have found out that your CONCERNS are BASELESS... as I have already demonstrated by RUNNING THE APP.

And that is of course why you do not want to do REAL Software Testing and instead you just keep waving red herrings about the code.

I agree that the business about the edge case bias in data is a red herring, and I'm surprised that you've put so much effort into implementing it.
 
Hahaha... of course... you are not going to run the app even once... but you are CONCERNED to keep incessantly and repeatedly INVENTING red herrings about it.

Of course... all in the name of defending no randomness in the natural world.

Thanks for admitting.... QED!!!

Had you done any real Software Testing by actually running the app you would have found out that your CONCERNS are BASELESS... as I have already demonstrated by RUNNING THE APP.

And that is of course why you do not want to do REAL Software Testing and instead you just keep waving red herrings about the code.
Don't put your words in my mouth.
 
No. Absolutely not. All that produces is anecdotal evidence.
No. Absolutely not. All that provides is anecdotal evidence.


Have you ever heard of software testing... it is actually a whole job category.

Have you tried to run the app even once... of course not... because you do not want to find out that your concerns are baseless.


Your "random" data is provably biased. Shuffling it, and such, does not remove the bias. Bias does not rear its head in every example (i.e., anecdotal evidence), but overall all, it is there.


Your self-professed agnosticism about the principles of randomness is the reason you mistakenly think so.

Your concerns are baseless.
 
Don't put your words in my mouth.


You admitted that you have not done any software testing.

QA is a grueling, thankless job. I'm not going to test your code for you, when all you have to offer in exchange is vituperation and invective.


You admitted that the edge claptrap was a red herring.

I agree that the business about the edge case is a red herring, and I'm surprised that you've put so much effort into implementing it.


You earlier claimed all the RNGs were biased based on looking at a single limited screenshot where the Edge case was allowed for.

All three of the v4 results you presented have Tails slightly above 50%, and Heads slightly below 50%. I think this is a sufficient reason to believe your code might have a pro-Tails bias. I'm not going to try to prove whether it does or not (I'm not that "concerned"). Just saying you might want to check your code.


I explained to you right here how you were wrong and proved it with a RUN sample.

You never admitted your error.

You now are joining in on a NEW error... of BASELESS CONCERNS without ever running the app to do any testing whatsoever.

So you are again doing nothing but engaging in another red herring, based on never running the app and not even looking at the screen samples (here and here) either this time round.

Your concerns are proven baseless.... see my post here.... for further debunking of your baseless concerns.
 
Unbiased by Proper Software Testing, Not by Concerns

If only the people so concerned about the app's software correctness knew anything about how to actually do software testing they would have seen that their concerns are arrantly baseless.

Hint: software testing is not done by a third party looking at the code... it is done by RUNNING THE SOFTWARE.

I know all these concerned people will never run the app even once... of course.

But now they are even claiming that running the software to show that their claims of error are baseless (or based upon lack of understanding of the principles of randomness) is not really proof because it is anecdotal.

Yet their claims of an error, made without ever running the software, are supposedly not baseless despite having been demonstrated to be so.

Here is another set of anecdotal demonstrations of the extent of baselessness of their CONCERNS.

[IMGW=500]http://godisadeadbeatdad.com/CoinFlipperImages/TRNGNoBias3.png[/IMGW]

[IMGW=500]http://godisadeadbeatdad.com/CoinFlipperImages/PRNGNoBias3.png[/IMGW]

[IMGW=500]http://godisadeadbeatdad.com/CoinFlipperImages/CRYPTONoBias3.png[/IMGW]​
 
Whether random data is biased or not is part and parcel of the randomness.

The randomness is achieved by
  • random byte values
  • shuffling before each run
  • random offset within the random bytes for a random chunk of these bytes
  • repeating this before each run

The randomness is also PROVEN by actually RUNNING THE APP... and seeing the results.... all they have to do is RUN THE APP.
Somebody running your app repeatedly is eventually going to notice a slight bias towards tails which would nullify any conclusions you might otherwise draw from tossing a coin.

If downloading new data each time is too troublesome (sounds like it is) then one way to avoid bias in the results is to employ a toggle function.
Code:
IF randomByte > threshold THEN
   coin = toggle(coin)
ENDIF
This should give unbiased results, but if the probability of the randomByte exceeding the threshold is significantly different from 0.5, then this would favour longer (or shorter, depending on the probability) runs of heads or tails and be an unsuitable representation of a tossed coin.
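
Both halves of that claim are easy to check by simulation. The snippet below is a rough Python sketch of the pseudocode above; the byte source and names are stand-ins, not the app's code. The long-run heads fraction stays near 50% even when the flip probability is far from 0.5, but the average run length changes, so the tosses are no longer independent.
Code:
import os
from itertools import groupby

def toggle_flips(n, threshold):
    coin = 0                       # 0 = tails, 1 = heads (arbitrary start)
    results = []
    for byte in os.urandom(n):     # stand-in for the downloaded random bytes
        if byte > threshold:       # flip probability = (255 - threshold) / 256
            coin = 1 - coin
        results.append(coin)
    return results

for threshold in (127, 230):       # ~50% vs ~10% flip probability
    r = toggle_flips(100_000, threshold)
    heads = sum(r) / len(r)
    avg_run = len(r) / sum(1 for _ in groupby(r))
    print(f"threshold={threshold}: heads={heads:.3f}, mean run length={avg_run:.2f}")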
 
Somebody running your app repeatedly is eventually going to notice a slight bias towards tails which would nullify any conclusions you might otherwise draw from tossing a coin.


See the images in this post and this one.

Or even better... run the app... as many times as you like.

If downloading new data each time is too troublesome (sounds like it is) then one way to avoid bias in the results is to employ a toggle function.
Code:
IF randomByte > threshold THEN
   coin = toggle(coin)
ENDIF
This should give unbiased results


A good idea... when I finish Sniper Elite 4 I might try it out...

:th:


but if the probability of the randomByte exceeding the threshold is significantly different to 0.5 then this would favour longer (or shorter depending on the probability) runs of heads or tails and be an unsuitable representation of a tossed coin.


Ah well... that is what randomness is all about.

But I will test your idea for sure... I will let you know.:thumbsup:
 
I found this part intriguing. Given that the dataset has been pseudo-randomly reordered, what is the point of then picking a pseudo-random starting point (on the interval [0,200)) for the sequence?


Yeah, yet another inexplicable programming choice.
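
For what it is worth, the redundancy is easy to demonstrate empirically. The sketch below (illustrative Python with a tiny array and made-up names, not the app's code) shows that drawing from a fixed position of a uniformly shuffled array and drawing from a random offset into it produce the same uniform distribution, so the extra offset adds nothing.
Code:
import random
from collections import Counter

values = list(range(8))
trials = 200_000
no_offset, with_offset = Counter(), Counter()

for _ in range(trials):
    a = values[:]
    random.shuffle(a)
    no_offset[a[0]] += 1                  # draw from a fixed position
    start = random.randrange(len(a))
    with_offset[a[start]] += 1            # draw from a random offset

print(sorted(no_offset.values()))   # both counters come out ~uniform (~25,000 each)
print(sorted(with_offset.values()))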
 
These screenshots already show a tail bias for the real RNG version and this is for only 10,000 tosses.


How so... the running averages flip over all the time... in one shot the running average is higher for heads, in another for tails, and then it flips again.

Please explain how you arrived at that conclusion.

And please... do your own tests too.
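
One concrete way to "do your own tests" is a simple binomial check: for N tosses of a fair coin the heads count has standard deviation 0.5·√N, so an observed split can be read as a z-score. A minimal sketch follows; the numbers are illustrative, not taken from the screenshots.
Code:
from math import sqrt

def z_score(heads, n):
    # standard deviations away from the fair-coin expectation of n/2
    return (heads - n / 2) / (0.5 * sqrt(n))

print(z_score(4_970, 10_000))       # -0.6  -> ordinary noise for a fair coin
print(z_score(499_380, 1_000_000))  # -1.24 -> a ~49.94% bias is still marginal
                                    #          even at a million tosses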
 
Wait wait wait. If Leumas is using a set of pre-generated values to choose from, how do we know the values are evenly distributed on either side of the "> 277" boundary?

I assume you mean 127. Indeed, an exactly even split in a set of randomly generated values would be unlikely. But, in this case we know that the set is unbalanced: it contains 8182 elements > 127 (i.e., "heads," as defined by Leumas' script) and 8202 elements ≤ 127 (i.e., "tails"). Hence, any random draw from it is biased in favor of tails.
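
Anyone with the data file can verify those counts directly; a short sketch follows (hypothetical file name, illustrative Python). With the counts quoted above, the per-draw heads probability works out to 8182/16384, roughly 49.94%.
Code:
data = open("random_bytes.bin", "rb").read()   # hypothetical file name
heads = sum(1 for b in data if b > 127)        # "heads" per the script's threshold
tails = len(data) - heads
print(heads, tails, heads / len(data))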
 
