
Global Consciousness Project

jzs,

Why is the RNG output XORed?
Because we are dealing with a real-life, complex electronic machine that is generating the data, which we know has a small amount of bias that can be measured and subtracted out.
But as I have already analytically proven, applying an XOR mask does not "subtract out" the bias in the mean. It instead converts it into correlations in the data. Correlations which will then bias the variance.

This has been explained to you repeatedly. If you acknowledge that there is a bias in the mean of the data coming from these RNGs, then it mathematically follows that the XORed data will also be biased, just in a different way.

If the calculation assumes that the RNGs follow statistical laws which, in reality, they do not follow, the calculated probability will be wrong.
The probability is dependent on the bias. If the bias is large, yes, that is a real problem. Since they know the bias and can estimate it, they subtract it out for that very reason.
This is simply false. They do not "subtract it out". This is the entire point of my critique!

They incorrectly believe that by applying the XOR mask they will be subtracting out the bias in the mean, and you seem to believe them, but I have analytically shown that this is not the case! The XOR operation does not subtract out the bias in the mean. It produces a new signal whose mean is not biased, but which has correlations whose strength depends on the initial bias. This has been mathematically shown. How can you continue to ignore this fact?

I can only assume that for some reason you do not agree that XORing a signal with a bitmask will transform biases in the mean into correlations in the output. Is this the case? Or do you disagree with the claim that correlations in the XORed data can bias the variance of the resulting sums? If you agree that both of these things are true, then how can you believe that they are correctly subtracting out the bias of the mean?
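
A minimal simulation sketch of the point above (Python, with a deliberately exaggerated bias of eps = 0.1 so the effect is visible in a short run; the GCP devices' real biases are far smaller, and the alternating 0101... mask is a simplifying assumption about how the XOR is applied):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1               # assumed raw bias: P(raw bit = 1) = 0.5 + eps (exaggerated)
n = 10_000_000          # number of raw bits; divisible by the 200-bit trial size

raw = (rng.random(n) < 0.5 + eps).astype(np.int8)   # biased raw bits
mask = (np.arange(n) % 2).astype(np.int8)           # alternating 0101... XOR mask
out = raw ^ mask                                    # the "debiased" stream

# The long-run mean bias is indeed gone...
print("mean:", out.mean())                          # ~0.5000

# ...but it reappears as a lag-1 autocovariance of about -eps^2...
x = out - 0.5
print("lag-1 autocov:", (x[:-1] * x[1:]).mean(), "expected:", -eps**2)

# ...which biases the variance of the 200-bit trial sums below the
# binomial value of 200 * 0.25 = 50, by about 200 * eps^2.
sums = out.reshape(-1, 200).sum(axis=1)
print("trial variance:", sums.var(), "expected:", 50 - 200 * eps**2)
```

The mean comes out clean, but the lag-1 autocovariance sits near -eps^2 and the trial variance near 50 - 200*eps^2: the mean bias has been converted into structure, not subtracted out.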


Dr. Stupid
 
Stimpson J. Cat said:

But as I have already analytically proven, applying an XOR mask does not "subtract out" the bias in the mean. It instead converts it into correlations in the data. Correlations which will then bias the variance.


They fully acknowledge this on their site:

"After XOR'ing, the mean is guaranteed over the long run to fit theoretical expectation. The trial variances remain biased, however. The biases are small (about 1 part in 10,000) and generally stable on long timescales."
 
CFLarsen said:
But the Orion RNGs do not pass the DIEHARD test. This is a fact, jzs.

And since you can't isolate the data from the Orion RNGs from the rest of the RNGs, you have to discard the whole dataset.


Well, they can do this. Look on their site. They look at the bias in variance produced by the Orion REG and Mindsong REG separately.
 
CFLarsen said:
Please, have the Orion RNGs used in the PEAR experiments all been tested, or is it one example, at Orion, that was tested?

I'll answer for you: The latter.

You do understand the problem, don't you? It's like testing a prototype of a thermometer, and then producing a number of thermometers, without ever testing whether even one of those thermometers is measuring temperature correctly.

You can't trust the data at GCP, jzs. It's that simple.


They analyse the output of their REGs from 1999-2005 to look at the amount of bias in variance that is produced after XORing.
 
davidsmith73 said:
Well, they can do this. Look on their site. They look at the bias in variance produced by the Orion REG and Mindsong REG separately.

In the list of "Current results", do they separate the Orion data from the rest?
 
Oh, God this is frustrating! The whole 'RNG' thing is a complete red herring in light of something more significant; it's all null and void without anything to compare it to!

We can apply all the maths in the world and argue until we're blue in the face that it was or was not calibrated. It's a simulacrum in numbers as far as I'm concerned, and we're debating whether the ruler used to measure it was the right size. ICAN's ol' demons in the static spring to mind; without a negative control, we have no reason to suspect that some sequences of numbers are anything more than faces in the snow.

Athon
 
davidsmith73 said:
Why should they?

Because the Orion RNGs are not calibrated (and I would love to see how the other two kinds were!), the data would pollute the whole dataset.

So, are they?
 
CFLarsen said:
Because the Orion RNGs are not calibrated (and I would love to see how the other two kinds were!), the data would pollute the whole dataset.

So, are they?

As far as I can see, they perform the same analyses on control data for each type of REG. If they conclude that the output of each type is not introducing any bias into the results (which they do claim, even though we have reason to believe they haven't demonstrated it), then they have no reason to separate the data.
 
Originally posted by jzs
[I expect the output of the RNGs to be a]pproximately binomial, mean 100 and standard deviation ~ 7.07.
Hey, that's what I expect too. And, hey, the output is approximately binomial with those parameters. So, what's the problem? Why should I think there's anything strange going on that needs to be explained?
The probability is dependent on the bias. If the bias is large, yes, that is a real problem. Since they know the bias and can estimate it, they subtract it out for that very reason.
So you expect the 200-bit sums, after XORing and normalization of the variance, to be exactly binomial and not just approximately binomial? And to be perfectly independent of each other too? I don't.

If you don't either, you should not defend statistical significance tests which assume that perfection.

If the Global Consciousness Project wants to demonstrate that certain global events affect its RNGs, it needs to compare the RNGs' behavior during the global events to their behavior the rest of the time. It is not sufficient merely to predict that the RNGs will act funny during a global event, and then to observe that they do act funny then. Maybe they act funny all the time.

(edited to correct spelling of "binomial")
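
For concreteness, here is one minimal sketch (mine, not the GCP's actual analysis; the statistic, file of events, and windowing are illustrative) of the comparison being asked for: drop windows of the same lengths at random times in the record and ask whether the event windows stand out against that empirical baseline.

```python
import numpy as np

def event_vs_baseline(netvar, event_slices, n_resamples=10_000, seed=0):
    """Empirical p-value for 'events are higher than baseline'.

    netvar       -- 1-D array of some per-second network statistic
                    (e.g. the sum of squared z-scores across RNGs)
    event_slices -- list of (start, stop) index pairs for the events
    """
    rng = np.random.default_rng(seed)
    observed = np.mean([netvar[a:b].mean() for a, b in event_slices])

    lengths = [b - a for a, b in event_slices]
    fake = np.empty(n_resamples)
    for i in range(n_resamples):
        # Same number and lengths of windows, placed at random times.
        starts = rng.integers(0, len(netvar) - max(lengths), size=len(lengths))
        fake[i] = np.mean([netvar[s:s + L].mean()
                           for s, L in zip(starts, lengths)])
    return (fake >= observed).mean()
```

If the RNGs "act funny all the time", the randomly placed windows act funny too, and the event windows stop looking special; that is exactly the control this design provides.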
 
69dodge said:
Hey, that's what I expect too. And, hey, the output is approximately binomial with those parameters. So, what's the problem? Why should I think there's anything strange going on that needs to be explained?


The problem arises when you compare that output to the events specified in the formal hypothesis registry, and the deviations during those events are beyond what you'd expect by chance.


So you expect the 200-bit sums, after XORing and normalization of the variance, to be exactly binomial and not just approximately binomial? And to be perfectly independent of each other too? I don't.


I don't either. "exactly" is where I disagree.


If you don't either, you should not defend statistical significance tests which assume that perfection.


All models are wrong, some are useful.
 
athon said:
Oh, God this is frustrating! The whole 'RNG' thing is a complete red herring in light of something more significant; it's all null and void without anything to compare it to!
I agree the debate is pointless because the data, good or bad, shows nothing. After several pages I almost forgot the original topic, and why I dismissed it in the first place.

Walt
 
davidsmith73 said:
As far as I can see, they perform the same analyses on control data for each type of REG.
Where, exactly, do you see this?

Can you, in the datasets, see which data comes from which type of RNG?
 
Originally posted by CFLarsen
Can you, in the datasets, see which data comes from which type of RNG?
The downloadable data identifies each RNG by its ID number. The GCP website has a page that lists all the RNGs with their ID numbers and says what type each is, along with some other information.
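
Mechanically, that means separating the data by device type is just a table join. A hypothetical sketch (the file and column names here are made up for illustration; the real downloads are per-day files, and the ID-to-type table is the egg list mentioned above):

```python
import pandas as pd

trials = pd.read_csv("gcp_day.csv")   # hypothetical: a time column plus one column per egg ID
eggs = pd.read_csv("egg_list.csv")    # hypothetical: columns egg_id, reg_type, install_date, ...

orion_ids = set(eggs.loc[eggs["reg_type"] == "ORION", "egg_id"].astype(str))
orion_cols = [c for c in trials.columns if c in orion_ids]
orion_only = trials[["time"] + orion_cols]   # trials from Orion-built eggs only
```

Eggs that changed type partway through (the cases raised below) would additionally need filtering by date, but the per-ID format allows that too.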
 
69dodge said:
The downloadable data identifies each RNG by its ID number. The GCP website has a page that lists all the RNGs with their ID numbers and says what type each is, along with some other information.

Thank you. There's a problem, though.

Take a look at the 10th EGG in the list. It is referred to as "transferred", but was installed on 2003-02-03. There are several cases of EGGs that were replaced, e.g. the 53rd EGG, which was once a MINDSONG but was changed to an ORION. They are not even sure when the 72nd and 74th EGGs were installed. And so on, and so on, and so on...

Do you see what's happening here? Their analyses are worthless (gee, we say that a lot, don't we?) because they are mixing the data from different EGGs.
 
CFLarsen said:
Do you see what's happening here?

Certainly.

Out of a list that includes 946 pieces of data, 4 disturb you because they used to be one type of RNG, but now they are another.

An additional 2 out of 946 disturb you because they don't know the exact day they were installed.

So 6 out of 946 pieces of data disturb you.

Although, I'm surprised. You failed to mention the cases under 'Type' where they weren't sure of the operating system being used. That would have boosted it up to a whopping 10 out of 946.

You're slipping.
 
jzs said:
Although, I'm surprised. You failed to mention the cases under 'Type' where they weren't sure of the operating system being used. That would have boosted it up to a whopping 10 out of 946.

You're slipping.

Not at all. You, OTOH, have serious reading comprehension problems:

CFLarsen said:
And so on, and so on, and so on...

Do you understand these simple words? Please explain what they mean.
 
CFLarsen said:
Not at all. You, OTOH, have serious reading comprehension problems: Do you understand these simple words? Please explain what they mean.

Aww, the patronizing approach. It is truly amazing how I've managed to get along this far, and this well, with the "serious reading comprehension problems" you need to believe I have.

So you find about .7% of the data in this table problematic.

Um, so...?
 
jzs said:
Aww, the patronizing approach. It is truly amazing how I've managed to get along this far, and this well, with the "serious reading comprehension problems" you need to believe I have.

I was throwing you a lifeline there. So, you deliberately misunderstand what I say?

jzs said:
So you find about .7% of the data in this table problematic.

No, that's not what I said. Since you won't admit to a reading problem, what is your excuse then?

jzs said:
Um, so...?

If they mix the RNGs, then the data is tainted. Do you understand?
 
CFLarsen said:
No, that's not what I said. Since you won't admit to a reading problem, what is your excuse then?


You have a need to believe I have a reading problem. Whatever makes you happy.


If they mix the RNGs, then the data is tainted.

So you must believe the data from the RNGs is statistically different then? If you want to present your analysis, you are welcome to do so.
 
