jzs said:
Please give a specific example or two, with mathematics, in detail for all to see. Or, you can keep being vague.
You are missing something here.
To show that someone is doing something wrong with his selection of data, it is enough to show that he gives no good reason for that selection.
No complicated math is needed to show that someone didn't use math properly in the first place.
To show that the GCP does something wrong with its data, it is enough to ask why they look at the 9/11 curve starting at 6:45 am (or so) and not at the curve starting at 4:00 am or at 8:25 am. If nothing in their publications addresses such questions, you can safely ignore their conclusions until they have an answer.
But what would disprove the GCP as a whole is showing that a curve derived from the output of a random number generator will, around any chosen time, always contain several points from which the curve goes on to cross the line marking the 1-in-20 chance at some later time.
Ignoring this possibility seems to be the main GCP mistake: if my guess is correct, you will find something that looks like non-random behaviour in any random curve on any date.
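For anyone who wants to see this effect rather than take my word for it, here is a minimal sketch in Python. It is a toy model of my own, not the GCP's actual statistic: I take the "curve" to be a plain cumulative sum of standard normal numbers and the "1 in 20" line to be the one-sided 5% envelope 1.645*sqrt(k) for a segment of length k; N_STEPS and HORIZON are arbitrary choices. It counts how many start points in a purely random sequence lead to a curve that later crosses that line, which is also why the choice of starting time (6:45 versus 4:00 versus 8:25) matters so much.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (my assumption, not the GCP's actual statistic): the "curve" is a
# cumulative sum of independent standard-normal steps, and the "1 in 20" line
# is the one-sided 5% envelope 1.645 * sqrt(k) for a segment of length k.
N_STEPS = 20_000   # length of the simulated random sequence (arbitrary)
HORIZON = 600      # how far ahead of each start point we look for a crossing (arbitrary)
Z_05 = 1.645       # one-sided 5% critical value of the standard normal

walk = np.cumsum(rng.standard_normal(N_STEPS))
envelope = Z_05 * np.sqrt(np.arange(1, HORIZON + 1))

n_starts = N_STEPS - HORIZON
crossings = 0
for t in range(n_starts):
    # Re-zero the curve at start time t, as if an "event" had begun there.
    segment = walk[t + 1 : t + 1 + HORIZON] - walk[t]
    if np.any(segment > envelope):
        crossings += 1

print(f"Fraction of start points from which the pure-noise curve crosses "
      f"the 1-in-20 line within {HORIZON} steps: {crossings / n_starts:.1%}")
```

On these toy assumptions the fraction should come out well above 5%, because allowing a crossing at any later time gives the noise many chances to look "significant".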
The GCP people could of course identify this problem by comparing their curves from "important" days with curves from "unimportant" days. I found nothing on their website where they present lots of such curves, or anything indicating that they have even thought about this problem. Furthermore, CFLarsen's questions mentioned in the article point in exactly the same direction, and the GCP "guru" was not even able to engage with them; most likely he has never considered them.
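To make the comparison with "unimportant" days concrete, here is a second sketch under the same toy assumptions (pure-noise days, one-sided 5% envelope; N_DAYS and DAY_LEN are again arbitrary numbers I picked). It plays the role of the control curves I would expect the GCP to publish: simulate many days on which nothing happened and count how many of them nonetheless produce a nominally "significant" curve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy model as above: each simulated "day" is pure noise.
N_DAYS = 1_000     # number of simulated control days (arbitrary)
DAY_LEN = 2_000    # samples per day (arbitrary)
Z_05 = 1.645       # one-sided 5% critical value

days = rng.standard_normal((N_DAYS, DAY_LEN))
walks = np.cumsum(days, axis=1)
envelope = Z_05 * np.sqrt(np.arange(1, DAY_LEN + 1))

ends_above = np.mean(walks[:, -1] > envelope[-1])        # final value only
ever_above = np.mean(np.any(walks > envelope, axis=1))   # above at any time during the day

print(f"Days whose curve ends above the 1-in-20 line: {ends_above:.1%}")
print(f"Days whose curve is above it at least once:   {ever_above:.1%}")
```

By construction the final-value test should flag roughly 1 day in 20, and the "crossed at any time" test noticeably more; without this baseline, a single "important" day crossing the line tells you nothing.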
This leads me to the conclusion that the GCP is a bunch of incompetent scientists. And the reason this "field of research" draws so many incompetent scientists is that competent scientists and talented students recognize the problems and prefer to spend their lives on useful and promising research.
It is even quite likely that my guess has already been proven; if so, the whole GCP would be disproven, since they employ no method to exclude such randomly occurring, non-random-looking intervals.
Does anyone know of a proof that in any big enough set of perfectly random data there are always some intervals that show non-random-looking behaviour?
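One result that seems to point in this direction, though I am not sure it is exactly the statement needed, is the law of the iterated logarithm:

```latex
% Law of the iterated logarithm: for i.i.d. X_1, X_2, ... with mean 0 and
% variance 1, and partial sums S_n = X_1 + ... + X_n,
\[
  \limsup_{n \to \infty} \frac{S_n}{\sqrt{2\, n \ln \ln n}} = 1
  \quad \text{almost surely.}
\]
```

Since sqrt(2 ln ln n) grows without bound, S_n / sqrt(n) exceeds any fixed level (for example 1.645, the one-sided 1-in-20 line) infinitely often along the sequence, so a long enough run of perfectly random data always contains intervals whose cumulative deviation crosses a fixed-significance envelope.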
Carn