Since this is not a Poisson process, the t-tests are not applicable to it.
You still don't have a clue what the t-test is or how it works. The t-test always uses Student's t-distribution, not a normal distribution, and it is specifically the test you use when the population variance is unknown and must be estimated from the sample -- its applicability does not hinge on the data arising from a Poisson process. Once again you manifest your abject ignorance of descriptive statistics and significance testing.
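To make the point concrete, here is a minimal sketch of a one-sample t-test (hypothetical data, chosen only for illustration): the statistic divides the deviation of the sample mean from the hypothesized mean by the estimated standard error, and it is referred to Student's t-distribution with n - 1 degrees of freedom precisely because the population variance is estimated from the sample.

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """One-sample t statistic: t = (mean - mu0) / (s / sqrt(n)).

    s is the *sample* standard deviation (n - 1 denominator), so the
    statistic follows Student's t-distribution with n - 1 degrees of
    freedom -- not the normal distribution -- because the population
    variance is unknown and estimated from the sample itself.
    """
    n = len(sample)
    mean = statistics.fmean(sample)
    s = statistics.stdev(sample)          # n - 1 in the denominator
    return (mean - mu0) / (s / math.sqrt(n)), n - 1

# Hypothetical measurements: does this sample's mean differ from 100?
t, df = one_sample_t([102.1, 99.8, 101.5, 100.9, 103.2], 100.0)
```

The resulting t would then be compared against the t-distribution's critical value for df degrees of freedom, which is fatter-tailed than the normal for small samples.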
What's worse is that the two sources you previously cited to describe the t-test amply explain this. It's one thing initially not to understand the t-test; that's forgivable. It's another thing to cite sources that explain it, insinuate that only you can understand them, and then spectacularly demonstrate your failure to understand them. That takes a special commitment to gaslighting.
This is a “slightly more goal-oriented task”, this is a drastic departure from the Princeton objective.
Jeffers goes to great lengths (Jeffers, op. cit., pp. 548ff) to explain how his experiments differ in method from the PEAR experiments and why that's a good thing. The problem with PEAR's results lay precisely in its methodology. Jeffers was not trying to duplicate PEAR's results using their methodology. He was trying to see whether the results PEAR obtained would survive a vastly improved methodology -- one intended to eliminate the avenues by which critics of PK research could find fault with it. If he achieved a significant result with a similar (but importantly different) methodology, then PEAR's results would have been somewhat vindicated.
The shift in methodology was crucial to Jeffers' stated goal: "[T]he major motivation for this effort was to improve our understanding of the dependencies and invariants of the process, rather than simply to provide more evidence of such anomalies." (Ibid., p. 547) In this particular paper, Jeffers is helping PEAR and others identify possible confounds to inform further research. This differs from the articles he wrote for Skeptical Inquirer, in which he more directly criticizes PEAR. That you think Jeffers had, or should have had, some different goal in this research is irrelevant. The German scientists attempted to duplicate PEAR's research using their protocol and failed. Jeffers developed his own protocol -- with PEAR's assistance, in some cases -- attempting to study the same PK effect on the same sorts of phenomena (a protocol that was supposed to correct for PEAR's protocol issues) and achieved only marginal significance, which correlated with which test site was used.
As usual, you fail to put this in the broader context that Alcock provides. Jeffers wasn't a mainstream scientist out to discredit psi research at all costs. He was a famous physicist welcomed by PEAR and their colleagues -- initially -- with open arms, eager to hear how his insight might improve their standing. You insist on painting him as a mustache-twirling villain and, as a result, as not even competent to worship the water you walk on.
If this was inactive data, it doesn’t depend on the operator intention, so this conclusion is expected, and it doesn’t prove anything.
Uh, what you write has absolutely nothing to do with what Jeffers is trying to illustrate in that passage. Once again it's as if you don't understand what's being said at all, but you think that if you just say something that sounds vaguely statisticky you can fool people into thinking you're leveling valid criticism and that anyone who objects "obviously" isn't privy to your brilliance.
It seems to me that Jeffers set his experiment to fail to come to conclusion that it negates the Princeton research conclusion.
No, using a different methodology and protocol than PEAR is not an attempt to "set his experiment to fail" to confirm PEAR, nor does it qualify Jeffers' work as "bs." You're simply trying to shoehorn Jeffers' actual work in this paper into what you preconceived it should have been, and faulting him for little more than failing to validate your preconception. And no, you clearly don't understand what Jeffers actually did to vet the initial PEAR studies in Skeptical Inquirer. And no, you're still as ignorant as can be about which statistics properly apply to this sort of research.
You declared Jeffers to be biased and "irrelevant" before you even looked at his work, so now you're just cherry-picking stuff from his research that you can spin to make it seem like that's still true.
Also, some bookkeeping. We initially asked you to comment on Jeffers' interpretation of the baseline-bind situation reported by Jahn. Instead, you've decided you're going to attack Jeffers' own original research. Although you're finally addressing the critic we wanted you to focus on, you're not addressing the specific criticism we asked you to look at. You hastily posted a link to an Internet post by Williams, claiming that it answers Jeffers, but as I pointed out, Williams clearly offers nothing but thinly veiled prevarication on that point. It is unsuitable as an answer. I'm asking you to address that specific criticism -- the baseline bind -- because it requires you to demonstrate an actual understanding of sample variance in statistical analysis, a topic in which you've repeatedly demonstrated yourself deficient. I want to see whether you're able and willing to correct your misunderstanding, in which you misattribute the underlying statistical phenomenon to margin of error.
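For anyone following along, the sample-variance point at issue can be sketched in a few lines (assumed run length and run count, not PEAR's actual parameters): under pure chance, the scores of baseline runs should scatter with the full theoretical binomial variance, so a set of baselines that is unusually tight is itself statistically anomalous rather than a sign of a well-behaved control.

```python
import random
import statistics

# Hypothetical simulation of chance-level baseline runs. Under pure
# chance, run scores scatter with the full binomial variance n*p*(1-p);
# observed baseline variance far *below* that figure would itself be
# statistically improbable, and it is not explained by "margin of error."
random.seed(1)

n_trials = 200                      # bits per run (assumed figure)
p = 0.5                             # chance probability per bit
runs = [sum(random.random() < p for _ in range(n_trials))
        for _ in range(1000)]       # 1000 simulated baseline run scores

theoretical_var = n_trials * p * (1 - p)    # n*p*q = 50.0
observed_var = statistics.variance(runs)    # sample variance, n - 1 denominator
```

With these assumed parameters the observed sample variance lands near the theoretical value of 50; real baseline data whose variance fell far short of that would call the randomness of the baseline itself into question, which is the bind at issue.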
And we're still waiting for you to clarify whether you believe Jeffers has done any original research. Earlier you claimed he hadn't, because you claimed Alcock hadn't mentioned any. Today you're clearly looking at, and cherry-picking from, what is obviously original research done by Jeffers in psychokinesis, informed and assisted by PEAR with the aim of following it up. We're wondering when you're going to get around to admitting and taking responsibility for your original error in claiming nothing of the sort existed. If you're never going to acknowledge errors, then it is fruitless for a thinking person to attempt to engage you and correct them.