...the Princeton researchers took appropriate measures to eliminate the bias, and Alcock raised no objections to that.
The Princeton researchers claimed to have taken appropriate measures, and that was good enough for Alcock, who chose to focus his attention elsewhere. Palmer did not, and you still haven't addressed that. And Jeffers noted that the report of those measures was suspicious, for reasons he laid out in his own articles. That's why we've spent three days trying to get you to address it, posting links every day with that hope in mind. If you're going to foist your own choice of straw-man critic on the discussion and ignore all the others, you don't get to read significance (specifically an insinuation of correctness) into what he omits -- especially when the other critics your opponents have asked you to evaluate did not omit it.
However, he wrote that the researchers didn’t establish the baseline for the equipment when it runs without human intervention.
He noted that there were no empirical controls in place to ensure that the subjects could not interfere -- consciously or unconsciously -- with the calibration runs. This comes from his extensive experience in human-subjects research, in which Jahn was a novice. And since anomalies
did arise in the calibration runs, that seems a prudent suggestion.
The author is a psychologist, not a mathematician, so his confusion is understandable.
No. You haven't shown that he's confused, only that you don't understand his criticism. Dr. Alcock is an expert in an empirical science that relies heavily on experimentation with human subjects. He can be assumed to understand what empirical controls must attend such research. In contrast, prior to PEAR, Robert Jahn was an engineer and had no experience in human-subject experiments. As such, he failed to take the appropriate precautions. Most notably, when some of his errors were corrected in his own attempts to reproduce his results, he failed to duplicate his own previous findings. That's fairly strong evidence that Jahn, not Alcock, is the party in error.
And none of the PEAR team were mathematicians, so if you're going to hold Jahn's critics to that standard then you will have to reject all of PEAR's analysis as commensurately uninformed. Statistics is a pervasive branch of mathematics. It is applied mathematics, and as such has its most vital role when it is, you know, actually applied.
The author suggests that the researchers should use a random sequence of control runs and experimental runs to randomize the process. This is an idiotic suggestion, akin to suggesting that during clinical drug trials a subject take the medication one day and a placebo the next.
Your name-calling does not address the reasons Alcock gave for suggesting the revision. And since you have already admitted ignorance of the notion of empirical controls, your offhand dismissal bolstered by an inapt analogy falls flat.
Since the measured variance exhibited a clear dependence on whether the subject got to choose how to try to influence the machine, how that choice was made became an obvious confounding variable in the experiment. In Jahn's protocol it was uncontrolled, so it remains confounded in his original result, much like a placebo effect: the data correlated with it when, under Jahn's null hypothesis, it should not have. Drug trials rely on a cumulative effect, so successive trials are not independent. That objection does not apply to requiring that the PEAR subject not know ahead of time what kind of run was on deck or whether it would be volitional. Those are discrete events, and in the context of the experiment they must be. You don't understand the experiment design. And since there was a suspicious correlation in the calibration runs that was not present in the experimental runs (even absent any PK component), whether a run was a calibration run or an experimental run was also an uncontrolled confounding variable producing a correlation that should not have been there. Therefore it, too, should have been outside the subject's influence.
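To make the suggested revision concrete: it amounts to building the run schedule in advance and shuffling it, so that run type (calibration vs. experimental, and which intention is assigned) is decided by the protocol rather than by the subject. Here is a minimal sketch of that idea in Python; the labels and counts are purely illustrative, not PEAR's actual protocol.

```python
import random

def build_schedule(n_each=10, seed=None):
    """Build a randomized run schedule so the subject cannot know in
    advance whether the next run is a calibration run or an experimental
    run, nor which intention is assigned.  Labels/counts are illustrative.
    """
    rng = random.Random(seed)
    schedule = (["calibration"] * n_each
                + ["experimental:high"] * n_each
                + ["experimental:low"] * n_each)
    rng.shuffle(schedule)  # order is now unpredictable to the subject
    return schedule

sched = build_schedule(n_each=3, seed=42)
# Every run type still appears the required number of times,
# but the sequence itself is randomized.
```

The point of the shuffle is that run type stops being something the subject (or experimenter) can anticipate or select, which is exactly what removes it as a confounding variable.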
You are a mathematician (or so you claim), not an empirical researcher, so your confusion is understandable. But not excusable.
The question is valid, but it doesn’t apply to the Princeton ESP research.
It most certainly does, because it's the basis for determining that the baseline calibration runs were too good. They should have exhibited more variance than PEAR reported they did. Jahn had no rational explanation for it. And neither do you.
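The statistical point here is elementary: if a calibration run is N independent random binary samples, the count of hits should scatter around N/2 with binomial variance N/4, and the run-to-run spread of those counts should match that expectation. Baselines that hug the mean more tightly than chance allows are themselves anomalous. A rough simulation (run sizes chosen purely for illustration, not PEAR's actual parameters):

```python
import random
import statistics

def calibration_counts(n_runs=1000, n_samples=200, seed=1):
    """Simulate genuinely random calibration runs: each run is
    n_samples fair coin flips; record the count of 1s per run."""
    rng = random.Random(seed)
    return [sum(rng.randint(0, 1) for _ in range(n_samples))
            for _ in range(n_runs)]

counts = calibration_counts()
expected_var = 200 * 0.5 * 0.5       # binomial variance n*p*(1-p) = 50
observed_var = statistics.variance(counts)
# A truly random device yields observed_var close to 50; a baseline
# whose counts cluster much more tightly than that is itself a
# statistical anomaly in need of explanation.
```

This is the sense in which calibration runs can be "too good": deficient variance is just as much a departure from the null as excess variance.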
This is an interesting topic; I might return to it in the future. But, as I said, it is irrelevant to the Princeton research.
No, it's the basis of the criticism we've been trying to get you to face up to for three days now. You are steadfastly claiming it was never supplied to you. Now you're changing horses and claiming it's inconsequential or irrelevant. As usual, you're simply making up increasingly feeble excuses for why you don't have to address the most damaging criticism against your claim. When it becomes too onerous for you to maintain the ineffective gaslighting, I predict you'll change the "real" purpose of this thread and declare victory.