[Y]ou're only admitting what you were pointed to be awfully wrong many times.
And it's a tactical withdrawal to boot. "I admitted an error, now you have to." He's belatedly trying to portray himself as gallant and conciliatory only so that he can turn around and say his opponents are entrenched by comparison. Buddha's arguments are about three-fourths ham-fisted social engineering on any given day. What do we really learn from the experience? Buddha is willing to jump to a conclusion and cling to it tenaciously, letting go only when it serves his purposes -- if he lets go at all. Yet here he is today practically begging people to take his word for it. Trustworthiness and face-saving are incompatible ends.
What about yourself? It's you who really continues to ignore when they are used.
Indeed, how about our own trip down memory lane. "That's not a proper baseline!" Well, yes it is, and Jahn (not Palmer) is the one who decided to use it. The t-test compares two empirically sampled data sets. He didn't know this. "You have to collect all the baseline data ahead of time." Well, no; his own source -- the Navy's mid-century statistical manual -- discusses pairwise sampling: varying the independent variable at each trial and letting the dependent variable do what it may. That's what happened in the PK research using various kinds of apparatus. But what Buddha hopes you won't notice is that he has since moved on to accepting that the t-test works with empirical baselines.
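For readers who want the mechanics, the comparison described above can be sketched in a few lines of Python. Everything here is hypothetical illustration -- the data are invented, and `welch_t` (Welch's two-sample statistic) is just one standard way to compare two empirically sampled data sets; it is not a claim about which variant Jahn's lab computed:

```python
import math
import random
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: compares two empirically
    sampled data sets without assuming equal variances."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(1)
# Hypothetical per-run scores: a baseline series and an "effort" series
# collected pairwise, run by run -- neither needs to precede the other.
baseline = [random.gauss(100.0, 7.0) for _ in range(50)]
effort   = [random.gauss(102.0, 7.0) for _ in range(50)]

t = welch_t(effort, baseline)
print(f"t = {t:.2f}")  # magnitude reflects how far the two samples diverge
```

The point of the sketch is only that both series are empirically sampled: the baseline is data collected from the same apparatus, not a theoretical curve fixed ahead of time.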
Buddha is trying desperately to portray himself as the teacher. But the record shows him struggling to keep up, either silently leaving his former misconceptions behind in hopes that they'll be forgotten, or declaring that he no longer has time to debate them. Even if one doesn't understand the statistics, at the end of the day Buddha is still stuck with the fact that none of the critics-of-the-critics he's arrayed as a screen seem to agree with him on what the problems are, if any, with Jeffers' work. Is it easier to suppose that they all somehow missed what Buddha is telling us is the low-hanging fruit of the "obviously" misapplied t-test? Or is it easier to conclude that Buddha is wrong -- yet again -- in his drunkard's-walk argument? That's an answer you see even without knowing the field. Buddha insinuates that any properly qualified statistician should draw the same conclusions he does. Except they don't.
But you know what? Aside from an illustration of Buddha's method, the t-test is a subject we can allow him largely to drop, as he has demanded. Why? Because its more important role in this debate is to telegraph Buddha's major error. And a colossal one it is, too. You note:
The fact that you needed to explain this -and explain it in such a "curious" way- is showing how lost you are.
What an understatement.
And by that I mean it's a challenge to convey just how far off the mark Buddha really is. "You can't apply a t-test to an experiment using a double-slit apparatus because the apparatus produces an interference pattern and not a normal distribution." That fundamentally misunderstands what all these experiments actually record, what they treat as the dependent variable, and what they subject to statistical significance tests. I mean
fundamentally misunderstands. The raw behavior of the apparatus -- however devised -- is not what the significance test is performed on. It's not the dependent variable in the experiment. That's right: Buddha is fundamentally mistaken on how all the PK experiments he's discussed were statistically modeled.
Keep in mind the ever-present fact that none of Stanley Jeffers' critics or reviewers managed to latch onto this exceptionally egregious "error" of applying a significance test to something that doesn't even vaguely look like a statistical distribution. And that's because -- whatever other criticism they may wish to mount -- they know that's not what Jeffers was trying to do. Buddha's approach is essentially a cargo-cult version of how this kind of research derives the dependent variable.
Here's what I think might have happened. Buddha keeps focusing on the physics principle that drives Jahn's random-event generator (REG). The behavior of that principle can be accurately described by a properly parameterized normal distribution. Separately, a straightforward significance test exists that compares a sample to a normal distribution and gives you the probability that the random process that produced the baseline distribution could have generated that sample sequence. Those two concepts found each other in Buddha's head and produced a narrative for these experiments. It's a coherent-sounding narrative because it has the commonality of the normal distribution. They "must" be connected the way Buddha imagines, right?
Well, no. The truth is not quite that simplistic. The REG also embodies a process that converts the noise to a discrete binary value; the IEEE article describes it in depth. The law of large numbers says that over an infinite number of "clicks" of such a device, ones should appear as often as zeros. But over any finite run you may get more of one digit than the other, and the central limit theorem says that the degree of mismatch, tallied run by run, should ideally form a normal distribution. (In deference to the non-ideal nature of the process and its confounds, it is fit to the t-distribution, not a normal distribution.) The double-slit phenomenon says that reconverging paths will discretize to one or another node of the interference pattern according to the probabilistic nature of quantum mechanics. Unaffected by PK, over a large number of runs the distribution across the interference pattern nodes should be symmetrical. But over a short number of runs there will be some preference for "left" or "right" nodes, and the degree of that preference should form a normal distribution. Again, because this is empirically collected baseline data, the t-test is used. The goal for the subjects in Jeffers' experiment is then to bias the deposition of photons toward different nodes of the interference pattern to a degree that quantum mechanics cannot account for. That's the dependent variable in Jeffers' experiment.
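The run-level derivation described above can be sketched as a toy simulation. To be clear about what's invented: the run length, the number of runs, and the small "intention" bias standing in for the claimed effect are all arbitrary choices for illustration; nothing here models PEAR's actual hardware or protocol:

```python
import random
import statistics

random.seed(7)
TRIALS_PER_RUN = 200  # each run is a fixed-length series of binary "clicks"

def run_score(p_one=0.5):
    """One run's score: the count of ones out of TRIALS_PER_RUN.
    Over many runs, these scores are approximately normally
    distributed around TRIALS_PER_RUN * p_one -- even though a
    single click is just a 0 or a 1."""
    return sum(1 for _ in range(TRIALS_PER_RUN) if random.random() < p_one)

# Empirical baseline: unattended runs of the simulated device.
baseline = [run_score() for _ in range(100)]
# "Intention" runs: a tiny hypothetical bias stands in for the claimed effect.
intention = [run_score(p_one=0.52) for _ in range(100)]

print("baseline mean:", statistics.mean(baseline))
print("intention mean:", statistics.mean(intention))
```

Note what the significance test would operate on here: the per-run scores, not the raw stream of clicks. The raw behavior of the device never has to look like a bell curve; the run-by-run tally does.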
That indirection is all-important, because it forms the statistical basis for creating a usable distribution out of something that isn't natively a bell-shaped curve. Deviation from what the output is supposed to look like is the data. It's a histogrammic view of the world that comes naturally to people who work with statistics all the time, but it's wholly absent from Buddha's thinking. It's safe to say Buddha has never statistically modeled a real-world science experiment. He's fixated on the nuts-and-bolts operation of the experimental apparatus, and he has only a simplistic, literal concept of how that translates into a statistical sequence.
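The same indirection can be shown for the double-slit case. In this toy model the node layout and arrival probabilities are pure inventions for illustration -- they are not Jeffers' apparatus -- but they make the point: the raw output is multimodal and looks nothing like a bell curve, while the per-run preference score derived from it is approximately normal:

```python
import random
import statistics

random.seed(3)
# Invented layout: five interference "nodes" with symmetric arrival
# probabilities. The raw photon-by-photon output is multimodal.
NODES = [-2, -1, 0, 1, 2]
WEIGHTS = [0.1, 0.2, 0.4, 0.2, 0.1]

def run_bias(photons=500):
    """One run's dependent variable: net left/right preference,
    i.e., (photons landing right of center) - (photons landing left).
    Over many runs this score is roughly normal around zero, even
    though no single photon's landing spot is."""
    hits = random.choices(NODES, weights=WEIGHTS, k=photons)
    return sum(1 for h in hits if h > 0) - sum(1 for h in hits if h < 0)

biases = [run_bias() for _ in range(200)]
print("mean bias:", statistics.mean(biases))
print("stdev of bias:", statistics.stdev(biases))
```

The histogram of `biases` is the usable distribution; a subject's task, in this framing, would be to shift that score beyond what the unbiased process can account for.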