Is the Telekinesis Real?

By the way, the failed replication paper is here. It makes a fascinating study in itself, in that the authors note that they have failed by an order of magnitude to achieve a statistically significant result, but that they have noted "a substantial pattern of structural anomalies related to various secondary parameters, to a degree well beyond chance expectation and totally absent from the calibration data." I'm a little reluctant to plough through 54 pages to divine what these structural anomalies are, but at best they fall foul of the well-known problem of testing hypotheses suggested by the data.

Dave
 
Links to Dr. Jeffers' original articles were posted several times, even before you asked for them. But you admit you "don't have time" to read and respond to them, so now you're just making excuses for your delinquent performance.

I'm reminded of old Westerns, where at one point the villain pulls his gun and shoots at a person's feet yelling "Dance! Dance!"

Our Pseudo-Buddha has proven time and again he doesn't read the sources we provide. I suspect this is by design. As soon as we overtly give up and stop responding with sources he'll never read or respond to, he'll go to wherever he's crowing about these exchanges and declare that "The skeptics couldn't provide a single source to refute me!"

That or he just enjoys dropping a deuce and running.
 
It might also be more honest to replace "reproduced the Princeton research" with "failed to reproduce[d] the results of the Princeton research" in your post.

From the horse's mouth (and ninja'ed by mere minutes).
"As far as the replication results themselves are concerned, we are left with an empirical paradox. Whereas the prior PEAR experiments clearly displayed anomalous secular trends in REG output distribution means in correlation with operator intention, the three-laboratory replications, which employed essentially similar equipment and protocols, failed by an order of magnitude to replicate the primary correlations." Jahn, et al., op. cit., p. 538.​

Buddha:

With nothing more than a standard scholarly citation, two of your opponents here were able to locate and skim the article. In my case it took less time than my Keurig cup took to brew. So let's have no more talk about how pressed for time you are and how others must do all your research for you, on both sides of the question. It's been clear for some time that you have done very little actual reading regarding Robert Jahn and PEAR, while -- as you should have gleaned from the links to previous ISF discussions I posted before you even opened this thread -- your critics are far more familiar with the research, its sources, and its problems.
 
The links were provided, you are right about that. But they lead to this website. I was asking for links to the original articles written by the scientists who criticized the Princeton ESP program. Since no one has provided them, I am going to do it myself. See my latest posts.

The highlighted is a barefaced lie; SOdhner posted them twice.

The first time was in post #17:

 
Since no one has provided them, I am going to do it myself. See my latest posts.

How puerile (look it up in the dictionary) of you to declare nonexistent what has been provided to you in abundance.

Well, we accept your surrender. Part of your task now is learning what an article is (you have been confusing peer-reviewed scientific and technical publications with Reader's Digest).
 
For their research the Princeton group used random noise produced by electronic circuits. There is nothing unusual about this choice of a random-number generator; a friend of mine who develops signal-processing software once told me this method is quite common. Besides, the Princeton researchers took appropriate measures to eliminate the bias, and Alcock raised no objections to that. However, he wrote that the researchers didn’t establish the baseline for the equipment when it runs without human intervention.

Apparently, he doesn’t understand what the baseline means in this case: the chance that a positive or a negative run appears is 0.5, within a margin of error that depends on the confidence interval. Nevertheless, he doesn’t argue with the results of the evaluation done by the ESP group, which seems to indicate that he accepted their baseline. The author is a psychologist, not a mathematician, so his confusion is understandable.

The author suggests that the researchers should use a random sequence of control runs and experimental runs to randomize the process. This is an idiotic suggestion, akin to suggesting that during clinical drug trials a subject take the medication one day and a placebo the next.

I’ll return to the article tomorrow. One more thing: one of my opponents asked me a sarcastic question: are random-event generators truly random? The question is valid, but it doesn’t apply to the Princeton ESP research. In several ESP studies so-called random-number algorithms were used to run the ESP equipment. Technically speaking these algorithms are called pseudo-random number generators, which explains their limitations. However, if the process can be interrupted, they produce random numbers. This is an interesting topic; I might return to it in the future. But, as I said, it is irrelevant to the Princeton research.
 
And by thus far ignoring the three "articles" that discuss "the Princeton research," you have effectively acknowledged them to be right.



Outspoken according to whom? Wouldn't it rather be one of the worst articles, chosen so you could find a way to say something about it?



If you were familiar with peer-reviewed papers (they're not called articles), you would at least know it is "cited" and not "sited".

And it looks like you managed to get something written by a third party. That's not the way to discuss criticism of the subject; that is the way you drive traffic toward marginal content for which you already have replies prepared.

I hope nobody falls into your dialectical trap.
Please check the Wikipedia article on Princeton ESP research to see that I mentioned a well-known critic. If this doesn't satisfy you, I don't know what else will. Thatnk you for correcting my typo.
 
I'm reminded of old Westerns, where at one point the villain pulls his gun and shoots at a person's feet yelling "Dance! Dance!"

Great, now I've got the image in my head of Michael J. Fox moonwalking.

I suspect this is by design.

Yes. In order to refute Jeffers our Buddha will have to display more knowledge than simply selecting "t-test for Significance" from the menu of his canned statistical analysis package. He will have to demonstrate an actual understanding of the mathematics -- which I strongly suspect he cannot do. The telltale evidence is his cargo-cult vision of how statistical methods work in science. He can't fathom how any of the criticism works. He thinks the only way the results could have been compromised is if the experimenters had used some one-off, homegrown algorithm, or if the standard t-test algorithm had somehow malfunctioned. He doesn't seem to realize that problems can creep in before you ever get to the test for significance, and that any such test will be susceptible to uncontrolled variables such as the homogeneity of the data or anomalies in the baseline.

The technique here is to let the machine run by itself for a while and build a statistical profile of its output when supposedly not affected by the subject. This is the empirical baseline. That is, the baseline behavior of the machine is measured, not guessed at by extrapolating from how it works or assumed from theory. The idea is that any quirks in that particular machine will be accounted for without knowing precisely what causes them. It's actually good methodology compared to that of previous PK researchers. Then the subject restarts the machine and applies one of several available protocols to attempt to influence the machine, and the results of that are compared to the previously measured baseline for that machine. The t-test for significance, among other things, comes up with a probability that the baseline behavior accounted for the experimental results. You want that to be a very low probability, less than 0.05.
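Here's a toy version of that comparison in Python. To be clear, this is my own sketch, not PEAR's code, and every trial count and effect size below is invented for illustration:

```python
# Sketch only: empirical baseline first, then experimental runs tested
# against it. All numbers here are invented, not PEAR's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Empirical baseline: the machine runs unattended; record the number of
# hits (ones) in each 200-trial run.
baseline_runs = rng.binomial(n=200, p=0.5, size=1000)

# Experimental runs: the subject tries to push the mean high. A small
# artificial effect (p = 0.52) is injected purely so the example has
# something to detect.
experimental_runs = rng.binomial(n=200, p=0.52, size=1000)

# Two-sample t-test: what is the probability the baseline behavior
# accounts for the experimental results? You want it below 0.05.
t_stat, p_value = stats.ttest_ind(experimental_runs, baseline_runs)
print(f"t = {t_stat:.3f}, p = {p_value:.4g}")
```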

Separately, the baselines are compared to theoretical random distributions (Gaussian, if memory serves) and shown to conform suitably. That's the empirical control that shows the machine is truly a random number generator to within some tolerance -- again measured statistically. And that's what lets you say the experimental results compare validly to chance.
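That control amounts to a goodness-of-fit check. A minimal sketch, again with my own invented binning and run counts rather than PEAR's actual procedure -- for 200 binary trials at p = 0.5 the hit counts should be binomial with mean 100 and variance 50, which is nearly Gaussian:

```python
# Sketch only: does the unattended machine's output match the
# theoretical chance distribution (binomial, n = 200, p = 0.5)?
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
baseline_runs = rng.binomial(n=200, p=0.5, size=2000)

# Bin the hit counts; edges chosen so every bin has ample expected mass.
edges = np.array([0, 94, 98, 101, 104, 108, 201])
observed, _ = np.histogram(baseline_runs, bins=edges)

# Exact binomial probability of each bin: P(edges[i] <= X < edges[i+1]).
binom = stats.binom(200, 0.5)
probs = binom.cdf(edges[1:] - 1) - binom.cdf(edges[:-1] - 1)
expected = probs * len(baseline_runs)

# Chi-square goodness of fit; a tiny p means the machine fails the
# "truly random to within tolerance" check.
chi2_stat, p_value = stats.chisquare(observed, expected)
print(f"chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")
```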

What Jeffers did is to show that the baselines reported -- the standard against which the experimental results were measured for significance -- were suspiciously correlated with each other and with theoretical distributions. In other words, the empirically measured baselines were "too good" to be the product of the actual machines. Poor Dr. Jahn had only a comically feeble explanation for how that could have occurred. Variance in the calibration runs to establish the baselines did not correlate with variance across other variables in the experimental runs such as which subject was operating the experiment at the time. While many subjects exhibited no ability whatsoever to influence the random-event generator, they all somehow exhibited an uncanny ability to achieve near-perfect calibration runs.
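To give a flavor of what "too good" means statistically, here's a minimal illustration of testing whether calibration runs are less variable than chance permits. The data are simulated and deliberately under-dispersed; this is not Jeffers' actual analysis or PEAR's numbers:

```python
# Sketch only: chi-square test for suspiciously LOW variance. For 200
# binary trials at p = 0.5 the hit count should have variance 50.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated "too good" calibration runs: variance ~30 instead of 50.
calibration = np.round(rng.normal(100, np.sqrt(30), size=500))

n = len(calibration)
s2 = calibration.var(ddof=1)
chi2_stat = (n - 1) * s2 / 50.0

# Left tail: probability of variance this small from a fair machine.
p_low = stats.chi2.cdf(chi2_stat, df=n - 1)
print(f"sample variance = {s2:.1f}, P(this low by chance) = {p_low:.3g}")
```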

Of course a t-test for significance will tell you that the baseline mean has a low probability of explaining experimental means if the baseline mean is not accurate. The test for significance has no way of verifying that either of the data sets it's looking at is really what the analyst purports it to be. Jahn's critics were charitable. Having no evidence of outright fraud, they simply noted that his experiment lacked the empirical controls to ensure the integrity of either the calibration runs or the experimental runs and suggested ways in which that could have been achieved. They remained dispassionate and scientific, despite Buddha's ongoing attempts to poison the well.

As soon as we overtly give up and stop responding with sources he'll never read or respond to, he'll go to wherever he's crowing about these exchanges and declare that "The skeptics couldn't provide a single source to refute me!"

If he remains true to form he'll rewrite the purpose of this thread. In all his other threads he has closed off the discussion by saying he succeeded in some goal other than proving his claim. He'll claim he just wanted to see people's reactions, or to study some irrelevant issue. This is, I suspect, why he ignored my request at the beginning of this thread to state clearly what his intent was in starting it and participating in it. He claimed in his opening post that Jahn's conclusions were valid and should still stand. I'll bet you dollars to donuts that vindicating Jahn and PEAR won't have the slightest thing to do with his eventual declaration of "victory."
 
Please check the Wikipedia article on Princeton ESP research to see that I mentioned a well-known critic. If this doesn't satisfy you, I don't know what else will. Thatnk Thank you for correcting my typo.

You're welcome.

Read all those papers yet? Or are you going to pretend you didn't see them? Again.
 
Please check the Wikipedia article on Princeton ESP research to see that I mentioned a well-known critic.

You mentioned a well-known critic, but then proceeded to demonstrate almost no knowledge of what he wrote and no understanding of the basis of his criticism. You further declared that you "didn't have time" to discuss it. If you must resort to Wikipedia to inform you of the criticism against PEAR, then we have to conclude you have done too little research to know who his critics are and what they say.

This is especially disturbing when we see that you started off this thread with a blanket accusation that all Robert Jahn's critics were incompetent. You clearly aren't familiar enough with them to make that judgment.

If this doesn't satisfy you, I don't know what else will.

If you don't know what else will satisfy your opponents here, then you are deliberately ignoring them. I want you to respond and comment in detail on Stanley Jeffers' analysis of the problematic baselines. This will require you to demonstrate more skill in statistics than merely mentioning the names of common techniques.

Links were posted the first day of your thread and again subsequently. They were posted again after you asked for links to criticism. They were reposted again today, and yet you still claim no one has supplied you with references to criticism that they want you to evaluate and that therefore you must choose for them and supply your own instead. At this point we have to conclude you are deliberately lying in order to evade your critics. Further, a thorough treatment of your foisted critic was provided, but you ignored that as well. Your evasion of meaningful discussion is not correlated with whether you or your critics supply the rebuttals.
 
As this is Jabba and yrreg all over again, I suggest we put in place the Jabba-yrreg protocol.

Let's do a provisional outline:

1-"Buddha" chooses a pseudo-logical or pseudo-scientific topic related with his personal religion.
2-"Buddha" starts a thread on the topic of choice.
3-The OP is called "an article" and contains a badly argued word-saladish piece that basically says he's right about it
4-"Buddha" ignores all criticism and accuses who he call "his opponents" of saying what they didn't and not understanding what they didn't mention but "Buddha" wants to insist about in order to give the false impression of an ongoing debate
5-"Buddha" ignores all further criticism and calls to follow a reasonable and logical path. He replies isolated bits of other participants' post in order to insist in his previous points and he declares what is the correct way for dealing with the topic at hand (for "correct" meaning what places the embers closer to his marshmallow)
6-"Buddha" ignores the avalanche of criticism, and declares victory.
7-"Buddha" ignores that he has lost the debate from its very beginning, insist in the facts that his repetitive posts have been ignored and the debate has come to a stop because of his "opponents" ' stubbornness
8-"Buddha" makes some protocolar appearances, declare victory again, declare his goals achieved and leaves the thread .


Can you embetter this?
 
For their research the Princeton group used random noise produced by electronic circuits. There is nothing unusual about this choice of a random-number generator; a friend of mine who develops signal-processing software once told me this method is quite common. Besides, the Princeton researchers took appropriate measures to eliminate the bias, and Alcock raised no objections to that. However, he wrote that the researchers didn’t establish the baseline for the equipment when it runs without human intervention.

Apparently, he doesn’t understand what the baseline means in this case: the chance that a positive or a negative run appears is 0.5, within a margin of error that depends on the confidence interval. Nevertheless, he doesn’t argue with the results of the evaluation done by the ESP group, which seems to indicate that he accepted their baseline. The author is a psychologist, not a mathematician, so his confusion is understandable.

The author suggests that the researchers should use a random sequence of control runs and experimental runs to randomize the process. This is an idiotic suggestion, akin to suggesting that during clinical drug trials a subject take the medication one day and a placebo the next.

I’ll return to the article tomorrow. One more thing: one of my opponents asked me a sarcastic question: are random-event generators truly random? The question is valid, but it doesn’t apply to the Princeton ESP research. In several ESP studies so-called random-number algorithms were used to run the ESP equipment. Technically speaking these algorithms are called pseudo-random number generators, which explains their limitations. However, if the process can be interrupted, they produce random numbers. This is an interesting topic; I might return to it in the future. But, as I said, it is irrelevant to the Princeton research.

A pseudorandom number generator can, within certain limits, produce sequences of bits that are equivalent to Gaussian random numbers. One of those limits is that the number of bits in an output number must be fewer than the number of bits in the shift register generating the pseudorandom sequence. One can extract seemingly random numbers having more bits from the generator's continuous bit output, but the result will have a lopsided distribution.
During a visit to PEAR I discovered that they were counting and attempting to influence the number of ones in a series of 200-bit numbers. The 200-bit numbers were extracted from the output of a 31-bit generator. One result is that the mean number of ones would inherently not be 100.
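For anyone who wants to poke at this claim numerically, here is a small experiment along those lines. The feedback taps (31 and 28) and the block-extraction scheme are my assumptions; I don't know the actual configuration of the PEAR hardware:

```python
# Sketch only: slice a 31-bit maximal-length LFSR's bit stream into
# 200-bit blocks and compare the ones-count statistics with the
# binomial expectation (mean 100, variance 50).

def lfsr31_bits(seed=0x1, nbits=1_000_000):
    """Fibonacci LFSR for x^31 + x^28 + 1; yields one bit per step."""
    state = seed & 0x7FFFFFFF
    for _ in range(nbits):
        bit = ((state >> 30) ^ (state >> 27)) & 1  # taps 31 and 28
        state = ((state << 1) | bit) & 0x7FFFFFFF
        yield bit

bits = list(lfsr31_bits())
blocks = [bits[i:i + 200] for i in range(0, len(bits) - 199, 200)]
counts = [sum(block) for block in blocks]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
print(f"mean ones per 200-bit block: {mean:.4f} (binomial: 100)")
print(f"variance of ones count:      {var:.2f} (binomial: 50)")
```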
 
...the Princeton researchers took appropriate measures to eliminate the bias, and Alcock raised no objections to that.

The Princeton researchers claimed to have taken appropriate measures, and that was good enough for Alcock, who chose to focus his attention elsewhere. Palmer did not, and you still haven't addressed that. And Jeffers noted that the report of those measures was suspicious for reasons he laid out in his own articles. That's why we've been trying for three days to get you to address that, posting links every day with that hope in mind. If you're going to foist your own choice of straw-man critic on the discussion and ignore all the others, you don't get to read significance (specifically an insinuation of correctness) into what he omits -- especially when that was not omitted by the other critics your opponents have asked you to evaluate.

However, he wrote that the researchers didn’t establish the baseline for the equipment when it runs without human intervention.

He noted that there were no empirical controls in place to ensure that the subjects could not interfere -- consciously or unconsciously -- with the calibration runs. This comes from his extensive experience in human-subjects research, in which Jahn was a novice. And since anomalies did arise in the calibration runs, that seems a prudent suggestion.

The author is a psychologist, not a mathematician, so his confusion is understandable.

No. You haven't shown that he's confused, only that you don't understand his criticism. Dr. Alcock is an expert in an empirical science that relies heavily on experimentation with human subjects. He can be assumed to understand what empirical controls must attend such research. In contrast, prior to PEAR, Robert Jahn was an engineer and had no experience in human-subject experiments. As such, he failed to take the appropriate precautions. Most notably, when some of his errors were corrected in his own attempts to reproduce his results, he failed to duplicate his own previous findings. That's fairly strong evidence that Jahn, not Alcock, is the party in error.

And none of the PEAR team were mathematicians, so if you're going to hold Jahn's critics to that standard then you will have to reject all of PEAR's analysis as commensurately uninformed. Statistics is a pervasive branch of mathematics. It is applied mathematics, and as such has its most vital role when it is, you know, actually applied.

The author suggests that the researchers should use a random sequence of control runs and experimental runs to randomize the process. This is an idiotic suggestion, akin to suggesting that during clinical drug trials a subject take the medication one day and a placebo the next.

Your name-calling does not address the reasons Alcock gave for suggesting the revision. And since you have already admitted ignorance of the notion of empirical controls, your offhand dismissal bolstered by an inapt analogy falls flat.

Since the measured variance exhibited a clear preference for the subject getting to choose how to try to influence the machine, how that choice is made became an obvious confounding variable in the experiment. And in Jahn's protocol it was uncontrolled, so it remains confounded in his original result, like a placebo effect. The data correlated with it when, according to Jahn's null hypothesis, they should not have. Drug trials rely on a cumulative effect, so each trial is not independent. That is not true of requiring the PEAR subject not to know ahead of time what kind of run was on deck or whether it would be volitional. Those are discrete events, and in fact must be in the context of the experiment. You don't understand the experimental design. And since there was a suspicious correlation in the calibration runs that was not present in the experimental runs (even absent a PK component), whether a run was a calibration run or an experimental run was also an uncontrolled confounding variable that leads to a correlation it should not have. Therefore it should also have been outside the subject's influence.
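And implementing what Alcock actually suggested is trivial. Here's a sketch, with the run labels and proportions invented for illustration:

```python
# Sketch only: a pre-randomized, balanced schedule of run types, fixed
# before the session so that neither the run type nor the intention
# direction is left to the subject's choice.
import random

def make_schedule(runs_per_type=20, seed=None):
    schedule = (["calibration"] * runs_per_type
                + ["intend-high"] * runs_per_type
                + ["intend-low"] * runs_per_type)
    random.Random(seed).shuffle(schedule)
    return schedule

schedule = make_schedule(seed=2024)
print(schedule[:6])  # held by the experimenter, never shown to the subject
```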

You are a mathematician (or so you claim), not an empirical researcher, so your confusion is understandable. But not excusable.

The question is valid, but it doesn’t apply to the Princeton ESP research.

It most certainly does, because it's the basis for determining that the baseline calibration runs were too good. They should have exhibited more variance than PEAR reported they did. Jahn had no rational explanation for it. And neither do you.

This is an interesting topic; I might return to it in the future. But, as I said, it is irrelevant to the Princeton research.

No, it's the basis of the criticism we've been trying to get you to face up to for three days now. You are steadfastly claiming it was never supplied to you. Now you're changing horses and claiming it's inconsequential or irrelevant. As usual, you're simply making up increasingly feeble excuses for why you don't have to address the most damaging criticism against your claim. When it becomes too onerous for you to maintain the ineffective gaslighting, I predict you'll change the "real" purpose of this thread and declare victory.
 
Talking solo again? Why don't you save yourself the embarrassment and read Jeffers'?

His only substantive post in today's offering is just a slightly wordier version of the blustery claim that his critics don't understand or aren't competent, without giving any details. He's a claimed mathematician, so we're just supposed to take his judgment at face value even though there's no actual demonstration of mathematical expertise behind it. Pure gaslighting, just like all his other threads.
 
Pure gaslighting, just like all his other threads.

Extra! extra! Entire skeptic forum gets gas poisoning from woo-woo slinger's posts! - Newspaper boy
 
