Is the Telekinesis Real?

As an aside, in the US, we normally submit a resume to a potential employer and a CV to a client. Double check what your acronyms represent.

In the UK (and I'm sure many other countries) we refer to the document we submit to potential employers as a curriculum vitae (CV).

Considering that finding flaws in Buddha's posts is a fish/barrel situation I think we can let that pass.
 
I consider it proof positive he is lying about those qualifications.

It's evident in all his threads that he's claiming expertise he does not have and expecting that none of his critics will be able to tell he's bluffing. You'd think that after three or four failed threads that unfolded along the same lines, he'd get the hint and find a more gullible audience.

He has made it clear that he does not understand the need to have baseline tests in assessing a system.

Which is baffling to me for someone who claims to be employed as a statistical analyst. The t-test for significance is the proper choice here because, as David Rogers notes, there's no way you can establish the distribution for the null hypothesis from theory alone. The mean and standard deviation of Jahn's random-event generator (REG) apparatus are hoped and theorized to conform to a Gaussian distribution, but aren't known to.

The t-test method for significance testing allows you to take a number of measurements of the machine's performance unaffected by the experimental protocol and, from that, build up a statistical profile against which to measure experimental results. This is what you do when theory can't help you. The baseline, however, has degrees of freedom dictated by how many baseline trials you do.
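
To make that concrete, here's a rough Python sketch of the procedure: an empirically measured baseline set against a batch of experimental runs with an ordinary two-sample t-test. All the numbers are invented; only the shape of the comparison matters.

[code]
# Rough sketch: test experimental runs against an empirically measured
# baseline with a two-sample (Welch's) t-test. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Calibration runs of the machine left alone: the empirical baseline profile.
baseline = rng.normal(loc=100.0, scale=2.0, size=22)

# Runs taken under the experimental protocol.
experimental = rng.normal(loc=100.5, scale=2.0, size=22)

# Does the experimental mean differ from the measured baseline mean?
t_stat, p_value = stats.ttest_ind(experimental, baseline, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
[/code]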

Let's say you have a baseball pitching machine, and someone tells you he can improve it with his add-on gadget. You decide that the measure of its performance will be how well it distributes its pitches throughout the strike zone without pitching outside it. A pitch down the middle of the strike zone every time becomes too easy to hit. And a pitch outside the strike zone doesn't count as a strike. So the closer to the edges of the strike zone, the "better" the pitch.

Now no statistical theory predicts how your machine will fare. So you rightly say that you'll run the machine without the gadget and then again with it, and see if it improves performance according to your metric. If you do 10 trial runs, you have a distribution with a certain mean and standard deviation. And you know that this would only approximate the picture you'd get if you did 10,000 trials, or an infinite number. You know intuitively that the more trials you do, the more closely your empirically-determined baseline would approach a theoretically pure baseline.
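
If code is clearer than prose, here's a toy simulation of that intuition, assuming some fixed "true" performance for the machine (all numbers invented):

[code]
# Sketch: the more baseline trials you run, the tighter your estimate of the
# machine's true mean performance. The "true" values here are invented.
import numpy as np

rng = np.random.default_rng(1)
true_mean, true_sd = 50.0, 5.0   # hypothetical true performance of the machine

for n in (10, 100, 10_000):
    trials = rng.normal(true_mean, true_sd, size=n)
    sem = trials.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    print(f"{n:>6} trials: mean = {trials.mean():6.2f}, std. error = {sem:.3f}")
[/code]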

That's where degrees of freedom factor in. When your vendor plugs in his gadget, the "test for significance" of its effect will be a computation of the probability that the baseline, as established in your 10 trial runs, can "stretch" to accommodate the results produced by the altered machine. But with only 10 trial runs you have only a crude understanding of what the machine's real baseline performance would be -- certainly not as much as you would know if you had done 10,000 trial runs. And that limits your ability to say "yea" or "nay" to whether your vendor has proven his case. We'd have to admit that wherever the theoretical perfection lay in your machine's baseline performance, it's less likely to lie as close to a 10-trial baseline as to a 10,000-trial baseline. Therefore the fuzzy region surrounding the crude baseline -- that would also probably accommodate the experimental data -- is bigger than it would be for a finely-tested baseline. So when the vendor says his 10 trials with his gadget produced results that were (a) within the strike zone more, and (b) more evenly distributed within the strike zone, a crude baseline would give you more justification to say, "Eh, well, I'm not really convinced you improved over what the machine could do by itself in the long run."
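
You can put one number on that "fuzzy region": the two-sided 95% critical value of the t distribution, which sets how many standard errors the baseline is allowed to "stretch," grows as the degrees of freedom shrink. A tiny sketch, nothing PEAR-specific, just the textbook values:

[code]
# Sketch: how the t critical value (the width of the "fuzzy region" in units
# of standard error) depends on how many baseline trials you ran.
from scipy import stats

for n_trials in (10, 100, 10_000):
    df = n_trials - 1
    crit = stats.t.ppf(0.975, df)   # two-sided 95% critical value
    print(f"{n_trials:>6} baseline trials -> t critical value = {crit:.3f}")
# 10 trials give about 2.26 standard errors of slack; 10,000 give about 1.96.
[/code]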

Now the sneaky thing is that those 10 baseline trials form a distribution that itself can be examined for statistical properties. This is what the physicist Stanley Jeffers did. He looked at the PEAR results and said, effectively, "Wait a minute! These came from 22 runs of an electronic noise circuit? These seem way too tightly clustered for that." And he verified that with his own statistical meta-analysis using both the baseline and the experimental data. The baseline runs are too correlated among themselves to be the product of actual calibration runs, and this can be known with a fairly convincing amount of statistical certainty.
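
For flavor, here's the kind of check that catches it. This is not Jeffers' actual analysis, just an illustration with invented numbers: compare the scatter among the calibration-run means with the scatter a genuinely random process ought to produce, using a chi-square test on the variance.

[code]
# Sketch only (not Jeffers' actual analysis, invented data): are the
# calibration-run means scattered as much as a truly random process predicts?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

expected_se = 0.05                          # scatter a run mean "should" show
run_means = rng.normal(0.5, 0.01, size=22)  # invented: far tighter than expected

k = len(run_means)
chi2 = (k - 1) * run_means.var(ddof=1) / expected_se**2

# Lower-tail p-value: the chance of seeing run means clustered this tightly
# (or tighter) if the calibration process really behaved as expected.
p_too_tight = stats.chi2.cdf(chi2, df=k - 1)
print(f"chi-square = {chi2:.2f}, P(this tight by chance) = {p_too_tight:.2e}")
[/code]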

Buddha has nothing to say about that. Jahn had only slightly more, which was, "Gee, I guess our machine operators are just that good."

What does this mean? Well, it means that the t-test will be working with an artificially shrunken "fuzzy region" around the baselines. Given only 22 calibration runs, the fuzzy region should have been larger, and therefore more able to "stretch" to accommodate the experimental data. Nothing wrong with the t-test itself, of course. But the old adage "garbage in, garbage out" applies. PEAR's results appeared significant only because the baseline they were compared against was less fuzzy than it should have been. The t-test, having been given an inappropriately "well-defined" baseline, naturally said "Why no, it's quite unlikely that the baseline performance accounts for this outlying data." It reasoned correctly from bad data.
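
One more sketch to make the garbage-in point concrete: the same hypothetical experimental summary, tested once against a baseline with a realistic amount of scatter and once against one that is artificially well-behaved. Every mean and standard deviation below is invented.

[code]
# Sketch: identical experimental data look unremarkable against a realistically
# noisy baseline but "significant" against one whose scatter is understated.
from scipy import stats

exp_mean, exp_sd, n = 101.0, 2.0, 22

for label, base_sd in [("realistic baseline scatter ", 2.0),
                       ("artificially tight baseline", 0.2)]:
    t, p = stats.ttest_ind_from_stats(exp_mean, exp_sd, n,
                                      100.0, base_sd, n,
                                      equal_var=False)
    print(f"{label}: t = {t:.2f}, p = {p:.3f}")
[/code]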
 
Telekinesis is real. I can use nothing but my voice and intention to turn my lights off and on, change my television, and perform other forms of manipulation. I do not move a muscle, other than vocal cords. Pretty cool, huh?
 
I have a 100% record as a Cat Whisperer. I simply look them in the eye and say, firmly but calmly, "Do whatever the hell you like" and they do!
 
I just concentrated my thoughts on making the sun drop below the line of the horizon ... and it did. I had to turn all the lights on.



This guy can manipulate clouds!



Well, not really. He’s apparently insane, but he THINKS he can manipulate clouds.
 

TBF, if some fruit loop chanted 'caloooood, dis-ap-pear. caloooood, dis-ap-pear' around me for a few minutes, my ass would vanish, too.
 
That's so inadequate a summary of the discussion as to indicate that you're either unable to comprehend the text, or deliberately lying. We're all very well trained to recognise the strawman fallacy round here, and one of the classic strawman techniques is to limit your discussion to a single source of counter-argument, misrepresent that source, then handwave away your own misrepresentation. Do you really think you're helping your case by doing that?

Dave
This is not a summary; apparently you haven't read the article, or you misunderstood it. The author combined counter-arguments coming from several sources, as he indicated, so this is not a single source. If you pride yourself as a scientist, you should demonstrate that I misrepresented the article instead of saying it without providing a single piece of evidence for your assessment.
 
The Princeton research was done in the early 80s, around about the time when mobile phones looked like this.
[qimg]https://www.mobilephonehistory.co.uk/lists/motorola_8000x.gif[/qimg]

Given the development in mobile phones you would expect wonderful developments in TK. There have been none. In fact there has been regression. The 80's model has never worked since. Even Princeton couldn't get it to work. No other model has been developed that works either.

All this can easily be found out if you use your phone.
It is a valid remark about the cellphones. However, I do not see how it relates to my post.
 
Someone else already linked the article yesterday, and I (and probably several others too) read it. You don't need to explain IEEE. Anyone who is even remotely involved in STEM fields is well familiar with who they are and what they do. I daresay most of us in the field are members too and read their publications.



No, you're asking your audience to accept the validity of PEAR research on something other than its actual merits. "Their research must be correct because it wouldn't be published in a peer-reviewed journal otherwise." Well, no, the research is incorrect for the reasons given, which you have not addressed. Specifically, you have been directed several times to Dr. Jeffers' critique of the very article you are now trying to cite.

Only the one article that someone else linked yesterday appeared in IEEE journals, not "several" as you erroneously report. The only other publication of PEAR research in a mainstream journal was in Foundations of Physics. All the rest of PEAR's research was published in Journal of Scientific Exploration, a decidedly non-mainstream journal that reports on scientific study of the paranormal. It was certainly never published in physics, neurology, or social science journals. Since those would have been logical fields to publish in, the absence of PEAR's findings in any of them is telling.

Publishing outside one's field is another pseudoscience technique. Jahn piggy-backed on the prestige of Princeton University to give his group undeserved credibility. It would be attractive to say that he tried to piggy-back on IEEE for the same reason, but that's not what happened here.

IEEE is a professional and technical organization. They generally do not publish the kind of science Jahn and his colleagues do. But because the specific article deals heavily with the design and operation of PEAR's equipment, it falls under IEEE interest even if the use to which he is putting it ordinarily would not. Contrary to your characterization, Jahn's paper in Proceedings of the IEEE was an invited paper, followed up four issues later by an invited critique from a noted psychologist. Invited papers are generally not reviewed for methodological rigor as a condition of publication, usually because, as summary or report papers, they do not describe a methodology. Here the purpose was to illustrate to readers what a bad methodology looks like.

The problems with PEAR's research are contained in the research itself. They are not mitigated by the prestige of who hosted them or who published them. If you are not willing to discuss the research you cited on its merits, then we're done here.
Actually I like your post; it is very informative. I wish all other posts were like yours. However, I didn't say that the Princeton ESP research should be accepted just because it was published in an IEEE magazine. The point I was making is that the researchers used valid experimentation techniques to conduct their observations; otherwise the editors would have rejected their data.

You called the aforementioned psychologist "noted", which is correct. Unfortunately, some members of the board accused me of choosing a weak opponent who, in their opinion, is a straw man. Perhaps, they will pay more attention to your post than the ones that I wrote.
 
Now hold on a minute. You don't get to tell your critics what the "right way" is to refute your claims. Your critics are correct to reject your attempts to script their side of the debate and quite justified in taking you to task personally for doing it.



Notwithstanding the above, I agree that you are responsible for addressing what has already been written as criticism of the sources you cite. Toward that end, I supplied you with links to previous discussions of PEAR claims here at ISF. You did not read them, as evidenced by your attempt simply to replay a decade-old debate as if it were somehow fresh and new.

You were further supplied -- several times -- with links to Dr. Jeffers' detailed criticism of Jahn's article in Proceedings of IEEE and asked to comment on it. You haven't done that, either. That's deeply troubling, since you started out by accusing these people of being incompetent. You seem unwilling to support that accusation.



With due respect to my colleague, I will accept Dr. Alcock as a prominent critic of PEAR. However, you would have been better off reading and addressing Dr. Jeffers first, because he goes into greater detail about the calibration issues that Dr. Alcock alludes to.



Your summary is greatly simplified and incomplete.



Did you actually read the passage in the book? Dr. Alcock went into great detail describing how it could happen. If you have to ask that question, you either didn't read or didn't understand Dr. Alcock's criticism.



You mean "cited." Along with "IEE" instead of iEEE, this doesn't seem to be your day for accurate writing.



That's a pretty unfair treatment of the Palmer summary. You should have read Dr. Palmer's actual commentary before concluding he was biased. John Palmer, PhD, was the principal investigator in the Parapsychology Laboratory at the University of Utrecht. In his summary of Jahn et al. he is generally complimentary about their methods, noting that they improved over their predecessors in several ways. He does note that there are methodology concerns in Jahn's research, which Alcock discusses in summary, but if you mean to insinuate that Palmer's goal was to discredit or undermine PEAR, then you are completely misrepresenting him.

What did Palmer and Alcock do? Well, first they noted that PEAR had miscoded some of the data contrary to PEAR's own method. In the PK- study, a miss should have been classified as in accordance with the "operator's" (subject's) desires. PEAR misclassified it as contrary, which was the pro-psychokinesis outcome in that run. When the data were properly classified according to PEAR's rules, the proposed PK effect was no longer significant.

Now that may have been an innocent error, the kind that would be caught by a reviewer eventually. But Jahn et al. generally didn't publish in rigorous journals. When PEAR's findings were finally set before a more mainstream audience in Proceedings of the IEEE, the error was discovered and corrected by other researchers. It's pretty disingenuous of you to tout the virtues of the review process in ensuring accuracy, but then accuse reviewers of bias as soon as they discover the errors reviewers are supposed to look for.

As for massaging, here's what really happened. Alcock, reporting Palmer, noted that all the significance in the data was accounted for by a single subject, "Operator 010." Operator 010's acumen at "affecting" the random-event generator was significantly higher than all the other subjects put together. Normally in this sort of research, a number of subjects are used in order to control for individual variation. The mean outcome is considered indicative of the population. But in this case it clearly was not. One subject was clearly outlying, and that one person's effect was responsible for all the purportedly significant variance. Homogeneity of the sample is not just hokum. It's something any good analysis must account for. Reporting the results as an aggregation of 22 subjects misleads the reader into thinking a certain amount of the 22 were able to produce the effect to a greater or lesser degree. In fact, minus Operator 010, the other 21 subjects were unable to produce an effect greater than chance. That's a clear case of anomalous data that should be further explored, not simply lumped together with data that otherwise cluster around a much stronger predictor. "Massaging" the data to exclude obvious anomalies is also a common technique. It's why process variables for control systems are often filtered -- you don't want an anomalous sensor reading producing an unwanted control signal.
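
If anyone wants to see how pooling can hide a single carrier of the effect, here's a toy version. The hit counts are invented and have nothing to do with PEAR's actual numbers; the point is only that an aggregate can look "significant" while 21 of 22 subjects sit exactly at chance.

[code]
# Toy illustration (invented numbers, not PEAR's data): aggregate significance
# carried entirely by one outlying subject.
tries_per_subject = 50_000

# 21 subjects who hit at exactly chance, plus one stand-in for "Operator 010."
hits = [25_000] * 21 + [26_500]

def z_score(total_hits, total_tries, p=0.5):
    """z of the observed hit count against a fair-coin null."""
    expected = total_tries * p
    sd = (total_tries * p * (1 - p)) ** 0.5
    return (total_hits - expected) / sd

print(f"all 22 pooled:       z = {z_score(sum(hits), 22 * tries_per_subject):.2f}")
print(f"without the outlier: z = {z_score(sum(hits[:-1]), 21 * tries_per_subject):.2f}")
# Pooled, the excess looks significant (z near 2.9); drop one subject and it vanishes.
[/code]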

But it gets worse. The PEAR team tried several different variations on the basic method. Notably, only the runs in which Operator 010 participated showed any statistical significance. When the effect follows a certain subject rather than a more reliable prediction, that subject should be the focus of particular attention. When the PEAR team's data were reviewed in more detail than their initially reported aggregates, a pattern clearly emerged that violated the presumption of an acceptable distribution of effect among the subjects.

This is almost certainly why the two other teams that tried to reproduce PEAR's findings were utterly unable to do so. They didn't have the magical Operator 010. But there is one curious exception to the case of Operator 010. Even Operator 010's ability to affect the machine largely disappeared when the method let the REG select which affective mode was to be used for that run. It seems that Operator 010 could only succeed when she was able to choose how her test run that day would instruct her to affect the machine's behavior.

These are major red flags, and both Alcock and Palmer note that Jahn and his colleagues didn't seem to apply some of the standard empirical controls to prevent such possibilities as data tampering by the subjects. Subjects were left alone, unmonitored, in the room with the REG and were put on their honor to perform their duties honestly. Now none of the critics outright accuses PEAR of hoaxing the data. There's no evidence the data were actually hoaxed. But because the proper empirical controls were not put into place, and because the details of their actual practical methods never made it into any of their publications, there is no basis to exclude subject tampering or fraud as a cause for the significance.

While you note correctly that a standard t-test was used to test for significance, you sidestep entirely what the real problem was in the analysis. The t-test is one of several analysis-of-variance techniques used to compare populations. Jahn et al. used an empirically-determined baseline as the basis for comparison. In effect, they compared what the subject-affected runs produced with what the REG was measured to produce if (presumably) unaffected. The t-test is appropriate, but the results are interpreted as if the subject-affected runs were being compared against theoretical chance. That depends on whether the REG baseline corresponds to chance.
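
The distinction is easier to see in code. A minimal sketch with invented numbers: a one-sample t-test of the "affected" runs against a theoretical chance mean, next to a two-sample test of the same runs against the machine's own measured baseline. Which of the two you actually performed matters when you interpret the result.

[code]
# Sketch (invented numbers): "against theoretical chance" and "against the
# machine's measured baseline" are two different tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
theoretical_mean = 100.0                         # what pure chance should give
affected_runs = rng.normal(100.4, 2.0, size=22)  # runs with a subject present
baseline_runs = rng.normal(100.0, 2.0, size=22)  # calibration runs, no subject

# One-sample test against the theoretical chance expectation.
t1, p1 = stats.ttest_1samp(affected_runs, theoretical_mean)

# Two-sample test against the empirically measured baseline.
t2, p2 = stats.ttest_ind(affected_runs, baseline_runs, equal_var=False)

print(f"vs. theory:            t = {t1:.2f}, p = {p1:.3f}")
print(f"vs. measured baseline: t = {t2:.2f}, p = {p2:.3f}")
[/code]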

To be sure, it does. Too well. This is why you need to read Dr. Jeffers and comment on that too. Alcock notes the suspiciously favorable calibration runs, but Jeffers goes into detail about what, statistically speaking, makes the reported calibration results suspicious. What's even more telling is what Jahn said when challenged on the calibration data that was just too good to be true: He speculated that the subjects must have unconsciously willed the machine to produce a good calibration run. That's a red flag so big it would have embarrassed even Khrushchev.

No, your dismissal of PEAR's critics is comically naive. You're approaching the problem as if science were a matter of straightforward analysis with standardized tools. That may be how you approach statistical analysis, and may be all that's required in your job, but we've already established that you don't really know anything about how science is actually done. You can't speak intelligently about experiment design or empirical controls or any of the parts of the scientific method that ensure correctness and ensure the integrity of the central inference in any study. As such you skip the forest fire to focus on one innocent tree. It's as if you tell us the terrible food we get at some restaurant can't be all bad because the chef used a well-known brand of kitchen knife to prepare it.



No.

The relevant section of Alcock's book can be read in less than 15 minutes, and Palmer's chapter on PEAR can be read in less than half an hour, and is adequately summarized in Alcock. You read Alcock and wrote a brief, dismissive response. No one is buying that you're too busy to adequately address your critics, or that your contribution to this forum is so burdensome. You want to put these topics out there for discussion and debate, but you suddenly "don't have time" to address them. Your behavior is more consistent with simply dumping material out there that you yourself haven't read, and hoping your critics will think they have to accept it and won't read it to discover how badly you've misunderstood and misrepresented it.



And this is very rude. Yesterday you told us that if your critics posted links to the topics they most wanted you to address, you would do so. Having not read the previous day's posts in their entirety, you were unaware that this had already been done before you made the offer. Instead of making good on your promise and dealing with the half dozen or so links your critics had already posted, you decided you were simply going to follow your own path and choose for them what you would talk about. And then you tell us you don't have time to even fully spell that out.

We have little choice but to conclude you were lying when you promised to address what your critics would supply to you, and that you plan to continue this dishonest and evasive approach.



You're trying to steer the debate in a direction you think you're most prepared to travel, irrespective of what the actual questions and comments are. This is a well-established pattern with you. When you get stuck you try to change the subject.

Your behavior is not consistent with someone trying to have an honest and thorough discussion. You simply declare you're too busy to behave politely and responsibly. If you have the time to present your side of the story, but no time to address responses, then your behavior is more consistent with wanting a pulpit, which you are using mostly to trumpet your own accomplishments.



Yet you keep spending time complaining about it instead of answering your critics, an exercise you say you lack the time to do. Stop trying to play the victim. We're well attuned to your childish attempts at social engineering. You are not being mistreated.

Personal attacks are against the Member Agreement. If you believe you have been personally attacked, report the post for moderation so that another party can properly judge the validity of that claim. Do not instead claim victimhood for rhetorical effect or to conjure up excuses to ignore your critics. If you are unwilling to submit your complaints to moderatorial judgment, I don't want to hear them -- ever.
You wrote a long post and somehow I feel obligated to respond to it, although I do not have time to cover all topics that you have presented.

The Palmer article is my next target. I had to start somewhere, so I started with the Alcock article because it is a summary of all methods of critique of the Princeton ESP research. Palmer covers only statistical aspects of the Princeton research, so his article was not my first target. Once I get to it, I will be happy to discuss statistical aspects with you.
 
You clearly need to learn how to read a scientific reference. What do you imagine the "(Jahn et al. 2000)" part of the quote you just reproduced to mean, and what relation might it possibly have to the footnote "Jahn, R., et al. 2000. Mind/Machine Interaction Consortium: PortREG replication experiments. Journal of Scientific Exploration 14(4): 499-555" at the bottom of the CSICOP page?

(And please don't pull the old "If it isn't on the WWW it doesn't exist" gambit.)

It might also be more honest to replace "reproduced the Princeton research" with "failed to reproduce[d] the results of the Princeton research" in your post.

Dave
Jahn is a member of the Princeton ESP research team. I was asking for the links to the articles of INDEPENDENT researchers who were unable to reproduce the Princeton research results.

You don't have to provide the links if you have none, which could be true in some cases. But please, please, provide the titles of their articles so I would be able to find them at the Columbia University library.

I am asking for the titles. Is it too much to ask for?
 
Well, there is an online 'retro pk' experiment that has been going on for more than 20 years… here are the results:

Total experiments: 389151
Number of subjects: 34595
Total tries: 398490624
Total hits: 199247513
Overall z: 0.2205 standard deviations

Source: http://www.fourmilab.ch/rpkp/experiments/summary/

So, the results are unimpressive… something you get just by chance… if my understanding of statistics is correct.
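
In fact you can check that z from the totals above, assuming each try is a fair coin:

[code]
# Sanity check of the overall z from the posted totals, assuming each "try"
# succeeds with probability 0.5 under the null.
tries = 398_490_624
hits = 199_247_513

expected = tries * 0.5
sd = (tries * 0.25) ** 0.5
z = (hits - expected) / sd
print(f"z = {z:.4f}")   # about 0.22, well within ordinary chance variation
[/code]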

The evidence for the existence of PK presented in this thread is not as impressive as some would wish it to be.
To you the results may look unimpressive, but to a mathematician they support the researchers' claim.
 
You wrote a long post and somehow I feel obligated to respond to it, although I do not have time to cover all topics that you have presented.
The Palmer article is my next target. I had to start somewhere, so I started with the Alcock article because it is a summary of all methods of critique of the Princeton ESP research. Palmer covers only statistical aspects of the Princeton research, so his article was not my first target. Once I get to it, I will be happy to discuss statistical aspects with you.

....I've heard this before...it's like it's immortal...
 
