Is the Telekinesis Real?

Basically, it says that the Princeton research group gave incorrect interpretation of their research results because their results are below the significance level, although the group claims the opposite.

How could this happen? The researchers used a standard version of two-sided t-test to draw the conclusion, while the critic (he is not the author of the article, but the author sited his work) transformed the results to fit, as he says, the same t-test. The newly interpreted test shows the results that are below the significance level.

That's so inadequate a summary of the discussion as to indicate that you're either unable to comprehend the text, or deliberately lying. We're all very well trained to recognise the strawman fallacy round here, and one of the classic strawman techniques is to limit your discussion to a single source of counter-argument, misrepresent that source, then handwave away your own misrepresentation. Do you really think you're helping your case by doing that?

Dave
 
The right way to discuss the Princeton research is to read the articles written by its critics. Here is the link to the article written by one the most outspoken critics of the research

http://www.nap.edu/read/778/chapter/7#640

Basically, it says that the Princeton research group gave incorrect interpretation of their research results because their results are below the significance level, although the group claims the opposite.

How could this happen? The researchers used a standard version of two-sided t-test to draw the conclusion, while the critic (he is not the author of the article, but the author sited his work) transformed the results to fit, as he says, the same t-test. The newly interpreted test shows the results that are below the significance level.

The way I see it, the critic “massaged” the data to fit it into his version of truth, as Giuliani put it while defending his client, Trump. This technique might work in the world of politics, but it is not acceptable in the world of science.

This is part1 of the article, the next one deals with the randomization process used by the research group. Unfortunately, I do not have time to discuss it today (it took me more than an hour to read the article and prepare response to it; today I do not have time to respond to my opponents’ posts, but I will do it tomorrow).

I am trying to be thorough and push this discussion in the right direction rather than responding to useless personal attacks. (Personal stuff doesn’t bother me at all, but I see it as a waste of time).
The Princeton research was done in the early 80s, around about the time when mobile phones looked like this.
[Image: Motorola 8000X mobile phone]


Given the developments in mobile phones, you would expect wonderful developments in TK. There have been none. In fact, there has been regression. The '80s model has never worked since. Even Princeton couldn't get it to work. No other model that works has been developed either.

All this can easily be found out if you use your phone.
 
IEEE (American Society of Electrical Engineers) had published several articles written by the Princeton ESP scientists, I can give a link to at least one of their articles.

Someone else already linked the article yesterday, and I (and probably several others too) read it. You don't need to explain IEEE. Anyone who is even remotely involved in STEM fields is well familiar with who they are and what they do. I daresay most of us in the field are members too and read their publications.

IEEE follow the highest standards of publications on a par with the Physical Review standards, all submitted articles are subjected to peer reviews. This tells something about the professional level of the Princeton research team, doesn’t it?

No, you're asking your audience to accept the validity of PEAR research on something other than its actual merits. "Their research must be correct because it wouldn't be published in a peer-reviewed journal otherwise." Well, no, the research is incorrect for the reasons given, which you have not addressed. Specifically, you have been directed several times to Dr. Jeffers' critique of the very article you are now trying to cite.

Only the one article that someone else linked yesterday appeared in IEEE journals, not "several" as you erroneously report. The only other publication of PEAR research in a mainstream journal was in Foundations of Physics. All the rest of PEAR's research was published in Journal of Scientific Exploration, a decidedly non-mainstream journal that reports on scientific study of the paranormal. It was certainly never published in physics, neurology, or social science journals. Since those would have been logical fields to publish in, the absence of PEAR's findings in any of them is telling.

Publishing outside one's field is another pseudoscience technique. Jahn piggy-backed on the prestige of Princeton University to give his group undeserved credibility. It would be attractive to say that he tried to piggy-back on IEEE for the same reason, but that's not what happened here.

IEEE is a professional and technical organization. They generally do not publish the kind of science Jahn and his colleagues do. But because the specific article deals heavily with the design and operation of PEAR's equipment, it falls under IEEE interest even if the use to which he is putting it ordinarily would not. Contrary to your characterization, Jahn's paper in Proceedings of the IEEE was an invited paper, followed up four issues later by an invited critique from a noted psychologist. Invited papers are generally not reviewed for methodological rigor as a condition of publication, usually because, as summary or report papers, they do not describe a methodology. Here the purpose was to illustrate to readers what a bad methodology looks like.

The problems with PEAR's research are contained in the research itself. They are not mitigated by the prestige of who hosted them or who published them. If you are not willing to discuss the research you cited on its merits, then we're done here.
 
Just for clarity, are you citing the American Society of Electrical Engineers (ASEE), or the American Institute of Electrical Engineers (AIEE), or the Institute of Electronic and Electrical Engineers (IEEE), or what?

Or the IEE (your last reference), a no-longer separately existent British body, the Institution of Electrical Engineers (I was a runner-up in their Faraday Award competition many, many years ago).

You'd think he'd know what institution he's a member of...
 
The right way to discuss the Princeton research is to read the articles written by its critics. Here is the link to the article written by one the most outspoken critics of the research

Oh good, it's the part of the discussion where Buddha tells us the correct arguments to use so he can use his pre-determined talking points.
 
Oh good, it's the part of the discussion where Buddha tells us the correct arguments to use so he can use his pre-determined talking points.

Yep, we're on schedule for a declaration of victory this weekend, advance notification of the next steaming pile of a subject, and a new separate thread next weekend.

I wonder what it will be, maybe mermen since we know that neither believability nor previous discussion are grounds for exclusion.
 
The right way to discuss the Princeton research is...

Now hold on a minute. You don't get to tell your critics what the "right way" is to refute your claims. Your critics are correct to reject your attempts to script their side of the debate and quite justified in taking you to task personally for doing it.

...is to read the articles written by its critics.

Notwithstanding the above, I agree that you are responsible for addressing what has already been written as criticism of the sources you cite. Toward that end, I supplied you with links to previous discussions of PEAR claims here at ISF. You did not read them, as evidenced by your attempt simply to replay a decade-old debate as if it were somehow fresh and new.

You were further supplied -- several times -- with links to Dr. Jeffers' detailed criticism of Jahn's article in Proceedings of IEEE and asked to comment on it. You haven't done that, either. That's deeply troubling, since you started out by accusing these people of being incompetent. You seem unwilling to support that accusation.

Here is the link to the article written by one the most outspoken critics of the research...

With due respect to my colleague, I will accept Dr. Alcock as a prominent critic of PEAR. However, you would have been better off reading and addressing Dr. Jeffers first, because he goes into greater detail about the calibration issues that Dr. Alcock alludes to.

Basically, it says that the Princeton research group gave incorrect interpretation of their research results because their results are below the significance level, although the group claims the opposite.

Your summary is greatly simplified and incomplete.

How could this happen?

Did you actually read the passage in the book? Dr. Alcock went into great detail describing how it could happen. If you have to ask that question, you either didn't read or didn't understand Dr. Alcock's criticism.

The researchers used a standard version of two-sided t-test to draw the conclusion, while the critic (he is not the author of the article, but the author sited his work)...

You mean "cited." Along with "IEE" instead of "IEEE", this doesn't seem to be your day for accurate writing.

[Palmer] transformed the results to fit, as he says, the same t-test. The newly interpreted test shows the results that are below the significance level.

The way I see it, the critic “massaged” the data to fit it into his version of truth...

That's a pretty unfair treatment of the Palmer summary. You should have read Dr. Palmer's actual commentary before concluding he was biased. John Palmer, PhD, was the principal investigator in the Parapsychology Laboratory at the University of Utrecht. In his summary of Jahn et al. he is generally complimentary about their methods, noting that they improved over their predecessors in several ways. He does note that there are methodology concerns in Jahn's research, which Alcock discusses in summary, but if you mean to insinuate that Palmer's goal was to discredit or undermine PEAR, then you are completely misrepresenting his work.

What did Palmer and Alcock do? Well, first they noted that PEAR had miscoded some of the data contrary to PEAR's own method. In the PK- study, a miss should have been classified as in accordance with the "operator's" (subject's) desires. PEAR misclassified it as contrary, which was the pro-psychokinesis outcome in that run. When the data were properly classified according to PEAR's rules, the proposed PK effect was no longer significant.

Now that may have been an innocent error, the kind that would be caught by a reviewer eventually. But Jahn et al. generally didn't publish in rigorous journals. When PEAR's findings were finally set before a more mainstream audience in Proceedings of the IEEE, the error was discovered and corrected by other researchers. It's pretty disingenuous of you to tout the virtues of the review process in ensuring accuracy, but then accuse reviewers of bias as soon as they discover the errors reviewers are supposed to look for.

As for massaging, here's what really happened. Alcock, reporting Palmer, noted that all the significance in the data was accounted for by a single subject, "Operator 010." Operator 010's acumen at "affecting" the random-event generator was significantly higher than that of all the other subjects put together.

Normally in this sort of research, a number of subjects are used in order to control for individual variation. The mean outcome is considered indicative of the population. But in this case it clearly was not. One subject was a clear outlier, and that one person's effect was responsible for all the purportedly significant variance. Homogeneity of the sample is not just hokum; it's something any good analysis must account for. Reporting the results as an aggregation of 22 subjects misleads the reader into thinking that a certain number of the 22 were able to produce the effect to a greater or lesser degree. In fact, minus Operator 010, the other 21 subjects were unable to produce an effect greater than chance.

That's a clear case of anomalous data that should be further explored, not simply lumped together with data that otherwise cluster around a much stronger predictor. "Massaging" the data to exclude obvious anomalies is also a common technique. It's why process variables for control systems are often filtered -- you don't want an anomalous sensor reading producing an unwanted control signal.
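The single-outlier problem is easy to demonstrate with a toy calculation. All numbers below are invented for illustration -- this is not PEAR's data, and the "Operator 010" stand-in is hypothetical: pool 22 subjects' binary-trial hit counts, 21 of them at chance and one well above it, and compute the pooled z-score against theoretical chance.

```python
import math

N_PER_SUBJECT = 10_000  # hypothetical: each subject runs 10,000 binary trials
P_CHANCE = 0.5

# Hypothetical hit counts: 21 subjects scattered around chance (5,000),
# plus one outlier standing in for "Operator 010".
hits = [5003, 4987, 5012, 4995, 5008, 4991, 5005, 4998, 5002, 4989, 5010,
        4996, 5007, 4993, 5001, 4999, 5004, 4992, 5006, 4994, 5008,
        5600]  # <- the single outlying subject

def pooled_z(counts):
    """z-score of total pooled hits against theoretical chance (p = 0.5)."""
    n = len(counts) * N_PER_SUBJECT
    excess = sum(counts) - n * P_CHANCE
    return excess / math.sqrt(n * P_CHANCE * (1 - P_CHANCE))

print(f"z, all 22 subjects:     {pooled_z(hits):+.2f}")       # +2.56 -- "significant"
print(f"z, without the outlier: {pooled_z(hits[:-1]):+.2f}")  # +0.00 -- pure chance
```

The aggregate clears the conventional 1.96 threshold, yet every bit of that significance comes from one subject. That is exactly why heterogeneity checks belong in any such analysis before pooling.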

But it gets worse. The PEAR team tried several different variations on the basic method. Notably, only the runs in which Operator 010 participated showed any statistical significance. When the effect follows a certain subject rather than a more reliable prediction, that subject should be the focus of particular attention. When the PEAR team's data were reviewed in more detail than their initially reported aggregates, a pattern clearly emerged that violated the presumption of an acceptable distribution of effect among the subjects.

This is almost certainly why the two other teams that tried to reproduce PEAR's findings were utterly unable to do so. They didn't have the magical Operator 010. But there is one curious exception to the case of Operator 010. Even Operator 010's ability to affect the machine largely disappeared when the method let the REG select which affective mode was to be used for that run. It seems that Operator 010 could only succeed when she was able to choose how her test run that day would instruct her to affect the machine's behavior.

These are major red flags, and both Alcock and Palmer note that Jahn and his colleagues didn't seem to apply some of the standard empirical controls to prevent such possibilities as data tampering by the subjects. Subjects were left alone, unmonitored, in the room with the REG and were put on their honor to perform their duties honestly. Now none of the critics outright accuses PEAR of hoaxing the data. There's no evidence the data were actually hoaxed. But because the proper empirical controls were not put into place, and because the details of their actual practical methods never made it into any of their publications, there is no basis to exclude subject tampering or fraud as a cause for the significance.

While you note correctly that a standard t-test was used to test for significance, you sidestep entirely what the real problem was in the analysis. The t-test is one of several analysis-of-variance techniques used to compare populations. Jahn et al. used an empirically determined baseline as the basis for comparison. In effect, they compared what the subject-affected runs produced with what the REG was measured to produce if (presumably) unaffected. The t-test is appropriate, but the results are interpreted as if the subject-affected runs were tested against theoretical chance. That interpretation depends on whether the REG baseline actually corresponds to chance.

To be sure, it does. Too well. This is why you need to read Dr. Jeffers and comment on that too. Alcock notes the suspiciously favorable calibration runs, but Jeffers goes into detail about what, statistically speaking, makes the reported calibration results suspicious. What's even more telling is what Jahn said when challenged on the calibration data that were just too good to be true: he speculated that the subjects must have unconsciously willed the machine to produce a good calibration run. That's a red flag so big it would have embarrassed even Khrushchev.

No, your dismissal of PEAR's critics is comically naive. You're approaching the problem as if science were a matter of straightforward analysis with standardized tools. That may be how you approach statistical analysis, and may be all that's required in your job, but we've already established that you don't really know anything about how science is actually done. You can't speak intelligently about experiment design or empirical controls or any of the parts of the scientific method that ensure correctness and ensure the integrity of the central inference in any study. As such you skip the forest fire to focus on one innocent tree. It's as if you tell us the terrible food we get at some restaurant can't be all bad because the chef used a well-known brand of kitchen knife to prepare it.

Unfortunately, I do not have time to discuss it today (it took me more than an hour to read the article and prepare response to it;

No.

The relevant section of Alcock's book can be read in less than 15 minutes, and Palmer's chapter on PEAR can be read in less than half an hour and is adequately summarized in Alcock. You read Alcock and wrote a brief, dismissive response. No one is buying that you're too busy to adequately address your critics, or that your contribution to this forum is so burdensome. You want to put these topics out there for discussion and debate, but you suddenly "don't have time" to address them. Your behavior is more consistent with simply dumping material out there that you yourself haven't read, and hoping your critics will think they have to accept it and won't read it to discover how badly you've misunderstood and misrepresented it.

...today I do not have time to respond to my opponents’ posts, but I will do it tomorrow.

And this is very rude. Yesterday you told us that if your critics posted links to the topics they most wanted you to address, you would do so. Having not read the previous day's posts in their entirety, you were unaware that this had already been done before you made the offer. Instead of making good on your promise and dealing with the half dozen or so links your critics had already posted, you decided you were simply going to follow your own path and choose for them what you would talk about. And then you tell us you don't have time to even fully spell that out.

We have little choice but to conclude you were lying when you promised to address what your critics would supply to you, and that you plan to continue this dishonest and evasive approach.

I am trying to be thorough and push this discussion in the right direction...

You're trying to steer the debate in a direction you think you're most prepared to travel, irrespective of what the actual questions and comments are. This is a well-established pattern with you. When you get stuck, you try to change the subject.

Your behavior is not consistent with someone trying to have an honest and thorough discussion. You simply declare you're too busy to behave politely and responsibly. If you have the time to present your side of the story, but no time to address responses, then your behavior is more consistent with wanting a pulpit, which you are using mostly to trumpet your own accomplishments.

...rather than responding to useless personal attacks. (Personal stuff doesn’t bother me at all, but I see it as a waste of time).

Yet you repeatedly spend time complaining about it instead of answering your critics, an exercise you say you lack time for. Stop trying to play the victim. We're well attuned to your childish attempts at social engineering. You are not being mistreated.

Personal attacks are against the Member Agreement. If you believe you have been personally attacked, report the article for moderation so that another party can properly judge the validity of that claim. Do not instead claim victimhood for rhetorical effect or to conjure up excuses to ignore your critics. If you are unwilling to submit your complaints to moderatorial judgment, I don't want to hear them -- ever.
 
Do you really think you're helping your case by doing that?

He's hoping we won't actually read it, and that we'll be satisfied responding to his hasty summary instead of what the authors actually say. It's hilarious that he's trying to write off Palmer as an enemy of paranormal beliefs and the research that attempts to validate them. You're right that it shows how very little he has actually read on the subject. Palmer concluded at the time of his writing that despite the errors in methodology exhibited by all the projects he reviewed, they had produced findings worth at least further study.

Alcock (p. 44) concluded similarly to Palmer. He allowed there was "a mystery here," but felt that PEAR's methodological errors were grave enough that he could see no good reason to conclude that the mystery had a paranormal cause.

Jeffers, reporting on PEAR's closure in Skeptical Inquirer, noted that he had contacted and received the cooperation of PEAR's principal researchers in preparing his commentary. This is not a move customarily associated with nefarious intent.

No. Buddha the Claimant, being largely ignorant of PEAR's actual critics and the facts of the criticism, is simply doing his best to poison the well and hope he won't get caught doing it.
 
In line with the previous posts:

If you want to deal with real objections, read this, for instance, and comment on its content.

I posted this more than 48 hours ago and you, "Buddha", persistently ignored it. It's short, simple and right to the point. And it says it all in just one paragraph. I selected that among other suitable papers and articles because it's simple and bite-sized, even for unwilling mouths.


If you can't deal with it, you can't deal with anything.
 
Outspoken according to who?

Thank you for holding Buddha the Claimant's feet to the fire over his straw-man approach. However in this case I feel he has accidental merit. "Outspoken" is perhaps the wrong word, or perhaps too strong. But Dr. James Alcock is not a bad source for the most effective criticism of PEAR. True, he relies heavily on John Palmer's analysis, but he accurately reports and summarizes Palmer. And Palmer's analysis is fair and even-handed and decidedly unbiased.

Buddha wants us to believe that Alcock, Palmer, Jeffers, and others are biased and dismissible simply because they dare to criticize Jahn et al. There is no animus whatsoever in any of these critics' writings. I have to conclude Buddha is unfamiliar with ordinary scholarship and unaware that criticism such as that produced by the authors I named is the lifeblood of science. It's formulated dispassionately and delivered with the aim of improving science as a whole. Buddha wants to believe that scientific discourse and debate follow the same trench-warfare tactics that he and others have employed here on the Internet. That's because -- I think -- he wants to believe that all this online fracas he's embroiled himself in is somehow just as valid as real science.
 
Buddha wants to believe that scientific discourse and debate follow the same trench-warfare tactics that he and others have employed here on the Internet. That's because -- I think -- he wants to believe that all this online fracas he's embroiled himself in is somehow just as valid as real science.

Exactly that. For them, Perry Mason was a scientist. The Internet has empowered everyone and in some that power went to their heads. They can surf the web so they think they rule its vast ocean, but in the end they show a methodical absence of method which in turn speaks of a limited education and leaves their lively egos as the only noteworthy characteristic of theirs. Jabba, yrreg, "Buddha"; so different and yet so alike.
 
Well, there is an online 'retro pk' experiment going on for more than 20 years… here are the results:

Total experiments: 389151
Number of subjects: 34595
Total tries: 398490624
Total hits: 199247513
Overall z: 0.2205 standard deviations

Source: http://www.fourmilab.ch/rpkp/experiments/summary/

So, the results are unimpressive… something you get just by chance… if my understanding of statistics is correct.
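Your understanding checks out. The quoted z follows directly from the totals above under the normal approximation for fair binary trials:

```python
import math

# Figures quoted from the fourmilab.ch retro-PK summary above
total_tries = 398_490_624
total_hits  = 199_247_513

# For fair binary trials (p = 0.5), the normal approximation gives
# z = (hits - n/2) / sqrt(n/4)
expected = total_tries / 2
z = (total_hits - expected) / math.sqrt(total_tries * 0.25)
print(f"excess hits: {total_hits - expected:.0f}")  # 2201
print(f"z = {z:.4f}")                               # 0.2205, matching the page
```

An excess of 2,201 hits over nearly 400 million tries, a fifth of a standard deviation, is well inside what pure chance produces.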

Evidence presented for existence of PK shown at this thread are not so impressive and good as some would wish them to be.
 
Since the links have already been provided and you haven't responded to them, your offer isn't looking very credible right now.

Dave
The links were provided, you are right about that. But they lead to this website. I was asking for the links to original articles written by the scientists who criticized the Princeton RSP program. Since no one has provided them, I am going to do it myself. See my latest posts.
 
Well, I don't know...

It seems that when the parapsychology community finds some new theory that could serve as an explanation of PK or some other anomalous phenomenon, they try to back it up with 'new' methods and research. Anyway, I look forward to seeing what results they will get.

When a large-scale replication of the PEAR experiment was attempted in several laboratories, the results were negative:

"If the claims are credible, it should be possible for other groups to replicate them. To their credit, the PEAR group did enlist two other groups, both based at German universities (Jahn et al. 2000) to engage in a triple effort at replication. These attempts failed to reproduce the claimed effects. Even the PEAR group was unable to reproduce a credible effect.", source: https://www.csicop.org/si/show/pear_proposition_fact_or_fallacy

I just wonder whether random number generators are truly random, or whether they are prone to producing results that observers could interpret as anomalies.
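On that question: even a perfectly fair generator produces nominally "significant" runs at the expected false-positive rate -- roughly 5% of honest chance experiments exceed |z| > 1.96. A quick sketch using Python's own PRNG (a software stand-in here, not a hardware REG):

```python
import math
import random

random.seed(1)
N_EXPERIMENTS = 1000  # number of independent chance "experiments"
TRIALS = 1000         # fair binary trials per experiment

def z_score(hits, n, p=0.5):
    """Normal-approximation z for a binomial count of hits out of n trials."""
    return (hits - n * p) / math.sqrt(n * p * (1 - p))

# Count how many purely-chance experiments clear the 5% significance bar.
significant = sum(
    1
    for _ in range(N_EXPERIMENTS)
    if abs(z_score(sum(random.getrandbits(1) for _ in range(TRIALS)), TRIALS)) > 1.96
)

print(f"{significant} of {N_EXPERIMENTS} chance runs look 'significant'")
```

So a scattering of apparent anomalies is exactly what a well-behaved generator should produce; the interesting question is whether anomalies appear more often than this base rate, and whether they follow the subject or the machine.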
I would like to see the articles written by the scientists who reproduced the Princeton research. So far I have found none, but you might be right about that. Anyway, I will respond to their articles if you provide the links.
 
Sorry........you people really expect Buddha to return to this thread?



I don’t. He’s already scampered off, tail between his legs, declaring victory.

At least Caligula had an actual army when he allegedly ordered it to stab the waves in his war on the sea.
 
I would like to see the articles written by the scientists who reproduced the Princeton research. So far I have found none, but you might be right about that. Anyway, I will respond to their articles if you provide the links.

You clearly need to learn how to read a scientific reference. What do you imagine the "(Jahn et al. 2000)" part of the quote you just reproduced means, and what relation might it possibly have to the footnote "Jahn, R., et al. 2000. Mind/Machine Interaction Consortium: PortREG replication experiments. Journal of Scientific Exploration 14(4): 499-555" at the bottom of the CSICOP page?

(And please don't pull the old "If it isn't on the WWW it doesn't exist" gambit.)

It might also be more honest to replace "reproduced the Princeton research" with "failed to reproduce the results of the Princeton research" in your post.

Dave
 
I was asking for the links to original articles written by the scientists who criticized the Princeton RSP program. Since no one has provided them...

Links to Dr. Jeffers' original articles were posted several times, even before you asked for them. But you admit you "don't have time" to read and respond to them, so now you're just making excuses for your delinquent performance.
 
I would like to see the articles written by the scientists who reproduced the Princeton research.

Their attempts were made at the behest of PEAR, and the results were published alongside PEAR's own research. You don't know your own sources. Although, technically, what you're asking for doesn't exist: no scientists were able to reproduce PEAR's findings. They attempted and failed, just as PEAR itself attempted and failed without the superlative talent of Operator 010.

Anyway, I will respond to their articles if you provide the links.

No, you clearly won't. You keep telling us you "don't have time." At this point we have to conclude that this promise is a deliberate lie.
 
