Besides the fact that this is totally irrelevant to the discussion, the only hit that Google is returning for that quote is your post.
Misquote or made up?
My vote is "made up."
Besides the fact that this is totally irrelevant to the discussion, the only hit that Google is returning for that quote is your post.
Misquote or made up?
As an aside, in the US, we normally submit a resume to a potential employer and a CV to a client. Double-check what your acronyms represent.
I consider it proof positive he is lying about those qualifications.
He has made it clear that he does not understand the need to have baseline tests in assessing a system.
Besides the fact that this is totally irrelevant to the discussion, the only hit that Google is returning for that quote is your post.
Misquote or made up?
Pfft. I can just think 'daughter, do something that will terrify your mother' and voilà!
I just concentrated my thought in making the sun drop below the line of the horizon ... and it did. I had to turn all the lights on.
This guy can manipulate clouds!
Well, not really. He’s apparently insane, but he THINKS he can manipulate clouds.
This is not a summary, apparently you haven't read the article or you misunderstood it. The author combined counter-arguments coming from several sources, as he indicated, so this is not a single source. If you pride yourself as a scientist, you should demonstrate that I misrepresented the article instead of saying it without providing a single proof of your assessment.

That's so inadequate a summary of the discussion as to indicate that you're either unable to comprehend the text, or deliberately lying. We're all very well trained to recognise the strawman fallacy round here, and one of the classic strawman techniques is to limit your discussion to a single source of counter-argument, misrepresent that source, then handwave away your own misrepresentation. Do you really think you're helping your case by doing that?
Dave
It is a valid remark about the cellphones. However, I do not see how it relates to my post.

The Princeton research was done in the early 80s, around the time when mobile phones looked like this.
[qimg]https://www.mobilephonehistory.co.uk/lists/motorola_8000x.gif[/qimg]
Given the development in mobile phones since then, you would expect comparably wonderful developments in TK (telekinesis). There have been none. In fact, there has been regression: the '80s model has never worked since, and even Princeton couldn't get it to work. No other working model has been developed either.
All this can easily be found out if you use your phone.
Actually I like your post, it is very informative, I wish all other posts were like yours. However, I didn't say that the Princeton ESP research should be accepted just because it was published in an IEEE magazine. The point I was making is that the researchers used valid experimentation techniques to conduct their observations, otherwise the editors would have rejected their data.

Someone else already linked the article yesterday, and I (and probably several others too) read it. You don't need to explain IEEE. Anyone who is even remotely involved in STEM fields is well familiar with who they are and what they do. I daresay most of us in the field are members too and read their publications.
No, you're asking your audience to accept the validity of PEAR research on something other than its actual merits. "Their research must be correct because it wouldn't be published in a peer-reviewed journal otherwise." Well, no, the research is incorrect for the reasons given, which you have not addressed. Specifically, you have been directed several times to Dr. Jeffers' critique of the very article you are now trying to cite.
Only the one article that someone else linked yesterday appeared in IEEE journals, not "several" as you erroneously report. The only other publication of PEAR research in a mainstream journal was in Foundations of Physics. All the rest of PEAR's research was published in Journal of Scientific Exploration, a decidedly non-mainstream journal that reports on scientific study of the paranormal. It was certainly never published in physics, neurology, or social science journals. Since those would have been logical fields to publish in, the absence of PEAR's findings in any of them is telling.
Publishing outside one's field is another pseudoscience technique. Jahn piggy-backed on the prestige of Princeton University to give his group undeserved credibility. It would be attractive to say that he tried to piggy-back on IEEE for the same reason, but that's not what happened here.
IEEE is a professional and technical organization. They generally do not publish the kind of science Jahn and his colleagues do. But because the specific article deals heavily with the design and operation of PEAR's equipment, it falls under IEEE interest even if the use to which he is putting it ordinarily would not. Contrary to your characterization, Jahn's paper in Proceedings of the IEEE was an invited paper, followed up four issues later by an invited critique from a noted psychologist. Invited papers are generally not reviewed for methodological rigor as a condition of publication, usually because, as summary or report papers, they do not describe a methodology. Here the purpose was to illustrate to readers what a bad methodology looks like.
The problems with PEAR's research are contained in the research itself. They are not mitigated by the prestige of who hosted them or who published them. If you are not willing to discuss the research you cited on its merits, then we're done here.
You wrote a long post and somehow I feel obligated to respond to it, although I do not have time to cover all topics that you have presented.

Now hold on a minute. You don't get to tell your critics what the "right way" is to refute your claims. Your critics are correct to reject your attempts to script their side of the debate and quite justified in taking you to task personally for doing it.
Notwithstanding the above, I agree that you are responsible for addressing what has already been written as criticism of the sources you cite. Toward that end, I supplied you with links to previous discussions of PEAR claims here at ISF. You did not read them, as evidenced by your attempt simply to replay a decade-old debate as if it were somehow fresh and new.
You were further supplied -- several times -- with links to Dr. Jeffers' detailed criticism of Jahn's article in Proceedings of the IEEE and asked to comment on it. You haven't done that, either. That's deeply troubling, since you started out by accusing these people of being incompetent. You seem unwilling to support that accusation.
With due respect to my colleague, I will accept Dr. Alcock as a prominent critic of PEAR. However, you would have been better off reading and addressing Dr. Jeffers first, because he goes into greater detail about the calibration issues that Dr. Alcock alludes to.
Your summary is greatly simplified and incomplete.
Did you actually read the passage in the book? Dr. Alcock went into great detail describing how it could happen. If you have to ask that question, you either didn't read or didn't understand Dr. Alcock's criticism.
You mean "cited." Along with "IEE" instead of iEEE, this doesn't seem to be your day for accurate writing.
That's a pretty unfair treatment of the Palmer summary. You should have read Dr. Palmer's actual commentary before concluding he was biased. John Palmer, PhD, was the principal investigator in the Parapsychology Laboratory at the University of Utrecht. In his summary of Jahn et al. he is generally complimentary about their methods, noting that Jahn improved over his predecessors in several ways. He does note that there are methodology concerns in Jahn's research, which Alcock discusses in summary, but if you mean to insinuate that Palmer's goal was to discredit or undermine PEAR, then you are completely misrepresenting his commentary.
What did Palmer and Alcock do? Well, first they noted that PEAR had miscoded some of the data contrary to PEAR's own method. In the PK- study, a miss should have been classified as in accordance with the "operator's" (subject's) desires. PEAR misclassified it as contrary, which was the pro-psychokinesis outcome in that run. When the data were properly classified according to PEAR's rules, the proposed PK effect was no longer significant.
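To make the coding issue concrete, here is a toy sketch in Python (my own illustration with invented numbers, not PEAR's actual data or pipeline). Scoring each deviation relative to the stated intention makes a simple machine drift cancel out across PK+ and PK- runs; scoring every deviation as if the intention were always "high" lets the same drift masquerade as an effect:

[code]
# Toy illustration (all numbers invented) of why the coding rule matters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = 100.0                                # theoretical REG mean per trial
scores = rng.normal(100.2, 7.07, size=20_000)   # runs from a slightly drifting machine
intent = rng.choice([+1, -1], size=20_000)      # +1 = PK+ run, -1 = PK- run

# Correct coding: a PK- run succeeds when the score is *below* baseline,
# so its deviation is sign-flipped before pooling. The drift cancels.
correct = (scores - baseline) * intent
# Miscoding: every deviation scored as if the intention were PK+.
miscoded = scores - baseline

print(stats.ttest_1samp(correct, 0.0))    # consistent with chance
print(stats.ttest_1samp(miscoded, 0.0))   # drift appears as a spurious "effect"
[/code]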
Now that may have been an innocent error, the kind that would be caught by a reviewer eventually. But Jahn et al. generally didn't publish in rigorous journals. When PEAR's findings were finally set before a more mainstream audience in Proceedings of the IEEE, the error was discovered and corrected by other researchers. It's pretty disingenuous of you to tout the virtues of the review process in ensuring accuracy, but then accuse reviewers of bias as soon as they discover the errors reviewers are supposed to look for.
As for massaging, here's what really happened. Alcock, reporting Palmer, noted that all the significance in the data was accounted for by a single subject, "Operator 010." Operator 010's acumen at "affecting" the random-event generator was significantly higher than that of all the other subjects put together.

Normally in this sort of research, a number of subjects are used in order to control for individual variation. The mean outcome is considered indicative of the population. But in this case it clearly was not. One subject was clearly outlying, and that one person's effect was responsible for all the purportedly significant variance.

Homogeneity of the sample is not just hokum. It's something any good analysis must account for. Reporting the results as an aggregation of 22 subjects misleads the reader into thinking a certain number of the 22 were able to produce the effect to a greater or lesser degree. In fact, minus Operator 010, the other 21 subjects were unable to produce an effect greater than chance.

That's a clear case of anomalous data that should be further explored, not simply lumped together with data that otherwise cluster around a much stronger predictor. "Massaging" the data to exclude obvious anomalies is also a common and legitimate technique. It's why process variables for control systems are often filtered -- you don't want an anomalous sensor reading producing an unwanted control signal.
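The arithmetic here is easy to demonstrate. A minimal simulation (Python, with a made-up effect size for the outlier) shows how one subject can carry the pooled z-score while the other 21 sit exactly at chance:

[code]
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000                                       # binary trials per subject
hits = [rng.binomial(trials, 0.5) for _ in range(21)]  # 21 chance-level subjects
hits.append(rng.binomial(trials, 0.52))                # one outlier, a la "Operator 010"

def pooled_z(hit_counts, n_per_subject):
    """z-score of the pooled hit count under the binomial normal approximation."""
    n = len(hit_counts) * n_per_subject
    k = sum(hit_counts)
    return (k - n / 2) / (n / 4) ** 0.5

print("all 22 pooled:   z =", round(pooled_z(hits, trials), 2))      # inflated by the outlier
print("outlier removed: z =", round(pooled_z(hits[:-1], trials), 2)) # back to chance
[/code]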
But it gets worse. The PEAR team tried several different variations on the basic method. Notably, only the runs in which Operator 010 participated showed any statistical significance. When the effect follows a certain subject rather than a more reliable prediction, that subject should be the focus of particular attention. When the PEAR team's data were reviewed in more detail than their initially reported aggregates, a pattern clearly emerged that violated the presumption of an acceptable distribution of effect among the subjects.
This is almost certainly why the two other teams that tried to reproduce PEAR's findings were utterly unable to do so. They didn't have the magical Operator 010. But there is one curious exception in the case of Operator 010. Even her ability to affect the machine largely disappeared when the method let the REG select which affective mode was to be used for that run. It seems that Operator 010 could succeed only when she herself got to choose which way she would try to affect the machine's behavior on a given run.
These are major red flags, and both Alcock and Palmer note that Jahn and his colleagues didn't seem to apply some of the standard empirical controls to prevent such possibilities as data tampering by the subjects. Subjects were left alone, unmonitored, in the room with the REG and were put on their honor to perform their duties honestly. Now none of the critics outright accuses PEAR of hoaxing the data. There's no evidence the data were actually hoaxed. But because the proper empirical controls were not put into place, and because the details of their actual practical methods never made it into any of their publications, there is no basis to exclude subject tampering or fraud as a cause for the significance.
While you note correctly that a standard t-test was used to test for significance, you sidestep entirely what the real problem was in the analysis. The t-test is one of several analysis-of-variance techniques used to compare populations. Jahn et al. used an empirically determined baseline as the basis for comparison. In effect, they compared what the subject-affected runs produced with what the REG was measured to produce if (presumably) unaffected. The t-test is appropriate, but the results are interpreted as if the subject-affected runs were compared against theoretical chance. That depends on whether the REG baseline actually corresponds to chance.
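The distinction is easy to see in code. A minimal sketch (Python; the run distributions are invented, and the trial parameters -- mean 100, sd ≈ 7.07 -- are just the standard binomial figures for a 200-bit trial):

[code]
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
THEORETICAL_MEAN = 100.0                            # 200 bits at p = 0.5
effort = rng.normal(100.05, 7.07, size=50_000)      # subject-"affected" runs (invented)
calibration = rng.normal(100.0, 7.07, size=50_000)  # unattended machine runs (invented)

# Two-sample test: effort runs vs. the *empirical* baseline
print(stats.ttest_ind(effort, calibration))

# One-sample test: effort runs vs. *theoretical* chance
print(stats.ttest_1samp(effort, THEORETICAL_MEAN))

# The two tests agree only insofar as the calibration mean really equals
# the theoretical mean; any bias in the baseline silently shifts the inference.
[/code]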
To be sure, it does. Too well. This is why you need to read Dr. Jeffers and comment on that too. Alcock notes the suspiciously favorable calibration runs, but Jeffers goes into detail about what, statistically speaking, makes the reported calibration results suspicious. What's even more telling is what Jahn said when challenged on the calibration data that was just too good to be true: he speculated that the subjects must have unconsciously willed the machine to produce a good calibration run. That's a red flag so big it would have embarrassed even Khrushchev.
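For readers who want the flavor of the "too good to be true" argument, here is a generic version of the check (my construction, not Jeffers' exact analysis): under chance, the scaled sample variance of calibration runs follows a chi-square distribution, so too little scatter is every bit as improbable as too much:

[code]
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
THEORETICAL_SD = 7.071                      # sd of a 200-bit binomial trial
calib = rng.normal(100.0, 5.0, size=200)    # hypothetical over-tidy calibration runs

n = len(calib)
chi2 = (n - 1) * np.var(calib, ddof=1) / THEORETICAL_SD**2
p_low = stats.chi2.cdf(chi2, df=n - 1)      # tiny p => suspiciously little scatter
print(f"chi2 = {chi2:.1f}, P(variance this low by chance) = {p_low:.3g}")
[/code]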
No, your dismissal of PEAR's critics is comically naive. You're approaching the problem as if science were a matter of straightforward analysis with standardized tools. That may be how you approach statistical analysis, and it may be all that's required in your job, but we've already established that you don't really know anything about how science is actually done. You can't speak intelligently about experiment design or empirical controls or any of the parts of the scientific method that ensure the correctness and integrity of the central inference in any study. As such you skip the forest fire to focus on one innocent tree. It's as if you told us the terrible food we get at some restaurant can't be all bad because the chef used a well-known brand of kitchen knife to prepare it.
No.
The relevant section of Alcock's book can be read in less than 15 minutes; Palmer's chapter on PEAR can be read in less than half an hour and is adequately summarized in Alcock. You read Alcock and wrote a brief, dismissive response. No one is buying that you're too busy to adequately address your critics, or that your contribution to this forum is so burdensome. You want to put these topics out there for discussion and debate, but you suddenly "don't have time" to address them. Your behavior is more consistent with simply dumping material out there that you yourself haven't read, and hoping your critics will think they have to accept it and won't read it to discover how badly you've misunderstood and misrepresented it.
And this is very rude. Yesterday you told us that if your critics posted links to the topics they most wanted you to address, you would do so. Having not read the previous day's posts in their entirety, you were unaware that this had already been done before you made the offer. Instead of making good on your promise and dealing with the half dozen or so links your critics had already posted, you decided you were simply going to follow your own path and choose for them what you would talk about. And then you tell us you don't have time to even fully spell that out.
We have little choice but to conclude you were lying when you promised to address what your critics would supply to you, and that you plan to continue this dishonest and evasive approach.
You're trying to steer the debate in a direction you think you're most prepared to travel, irrespective of what the actual questions and comments are. This is a well-established pattern with you. When you get stuck, you try to change the subject.
Your behavior is not consistent with someone trying to have an honest and thorough discussion. You simply declare you're too busy to behave politely and responsibly. If you have the time to present your side of the story, but no time to address responses, then your behavior is more consistent with wanting a pulpit, which you are using mostly to trumpet your own accomplishments.
Yet you keep spending time complaining about it instead of answering your critics, an exercise you say you lack time for. Stop trying to play the victim. We're well attuned to your childish attempts at social engineering. You are not being mistreated.
Personal attacks are against the Member Agreement. If you believe you have been personally attacked, report the post for moderation so that another party can properly judge the validity of that claim. Do not instead claim victimhood for rhetorical effect or to conjure up excuses to ignore your critics. If you are unwilling to submit your complaints to moderatorial judgment, I don't want to hear them -- ever.
Jahn is a member of the Princeton ESP research team. I was asking for the links to the articles of INDEPENDENT researchers who were unable to reproduce the Princeton research results.

You clearly need to learn how to read a scientific reference. What do you imagine the "(Jahn et al. 2000)" part of the quote you just reproduced to mean, and what relation might it possibly have to the footnote "Jahn, R., et al. 2000. Mind/Machine Interaction Consortium: PortREG replication experiments. Journal of Scientific Exploration 14(4): 499-555" at the bottom of the CSICOP page?
(And please don't pull the old "If it isn't on the WWW it doesn't exist" gambit.)
It might also be more honest to replace "reproduced the Princeton research" with "failed to reproduce[d] the results of the Princeton research" in your post.
Dave
To you the results may look unimpressive, but to a mathematician they support the researchers' claim.

Well, there is an online 'retro pk' experiment that has been running for more than 20 years… here are the results:
Total experiments: 389151
Number of subjects: 34595
Total tries: 398490624
Total hits: 199247513
Overall z: 0.2205 standard deviations
Source: http://www.fourmilab.ch/rpkp/experiments/summary/
So, the results are unimpressive… something you get just by chance… if my understanding of statistics is correct.
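As a sanity check on those totals (assuming the site uses the simple binomial normal approximation, z = (hits − n/2) / sqrt(n/4)):

[code]
n = 398_490_624      # total tries, from the fourmilab summary
hits = 199_247_513   # total hits

z = (hits - n / 2) / (n / 4) ** 0.5
print(f"z = {z:.4f}")   # prints z = 0.2205, matching the reported figure
[/code]

That's an excess of about 2,200 hits in nearly 400 million tries -- roughly 0.22 standard deviations, exactly the sort of figure pure chance produces.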
The evidence presented in this thread for the existence of PK is not as impressive as some would wish it to be.
You wrote a long post and somehow I feel obligated to respond to it, although I do not have time to cover all topics that you have presented.
The Palmer article is my next target. I had to start somewhere, so I started with the Alcock article because it is a summary of all methods of critique of the Princeton ESP research. Palmer covers only statistical aspects of the Princeton research, so his article was not my first target. Once I get to it, I will be happy to discuss statistical aspects with you.