Moderated: Is the Telekinesis Real?

Why would you argue that it doesn't exist because it's not mentioned in one summary web page? Is this how you did your master's degree research?

I don't think he ever did any master's degree research.

At this point I half expect him to start talking about his girlfriend who lives in Canada (NSFW).

It is true, I could have gotten these books from the Columbia University library; I use it sometimes. But you would have to read all these books in order to find the right quotation.

If you actually HAD a master's degree you'd be able to simply post a citation so any of us could check the quotes ourselves. Every day you sound more and more like Archie the Manager from Duffy's Tavern. You're not fooling anyone when you pretend to have education and experience that you do not.

Unfortunately I do not have time to respond to all posts, so I select the most interesting and substantive ones. I didn't block you on purpose; I just do not find your posts challenging enough. As for the insults, I learned from my mistake that they are not tolerated at this website; now the mods do not see my posts as offensive, and I trust their judgement.

That is to say, you can't Google phrases from my posts to construct an elaborate wall of text in order to take credit for the very ideas I'm expressing, as you do with JayUtah. :)

You remind me of a Tom Lehrer song.

 
There is no reference to the Jeffers’ experiment negating the Princeton research results.

There is in the book itself. The introduction summarizes the Jeffers research starting on page 40.

The way I see it, Jeffers did not conduct any telekinesis research; for some reason some people believe he did...

That "some reason" being the chapter by Jeffers, entitled "Physics and Claims for Anomalous Effects Related to Consciousness," beginning on page 135 of Psi Wars. Your propensity to deny facts is prodigious.

But let's be clear: that summarizes Jeffers' own affirmative work in psi research. It does not specifically criticize PEAR. That criticism, which we want you to address, is in the Skeptical Inquirer articles to which we linked days ago.
 
You. Always trying to escape around the fringes. :rolleyes:


The purpose was to point out the overall lack of connection with reality in your whole posting here. How you puerilely think that being categorical in your assertions gives them credibility and hides the inherent ridiculousness of many of them. How you show a clumsy grasp of the forum software, Internet search engines, Internet forums, statistics, logic, and everything else you pride yourself on managing or making a living from. And that you won't really contact any mathematicians to entice them to participate here; you just upped the ante on the verbal front, the only arena in which you have clearly shown any ability.

So, you were right. It was criticism.

I want to apologize for this post of mine before "Buddha" and the whole forum community. It was excessive and, for what it's worth, it lacks the references needed to make its criticism constructive, leaving the whole thing one of those diatribes I later criticize in others.
 
Further, it is you who cannot demonstrate a proficient knowledge of even basic descriptive statistics. You are constantly trying to fake it by hastily researching words and concepts your critics mention and posting largely irrelevant introductory passages from convenience sources, implying that you understand them while no one else here does.


The whole "categorical variables in clinical trials" thing is a good example of that.


He reminded me of the times, decades ago, when I was an associate professor in Operational Research and started each course by teaching Dantzig's simplex method. I would state a problem dealing with quantities of different models of trainers/sneakers to be manufactured and then ask the students what the variables were. Inevitably some of them replied "the trainers", and I would ask, "What? Reeboks? Adidas? Not really; it's the number of each model of sneaker. You are confusing the names of the variables with the variable values themselves, like mixing up the fixed boxes and their variable contents."
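For anyone who wants to see the distinction concretely, here is a minimal sketch in Python with scipy (the profits and capacities are hypothetical numbers of my own, purely for illustration):

Code:
from scipy.optimize import linprog

# Decision variables: x[0] and x[1] are the QUANTITIES of each sneaker
# model to manufacture -- not the sneakers themselves.
profits = [3, 5]                 # hypothetical profit per unit of models A, B
c = [-p for p in profits]        # linprog minimizes, so negate the profits
A_ub = [[1, 2],                  # machine-hours consumed per unit
        [4, 3]]                  # material consumed per unit
b_ub = [100, 240]                # available hours and material
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                     # optimal quantity of each model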


I have no problem imagining "Buddha" as one of those students, and I'm sure one or two of them would be spiteful enough to repeat the same style of explanation I gave, in a recriminatory way, if it served to solve their problems with debate.


You showed him he had no idea what he was talking about, and he tried to deflect that with a forced "gotcha moment" in the style of what I've just described.
 
I want to apologize for this post of mine before "Buddha" and the whole forum community. It was excessive and, for what it's worth, it lacks the references needed to make its criticism constructive, leaving the whole thing one of those diatribes I later criticize in others.

Kudos to you for taking the high road, but tbh I don't think there was anything in that post that wasn't true. Constructive criticism requires a receptive subject; sometimes it's healthy to just let yourself blow off steam.
 
Look at his wording: "I assumed, if it exists, it is inadequate and it would not take me too long to demolish it"

Yeah, there's a lot to unpack there.

He "assumes" that talking about non-existent articles is "common practice" in "academic environments".

Well, let's back up and point out what Buddha is actually trying to do. From his source we see

www.thefreelibrary.com said:
"The section on psi experiments closes with a contribution from Stanley Jeffers on 'physics and claims for anomalous effects related to consciousness' that might be hard going for non-physicist readers."

and Buddha's carefully-worded dismissal of it states "There is no reference to the Jeffers’ experiment negating the Princeton research results." (emphasis added) Well, pedantically true, but at the same time extremely misleading regarding what Jeffers says in Psi Wars, what Buddha's opponents have put before him, and what Jeffers has written elsewhere. The specific critiques of PEAR's prior work appeared in Skeptical Inquirer, and one has been repeatedly linked here. Here it is again. https://www.csicop.org/si/show/pear_proposition_fact_or_fallacy . That's what we've been trying to get Buddha to address. Instead, Buddha has found other work by Jeffers on the subject and is trying to equivocate amongst it all to argue that the specific critique we previously referred to doesn't exist because it's not the one Jeffers reference he found. "This one thing I found isn't what you said Jeffers wrote, so I'm going to conclude the thing you referred to doesn't exist."

But for careful reading, he might have succeeded. Earlier he tried to add some allegedly missing conference paper to the mix. He's trying to tie up his opponents in knots with various references and their supposed content.

Having unraveled the equivocation and clearly separated Jeffers' work into its appropriate citations, we are then left with Buddha's bizarre statement:
Buddha said:
The way I see it, Jeffers did not conduct any telekinesis research; for some reason some people believe he did, although he didn’t say that.

Stanley Jeffers most certainly did conduct psychokinesis research. It's reported in the book we're talking about. And it quite likely was reported additionally elsewhere. That's how science works; there's no one canonical publication. Jeffers' research consisted of work done with Alcock's advice. Some of it was even done at the PEAR laboratories, with Robert Jahn's assistance. And some of it even came agonizingly close to significance, but ultimately did not. So they (Jeffers and Jahn) had to conclude that they had not found a significant psi effect.

It takes a special kind of commitment to gaslighting for Buddha to deny what is clearly right there in his own source. Does he actually think reality changes to conform to what he concludes after a quick trip to Google? We believe Jeffers did research in psychokinesis "for some reason" because it's reported right there in the book Buddha cited. Normally a fringe claimant, when he discovers a source that disputes his claim, will just ignore it. It takes monumental hubris to try to gaslight people into believing it simply doesn't exist. As hallyscomet said, Buddha may be the worst, most transparent gaslighter ever.

Jeffers used a different apparatus based on another physical phenomenon known to be governed by the normal distribution -- the double-slit effect. Subjects were supposed to psychokinetically bias the deposition of particles through the slits. Jeffers consulted with Alcock on the human factors of the experiment design, precisely to avoid the kinds of methodological shortcomings that had doomed prior research. Jeffers specifically said ahead of time that in case he were to find a significant PK effect, he didn't want anyone else to be able to come in and say it was due to bad methodology or factors he had not properly controlled for. (All this is in the book, by the way.) Alcock helped Jeffers tighten up his protocols, and was satisfied with Jeffers' final experiment design. And no significant effect was observed. This echoes what happened when Jahn's colleagues in Germany and elsewhere tried a similar protocol with different apparatus and different subjects.

"Buddha" shows the utmost lack of familiarity with academic life.

The use of "would" and "not too long" can't quite hide his self-image as a comic-book superhero "demolishing" the villains. A very sad thing.

Indeed he seems to have this odd concept of academic research and writing as the same sort of rancorous Keyboard-Warrior mentality he exhibits here at this forum. He can't seem to imagine that Alcock, Palmer, Jeffers, Jahn, and others actually know each other and are cordial in their relations, even if they point out flaws in each other's research. This is how science works. It's not as if Alcock or Palmer sit down at the keyboard, crack their knuckles, and say, "All right, Jahn, you rebel scum, you've had it now. Wait 'til I'm finished with you!"

And yes, you're right, it's clear we couldn't ask for more evidence than today's posts of the ulterior motive being ego reinforcement. He's got it in his head that he's the world's best "demolisher" of errant claims, and he's going to rewrite as much reality as is necessary to make that seem true to him.
 
JayUtah said:
It takes a special kind of commitment to gaslighting for Buddha to deny what is clearly right there in his own source.


It's the whole "I don't know what I'm talking about but I'm positive I'm right" dance he does -following the style of Bishadi, Malcolm Fitzpatrick and so many other posters- that is so infuriating and has earned him such a bad reputation here.
 
It's the whole "I don't know what I'm talking about but I'm positive I'm right" dance...

The best example of that is probably his attempt to comprehend the data-sorting point that Palmer made. It reveals a woefully incomplete, unsophisticated understanding of p-values.

The p-value describes the probability of observing a result at least as extreme as the one we got, on the assumption that chance alone is at work. "Chance" here is a very overloaded term. What we mean is whether the result conforms to expectations, and expectations differ. We could expect a normal distribution. We could expect a baseline distribution. We could expect a uniform distribution. The idea in significance testing is that we keep everything the same between expectation and trial ("independent samples") except for one variable, and if the resulting distribution is improbable as the product of the independent processes alone, then the variable's influence is considered significant. Ahead of time we choose a p-value threshold to represent significance, almost always 0.05. If the expected, independent process has less than a five percent chance of producing the distribution we observed, then our observation is deemed significant.
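A concrete toy version, assuming purely for illustration that one REG run is 1000 fair coin flips (Python with scipy 1.7+; the counts are mine, not PEAR's data):

Code:
from scipy.stats import binomtest

# One hypothetical run: 1000 bits from a supposedly fair source, 540 ones.
result = binomtest(k=540, n=1000, p=0.5)
print(result.pvalue)  # ~0.012: chance alone yields a deviation this large
                      # or larger in only about 1.2% of runs, so against
                      # the usual 0.05 threshold this run is significant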

Earlier, when describing the key process of the REG, Buddha mentioned that the distribution of outcomes from the apparatus "forms" a normal distribution. I corrected him to say that it approximates a normal distribution. Absent any confounds, the REG outcomes observed under normal operation should converge to a normal distribution. This is not a nit-picky correction. The notions of the degree of convergence and of uncontrolled sources of error in practical approximations of randomness are critical to understanding PEAR's problems. Processes that are expected to conform to an idealized distribution will only ever approximately do so, if only from the natural uncertainty arising from a finite number of runs. Hence we can express their degree of conformance statistically, as a quantification of uncertainty.

Any process that conforms at p≈0.05 still has roughly a five-percent chance of producing an errant outcome by chance alone. That is, we would expect roughly one in every twenty runs to exhibit bias that is not the result of any deliberately manipulated variable. It's what we would consider normal variance in the process, irrespective of any variable such as (in Buddha's contrived example) equipment reliability. Buddha doesn't understand this. He read what Palmer wrote, and you can almost see the wheels frantically turning. He came up with an example that missed the point entirely.

If we have a process that, say, is ideally Gaussian but practically distributed (as measured against the normal) at only p < 0.05 under acceptable parameters, then out of 100 runs we would still expect roughly five runs to depart significantly from the ideal (i.e., exhibit bias) by chance alone. I don't see any evidence that Buddha understands how that figure is a necessary corollary to the concept of a p-value. He can't think of it except to try to connect it to the variable under test.
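The corollary, worked out in a quick sketch (illustrative numbers, not PEAR's):

Code:
from scipy.stats import binom

# Each run independently has a 5% chance of a "significant" deviation
# under pure chance; across 100 runs:
n_runs, alpha = 100, 0.05
print(n_runs * alpha)              # expected flagged runs: 5.0
print(binom.sf(9, n_runs, alpha))  # P(10+ flagged runs) ~ 0.03 -- seeing
                                   # many more than five would itself be odd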

Now what Palmer quotes another researcher as suggesting is that we naturally expect those five biased runs to be roughly evenly distributed across the 100 total runs. If, on the other hand, they are suspiciously clustered -- say the first or last five runs of the 100 -- then this may be a different kind of significance suggesting a psi effect. There may be only five biased runs, as chance would predict, but they are biased in the sequence in a way that purely descriptive statistics would not easily reveal, and which would be improbable in a way that's invisible to more mainstream methods of significance testing.
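Here's a sketch of how that clustering could be tested -- a simple permutation test of my own devising, to illustrate the idea, not the method Palmer or May actually used:

Code:
import numpy as np

rng = np.random.default_rng(0)
flags = np.zeros(100, dtype=bool)
flags[:5] = True                  # five biased runs, all at the very start

# Under the null, flag positions scatter uniformly; compare the observed
# mean flag position against random placements.
center = (len(flags) - 1) / 2
observed = flags.nonzero()[0].mean()
shuffled = [rng.permutation(flags).nonzero()[0].mean()
            for _ in range(10_000)]
p = np.mean([abs(s - center) >= abs(observed - center) for s in shuffled])
print(p)  # tiny: the expected five flags, front-loaded, are improbable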

That's as may be; we don't have those data from PEAR to determine it, and this is not a test PEAR did. But I bring this up because this same concept which Buddha misunderstands so perfectly re Palmer is the basis of Stanley Jeffers' criticism of the baseline data. If Buddha can't figure out what Palmer is talking about when he mentions "sorting" the expected biases (as opposed to producing unexpected biases), then he has no prayer of understanding what Jeffers is talking about when he talks about too few expected biases showing up in the baselines.
 
JayUtah said:
I don't see any evidence that Buddha understands how that figure is a necessary corollary to the concept of a p-value. He can't think of it except to try to connect it to the variable under test.
p-values treated as okidoki flags, and the eternally repeated p-value fallacy. If you discuss it with him a little, he'll wikipedigoogle for a while and come back lecturing us about Type I and Type II errors :D

To me the whole thing was over from the very beginning -that's why I didn't really follow the debate about Palmer in detail-: Jeffers showed that the baseline is biased, so all results are rendered moot: they didn't use a random device as claimed:
[Attached image: figure from Jeffers' critique -- the cumulative-deviation plot with the parabolic envelopes discussed further below.]


Jabba insisted that the physics behind it guaranteed a random output from the device. Even in a well-designed device, in those days some degree of impurity migration in electronic components might have played a role in long runs.

This for me supersedes any other topic related to this debate. Not to mention operator 010. And the miserable scope of the whole experiment.

This long debate with "Buddha" only served to show the holes in his knowledge and that "telekinesis" has really nothing to show.
 
To me the whole thing was over from the very beginning...

For me it was essentially over when he deployed his standard argument in his opening post, "These guys must be wrong because I'm so much smarter than they." That doesn't mean there isn't value in a subsequent explanation of the errors Buddha makes. But at a certain point, after a certain number of pages, there are diminishing returns. Once it becomes readily apparent to the reader that Buddha bluffs every time, there's little interest in pursuing that to the inevitable conclusion.

Jeffers showed that the baseline is biased, so all results are rendered moot: they didn't use a random device as claimed...

Well, let's be absolutely clear about something. Jahn et al. are the ones who presented the anomalous baseline data. It's attractive to write off some of these researchers as hopelessly biased, but Jahn did not conceal his data. What Jeffers has done is to point out, for those who don't readily see it, why the condition of PEAR's baseline data renders the study non-probative. It's one thing to say, "Yes, the variance in our baseline data is too high in some cases and too low in other cases (i.e., did not correlate to the variance in the calibration runs)." The layman might say, "Okay, so what?" Jeffers comes in and says, "That means the estimates of significance were measured against an obviously erroneous baseline and can't be relied upon to show actual significance."

Jabba insisted that the physics behind it guaranteed a random output from the device. Even in a well-designed device, in those days some degree of impurity migration in electronic components might have played a role in long runs.

I assume you mean Buddha. The slip is understandable.

Yes, we covered this before. While the process around which an apparatus is based may theoretically be governed by a normal distribution, its exhibited behavior over a finite number of runs will (according to theory) only ever approximate -- to a measurable extent -- the normal distribution, and (according to practice) be subject to innumerable confounds at varying degrees of significance -- e.g., degradation in electronic components, wear on mechanical parts, environmental factors.

Far from being a problem, these departures from the ideal form the basis for idealizing measured data to within a certain confidence interval. We expect the machine to be a certain degree off from ideal just by chance, owing to the factors I allude to above. If it's more off than that during a baseline run, then the baseline is known to be biased. If it's less off than that during a baseline run, there's an uncontrolled factor and you cannot assert that the baseline is not biased.
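To make that second case concrete, here's one standard way (a chi-square test for variance; the numbers are hypothetical, mine rather than PEAR's) to show that a baseline can be suspicious precisely for being too quiet:

Code:
from scipy.stats import chi2

# Hypothetical baseline: 100 runs whose sample variance comes out at only
# 70% of what the theoretical (calibration) model predicts.
n, ratio = 100, 0.70
stat = (n - 1) * ratio           # chi-square statistic under H0: ratio = 1
print(chi2.cdf(stat, df=n - 1))  # ~0.01: a variance this LOW is itself
                                 # improbable under chance alone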

Buddha dismisses this whole idea. He doesn't understand it, obviously, but he contrived an example that he thinks -- from his "infallible" knowledge -- means that such a method could not work. He writes:

Suppose, you have determined in advance the length of a sequence of samples of electric circuits you have to take to decide that the number of defective ones exceeds specified limit indicating that something is wrong with the manufacturing process. For whatever reason that only Palmer has knowledge of, you might decide that the process had produced a “biased” sequence, and continue sampling. Then a new sequence would indicate that the manufacturing process is fine, and stop sampling. But Palmer will tell you that your new sequence is, actually, a “biased” subsequence, and you must go on with the sampling. This means that, if Palmer is correct, a sampling never ends. This nonsense implies that all statistical methods are at fault no matter what application you choose.

Right, so we have a population of N widgets that comprises a production lot. Our goal is to estimate the defect rate. First, we know it's not zero -- just like we know that the departure of Jahn's REGs from a theoretical normal is not zero. No production process is defect-free -- intentionally so. It would be far too expensive. Similarly no machine runs perfectly every time, but we tolerate its misbehavior. So we calculate an acceptable defect rate -- in effect, a p-value -- one that ensures the cost of replacing the defective widgets that make it into the field is considerably below the cost of improving the production process to eliminate the defects. For many large-scale manufacturing processes, that's around p = 0.03. For every hundred widgets, you allow three defects. This is akin to saying that for every 100 baseline runs of the REG, you expect three baseline runs to depart significantly from the ideal normal. In the widget case the p-value for significant defect rate is a business determination. In the REG case the p-value is empirically determined.

He then alludes to the unremarkable concept of margin of error: how many widgets from the lot would we need to test in order to convince ourselves, within a certain confidence interval, that the lot has met or exceeded its defect-rate requirements. (I'm leaving out the one-tailed versus two-tailed distinction here for simplicity.) For some desired plus-or-minus amount, that's straightforwardly computed from N. The measured defect rate of the sample of n is expected to correspond to the defect rate of the lot of N widgets within the specified tolerance.
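For reference, the margin-of-error computation being alluded to looks like this (illustrative targets of my choosing):

Code:
import math

p_target = 0.03   # acceptable defect rate
E = 0.01          # desired margin of error (plus or minus one point)
z = 1.96          # 95% confidence
n = math.ceil(z**2 * p_target * (1 - p_target) / E**2)
print(n)          # ~1118 widgets to sample (ignoring the finite-population
                  # correction, for simplicity)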

But what if it doesn't? asks Buddha. Indeed, under the doctrine of normal variance the defect rate of one sample may vary considerably from the defect rate of the next sample and so forth. If we're not sure which one (or both) of those is biased, and by how much, then how are we to know when to stop sampling the lot? We'd have to keep sampling indefinitely, he claims. And since that's practically impossible, Buddha is claiming a sort of reductio ad absurdum that would require either Palmer to be wrong, or all of statistics to be wrong, but not both. Since statistics is obviously right, Palmer is obviously wrong.

Buckle up, kids. There's a lot of problems here.

First and foremost is the central limit theorem, which is, oh, only the most important theorem in descriptive statistics! (Buddha seems always to fall afoul, not of little nitpicky things, but of foundational knowledge.) The central limit theorem says many things, but here it says that the accumulated sample defect rate and the lot defect rate must converge, even if the rates are determined by factors that aren't normally distributed. Further, the convergence proceeds at a predictable rate, with the uncertainty shrinking on the order of 1/√n. This means we don't have to take samples ad infinitum. (This is not the first time Buddha has made this elementary error.) The samples are guaranteed to converge at a useful rate.
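A quick simulation of that convergence (illustrative; a synthetic lot of my own making):

Code:
import numpy as np

rng = np.random.default_rng(1)
lot = rng.random(100_000) < 0.03       # a lot with a true 3% defect rate
for n in (100, 1_000, 10_000, 100_000):
    print(n, lot[:n].mean())           # the sample rate homes in on 0.03,
                                       # with spread shrinking like 1/sqrt(n)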

Second, we have an expectation. If we aim for a defect rate of p=0.03, the expectation is initially that we met it. That's a prior probability of defect. We can then feed samples through Bayes' theorem and come up with a posterior probability that the lot defect rate has departed from expectation, given the results of each sample. This method tends to converge extremely rapidly to a conclusion. This alludes again to what Palmer and May were talking about. The Bayesian inference method is based on the expected uniform distribution of biased samples in the lot. If the samples are suspiciously front-loaded or back-loaded, or otherwise clustered, Palmer says this is qualitatively significant in another way even if the "defect rate" in the data is what it should be.

Bayes helps us in other ways too. We can compute the likelihood that the samples are biased based on estimates of the probability of front-loaded or back-loaded sampling given the expectation of uniform distribution. That is, given an assumption, say, that the first sample was biased, then what is the likelihood that the second sample will also be biased if the lot is good?
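A minimal Beta-Binomial sketch of that updating -- my stand-in prior and sample, purely illustrative, not anyone's actual protocol:

Code:
from scipy.stats import beta

a, b = 3, 97                    # prior roughly centered on a 3% defect rate
k, n = 12, 200                  # then observe 12 defects in a 200-widget sample
posterior = beta(a + k, b + n - k)
print(1 - posterior.cdf(0.03))  # high (~0.95): one sample already makes it
                                # very likely the lot misses its target --
                                # the inference converges, no infinite sampling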

Finally, the PEAR data as treated by May and Palmer are not sampled from a larger population, as Buddha's example discusses, such that a margin of error applies. Buddha is alluding to the rather simplistic concept that you can determine something about a population, to a quantified degree of certainty, by measuring a representative sample of it. That's not at all what Palmer is talking about. He's talking about reasoning about and from all the available data. If you have a lot of size N and you divide it into samples of size n and measure all the samples, the sample means will be expected to converge to the population mean, but the individual sample means will be expected to vary from the population mean by a statistically predictable amount. The error is expected to diffuse among the samples, but not perfectly uniformly. The determination of what is biased and what is not, in this case, is not purely guesswork.

Where Jeffers is concerned, the baselines are expected to exhibit the same sample-wise bias rate as the calibration data. This would be akin to having your N-sized lot of widgets partitioned into n-sized samples and looking at how many of those samples were biased, then sampling according to a different criterion -- again into n-sized samples -- and seeing a radically different bias rate. It suggests that the bias might be caused by the variable behind the second sampling criterion.
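As a toy version of that comparison (hypothetical counts; Fisher's exact test is my choice of tool here):

Code:
from scipy.stats import fisher_exact

# Hypothetical counts: 5 of 100 samples flagged under the first criterion,
# 20 of 100 under the second -- a radically different bias rate.
table = [[5, 95],
         [20, 80]]
oddsratio, p = fisher_exact(table)
print(p)  # ~0.002: the difference points at the second criterion's
          # variable rather than at chance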

When Buddha suggests that what Palmer and May contemplate would take infinite sampling, I just have to laugh. Again, the whole foundation of statistics is to avoid having to do that, yet retain a quantified degree of certainty in the observations. Buddha not only doesn't understand statistics, he doesn't understand the core concept of what statistics are for.

This long debate with "Buddha" only served to show the holes in his knowledge and that "telekinesis" has really nothing to show.

His evolution book wasn't really about evolution. His God thread wasn't really about proof for God. His reincarnation thread wasn't really about reincarnation. And this thread really isn't about psychokinesis. All Buddha's threads seem to be his effort to convince people he's very smart. The side effect of correcting Buddha's errors in an on-topic fashion is, regrettably, that he doesn't come off looking very smart, or prudent.
 
JayUtah said:
The layman might say, "Okay, so what?" Jeffers comes in and says, "That means the estimates of significance were measured against an obviously erroneous baseline and can't be relied upon to show actual significance."


And that didn't mean that some effect couldn't be present -on the supposition there was no magical operator "Ura Gellar"- but the actual significance remained unknown.

Suppose, you have determined in advance the length of a sequence of samples of electric circuits you have to take to decide that the number of defective ones exceeds specified limit indicating that something is wrong with the manufacturing process. For whatever reason that only Palmer has knowledge of, you might decide that the process had produced a “biased” sequence, and continue sampling. Then a new sequence would indicate that the manufacturing process is fine, and stop sampling. But Palmer will tell you that your new sequence is, actually, a “biased” subsequence, and you must go on with the sampling. This means that, if Palmer is correct, a sampling never ends. This nonsense implies that all statistical methods are at fault no matter what application you choose.
That nonsense implies that you, "Buddha", were being clearly silly, and you should have gone back to your starting point, thought everything out again, deleted the whole paragraph, and written something sound to replace it -or nothing at all-.

JayUtah said:
Indeed, under the doctrine of normal variance the defect rate of one sample may vary considerably from the defect rate of the next sample and so forth. If we're not sure which one (or both) of those is biased, and by how much, then how are we to know when to stop sampling the lot? We'd have to keep sampling indefinitely, he claims. And since that's practically impossible, Buddha is claiming a sort of reductio ad absurdum that would require either Palmer to be wrong, or all of statistics to be wrong, but not both. Since statistics is obviously right, Palmer is obviously wrong.

Buckle up, kids. There's a lot of problems here.

First and foremost is the central limit theorem, which is, oh, only the most important theorem in descriptive statistics! (Buddha seems always to fall afoul, not of little nitpicky things, but of foundational knowledge.) The central limit theorem says many things, but here it says that the accumulated sample defect rate and the lot defect rate must converge, even if the rates are determined by factors that aren't normally distributed. Further, the convergence proceeds at a predictable rate, with the uncertainty shrinking on the order of 1/√n. This means we don't have to take samples ad infinitum. (This is not the first time Buddha has made this elementary error.) The samples are guaranteed to converge at a useful rate.


If "Buddha" had something more than a shallow course of Statistics done in a rush, he would have noticed that the mean of the multiple samples quickly tended to the mean of the total population. This is part of any basic questionnaire on the subject, something anyone can find in a "shortcut" book like those of the Schaum series, to name one source from the Anglo-Saxon world.



JayUtah said:
His evolution book wasn't really about evolution. His God thread wasn't really about proof for God. His reincarnation thread wasn't really about reincarnation. And this thread really isn't about psychokinesis. All Buddha's threads seem to be his effort to convince people he's very smart. The side effect of correcting Buddha's errors in an on-topic fashion is, regrettably, that he doesn't come off looking very smart, or prudent.


About the book's author, the jury is still out. Even that mediocre booklet shows abilities "Buddha" has failed to reproduce here.


I would like to voir dire "Buddha", but I spent something like ten minutes trying to find out how "test de hipótesis de diferencia de medias" (a hypothesis test for a difference of means) is said in English, to no avail. Forgive me if I never reply to your carefully laid-out posts in style, but with many topics I have thrown in the towel on learning what things are called in English. If starting from the Wikipedia article in Spanish and switching to the Wikipedia article in English doesn't do the trick, I lose my patience. Like any language student, I'm bound to say what I can, not what I want.



One thing I can ask "Buddha" anyway: why does he think the envelopes in the figure taken from Jeffers -in my previous post- are parabolic? The answer requires pointing to a well-known formula, one that is also used in the likes of the Schaum series.
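A hint for everyone else (my paraphrase of the textbook fact presumably in question): the cumulative deviation after n trials, each with standard deviation σ, itself has standard deviation σ√n, so a fixed-confidence envelope is y = ±z·σ·√n, equivalently y² = z²·σ²·n -- which, plotted against n, traces a parabola lying on its side.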
 
Schmidt investigated a different ESP phenomenon, precognition. I am not interested in this stuff, so I kept reading the Palmer report. In chapter 9 Palmer writes about metal-bending experiments, which is a form of telekinesis.

“The most extensive metal-bending research has been conducted by
Dr. John Hasted, Professor of Experimental Physics at Birkbeck College,
University of London. His most substantive work, which will be the focus of
this review, has been published in five experimental reports in the Journal
of the Society for Psychical Research (Hasted, 1976, 1977, 1978; Hasted &
Robertson, 1979, 1980, 1981).” Palmer, page 178

J. B. Hasted had a PhD in Physics (he died in 2001); he is the author of the book Metal-Bending. Palmer didn't give a single reference to Hasted's articles; I had to look for them on my own, which took me about an hour. I didn't finish reading the articles yesterday; tomorrow I will present my critique of Palmer's evaluation of Hasted's work.

These are the titles of the articles and links to them.

The Detail of Paranormal Metal-Bending
https://www.cia.gov/library/readingroom/docs/CIA-RDP96-00788R002000130011-5.pdf
Physical Aspects of Paranormal Metal-Bending

http://www.urigeller.com/detail-paranormal-metal-bending/

These articles are an easy read for engineers; they do not have statistical analysis of experimental data. I wish the audience would take time to read them.

A few words about Palmer: several opponents wrote that I am not qualified to criticize an article written by a professional mathematician. This kind of criticism is extremely weak; usually I ignore it. However, I also thought that Palmer was a mathematician. While reading the Palmer report, I kept wondering how an acclaimed mathematician could write such garbage. Now I think I should forgive Palmer his fuzzy math – he is not a mathematician; he has a PhD in Psychology.

Here is some data on the Jeffers experiments that supposedly invalidated the PEAR research:
https://www.scribd.com/document/74208146/Jeffers-PEAR-Proposition-Critique-Post
This article explains why Jeffers' claim is false. The author also gives the titles of two Jeffers' articles about his reproduction of the PEAR experiment; I will take a look at them in the near future.
 
Schmidt investigated...

I guess you're just in drive-by mode now. Nobody asked you to address Schmidt. We've been exceptionally clear in specifying what criticism we want you to address and how we want it addressed.

I wish the audience would take time to read them.

I'm sure you very much wish that your opponents would follow your script and avoid the topics that make you uncomfortable. However we have asked you to address certain specific parts of the criticism against PEAR. Your unwillingness to do so indicates you are not taking this debate seriously.

A few words about Palmer: several opponents wrote that I am not qualified to criticize an article written by a professional mathematician. This kind of criticism is extremely weak; usually I ignore it.

Yes, you are ignoring practically everything that has been said to you that points out your elementary errors in basic descriptive statistics. This is important because you're using your own understanding as the yardstick to measure the work of others. You clearly don't know what you're talking about, and this has been demonstrated -- not merely claimed. That sort of criticism cannot be so easily brushed off. And here you are again today, blustering past some rather scathing revelations of the degree of your incompetence, hoping to change the subject.

However, I also thought that Palmer was a mathematician. While reading the Palmer report, I kept wondering how an acclaimed mathematician could write such garbage.

You did not know who John Palmer was before you dismissed him as biased and incompetent, and you're still trying to promote the notion that he is. You haven't figured out that he differs in his opinion from yours because you're the one who's demonstrably incompetent.

Now I think I should forgive Palmer his fuzzy math – he is not a mathematician; he has a PhD in Psychology.

As you pointed out, most people who make their living doing data analysis are not mathematicians, including you. That is to say, they do not have a degree in mathematics. John Palmer is a world-recognized expert in a field that relies very heavily on statistical analysis to reach its conclusions. His math is not "fuzzy," as you keep trying to insinuate. You haven't shown anything wrong with his statistical analysis; you have shown only that you don't understand it. Alec and I have demonstrated this at length, and your only response is to say you can safely ignore all that, Trump-fashion, by vaguely labeling it "weak."

This article explains why Jeffers' claim is false.

Are you sure? Are you sure you didn't just Google for something that purported to criticize Jeffers and throw the link out there? What you've linked to is essentially an anonymous Internet post. This stands in stark contrast to your earlier claim that you had the capacity yourself to address all the critics of PEAR. I'll take this as an admission that you don't.

What we require from you is a demonstration of your purported prowess in statistical analysis -- your braggadocio about being able to "demolish" all your critics. What we need from you is a description of Jeffers' work in your own words, and a defense of PEAR against him -- again in your own words. What you've done today is no better than what could be done by a bored 14-year-old with basic Internet search skills.
 
The author also gives the titles of two Jeffers' articles about his reproduction of the PEAR experiment; I will take a look at them in the near future.

And with that wave-of-the-hand, you sidestep the problem with your earlier claim that Jeffers did no such research. In reality, you made that claim after doing practically nothing to verify it, giving your readers plenty of reasons to distrust anything you say on your own authority. And now you're confronted with the uncomfortable fact that the source you want your opponents to address clearly contradicts your previous claim. No clarification? No mea culpa? Just a sweep under the carpet and let's move on?

Skipping to the end of the Williams article, there's an obvious howler. Williams criticizes Jeffers for focusing on the problems with the baseline data, insinuating that according to Jahn the proper comparison should have been the calibration data. Well, no, Jahn did his actual analysis for significance against the baseline runs. That means it's the baseline runs that determine whether the experimental runs are significant or not. It doesn't matter for that purpose whether Williams or anyone else thinks the calibration data would have been better suited. The problem lies in the actual data that were used. Further, if it had been Jahn's intent to use the calibration data as the basis for determining significance, why collect baseline data separately? Or at all? And finally, it is Jahn himself who noted the problems in the baseline data, not Jeffers. Jeffers merely indicated their significance. Williams ignores this fact entirely.

If you look at all the other documents on his Scribd, you'll gather the impression that "Bryan J Williams" seems to focus almost entirely on defending claims of the paranormal from mainstream criticism. Did you consider any potential bias on his part before you held him up as an authority?
 
I wish the audience would take time to read them.

I've found that incompetent consultants often try to redirect attention away from critical show-stopper issues towards the areas where they're used to feigning competence.

You are doing the same thing here. You are ignoring fatal, show-stopper issues in favor of fiddling about with side issues you're more comfortable with. Nobody is fooled. You are on the Titanic, pissing and moaning about the orchestra's performance while ignoring the huge hole in the side of the ship and the fact that it's taking on water.
 
""Buddha"" said:
A few words about Palmer: several opponents wrote that I am not qualified to criticize an article written by a professional mathematician. This kind of criticism is extremely weak; usually I ignore it. However, I also thought that Palmer was a mathematician. While reading the Palmer report, I kept wondering how an acclaimed mathematician could write such garbage. Now I think I should forgive Palmer his fuzzy math – he is not a mathematician; he has a PhD in Psychology.

Such confusion of yours, and your fallacious appeal to authority, is born of your own overall lack of capacity to produce anything but garbage, not of Palmer's.

I'm delighted you chose such a contrived way to admit it, and now you're waving your hand toward a different topic with which, undoubtedly, you'll make the same kind of mistakes:

"""Buddha""" said:
Schmidt investigated a different ESP phenomenon, precognition.

Not that anyone expected you to admit that you would need to study for years in an academic environment, and basically be a different person, to really discuss what you're dropping here. Your ego needs immediate satisfaction, and pretending to be who you are not is your way, with only one person deceived: you. We know this because you're the umpteenth person offering the same dance act in 16 years of forum history. You're not even the only one doing it today, but one of five to ten.

As I said, it's déjàbba vu.
 
I'm delighted you chose such a contrived way to admit it, and now you're waving your hand toward a different topic with which, undoubtedly, you'll make the same kind of mistakes...

Indeed, he seems very interested in getting away from PEAR, Palmer, and Jeffers.

We asked Buddha to discuss Jeffers, and all he can do is frantically search for where someone else has taken up Jeffers. As we predicted, Buddha himself cannot understand Jeffers or "demolish" him. Standard keyboard-warrior approach. And Williams really doesn't do anything either. It's the standard apologetics by which the psi camp seems to respond to all its critics.

I already touched on Williams' gross misrepresentation of the baseline bind issue. Jahn, not Jeffers, brought it to our attention. All Williams can do is try to throw a lot of obfuscation at it and trump up the illusion that Jeffers somehow didn't understand what Jahn was trying to do. There is no misunderstanding; Jahn captured baseline data intending to use it as the null -- and, in fact, did use it as the null.

Williams criticizes Jeffers' own psi research, but doesn't offer any original criticism. He merely refers to Dobyns as Jeffers' major critic regarding methodology. But he doesn't say much about what Dobyns says. And that's probably because Jeffers himself in "The PEAR Proposition: Fact or Fallacy?" identifies Dobyns as his critic. Williams hasn't done anything more than copy Jeffers' own disclosure and attempt to take credit for it.

To understand this a bit better we have to return to Alcock, because there's some history there. Initially the psi camp was quite excited to have Stanley Jeffers taking their research seriously, doing his own research, and using their labs and their publishing organs to conduct that research. It was only after he got the "wrong" answer that the questions of methodology and bias came up. Alcock notes that after Jeffers failed to confirm Jahn's findings -- in Jahn's own lab, I might add -- the psi community dropped Jeffers like a hot potato. It's important to remember that Jeffers feared this might happen, so he took great pains in advance to engineer his experiment properly and to get buy-in from potential critics. This included conversations with York Dobyns -- one of PEAR's researchers -- about the proper statistical treatment of the data. Dobyns' later criticism is somewhat untimely.

Further, Williams tries to drive a wedge between the Jahn apparatus and the Jeffers apparatus based on double-slit deposition of particles. The latter, Williams argues, may not be suitable to psi research. But Williams glosses over the fact that Jahn, not Jeffers, suggested other quantum-level phenomena that might be susceptible to psychokinesis, including the double-slit effect. Jahn certainly did not object to Jeffers' method on that basis, and indeed receives credit in Jeffers' writings for it. So it appears Williams is slightly rewriting history in order to cast unwarranted aspersions.

Let me hasten to remind the audience that Bryan Williams is critiquing work that Buddha prematurely claimed did not exist. I'll leave the reader to draw whatever conclusions he may from Buddha's conspicuous lack of correction or clarification of his error. I conclude, however, that he is simply operating in Keyboard-Warrior mode, having no appreciable knowledge of the subject and instead just randomly throwing out hastily-Googled items without regard to whether they are reliable scholarship or whether they fit into a consistent or credible argument. It seems to be a never-ending ploy for "gotcha!" moments regardless of what was said in the previous posting cycle. Buddha even explicitly denies any need to look at criticism that arises from the previous day's or two days' worth of postings. Ever onward, never looking back, seems to be his motto.

Williams goes on to note the failure, acknowledged by PEAR, of other universities in Germany and elsewhere to replicate Jahn's initial findings. And in addressing those failures he devolves into the same old apologetics we hear from the fringe. He tries to draw attention away from the failure to achieve significance by saying the replication efforts nevertheless made interesting findings. Mere interest is a low and elastic bar. And while it would suggest further study, it does not redeem this one.

There is a fair amount of fringe-standard post hoc hypothesizing in Williams' apology. Palmer did this too -- the data-sorting argument. Here Williams speculates that the PK effect may have properties that defy the present protocols to identify, such as severe limits in amplitude and frequency, or aversion to laboratory settings. Again, this suggests further research but does not redeem the present studies; Williams and his colleagues want that speculation to stand in place of the failure of controlled studies to confirm their prior hypothesis, but their hypotheses arose only after seeing failure in their data.

And finally Williams tries to sidestep the notion of significance altogether and note that something seemed to affect the REG machines in the right direction, albeit not to the level of significance. That's a fairly standard technique in pseudoscience: trying to rewrite the criteria for significance after receiving the data. It's consummate goalpost-shifting. The obvious statistical rebuttals aside, Williams ignores entirely that the initial PEAR protocol was deficient and did not preclude such things as tampering and other confounds that may explain favorable variance. He is basically begging the question that the proffered hypothesis explains marginally significant findings, knowing full well that adequate controls were not in place to support that. The curious behavior of the volitional categorical variable especially tells us that a psi effect is probably not the cause.
 
I've found that incompetent consultants often try to redirect attention away from critical show-stopper issues towards the areas where they're used to feigning competence.

That's clearly what's happening here. He has to have realized that he's in over his head when it comes to statistics, and now he's trying to divert attention away from statistics toward engineering. We'll get there eventually, but I'm not ready to let statistics go just yet.

Buddha is still talking out of both sides of his mouth as to whether he's a mathematician, or whether one needs to be a mathematician in order to understand and properly criticize the PEAR research. Buddha claims to have a degree in applied mathematics, but then tells us he never claimed to be a mathematician. But he also told us from the get-go that John Palmer was incompetent. Today he tells us that he initially thought Palmer was a mathematician, so how does that equate to his being incompetent? If he was supposed to be incompetent as a mathematician, how could Buddha -- an admitted non-mathematician -- properly criticize him? That would only work if Buddha, somehow, as a non-mathematician, acquired the expertise to criticize people he thinks are mathematicians. But if that's possible, then it doesn't matter whether one identifies as a mathematician; one can acquire the necessary skill and knowledge without it. And that certainly seems to be what Buddha thinks, because he reminds us that one can be a data analyst without "being a mathematician" or identifying as one. But apparently, in Palmer's case, one can't properly analyze data if one's career is as an academic psychologist.

As we've come to expect, Buddha is trying to contrive a system of rules and expectations that means he, and only he, is an appropriately qualified expert, and on that claim to authority alone he gets to dismiss everyone who disagrees with him on statistics as "obviously" unqualified and therefore probably wrong. It's amazing that he thinks this actually works. He can't even stay consistent with himself from day to day. I can't imagine why he expects anyone else to buy it.

No, of course I'm not going to let the statistics go. He started this thread purporting to disagree with PEAR's critics on the basis of their purportedly incorrect treatment of statistics. I'm still going to pursue that, and I'm still going to demand that he discuss how Jeffers interprets the statistical errors PEAR made. Buddha can't do that. He can't discuss, in his own words, what PEAR's critics have said in a field Buddha identified as a core competency. The bluff and bluster end here. Not even Williams goes into any detail about the baseline problem. He makes no statistical defense against a specifically statistical criticism. All he can do is manage vague insinuations that suggest Jeffers didn't know what he was talking about.

You're right when you say that unscrupulous consultants often deflect blame for their own failures by changing the subject toward their professed competencies. But I say the phenomenon is broader than that. Charlatans of all stripes try to deflect attention away from their failures, and it doesn't necessarily have to be toward anything in particular. The approach of "Moving on, nothing to see here folks," is the ordinary deflective pivot practiced by politicians, business leaders, and everyone who's basically trying to hide something. It's certainly practiced by fake consultants, but it is exemplary of broader human behavior.
 
Today is Thursday and we've already had the "Buddha drive-by". Hopefully tomorrow we'll hear all that he learned from the mathematicians he said he was going to talk to "on Thursday".
 
