Is Telekinesis Real?

I think it was you who called Palmer a straw man just because I chose his article; I was just following your lead.


You haven’t really been reading JayUtah’s posts, have you? Not even the one you were replying to, and quoted, there.
 
I think it was you who called Palmer a straw man just because I chose his article...

I've addressed this three times already.

You keep saying that in psychological studies the outliers should be rejected.

No, I said you were incorrect when you said Palmer was wrong to exclude Operator 010 from the aggregate statistics.

It is your personal point of view; however...

Nope. It's the point of view of every professional who has examined the PEAR data. You quoted my post entirely, but you didn't address a single thing I said in it.

...you didn't provide a single example of a psychological study rejecting an outlier

That's a lie. I gave a lengthy and thorough examination of the subject yesterday. That also included a specific reference to a famous experiment in psychology whose author has lately written a retrospective book on it, including descriptions of his efforts to draw a sample, homogenize it, and winnow outliers. You do not read everyone's posts here. You do not follow links and read what people here cite. You are the last person who can comment honestly about what your opponents have or have not provided.

while I provided several examples showing why in many cases the outliers should be discarded including the one involving a clinical trial.

You gave one example of a clinical trial, which I have discussed at length. Your other examples were given under the presumption that experiment design in psychology must somehow be just like designing control systems. I discussed those at length too.

A mere glance at this thread shows that I have discussed your objection to Palmer at length. At this point it is a bald-faced lie to claim I have not answered you thoroughly.
 
Maybe some people are making money at the casinos the way you described, but for obvious reasons they are silent about that. Unfortunately I am not one of them, although I occasionally play poker online.

The purpose of the Princeton study, as I understand it, was to show that so-called everyday people have occasional sparks of telekinesis, but they cannot sustain their telekinetic abilities for long; this is the reason why they chose statistical methods to analyze their results.

Then there is Uri Geller, who claims that he has tested telekinetic abilities, but he is a conman who made millions demonstrating his "gift"

Wow!

It sure is amazing that you can be wrong about so many things concerning the very subject that you continually describe as being a real thing, which has led me to believe that you have got to be about the worst advocate for supernatural things that I have ever encountered.

But anyway, before you continue to lionize Geller, you should be made aware that several years ago Geller himself stopped claiming to have any sort of "real" psychic powers and now describes himself as an entertainer. And just in case you also missed this bit of news, there are entertainers who make millions of dollars per year without using any sort of supernatural abilities.
 
The desired behavior of a control system is not at all the same as the expectation from a naturally-occurring system. Yes, there are some control applications where you want a "hair trigger" and are able to tolerate false positives. In those cases you would know from analysis or theory that the control action has low consequence and/or low cost, and/or that inaction has high consequence. But on the other hand there are control applications where the opposite is true. Your one example is hardly representative.

Take for example the weight-on-wheels (WOW) sensor on a typical airliner. A number of control discretes are tied to that process variable. Most visibly, the speed brakes (spoilers) and tire brakes are frequently "armed" to deploy or activate on the WOW signal. And if the landing gear trucks are pitched to fit the gear bay, the WOW signal relaxes the retaining actuator so that the truck pitches properly onto the runway. But the WOW signal is filtered, integrated over time. Only when the airframe has properly settled for sufficient time do these actions take place.

Why? Because premature control in that case is high-consequence, whereas inaction is neutral -- spoilers and brakes can be manually activated, and truck-pitch actuators have fail-safes. If the airliner merely bounced, say in rough weather, and the WOW signal were not properly integrated, the spoilers would deploy with the airliner possibly several meters above the runway, leading to a sudden loss of lift and a sudden, unacceptable increase in sink rate. The truck-pitch actuator would relax without the runway being in contact with at least one axle, possibly causing the trucks to swing and the airframe's c.g. to fluctuate unacceptably as a result. The brakes would set, causing the tires eventually to hit the runway with a higher level of resistance and therefore increased skid. This prematurely wears the tires, but also risks blowout and loss of control -- especially if one main gear should hit the runway before the other after WOW application.

Hence the WOW sensor must be steady for at least 1 second before the control variables are triggered. Momentary WOW signals are discarded as anomalous. When the danger of false-positive control output is significant, one designs the system to recognize and reject spurious outputs. Heck, even amateur electronics hobbyists quickly learn to "debounce" mechanical switches in order to avoid control outputs cycling rapidly during the microseconds in which the switch sporadically makes contact as it closes. This is basic stuff.
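
To make that debouncing idea concrete, here is a minimal Python sketch of such a persistence filter -- not flight software, just an illustration, with the 1-second hold time assumed from the WOW example above:

```python
class Debouncer:
    """Pass a raw discrete through only after it has been stable for
    `hold` seconds; momentary blips are discarded as anomalous."""

    def __init__(self, hold=1.0):
        self.hold = hold     # required stability time, seconds
        self.raw = False     # last raw input seen
        self.since = 0.0     # time the raw input last changed
        self.state = False   # debounced (filtered) output

    def update(self, raw, now):
        if raw != self.raw:          # raw input changed: restart the clock
            self.raw = raw
            self.since = now
        elif now - self.since >= self.hold:
            self.state = raw         # stable long enough: commit it
        return self.state

# A 0.1-second bounce at t=0.2 is rejected; sustained contact starting
# at t=1.0 is accepted only once it has held for the full hold time.
wow = Debouncer(hold=1.0)
for t, signal in [(0.0, False), (0.2, True), (0.3, False),
                  (1.0, True), (1.5, True), (2.1, True)]:
    print(f"t={t:.1f}s  debounced WOW = {wow.update(signal, t)}")
```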

Let's move slowly back to the topic. In spacecraft designs -- both manned and unmanned -- it is common to provide a global inhibit signal to cut out things such as propulsion under certain conditions, such as when the spacecraft has landed. In the Apollo lunar module this was done explicitly by the LMP "poking" a non-negative value into location 0413 in the guidance computer's memory after the pilot had completed the landing. In several unmanned lander designs, this was detected by an onboard accelerometer operating on the vertical-axis dimension of the spacecraft. In all those cases, the effect was to provide an INHIBIT signal, either to software programmed to consult the appropriate signal, or to combinational logic using the register voltage as a discrete. The goal in either case was to preclude actions that would be considered undesirable for a landed spacecraft, such as firing the descent motors.

Sadly, more than one spacecraft in our history of space exploration did not properly filter the accelerometer to exclude such spurious signals as the shock of the spring-loaded landing legs deploying, and thus inappropriately inhibited the descent-engine operation. Just because the designers didn't anticipate or imagine the eventual cause of failure, that was no excuse not to apply appropriate control.

So in your rush to criminalize the advisable conditioning of PEAR's experimental data, you've managed to bring up an example that not only fails to represent the behavior of data in the experimental sciences, but fails to represent the experience of control-systems design. It takes effort to misrepresent to that degree. I would expect someone claiming expertise in both applied mathematics and control system design to have drawn the parallel between basic control designs such as PID controllers and their counterpart concepts in statistical sampling. You don't seem to understand either concept, or the relationship.

To be sure, you are correct in saying that excluding anomalous data requires judgment. But you are not correct in pretending your one example is suitable for either situation. You are further wrong in insisting that you have the proper judgment in this case to comment on the data conditioning recommended by others who are experts in the field.

Theory holds that input discretes should be well-behaved and accurately represent the real-world condition we conceptualize as a discrete event in the process we are controlling. We therefore introduce conditioning procedures to mitigate the departure of practice from theory and formalize rules for rejecting false inputs so that the data suitably satisfy theory. Similarly, theory predicts that certain variables sampled from the real world should result in any of several possible distributions of outcome. In like manner we adopt practices to help mitigate departure from theory due to error, and we do so in a way that doesn't require us to know or speculate about the possible causes of error in advance.

In the context of PEAR research, we consider the outcomes theory would predict on either side of the question whether PK is real. If there is no such thing as a psychokinetic effect in humans, then a test properly designed to measure one would fail to show significant variance. The distribution of measurements for all subjects would cluster very closely around the null measurement, with standard deviation corresponding only to measurement and sampling error. There would be only inconsequential variance across all subjects.

In contrast, if there were a PK effect in humans, we could expect a test properly designed to measure that effect to produce results for a good sample of subjects that resembled a normal distribution. That is, at one end of the curve we would have a few people who -- for whatever reason -- had little if any PK ability. At the other end we would have a few people who -- again for whatever reason -- had prodigious PK ability. We should expect the majority of people to cluster within a standard deviation or so of the mean PK measurement, for that mean to differ significantly from the mean for null, and for the standard deviation to be broad enough to indicate actual variation in the data that isn't explained by the baseline (which includes both measurement error and sample error). This would be consistent not only with a normal variance in any human ability -- natural or supernatural -- across a proper sample, but consistent also with what the believers in PK have long believed to be the case. Specifically, in Buddhism the PK ability is thought to vary in individuals according to the degree to which one has attained enlightenment, ranging from spoon-bending and other parlour tricks to full-blown corporeal flying. A big part of exercising proper judgment in data conditioning is knowing what the data should look like.
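
Purely as an illustration of those two predictions, here is a quick Python simulation. The subject count, trial count, and effect size are all invented; the point is only that under the no-PK hypothesis the inter-subject spread collapses to sampling error, while a real effect widens it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_trials = 30, 10_000   # assumed sizes, purely for illustration

# No-PK hypothesis: every subject's true hit rate is exactly 0.5, so
# inter-subject spread comes only from binomial sampling error.
null_scores = rng.binomial(n_trials, 0.5, n_subjects) / n_trials

# PK hypothesis: true hit rates themselves vary normally across subjects
# (mean 0.52, SD 0.01 -- invented numbers, not PEAR's).
true_rates = rng.normal(0.52, 0.01, n_subjects)
pk_scores = rng.binomial(n_trials, true_rates, n_subjects) / n_trials

sampling_sd = np.sqrt(0.5 * 0.5 / n_trials)   # spread expected from chance alone
print(f"SD expected from sampling error: {sampling_sd:.4f}")
print(f"observed inter-subject SD, null: {null_scores.std(ddof=1):.4f}")  # ~sampling SD
print(f"observed inter-subject SD, PK:   {pk_scores.std(ddof=1):.4f}")    # clearly wider
```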

The original PEAR data resembles neither of those theoretical predictions. PEAR misleadingly represents that the coarsest of aggregations (omitting per-subject data) shows a significant variance over baseline. That's because the coarsest of aggregations does not reveal that the variance across subjects is suspicious. Moreover, when the anomalous data point is removed, the data across all subjects does resemble one of the expected distributions -- the non-PK one. This is the judgment that's required in this case. Human-subjects research in a properly vetted sample should exhibit variance across subjects conforming to past experience in the field -- generally a one-result cluster (no significance) or a normal distribution (possible significance). Having all the data congregate at one end of the distribution and one data point all by its lonesome at the other end is highly indicative of an anomalous result that should be conditioned away. Without it, the data conform to one of the two expected outcomes.
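
One standard way to formalize "one data point all by its lonesome" is a single-outlier test such as Grubbs'. To be clear, this is not the procedure Palmer reports using; it's just a sketch, with made-up per-subject numbers, of how incongruence in context can be tested without speculating about cause:

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """Grubbs' test: flag the single most extreme value if it is
    statistically incongruent with the rest of the sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i = int(np.argmax(np.abs(x - x.mean())))     # most extreme observation
    g = abs(x[i] - x.mean()) / x.std(ddof=1)     # Grubbs' statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)  # critical t value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return (i, x[i]) if g > g_crit else None

# Hypothetical per-subject effect sizes: a tight no-effect cluster plus
# one lone high score standing in for Operator 010.
scores = [0.01, -0.02, 0.00, 0.02, -0.01, 0.01, 0.00, -0.02, 0.31]
print(grubbs_outlier(scores))   # flags the 0.31 observation
```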

But Dr. John Palmer wasn't the only one to eliminate Operator 010. Robert Jahn did that too, in the follow-on attempts to replicate that you haven't yet read. There are four datasets in this study: two from Jahn/Boone and one each from two independent researchers. Three of those data sets, when reckoned across all subjects instead of relative to the other variables, conform to the "peaky" one-result distribution, the expected no-PK output. The only dataset that doesn't conform to any expected distribution is the one that includes Operator 010. Hence subsequent testing easily confirms that removing Operator 010 was the proper judgment in that case.
You showed some knowledge of control systems by providing an example where the outliers should be discarded; I guess it took you a lot of time to analyze this particular case since you do not claim to be a control systems engineer and this example doesn't come from your practice. Well, in this case I have a surprise for you -- as a control systems student I designed a very simple control system for an HVAC system to keep the building temperature within certain limits. In this case the outliers were rejected. I can provide more examples of control systems where the outliers are discarded. As I said before, everything depends on the application.

Now about your understanding of a theory's structure -- you got certain things right, but they are immaterial to the current discussion because before the Princeton experiments were conducted there was no theory of telekinesis that the scientists used; they simply wanted to see if their measurements of telekinetic effects were statistically significant. I suppose you know how this is done -- they compare their results with the theoretical results that would have been observed if the distribution of measurements were to fit the Poisson distribution. They chose to test the aggregate results, as you call them. In order to do that they didn't reject the outliers, as is always done during clinical trials.

I would say that the rest of your post is instructive and entertaining but it contains a lot of data that is not pertinent to the Princeton research.
 
Wow!

It sure is amazing that you can be wrong about so many things concerning the very subject that you continually describe as being a real thing, which has led me to believe that you have got to be about the worst advocate for supernatural things that I have ever encountered.

But anyway, before you continue to lionize Geller, you should be made aware that several years ago Geller himself stopped claiming to have any sort of "real" psychic powers and now describes himself as an entertainer. And just in case you also missed this bit of news, there are entertainers who make millions of dollars per year without using any sort of supernatural abilities.


I kind of feel sorry for people who do not have a sense of humor -- my remark about the casinos was a joke!
 
You showed some knowledge of control systems by providing an example where the outliers should be discarded; I guess it took you a lot of time to analyze this particular case since you do not claim to be a control systems engineer and this example doesn't come from your practice.

No, it took me no time at all. Those are cases I remember off the top of my head. In your rush to paint me as inexpert, you have sidestepped the issue that they are cases from history that dispute your characterization of the field. You need an argument that's better than constantly accusing your critics of being less informed and less qualified than you. For heaven's sake, you rejected Palmer as biased and incompetent before you even knew who he was or what field he worked in. Your default position seems to be that everyone is less qualified than you in every field.

As I said before, everything depends on the application.

And I gave you two examples from history of applications that contradicted your naive characterization. You clearly can't deal with them, so you tried to personalize the argument instead.

...there was no theory of telekinesis that the scientists used...

Irrelevant. I gave you the reasons why the inter-subject data should be expected to look a certain way.

I suppose you know how this is done -- they compare their results with the theoretical results that would have been observed if the distribution of measurements were to fit the Poisson distribution

No, that would have been the Z-test. The t-test compares experimental results against an empirically determined baseline, not against a normal distribution. You didn't address anything I said. You are simply regurgitating the theory of empiricism that we all learned in seventh grade. I require you, as a self-proclaimed expert in statistics and experimental psychology, to address my post in a way that demonstrates the expertise you claim.
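
For anyone following along at home, here is a minimal sketch of that distinction, with invented numbers. The t-test runs against an empirically measured baseline sample; the Z-test runs against an assumed theoretical mean and standard deviation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Empirically measured baseline: calibration runs with no subject present.
baseline = rng.normal(100.0, 2.0, 200)
# Experimental runs with a subject attempting to influence the device.
active = rng.normal(100.1, 2.0, 200)

# t-test: compares the experimental sample against the *measured* baseline.
t_stat, p_t = stats.ttest_ind(active, baseline)

# Z-test: compares the sample mean against an *assumed theoretical*
# population mean and SD (here mu=100, sigma=2, by stipulation).
z = (active.mean() - 100.0) / (2.0 / np.sqrt(len(active)))
p_z = 2 * stats.norm.sf(abs(z))

print(f"t-test vs empirical baseline:  t={t_stat:.2f}, p={p_t:.3f}")
print(f"Z-test vs theoretical model:   z={z:.2f}, p={p_z:.3f}")
```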

I would say that the rest of your post is instructive and entertaining but it contains a lot of data that is not pertinent to the Princeton research.

No, this is your standard dodge when you don't know enough about something to address it and don't want to expose your ignorance. Jeffers, you claim, is "irrelevant" to the PEAR study. But it's more likely you don't understand what Jeffers has written and you don't want to embarrass yourself by trying. You don't understand anything I've written, so you declare it to be "irrelevant" and thereby absolve yourself of having to answer it.

Everything I wrote is in some way pertinent to your claim that Palmer erred in setting aside Operator 010 when recomputing PEAR's aggregate statistics. If you cannot or will not address it, then your claim fails.
 
Yes, that's true, but it misses the mark of the criticism. While the goal may have been to investigate aggregated behavior, that doesn't preclude inter-subject tests to ensure that the data are homogeneous and therefore that the aggregation has meaning. You seem to be trying to say that Dr. Palmer found an anomaly in something that PEAR wasn't trying to study, so it doesn't matter. That's not what happened. Dr. Palmer found an anomaly while testing the data for integrity and internal consistency. That kind of test is always appropriate, and very important when conclusions are to be drawn from broad aggregations.

Let's say we have a random sample of twenty sixth-graders (12 years old) who are being sent to basketball camp. We test their ability to score a basketball free throw by giving each of them 10 trials. Each player's score is the number of free throws they hit, from zero to 10. Certainly we can create descriptive statistics about the score -- the mean score and the standard deviation. I have no idea what that distribution would look like, but let's say the mean score is 3.1 hits out of ten. Make up a standard deviation; it's not important.

Since it's a random sample of kids, we would expect them to vary in their ability. Some 12-year-olds have more practice and skill than others. If, for each score, we look at the histogram of kids who got that score, it should also look something like a normal distribution. You'd have nerds like me who would score very few hits, and athletes like my 12-year-old nephew who would score a lot. Most kids, we figure, would score somewhere around that mean, 2-4 hits. Few if any would score higher than two standard deviations better than the mean.

That's our empirically determined baseline, although as I continue you will probably see there's a slight problem with method. Ignore it for the purposes of the example; I know it's there.

Now send all the kids to basketball camp for two weeks and draw another test sample. A reasonable test of the effectiveness of the basketball camp would be to see if the mean scores rose. Now let's say the post-camp mean score was 3.4. Eureka! The camp works! Except there's a hitch; the second sample included LeBron James, and the analysts don't know that. They don't know the identities of the subjects, only their scores.

A quick surf over to the NBA stats page says LeBron's free-throw percentage is around 75%, far better than anyone else in the group. When we look at the histogram -- which, for a properly distributed sample, should still look like a normal distribution -- we see that suspicious-looking spike out there in the 7-8 score range. It doesn't fit what we expect the inter-subject data to look like, whether the camp works or not. He's dragging the average artificially upward, so the aggregation is not as meaningful as it otherwise would be. If the mean score without him is only 3.15, we might conclude the camp is not effective.
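
A few lines of Python make the arithmetic plain. The scores are invented to roughly match the example's numbers; the lone 8 stands in for our ringer:

```python
import statistics

# Invented post-camp scores for the twenty kids; the lone 8 is our ringer.
post_camp = [3, 2, 4, 3, 3, 2, 5, 3, 4, 2,
             3, 4, 2, 3, 3, 4, 2, 3, 5, 8]

mean_all = statistics.mean(post_camp)                    # ~3.40, looks like a gain
mean_trimmed = statistics.mean(sorted(post_camp)[:-1])   # ~3.16, back near 3.1

print(f"mean with the ringer:    {mean_all:.2f}")
print(f"mean without the ringer: {mean_trimmed:.2f}")
```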



This is a mess. You didn't give us enough detail in your story to determine what exactly you think the mistake was that you made or why exactly you think Palmer made the same mistake. But let's first discuss what's obviously wrong about your comparison. Then we'll work through the rest as best we can and hope to cover all the bases.

The last statement is simply wrong. "Operator 010 is psychokinetic" is not the "inevitable" conclusion of conscientious scientists looking at outliers. Outliers are not presumed to vary according to the variable proffered in the hypothesis. In fact, if there is any presumption at all it's most often that the anomalous variance comes from a sporadically confounding variable, in my example the considerable outside training and expertise of a professional athlete. It sometimes becomes a subsidiary exercise to discover -- and later control for -- that variable. Dr. Palmer didn't go any further, nor likely would he have been able to.

In your anecdote you proposed to disregard a subject because of questions regarding what may have caused the very low score and its possible ability to confound the intent of the test to measure drug effectiveness. If I'm reading your reasoning correctly, you propose that Palmer wants to disregard Operator 010's score similarly because of what may possibly have caused it. You dance around the concept that it's because of assumed PK ability, but it's not clear that's what you mean. And in my example above, that would be like disregarding the anomalously high score because you suspected it was a professional basketball player.

This is simply inapt. Palmer gives no reason for disregarding Operator 010 beyond the inability of the data to fit reasonably within the expected inter-subject distribution. It would have been appropriate, for example, to disregard Operator 009 if his score had been two orders of magnitude below everyone else's. That could indicate, for example, some pervasive difficulty that subject had operating the machinery, and that effect would mask any intended measurement. We don't have to speculate why scores are anomalous in either direction, although it is often attractive to do so. It is sufficient to reject the score based on its incongruence in context, not on what it may conceivably represent. It would have also been appropriate to remove Operator 010 if the remaining distribution was rendered coherent and fit the pro-PK distribution. Yes, the means would have been slightly lower, but they may still have been significant compared to baseline. And that significance would have statistical validity because the inter-subject distribution would have been as expected.

It's your standard straw-man argument. You're ascribing to Palmer motives you may have once naively had, when there is no evidence for any such motive on Palmer's part and considerable evidence for an entirely different -- and completely necessary -- motive altogether. One that you seem blithely unaware of. Just because Dr. Palmer's actions superficially resemble ones that, in a different context, would be wrong doesn't make them wrong in this context. From the very start you accused him of trying to make the data fit his wishes, which you assumed wrongly to be anti-PK. In fact it's quite obvious he's trying to make the data fit any of the expected distributions so that the descriptive statistics and correlations have the intended meaning. This is common practice, and you know it is. And your protests that Dr. Palmer somehow doesn't know how to do that properly in this field, and that you somehow do, are comically self-serving.

By way of background, just so everyone is up to speed:

Drug trials typically follow the double-blind, placebo-based model. A sample is drawn, and various categorical variables are consulted for the sample that determine how well it represents the relevant population. That population is then divided (typically randomly) into two groups: one that will receive the drug and the other that will receive a placebo. The subjects don't know which one they're getting. The experimenters who interact with the patients, in turn, don't know which one they're administering. The placebo group serves as the baseline control against which the variable group is measured. In order for that to be valid, all those pre-trial categorical variables have to match up fairly evenly between the groups. They're usually demographic in nature, like age, sex, ethnicity, prior medical conditions, etc. But they're really proxies for effects known, suspected, or speculated to introduce confounding influences in the outcome.

Dr. Philip Zimbardo, of Stanford prison experiment infamy, includes a layman-accessible description in his book The Lucifer Effect of how he homogenized his sample between the Guards and Prisoners groups. If all those potential confounders are balanced in both groups, you can confidently attribute variance in outcome (in this case, the disposition or occurrence of cancer) to the only category you varied -- what pill the patient actually got. If your placebo group were mostly men and the drug seemed to have worked well on the mostly-women group who took the actual drug, you may not be able to separate the desired effect of the drug from the baseline fact that cancer rates are higher among men.
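
As a purely illustrative sketch -- hypothetical data, not any actual trial -- here is how one might check that random assignment left a categorical variable such as sex balanced across the two arms:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)

# Hypothetical subject pool of 200: sex coded 0/1, assignment randomized.
sex = rng.integers(0, 2, 200)
group = rng.permutation(np.repeat(["drug", "placebo"], 100))

# 2x2 table of group membership against sex.
table = np.array([[np.sum((group == g) & (sex == s)) for s in (0, 1)]
                  for g in ("drug", "placebo")])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # a large p means no evidence of imbalance
```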

In your anecdote, the observation you wished to remove was one in which no cancer was observed. From context, I glean that this was anomalous, such as the patient previously having had cancer, and that the expected effect of the drug was simply to put cancer into remission. The causal category (spontaneous disappearance) you proposed to eliminate from whichever side of the placebo line that patient was on still has to be represented on both sides, and -- within both groups -- along its entire categorical spectrum in order for the correlations to the placebo/drug category to remain valid. This has nothing whatsoever to do with disregarding data that is self-evidently out of place, irrespective of known or speculated cause.

Let's talk more about categorical variables. Imposed controls are often categorical variables, in which case strong correlations to them suggest the presence of the confounding condition that motivated the control. The example I gave above relevant to your anecdote was in controlling the sample for sex, because sex is known to affect cancer rates. For a PK example, if a subject demonstrates the ability to move a paper cup around on the table "using only his mind," and that ability completely disappears when the cup is covered under a bell jar, then any of several confounding phenomena is indicated. The control is applied to prevent the subject from physically manipulating the cup in a way the experimenters wouldn't otherwise detect. In past trials this has been accomplished by invisible loops of monofilament held between the subject's hands, or tricks as prosaic as blowing on the cup. It isn't necessary for the experimenters to think of every possible way of surreptitiously moving the cup by ordinary physical means. Isolating it physically from the subject eliminates most if not all such methods.

If we have some number of such subjects and some of them can move the cup to varying degrees and others can't, and some can move it with the bell jar in place, and others can't, then we have the basis to perform tests for significant relationships among categorical variables, where in this case the category might be "jar" vs. "no jar." This is how categories would more properly be considered in an experiment, and we would switch to something more akin to the chi-square test for independence.
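
Continuing the bell-jar hypothetical with invented counts, chosen only to make the pattern obvious, the test would look something like this:

```python
from scipy.stats import chi2_contingency

# Invented counts: rows = jar absent / jar present,
# columns = cup moved / cup did not move.
observed = [[18,  2],   # no jar: the cup "moves" for most subjects
            [ 1, 19]]   # jar in place: the effect all but vanishes

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.1f}, p={p:.2g}")
# A tiny p says "moved" is not independent of the jar -- the signature of
# an ordinary physical mechanism rather than psychokinesis.
```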

In the PEAR study the volitional variable was imposed as a control to preclude anything that would have the effect of knowing before the trial how the outcomes would appear. It doesn't matter how such knowledge could be acquired or such preparation could be accomplished. It doesn't matter that you, I, Palmer, or anyone else fails to imagine how it could be done. That's not how empirical control typically works. What matters is that the data were collected in a way that recorded whether the subject was able, at the time of testing, to select the method of trial that would occur. That's a category in the study.

In a double-blind placebo study, we would want the measured cancer rate at the end of the study to be significantly independent of all variables except the placebo-vs-drug variable. Toward that end we compare measured cancer rates to those variables regardless of whether they got the drug or the placebo. If the rate correlates in your sample more strongly to, say, whether a patient exercises regularly than to whether he got the drug, you can't say the placebo-vs-drug category is sufficiently independent to be significant.

If the PK effect hypothesized by PEAR is real, the measurement of it in terms of variance from the baseline was expected to be independent of the volition category. It wasn't. It was quite strongly dependent on it. Now you can read all sorts of nefarious intent into the notion that all the variance in the one out of three studies that showed any variance at all depended on whether one subject knew in advance how the day's experiment was going to be done. But what's more important is that PEAR had no answer for this. They didn't propose any sort of PK-compatible explanation (e.g., "For PK to work, the subject had to be in a certain mindset that was defeated by the volition variable.") They did no further testing to isolate and characterize this clearly predictive variable that wasn't supposed to be predictive.

You can't just leave a failed test for independence alone and claim victory nonetheless. Palmer didn't explicitly perform the independence test, but he didn't have to. The errant correlation is trivially apparent. This lengthy exposition is meant to reach this one point: You accuse Palmer of eliminating or disregarding Operator 010, and you claim -- based on irrelevant and wrong comparisons to control-system design -- that this is a no-no. A better way of looking at Palmer's review is that he considered the problem categorically. The categories of volition-vs-not and significance-vs-nonsignificance are not independent in the way they would need to be for Jahn's hypothesis to have been properly supported by his first study. It's not that Operator 010 wasn't taken into account. Palmer took Operator 010 into account according to the way the categories in which she fell could be reasoned about intelligently and correctly.
I do not know why you wrote all this stuff about clinical trials -- it is correct but trivial and everyone knows it. Perhaps you are trying to show me that you are an experienced statistician. Well, you do not have to try real hard to achieve your goal, just tell me you are, and I will accept your word for it.

Now, about my clinical trial mistake -- your discussion about categorical variables is irrelevant to it, just tell me that you know what they are, and I won't try to prove otherwise.

However, I may not have been completely clear about the person whose cancer was cured. After a clinical trial is completed, all members of the research team have full access to the research data, so I knew everything about this patient that I needed to know. He was taking real medication, not a placebo.

Now about the cure -- if I recall correctly, a person is declared cancer-free if 6 months after a treatment he shows no signs of cancer in his system. If there are no signs, but the time span is less than 6 months, the medical specialists conclude that the cancer is in remission.

As you may know, it is common during a clinical study that a team is not able to collect all data about the subjects because it loses contact with some of them. This patient was the only cancer-free subject that the team was able to find 6 months after the end of the study.
 
I do not know why you wrote all this stuff about clinical trials...

Because you brought one up as a comparison to PEAR and Palmer.

Well, you do not have to try real hard to achieve your goal, just tell me you are, and I will accept your word for it.

No, that's not how it works. Anyone can claim to be anything as long as they don't have to demonstrate it. That's what you do. Here on this forum and elsewhere you've claimed all kinds of expertise you can't ultimately back up. Instead, I demonstrate correct understanding. That way people can draw their own conclusions about whether I know what I'm talking about. They don't have to take my word for it.

Now, about my clinical trial mistake -- your discussion about categorical variables is irrelevant to it...

Nonsense. You brought up a clinical trial as a direct comparison to an error you are claiming Dr. Palmer made. The discussion of categorical variables is necessary to show how subject pools are homogenized in such trials and how your error would have violated that homogenization. Then I showed how, under the same model, Palmer's actions actually had the opposite effect as your mistake, and served to achieve a homogenization that would be necessary for the aggregate statistics in PEAR's findings to have the meaning they intended. If you're claiming that tests for variable independence such as the chi-square are irrelevant to experimental psychology, that's just about as ignorant a statement as can be.

I don't care about the details of how to test drugs for cancer. You have a history of rambling on about irrelevant subjects instead of sticking to the point. You also have a history of insinuating that you have expertise, but declining to demonstrate it. Since you can't understand my argument or rebut it, you're desperately trying to gaslight the audience into thinking it is irrelevant.
 
You haven’t really been reading JayUtah’s posts, have you? Not even the one you were replying to, and quoted, there.

It's become clear to me he is not. He is responding not to what is written, but to the criticism he WISHES had been written instead.
 
He's not even doing that anymore. He's just dismissing everything he can't understand as "irrelevant."

That's probably my fault. I've called out a few of his non sequiturs. He's probably decided that since doing so is so devastating to HIS arguments, accusing others of being irrelevant will be equally devastating to THEIR arguments.
 
That's probably my fault. I've called out a few of his non sequiturs. He's probably decided that since doing so is so devastating to HIS arguments, accusing others of being irrelevant will be equally devastating to THEIR arguments.

Perhaps, but not likely. He has stated several times that he is impervious to criticism. It follows that he has previously developed a toolbox of techniques for evading criticism that likely include simple declarations of inapplicability. This would have been in place long before he encountered us.
 
Perhaps, but not likely. He has stated several times that he is impervious to criticism.


It is becoming increasingly apparent that this is because he either fails to understand it, or ignores it. Or is simply unable to entertain the possibility that he might be wrong.
 
I kind of feel sorry for people who do not have a sense of humor -- my remark about the casinos was a joke!

If that is actually the case,

Then should one treat your determination of Uri Geller as a real psychic to also be some sort of joke?
 
If that is actually the case,

Then should one treat your determination of Uri Geller as a real psychic to also be some sort of joke?

I think it's likely Buddha's entire posting history is an Andy Kaufman-style act.
 
The purpose of the Princeton study, as I understand it, was to show that so-called everyday people have occasional sparks of telekinesis, but they cannot sustain their telekinetic abilities for long; this is the reason why they chose statistical methods to analyze their results.

But today you reversed that and claimed there was no "theory of telekinesis" that was being tested, therefore no way Dr. Palmer could have known what the inter-subject data would look like, and therefore no basis for him to determine that Operator 010's performance was anomalous.

Vague handwaving references to "statistical methods" don't address the problem that even under the theory you state there still should have been a normal distribution, and that anomalous data would still stand out against it. Nor, when I thoroughly provide the background that illustrates just how such integrity tests and homogenization procedures would be accomplished using inter-subject data, do you simply get to handwave it all away as "irrelevant." You made it relevant in posts such as these. Since you are unwilling to discuss the "statistical methods" beyond broad-strokes handwaving and content-free appeals to your own non-existent authority, we can dismiss your attempt to undermine PEAR's critics as shallow and uninformed.

But it's also worth pointing out that you're changing your story in order to enable reasons for you to sidestep challenges you can't meet. This is not honest debate.
 
Then should one treat your determination of Uri Geller as a real psychic to also be some sort of joke?

I don't see where Buddha said Uri Geller was a real psychic. This is what Buddha wrote:

Then there is Uri Geller, who claims that he has tested telekinetic abilities, but he is a conman who made millions demonstrating his "gift"

That says to me that Buddha agrees Geller was a charlatan.
 
Wow!

It sure is amazing that you can be wrong about so many things concerning the very subject that you continually describe as being a real thing, which has led me to believe that you have got to be about the worst advocate for supernatural things that I have ever encountered.

But anyway, before you continue to lionize Geller, you should be made aware that several years ago Geller himself stopped claiming to have any sort of "real" psychic powers and now describes himself as an entertainer. And just in case you also missed this bit of news, there are entertainers who make millions of dollars per year without using any sort of supernatural abilities.

Sorry, 'Buddha'!

I misread your posting and, as such, made a stupid mistake about it.

My apologies to you.
 
