Moderated Is the Telekinesis Real?

I suggest everybody not embrace the example set in this thread from its very inception. Though I don't know the English terminology (and the inevitable BE/AE differences), strain is either the macroscopic deformation that comes from stress, or its infinitesimal expression within the same state, or both.

Though I know Wikiwad is not an authoritative source (as it's mostly used by cats to feign fictitious knowledge on the fly, in venues like web fora), I managed to find these starting from concepts well known to me in Spanish (and in one case, just by jumping from the Spanish Wikiwad version):

https://en.wikipedia.org/wiki/Stress–strain_curve
https://en.wikipedia.org/wiki/Infinitesimal_strain_theory
https://en.wikipedia.org/wiki/Strain_rate_tensor

Within this subject, strain seems to match Spanish esfuerzo or deformación alternatively, depending on the context. And certainly the idiotic Hasted was able to observe twisted metal and theorize that some strain was present (as the tricksters handling those pieces behind his back intended, to fool him), which is what he and Palmer reported or commented on.

Forum user "Buddha" usually transcribes text well. That, I concede.
 
I will abide by their vote if you provide the information that I requested regarding the voting procedure. The absence of this information shows beyond a shadow of a doubt that I am right – by and large, the audience rejects your posts.

Your apparent confusion about the workings of a very simple, self-explanatory, and public poll certainly does not make you correct, although it is an instructive demonstration of your process, if we didn't have enough examples already. Your failure to understand a contrary position does not make the one you hold correct.

There's nothing to 'abide by' in the poll anyway; it is simply informative, but it does puncture your pretension to the support of a silent majority.
 
“Hasted's conclusions regarding the "surface of action" are also
problematic, even if one accepts the loose definition of "synchronous." This
concept is based on observations of data from North and Williams that
synchronous signals are more prevalent when the specimens are in a
radial-vertical configuration with respect to the subject than in some other
configuration. However, no statistical analyses were offered to support the
significance of this trend. In the case of Williams' data (Hasted, 1977), 12 of 15 (80%) signals in the vertical or radial-horizontal-vertical
configurations were synchronous as compared to 22 of 39 (56%) with the other
configurations. This difference is associated with a corrected chi-square
value of 1.67, which with one degree of freedom is clearly nonsignificant” (Palmer, p. 193).

It seems to me Palmer contradicts himself – at first he says the synchronous signals are more prevalent when the “specimens are in a radial-vertical configuration with respect to the subject”, then he says that the difference is statistically insignificant. As I noted before, in experiments like this one the position of a specimen is of no importance, so the difference is statistically insignificant, as expected. But even if the difference were statistically significant, this wouldn't mean much either, because it could be attributed to the piezoelectric effect, which depends on the direction of the stress applied to the specimen. But it appears that Palmer didn't consider this simple explanation of the signal difference.
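As a side check, Palmer's corrected chi-square can be reproduced from the quoted counts (taking 80% of 15 as 12 synchronous signals) using the Yates continuity correction for a 2x2 table. This is a quick sketch, not Palmer's own computation:

```python
# Yates-corrected chi-square for the 2x2 table implied by Palmer's quote.
# Row 1: vertical / radial-horizontal-vertical configurations (12 of 15 sync).
# Row 2: other configurations (22 of 39 sync).
observed = [[12, 3],
            [22, 17]]

row_totals = [sum(row) for row in observed]        # [15, 39]
col_totals = [sum(col) for col in zip(*observed)]  # [34, 20]
n = sum(row_totals)                                # 54

chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (abs(observed[i][j] - expected) - 0.5) ** 2 / expected

print(round(chi2, 2))  # 1.67, matching Palmer's reported value
```

With one degree of freedom, 1.67 falls well short of the 3.84 threshold for significance at the 0.05 level, consistent with Palmer's "clearly nonsignificant."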

“Although the surface of action is presented as a basic physical
characteristic of the phenomenon, it could just as easily be a reflection of
a possible psychological preference of North and Williams; there is
certainly no basis for drawing conclusions about the generality of the
surface of action” (Palmer, p. 193).

Psychological preference of what? Palmer didn’t elaborate because his position is completely illogical – the metal bar is either under a stress caused by the subject or not, the subject’s “psychological preference” is completely irrelevant, it doesn’t affect the result.
 
Let's transform the rest of this discussion into a quiz. (I will ask the indulgence of the other contributors to give Buddha a fair chance to research and respond to these before jumping in themselves.)

  1. Is the Jahn REG experiment an example of in-group experiments or between-group experiments?
  2. Randomization is a technique that applies to which of the two experiment classes in the previous question?
  3. What does Dr. R-S say is one possible consequence of applying parametric analysis techniques to data sets with extreme outliers?
  4. What alternatives to randomization does Dr. R-S describe?
  5. On what type of data in Jahn's REG experiment would the chi-square statistic be appropriate?

These are not mere gotcha! questions. These are issues brought up in the video that relate to specific errors you have committed in this thread while attempting to discredit Dr. John Palmer and Dr. Stanley Jeffers.

Buddha is excusing his silence on his failed PEAR claims by saying his opponents haven't produced this or that specific bit of information. He says they haven't described how the poll was taken that disfavors him. Not true, but largely irrelevant since at best that supports only an appeal to the gallery. The gallery has spoken, but that's not why we're here. He says they haven't substantiated from the literature the validity of Dr. John Palmer's exercise of omitting the data from Operator 010. Also not true, but there are only so many references I can make to Philip Zimbardo's The Lucifer Effect before the reasonable conclusion is that Buddha will never acknowledge or read it. Zimbardo goes on for several chapters recollecting the infamous Stanford prison experiment, and speaks at length about his outlying guard subject. But according to Buddha, none of his opponents has met his challenge over disregarding Operator 010.

The quiz I quoted above was to determine how much Buddha consulted his own sources when dealing with topics like outlying data. He's had time to answer the quiz, but chose to try to wrench the discussion away from PEAR and toward his change-of-subject distractions. So what does his source say?

  1. Is the Jahn REG experiment an example of in-group experiments or between-group experiments?
    It is an example of an in-group experiment design. As Dr. R-S explains, a between-group design is when the subject pool is divided into a control group and a variable group. A dependent variable is expected to vary between the groups according to the experimenter's manipulation of an independent variable. This is the common approach in drug trials, where the independent variable is whether one receives the drug or a placebo. In contrast, the in-group design treats all the subjects as one group and measures variance across some other variable. This is the design for exploratory studies that simply seek to measure some property of a group. It also works for longitudinal studies, where the independent variable is time. In Jahn's studies -- as well as Jeffers' -- the subjects were all told to attempt to vary the outcome of a random event. The independent variables were all concerned with whether they would be asked to exert an influence on some given run and toward what variance. The dependent variable was whether the mean number of successes of the group as a whole varied accordingly from no-effort to effort runs.

    The Stanford prison experiment, in contrast, is a between-group experiment. It collected a pool of subjects roughly distributed similarly to the population by the measures that Dr. Zimbardo felt would affect the emergence of the properties he hoped to measure. Then they were randomly divided into guard or prisoner groups, and the experiment was run. His subsequent analysis compared the psychometric data collected after the experiment.

  2. Randomization is a technique that applies to which of the two experiment classes in the previous question?
    Randomization applies only to between-group experiments. Buddha's source confirms what I explained earlier. In a between-group experiment, the least biased method of dividing the subjects into the control and variable groups is by random assignment, such as Zimbardo achieved via the coin-toss. The homogeneity of each group can be measured, and this drives the degrees of freedom in the subsequent parametric comparison of groups, such as via the t-test.

    Since Jahn's subjects were not divided into control and variable groups, there is no randomness constraint to violate if a subject is disregarded. It was an in-group experiment. Jeffers' single-slit experiment used a similar design. The subjects were not divided into control and variable groups. It was also an in-group experiment. Rather, the control-versus-variable distinction was whether effort was directed to be applied in some given run, as it was in Jahn's experiment. In Jeffers' double-slit experiment there were two groups of subjects, but it was not a between-groups experiment. However, a measure of between-group analysis was possible, with the independent variable being the means-directed versus outcome-directed categorical variable. Since no significant variance was observed in either group, it would have been moot to try to determine whether any variance could have been attributed to that variable.

  3. What does Dr. R-S say is one possible consequence of applying parametric analysis techniques to data sets with extreme outliers?
    She says it will bias any determination of variance. This is exactly what Dr. Palmer observed in the Jahn experiment, so Buddha's source expressly confirms Dr. Palmer's analysis. Dr. R-S says that parametric analysis (the kind used by both Jahn and Jeffers) engenders certain assumptions that are violated by outlying data. One of those assumptions is that the data are normally distributed. Extreme outliers violate the normal-distribution assumption, and Operator 010 -- who singly accounted for all the significance in the entire experiment, and whose performance exceeded that of all other subjects combined -- is clearly an extreme outlier by Dr. R-S's definition.

    Buddha does not know what data in the Jahn experiment were actually looked at. As stated previously at length, he wrongly thinks the Poisson-governed behavior among small numbers of electrons is what Jahn applied the t-test to. Instead it was the mean number of 1-bits observed over a 200-sample run of his REG, aggregated over several runs taken over many months by a variety of subjects. When these data are observed on a per-subject basis instead of aggregated all together, they must be normally distributed for any of Jahn's tests to be of value. That is, the number of subjects that achieve a certain number of mean 1-bit observations in the no-effort case should be normally distributed around μ=100. The number of subjects that achieve certain PK+ or PK- scores should be normally distributed around a score that is either higher or lower than 100, such as μ=101 for the PK+ case and μ=99 for the PK- case. Instead, Palmer noted that the disaggregated data reported by Jahn showed a normal distribution around μ=100 in the effort case, with Operator 010 a clear outlier in both the PK+ and PK- cases.

    According to Buddha's source, this violates the expectation of normal distribution. Parametric analysis must remove the outlier before it can achieve any explanatory power.

    Buddha tried to get around this by saying the expectation of a normal distribution is a straw man, that no such expectation arose. This is, in fact, the standard rehabilitation of PEAR among proponents of psychokinesis. However, it was Jahn's choice to apply the t-test. That carries with it an expectation of normal distribution. If he did not expect the subjects' performance in the effort case to be normally distributed, he would not have selected the t-test. Again, because Buddha does not know what data were actually used in the t-test, he does not see the problem with his speculative dismissal of Palmer.
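The distortion an extreme outlier introduces into aggregated data can be shown with a toy pooled z-score. Every number below is invented for illustration (the operator labels, run counts, and means are assumptions, not PEAR's actual tallies); the point is only that one prodigious contributor can manufacture aggregate significance that vanishes when that contributor is set aside:

```python
import math

# Hypothetical per-operator summaries: (number of runs, mean 1-bit count per
# 200-sample run). Nine null performers plus one extreme outlier.
operators = {f"op{i:03d}": (5000, 100.00) for i in range(1, 10)}
operators["op010"] = (5000, 100.90)   # the lone extreme performer (invented)

sigma = math.sqrt(200 * 0.25)         # per-run SD for 200 fair bits, ~7.07

def pooled_z(ops):
    """z-score of the run-weighted pooled mean against the chance mean 100."""
    n_total = sum(n for n, _ in ops.values())
    pooled_mean = sum(n * m for n, m in ops.values()) / n_total
    return (pooled_mean - 100.0) / (sigma / math.sqrt(n_total))

z_all = pooled_z(operators)
z_without = pooled_z({k: v for k, v in operators.items() if k != "op010"})
print(round(z_all, 2), round(z_without, 2))  # 2.85 with outlier, 0.0 without
```

A z near 2.85 looks significant at the 0.05 level, yet the other nine operators contribute exactly nothing to it. That is the shape of the Operator 010 objection.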

    Also included in Dr. R-S's explanation of the assumptions that must hold before parametric analysis is appropriate is the basis of Jeffers' baseline-bind criticism, which Buddha has assiduously let pass in silence. The anomalies in the baseline (i.e., no-effort) runs violate the distribution assumptions just as certainly as outliers do, except for exactly the opposite reason. The no-effort results are not as varied as they should be, and Jahn himself notes this. Jeffers, in Skeptical Inquirer, merely notes the proper interpretation that should attach to such a confession.

    Since Buddha was unable to answer this question, I submit the hypothesis that he has doggedly avoided Jeffers' explanation of the baseline consequences because he cannot discuss it knowledgeably and is consequently not an expert in statistics. This undermines his opinion of the impartiality and competence of Palmer's and Jeffers' individual criticism, which was offered as that of an expert. Notably, his own source provides the answer he lacked.


  4. What alternatives to randomization does Dr. R-S describe?
    Explicit homogenization, if the variables that may confound are well-known and well-behaved. This speaks partially to Buddha's attempted counterexample, wherein he was told that he could not remove a miraculously non-cancerous subject from the group to determine what extracurricular factor had saved him. It would have violated randomization, the objection went, and therefore it couldn't be allowed in Palmer's treatment of Jahn. Randomization doesn't apply to Jahn's in-group experiment, but what Buddha proposed was to remove a subject based on a presumption about an independent variable. He noted the outlying case as an uncommon variance in the dependent variable, but proposed that it be eliminated because some independent variable was the cause. Preferentially removing a subject according to presumptions about independent variables expressly introduces bias, but that was not the grounds by which Palmer proposed to disregard Operator 010. Nor is it the grounds on which subjects are routinely excused from ongoing research, as AleCcowaN's reference established. Such invariant reasons include non-adherence to protocol, voluntary withdrawal (a legal requirement of human-subjects research conducted in the United States), and the death of the subject. These do not violate randomness because they are factors assumed to affect both groups equally randomly, and not tied to any controllable independent variable. Zimbardo's proposal to exclude his rogue guard was on the grounds of violating the experiment protocol which, we see in the literature, is a perfectly legitimate reason having nothing to do with randomization.

    Dr. R-S notes that it is uncommon to attempt such homogenization, but it is possible and, where successful, suitably rigorous.

  5. On what type of data in Jahn's REG experiment would the chi-square statistic be appropriate?
    Several, for example the direction of effort (i.e., PK+ versus PK-) as a categorical variable, and the problematic volitional variable -- whether the subject got to choose whether to exert an effort, and in what direction. The chi-square test is non-parametric, meaning it does not require its data to be normally distributed. The PK+ and PK- data can be considered as two kinds of data. Certainly there is the aggregation of means, which is the parametric aspect of it. But there is also a categorical variable in simply whether an effort was made -- "PK-some-direction versus no-effort." And another, as stated, in the direction of effort -- "PK+ versus PK-."

    Buddha had great difficulty with the concept of categorical variables, which is puzzling coming from someone who claims expertise in statistics and who decries Palmer's correct handling of them as incompetent. Buddha wrongly thought that the encoding for a categorical variable had to be treated parametrically in some way as continuous or ordinal data, and that it had to be distributed in a certain way in order for Palmer's analysis to hold. In other words, Buddha was trying to shoehorn non-parametric analysis into the rules for parametric analysis. This is simply not something someone would do who has studied inferential statistics at even the most elementary level. It's a rookie mistake.

    Dr. R-S mentions Buddha's error as a special case of the ordinal variable and gives an example of one from her field (education) whose ordinal encoding is contrived such that it can be treated as a continuous variable. She is careful to mention that this is not licit except in these rare and contrived cases.

    Non-parametric analysis simply means the data are not expected to fall into the distributions suggested by our parameterized mathematical constructs for distribution -- Gaussian, Poisson, binomial, etc. They may exist on some sort of continuum (e.g., for ordinal values). Or they may simply be categories that imply no magnitude or order, such as sex or whether one was supposed to push the REG to a higher or lower number of 1-bits.

    The chi-square test determines how independent two variables are, whether they vary together or vary separately. Variance in a categorical variable is simply whether one data point is in a different category than another. The in-group analysis noted that volition was not independent from the effect. That is, variance in 1-bit means under effort was observed only when the subjects got to choose how they attempted to affect the machine. Volition is a categorical variable; it is not distributed across an ordered sequence of potential outcomes or a quantitative support. It is distributed merely into yes-or-no bins. Trying to shoehorn it into parametric analysis would be a cargo-cult mistake.
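A hedged sketch of the kind of non-parametric test being described, on an invented 2x2 table (the counts are made up for illustration, not Jahn's data): a Pearson chi-square of independence between a categorical variable (volitional versus assigned direction) and a binary outcome (hit versus miss). Note this version omits the Yates correction.

```python
def chi2_independence(observed):
    """Pearson chi-square statistic of independence for a 2-D count table."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    n = sum(rows)
    expected = [[rows[i] * cols[j] / n for j in range(len(cols))]
                for i in range(len(rows))]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(len(rows)) for j in range(len(cols)))

table = [[130, 70],    # volitional runs: hit, miss (invented counts)
         [100, 100]]   # assigned runs:   hit, miss (invented counts)

chi2_val = chi2_independence(table)
print(round(chi2_val, 2))  # 9.21 on 1 degree of freedom for a 2x2 table
```

No normality assumption appears anywhere in the computation: the categories go into bins, the bins get counted, and the statistic compares observed counts with the counts expected under independence. That is what makes it appropriate for a yes-or-no variable like volition.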

It looks like Buddha's own sources do a pretty good job of refuting his claims. It's not like we have to cite many, if any, sources of our own.
 
After a quick review I'm going to say discussing the Poll thread in community is off-topic for this thread, it has nothing to do with the actual science topics this thread is meant to be discussing. So drop the discussion about that thread.
Replying to this modbox in thread will be off topic. Posted By: Darat
 
Is there any evidence supporting the existence of telekinetic powers of any kind?

So far this thread has discussed, at great length, the PEAR study, which was so bad that even its original author collaborated on a subsequent study that eviscerated it. The "defenses" offered of PEAR in these pages have been a profound and complete joke.

There was a metal-bending test, but it was so poorly designed that it would have been trivial to doctor the results or cheat the actual tests.

Is there ANYTHING that stands up to scrutiny?
 
You always have to take it back to the original claim. People believed telekinesis was real because they saw people apparently moving or manipulating macro objects with their minds, moving balls on a table, making metal bend and so on. The type of experiment PEAR did was not looking at what was being claimed, it was looking for an effect they could shoehorn into the word "telekinesis" - there was no phenomenon that people believed in, or for which there was anecdotal evidence, that PEAR was set up to find.

It really was a matter of hoping to find something no matter what that meant something was happening that couldn't be explained by "regular" science.

Even if they had found something that couldn't be explained by current science there would have been no reason to believe it was even linked to the telekinesis that people believe/d existed.
 
You always have to take it back to the original claim. People believed telekinesis was real because they saw people apparently moving or manipulating macro objects with their minds, moving balls on a table, making metal bend and so on.


Don't forget to add levitation to that, as moving other bodies, including one's own, is part of this family of bull. And it's rooted way back in time, in those oriental beliefs that "pure/advanced souls" can float.

We "might" find that to be the motivation behind this thread (and profitable pseudo-scientific books can be written about it). Excerpts from Google:

Reiki, Yoga, Meditation and Yagyas:New Age Practices: Techniques for ...

https://books.google.com.ar/books?isbn=1413483879

Marc Edwards - 2005 - ‎Body, Mind & Spirit
Jesus, Buddha, Krishna, and other advanced souls were on extremely high ... however, levitation will happen with significantly large number of meditators in the ...


Attaining the Siddhis: A Guide to the 25 Yogic Superpowers

https://www.consciouslifestylemag.com/siddhis-attain-yoga-powers/
25 Superhuman Powers You Can Gain Through Practicing Yoga and Meditation ... “In Buddhism, these are not miracles in the sense of being supernatural events, any ... The more advanced siddhis are said to include invisibility, levitation,


Is levitation an illusion or is it real? If is it real, can it be ...

https://www.quora.com/Is-levitation-an-illusion-or-is-it-real-If-is-it-real-can-it-be-reall...
Oct 8, 2014 - Magnetic levitation is the most commonly seen and used form of levitation. ... In reality its can be achievable, in ancient Hindu and Buddhism scripture but for ... there are so many process for your soul purification and connect to GOD. ... Levitation, it is said is a side effect of advanced pranayama (yoga breathing exercises).
 
I have always wondered at certain claims of telekinesis like the one in PEAR, how they suppose that it should work, even if it works. How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator, let alone a pseudo-random number generator that cannot be influenced? Or the pattern of light in a double slit diffraction experiment?
 
How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator, let alone a pseudo-random number generator that cannot be influenced?

It can be argued that the brain itself cannot fathom those quantum phenomena; it cannot even fathom its own internal workings, yet it works!

What lies behind all this wishful thinking about telekinesis and other niceties is things like "the power of will", "the power of holiness" or "the power of perfection". The fallacious step here is thinking that, just as will is certainly generated by the black box of our brain, it also goes the other way around: that will alone would find a quick way to influence every kind of natural law that makes other black boxes do their things.

In the end it is not that far from thinking that a last-minute repentance will produce an eternity in the heaven of the blessed. The only problem is the elusive evidence.
 
I have always wondered at certain claims of telekinesis like the one in PEAR, how they suppose that it should work, even if it works. How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator, let alone a pseudo-random number generator that cannot be influenced? Or the pattern of light in a double slit diffraction experiment?

You're confusing the quest for actual evidence with the crafting of pseudo-evidence. The results from random number generators and double-slit experiments are easily manipulated by dodgy statistics and compromised baselines. This very thread is full of examples of this. To fake spoon bending however you need to do something as obvious as having the spoons bend while not being observed.

It's easier to hide deceit in obscure phenomena. The legitimacy of a spoon-bending test can be easily challenged by repeating the test with more than one impartial observer watching the spoons when they're supposed to bend. The PEAR study however can be "defended" by spinning tall tales about statistics that are good enough to fool the credulous, so long as they have a layman's understanding of statistics.
 
I have always wondered at certain claims of telekinesis like the one in PEAR, how they suppose that it should work, even if it works. How on Earth is the brain going to influence a random number generator when the brain is not able to fathom the quantum workings of a random number generator...

The first two pages of the Jeffers and Sloan paper (1992, the single-slit experiment) survey that question.

Keep in mind that quantum mechanics says that particles don't really exist in the traditional way, but exist simultaneously in a number of states (i.e., position, velocity, etc.) until they are observed. The observation per se causes them to appear in a fixed state, and the state in which they are observed is governed by a probability distributed among all possible states until the "collapse." Stanley Jeffers points out that a few early physicists entertained the hypothesis that observation was a conscious act, and speculated whether the state of consciousness in the observer had some effect on the way the wave function (i.e., the mathematical description of all those simultaneous states) collapsed.

This is a smidgen disingenuous, because none of those theories really caught on, were validated empirically, or persisted much past the 1950s. Jeffers postures the single-slit experiment as such an attempt at empirical proof -- which, of course, failed. Less adventurous physics maintains that quantum observation requires no special property in the observer and exerts no variable effect on the wave function. And Jeffers' results are consistent with this.

To answer what I think is your real question, I gather that while the purported effect may be conscious, it may not be cognitive. That is, you don't need to know the intimate realities of fluid dynamics in order to breathe, to affect your breathing, or to cause your breath to have effects on the outside world. Instructions such as "Shift the diffraction pattern to the left," or "Make more ones than zeros come out of the machine," weren't intended to require the operator to know how the apparatus worked at any scope of examination. It's closer, I think, to flying by thinking happy thoughts. Your consciousness is presumed simply to preferentially collapse wave functions without a lot of detailed planning.

Was that the question you were wondering about?

Don't forget to add levitation to that, as moving other bodies, including one's own, is part of this family of bull. And it's rooted way back in time, in those oriental beliefs that "pure/advanced souls" can float...

Yes, let's not forget that this thread seems to be one in a loosely-related series attempting to provide scientifically addressable proof for tenets of Buddhism, or some similar belief system that incorporates elements of Buddhism. And in Buddhism macro-level psychokinesis is a thing. In many of the dharmic religions, degrees of enlightenment are associated with supernatural mind-over-matter ability. Of course anyone familiar with stage magic knows how the swami really levitates, and how the spoons really bend. But there is a movement in all religions, I think, that wants to argue that the supernatural claims have some secular justification or validity.

I agree with Darat:

The type of experiment PEAR did ... was looking for an effect they could shoehorn into the word "telekinesis" -...

It really was a matter of hoping to find something no matter what that meant something was happening that couldn't be explained by "regular" science.

It's all about getting a foot in the door. If you can show that a quantum-level PK effect exists, then skeptics are wrong in principle -- an important rhetorical victory -- and the rest is just a matter of scale or degree. It could then be said that ordinary people can manipulate matter by forcing wave-function collapses to be non-stochastic on the order of a few particles, but then more enlightened folk could do that on a grander scale because their consciousness just had that much more horsepower.

But of course those claimed macro-psychokinetic effects have never been demonstrated under rigorous empirical control, and those who purport macro-scale ability eschew the rigor and complain about it. This leads the critical thinker to conclude that the macro effects are more likely to be the obvious sorts of stage magic which the actors know would be revealed by the proposed controls, and which the observers have seen revealed to them by their magician friends. The world is right to be skeptical of claims to supernatural ability that work only when conditions are just right.
 
You're confusing the quest for actual evidence with the crafting of pseudo-evidence. The results from random number generators and double-slit experiments are easily manipulated by dodgy statistics and compromised baselines. This very thread is full of examples of this. To fake spoon bending however you need to do something as obvious as having the spoons bend while not being observed.

It's easier to hide deceit in obscure phenomena. The legitimacy of a spoon-bending test can be easily challenged by repeating the test with more than one impartial observer watching the spoons when they're supposed to bend. The PEAR study however can be "defended" by spinning tall tales about statistics that are good enough to fool the credulous, so long as they have a layman's understanding of statistics.

It's important to realize that dodgy statistics come in various flavors. Broadly speaking, you can misuse statistics by applying them where it doesn't belong. Or you can misuse statistics to hide illicit manipulation of or unfortunate accidents in the data. It's important because some kinds of science necessarily rely on statistical methods to arrive at their findings. You don't want to say that such-and-such a study was invalid because it developed its conclusion statistically rather than by direct observation. In the hypothetico-deductive method, statistical reasoning is a perfectly defensible form of deduction. It has a well-defined and well-behaved calculus.

Jabba's proof for immortality is a good example of applying statistics where they don't belong. As the statisticians he consulted told him, his primary error was that he didn't have any actual data. Statisticians like data. To correct that problem, he simply made up all his data. He pulled numbers out of his kiester and applied poorly-understood statistical inference to them to reach an assurance of success. It's akin to a magician standing on the front of the stage and, with a chalkboard, showing -- with reasonable assumptions -- a 98.36% probability that he really did saw the lady in half. Just show us the trick, ya two-bit Houdini!

Where PEAR is concerned, there's one level of criticism that decries looking for effects so tiny that they're fundamentally indistinguishable from noise. (It's psychokinesis, Jim, but not as we know it.) Let's grant that the effect they're looking for is hypothesized to be very small (however boring that is). Then they must use statistical methods to detect it. The next level of criticism is whether they did so correctly. That's the second category I mentioned -- the case where statistics is the right thing to use, but it's used incorrectly or deceptively.

The impropriety in PEAR's case starts with aggregation of poorly-distributed data. This is the extreme outlier problem. Operator 010 clearly does not fit the expected distribution and, in fact, accounts for all the variance noted between the effort and no-effort runs. Hiding a prodigious data point like that behind a skewed mean is dodgy statistics. The impropriety continues with the suspiciously correlated baselines. It doesn't really matter how they got that way. The problem is that the statistical comparison offered by the t-test shows actual significance only if the baselines are credibly distributed, which wasn't the case for PEAR.

Yes, that sort of thing is much easier to hide from the average observer than whether you actually sawed the lady in half, bent the spoon, or are hovering above the sidewalk. The concept of an outlier is intuitive, but the reader has to know one was there. When the data are presented only in aggregate form, the reader is likely to assume they are appropriately distributed. The concept of a baseline bind is instead far more esoteric. It requires adept knowledge of how parametric distributions behave at a deeply conceptual level. It is manifest only in dimensionless numbers that acquire significance only when read with years of experience.
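The baseline bind can still be made concrete with a toy comparison. For runs of 200 fair bits, chance predicts a run-to-run standard deviation of sqrt(200 × 0.25), about 7.07, in the 1-bit count. The counts below are invented purely to show what "not as varied as they should be" looks like:

```python
import math
import statistics

# Invented baseline counts (1-bits per 200-sample run). These cluster far
# more tightly around 100 than chance allows -- the hallmark of the bind.
baseline_counts = [100, 99, 101, 100, 100, 98, 102, 100, 101, 99,
                   100, 100, 99, 101, 100, 100, 102, 98, 100, 100]

theoretical_sd = math.sqrt(200 * 0.25)            # ~7.07 under pure chance
empirical_sd = statistics.stdev(baseline_counts)  # sample SD of the toy data
print(round(empirical_sd, 2), round(theoretical_sd, 2))
```

Baselines hugging 100 this tightly are themselves statistically anomalous, which is why a suspiciously well-behaved baseline undermines the effort-versus-baseline comparison rather than supporting it.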

But there is something about the baselines even Stanley Jeffers didn't write about because it happened after he finished his involvement with PEAR and psi research. The baselines in the last data sets added to the PEAR database were as expected. That is, the problem with the baselines magically corrected itself after the scholarly community expressed concern over it. That has two implications -- one fairly practical and the other a bit sinister. In practical terms, you can't aggregate baselines from the beginning of the project (the too-narrow ones) along with more properly distributed baselines from the end of the project. They're dissimilar data. In sinister terms, how about that timing?

Statistical reasoning is problematic even when it's not intentionally dodgy. PK effects barely poking their heads above the ether are unconvincing not only because they don't relate to what has commonly been peddled as psychokinesis, but also because the smaller the observed effect, the more exacting and uncompromising must be the experiment design in order to make the attribution of such an effect credible to a purported cause. The fact that only the first round of the PEAR protocol produced any significant result, and that nothing thereafter -- even those that followed the same protocol -- found anything at all is best explained as a confound particular to that time and place.
 
(Snip)

It's all about getting a foot in the door. If you can show that a quantum-level PK effect exists, then skeptics are wrong in principle -- an important rhetorical victory -- and the rest is just a matter of scale or degree. (Snip)

In other words, the proponents are saying, "If I have my foot in the door, you have to agree I can float a foot off the floor." Not that they will demonstrate that, but skeptics would have to agree that the supposed minuscule effect proves the macroscopic one. That's fallacious too, of course.
 
