
How would you test for a PEAR level effect?

A billion bits in the bitstream under each condition you want to study; a billion bits for the control stream, a billion under "think 'one' thoughts," and a billion under "think 'zero' thoughts" if you want to test that as well.
...snip...
If, as you suggested, you can generate random numbers at 1000/second, it will take about a million seconds, or approximately thirty years, to run the experiment.

Hmmm. So suppose you generated random bits at 1 million/second. Would that allow you to run the experiment in practical time?

And even if you could run the experiment, it would still probably not be accepted by either side. Limbo's already explained about the "known parapsychological effects" like "the sheep-goat effect and the parapsychological experimenter effect." These effects basically "mean" that any time an incompetent researcher produces a finding that evaporates in the lab of a competent one, the competent researcher must be doing something wrong; in other words, the PEAR hypothesis is unfalsifiable, because anyone who can't duplicate it must be subconsciously suppressing it.

I understand and accept that; trying to convince everybody is obviously futile. What one could show however is that "under the described conditions, at confidence p=.95, the effect if any would have to have been less than 1 in X bits". That doesn't logically "disprove" the effect under different conditions, even to rational observers (we'll leave the wildly irrational out), but as a thought experiment it would nevertheless have some meaning. The conditions could potentially be made rigorous while still more or less supporting the asserted prerequisites. For example, as many meditators as desired could hang out in the next room (smile).

Beyond this, there's the simple fact that such an ability, even if confirmed, would be of no practical significance whatsoever; even if the statistical significance were tremendous, the "clinical" significance, or usefulness in explaining real-world phenomena, would be negligible.

I agree that Home Depot is unlikely to begin selling mind control light switches any time soon, even if such a slight effect were confirmed.

I believe however that confirming such a phenomenon would have major impact, in the same way that high energy physics experiments which imply a new force or particle may require theories to be tweaked or discarded - though often with no practical utility. Sometimes the investigated effect in those cases is one in 10^8 or less. What practical utility is there to much of the more abstruse neutrino research, other than to give experimental feedback to our models and theories?

(Do remember that my personal guess is that an unbiased test would more likely be disconfirming. But most folks expected the speed of light to vary depending on the Earth's motion through the ether - it's the exceptions that cause breakthroughs.)

Zeph
 
Something here does not jell, but I don't have time to follow it up at the moment.

Jahn claims strong statistical significance for the meta analysis of experiments over 12 years.

We can doubt that this is valid on the basis that it is unlikely that they had such rigid, failsafe lab procedures over that time that there could be no bias in the samples committed to the database.

But I have never heard anyone question his math.

We don't have to. He questions it himself when he uses the word "meta-analysis." :D As you pointed out, we have significant reason to doubt the quality and reliability of the data; meta-analysis on a barrel of sewage will not be able to extract vintage wine.
 
I can see the problem.

A billion seconds is about thirty years.

A million seconds is only about 11 days.

In fact I believe Jahn works on about 4,000 samples per second but I would need to check.

In any case this effect could quite easily be tested in a fairly short time frame.
 
Hmmm. So suppose you generated random bits at 1 million/second. Would that allow you to run the experiment in practical time?

If the equipment could run that fast, and if an effect of that magnitude were still present, yes. At a million bits/second you could get a billion bits in about 1,000 seconds - under 20 minutes.
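For what it's worth, the arithmetic can be sketched in a few lines of Python (illustrative only, using the bit rates mentioned in this thread - 1,000/s, the ~4,000/s figure attributed to Jahn, and 1,000,000/s):

```python
# Rough run times for collecting 10**9 bits under a single condition,
# at the bit rates mentioned in this thread (illustrative arithmetic only).
BITS_PER_CONDITION = 10**9

for rate in (1_000, 4_000, 1_000_000):  # bits per second
    seconds = BITS_PER_CONDITION / rate
    print(f"{rate:>9,} bits/s -> {seconds:>12,.0f} s "
          f"(~{seconds / 86_400:.1f} days, ~{seconds / 60:,.0f} minutes)")

# 1,000 bits/s     -> ~1,000,000 s, about 11.6 days per condition
# 4,000 bits/s     -> ~250,000 s, about 2.9 days per condition
# 1,000,000 bits/s -> ~1,000 s, roughly 17 minutes per condition
```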

I understand and accept that; trying to convince everybody is obviously futile. What one could show however is that "under the described conditions, at confidence p=.95, the effect if any would have to have been less than 1 in X bits". That doesn't logically "disprove" the effect under different conditions, even to rational observers (we'll leave the wildly irrational out),

The problem is that no one but the wildly irrational would embark on this line of experimentation in the first place. Among the people who take the PEAR findings in any way seriously, belief in gibberish like the "sheep/goat" effect is near-universal. Among people who are willing to believe that time-travelling space ninjas from the 31st century cannot have an effect on an experiment run today (and that the statistical framework I outlined is therefore a legitimate test) the PEAR results are "obvious" crap and to attempt to replicate them is a waste of money, brains, and time.
 
Except of course for the issue mentioned earlier - that any negative result would be rejected as the "sheep/goat" effect.
 
For example, a thermal noise generator and a radioactive decay or photon based generator, and a pseudorandom generator, could all be operating and recorded.

You'd need to prove that there was no possible effect that could cause correlations among the RNGs. If, for example, cosmic rays generated by the solar wind affected radioactive decay (as they are known to do), then every decay generator on Earth would be affected at the same time.

Since I don't think such a proof is possible, I think multiple generators would invalidate the results.

That sounds overly strict.

We have two conditions. Absent any intention, does the photon based generator show any significant statistical correlation with the thermal noise generator? Test as long as you like. If it does, then don't bother using both at once. If it does not, then go ahead and test with both simultaneously.
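As a very rough sketch of what that baseline check could look like (assuming the raw output of each generator is logged as an array of bits; the block size, the use of Pearson correlation, and all names here are my own illustrative choices, not anything specified in the thread):

```python
import numpy as np
from scipy.stats import pearsonr

def baseline_correlation(bits_a, bits_b, block_size=10_000):
    """Correlate per-block counts of ones from two generators recorded over
    the same period with no subject present.

    bits_a, bits_b: 1-D arrays of 0/1 samples of equal length.
    Returns (Pearson r, two-sided p-value)."""
    n_blocks = min(len(bits_a), len(bits_b)) // block_size
    a = bits_a[:n_blocks * block_size].reshape(n_blocks, block_size).sum(axis=1)
    b = bits_b[:n_blocks * block_size].reshape(n_blocks, block_size).sum(axis=1)
    return pearsonr(a, b)

# Quick check with two simulated, genuinely independent generators:
rng = np.random.default_rng(0)
r, p = baseline_correlation(rng.integers(0, 2, 10**7),
                            rng.integers(0, 2, 10**7))
print(f"r = {r:.4f}, p = {p:.3f}")  # expect no significant correlation
```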

If the two generators both correlate with the subject's intentions during the "make more 1's/0's" periods, then yes they will likely also correlate with each other during those periods. But such a "turn on/turn off" correlation between generators based on mental state would only make the evidence stronger, not weaker.

The part which would need tweaking is in the other direction. If only one of the generators showed correlation to the subject's intentions, we would probably need to raise the threshold of significance (eg: to p=0.98?). Otherwise one could run 20 generators and claim significance if even one of them showed correlation at p=0.95, an obvious fallacy. However, if you used two generators and either one hit p=0.99 (even while the other was less than p=0.95, say), I assert that would still be significant.

I do believe this could be done right, designed in advance to satisfy rational people on both sides (within the inherent limits). The idea of running multiple simultaneous generators was about "doing more within given time limits", and potentially "investigating whether different types of generators made any difference"; there would have to be compensation in the statistical thresholds to keep the overall criteria for significance balanced such that running multiple generators made it neither harder nor easier to demonstrate the effect.
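One way to do that threshold compensation - a sketch only, stated in the thread's own convention of quoting confidence levels rather than p-values; the Šidák correction is my choice of method, the posts above don't name one:

```python
def per_generator_confidence(family_confidence=0.95, n_generators=2):
    """Confidence level each individual generator must reach so that the chance
    of *any* generator passing by luck alone stays at 1 - family_confidence
    (Sidak correction, assuming the generators are statistically independent)."""
    alpha_family = 1.0 - family_confidence
    alpha_each = 1.0 - (1.0 - alpha_family) ** (1.0 / n_generators)
    return 1.0 - alpha_each

print(per_generator_confidence(0.95, 2))   # ~0.9747 - close to the p=0.98 guessed above
print(per_generator_confidence(0.95, 20))  # ~0.9974 - why 20 generators at 0.95 each is a fallacy
```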
 
Except of course for the issue mentioned earlier - that any negative result would be rejected as the "sheep/goat" effect.


I see that even you are abusing that easy out for skeptics - the scoff and guffaw factor. I must say I'm disappointed. I had hoped for more intuitive insight from you.
It's not the case that "any negative result" would be rejected as the sheep-goat effect, because not every negative result is strong enough to be statistically significant. In order to be a manifestation of that effect, the results would have to be statistically significant and in the predicted (negative) direction, with the prediction based on the psychological variable of belief or lack thereof. The results would either support a sheep/goat hypothesis or they wouldn't.
 
I see that even you are abusing that easy out. I must say I'm disappointed. I had hoped for more intuitive insight from you.
It's not the case that "any negative result" would be rejected as the sheep-goat effect, because not every negative result is strong enough to be statistically significant. In order to be a manifestation of that effect, the results would have to be statistically significant and in the predicted (negative) direction, with the prediction based on the psychological variable of belief or lack thereof. The results would either support a sheep/goat hypothesis or they wouldn't.

You have no idea how statistics work, have you?

There's no such thing as a degree of acceptance of the null hypothesis.

A negative result on a psi test is a test that fails to reject the null hypothesis of no psi. Therefore, the only possible effect that the sheep/goat effect could have is to force the failure to reject.

Therefore, any failure to reject is explainable-away by time-traveling space ninjas or similar gibberish.
 
Except of course for the issue mentioned earlier - that any negative result would be rejected as the "sheep/goat" effect.

Again - the ONLY way to get any meaning from this is to be pretty precise about what the results do and do not mean. You accept from the beginning that there is literally no way to disprove the thesis that "some other way of testing might have yielded different results". ALL you can do is show that a given testing methodology shows no statistically meaningful evidence of any effect larger than 1 in N bits.

Supporters can fill the vicinity of the experiment with sheep (er, believers) if they wish (so long as they cannot directly modify the results by physical means). It may be possible to find experimenters without a strong agenda.

I do gather that people's experience on these forums tends to suggest that humankind is sharply polarized between true believers who need no further evidence to believe and accept no further evidence to cast doubt, and skeptics who need no further evidence to disbelieve and will not accept any further evidence to believe. Both sides are so convinced and smug in their world views as to consider any investigation a waste of time.

But believe it or not, there do exist people who are not so polarized. I consider myself one such. I find it an interesting hypothesis that there could be some phenomenon outside our current scientific knowledge, yet weak enough to be difficult to show. Based on my experience with the world so far, my prediction would be that no effect would emerge, but I'd be willing to be shown wrong and not unhappy about it. I'd find it valuable either to disprove such an effect (ie: limit its maximum amplitude under the tested conditions) or to demonstrate and quantify it better. It is people like that who would be the real audience for the experiment, not the true believers and true disbelievers who can ALWAYS find a rationale for discounting it.

True believers will cite the sheep-goat effect if the results do not suit them. True disbelievers will assume there MUST have been methodological errors (or fraud) if the results do not suit them. Let them.
 
One other thing here - what inspired me to consider this was Carl Sagan's "Demon Haunted World". On page 302 he cites three phenomena which "could be true", one of which was essentially PEAR type research. Like myself, he thought it more likely than not to be found untrue in the end, but they were in his opinion nevertheless worth serious study. I share that opinion.

PEAR gathered evidence, but critics do not accept their research. So my question has been "how could it be done right, so that the results WOULD convince rational people in the middle?". People like Carl Sagan himself, say. Either to cross that off the list of possible, or to consider it more substantially demonstrated.

So no, I do not think that only the irrational could be interested in "doing the investigation right".

Zeph
 
Supporters can fill the vicinity of the experiment with sheep (er, believers) if they wish (so long as they cannot directly modify the results by physical means).


Yeah, I would like to surround the vicinity with groups of experienced meditators, and rotate them in-and-out as test subjects.

It may be possible to find experimenters without a strong agenda.


Just use parapsychologists. Debunkers will taint the whole affair, and neutral scientists from other branches of science won't have any familiarity with psychical research. In all likelihood, a neutral party with no strong agenda will only know as much about psi as Hollywood and comic books teach him or her. I hope I don't need to tell you the kind of distortion Hollywood and comic books can create. It would also taint the whole affair. There would be need for a variety of observers though.

Both sides are so convinced and smug in their world views as to consider any investigation as waste of time.


Even though I am already sold on the reality of psi I still learn things from proof-oriented research. It's just that I would prefer to see process-oriented research.

True believers will cite the sheep-goat effect if the results do not suit them. True disbelievers will assume there MUST have been methodological errors (or fraud) if the results do not suit them. Let them.


I would only cite the sheep-goat effect if the results are statistically significant and are in accord with a pre-stated direction that correlates with disbelief in psi. That doesn't sound unreasonable to me.
 
There's no such thing as a degree of acceptance of the null hypothesis.

Just by the way, while "rejecting the null hypothesis" is a valuable & common methodology, it is NOT the sole tool within the scientific method as used in even the hardest of the hard sciences, nor a "gold standard" for every kind of investigation.

Many high energy physics experiments seek to either (1) find a hypothesized particle or effect, or (2) constrain the maximum/minimum size/energy/mass/charge or whatever such a particle or effect could have while still escaping detection. While they cannot necessarily disprove its existence per se, they can successively statistically constrain its possible characteristics to reduce the likelihood of it ever being found and thus the credibility of any theory predicting or relying on it.

The effective "null hypothesis" is in these cases not pre-formulated but calculated during the experiment. It would be something like: "the Higgs Boson must have > X gev energy if it exists", where the goal it to set X as high as possible through the experiment, rather than picking an arbitrary X as the predefined null hypothesis.

I believe this is similar. The "null hypothesis" would have to be "under the tested conditions, any effect is less than 1 in N bits" where the N will be calculated at the end based on the gathered statistics. (You could have a goal of hoping to reach at least some given N, but that's not a binary true/false null hypothesis). It would be nice if it could be solidly disproven, but we are limited to what science can actually show - which can be close enough.
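A minimal sketch of how that N could be computed from the gathered data - assuming a simple normal approximation to the binomial and a one-sided 95% bound; the function name and conventions are mine, not from the thread:

```python
import math
from scipy.stats import norm

def max_effect_one_in_n(ones, total_bits, confidence=0.95):
    """Given `ones` observed in `total_bits`, return an upper confidence bound
    on the bias |p - 0.5|, expressed as "at most 1 excess bit in N bits".
    Normal approximation to the binomial; fine when total_bits is in the billions."""
    p_hat = ones / total_bits
    z = norm.ppf(confidence)                    # ~1.645 for a one-sided 95% bound
    se = math.sqrt(p_hat * (1.0 - p_hat) / total_bits)
    bias_upper = abs(p_hat - 0.5) + z * se      # upper bound on the bias
    return 1.0 / bias_upper if bias_upper > 0 else float("inf")

# Even a dead-even billion-bit run only bounds the effect, it never "proves" zero:
# roughly "any effect under these conditions is smaller than 1 bit in ~38,000".
print(f"{max_effect_one_in_n(500_000_000, 1_000_000_000):,.0f}")
```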

Zeph
 
Just use parapsychologists. Debunkers will taint the whole affair, and neutral scientists from other branches of science won't have any familiarity with psychical research. In all likelihood, a neutral party with no strong agenda will only know as much about psi as Hollywood and comic books teach him or her.

Hmm. I don't see a need for the experimenter on the spot to already know anything about psi (from Hollywood or elsewhere), if this was done right. Unless you are asserting that the effect only shows up based on the *knowledge* of the experimenter? That would go beyond the thought experiment I am suggesting.

What I am suggesting is that the protocols be vetted in advance, to the satisfaction of both (a number of reasonable) skeptics and believers. Then a trusted neutral party can conduct the tests.

The rational believers would not be endorsing in advance that "I will accept that no such effect could ever occur under any circumstances"; that's too much to expect any test to demonstrate. They might however agree that "if these protocols are followed, I would expect a positive result; if none appears, I will need to reduce my assertions about the ease with which such a result could be discovered". And on the other side of this thought experiment, the rational skeptics would agree in advance that "if these protocols are followed with positive results, there would be real and serious evidence - but not final 'proof' - for the existence of the effect". They would of course want followup experiments to replicate it.

Or we find that there is no scientific way to study this even in theory. (Yes, I get that many battle hardened advocates here doubt that any possible result would convince the hard core on the other side psychologically - I accept that but that's a different issue).

One of the interesting things about science is that in the end it IS a social phenomenon. I'm definitely not saying, as a post-modernist might, that it's all social fictions; in fact what most characterizes science is its high regard for the feedback from the objective universe, something which art criticism or political prose often seem rather weak on. Nevertheless, many of the practices of science are more in the nature of "useful heuristics" than "logically necessary and sufficient". For example, it's really important to make predictions which can be tested, not just fit existing data into a theory. This is not something logically required for something to be true, but it's a good heuristic for weeding out which theories are worth investing effort upon. Likewise replicability is highly desirable but not essential (it's close to being required in the fields where it's possible - but sometimes in archeology or cosmology we can't repeat things at will so science does without and still makes progress).

In the end, being held to be "scientifically true" means that most other scientists consider it true, while practicing the methods of science in so evaluating, to the best of their human limitations. The glory of science is that over time those things which are held to be scientifically true have a stronger demonstrated tendency to remain apparently valid with accumulating evidence than those things held scientifically false. There are occasional celebrated exceptions ("continental drift"), but they are celebrated because they ARE exceptions. The exceptions to conventional expectations are actually vital to the growth of scientific knowledge, but we still have to acknowledge that much more often than not, most things which have convinced most knowledgeable scientists turn out to stay "true". And this tendency, accumulated over time, is what causes science to asymptotically approach "truth" about the universe. There are missteps in what science believes, but the dance is biased towards increasing alignment with objective reality.

Is the widespread disbelief in psi by most scientists one of those missteps? More than likely not, as described above. But there is enough evidence to convince some (like Sagan) that there could be some fascinating cracks in the seamlessness of our current scientific understanding of the world. Enough to be worth investigating, even if not promising enough to be worth spending more than a tiny fraction of what we do on the LHC.

Zeph
 
One last thought (excuse my proliferousness, it will pass).

My core interest in all this is in discovering more about the universe; more knowledge in either direction does that.

To me it appears that for more than a few people in the world, and maybe even some posters here, the core motivation is in winning arguments - in being "right".

This shows up when I seek to find a methodology which would elicit meaningful answers for those (many or few) who would view it relatively objectively with (appropriately) open minds - and much of the response consists of "but that still wouldn't win the arguments for either side".

I realize that's expectable because I've stepped into the midst of a series of ongoing arguments which have polarized many contributors, and many of the people who stick around are going to be those who enjoy argument per se, and especially thinking they have "won". They may even be a little bitter in tone because the other side has stubbornly refused to concede defeat. That's OK, you all created this space to fulfil your own needs, and I'm just a new visitor with my own overlapping but not identical agenda.

I'm just making clear why sometimes we partially answer each other at cross purposes. I'm looking for understanding, and others think I must be looking for ammunition for one side or the other, and react accordingly.

That said, I *am* getting some useful responses too and I am grateful. Both from others who enjoy the pursuit of truer understandings, and even from intelligent advocates with a bias but also good knowledge and reasoning.

And no, I'm not pure as the driven snow. I get into arguments too. I prefer to be right. Everything I'm saying here is relative, not absolute. Still, I think I'm relatively more interested in neutral investigation (versus winning for an existing position) than is the median here, and that shows up from time to time.

Anybody else identify with this?

Zeph
 
True believers will cite the sheep-goat effect if the results do not suit them. True disbelievers will assume there MUST have been methodological errors (or fraud) if the results do not suit them. Let them.
I fall into neither camp.

As I pointed out before it would only take a very, very few problems in data handling to explain the significance claimed by Jahn in his meta-analysis.

And we are talking about experiments run from time to time over 12 years.

So I don't say it MUST be errors in data handling.

But given the competing explanations of a mysterious force completely unknown by science on the one hand and a few lapses in laboratory discipline on the other - I have to apply Occam and say that it is probably the latter.
 
I fall into neither camp.

As I pointed out before it would only take a very, very few problems in data handling to explain the significance claimed by Jahn in his meta-analysis.

And we are talking about experiments run from time to time over 12 years.

So I don't say it MUST be errors in data handling.

But given the competing explanations of a mysterious force completely unknown by science on the one hand and a few lapses in laboratory discipline on the other - I have to apply Occam and say that it is probably the latter.

That makes sense to me. I too think it's the more likely explanation, if I had to place a bet.

But I would also find it interesting to design an experimental regime which would eliminate such problems in data handling and produce more solid results, one way or the other.

Zeph
 
Just by the way, while "rejecting the null hypothesis" is a valuable & common methodology, it is NOT the sole tool within the scientific method as used in even the hardest of the hard sciences, nor a "gold standard" for every kind of investigation.

Well, it's implicit in every kind of investigation.

Many high energy physics experiments seek to either (1) find a hypothesized particle or effect, or (2) constrain the maximum/minimum size/energy/mass/charge or whatever such a particle or effect could have while still escaping detection.

See? The same asymmetry. Find if it exists, find limitations on its existence if it doesn't. You can't actually assess the probability that there is no effect, which means that Limbo's formulation of asking for a "statistically significant" finding that the effect doesn't exist is not possible.
 
I fall into neither camp.

As I pointed out before it would only take a very, very few problems in data handling to explain the significance claimed by Jahn in his meta-analysis.

And we are talking about experiments run from time to time over 12 years.

So I don't say it MUST be errors in data handling.

But given the competing explanations of a mysterious force completely unknown by science on the one hand and a few lapses in laboratory discipline on the other - I have to apply Occam and say that it is probably the latter.

Check this out. Operator 10 was most likely Brenda Dunne. http://www.skepdic.com/pear.html
 
I'm looking for understanding, and others think I must be looking for ammunition for one side or the other, and react accordingly.

The way I try to understand it is to take a wider contextual view of the rationale. You have already been given a number of good reasons why this endeavour is probably futile - past experiments show any effect will be so small that it will be hard to distinguish from noise; this creates problems with experimental design, making the study lengthy and potentially expensive, and making it difficult to keep conditions controlled and consistent for the duration. Then you have the problem of the human subjects - capricious, easily bored and distracted, etc. You have no way to tell if some subjects are more able than others, and so on. There are no metrics for the proposed influence (force?); does it fall off with the square of distance, like gravity? How close should the subjects be to the target? Can this influence be blocked, and by what? Should the target (radioactive source?) be completely unshielded?

If you find an effect, even if the analysis is impeccable, these problems will allow critics to find potential holes in the study. If you don't find an effect, you won't know whether it's because there is no such ability, or whether your subjects didn't have it, or didn't use it correctly, or any number of other possibilities...

So it's a high risk experiment with a high likelihood of having no useful results.

Now consider what it is you're testing for: the ability of humans to use their minds to influence the outcome of truly random events to a small (almost unmeasurable) extent.

So what is it about the human brain that might support such an ability? We know that the human brain is, in essence, little different from any other mammalian brain; it's built out of neurons - it's more of the same, with some structural enhancements. There are no unusual unexplained areas that particularly distinguish it from other brains, so whatever influence it can exert is likely to be produced by a network of neurons. Any energy output it can produce for as-yet-undetected influences must be very small indeed, as we know how much energy the brain consumes, and we can measure/calculate its overall output in detectable forms.

What about the influence or force involved? It must be a very low energy force if it is generated by a neural network, yet it acts at a distance of some inches (feet?). AFAIAA there is no physical hypothesis for this force, no hole in physics into which an undetected human-scale force can fit, and its proposed influence, on truly random (i.e. quantum) events, is physically unexpected, and AIUI (I'm not a physicist), possibly contradictory to known physics.

I also think it's worth considering why and how humans might have developed such an ability. Broadly speaking, every feature of the living world arose because it has/had some selective advantage, is associated with a feature that has selective advantage, or is a relatively recent neutral feature. I've been trying to imagine what advantage such an ability might provide, without success. In what situation might an ability to infinitesimally tweak random events be advantageous? The 'Butterfly Effect' explains how a tiny event can have large consequences, but the process is chaotic, so the results are unpredictable. The how is equally problematic; how can a mutation affect the development of a neural network sufficiently to allow it to generate or exert a physical influence on a relatively distant process? We know the brain produces measurable gross electrical activity, but this is barely measurable outside the skull, which suggests that the unknown force in question must propagate relatively unattenuated by the skull, yet have physical effect at a distance. And even if we allow that a mutation could permit a neural network to generate such a force, how does conscious control of this capability come about if the effect is so small as to supply no effective feedback (i.e. it is undetectable in the wild) ?

Given the multiple independent serious implausibilities associated with this paranormal ability hypothesis, and the problems of experimental verification & validation, it seems to me it would be more rational and productive to investigate why people feel such an ability might exist in the first place.
 
