
"String Theory, Universal Mind, and the Paranormal"

b) Cannabis induces increased scoring rates, but it seems that in within-subject designs the major difference comes from psi-missing in the control condition.


Hold on, are they talking about "psi missing" as in we scored lower than chance and that proves an effect?
 
Hold on, are they talking about "psi missing" as in we scored lower than chance and that proves an effect?


Yeah. Psi-missing is the opposite of psi-hitting: it's consistent missing. You could say it's an expression of psi that somehow produces a result opposite to the intent.

It seems that in the sheep-goat effect, the sheep (believers) score above chance and the goats (skeptics) score below chance. So the goats are 'psi-missing'; that is to say, they are unconsciously using their own psi to systematically avoid the correct target more often than chance would allow.
 
Hold on, are they talking about "psi missing" as in we scored lower than chance and that proves an effect?

Yes. As Limbo demonstrates, any pattern in any direction in the data can be retrofitted to confirm psi. This conveniently ignores that it invalidates the justification for one-tailed testing, which basically doubles your chance of finding a 'statistically significant' result. And since it is often possible to find patterns through data dredging (especially in the computer age), it would be hard not to find something that can be attributed to psi.
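
To make the one-tailed point concrete, here is a minimal simulation sketch (the parameters are made up for illustration): if you test one-tailed at alpha = 0.05 but are willing to count a sufficiently extreme score in either direction (hitting or missing) as psi, your false-positive rate under pure chance roughly doubles.

```
# Minimal sketch, hypothetical parameters: guessing experiments under pure
# chance, analyzed one-tailed vs. "either direction counts as psi".
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(42)
n_experiments = 100_000
n_trials = 100      # guesses per experiment
p_chance = 0.25     # e.g. one-in-four forced-choice guessing

hits = rng.binomial(n_trials, p_chance, size=n_experiments)

p_hit = binom.sf(hits - 1, n_trials, p_chance)   # P(X >= observed): "psi hitting"
p_miss = binom.cdf(hits, n_trials, p_chance)     # P(X <= observed): "psi missing"

alpha = 0.05
print("hitting only:      ", np.mean(p_hit < alpha))                        # close to alpha
print("hitting or missing:", np.mean((p_hit < alpha) | (p_miss < alpha)))   # roughly doubled
```

The two rejection regions are disjoint, so accepting either one after the fact simply adds the two error rates together.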

Linda
 
Limbo, I had already seen post #53, but if you insist, I'll go through it.


My contention is that parapsychologists claim to have experiments that work but keep them away from skeptics. This is the opposite of what one should do when trying to prove something important. Your response was that I should consider this:

According to Dean Radin in this talk on Google Video, presentiment experiments of one kind or another have been done about 20 times, give or take. Not all produced positive results, but that's ok.

Presentiment experiments have so far been performed at:

University of Nevada
University of Amsterdam
University of Edinburgh
Interval Research Corporation
Boundary Institute
University of Texas
National Institute of Radiological Sciences (Japan)
Laboratories for Fundamental Research
Budapest, Hungary
University of Northampton
Institute of HeartMath
Institute of Noetic Sciences

I recommend watching the video. He describes the whole presentiment thing.

I'll just talk about a few of them.

Institute of Noetic Sciences - This is Radin's home. That's cheating.
Laboratories for Fundamental Research - An ESP research center.
University of Northampton - Technician training? Not exactly a research center.
Institute of HeartMath - Sounds like they're selling self-help bunk.
The Boundary Institute - Sounds like "New Physics" crackpottery, but I could be wrong.
Interval Research Corporation - Radin worked there. That's more cheating.

OK, that's enough. It's clear to me that Radin went shopping for at least some institutions that would be sympathetic to his line of research. There are some really good schools on that list. However, as you said, not all on the list found positive results.

The thing that would really change my mind would be if PEAR took their work to Randi et al. If you want to accept parapsychology research as it is, I won't argue with you. Fls' discussion with you is more interesting than my point.
 
On the contrary, an anticipation effect based on reactions to previously seen photographs accounts for the data very well.

See http://www.internationalskeptics.com/forums/showthread.php?t=123007


Yes, I was wrong about that. Thanks for the info. Linda also put up a link to a paper discussing a similar simulation. Their results indicated a bias of 0.004% (IIRC) for a protocol similar to Radin's experiment. I haven't yet compared that number to the size of the effect Radin found, or checked whether it would push his results out of statistical significance. Since the effect was slight, I suspect that might well account for it.
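
For anyone who can't open the linked thread, here is a minimal sketch of the kind of non-precognitive artifact being discussed. The mechanism below (arousal ramping up with each consecutive calm trial, then per-session averaging) is my own illustrative assumption, not necessarily the exact model in that simulation:

```
# Minimal sketch, assumed mechanism: arousal grows with each consecutive calm
# photo (anticipation of an upcoming emotional one), targets are chosen at
# random, and trials are averaged per session as presentiment analyses often
# are. No precognition is built in, yet a "presentiment-like" gap appears.
import numpy as np

rng = np.random.default_rng(7)
n_sessions, n_trials = 100_000, 10
emo_means, calm_means = [], []

for _ in range(n_sessions):
    targets = rng.random(n_trials) < 0.5   # True = emotional photo this trial
    arousal = np.empty(n_trials)
    run = 0                                # consecutive calm trials so far
    for t in range(n_trials):
        arousal[t] = run                   # pre-stimulus arousal = anticipation
        run = 0 if targets[t] else run + 1
    if targets.any() and (~targets).any():     # need both trial types to average
        emo_means.append(arousal[targets].mean())
        calm_means.append(arousal[~targets].mean())

print("mean pre-emotional arousal:", np.mean(emo_means))   # comes out slightly higher
print("mean pre-calm arousal:     ", np.mean(calm_means))
```

Nothing in the model reacts to the future; the gap comes entirely from anticipation plus the averaging scheme.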
 
Sorry to jump in here, but since you haven't yet had a response...

These tests seem to be based on blips (what they call SCRs, skin conductance responses), which are uncommon but should be identifiable at the time they occur once a baseline is established. So they could be used to predict an upcoming event. I'm not sure a coin toss or roulette colour would work, though; the predicted event may have to be unpleasant.
No, you are missing my point.

In these experiments the unpleasant events - the photos - are being selected according to a random number generator. So if the emotional state is anticipating the effect of the photo then it is, by definition, also anticipating the next value of the random number generator.

So replace the RNG with a coin or roulette wheel in these experiments and the emotional state registered should now be predicting those things (if the effect were real).
 
Yes, I was wrong about that. Thanks for the info. Linda also put up a link to a paper discussing a similar simulation. Their results indicated a bias of 0.004% (IIRC) for a protocol similar to Radin's experiment. I haven't yet compared that number to the size of the effect Radin found, or checked whether it would push his results out of statistical significance. Since the effect was slight, I suspect that might well account for it.
In the case of my simulation, it would indicate that a real effect is being measured in these experiments, just not a precognitive effect.
 
Institute of Noetic Sciences - This is Radin's home. That's cheating.
Laboratories for Fundamental Research - An ESP research center.
University of Northampton - Technician training? Not exactly a research center.
Institute of HeartMath - Sounds like they're selling self-help bunk.
The Boundary Institute - Sounds like "New Physics" crackpottery, but I could be wrong.
Interval Research Corporation - Radin worked there. That's more cheating.
Good digging. The University of Edinburgh link appears to be an unpublished 1999 paper by one "Norfolk, C". I could not dig up any references to this paper apart from entries in Radin's bibliographies.
 
No, you are missing my point.

In these experiments the unpleasant events - the photos - are being selected according to a random number generator. So if the emotional state is anticipating the effect of the photo then it is, by definition, also anticipating the next value of the random number generator.

I knew that!

I just seem to have forgotten it when I went to answer your post. :)

So replace the RNG with a coin or roulette wheel in these experiments and the emotional state registered should now be predicting those things (if the effect were real).

Yeah.

Linda
 
I e-mailed Caroline Watt and asked her that. I haven't found Peter Ramakers' e-mail yet; maybe this weekend I'll have a chance to look a bit more.

When I hear back from either of them I'll let ya know.


Linda,

I heard back from Caroline Watt. Here is the e-mail she sent me:

-------------------------------------------------------------------
Hi, I have just re-done the analysis for you. Here are two SPSS files to
show this analysis.

One shows the raw data (column 1 is Help presses, column 2 is Control
presses) for each of the 24 participants who were tested by believer
experimenters.

The other shows the output of a standard statistical analysis package, asked
to do a related t-test on this data. As you will see, the results accord
with those presented in the JP.

As is normal in journal publications, the table (2) shows a summary of the
data (means and standard deviations for each condition) rather than the raw
data. I hope this helps answer your question. I also attach a small Excel
spreadsheet containing the raw data, in case you want to do the calculations
by hand, or use an alternative stats package.

I look forward to hearing more from you as to what the problem is with this
analysis.

Regards, CW
-----------------------------------------------------------------------

Here is the download link to the attachments she sent.

After you have had a chance to go over it, please let me know what you think.
 
Linda,

I heard back from Caroline Watt. Here is the e-mail she sent me:

-------------------------------------------------------------------
Hi, I have just re-done the analysis for you. Here are two SPSS files to
show this analysis.

One shows the raw data (column 1 is Help presses, column 2 is Control
presses) for each of the 24 participants who were tested by believer
experimenters.

The other shows the output of a standard statistical analysis package, asked
to do a related t-test on this data. As you will see, the results accord
with those presented in the JP.

As is normal in journal publications, the table (2) shows a summary of the
data (means and standard deviations for each condition) rather than the raw
data. I hope this helps answer your question. I also attach a small Excel
spreadsheet containing the raw data, in case you want to do the calculations
by hand, or use an alternative stats package.

I look forward to hearing more from you as to what the problem is with this
analysis.

Regards, CW
-----------------------------------------------------------------------

Here is the download link to the attachments she sent.

After you have had a chance to go over it, please let me know what you think.

Thank you for passing that on. I can't open the .spo and .sav files, but I presume they show a paired t-test ("related" sounds like it means the same thing) and that my results are only trivially different from what's reported in Table 2 using the raw data. I wondered if it was something like that.
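
As an aside for anyone following along: the reason Table 2 alone couldn't settle it is that a paired (related) t-test depends on the standard deviation of the within-subject differences, which per-condition means and SDs don't pin down. A small sketch with made-up numbers:

```
# Made-up data: two "control" columns with identical means and SDs, but
# different pairing with the Help column, hence different paired t-values.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
help_presses = np.array([12, 15, 9, 14, 11, 13, 10, 16], dtype=float)
ctrl = help_presses - 1 + rng.normal(0, 0.5, size=8)  # tracks Help closely
ctrl_shuffled = rng.permutation(ctrl)                 # same values, pairing broken

# The per-condition summary a table reports is identical for both:
print(ctrl.mean(), ctrl.std(ddof=1))
print(ctrl_shuffled.mean(), ctrl_shuffled.std(ddof=1))

# But t = mean(d) / (sd(d) / sqrt(n)), with d = Help - Control, differs sharply:
print(ttest_rel(help_presses, ctrl))
print(ttest_rel(help_presses, ctrl_shuffled))
```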

Linda
 
Thank you for passing that on.


No problemo.


I can't open the .spo and .sav files, but I presume they show a paired T-test ("related" sounds like it means the same thing) and my results are only trivially different from what's reported in Table 2 using the raw data. I wondered if it was something like that.


So the mystery of how you get a t-value of 2.737 from the results given in Table 2 has been solved to your satisfaction?
 
So the mystery of how you get a t-value of 2.737 from the results given in Table 2 has been solved to your satisfaction?

Yes. I assumed that, since they used a parametric test, the data would follow a reasonably normal distribution.
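
For what it's worth, that assumption is easy to check once you have the raw data from her spreadsheet; for a paired t-test it's the differences that matter. A minimal sketch with stand-in numbers (the real check would use Help minus Control for each of the 24 participants):

```
# Stand-in data; with Watt's spreadsheet you would use the actual differences.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
diffs = rng.normal(0.8, 1.2, size=24)   # hypothetical within-subject differences
stat, p = shapiro(diffs)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")  # a small p would flag non-normality
```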

Linda
 
Yes. I assumed that, since they used a parametric test, the data would follow a reasonably normal distribution.


Cool.

"How do you get a t-value of 2.737 from the results given in Table 2 (since that forms the basis of their claim)?" -fls

What do you think about the basis of their claim now?
 
Cool.

"How do you get a t-value of 2.737 from the results given in Table 2 (since that forms the basis of their claim)?" -fls

What do you think about the basis of their claim now?

I think it doesn't really matter, since it leaves unchanged the concerns which have already been mentioned.

Linda
 
