
Simulating "Presentiment" Experiments

I was also pondering this presentiment stuff. I had considered a few explanations but had it pretty much on the back burner until I saw Robin's post here. It made me go back to Radin's paper, especially the part Limbo copied.
Then it hit me like a hammer.

BTW, this paper from 2002 also presents a simulation explaining the "presentiment":

A Computational Expectation Bias as Revealed by Simulations of Presentiment Experiments
Jan Dalkvist, Joakim Westerlund & Dick J. Bierman

I can't post links, but you can get it via Google Scholar.
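For anyone who wants to play along at home, the basic ingredient of these simulations is easy to sketch. Below is a minimal version in Python. The model is my own toy (a gambler's-fallacy style strategy: simulated arousal climbs a little after every calm picture and resets after an emotional one), not the actual code from the Dalkvist et al. paper or from Robin's program:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_session(n_trials=480, n_emotional=120, step=0.1, noise=1.0):
    """One session of a gambler's-fallacy style anticipation strategy.

    Targets are a fixed 1:3 emotional:calm pool in random order.
    Simulated pre-stimulus arousal rises by `step` after every calm
    trial and resets to zero after an emotional one, plus measurement
    noise on each reading.
    """
    targets = np.zeros(n_trials, dtype=int)   # 0 = calm, 1 = emotional
    targets[:n_emotional] = 1
    rng.shuffle(targets)

    arousal = np.empty(n_trials)
    level = 0.0
    for i, t in enumerate(targets):
        arousal[i] = level + rng.normal(0.0, noise)   # pre-stimulus reading
        level = 0.0 if t else level + step            # update for the next trial
    return targets, arousal

targets, arousal = simulate_session()
print("mean pre-stimulus arousal before emotional trials:", arousal[targets == 1].mean())
print("mean pre-stimulus arousal before calm trials:     ", arousal[targets == 0].mean())
```

Whether this kind of strategy produces a spurious "presentiment" difference, and whether the tests in Radin's paper could detect it, depends on how the targets are sequenced and how the data are averaged. That is what the simulations discussed below are about.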

Robin, regarding the paper by Dean Radin in your OP, I was wondering if you could highlight the specific flaws in this section:

Anticipatory strategies.
Only the relevant bits:
For example, as previously mentioned, people’s idiosyncratic responses to photos inevitably blur the dichotomous distinction between calm and emotional targets. A more realistic anticipatory simulation might use targets with a continuous range of emotionality, and it would adjust the arousal value for trial N + 1 up or down according to the emotionality rating of trial N.
The 4th experiment in Radin's paper (p-value = 0.28), as well as Broughton's failed replication, uses such a continuous range. It seems as if the effect depended on the dichotomy; that is one of the conclusions of Radin's paper, in any case.
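For concreteness, the continuous adjustment Radin sketches there might look something like this in code. This is just my reading of the quoted sentence; the update rule, the midpoint and the gain are my own placeholder choices:

```python
# One possible reading of "adjust the arousal value for trial N+1 up or
# down according to the emotionality rating of trial N": ratings above a
# midpoint relax the simulated subject, ratings below it build anticipation.
def next_arousal(arousal_n, rating_n, midpoint=50.0, gain=0.02):
    return arousal_n + gain * (midpoint - rating_n)

arousal = 0.0
for rating in [12.0, 8.0, 95.0, 15.0]:   # made-up emotionality ratings (0-100 scale)
    arousal = next_arousal(arousal, rating)
```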
The data used to test for an expectation effect came from experiments that were explicitly designed to present strongly dichotomous targets.

->This is a total red herring.

But idealized simulations and strategies aside, we can investigate whether anticipatory strategies can explain the observed results by examining the actual data. To do this, data from Experiments 2 and 3 were pooled; this provided a set of 3,469 trials, contributed by a total of 103 participants. The pre-stimulus sums of SCL (i.e., PSCL) prior to stimulus onset in each of the two experiments were normalized separately and then combined.
As Robin pointed out, this is a bit odd. It's not just the pooling of data; it's that the data are pooled differently than when he does his actual hypothesis tests.
It also seems to me that he is treating the data differently. I haven't fully wrapped my mind around this, and I don't think I care to anymore, so don't take this as true. The paper above makes some recommendations on how to treat the data to minimize the influence of the expectation effect. If I am not mistaken (I could easily be!), Radin is heeding that advice to minimize the expectation effect when testing for that effect, but not when testing his hypothesis.
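For concreteness, the two obvious orderings look roughly like this (placeholder data and my own sketch; which one matches what was done for the main hypothesis tests is exactly the question):

```python
import numpy as np

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

rng = np.random.default_rng(1)
pscl_exp2 = rng.normal(5.0, 1.0, 1700)   # placeholder PSCL data, made-up sizes
pscl_exp3 = rng.normal(7.0, 2.0, 1769)   # (the two experiments total 3,469 trials)

# What the quoted passage describes: normalize each experiment separately,
# then combine.
combined_after = np.concatenate([zscore(pscl_exp2), zscore(pscl_exp3)])

# The other ordering: pool the raw values first, then normalize.
combined_before = zscore(np.concatenate([pscl_exp2, pscl_exp3]))
```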

Then the targets were separated into two classes: emotional, defined as those trials with the top 26% emotionality ratings, and calm, defined as those 74% with lower emotionality ratings. These percentages were selected to create about a 1:3 ratio of emotional to calm targets to ensure that there would be an adequate number of calm targets in a row to test the expectations of an anticipatory strategy.
This is a real howler. No doubt there.
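In code, the reclassification amounts to a single percentile cut (made-up ratings, my own sketch):

```python
import numpy as np

ratings = np.random.default_rng(3).uniform(0, 100, 3469)   # made-up emotionality ratings

# Top 26% of ratings count as "emotional"; the other 74% count as "calm",
# which sweeps plenty of nominally emotional pictures into the calm pile.
cutoff = np.percentile(ratings, 74)
emotional = ratings > cutoff
calm = ~emotional
```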

Experiment 2 has this to say on the pictures:
Thirty new pictures were added to the photo pool from Experiment 1, bringing the total to 150. Five men and five women were asked to independently examine the pictures, one at a time, in random order. The rating dimension consisted of a 100 point scale, and the rating method asked each person to view a picture on a computer screen and move a pointer across a sliding scale to indicate his or her assessment.
The photo pool from Experiment 1 is described like this:
Calm targets included photos of landscapes, nature scenes, and people; emotional targets included erotic, violent, and accident scenes. Most of the calm pictures were selected from a Corel Professional Photo CD-ROM. Most of the emotional pictures were selected from photo archives accessible over the Internet.
Unfortunately, we are not told exactly how the photos were rated, but based on their origin I think they can be assumed to have been at least somewhat dichotomous in their emotionality ratings. (p-value for Experiment 2: 0.11)

The target pool consisted of the 80 most calm and the 40 most emotional pictures from the International Affective Picture System (IAPS, Bradley et al., 1993; Ito et al., 1998), where "calm" and "emotional" were defined by the emotionality ratings (averaged across gender) that accompany the IAPS picture set. In an attempt to further enhance the contrast between the emotional and calm targets, participants wore headphones that played one of 20 randomly selected noxious sounds for 3 seconds during presentation of emotional pictures (i.e., screams, sirens, explosions, etc.). Calm pictures were presented in silence.
Alright, so here we have a massive difference between calm and emotional. As far as the pictures go, we are dealing with the difference between cute furry things and mutilated corpses, and that's without even mentioning the noise. (p-value = 0.0004)

He also says this:
The use of a 2:1 ratio of calm to emotional photos would seem to add noise to the analysis of Hypothesis 1 (which is therefore a conservative test), since that analysis sorts targets by their pre-assessed emotionality ratings and splits the data into two equal subsets. But in practice, people show strong idiosyncratic responses to photos, and this significantly blurs a nominal dichotomy into a continuous scale of emotionality. Thus, splitting the trials into two equal subsets does not introduce as much noise as one might expect.

Whew, that was something. Does everyone still remember what he does to test for expectation? He splits the trials in a 3:1 fashion. He shovels "emotional trials" over into the "calm trials". Gee, might that add noise?
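To spell the contrast out: for Hypothesis 1 the trials are split into two equal halves by rating, for the expectation test into the 26/74 split above. A quick sketch (made-up ratings again, my own code) of how many trials change sides between the two splits:

```python
import numpy as np

ratings = np.random.default_rng(3).uniform(0, 100, 3469)    # made-up emotionality ratings

hyp1_emotional = ratings > np.median(ratings)               # 50/50 split (Hypothesis 1)
expect_emotional = ratings > np.percentile(ratings, 74)     # 26/74 split (expectation test)

# Trials treated as emotional when testing the hypothesis but as calm
# when testing for an expectation effect.
shoveled = hyp1_emotional & ~expect_emotional
print("trials that change sides:", int(shoveled.sum()), "of", len(ratings))
```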

Conclusion:
Radin uses methods that are designed to fail to detect an expectation effect. He does not use the same methods to test his own hypotheses, yet he shows that he is aware of at least some of the problems with those inadequate methods.

This may be self-deceit but it certainly is deceit in one form or another.
 
More on Radin's test for the anticipation effect:

[Attached image: radin_fig8.PNG]


And his commentary:

Dean Radin said:
The weighted linear correlation between mean PSCL and trial number for steps 13 → 1 was positive, but not significantly so (r = 0.29, p = 0.17). Notice that with one exception, all of the mean PSCL values prior to the emotional trial were negative, and three were significantly negative, including the trial immediately preceding the emotional trial.
Thus, contrary to the expectations of an anticipatory strategy, a subset of participants specifically selected for exhibiting strong differential results suggestive of a genuine presentiment effect showed relaxation responses before the emotional target rather than progressive arousal.

In sum, while idealized anticipatory strategies might provide an explanation of the observed results in principle, the actual data did not indicate that such strategies were employed.
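For anyone who wants to reproduce this, the step-back test Radin describes can be sketched roughly like this in Python. The quoted text doesn't spell out exactly which trials contribute at each step or what the weights are, so the restriction to unbroken calm runs and the observation-count weights below are my guesses, and all the names are mine:

```python
import numpy as np

def stepback_test(targets, arousal, max_steps=13):
    """Mean pre-stimulus arousal at 1..max_steps calm trials before each
    emotional trial (counting only unbroken runs of calm trials, which is
    how I read the 'calm targets in a row' remark), plus a weighted
    linear correlation of those means with trial number, weighted by the
    number of observations contributing at each step."""
    n = len(targets)
    run = np.zeros(n, dtype=int)              # calm-run length just before each trial
    for i in range(1, n):
        run[i] = 0 if targets[i - 1] == 1 else run[i - 1] + 1

    emo = np.flatnonzero(targets == 1)
    steps = np.arange(1, max_steps + 1)       # 1 = trial immediately before the emotional one
    means = np.full(max_steps, np.nan)
    weights = np.zeros(max_steps)
    for s in steps:
        contrib = emo[run[emo] >= s] - s      # index of the calm trial s steps back
        if len(contrib):
            means[s - 1] = arousal[contrib].mean()
            weights[s - 1] = len(contrib)

    # Weighted Pearson correlation, with "trial number" counted so that
    # larger values are closer to the emotional trial (progressive arousal
    # would show up as a positive correlation).
    trial_no = steps[::-1].astype(float)
    ok = weights > 0
    w, x, y = weights[ok], trial_no[ok], means[ok]

    def wmean(v):
        return np.sum(w * v) / np.sum(w)

    cov = wmean((x - wmean(x)) * (y - wmean(y)))
    r = cov / np.sqrt(wmean((x - wmean(x)) ** 2) * wmean((y - wmean(y)) ** 2))
    return steps, means, weights, r

# Quick sanity run on pure-noise data.
rng = np.random.default_rng(0)
targets = np.zeros(480, dtype=int)
targets[:120] = 1
rng.shuffle(targets)
steps, means, weights, r = stepback_test(targets, rng.normal(size=480))
print("r =", round(float(r), 2), " observations per step:", weights.astype(int))
```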
OK, so I do the same test with my simulation - 1:3 ratio emotional to calm, 480 trials. Here is my graph:
[Attached image: robin_fig8.PNG]

Hmm! r = -0.25. Using Radin's methodology I have completely ruled out the anticipation effect as an explanation (even though I put it there myself). And I still get a "presentiment" effect:
[Attached image: robin_480trial.PNG]

So what has happened? Have the time symmetries pervading fundamental physics manifested themselves in my program?

The answer, in this humble poster's view, is almost certainly yes!

But one last graph:
[Attached image: robin_freq.PNG]

Could it be the rather more boring fact that the small number of observations here cannot sort out the (individually tiny) anticipation effect from the noise?

No, it has got to be time symmetries.
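More prosaically, here is a quick count from a random 1:3 sequence of 480 trials (my own quick check, not Robin's code). By the time you require 13 calm trials in a row before an emotional one, you are down to a handful of observations:

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_emotional = 480, 120
targets = np.zeros(n_trials, dtype=int)
targets[:n_emotional] = 1
rng.shuffle(targets)

# Length of the unbroken run of calm trials immediately before each trial.
run = np.zeros(n_trials, dtype=int)
for i in range(1, n_trials):
    run[i] = 0 if targets[i - 1] == 1 else run[i - 1] + 1

emo_runs = run[targets == 1]
for s in range(1, 14):
    print("emotional trials preceded by at least %2d calm trials in a row: %3d"
          % (s, int((emo_runs >= s).sum())))
```

With only a few observations at the far end, any individually tiny per-trial build-up is swamped by noise, which is exactly the point made above.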
 
This is all fascinating stuff. I had wondered if presentiment was finally the breakthrough that parapsychology needed: a simple, replicable and efficient experiment. But I hadn't had time to sit down with the papers properly (nor do I have the expertise, to be honest). Thanks to Robin for this. It looks like presentiment is on shaky ground at the moment.
 
