
Simulating "Presentiment" Experiments

Robin

A series of experiments by Dean Radin and others purport to show that the human mind reacts precognitively to stimuli.

The basic design of these experiments is that people are shown, in random order, a series of photographs - some provoking a strong emotional reaction, some not - while their skin conductance is recorded. The results seem to show that electrodermal activity is higher for the emotionally provoking photographs than for the calm ones, both before and after the photograph is actually displayed. See Electrodermal Presentiments of Future Emotions for example. In some experiments MRI is used.

I decided to see if these results could be explained by an anticipation effect based on the photographs previously seen. It turns out they can.

The simulation produces 10 seconds' worth of data for each trial and randomly chooses a state, 1 or 0, after the first 5 seconds - representing "emotional" and "calm" respectively. The data in the first half of the run is influenced by variables based on past states; the second half is influenced by a function simulating an emotional reaction. Purely random numbers are also generated to simulate noise.
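Roughly, each simulated trial works something like this (an illustrative Python sketch of the approach rather than the actual source code; the constants and variable names here are only for illustration):
[code]
import random

def run_trial(anticipation):
    """Simulate 10 seconds of 'skin conductance', one sample per second.

    anticipation is carried over from earlier trials and shapes the first
    (pre-stimulus) half; the state (1 = emotional, 0 = calm) is only chosen
    after the first 5 seconds.
    """
    trace = []
    for t in range(5):
        # First half: anticipation plus noise; no knowledge of the coming state.
        trace.append(anticipation + random.gauss(0, 0.5))
    state = random.randint(0, 1)        # the random choice happens only here
    for t in range(5):
        # Second half: a simulated emotional reaction (or nothing) plus noise.
        reaction = 2.0 * (t + 1) / 5 if state == 1 else 0.0
        trace.append(reaction + random.gauss(0, 0.5))
    return state, trace
[/code]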

And yet when data from a series of tests is aggregated, it appears that there is a difference both before and after the random number is actually generated.

[image: image025.png.jpg]


Compare with Radin's results.

[image: radin.png.jpg]


So is my program precognitively sensing the random number chosen?

As it happens there is nothing mysterious or spooky happening here. I am increasing the anticipation factor when the previous state is "calm" and decreasing it when the previous state is "emotional". This means that whenever there is a run of "calm" the first "emotional" is guaranteed to be a local maximum; similarly, the first "calm" after a run of "emotional" data will be a local minimum.
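Between trials the anticipation factor is updated on exactly that basis, with nothing looking ahead. Building on the run_trial sketch above (again with illustrative constants):
[code]
def run_session(n_trials=40):
    """Run a sequence of trials, updating the anticipation factor between them."""
    anticipation = 0.0
    emotional_traces, calm_traces = [], []
    for _ in range(n_trials):
        state, trace = run_trial(anticipation)
        (emotional_traces if state == 1 else calm_traces).append(trace)
        # Calm trials raise the anticipation factor, emotional trials lower it.
        anticipation += 0.2 if state == 0 else -0.2
    return emotional_traces, calm_traces
[/code]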

This produces a series of outliers that influence the aggregate result. When the "experiment" is run 20 times it produces a "presentiment" effect more than half of the time, the rest have no significant difference.
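The aggregate graphs are then just point-by-point means of the two groups of traces, along these lines:
[code]
def mean_trace(traces):
    """Point-by-point mean across a list of equal-length traces."""
    n = len(traces)
    return [sum(t[i] for t in traces) / n for i in range(len(traces[0]))]

# emotional_traces, calm_traces = run_session(40)
# Plotting mean_trace(emotional_traces) against mean_trace(calm_traces)
# tends to show the two curves separated even in the pre-stimulus half.
[/code]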

This would be consistent with, for example, a mounting anticipation effect after a series of calm-inducing photos and a slight desensitisation during emotion-provoking photos.

This appears to suggest that the presentiment experiments cannot be considered evidence for pre-cognition, paranormal activity or psi.

Just an interesting mathematical illusion.

Here is a link to the Source code

And here is a link to 20 consecutive runs of the program:

http://picasaweb.google.com.au/robin1658/Presentiments#
 
This would be consistent with, for example, a mounting anticipation effect after a series of calm-inducing photos and a slight desensitisation during emotion-provoking photos.
Do you mean that the "anticipation" effect only applies to when a "streak" is broken? (...a streak of neutral pictures broken by an overdue emotional one, or a streak of emotional ones broken by an overdue neutral one...) In other words, do you mean that there was no apparent effect when the two types of picture had been alternating without streaks, and no apparent effect in the middle of a streak?

If so, then this reminds me of the rule about gamblers thinking that a streak makes one outcome more or less likely the next time than it would otherwise be, such as a streak of black results at a roulette table meaning that the next spin is especially likely to get a red result.

If that's the same mental phenomenon, then there are some other results that should be predictable from this. One is that the apparent anticipation effect should get stronger the longer the streak is, up to a point, after which the subject might just get used to it and lose the expectation of a break in the streak. Another is that gamblers, especially addicts, should show a stronger apparent anticipation effect than non-gamblers. (Of course, this wouldn't be testing whether anything "paranormal" was happening; it would just test whether we're dealing with one single "normal" bit of psychology or neurology, or two separate "normal" ones.)
 
Do you mean that the "anticipation" effect only applies to when a "streak" is broken? (...a streak of neutral pictures broken by an overdue emotional one, or a streak of emotional ones broken by an overdue neutral one...) In other words, do you mean that there was no apparent effect when the two types of picture had been alternating without streaks, and no apparent effect in the middle of a streak?
No, the anticipation effect applies throughout.

What you suggest is not possible since the streak is not broken until the number is actually generated, which does not occur until the second half of the data.

No information whatsoever about the generated state affects the first half of the data.

However, the anticipation factor is influenced by the previously generated numbers: it increases with each successive "calm" trial and decreases with each successive "emotional" trial.

This means that this factor is always highest at the end of a run of calms and lowest at the end of a run of emotionals - as you can see in this graph:

[image: anticipation.png.jpg]

If you were to sum all the blue and red values in this graph, the red total would be slightly higher.
 
If so, then this reminds me of the rule about gamblers thinking that a streak makes one outcome more or less likely the next time than it would otherwise be, such as a streak of black results at a roulette table meaning that the next spin is especially likely to get a red result.

If that's the same mental phenomenon, then there are some other results that should be predictable from this. One is that the apparent anticipation effect should get stronger the longer the streak is, up to a point, after which the subject might just get used to it and lose the expectation of a break in the streak. Another is that gamblers, especially addicts, should show a stronger apparent anticipation effect than non-gamblers. (Of course, this wouldn't be testing whether anything "paranormal" was happening; it would just test whether we're dealing with one single "normal" bit of psychology or neurology, or two separate "normal" ones.)
You are right that it is similar to the gambler's fallacy. Since the data prior to the actual display of the picture measures expectation, it is not unreasonable to suppose there will be an increasing expectation with each subsequent calm picture.

There is also the habituation effect you mention, but I would suppose there will not be sufficient runs of calm or emotional photographs for this to be a factor.
 
Very interesting. You should really try to publish that somewhere.
 
Robin, regarding the paper by Dean Radin in your OP. I was wondering if you could highlight the specific flaws in this section:

Anticipatory strategies.

"This is the most common and prima facie the most plausible conventional explanation for the observed effects. The idea assumes the use of an experimental design with dichotomous stimuli: emotional vs. calm, or stimulus vs. no stimulus. With such a design, it is conceivable that on sequential trials the participant’s EDA would increase with each successive calm trial, since anticipation would keep increasing until the emotional trial appeared. Thus, EDA would peak on the emotional trial and then reset back to zero on the next trial, as illustrated in Figure 7.

If this strategy were followed either consciously or unconsciously, then EDA averaged across all emotional trials would be higher than EDA averaged across Fig. 7. Simulation of an idealized anticipatory strategy. The y-axis is the degree of autonomic arousal, the x-axis is successive trials. The top of each ‘‘arousal’’ ramp represents an emotional trial; all other trials are calm. Electrodermal Presentiments 269 calm trials. Simulation and analytical studies have confirmed the existence of this bias (e.g., Dalkvist et al., 2002; Wackermann, 2002). The same simulations also show that with longer sessions, or after pooling trials across many participants, that these biases can become vanishingly small. Anticipatory simulation studies are valuable in highlighting worst-case scenarios, but they also oversimplify what actually occurs in these experiments.

For example, as previously mentioned, people’s idiosyncratic responses to photos inevitably blur the dichotomous distinction between calm and emotional targets. A more realistic anticipatory simulation might use targets with a continuous range of emotionality, and it would adjust the arousal value for trial N þ 1 up or down according to the emotionality rating of trial N. But idealized simulations and strategies aside, we can investigate whether anticipatory strategies can explain the observed results by examining the actual data. To do this, data from Experiments 2 and 3 were pooled; this provided a set of 3,469 trials, contributed by a total of 103 participants. The pre-stimulus sums of SCL (i.e., PSCL) prior to stimulus onset in each of the two experiments were normalized separately and then combined.3 Then the targets were separated into two classes: emotional, defined as those trials with the top 26% emotionality ratings, and calm, defined as those 74% with lower emotionality ratings. These percentages were selected to create about a 1:3 ratio of emotional to calm targets to ensure that there would be an adequate number of calm targets in a row to test the expectations of an anticipatory strategy. Based on this definition of emotional and calm targets, 13 of the 103 participants were identified who independently obtained significant (p , 0.05) emotional vs. calm differences in their pre-stimulus responses. Together these people contributed a total of 450 trials, and as a group they represented (by selection) extremely strong evidence for presentiment.

An anticipatory strategy supposes a positive trend between the number of calm trials before an emotional trial vs. PSCL for each of those trials, as illustrated in Figure 7. Note that this trend, which can be evaluated with a simple linear correlation, cannot include the emotional trial itself, since that would confound testing an anticipatory strategy with a genuine presentiment effect. Figure 8 shows the observed means of PSCL for calm trials 1 to 13 steps before an emotional trial, and for the emotional trial itself (the ‘‘0’’ point on the x-axis), with one standard error bars. The error bars become progressively smaller because the number of sequential calm trials before an emotional trial decreases with the number of total trials. For example, there are many more cases of say, the sequence C–E (one step away) than there are of the sequence, C–C–C–C–C–E (five steps away).

The weighted linear correlation between mean PSCL and trial number for steps 13 ! 1 was positive, but not significantly so (r ¼ 0.29, p ¼ 0.17). Notice that with one exception, all of the mean PSCL values prior to the emotional trial were negative, and three were significantly negative, including the trial immediately preceding the emotional trial. Thus, contrary to the expectations of 270 D. I. Radin an anticipatory strategy, a subset of participants specifically selected for exhibiting strong differential results suggestive of a genuine presentiment effect showed relaxation responses before the emotional target rather than progressive arousal. In sum, while idealized anticipatory strategies might provide an explanation of the observed results in principle, the actual data did not indicate that such strategies were employed."
 
This means that this factor is always highest at the end of a run of calms and lowest at the end of a run of emotionals - as you can see in this graph:

[image: http://lh6.ggpht.com/robin1658/SMSZ0cXUXbI/AAAAAAAAAMA/kveCe6Vjftw/s400/anticipation.png.jpg]

This means that whenever there is a run of "calm" the first "emotional" is guaranteed to be a local maximum; similarly, the first "calm" after a run of "emotional" data will be a local minimum.

If you plan to publish, I think you can be a bit more specific and say that every local maximum will be "emotional", and every local minimum will be "calm". If an event was "calm", then its "anticipation" must be lower than the following one (a calm event raises the "anticipation"), so it cannot be a local maximum. Reverse for "emotional". That statement can be used to directly prove that the observed effect is a mathematical artifact. You only need to prove that events that are neither locally maximal nor minimal are distributed randomly (which should be evident from your algorithm).
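A quick check along these lines (an illustrative Python sketch, with a simple +1/-1 update standing in for Robin's actual constants) demonstrates that property directly on a simulated sequence:
[code]
import random

def anticipation_series(n=1000):
    """Anticipation value entering each trial; updated after each state is revealed."""
    a, values, states = 0.0, [], []
    for _ in range(n):
        state = random.randint(0, 1)        # 1 = emotional, 0 = calm
        values.append(a)
        states.append(state)
        a += 1.0 if state == 0 else -1.0    # calm raises anticipation, emotional lowers it
    return values, states

values, states = anticipation_series()
for i in range(1, len(values) - 1):
    if values[i] > values[i - 1] and values[i] > values[i + 1]:
        assert states[i] == 1               # every strict local maximum is emotional
    if values[i] < values[i - 1] and values[i] < values[i + 1]:
        assert states[i] == 0               # every strict local minimum is calm
print("no counterexamples found")
[/code]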
 
I'm still not getting how the math means that a prediction isn't actually a prediction, but I do have to suggest not using the word "calm". Calmness is an emotion.
 
Robin, regarding the paper by Dean Radin in your OP. I was wondering if you could highlight the specific flaws in this section:

Anticipatory strategies.

"This is the most common and prima facie the most plausible conventional explanation for the observed effects. The idea assumes the use of an experimental design with dichotomous stimuli: emotional vs. calm, or stimulus vs. no stimulus. With such a design, it is conceivable that on sequential trials the participant’s EDA would increase with each successive calm trial, since anticipation would keep increasing until the emotional trial appeared. Thus, EDA would peak on the emotional trial and then reset back to zero on the next trial, as illustrated in Figure 7.

If this strategy were followed either consciously or unconsciously, then EDA averaged across all emotional trials would be higher than EDA averaged across Fig. 7. Simulation of an idealized anticipatory strategy. The y-axis is the degree of autonomic arousal, the x-axis is successive trials. The top of each ‘‘arousal’’ ramp represents an emotional trial; all other trials are calm. Electrodermal Presentiments 269 calm trials. Simulation and analytical studies have confirmed the existence of this bias (e.g., Dalkvist et al., 2002; Wackermann, 2002). The same simulations also show that with longer sessions, or after pooling trials across many participants, that these biases can become vanishingly small. Anticipatory simulation studies are valuable in highlighting worst-case scenarios, but they also oversimplify what actually occurs in these experiments.

For example, as previously mentioned, people’s idiosyncratic responses to photos inevitably blur the dichotomous distinction between calm and emotional targets. A more realistic anticipatory simulation might use targets with a continuous range of emotionality, and it would adjust the arousal value for trial N þ 1 up or down according to the emotionality rating of trial N. But idealized simulations and strategies aside, we can investigate whether anticipatory strategies can explain the observed results by examining the actual data. To do this, data from Experiments 2 and 3 were pooled; this provided a set of 3,469 trials, contributed by a total of 103 participants. The pre-stimulus sums of SCL (i.e., PSCL) prior to stimulus onset in each of the two experiments were normalized separately and then combined.3 Then the targets were separated into two classes: emotional, defined as those trials with the top 26% emotionality ratings, and calm, defined as those 74% with lower emotionality ratings. These percentages were selected to create about a 1:3 ratio of emotional to calm targets to ensure that there would be an adequate number of calm targets in a row to test the expectations of an anticipatory strategy. Based on this definition of emotional and calm targets, 13 of the 103 participants were identified who independently obtained significant (p , 0.05) emotional vs. calm differences in their pre-stimulus responses. Together these people contributed a total of 450 trials, and as a group they represented (by selection) extremely strong evidence for presentiment.

An anticipatory strategy supposes a positive trend between the number of calm trials before an emotional trial vs. PSCL for each of those trials, as illustrated in Figure 7. Note that this trend, which can be evaluated with a simple linear correlation, cannot include the emotional trial itself, since that would confound testing an anticipatory strategy with a genuine presentiment effect. Figure 8 shows the observed means of PSCL for calm trials 1 to 13 steps before an emotional trial, and for the emotional trial itself (the ‘‘0’’ point on the x-axis), with one standard error bars. The error bars become progressively smaller because the number of sequential calm trials before an emotional trial decreases with the number of total trials. For example, there are many more cases of say, the sequence C–E (one step away) than there are of the sequence, C–C–C–C–C–E (five steps away).

The weighted linear correlation between mean PSCL and trial number for steps 13 ! 1 was positive, but not significantly so (r ¼ 0.29, p ¼ 0.17). Notice that with one exception, all of the mean PSCL values prior to the emotional trial were negative, and three were significantly negative, including the trial immediately preceding the emotional trial. Thus, contrary to the expectations of 270 D. I. Radin an anticipatory strategy, a subset of participants specifically selected for exhibiting strong differential results suggestive of a genuine presentiment effect showed relaxation responses before the emotional target rather than progressive arousal. In sum, while idealized anticipatory strategies might provide an explanation of the observed results in principle, the actual data did not indicate that such strategies were employed."
I will deal with this in more detail later. But why did Radin use only a small subset (450 trials) to test this? He could easily have tested the entire dataset. I can easily do this on my simulated dataset.

More generally, since we are all agreed that the results shown are possible without presentiment, on what basis does he regard them as evidence for presentiment?
 
I'm still not getting how the math means that a prediction isn't actually a prediction,
I don't get what you don't get. Are you saying my program really is psychic?
but I do have to suggest not using the word "calm". Calmness is an emotion.
So?
 
The weighted linear correlation between mean PSCL and trial number for steps 13 → 1 was positive, but not significantly so (r = 0.29, p = 0.17). Notice that with one exception, all of the mean PSCL values prior to the emotional trial were negative, and three were significantly negative, including the trial immediately preceding the emotional trial. Thus, contrary to the expectations of an anticipatory strategy, a subset of participants specifically selected for exhibiting strong differential results suggestive of a genuine presentiment effect showed relaxation responses before the emotional target rather than progressive arousal. In sum, while idealized anticipatory strategies might provide an explanation of the observed results in principle, the actual data did not indicate that such strategies were employed."

I appreciate that they attempted to address this issue, but the question of whether you can rule out an effect is different from whether you can rule one in. They showed that the probability that an effect was ruled in was only 17%. However, what is of more interest to us is that the probability that they ruled out a small effect is about 60%, meaning that there is about a 40% chance that they missed a real effect.

Linda
 
Robin, I think what Delvo is getting at is that "calm" and "emotional" are more akin to "angry" and "emotional" or "happy" and "emotional", i.e. they all represent emotion.

Perhaps better terms would be "calm" and "agitated" or "calm" and "aroused".

But if Radin's study used "calm" and "emotional" I would stick with that for the sake of compatibility.
 
Robin, regarding the paper by Dean Radin in your OP. I was wondering if you could highlight the specific flaws in this section:
OK, well first and foremost, he is reducing the sample that he is using to test for the effect.

The anticipation effect would come from the cumulative effect, on the aggregated data, of a number of small outliers distributed across the entire sample; there is no reason to suppose this effect would be stronger in some participants than in others.

So all Radin is doing is reducing the statistical power for his test of anticipation. In my simulations I found 450 trials threw up all sorts of conflicting results.

He is ignoring the runs of "emotional" pictures which can also add to the effect.

He is looking at 13 calm trials prior to an emotional trial. How many runs of 13 calm photos would there be in 450 randomly selected photos, even at a 1:3 ratio?
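A rough back-of-the-envelope estimate (my own arithmetic, treating trials as independent with a 26%/74% emotional/calm split):
[code]
# Rough estimate only: independent trials, 26% emotional / 74% calm
n_trials, p_emotional, p_calm = 450, 0.26, 0.74
n_emotional = n_trials * p_emotional        # about 117 emotional trials
p_13_calms_before = p_calm ** 13            # about 0.02
print(n_emotional * p_13_calms_before)      # roughly 2.3 qualifying sequences
[/code]
So the leftmost points in his Figure 8 rest on only a couple of observations each.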

In any case he does find the positive linear trend that would indicate the presence of an anticipation effect. The low coefficient would be due to noise in the tests - the same is true of my simulations, since I add in noise.

I am not sure what he means by the next part. If there were a positive linear trend then the mean PSCL values relative to the local baseline would be positive.

All in all he seems to be going out of his way to avoid finding an anticipation effect in his data, which could be easily detected by various analyses on the entire dataset of each experiment.

[edit]And note that in his figure 8, the observations closer to the emotional event (which, as he points out, will be more representative and have a smaller error) show a quite strongly positive trend; I would suggest this pretty much supports the anticipation effect as being the culprit.[/edit]
 
Robin, I think what Delvo is getting at is that "calm" and "emotional" are more akin to "angry" and "emotional" or "happy" and "emotional", i.e. they all represent emotion.

Perhaps better terms would be "calm" and "agitated" or "calm" and "aroused".

But if Radin's study used "calm" and "emotional" I would stick with that for the sake of compatibility.
Yes, I see. The "calm"/"emotional" terminology was, as you point out, Radin's choice. I had begun with "unemotional"/"emotional" in my simulation.

From my point of view, in any case, it is simply ones and zeros.
 
Robin, I think what Delvo is getting at is that "calm" and "emotional" are more akin to "angry" and "emotional" or "happy" and "emotional", i.e. they all represent emotion.

Perhaps better terms would be "calm" and "agitated" or "calm" and "aroused".

But if Radin's study used "calm" and "emotional" I would stick with that for the sake of compatibility.

I would use "non-emotive" and "emotive".

Leon
 
I was also pondering this presentiment stuff. I had considered a few explanations but had it pretty much on the backburner till I saw Robin's post here. It made me go back to Radin's paper, most especially the part Limbo copied.
Then it hit me like a hammer.

BTW, this paper from 2002 also presents a simulation explaining the "presentiment":
A COMPUTATIONAL EXPECTATION BIAS AS REVEALED BY SIMULATIONS OF PRESENTIMENT EXPERIMENTS
Jan Dalkvist, Joakim Westerlund & Dick J. Bierman
I can't post links but you can get it via Google Scholar.

Robin, regarding the paper by Dean Radin in your OP. I was wondering if you could highlight the specific flaws in this section:

Anticipatory strategies.
Only the relevant bits:
For example, as previously mentioned, people’s idiosyncratic responses to photos inevitably blur the dichotomous distinction between calm and emotional targets. A more realistic anticipatory simulation might use targets with a continuous range of emotionality, and it would adjust the arousal value for trial N + 1 up or down according to the emotionality rating of trial N.
The 4th experiment in Radin's paper (p-value = 0.28), as well as Broughton's failed replication, uses such a continuous range. It seems as if the effect depended on the dichotomy. That is one of the conclusions of Radin's paper in any case.
The data used to test for an expectation effect came from experiments that were explicitly designed to present strongly dichotomous targets.

->This is a total red herring.

But idealized simulations and strategies aside, we can investigate whether anticipatory strategies can explain the observed results by examining the actual data. To do this, data from Experiments 2 and 3 were pooled; this provided a set of 3,469 trials, contributed by a total of 103 participants. The pre-stimulus sums of SCL (i.e., PSCL) prior to stimulus onset in each of the two experiments were normalized separately and then combined.
As Robin pointed out, this is a bit odd. It's not just the pooling of data, it's that it is pooled differently than when he is doing his actual hypothesis tests.
It also seems to me that he is treating the data differently. I haven't wrapped my mind around this, and I think I don't care to anymore, so don't take this as true. The paper above makes some recommendations on how to treat the data to minimize the influence of the expectation effect. If I am not mistaken (I could easily be!) then Radin is heeding that advice to minimize the expectation effect when testing for that effect but not when testing his hypothesis.

Then the targets were separated into two classes: emotional, defined as those trials with the top 26% emotionality ratings, and calm, defined as those 74% with lower emotionality ratings. These percentages were selected to create about a 1:3 ratio of emotional to calm targets to ensure that there would be an adequate number of calm targets in a row to test the expectations of an anticipatory strategy.
This is a real howler. No doubt there.

Experiment 2 has this to say on the pictures:
Thirty new pictures were added to the photo pool from Experiment 1, bringing the total to 150. Five men and five women were asked to independently examine the pictures, one at a time, in random order. The rating dimension consisted of a 100 point scale, and the rating method asked each person to view a picture on a computer screen and move a pointer across a sliding scale to indicate his or her assessment.
The photo pool from experiment 1 is described so:
Calm targets included photos of landscapes, nature scenes, and people; emotional targets included erotic, violent, and accident scenes. Most of the calm pictures were selected from a Corel Professional Photo CD-ROM. Most of the emotional pictures were selected from photo archives accessible over the Internet.
Unfortunately we are not told how exactly the photos were rated, but based on their origin they can be assumed to have been at least somewhat dichotomous in their emotionality rating. (p-value for experiment 2: 0.11)

The target pool consisted of the 80 most calm and the 40 most emotional pictures from the International Affective Picture System (IAPS, Bradley et al., 1993; Ito et al., 1998), where ‘‘calm’’ and ‘‘emotional’’ were defined by the emotionality ratings (averaged across gender) that accompany the IAPS picture set. In an attempt to further enhance the contrast between the emotional and calm targets, participants wore headphones that played one of 20 randomly selected noxious sounds for 3 seconds during presentation of emotional pictures (i.e., screams, sirens, explosions, etc.). Calm pictures were presented in silence.
Alright, so here we have a massive difference in calm vs. emotional. As far as the pictures go we deal with the difference between cute furry things and mutilated corpses, not even mentioning the noise. (p-value=0.0004)

He also says this:
The use of a 2:1 ratio of calm to emotional photos would seem to add noise to the analysis of Hypothesis 1 (which is therefore a conservative test), since that analysis sorts targets by their pre-assessed emotionality ratings and splits the data into two equal subsets. But in practice, people show strong idiosyncratic responses to photos, and this significantly blurs a nominal dichotomy into a continuous scale of emotionality. Thus, splitting the trials into two equal subsets does not introduce as much noise as one might expect.

Whew, that was something. Does everyone still remember what he does to test for expectation? He splits the trials in a 3:1 fashion. He shovels "emotional trials" over into the "calm trials". Gee, might that add noise?

Conclusion:
Radin uses methods designed to fail to detect an expectation effect. He does not use the same methods to test his own hypotheses but shows that he is aware of at least some problems with those inadequate methods.

This may be self-deceit but it certainly is deceit in one form or another.
 
More on Radin's test for the anticipation effect:

[image: radin_fig8.PNG.jpg]


And his commentary:

Dean Radin said:
The weighted linear correlation between mean PSCL and trial number for steps 13 → 1 was positive, but not significantly so (r = 0.29, p = 0.17). Notice that with one exception, all of the mean PSCL values prior to the emotional trial were negative, and three were significantly negative, including the trial immediately preceding the emotional trial.
Thus, contrary to the expectations of an anticipatory strategy, a subset of participants specifically selected for exhibiting strong differential results suggestive of a genuine presentiment effect showed relaxation responses before the emotional target rather than progressive arousal.

In sum, while idealized anticipatory strategies might provide an explanation of the observed results in principle, the actual data did not indicate that such strategies were employed.
OK, so I do the same test with my simulation - 1:3 ratio emotional to calm, 480 trials. Here is my graph:
[image: robin_fig8.PNG.jpg]

Hmm! r = -0.25. Using Radin's methodology I have completely ruled out the anticipation effect as an explanation (even though I put it there myself). And I still get a "presentiment" effect:
[image: robin_480trial.PNG.jpg]

So what has happened? Have the time symmetries pervading fundamental physics manifested themselves in my program?

The answer, in this humble poster's view, is almost certainly yes!

But one last graph:
[image: robin_freq.PNG.jpg]

Could it be the rather more boring fact that the small number of observations here cannot sort out the (individually tiny) anticipation effect from the noise?

No, it has got to be time symmetries.
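
For anyone who wants to reproduce the Figure 8-style test, it amounts to something like this (an illustrative Python sketch rather than my actual code; pre_values stands in for whatever pre-stimulus measure is used):
[code]
import math

def anticipation_trend(states, pre_values, max_steps=13):
    """For k = 1..max_steps, mean pre-stimulus value over calm trials that sit
    k steps before an emotional trial (within an unbroken run of calms)."""
    sums = [0.0] * (max_steps + 1)
    counts = [0] * (max_steps + 1)
    for i, state in enumerate(states):
        if state != 1:                      # walk backwards from emotional trials only
            continue
        k = 1
        while k <= max_steps and i - k >= 0 and states[i - k] == 0:
            sums[k] += pre_values[i - k]
            counts[k] += 1
            k += 1
    means = [sums[k] / counts[k] if counts[k] else float("nan")
             for k in range(1, max_steps + 1)]
    return means, counts[1:]

def weighted_correlation(x, y, w):
    """Weighted Pearson correlation (the r reported for the trend)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return cov / math.sqrt(vx * vy)

# Usage: steps run from -13 (far from the emotional trial) to -1 (just before it),
# so an anticipation effect predicts a positive correlation.  Steps with no cases
# (nan means) should be dropped before correlating.
# means, counts = anticipation_trend(states, pre_values)
# r = weighted_correlation(list(range(-13, 0)), means[::-1], counts[::-1])
[/code]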
 
This is all fascinating stuff. I had wondered if presentiment was finally the breakthrough that parapsychology needed: a simple, replicable and efficient experiment. But I hadn't had time to sit down with the papers properly (nor, to be honest, do I have the expertise). Thanks to Robin for this. It looks like presentiment is on shaky ground at the moment.
 