
Simulating "Presentiment" Experiments

I was also pondering this presentiment stuff. I had considered a few explanations but had it pretty much on the backburner till I saw Robin's post here. It made me go back to Radin's paper, most especially the part Limbo copied.
Then it hit me like a hammer.

BTW, this paper, from 2002, also presents a simulation:
A Computational Expectation Bias as Revealed by Simulations of Presentiment Experiments

Jan Dalkvist, Joakim Westerlund & Dick J. Bierman
I can't post links, but you can find it via Google Scholar.

Robin, regarding the paper by Dean Radin in your OP. I was wondering if you could highlight the specific flaws in this section:

Anticipatory strategies.
Only the relevant bits:
For example, as previously mentioned, people's idiosyncratic responses to photos inevitably blur the dichotomous distinction between calm and emotional targets. A more realistic anticipatory simulation might use targets with a continuous range of emotionality, and it would adjust the arousal value for trial N + 1 up or down according to the emotionality rating of trial N.
Both the 4th experiment in Radin's paper (p-value = 0.28) and Broughton's failed replication use such a continuous range. It seems the effect depended on the dichotomy. That is one of the conclusions of Radin's paper, in any case.
The data used to test for an expectation effect came from experiments that were explicitly designed to present strongly dichotomous targets.

->This is a total red herring.

But idealized simulations and strategies aside, we can investigate whether anticipatory strategies can explain the observed results by examining the actual data. To do this, data from Experiments 2 and 3 were pooled; this provided a set of 3,469 trials, contributed by a total of 103 participants. The pre-stimulus sums of SCL (i.e., PSCL) prior to stimulus onset in each of the two experiments were normalized separately and then combined.
As Robin pointed out, this is a bit odd. It's not just the pooling of the data; it's that the data are pooled differently than when he runs his actual hypothesis tests.
It also seems to me that he is treating the data differently. I haven't fully wrapped my mind around this, and I don't think I care to anymore, so don't take this as true. The paper above makes some recommendations on how to treat the data to minimize the influence of the expectation effect. If I am not mistaken (I could easily be!), then Radin is heeding that advice to minimize the expectation effect when testing for that effect, but not when testing his hypothesis.
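For concreteness, the pooling step described in the quoted passage - normalize each experiment's pre-stimulus sums separately, then combine - amounts to something like this sketch (variable names are mine, not Radin's):

```python
import statistics

def pool_normalized(exp2_pscl, exp3_pscl):
    """Z-score each experiment's pre-stimulus SCL sums separately,
    then pool the normalized values into a single set of trials."""
    def zscores(xs):
        mu = statistics.mean(xs)
        sd = statistics.pstdev(xs)
        return [(x - mu) / sd for x in xs]
    return zscores(exp2_pscl) + zscores(exp3_pscl)
```

The point of contention is not this step in itself, but that the pooling and normalization here differ from what is done in the hypothesis tests.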

Then the targets were separated into two classes: emotional, defined as those trials with the top 26% emotionality ratings, and calm, defined as those 74% with lower emotionality ratings. These percentages were selected to create about a 1:3 ratio of emotional to calm targets to ensure that there would be an adequate number of calm targets in a row to test the expectations of an anticipatory strategy.
This is a real howler. No doubt there.
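Mechanically, the split he describes is just a percentile cut - something like this sketch (illustrative only, names mine):

```python
def split_by_emotionality(ratings, top_frac=0.26):
    """Label the trials with the top 26% of emotionality ratings
    'emotional' and the remaining 74% 'calm', giving roughly the
    1:3 emotional-to-calm ratio described in the quoted passage."""
    ranked = sorted(ratings, reverse=True)
    cutoff = ranked[int(len(ratings) * top_frac) - 1]
    emotional = [r for r in ratings if r >= cutoff]
    calm = [r for r in ratings if r < cutoff]
    return emotional, calm
```

Contrast this with the 50/50 split used for the hypothesis test itself: the two analyses are not slicing the same data the same way.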

Experiment 2 has this to say on the pictures:
Thirty new pictures were added to the photo pool from Experiment 1, bringing the total to 150. Five men and five women were asked to independently examine the pictures, one at a time, in random order. The rating dimension consisted of a 100 point scale, and the rating method asked each person to view a picture on a computer screen and move a pointer across a sliding scale to indicate his or her assessment.
The photo pool from experiment 1 is described so:
Calm targets included photos of landscapes, nature scenes, and people; emotional targets included erotic, violent, and accident scenes. Most of the calm pictures were selected from a Corel Professional Photo CD-ROM. Most of the emotional pictures were selected from photo archives accessible over the Internet.
Unfortunately we are not told exactly how the photos were rated, but based on their origin they can be assumed to have been at least somewhat dichotomous in their emotionality ratings. (p-value for experiment 2: 0.11)

The target pool consisted of the 80 most calm and the 40 most emotional pictures from the International Affective Picture System (IAPS, Bradley et al., 1993; Ito et al., 1998), where "calm" and "emotional" were defined by the emotionality ratings (averaged across gender) that accompany the IAPS picture set. In an attempt to further enhance the contrast between the emotional and calm targets, participants wore headphones that played one of 20 randomly selected noxious sounds for 3 seconds during presentation of emotional pictures (i.e., screams, sirens, explosions, etc.). Calm pictures were presented in silence.
Alright, so here we have a massive difference in calm vs. emotional. As far as the pictures go we deal with the difference between cute furry things and mutilated corpses, not even mentioning the noise. (p-value=0.0004)

He also says this:
The use of a 2:1 ratio of calm to emotional photos would seem to add noise to the analysis of Hypothesis 1 (which is therefore a conservative test), since that analysis sorts targets by their pre-assessed emotionality ratings and splits the data into two equal subsets. But in practice, people show strong idiosyncratic responses to photos, and this significantly blurs a nominal dichotomy into a continuous scale of emotionality. Thus, splitting the trials into two equal subsets does not introduce as much noise as one might expect.

Whew, that was something. Does everyone still remember what he does to test for expectation? He splits the trials in a 3:1 fashion. He shovels "emotional trials" over into the "calm trials". Gee, might that add noise?

Conclusion:
Radin uses methods designed to fail to detect an expectation effect. He does not use the same methods to test his own hypotheses but shows that he is aware of at least some problems with those inadequate methods.

This may be self-deceit but it certainly is deceit in one form or another.
 
More on Radin's test for the anticipation effect:

radin_fig8.PNG.jpg


And his commentary:

Dean Radin said:
The weighted linear correlation between mean PSCL and trial number for steps 13 → 1 was positive, but not significantly so (r = 0.29, p = 0.17). Notice that with one exception, all of the mean PSCL values prior to the emotional trial were negative, and three were significantly negative, including the trial immediately preceding the emotional trial.
Thus, contrary to the expectations of an anticipatory strategy, a subset of participants specifically selected for exhibiting strong differential results suggestive of a genuine presentiment effect showed relaxation responses before the emotional target rather than progressive arousal.

In sum, while idealized anticipatory strategies might provide an explanation of the observed results in principle, the actual data did not indicate that such strategies were employed.
OK, so I do the same test with my simulation - 1:3 ratio emotional to calm, 480 trials. Here is my graph:
robin_fig8.PNG.jpg

Hmm! r = -0.25. Using Radin's methodology I have completely ruled out the anticipation effect as an explanation (even though I put it there myself). And I still get a "presentiment" effect:
robin_480trial.PNG.jpg

So what has happened? Have the time symmetries pervading fundamental physics manifested themselves in my program?

The answer, in this humble poster's view, is almost certainly yes!

But one last graph:
robin_freq.PNG.jpg

Could it be the rather more boring fact that the small number of observations here cannot sort out the (individually tiny) anticipation effect from the noise?

No, it has got to be time symmetries.
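For anyone who wants to experiment with this, here is a bare-bones sketch of the kind of gambler's-fallacy model described above - not the actual program, and every parameter name and value is illustrative:

```python
import random

def simulate_session(n_trials=480, p_emotional=0.25,
                     build=0.2, release=0.6, seed=7):
    """Simulated participant whose pre-stimulus arousal builds a little
    with every calm picture (unreleased tension) and drops after an
    emotional one (released tension) - no hard 'reset' to baseline.
    Targets are drawn independently with a 1:3 emotional-to-calm ratio."""
    rng = random.Random(seed)
    tension = 0.0
    trials = []  # (pre-stimulus arousal, is_emotional) per trial
    for _ in range(n_trials):
        is_emotional = rng.random() < p_emotional
        trials.append((tension, is_emotional))
        if is_emotional:
            tension = max(tension - release, 0.0)  # tension released
        else:
            tension += build                       # tension builds
    return trials
```

Whether an analysis detects the bias that is built into these trials then depends entirely on how they are aggregated - which is the point of the graphs above.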
 
This is all fascinating stuff. I had wondered if presentiment was finally the breakthrough that parapsychology needed – a simple, replicable and efficient experiment. But I hadn't had time to sit down with the papers properly (nor, to be honest, do I have the expertise). Thanks to Robin for this. It looks like presentiment is on shaky ground at the moment.
 
Robin,

Have you read this one?

Skin Conductance Prestimulus Response: Analyses, Artifacts and a Pilot Study

[...]

Expectation: Skin Conductance Response Analysis

Lately there has been theoretical interest in possible contributions to apparent prestimulus response by expectation effects (Dalkvist et al., 2002; Wackermann, 2002). The approach is to assume that there is a monotonic increase in the dependent variable as a function of time between adjacent arousing stimuli, to simulate this in a computer program, and then to compute the resulting Z score reflecting this assumption. The discussion has focused upon skin conduction level shifts, but the concept could apply to our dependent variable as well if there were an increase in the rate of SCRs as time elapses between adjacent audio stimuli. However, these studies also show that in analyses in which data are summed across subjects, as here, the effect can only be very small, and this is confirmed by the two tests which follow.

In the first test we moved the analysis period back 3 seconds to [-6, -3) seconds relative to the audio and control stimuli and recalculated the skin conductance response statistic. By repeating this process of moving the whole analysis backwards, we computed the effect size as a function of prestimulus intervals before the stimulus. Figure 7 shows the result, including the effect size, for the prestimulus interval of [-3, 0).

The effect size in the 3-second window immediately prior to the prestimulus period is significantly smaller than that in the prestimulus region (t_diff = 2.15, df = 2636, p = 0.016), and the effect size remains at mean chance expectation for an additional three earlier intervals. We now have the following three conditions: An excess of skin conductance responses in the prestimulus epoch prior to audio stimuli, no such excess over chance expectation in the preceding epochs and a randomized inter-stimulus interval of 40 to 80 seconds. Any expectation effect must appear as an increase in the dependent variable (i.e., rate of production of SCRs), which is a monotonically increasing function of the time since the last arousing stimulus.

Such an effect, were it to occur, could not give rise to an increase in the SCR rate solely in the epoch prior to the next startle stimulus, a fortiori when the timing of that stimulus was randomly varied and unknown to the participant. A second approach to the expectation issue is, perhaps, more direct. By definition, expectation artifacts of any kind should contribute more the longer the time since the last audio stimulus. The mean of the distribution of times since the last audio stimulus was 106.886 seconds for audio stimuli and 109.482 seconds for controls (t_diff = -0.838, df = 2498, p = 0.800), and the distributions of these times were not significantly different (Kolmogorov–Smirnov p = 0.232).

Since these two sets of times are statistically indistinguishable, the expectation bias model cannot produce differences in SCR rates. We confirmed this in our data since we found a Pearson's r correlation of 0.056 (t = 0.448, df = 64, p = 0.382) for effect sizes correlated against the mean time between adjacent audio stimuli. This result in itself is sufficient to reject an expectation bias hypothesis; however, coupled with the result that the effect size was at chance in the [-6, -3) interval (and earlier ones - see Figure 7) and is significantly lower than that for the actual prestimulus interval of [-3, 0), we conclude that expectation did not account for the proportional SCR rate in this study.
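For reference, the second check in the quoted passage - correlating effect sizes against the mean time between adjacent audio stimuli - is a plain Pearson correlation, i.e.:

```python
def pearson_r(xs, ys):
    """Pearson's r: covariance of the two samples divided by the
    product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```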
 
Robin,

Have you read this one?
Skin Conductance Prestimulus Response: Analyses, Artifacts and a Pilot Study

It is a new experiment with a tiny effect that has not been independently replicated.
This means:
1) Ruling out an expectation bias in this experiment does not negate its presence in the "standard" experiment.
2) It is not clear whether this experiment is replicable. If it is not, it needs no explanation other than chance or an unnoticed screw-up.

You received extensive replies to your previous posting. Any comments?
 
You are right that it is similar to the gambler's fallacy.

Could you explain for someone not that clever (me) how that's similar to the gambler's fallacy? I don't get it...
 
Could you explain for someone not that clever (me) how that's similar to the gambler's fallacy? I don't get it...
I expect you are as clever as I am, which is not necessarily a compliment.

If I am operating under the gambler's fallacy then the more consecutive blacks I see the greater is my expectation of a red, under the mistaken apprehension that a red becomes more probable.

In my model the person's expectation of an emotive picture increases with every consecutive non-emotive picture.

That is not necessarily the mechanism at work here, but it is at least a candidate.
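A small brute-force check of the fallacy itself (illustrative only): with independent trials, the conditional probability of an emotional picture is the same no matter how many calm pictures preceded it.

```python
from fractions import Fraction
from itertools import product

P_EMOTIONAL = Fraction(1, 4)  # 1:3 emotional-to-calm ratio

def prob_emotional_after_run(k):
    """Exact P(trial k+1 is emotional | first k trials all calm),
    by enumerating every length-(k+1) sequence of trial types."""
    num = den = Fraction(0)
    for seq in product([True, False], repeat=k + 1):
        p = Fraction(1)
        for is_emotional in seq:
            p *= P_EMOTIONAL if is_emotional else 1 - P_EMOTIONAL
        if not any(seq[:k]):        # condition: first k trials calm
            den += p
            if seq[k]:              # event: next trial emotional
                num += p
    return num / den
```

It comes out to 1/4 for every k - the gambler's expectation rises, but the probability does not.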
 
Robin,

Have you read this one?

Skin Conductance Prestimulus Response: Analyses, Artifacts and a Pilot Study

[...]

If the anticipation effect were at play here, we would expect the trend to manifest itself over more than one successive "silent" trial, so why is he looking for it in the last 16 seconds of data, where we would not expect to find it?

Also, since it is more a function of the previous stimulus than of elapsed time, I don't see how his last point is relevant.

There are a number of curious features in this experiment, not least the fact that the responses to all stimuli - not just startle, but control and pseudo-participant stimuli as well - begin in the pre-stimulus period. I wonder why?

Is it, perhaps, an artifact of the mathematical manipulation? For example, where is the "de-trending" they speak of pivoted? Figure 2 suggests it is pivoted 3 seconds before the stimulus, altering the values at zero. Have they perhaps goofed here?

Or have I misunderstood these graphs?
 
Limbo,

Also if you look at my second graph in #23 and imagine that this is de-trended and we zoomed in on -5 to 1 seconds on the x axis and zoomed in on 0 to 1.5 on the y axis, wouldn't I have something very like the epoch aggregate that Spottiswoode presents?
 
A Computational Expectation Bias as Revealed by Simulations of Presentiment Experiments

Hey Robin, what do you think of this paper? Do they argue the same thing you do (seems so to me), or do you see a difference between what you're saying and what they're saying in this paper?
 
Hey Robin, what do you think of this paper? Do they argue the same thing you do (seems so to me), or do you see a difference between what you're saying and what they're saying in this paper?
A couple of people have drawn my attention to two papers in which this has been raised.

Ah well, nothing new under the sun. I have yet to go through it in detail, but going by what the experimenters are controlling for they do not appear to have taken into account that runs of "emotive" trials can have an effect too.
 
I had not realised it but I am famous. Radin has commented a couple of times on this post in his Entangled Minds blog.

https://www.blogger.com/comment.g?blogID=16158865&postID=9045061547189617170&page=1

Here are two:
Dean Radin said:
I've commented on both of those critiques on this blog. My colleagues and I have been aware of these and other proposed artifacts for many years, we've studied them, and they do not account for the observed results.

E.g., anticipatory models assume that the experiment uses dichotomous targets (only calm vs. emotional). But most of my presentiment experiments (including the one mentioned in this post) select targets at random from a large pool of pictures of widely varying emotionality. That avoids the gambler's fallacy type of strategy, which is what anticipatory models are based upon.
And:
An anticipatory bias can occur if a participant systematically becomes more anxious after each successive calm target until an emotional target appears, and then resets to a low anxiety condition. Computer models of such a strategy do show a bias that can mimic a presentiment result, if you use dichotomous targets, and if you run small numbers of trials in a single session, and if you don't pool trials across sessions.

The problem is that in the real world none of these assumptions are valid. I discussed this in the article mentioned at the mind-energy.net forum. Colleagues who've examined their own presentiment data confirm that realistic anticipatory models cannot account for the actual physiological results observed in these experiments.
Dean Radin's dismissal of this idea seems to be based on a number of false or dubious assumptions:
1. It only shows up in small numbers of trials in a single session
2. It requires a "reset" to low anxiety level
3. It does not happen with a variety of targets within each category
4. It would be detectable as an overall trend in the data
5. It would be detectable above noise in a small sample
6. You only need to study successive "calm" trials to detect it
7. The gambler's fallacy only works with dichotomous targets
8. The anticipation effect can only be an artifact of the gambler's fallacy

All of these are highly dubious or just wrong:
1. The simulation used the same number of trials as Radin's experiments
2. There is no "reset" in the simulation
3. Irrelevant; it only requires that there be a noticeable difference between "calm" and "emotional"
4. The overall trend in the data is often against the anticipation effect; the important distinction is between the aggregate trends in "calm" trials and in "emotional" trials
5. In the simulation it is not detectable above noise in small samples, and yet it shows up in the overall aggregation
6. You need to study both "calm" and "emotional" trials to detect it
7. The gambler's fallacy can work with all sorts of targets, even random-walk stock market data - the "this stock is due for a rally any time now" effect
8. The anticipation effect could be the result of a number of things, including unreleased tension (in calm trials) and released tension (in emotional trials).

All this casts pretty heavy doubt on Radin's understanding of the anticipation effect, and on whether he has seriously considered it as an artifact.

Here is a portion of the raw data showing that successive calm trials sometimes have an overall downward slope (where the anticipation effect would predict the opposite). But the "presentiment" effect still shows up in the aggregate.

pres_det3.png.jpg


Now I am going to work on some tests that would effectively detect the anticipation effect as I have described it.
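One candidate test, sketched under the same assumptions as before (trials as a list of (pre-stimulus arousal, is_emotional) pairs; names mine): group the pre-stimulus measure by the length of the run of calm trials immediately preceding each trial. An anticipation effect should show arousal rising with run length; a genuine presentiment effect should not depend on it.

```python
def arousal_by_calm_run(trials):
    """Mean pre-stimulus arousal grouped by the number of consecutive
    calm trials immediately preceding each trial."""
    groups = {}
    run = 0
    for arousal, is_emotional in trials:
        groups.setdefault(run, []).append(arousal)
        run = 0 if is_emotional else run + 1
    return {k: sum(v) / len(v) for k, v in sorted(groups.items())}
```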
 
I've noticed that Dean Radin doesn't answer questions properly. Odd that.

Anyway, he said that the pictures were of "widely varying emotionality", but (and I'm too lazy to go and check) didn't he define the stimuli as calm or emotional? In other words, he didn't take into account the level of emotionality, rather grouped them in one of two groups.
 
I've noticed that Dean Radin doesn't answer questions properly. Odd that.

Anyway, he said that the pictures were of "widely varying emotionality", but (and I'm too lazy to go and check) didn't he define the stimuli as calm or emotional? In other words, he didn't take into account the level of emotionality, rather grouped them in one of two groups.
Here is how he describes the dataset in the paper:
Dean Radin said:
Calm targets included photos of landscapes, nature scenes, and people; emotional targets included erotic, violent, and accident scenes.
And for a later experiment:
In an attempt to further enhance the contrast between the emotional and calm targets, participants wore headphones that played one of 20 randomly selected noxious sounds for 3 seconds during presentation of emotional pictures (i.e., screams, sirens, explosions, etc.). Calm pictures were presented in silence.
So the "widely varying emotionality" is only within these categories. I am not really sure why he thinks this would avoid the effect I am talking about.
 
7. The gambler's fallacy can work with all sorts of targets, even random-walk stock market data - the "this stock is due for a rally any time now" effect.

I'm still trying to understand this...

Dean Radin's data are put into 2 categories: calm and emotional. Why does the kind of pictures he's using matter (ranging from very calm to very emotional, according to his blog), if at the end of the process he's using 2 categories?

I'm thinking of the Gambler's Fallacy in the context of the Cognitive Dissonance experiments. See:

http://www.som.yale.edu/Faculty/keith.chen/papers.htm

Talking about the content of the pictures seems to me like talking about the nature of the objects people had to rate in the Cognitive Dissonance experiments (camera, toaster, clock, and so on), and saying that because they used a wide variety of objects, their experiment addresses the Gambler's Fallacy issue.

I'm probably misunderstanding all of this, but hey Robin, if you could explain this to me... Thanks! Love learning new stuff...
 
I'm still trying to understand this...

Dean Radin's data are put into 2 categories: calm and emotional. Why does the kind of pictures he's using matter (ranging from very calm to very emotional, according to his blog), if at the end of the process he's using 2 categories?

I'm thinking of the Gambler's Fallacy in the context of the Cognitive Dissonance experiments. See:

http://www.som.yale.edu/Faculty/keith.chen/papers.htm

Talking about the content of the pictures seems to me like talking about the nature of the objects people had to rate in the Cognitive Dissonance experiments (camera, toaster, clock, and so on), and saying that because they used a wide variety of objects, their experiment addresses the Gambler's Fallacy issue.

I'm probably misunderstanding all of this, but hey Robin, if you could explain this to me... Thanks! Love learning new stuff...
I think it is probably Radin who is misunderstanding it.
 
