
Dean Radin - harmless pseudo-psientist.

Hard drives of the day were slower than that. Read times of 20 ms, seek times 20-30 ms, track-to-track 2 ms. Total is 50 ms, maybe 100 ms or more?
To avoid jitter in the presentiment window, Radin would have to activate the drive at t = 3 - (worst-case load time) s and then display the image from RAM at t = 3 s. Alternatively, he could access the drive at a fixed time "just before", and accept the access time as jitter.
If it is the former, why not say so? I presume he has gone through a similar thought process, and decided that neither is important, as opposed to removing all doubt by displaying all photos from RAM.
Radin's claim is for presentiment. Nobody would be impressed if the response started at t=0. Anything that initiates a response before t=0 is a hidden error that could allow him to drag his samples back in time to his advantage.
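
For concreteness, here is a minimal sketch of the first option - purely illustrative Python, where load_image, display_image and the 100 ms worst case are hypothetical placeholders, not anything documented about Radin's software:

```python
import time

WORST_CASE_LOAD = 0.1  # assumed worst-case disk load time, in seconds

def run_trial(load_image, display_image, trigger_time):
    display_at = trigger_time + 3.0          # image is due 3 s after the trigger
    # Idle until it is time to start the disk access: late enough to avoid
    # a long pre-image stimulus, early enough to cover the worst case.
    while time.monotonic() < display_at - WORST_CASE_LOAD:
        pass
    image = load_image()                     # variable duration, <= WORST_CASE_LOAD
    while time.monotonic() < display_at:     # wait out whatever time is left
        pass
    display_image(image)                     # fires at a fixed, jitter-free instant
```

The variable-length disk access is absorbed before the fixed display instant, which is exactly why it would constitute a pre-image stimulus.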
 
They can indeed be of different size, although different levels of compression can have a greater effect than complexity.


That's what I'm talking about: It takes time to decompress a compressed image. It takes computer power.

However, it is of no relevance whatsoever to the experiment. We can see this by examining what difference it might make ~

Let's say the images differ hugely in size:

Image 1: 50K
Image 2: 500K

(I'm being generous, as images of the dimensions mentioned are very unlikely to differ so much in size)

You are demonstrably ignorant of image compression. It is by far not unlikely at all that two pictures are that different in size.

Assuming a similar level of fragmentation, for obvious reasons, let's say I2 will take approximately 10X longer to retrieve than I1.

Taking the rough HD speed for a 486 (3600 rpm est.) - again I'm being generous, this is the slowest PC he used - it can be said that image I1 would take approximately 0.002 seconds to retrieve and I2 0.02 seconds.

Therefore, in instance 1, we have the 3 second presentiment time + 0.002 seconds retrieval after the presentiment time has finished.

In instance 2, we have the 3 second presentiment time + 0.02 seconds retrieval after the presentiment time has finished.

How can that possibly make a difference to anything?
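
To make that arithmetic explicit (these are the assumed figures above, not measurements - the 25 MB/s transfer rate is simply the rate implied by "50K in 0.002 seconds"):

```python
transfer_rate = 25_000_000  # bytes/s, the rate implied by the figures above
for name, size in [("I1", 50_000), ("I2", 500_000)]:
    t = size / transfer_rate
    print(f"{name}: {t:.3f} s retrieval, {3 + t:.3f} s from trigger to display")
# I1: 0.002 s retrieval, 3.002 s from trigger to display
# I2: 0.020 s retrieval, 3.020 s from trigger to display
```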

Oh, ye of little knowledge...

You are assuming that the program has absolute, exclusive access to the CPU and the hard disk. You simply can't make that assumption. In reality, you are very, very, very wrong.

Do not presume to lecture people on this. You are totally ignorant, and you only serve to increase confusion, and eventually make more people believe in woo.
 
Hard drives of the day were slower than that. Read times of 20 ms, seek times 20-30 ms, track-to-track 2 ms. Total is 50 ms, maybe 100 ms or more?

Let's assume really archaic technology, then. How does it affect the experiment?

One image takes 3.1s, the next 3.13s, the next 3.11s. Yes, 3.11s isn't 3s but so what?

And more to the point, how would the adverse effect you envisage be in any way consistent? Surely it would be random unless C images are somehow larger or more fragmented than A images, or vice-versa.

To avoid jitter in the presentiment window, Radin would have to activate the drive at t = 3 - (worst-case load time) s and then display the image from RAM at t = 3 s. Alternatively, he could access the drive at a fixed time "just before", and accept the access time as jitter.
If it is the former, why not say so?

Probably because he couldn't see why it would be relevant.

I presume he has gone through a similar thought process, and decided that neither is important, as opposed to removing all doubt by displaying all photos from RAM.

If the equipment was archaic then you're not going to fit many images into RAM. And even if you could, the access time wouldn't be 0 s, so by your own argument it wouldn't remove all doubt; the same criticism would still stand.

Radin's claim is for presentiment. Nobody would be impressed if the response started at t=0. Anything that initiates a response before t=0 is a hidden error that could allow him to drag his samples back in time to his advantage.

t=0? I don't follow.

The upshot is that the appearance of a significant effect could not be amplified by increasing the time between subject reaction and image viewing, even by a random amount each time. In fact, theoretically it could only reduce the chance of significantly positive results.
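
That point can be checked with a toy Monte Carlo - a sketch with arbitrary assumed numbers (a Gaussian response shape and up to 100 ms of uniform jitter), not Radin's analysis: random extra delay between reaction and display can only smear the average response and lower its peak.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 401)  # seconds relative to image display

def bump(shift):
    # An idealized skin-conductance response peaking `shift` s after display.
    return np.exp(-((t - shift) ** 2) / (2 * 0.05 ** 2))

aligned = bump(0.0)  # jitter-free case: every trial peaks at the same instant
jittered = np.mean([bump(rng.uniform(0.0, 0.1)) for _ in range(1000)], axis=0)

print(f"average peak, no jitter:   {aligned.max():.2f}")   # 1.00
print(f"average peak, with jitter: {jittered.max():.2f}")  # lower: diluted, not amplified
```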
 
Absolutely. The possibilities are endless. Baron, it is important to note that he has not subjected his software to any formal analysis. Is he running a little scheduled RTOS, and with what tick time?
 
Let's assume really archaic technology, then. How does it affect the experiment?

One image takes 3.1s, the next 3.13s, the next 3.11s. Yes, 3.11s isn't 3s but so what?

And more to the point, how would the adverse effect you envisage be in any way consistent? Surely it would be random unless C images are somehow larger or more fragmented than A images, or vice-versa.

You do not see the raw data, or how he parses it. He acknowledges that sound influences the response to an image. Sound influences the presentiment effect. Let's say the subject plays the Gambler's Fallacy, the effect of which is sequence dependent. Imagine the result is a presentiment that is just below the level to qualify as presentiment of an emotional image. If, for any reason, he can get an additional response that adds to that effect, or there is timing jitter that would allow him to drag it back in time, that might just push this marginal result in his favour. Just another grain of sand to add to the meta-analysed pile.

Probably because he couldn't see why it would be relevant.
One of many examples, it seems.

If the equipment was archaic then you're not going to fit many images into RAM. And even if you could, the access time wouldn't be 0 s, so by your own argument it wouldn't remove all doubt; the same criticism would still stand.

Really? How much RAM can't a PC hold? If the upcoming image is in the graphics controller's RAM, then I will accept any delay as effectively 0.

t=0? I don't follow.
Fig. 2 of his paper. t=0 is the beginning of the display period.

The upshot is that the appearance of a significant effect could not be amplified by increasing the time between subject reaction and image viewing, even by a random amount each time. In fact, theoretically it could only reduce the chance of significantly positive results.
The presentiment effect manifests itself as a change in the slope of the SCL.
If the jitter leads to a longer lead-in time (between t=-5 and t=0), then he will have a higher value at t=0. If he pulls his timing back to 'normalise', it looks like it happened earlier.
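To see the mechanism with made-up numbers (a constant SCL slope is assumed purely for illustration):

```python
slope = 0.02                       # assumed SCL rise during the lead-in, units/s
for lead_in in (5.0, 5.1, 5.3):    # nominal 5 s lead-in, plus two jittered cases
    print(f"lead-in {lead_in:.1f} s -> SCL at t=0: {slope * lead_in:.3f}")
# lead-in 5.0 s -> SCL at t=0: 0.100
# lead-in 5.1 s -> SCL at t=0: 0.102
# lead-in 5.3 s -> SCL at t=0: 0.106
```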
 
That's what I'm talking about: It takes time to decompress a compressed image. It takes computer power.

That is not what you were talking about. You said "size". It was me who corrected you and mentioned compression.

You are demonstrably ignorant of image compression. It is by far not unlikely at all that two pictures are that different in size.

Don't make me laugh. It's my job. I do it every single day. I've forgotten more about image compression and computer graphics whilst typing this sentence than you'll be able to discover when you desperately Google prior to your next post.

My point was clear. An averagely complex 1024 photo image like the one you mention, saved at 100% quality (no compression), would be around 600K.

The same image saved at 50% (blurry, full of artefacts) would be around 100K.

A difference of 500K. Now, to save one image at 100% quality and another at 50% quality the experimenter would have to have been going all out to ruin his experiment.

That's why I said, the images would not differ by such a huge amount. Savvy?

The images Radin used were 6 x 3 on the screen, we're informed. I don't know what size monitor he was using, or what resolution it was, but assuming an older and likely fairly small display with standard 800x600 this would be a maximum of 450x300 (I'm estimating, I haven't worked this out).

An average photo JPG of that size saved at 100% quality would be around 140K

The same JPG saved at 50% quality (any worse and it would be a blurry mess) would be around 25K.

That's a difference of 115K.

Assuming that Radin is not out to sabotage his own experiment, and his images are between 80 and 90% quality, we can expect a variation of maybe 60 or 70K between images.

And the relevance of your challenge, even forgetting it was ill-conceived and incorrect...?

Zilch.
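
The quality-versus-size trend is easy to check with Pillow - a sketch only; random noise is a worst case for JPEG, so the absolute byte counts run far larger than for a real photo, and only the downward trend with falling quality is the point:

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(300, 450, 3), dtype=np.uint8)
img = Image.fromarray(pixels)  # stand-in for a 450x300 photo

for quality in (100, 90, 80, 50):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    print(f"quality {quality:3d}: {len(buf.getvalue()):,} bytes")
```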

You are assuming that the program has absolute, exclusive access to the CPU and the hard disk. You simply can't make that assumption. In reality, you are very, very, very wrong.

Sure, he was out to sabotage his own experiment again and probably had Donkey Kong running in the background.

Of course, there's the minuscule CPU usage from Windows processes, which would take up an enormous 1 or 2% of the total processing power. Maybe the synchronisation software too. Add another 2%.

So what?

Do not presume to lecture people on this. You are totally ignorant, and you only serve to increase confusion, and eventually make more people believe in woo.

Time and time again you've embarrassed yourself in this thread. With this new post, however, you've embarrassed me too. I'm embarrassed for you!
 
You do not see the raw data, or how he parses it. He acknowledges that sound influences the response to an image. Sound influences the presentiment effect. Let's say the subject plays the Gambler's Fallacy, the effect of which is sequence dependent. Imagine the result is a presentiment that is just below the level to qualify as presentiment of an emotional image. If, for any reason, he can get an additional response that adds to that effect, or there is timing jitter that would allow him to drag it back in time, that might just push this marginal result in his favour. Just another grain of sand to add to the meta-analysed pile.

OK, at the risk of going on for ever, I'll sum up my opinion: such a noise could have had no effect but, in the interests of completeness, I agree it would be best if there was no noise at all.

Really? How much RAM can't a PC hold? If the upcoming image is in the graphics controller's RAM, then I will accept any delay as effectively 0.

My 486 had 1MB as I recall, I don't know what Radin's had. Again, I don't see that the delay could have an effect, but I admit that getting the images from RAM would be better.

Fig. 2 of his paper. t=0 is the beginning of the display period.

Sorry, I didn't mean I didn't understand t=0, I just couldn't relate it to your previous statement ~

"...Radin would have to active the drive at t= 3-(worst case load-time)s and then display the image from RAM at t=3s..."

Should it be t=0-(worst case) here, as t=0 is the image display time?

The presentiment effect manifests itself as a change in the slope of the SCL.
If the jitter leads to a longer lead-in time (between t=-5 and t=0), then he will have a higher value at t=0. If he pulls his timing back to 'normalise', it looks like it happened earlier.

But isn't the actual time of t=0 synchronised with the PC's clock, not with the actual display of the image on-screen?
 
Baron,
You are directly relating access time to file size. Latencies may vary or be fixed. If the file transfer time is small compared to the latencies, then it won't matter. Perhaps he parks the disk at every occasion, who knows?
He either has to settle for the disk being accessed at a fixed time before the image is shown, and is therefore guilty of pre-image stimulus, or accept jitter and the errors that gives. It's an error.
 
Baron,
You are directly relating access time to file size. Latencies may vary or be fixed. If the file transfer time is small compared to the latencies, then it won't matter. Perhaps he parks the disk at every occasion, who knows?
He either has to settle for the disk being accessed at a fixed time before the image is shown, and is therefore guilty of pre-image stimulus, or accept jitter and the errors that gives. It's an error.

OK. I'm not directly relating access time to file size other than for brevity, which is why I mentioned that fragmentation has an effect. Other things do too, I know that.

To conclude: There is no pre-image stimulus, there cannot be; but there is a "jitter" error. I think it's negligible, you don't. Therefore it would be best for Radin to eliminate it.

That's pretty much my stance.
 
OK, at the risk of going on for ever, I'll sum up my opinion: such a noise could have had no effect but, in the interests of completeness, I agree it would be best if there was no noise at all.



My 486 had 1MB as I recall, I don't know what Radin's had. Again, I don't see that the delay could have an effect, but I admit that getting the images from RAM would be better.



Sorry, I didn't mean I didn't understand t=0, I just couldn't relate it to your previous statement ~


Should it be t=0-(worst case) here, as t=0 is the image display time?
Yes, my mistake.


But isn't the actual time of t=0 synchronised with the PC's clock, not with the actual display of the image on-screen?
Not sure what you mean. Recording begins when the subject presses the button. The real time is not relevant, I think. The image should appear 5 seconds later; if he allows that 5 sec lead-in time to lengthen, he will get a (slightly) higher value, which he may refer to the start of the image rather than to the button press. He gets to choose, because, after all, who knows what the delay is?
What about the button? Is that on interrupt, or polled? If there is a latency between the pressing of the button and the capture of the time from the clock, the result will show a 5 second lead-in, but in reality it will be lengthened by the latency. This bias will be invisible in the records.
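
A toy model of that invisible bias (the 0-50 ms polling latency is an assumed figure, not a measurement):

```python
import random

random.seed(0)
NOMINAL_LEAD_IN = 5.0  # what the records will always show, in seconds

# If the button is polled, the program notices the press only after a
# random latency; the true lead-in is lengthened by that amount, but the
# recorded lead-in never changes.
latencies = [random.uniform(0.0, 0.05) for _ in range(1000)]
actual = [NOMINAL_LEAD_IN + lat for lat in latencies]

print(f"recorded lead-in: {NOMINAL_LEAD_IN:.3f} s (every trial)")
print(f"actual mean lead-in: {sum(actual) / len(actual):.3f} s")  # ~5.025 s
```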

The HD is just one of many possible errors of indeterminate effect that show lack of rigour. I could say that this is 'unscientific', but that would also be rather rude. It is inconsiderate, because all those wishing to understand his paper must do a lot of work checking up on matters he seems unwilling to make explicit. He could solve this particular problem by providing an accurate timing diagram.
 
That is not what you were talking about. You said "size". It was me who corrected you and mentioned compression.

Don't make me laugh. It's my job. I do it every single day. I've forgotten more about image compression and computer graphics whilst typing this sentence than you'll be able to discover when you desperately Google prior to your next post.


Sorry, but if there were a nomination for pwnage of the week, that one would be a winner.

Normal transmission may be resumed.

Just a thought. With all of the discussion on images etc, has anyone actually checked to see if the details are recorded somewhere in either Radin's notes or one of the reviews?
 
That is not what you were talking about. You said "size". It was me who corrected you and mentioned compression.

I said size and JPG. JPGs are compressed for size. It takes time to uncompress.

Don't make me laugh. It's my job. I do it every single day. I've forgotten more about image compression and computer graphics whilst typing this sentence than you'll be able to discover when you desperately Google prior to your next post.

I have worked professionally with computer graphics for more than 20 years. Have a nice day.

My point was clear. An averagely complex 1024 photo image like the one you mention, saved at 100% quality (no compression), would be around 600K.

The same image saved at 50% (blurry, full of artefacts) would be around 100K.

A difference of 500K. Now, to save one image at 100% quality and another at 50% quality the experimenter would have to have been going all out to ruin his experiment.

That's why I said, the images would not differ by such a huge amount. Savvy?

One JPG, very simple: 6,509 bytes.

One JPG, photo: 162,080 bytes.

Same dimensions, vastly different sizes. The simple picture is 25 times smaller than an ordinary photo.

Assuming that Radin is not out to sabotage his own experiment, and his images are between 80 and 90% quality, we can expect a variation of maybe 60 or 70K between images.

Assuming, yes. But we can't assume anything, especially not in parapsychology experiments. The whole field is infected with examples of not just gross incompetence, but also outright fraud.

You are way too gullible if you leave out such a huge possibility.

Sure, he was out to sabotage his own experiment again and probably had Donkey Kong running in the background.

Of course, there's the minuscule CPU usage from Windows processes, which would take up an enormous 1 or 2% of the total processing power. Maybe the synchronisation software too. Add another 2%.

So what?

CPU isn't the only factor - hard disks in those days were nowhere near as quick as they are today. Things took much longer then.

Baron,
You are directly relating access time to file size. Latencies may vary or be fixed. If the file transfer time is small compared to the latencies, then it won't matter. Perhaps he parks the disk at every occasion, who knows?
He either has to settle for the disk being accessed at a fixed time before the image is shown, and is therefore guilty of pre-image stimulus, or accept jitter and the errors that gives. It's an error.

Exactly.
 
humber said:
Not sure what you mean. Recording begins when the subject presses the button. The real time is not relevant, I think. The image should appear 5 seconds later; if he allows that 5 sec lead-in time to lengthen, he will get a (slightly) higher value, which he may refer to the start of the image rather than to the button press. He gets to choose, because, after all, who knows what the delay is?
What about the button? Is that on interrupt, or polled? If there is a latency between the pressing of the button and the capture of the time from the clock, the result will show a 5 second lead-in, but in reality it will be lengthened by the latency. This bias will be invisible in the records.

The HD is just one of many possible errors of indeterminate effect that show lack of rigour. I could say that this is 'unscientific', but that would also be rather rude. It is inconsiderate, because all those wishing to understand his paper must do a lot of work checking up on matters he seems unwilling to make explicit. He could solve this particular problem by providing an accurate timing diagram.

OK, I understand what you're saying. It's just that you think it's more important than I do. I agree in principle with your last statement.

Just a thought. With all of the discussion on images etc, has anyone actually checked to see if the details are recorded somewhere in either Radin's notes or one of the reviews?

I'll try and check later; it would be interesting if only to get a better idea of what the subject sees.

I said size and JPG. JPGs are compressed for size. It takes time to uncompress.

:rolleyes:

I have worked professionally with computer graphics for more than 20 years. Have a nice day.

Good, you'll have no problem understanding my argument.

One JPG, very simple: 6,509 bytes.

One JPG, photo: 162,080 bytes.

Same dimensions, vastly different sizes. The simple picture is 25 times smaller than an ordinary photo.

Why didn't you simply compress a black square to make the comparison even more absurd? I've already given a reasonable estimate of what the differences in size would be and I'll stick by that, thanks. (And even in your absurdly fixed example, the size difference is only 156K, nowhere near the 450K example you originally derided me for).

Assuming, yes. But we can't assume anything, especially not in parapsychology experiments. The whole field is infected with examples of not just gross incompetence, but also outright fraud.

You are way too gullible if you leave out such a huge possibility.

I don't think I'm gullible to assume the experimenter hasn't rigged his own experiment for failure.

CPU isn't the only factor - hard disks in those days were nowhere near as quick as they are today. Things took much longer then.

Didn't you read my posts where I estimated all that?
 
Why didn't you simply compress a black square to make the comparison even more absurd?

*ding* *ding* *ding* *ding*

The square is one of the Zener card figures. If anything, that is one that will not elicit any emotional response, as a picture of a cute puppy or a dismembered child would.

I've already given a reasonable estimate of what the differences in size would be and I'll stick by that, thanks. (And even in your absurdly fixed example, the size difference is only 156K,

Of course it would be a smaller difference, if the image dimensions were smaller. :rolleyes:

nowhere near the 450K example you originally derided me for).

What 450K example?

I don't think I'm gullible to assume the experimenter hasn't rigged his own experiment for failure.

But it wouldn't be for failure, it would be the opposite. If you can give people a clue as to which picture will be displayed, you got your positive result - even if it only means a clue once in a while. Radin doesn't need all that many false hits.
 
Didn't you read my posts where I estimated all that?

This point is trivial in comparison with the other errors and obfuscation, but the figure I gave came from a 10GB Western Digital Caviar drive - a good performer at the time.
My estimate converts milliseconds to tens, if not hundreds, of milliseconds. The sound will impact upon the subject, and that influence will depend upon its timing relative to the image. The mind processes closely timed stimuli in a non-linear way. Next error, please.
 
Those familiar with image processing will understand the difficulties of post-processing images of different sample rates, skewed sample rates, linear compression, lossless compression and so forth.
If you do not account for all of them, 'artifacts' appear.

The Gambler's Fallacy simulations (Dalkvist et al.) contain the following tasty morsel:
"The results revealed a small, but clear, positive difference between activating and calm pictures, which, however, decreased as the length of the sequence increased! (Somewhat surprisingly, Radin rejected the difference as probably being due to sampling errors.)"

It is the same with this experiment. Is the SCL linear enough that you can normalise the traces in the manner he uses? Will that produce a rectification of the signal, producing a bias?
Once data is lost, thrown away, ignored, made invisible, reverse engineering becomes a thankless task.
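
For what it's worth, here is one generic way a normalisation step can rectify noise into a bias - a statistical toy resting on Jensen's inequality, not a reconstruction of Radin's actual procedure:

```python
import random

random.seed(0)
signal = 1.0  # a flat signal with no real effect in it

ratios = []
for _ in range(100_000):
    baseline = 1.0 + random.gauss(0.0, 0.1)  # noisy pre-stimulus baseline
    ratios.append(signal / baseline)

# mean(x / b) > mean(x) / mean(b) when b is noisy, so pure noise
# acquires an apparent upward shift after normalisation.
print(f"mean normalised value: {sum(ratios) / len(ratios):.3f}")  # > 1.000
```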
He knows from his simulations that the 'gambler's fallacy' is inversely related to the sequence length. As I have mentioned, trials that are numerically 12 minutes in length take 30 minutes. There is a lot of down time. He could run each of the 40-run trials in small chunks, using any excuse to call an interruption. The tiny image on a close screen seems contrived to create subject discomfort and rest periods. He also reports broken EDA wires to be a problem. He's rather spoiled for choice for means of achieving this end.
 
You are not listening: That doesn't mean that the pictures necessarily will be retrieved from the hard disk.

But - if you insist that the pictures were retrieved immediately from the hard disk every time, do you agree that there is a possible leak, yes or no?

Can you explain what a "calm" picture is? What is an "emotional" picture?

And can you answer the questions the first time they are put to you, instead of avoiding them?

You have hit upon a real flaw in this "experiment".

The assumption that there will be a one-to-one correspondence between what the experimenter thinks is calm or emotional and what the subject will think is calm or emotional is at the center of this "experiment".

"Calm" or "emotion" are not scientific terms but rather feelings.
 
I'm starting this thread as we're getting off topic and onto Radin in another thread.

I find it difficult to have a problem with people like Dean Radin - his claims are that various forms of psi exist, but that they are so low-powered that scientific testing is virtually impossible.

To me, he is like a liberal Christian who looks to be inclusive and makes no claims which can't be hidden in gaps in our knowledge.

Mostly harmless.

A non-falsifiable position, then (assuming he really said that which is being contended)?
 
CFLarsen said:
The square is one of the Zener card figures. If anything, that is one that will not elicit any emotional response

Why would Radin want to use it, then? He said he used calming and activating images. He said nothing about neutral ones.

CFLarsen said:
Of course it would be a smaller difference, if the image dimensions were smaller.

The size I mentioned was simply the size Radin used.

CFLarsen said:
What 450K example?

The one where I used a 50K image and a 500K image as an example of files with large size differences. 500 - 50 = 450.

CFLarsen said:
But it wouldn't be for failure, it would be the opposite. If you can give people a clue as to which picture will be displayed, you got your positive result - even if it only means a clue once in a while.

You refuse to acknowledge the obvious. There is no "clue".

Let's extrapolate your own argument about image content affecting compression. We'll forget silly squares, but it's arguable that calm images (e.g. a seascape, a sunset) would compress slightly smaller than activating images (e.g. a road accident, a dead animal).

This would mean that it could take fractionally longer to retrieve activating images than calm ones.

OK. So what? There is no "clue" given, nor could there be!

In fact, because the delay in this example would likely be larger for activating images, this would actually decrease the likelihood of a positive result if presentiment is a valid phenomenon, because of the increased time between measurement and image display.

I repeat: it's best to remove all doubt but I don't see it as an issue. It's too insignificant.

humber said:
This point is trivial in comparison with the other errors and obfuscation, but the figure I gave came from a 10GB Western Digital Caviar drive - a good performer at the time.
My estimate converts milliseconds to tens, if not hundreds, of milliseconds.

I took your own values in my example. I'm not contesting them.

humber said:
The sound will impact upon the subject, and that influence will depend upon its timing relative to the image. The mind processes closely timed stimuli in a non-linear way. Next error, please.

As I've said, strictly speaking this artefact should be dispensed with. However, let's be perfectly clear: There is no effect here that could be sympathetic to prediction over this number of trials.

humber said:
Those familiar with image processing will understand the difficulties of post-processing images of different sample rates, skewed sample rates, linear compression, lossless compression and so forth.
If you do not account for all of them, 'artifacts' appear.

With a modern PC there should be no problem. The images could be larger (maybe 1024 on a 24" monitor) and retrieved from RAM. They could all be recompressed (or simply compressed, if they're from BMPs or non-compressed TIFs) to a specific size or size range, e.g. 200 - 250K, to minimise any delay differences. Radin performed his early experiments with the equipment available at the time. This does not make his approach sloppy or fraudulent.

humber said:
The Gambler's Fallacy simulations (Dalkvist et al.) contain the following tasty morsel:
"The results revealed a small, but clear, positive difference between activating and calm pictures, which, however, decreased as the length of the sequence increased! (Somewhat surprisingly, Radin rejected the difference as probably being due to sampling errors.)"

Yep, I read that, but, notwithstanding that I don't agree with the basis of these simulations and therefore don't trust them (see my earlier posts on the matter), I'm not clear on what is being said here...

humber said:
He knows from his simulations that the 'gambler's fallacy' is inversely related to the sequence length. As I have mentioned, trials that are numerically 12 minutes in length take 30 minutes. There is a lot of down time. He could run each of the 40-run trials in small chunks, using any excuse to call an interruption.

EDIT: OK - I see what you're saying. You're saying he interrupted the process to re-instigate the GF. In which case, we need to understand - did he? Where do you conclude he did?

---- THIS IS WHAT I WROTE ORIGINALLY, I'LL LEAVE IT IN FOR COMPLETENESS ---

...If the GF is inversely proportional to sequence length then that obviously means that the GF effect decreases as the sequence length increases. If Radin is deliberately prolonging the sequence length as you suggest then the GF effect will decrease.

How is that a bad thing? If the longer sequences suffer fewer artefacts due to the GF then surely this means that the data is less contaminated.

-----------------

Furthermore, why do you think he called any interruptions? I thought the subject controlled the image display triggers.

humber said:
The tiny image on a close screen seems contrived to create subject discomfort and rest periods. He also reports broken EDA wires to be a problem. He's rather spoiled for choice for means of achieving this end.

I've agreed that the tiny image is not ideal, but I don't see how you jump from stating something could be implemented better to saying that it actually helps Radin achieve more significant results.

Even in theory, all the disk noise, delay, tiny image displays and uncomfortable chairs in the world wouldn't help the subject predict an image which has not yet been chosen and nor would they give the illusion that this was the case.
 
Why would Radin want to use it, then? He said he used calming and activating images. He said nothing about neutral ones.

*groan*

The size I mentioned was simply the size Radin used.

*groan*

The one where I used a 50K image and a 500K image as an example of files with large size differences. 500 - 50 = 450.

That example was shown to be not so unlikely after all.

You refuse to acknowledge the obvious. There is no "clue".

That's what you erroneously think.

Let's extrapolate your own argument about image content affecting compression. We'll forget silly squares, but it's arguable that calm images (e.g. a seascape, a sunset)

To someone suffering from agoraphobia or hydrophobia, a seascape would be a highly "activating" image.

would compress slightly smaller than activating images (e.g. a road accident, a dead animal).

Precisely how much smaller would it have to be to have an effect?

This would mean that it could take fractionally longer to retrieve activating images than calm ones.

OK. So what? There is no "clue" given, nor could there be!

Now your position is that there couldn't be a clue?

Wow.

In fact, because the delay in this example would likely be larger for activating images, this would actually decrease the likelihood of a positive result if presentiment is a valid phenomenon, because of the increased time between measurement and image display.

Nonsense. It doesn't matter what type of image it was. All that matters is that the person can learn to distinguish between the various images. Remember, it doesn't take more than a few extra hits to get a positive result.
 
