With a modern PC there should be no problem. The images could be larger (maybe 1024 pixels on a 24" monitor) and retrieved from RAM. They could all be recompressed (or simply compressed, if they come from BMPs or uncompressed TIFs) to a specific size or size range, e.g. 200-250 KB, to minimise any delay differences. Radin performed his early experiments with the equipment available at the time. This does not make his approach sloppy or fraudulent.
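For what it's worth, equalising the file sizes is trivial nowadays. Here is a minimal sketch using Pillow (my choice of tool, nothing to do with Radin's actual setup): binary-search the JPEG quality setting until the encoded size lands in the target range.

[code]
from io import BytesIO
from PIL import Image

def compress_to_range(path, lo=200_000, hi=250_000):
    """Re-encode an image as JPEG so the file lands in [lo, hi] bytes.
    Returns the closest attempt if the range is unreachable."""
    img = Image.open(path).convert("RGB")
    q_lo, q_hi, best = 5, 95, None
    while q_lo <= q_hi:
        q = (q_lo + q_hi) // 2
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)
        size, best = buf.tell(), buf.getvalue()
        if size < lo:
            q_lo = q + 1   # file too small: raise quality
        elif size > hi:
            q_hi = q - 1   # file too large: lower quality
        else:
            return best    # within the 200-250 KB target
    return best
[/code]

Preloading the results into RAM before the session would then remove the disk-access timing differences entirely.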
Sorry, I did not make that clear. I was referring to the sampling of the data, and how timing and non-linearity may generate artifacts in a manner that parallels image processing. (Memory is not a problem: >64 MB on a 486, and he can access extended memory if need be.)
EDIT: OK - I see what you're saying. You're saying he interrupted the process to re-instigate the GF. In which case, we need to understand: did he? Where do you conclude he did?
---- THIS IS WHAT I WROTE ORIGINALLY, I'LL LEAVE IT IN FOR COMPLETENESS ----
[QUOTE]...If the GF is inversely proportional to sequence length, then that obviously means that the GF effect decreases as the sequence length increases. If Radin is deliberately prolonging the sequence length as you suggest, then the GF effect will decrease.
How is that a bad thing? If the longer sequences suffer fewer artefacts due to the GF, then surely this means that the data is less contaminated.
[/QUOTE]
Yes, but I think he tries to keep the sequences short, with longer resting periods, in order to generate the GF.
The process appears contrived to increase subject stress. The subject must stare at a closely placed screen, time after time.
The researcher is behind a screen, but within earshot. It would be easy for the researcher to track the process and learn from the subject's reactions: sound, movement, the time taken before proceeding to the next image (which is under the subject's control). It leaks.
The researcher (who is also, apparently, the subject's initial coach) monitors the process, but stops it to 'attend' to subject comfort when it is inferred that there were more emotional than calm images. This artificially correlates the high pre-sentiment of the GF with emotional images.
-----------------
Furthermore, why do you think he called any interruptions? I thought the subject controlled the image display triggers.
He is quite terse about the whole process, even though he knows sequence length is important. It is a very odd thing to allow the subject to regulate trial and sequence length. Surely you avoid that.
Forty photos take 12 minutes (about 18 s per trial), but Radin says sessions typically take half an hour, so there are rest periods. In the text he mentions EDA breakages and subject stress as being a problem. If I infer cheating because he does not supply enough information, too bad. (One reason why he doesn't get mainstream publication.)
[QUOTE]I've agreed that the tiny image is not ideal, but I don't see how you jump from stating something could be implemented better to saying that it actually helps Radin achieve more significant results.[/QUOTE]
Not ideal? Peculiar, I say. He acknowledges that subject fatigue is high, yet he uses this very small, non-standard photo at close quarters. It helps because he wants to maximise any pre-sentiment signal.
(I wonder what effect being able to see one's reflection in the monitor would have upon skin response?)
[QUOTE]Even in theory, all the disk noise, delay, tiny image displays and uncomfortable chairs in the world wouldn't help the subject predict an image which has not yet been chosen, nor would they give the illusion that this was the case.[/QUOTE]
The former no, the latter perhaps. Here is my 'just so' story.
Consider this quote from Dalquist et al., regarding Radin's simulations:
"The results revealed a small, but clear, positive difference between
activating and calm pictures, which, however, decreased as the length of the sequence increased! (Somewhat surprisingly, Radin rejected the difference as probably being due to sampling errors.)"
He came to the wrong conclusion. The authors of the simulation explain that the decrease in GF is to be expected, and outline their reasoning. Perhaps Radin did too. (I came to the same conclusion, before reading the simulations, by thinking about PRBSs.)
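The effect is easy to reproduce with a toy Monte Carlo (my own sketch, not Radin's or Dalquist's code). Assume each sequence contains a fixed 50/50 mix of calm and emotional pictures, and a subject whose pre-stimulus arousal grows with each consecutive calm picture (the gambler's fallacy):

[code]
import random

def gf_artifact(n_trials, n_seqs=50_000):
    """Mean pre-stimulus arousal before emotional pictures minus before
    calm ones, where 'arousal' is just the current run of calm pictures.
    Each sequence holds exactly n_trials/2 calm (0) and emotional (1)."""
    sum_e = sum_c = cnt_e = cnt_c = 0
    base = [0] * (n_trials // 2) + [1] * (n_trials // 2)
    for _ in range(n_seqs):
        seq = base[:]
        random.shuffle(seq)
        run = 0                    # consecutive calm pictures so far
        for pic in seq:
            if pic:                # emotional picture: record arousal, reset run
                sum_e += run
                cnt_e += 1
                run = 0
            else:                  # calm picture: record arousal, extend run
                sum_c += run
                cnt_c += 1
                run += 1
    return sum_e / cnt_e - sum_c / cnt_c

for n in (10, 20, 40, 80):
    print(n, round(gf_artifact(n), 3))
[/code]

The difference comes out positive but shrinks towards zero as the sequence length grows, which is the pattern the quote describes: in a short fixed-mix sequence, a long calm run really does make an emotional picture more likely next, and that dependence weakens as the sequence lengthens.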
Radin's simulations are unpublished, so it is not possible to see why he made the initial error. However, by learning from it, he may realise that he has a means by which he can control the GF at sampling, or in post-processing.
He only has to demonstrate a statistical link, and not each result.
The simulations assume that the subject's response is monotonic; that seems to be the general subject model. If Radin can 'force' the subject's SCL to behave in this manner, the subject will behave more like the model. Handy.
The sampling frequency is unusually low, forcing a very low cut-off filter of perhaps 2 Hz, whereas 10 Hz is typical. If the time constant of this filter is long compared to the subject's SCL response, the output will be closer to the integral of that response, and monotonic. If he samples the data over the entire 200 ms period, there will be considerable spectral filtering due to this sampling window.
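A quick numerical illustration of that argument, with made-up time constants (nothing here is taken from Radin's hardware): a phasic skin-conductance response that peaks and decays comes out as a near-monotonic ramp once it passes through a slow first-order filter and is then averaged over 200 ms sampling windows.

[code]
import numpy as np

fs = 1000                              # simulation rate, Hz
t = np.arange(0, 5, 1 / fs)

# Toy phasic SCR: difference of exponentials, peaking near 0.9 s.
scr = np.exp(-t / 2.0) - np.exp(-t / 0.5)
scr /= scr.max()

# First-order low-pass with a long time constant (tau = 2 s).
tau = 2.0
alpha = (1 / fs) / (tau + 1 / fs)
lp = np.zeros_like(scr)
for i in range(1, len(scr)):
    lp[i] = lp[i - 1] + alpha * (scr[i] - lp[i - 1])

# Average over 200 ms windows: a boxcar whose first spectral null
# sits at 1 / 0.2 s = 5 Hz, so fast detail is suppressed further.
win = int(0.2 * fs)
samples = lp[: len(lp) // win * win].reshape(-1, win).mean(axis=1)

# The raw SCR peaks and decays; the sampled output just keeps rising
# over the first couple of seconds, i.e. it looks monotonic.
print("raw SCR peak at %.2f s" % t[scr.argmax()])
print("sampled output rises monotonically over first 2 s:",
      bool(np.all(np.diff(samples[:10]) > 0)))
[/code]

Whether Radin's rig actually behaves like this is exactly the kind of detail his write-up doesn't let you check.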
So, he now has the means to force the simulation model, and to suppress the effect by invoking sampling error. That's all he needs, I think.
Short runs help him, big responses help him.
Perhaps he does not deliberately stop the subject, but it happens due to fatigue alone. Remember, latencies will not be seen in the data.