• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Dean Radin - harmless pseudo-psientist.

*groan*
*groan*
That example was shown to be not so unlikely to differ in size.

Your example was wrong so let's forget it. We've already established reasonable sizes based on the actual dimensions used. As for the groans, ditto.

To someone suffering from agoraphobia or hydrophobia, a seascape would result in a highly "activating" image.

Whilst I commend your thoroughness in suspecting that people with rabies were involved in the trial or that people with agoraphobia obligingly left their houses to take part, I suspect you're taking your protestations a little too far.

Precisely how much smaller would it have to be to have an effect?

It's insignificant, like I've repeatedly said.

Now your position is that there couldn't be a clue?

Wow.

You have flat out failed to understand the experiment. This is the crux of why you keep repeating the same thing. There is no clue! The measurement phase is never intruded upon by the image display or any noises or sensations related to the image. Delayed image display serves only to separate these two stages even more.

I can't help you any more if you can't see that.

Nonsense. It doesn't matter what type of image it was. All that matters is that the person can learn to distinguish between the various images.

No, they cannot. They cannot learn anything. There is no clue. Accept it.
 
To avoid jitter in the pre-sentiment window, Radin would have to activate the drive at t = 3 - (worst-case load time) s and then display the image from RAM at t=3s. Alternatively, he could access the drive at a fixed time "just before", and accept the access time as jitter.
If it is the former, why not say so? I presume he has gone through a similar thought process and decided that neither is important, as opposed to removing all doubt by displaying all photos from RAM.
Radin's claim is for presentiment. Nobody would be impressed if the response started at t=0. Anything that initiates a response before t=0 is a hidden error that could allow him to drag his samples back in time to his advantage.
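
To make the two options concrete, here is a rough Python sketch (the timings, the function names and the show() placeholder are mine, not anything taken from Radin's software; display_at is a time.monotonic() deadline):

[code]
import time

def preload_then_display(path, display_at, worst_case_load=0.2):
    """Option 1: read the file early enough that it is already in RAM at the
    display deadline. No display jitter, but the drive noise lands before t=0."""
    time.sleep(max(0.0, (display_at - worst_case_load) - time.monotonic()))
    with open(path, "rb") as f:
        image_bytes = f.read()          # audible disk access happens here
    time.sleep(max(0.0, display_at - time.monotonic()))
    show(image_bytes)                   # shown exactly at the deadline

def display_on_demand(path, display_at):
    """Option 2: wait until 'just before' the deadline, then read and show.
    The image actually appears at display_at plus a variable access time."""
    time.sleep(max(0.0, display_at - time.monotonic()))
    with open(path, "rb") as f:
        image_bytes = f.read()          # unpredictable delay = jitter
    show(image_bytes)

def show(image_bytes):
    # placeholder for whatever routine actually draws the image
    pass
[/code]

Either way the disk gets touched; the only choice is whether the noise lands before t=0 or the display time absorbs the jitter.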


I'm not clear on this jitter issue. If the image retrieval (HD or RAM) occurs after the last SCL sample in the presentiment analysis period, then how does a variable retrieval time (is that the jitter?) impact on the analysis? I don't understand what you are referring to when you say he can "drag his samples back in time".
 
I'm not clear on this jitter issue. If the image retrieval (HD or RAM) occurs after the last SCL sample in the presentiment analysis period, then how does a variable retrieval time (is that the jitter?) impact on the analysis? I don't understand what you are referring to when you say he can "drag his samples back in time".

It was humber who wrote that, not me :)
 
Your example was wrong so let's forget it.

I think you should, yes.

Whilst I commend your thoroughness in suspecting that people with rabies were involved in the trial or that people with agoraphobia obligingly left their houses to take part, I suspect you're taking your protestations a little too far.

How ignorant. People with hydrophobia don't need to have rabies. Likewise, people with agoraphobia don't need to suffer from it to such a degree that they are afraid to leave their houses.

It's insignificant, like I've repeatedly said.

I didn't ask you how insignificant it was. I asked you precisely how much smaller it would have to be to have an effect.

You have flat out failed to understand the experiment. This is the crux of why you keep repeating the same thing. There is no clue! The measurement phase is never intruded upon by the image display or any noises or sensations related to the image. Delayed image display serves only to separate these two stages even more.

I can't help you any more if you can't see that.

Nonsense.

No, they cannot. They cannot learn anything. There is no clue. Accept it.

You have no idea what you are talking about.
 
Whoops, sorry. I meant it for humber; don't know how your name got in there...

What, exactly, in post #228 don't you understand?

Do you think that Radin is incompetent since he didn't move the computer and hard disk in another room?

Can you explain what a "calm" picture is? What is an "emotional" picture?

Don't talk about something else. Don't talk around these issues. Just answer the questions.
 
How ignorant. People with hydrophobia don't need to have rabies. Likewise, people with agoraphobia don't need to suffer from it to such a degree that they are afraid to leave their houses.

The good old literal interpretation argument combined with diversionary tactics, eh? I feel a word game coming up.

Let's be sensible, if you can manage it. What if someone with a cat phobia sees a picture of a cat, which Radin could well classify as Calm? What would happen?

They would get agitated. Therefore, the data could only demonstrate correlation to a lesser extent, because a Calm photo would have produced an "inappropriate" response. This would make it less likely that Radin would get a positive result.

The less accurate Radin is with his classification, the more likely he is to fail in proving the existence of a genuine effect.

If you're saying that Radin didn't screen for phobias then you are actually arguing for Radin and you don't even know it.

I didn't ask you how insignificant it was. I asked you precisely how much smaller it would have to be to have an effect.

There is no threshold of effect, only levels of insignificance. I've told you this.

Nonsense.

Good point, I'll think about that.

You have no idea what you are talking about.

Well stated. My argument's really floundering now.
 
A non-falsifiable position, then (assuming he really said that which is being contended)?

I've paraphrased him, but as is increasingly apparent from this thread, while the claims are falsifiable, such falsification is a tedious and thankless task which is likely to prove fruitless given the enormous amount of data Radin has collected so far.

Unless you wish to wade through thousands of tests, re-checking everything, it just isn't going to happen.

Or, you could make like CFLarsen - read one book, attend one lecture and rubbish it on the basis that you don't like it.

I just figure that there are a lot better targets who make vastly more ridiculous claims and who are more deserving of the barbs. Whatever we do seems to have minimal impact on the wide world of woo, and I feel that spending time on Radin is a bad move - both from a cost/benefit perspective as well as a credibility one.

Others differ.
 
The good old literal interpretation argument combined with diversionary tactics, eh? I feel a word game coming up.

Let's be sensible, if you can manage it. What if someone with a cat phobia sees a picture of a cat, which Radin could well classify as Calm? What would happen?

They would get agitated. Therefore, the data could only demonstrate correlation to a lesser extent, because a Calm photo would have produced an "inappropriate" response. This would make it less likely that Radin would get a positive result.

The less accurate Radin is with his classification, the more likely he is to fail in proving the existence of a genuine effect.

If you're saying that Radin didn't screen for phobias then you are actually arguing for Radin and you don't even know it.

You work from the faulty assumption that a response would be gone when the next image pops up. If you have a picture that gets people all worked up, you will "pollute" whatever comes next. You can't decide in advance how long people will be upset, and how it will influence otherwise "calm" pictures.

The whole idea of "calm" and "emotional" is stillborn. It's the same bull caca: Instead of simplifying, they complicate the whole experiment. They do this because they need every possibility to obscure and manipulate the data in order to get their positive results.

There is no threshold of effect, only levels of insignificance. I've told you this.

No threshold of effect? It makes no difference how big a picture is, it will never produce discernible sounds from the hard disk?

That is simply too desperate.
 
You work from the faulty assumption that a response would be gone when the next image pops up. If you have a picture that gets people all worked up, you will "pollute" whatever comes next. You can't decide in advance how long people will be upset, and how it will influence otherwise "calm" pictures.

The reading is taken from a baseline starting at t-5 so it shouldn't make any difference. It's not relevant what effect the last image has and it's certainly not relevant if someone interpreted the last calm image as being provocative.
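
To spell out what I mean by taking the reading from a baseline, here is a rough sketch (the window boundaries and the sampling-rate argument are illustrative choices of mine, not Radin's published values):

[code]
import numpy as np

def presentiment_score(scl, fs, t_start=-5.0, baseline_end=-4.0, window_end=0.0):
    """Score one trial as the mean pre-stimulus SCL relative to a baseline
    taken at the start of that same trial. Times are in seconds relative to
    stimulus onset at t=0; `scl` is assumed to cover [t_start, 0] at `fs`
    samples per second. Because each trial is referenced to its own baseline,
    whatever the previous image did to the absolute SCL level cancels out."""
    scl = np.asarray(scl, dtype=float)
    t = t_start + np.arange(len(scl)) / fs
    baseline = scl[(t >= t_start) & (t < baseline_end)].mean()
    window = scl[(t >= baseline_end) & (t < window_end)].mean()
    return window - baseline
[/code]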

The whole idea of "calm" and "emotional" is stillborn. It's the same bull caca: Instead of simplifying, they complicate the whole experiment. They do this because they need every possibility to obscure and manipulate the data in order to get their positive results.

It seems simple to me.

No threshold of effect? It makes no difference how big a picture is, it will never produce discernible sounds from the hard disk?

That is simply too desperate.

What on earth are you talking about? That wasn't even the argument!

I said two pages ago that an old HD would be noisy. Noisy for all images, not just large ones. I never said that there was a threshold for noisiness, nor would I ever suggest such a thing.

The argument was about whether the size of the image, and therefore the fetch time, could affect the experimental results in order to suggest a more favourable outcome (to Radin) than would otherwise be the case. The answer is no, regardless of how big the image is.

The noise is irrelevant because the presentiment measurement time is over when it happens.


There is no threshold. There is no effect. There is no clue.
 
I'm sorry, but you have managed to entangle yourself in a web of ludicrous arguments.
 
With a modern PC there should be no problem. The images could be larger (maybe 1024 px on a 24" monitor) and retrieved from RAM. They could all be recompressed (or simply compressed, if they're from BMPs or uncompressed TIFs) to a specific size or size range, e.g. 200-250 KB, to minimise any delay differences. Radin performed his early experiments with the equipment available at the time. This does not make his approach sloppy or fraudulent.

Sorry, I did not make that clear. I was referring to the sampling of the data, and how timing and non-linearity may generate artefacts in a manner that parallels image processing. (Memory is not a problem: >64 MB on a 486. He can access extended memory if need be.)

EDIT: OK - I see what you're saying. You're saying he interrupted the process to re-instigate the GF. In which case, we need to understand: did he? Where do you conclude he did?

---- THIS IS WHAT I WROTE ORIGINALLY, I'LL LEAVE IT IN FOR COMPLETENESS ---

...If the GF is inversely proportional to sequence length then that obviously means that the GF effect decreases as the sequence length increases. If Radin is deliberately prolonging the sequence length as you suggest then the GF effect will decrease.
How is that a bad thing? If the longer sequences suffer fewer artefacts due to the GF, then surely this means that the data is less contaminated.

Yes, but I think he tries to keep the sequences short, with longer resting periods, in order to generate the GF.

The process appears contrived to increase subject stress. The subject must stare at a closely placed screen, time after time.
The researcher is behind a screen, but within earshot. It would be easy for the researcher to track the process and learn from the subject's reactions: sound, movement, the time taken before proceeding to the next image (under the subject's control). It leaks.

The researcher (also, apparently, the subject's initial coach) monitors the process, but stops it to 'attend' to subject comfort when it is inferred that there were more emotional than calm images. This artificially correlates the high pre-sentiment of GF with emotional images.

-----------------
Furthermore, why do you think he called any interruptions? I thought the subject controlled the image display triggers.

He is quite terse about the whole process, even though he knows sequence length is important. It is a very odd thing to allow the subject to regulate trial and sequence length. Surely you avoid that.
Forty photos take 12 minutes, but Radin says sessions typically take half an hour, so there are rest periods. In the text he mentions EDA breakages and subject stress as being a problem. If I infer cheating because he does not supply enough information, too bad. (One reason why he doesn't get mainstream publication.)

I've agreed that the tiny image is not ideal, but I don't see how you jump from stating something could be implemented better to saying that it actually helps Radin achieve more significant results.
Not ideal? Peculiar, I say. He acknowledges that subject fatigue is high, yet he uses this very small non-standard photo at close quarters. It helps because he wants to maximise any pre-sentiment signal.
( I wonder what effect being able to see one's reflection in the monitor would have upon skin response? )

Even in theory, all the disk noise, delay, tiny image displays and uncomfortable chairs in the world wouldn't help the subject predict an image which has not yet been chosen and nor would they give the illusion that this was the case.

The former no, the latter perhaps. Here is my 'just so' story.

Consider the quote from Dalkvist et al re Radin's simulations.

"The results revealed a small, but clear, positive difference between
activating and calm pictures, which, however, decreased as the length of the sequence increased! (Somewhat surprisingly, Radin rejected the difference as probably being due to sampling errors.)"


He came to the wrong conclusion. The authors of the simulation explain that the decrease in GF is to be expected, and outline their reasoning. Perhaps Radin did too (I came to the same conclusion, before reading the simulations, by thinking about PRBSs).

Radin's simulations are unpublished, so it is not possible to see why he made the initial error. However, by learning from it, he may realise that he has a means by which he can control the GF at sampling, or in post-processing.
He only has to demonstrate a statistical link, and not each result.

Simulations assume that the subject's response is monotonic. It seems to be the general subject model. If Radin can 'force' the subject's SCL to behave in this manner, the subject will behave more like the model. Handy.

The sampling frequency is unusually low, forcing a very low cut-off filter of perhaps 2 Hz, whereas 10 Hz is typical. If the time constant of this filter is long compared to the subject's SCL response, the output will be closer to the integral of that response, and monotonic. If he samples the data over the entire 200 ms period, there will be considerable spectral filtering due to this sampling window.
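
As a rough illustration (the single-pole filter and all the numbers are stand-ins of my own, not Radin's actual signal chain), compare what a 2 Hz and a 10 Hz low-pass do to a fast SCL transient:

[code]
import numpy as np

def low_pass(signal, fs, cutoff_hz):
    # Single-pole RC low-pass; a crude stand-in for whatever anti-alias
    # filtering a low sampling rate would force.
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out = np.zeros_like(signal, dtype=float)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

fs = 100                                       # simulation rate, Hz
t = np.arange(0, 5, 1.0 / fs)
transient = np.exp(-((t - 1.0) / 0.05) ** 2)   # hypothetical sharp phasic response

smoothed_2hz = low_pass(transient, fs, 2.0)    # smeared, delayed, slowly varying
smoothed_10hz = low_pass(transient, fs, 10.0)  # follows the transient far more closely
[/code]

The heavier the filtering, the more the recorded SCL resembles the smooth, slowly varying curve the simulations assume.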

So, he now has the means to force the simulation model, and to suppress the effect by invoking sampling error. That's all he needs, I think.
Short runs help him, big responses help him.

Perhaps he does not deliberately stop the subject, but it happens due to fatigue alone. Remember, latencies will not be seen in the data.
 
I'm not clear on this jitter issue. If the image retrieval (HD or RAM) occurs after the last SCL sample in the presentiment analysis period, then how does a variable retrieval time (is that the jitter?) impact on the analysis? I don't understand what you are referring to when you say he can "drag his samples back in time".

Assume that photo access is variable: software timing, disk access, file size, but the maximum is 200 ms. If he accesses the disk >200 ms before t=0, the photo can accurately be displayed at that time, but there will be the pre-sentiment stimulus of the noise, which he is trying to remove.

If he does otherwise, to "just before", then the image may appear at t=0 +/- the access time, or part thereof. Either way, he produces a pre-sentiment stimulus and/or jitter.
The timing of the samples in the latter case is rather arbitrary. He can choose his time datum much as he pleases.
He could use the earliest sample as his datum, and align the others (which in reality have a slightly longer pre-sentiment time) with it, but there would be no way of telling that from the data. If he gets 100 ms this way, and 200 ms because of the asynchronous sampling, he could gain 300 ms of 'future'.
It is the same with the subject's button. A small 'forgivable' latency of, say, 200 ms will allow for an increased pre-sentiment time that will not be seen in the data as logged by the PC. In both cases, the logged periods and the actual periods may differ.
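
To put rough numbers on it (hypothetical worst-case figures, not measurements from Radin's setup):

[code]
# Hypothetical worst-case timing slack, only to make the arithmetic concrete.
# None of these numbers are measurements from Radin's equipment.
datum_alignment_s = 0.100    # gained by aligning trials to the earliest sample
sampling_slack_s = 0.200     # up to one asynchronous 200 ms sampling period

hidden_lead_s = datum_alignment_s + sampling_slack_s
print(f"Unlogged 'future' available: {hidden_lead_s * 1000:.0f} ms")
# The subject's button adds a similar, separate slack (a 'forgivable' latency
# of around 200 ms) that also never shows up in the logged data.
[/code]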
 
I just figure that there are a lot better targets who make vastly more ridiculous claims and who are more deserving of the barbs. Whatever we do seems to have minimal impact on the wide world of woo, and I feel that spending time on Radin is a bad move - both from a cost/benefit perspective as well as a credibility one.
Others differ.

Perhaps, but he doesn't look like he would be any fun at a party. That's enough.
This woo is different because it has some scientific cachet associated with it.
His trials show why mainstream science largely ignores him, yet he has gone on to make his claims public. He is at least dishonest. Wooers will use anything in their favour.
Look how much trouble it takes to discredit this one item, yet skeptics will not be able to drive a stake through its heart.
 
You work from the faulty assumption that a response would be gone when the next image pops up. If you have a picture that gets people all worked up, you will "pollute" whatever comes next. You can't decide in advance how long people will be upset, and how it will influence otherwise "calm" pictures.

The whole idea of "calm" and "emotional" is stillborn. It's the same bull caca: Instead of simplifying, they complicate the whole experiment. They do this because they need every possibility to obscure and manipulate the data in order to get their positive results.

Bingo. The events are not as independent as he would have you believe. He knows from simulation how to maximise this dependence, and other effects.
The sampling and filtering 'tame' his subject's response to fit those simulations.

The whole idea is BS. The link between SCL and subject response is only demonstrated for a limited case. One is lulled into believing that staring at a blank screen is a neutral position, save for the acknowledged GF. He contradicts himself on this point. On the one hand, the mind/consciousness is mystical and ubiquitous, yet when it suits him, it is quite mechanically predictable. It is amusing that he has not considered that the subject may influence the RNG.

The reading is taken from a baseline starting at t-5 so it shouldn't make any difference. It's not relevant what effect the last image has and it's certainly not relevant if someone interpreted the last calm image as being provocative.
Baron, you seem to be missing the point. No mental content carried from one to the other? No SCL residue from the filter?
He has a lot of hidden variables to play with. He may not apply them consistently, but when he chooses. He is not so stupid as to make these manipulations entirely obvious. BTW Baron, his paper led you to the sound of the HD, right? This is faux scientific integrity. Calling Mr Randi!
 
Assume that photo access is variable: software timing, disk access, file size, but the maximum is 200 ms. If he accesses the disk >200 ms before t=0, the photo can accurately be displayed at that time, but there will be the pre-sentiment stimulus of the noise, which he is trying to remove.
If he does otherwise, to "just before", then the image may appear at t=0 +/- the access time, or part thereof. Either way, he produces a pre-sentiment stimulus and/or jitter.

The timing of the samples in the latter case is rather arbitrary. He can choose his time datum much as he pleases.
He could use the earliest sample as his datum, and align the others (which in reality have a slightly longer pre-sentiment time) with it, but there would be no way of telling that from the data. If he gets 100 ms this way, and 200 ms because of the asynchronous sampling, he could gain 300 ms of 'future'.

It is the same with the subject's button. A small 'forgivable' latency of, say, 200 ms will allow for an increased pre-sentiment time that will not be seen in the data as logged by the PC. In both cases, the logged periods and the actual periods may differ.

So, if the last SCL sample included in the presentiment analysis period occurs before the computer both creates a seed number for the PRNG and retrieves the image, this would solve the problem?
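
In other words, something like this ordering, where record_scl and show_image are made-up placeholders for the real acquisition and display routines:

[code]
import random

def run_trial(image_pool, record_scl, show_image, window_s=3.0):
    # Record the whole pre-stimulus analysis window first, so that no PRNG
    # seeding, target selection or disk access can leak into the analysed
    # samples.
    pre_stimulus = record_scl(duration_s=window_s)   # analysis window ends here
    rng = random.Random()                            # seeded after the window closes
    target = rng.choice(image_pool)                  # target chosen after the window
    with open(target, "rb") as f:                    # disk access after the window
        image_bytes = f.read()
    show_image(image_bytes)
    return pre_stimulus, target
[/code]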
 
So, if the last SCL sample included in the presentiment analysis period occurs before the computer both creates a seed number for the PRNG and retrieves the image, this would solve the problem?

What, exactly, in post #228 don't you understand?

Do you think that Radin is incompetent since he didn't move the computer and hard disk in another room?

Can you explain what a "calm" picture is? What is an "emotional" picture?

Don't talk about something else. Don't talk around these issues. Just answer the questions.
 
Yes, but I think he tries to keep the sequences short, with longer resting periods, in order to generate the GF.


The GF hypothesis is based on short-term memory and says that pre-stimulus SCL increases in proportion to the number of calm pictures in a row and that pre-stimulus SCL is reset to baseline after an emotional image. The overall GF effect decreases with time only because of the number of trials. If a block of trials is split into several smaller blocks, this will not have an effect on the GF effect size because you still have the same number of trials at the end of the day. It would be equivalent to running the Dalkvist et al simulation for 50 trials, pausing it while going for lunch, and finishing the other 50 trials when you got back. You would get the same result with 100 trials whether you paused the simulation or not. In fact, any interruptions from Radin would probably serve to decrease the GF effect, because you are more likely to interrupt short-term memory for the local sequence during which the interruption happened, thus resetting SCL to baseline.
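
If it helps, here is a toy version of that model you can run yourself (the step size, probabilities and interruption points are arbitrary choices of mine, not values from Dalkvist et al):

[code]
import random

def gf_session(n_trials=40, p_emotional=0.5, step=1.0, interrupt_at=()):
    # Gambler's-fallacy model as described above: pre-stimulus arousal rises
    # by `step` after every calm picture and resets to baseline after an
    # emotional one. An interruption (rest break) also resets it, standing in
    # for short-term memory being wiped.
    arousal = 0.0
    before_emotional, before_calm = [], []
    for i in range(n_trials):
        if i in interrupt_at:
            arousal = 0.0
        emotional = random.random() < p_emotional
        (before_emotional if emotional else before_calm).append(arousal)
        arousal = 0.0 if emotional else arousal + step
    if not before_emotional or not before_calm:
        return 0.0
    # Within-session difference score: mean pre-stimulus arousal before
    # emotional pictures minus before calm ones.
    return (sum(before_emotional) / len(before_emotional)
            - sum(before_calm) / len(before_calm))

# Average the difference score over many simulated sessions, with and without
# mid-session pauses, to check whether pausing changes the artefact at all.
plain = sum(gf_session() for _ in range(10000)) / 10000
paused = sum(gf_session(interrupt_at={10, 20, 30}) for _ in range(10000)) / 10000
[/code]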


Not ideal? Peculiar, I say. He acknowledges that subject fatigue is high, yet he uses this very small non-standard photo at close quarters.


A small photo makes sense because you want to ensure that as much of the image as possible is processed foveally rather than in the periphery.
 
The GF hypothesis is based on short-term memory and says that pre-stimulus SCL increases in proportion to the number of calm pictures in a row and that pre-stimulus SCL is reset to baseline after an emotional image. The overall GF effect decreases with time only because of the number of trials. If a block of trials is split into several smaller blocks, this will not have an effect on the GF effect size because you still have the same number of trials at the end of the day. It would be equivalent to running the Dalkvist et al simulation for 50 trials, pausing it while going for lunch, and finishing the other 50 trials when you got back. You would get the same result with 100 trials whether you paused the simulation or not. In fact, any interruptions from Radin would probably serve to decrease the GF effect, because you are more likely to interrupt short-term memory for the local sequence during which the interruption happened, thus resetting SCL to baseline.





A small photo makes sense because you want to ensure that as much of the image as possible is processed foveally rather than in the periphery.

What, exactly, in post #228 don't you understand?

Do you think that Radin is incompetent since he didn't move the computer and hard disk in another room?

Can you explain what a "calm" picture is? What is an "emotional" picture?

Don't talk about something else. Don't talk around these issues. Just answer the questions.
 
