
Misleading Sceptic Report article

Claus, do you have evidence for your claim that there is a correlation between diminishing effect size and study quality?
 
CFLarsen said:

Hardly an unbiased group of people, in an unbiased environment!

Who is the "skeptical investigator" who "independently" replicated the result? He/she is never named, nor his result. Why not? (I think I know why...)

Sorry, but I am still not seeing any evidence of a paranormal phenomenon anywhere.

And ……

TLN said:
replicated by an independent lab and scientists.

So those getting positive results are biased? What are you suggesting, a conspiracy to produce fraudulent PSI? :) Or are you just accusing them of unconscious zombie fraud? Or is it just lucky incompetence?

Who decides who is 'independent' or unbiased? For example, the critiques of Ganzfeld experiments were largely by professional skeptic/psychologist Ray Hyman of CSICOP, and the later selective meta-analysis of Ganzfeld trials was by professional media skeptic/psychologist Richard Wiseman of CSICOP/Skeptical Inquirer. CSICOP is a political organization formed to debunk claims of the paranormal and defend conventional science; it's hardly a neutral opinion.

I doubt there is great point in presenting evidence here (Amherst did it, others have too); all that happens is that the typical CSICOP counterclaims get quoted. Remove these counterclaims, and is the skeptic argument as strong?
 
davidsmith73 said:
a possible example of their theory. The anomalous psi effect is still there regardless.

Oh, no, no, no...don't start with this "anomalous" crap. You cannot call possible experimental flaws "psi". Flaws have nothing to do with the paranormal.

Either there is an effect, or there is not.

davidsmith73 said:
If someone rates a picture as highly emotional then it will be reflected in the normal change in heart rate after the picture is shown. If this is the case, how is this a flaw? Under the null hypothesis you still would not expect a pre-stimulus response, whether the target was emotional or calm.

You don't understand: The people who were rating the pictures during the experiment did not know what the scale was. It's equivalent to asking you to grade someone's exam, but not telling you how to grade it.

davidsmith73 said:
No, they were looking to see if obvious artifacts were being produced due to faulty recording. You don't just keep every bit of data you record without removing instances of equipment failure. I'm just imagining what you would be saying if they kept the obvious artifacts!

Please explain how these faulty recordings were not due to psi. If you cannot, then you have no case.

davidsmith73 said:
Don't know what you mean

The difference between a "hit" and a "miss" was simply too small to be determined objectively.

davidsmith73 said:
the precognition condition was still significant.

Don't play word games: Was there evidence of precognition? If so, where?

davidsmith73 said:
Your point being? This was an experiment with evidence for physiological correlates of psi.

Cart before the horse. We have yet to see the existence of psi.

davidsmith73 said:
Not relevant to whether or not the experiment shows evidence.

But the experiment does not show evidence.

davidsmith73 said:
Erm Claus, that's what the PH hypothesis is essentially based on. Wipe the froth away and think about what you are posting next time.

So, how can we determine the difference between the PH hypothesis and the fact that we get used to seeing bad pictures, so they upset us less?

davidsmith73 said:
Lets have the full quote shall we:
At this point, it appears that there are systematic individual differences: Those high in anxiety show the predicted effects on the negative trials, but those high in sensation seeking show the reverse effect, significantly exposing themselves to the negatively arousing images. Erotic and positive (nonerotic) pictures are not yet showing any systematic patterns. At the moment, there are too few sessions to be confident of these patterns, but there does appear to be precognitive responding with this protocol.

Doesn't help. It still contradicts the previous experiment.

davidsmith73 said:
What are you saying here? Are the psychological characteristics of the participants a flaw in the methods? If so, why?

I am saying that conducting a psi experiment at INS with believers is hardly the most reliable way to do it.

davidsmith73 said:
Replications are underway. You'll have to wait.

Jeebus creebus, we have heard this for so long! How long can it take to replicate just one of those experiments that are claimed to show evidence of psi?

This doesn't happen - instead, we see yet another experiment, with a slightly changed protocol, invariably claiming "new" evidence of psi. And then, another. And then, another.

Just replicate it. Why is that so hard?

Oh, and...who was this skeptic, and what result did he get?
 
Open Mind said:
So those getting positive results are biased? What are you suggesting, a conspiracy to produce fraudulent PSI? :) Or are you just accusing them of unconscious zombie fraud? Or is it just lucky incompetence?

We know that believers have a wish to see their beliefs come true. An example is how sitters try desperately to fit the psychic's cold reading to their own situation. We also know that the field of parapsychology is infested with cheating and fraudulent behavior.

Open Mind said:
Who decides who is 'independent' or unbiased? For example, the critiques of Ganzfeld experiments were largely by professional skeptic/psychologist Ray Hyman of CSICOP, and the later selective meta-analysis of Ganzfeld trials was by professional media skeptic/psychologist Richard Wiseman of CSICOP/Skeptical Inquirer. CSICOP is a political organization formed to debunk claims of the paranormal and defend conventional science; it's hardly a neutral opinion.

The data is there for all to see. It isn't a question of opinion, but of whether or not there is evidence. So far, there is none.

Open Mind said:
I doubt there is great point in presenting evidence here (Amherst did it, others have too); all that happens is that the typical CSICOP counterclaims get quoted. Remove these counterclaims, and is the skeptic argument as strong?

There is a very great point in presenting evidence here: You have a host of people, genuinely interested, who are more than willing to see if the results hold up to scrutiny.

If not here, where, then? A believer's forum?
 
CFLarsen said:
We know that believers have a wish to see their beliefs come true.

This can cut both ways, though. Goodstein and Brazis mailed abstracts of an astrological study to a random sample of psychologists. They varied the abstracts in only one respect: half reported positive findings while half reported negative findings. In other words, it was the same design with only the result being changed. The psychologists were asked to rate the studies (really the same study), and it was found that those who received the "negative" results rated the study as far better designed, more valid, and having more adequate conclusions than those who rated the "positive" version.

Goodstein, L.D. and Brazis, K.L. (1970). Psychology of the Scientist: XXX. Credibility of psychologists: An empirical study. Psychological Reports, 27, 835-838.
 
It doesn't just work like that with parapsychology, either. I read recently of a paper submitted for review to a large number of reviewers, with the results and conclusions tailor-made to be in line with, or opposed to, the known beliefs of the referees. They differed, as you'd expect, about the worth of the paper; the experimenters also realised after they sent it out that they'd left an absolute howler in it. Guess which referees were most likely to spot it?

Everyone's biased and no-one's objective. That's why we have the scientific method.
 
Davidsmith73,

I am very open to the possibility that psi might exist, I think it would be cool if it did.

But the ganzfeld effect studies are seriously flawed in ways that the meta-analysis cannot compensate for; there is the possibility of confounding effects that are totally unmeasured and uncontrolled for. I don't know of any other area of social science where meta-analysis can compensate for flawed methodology.

I read the paper on the photic stimulation that you presented in the More Evidence thread, and it suffers from a serious problem.

The standard deviation for the sample is higher than the effect size. Therefore all the supposed effect could be attributed to the random variation of the sample during the test period. This is especially relevant because of the small sample size.

I found the 'linked biological' theory very interesting, but again, if the standard deviation is higher than the effect size, then it could all be sampling error.

If you could show some more reasonable effect size in relation to standard deviation, then I might accept the effect. There is also the issue of removing overlapping cycles from the sample, which means that the sampling of the 'photic stimulation epochs' is not random and excludes all the samples that might occur during overlapping time periods. This is a serious sampling flaw. There is also the elimination of anomalous data from the data sets. I think those should have been examined for random variation.

I see no reason to believe that there is an effect in that paper. There are some flaws in the method and the analysis that do not even make it suggestive until there is replication.

(This is actually a real problem in the social sciences in general: most studies show only suggestive effects that are unreplicated, and researchers are loath to go and actually do the replication.)
 
CFLarsen said:
We know that believers have a wish to see their beliefs come true. An example is how sitters try desperately to fit the psychic's cold reading to their own situation.
I agree keen believers do remember hits better than misses; what also must be recognized is that a keen skeptic will remember misses better than hits.

If it is possible for such positive bias to leak into a tightly controlled positive PSI trial, we must consider that the negative expectation of a professional skeptic could leak into a controlled no-PSI result.

Yes, I am concerned about closed-minded skeptic researchers (who feel they already know the answer before the trial) assuming their attitude and careless choice of experimenters cannot possibly have an effect upon results.

IMO, all PSI researchers and experimenters in any PSI trial should be required to sign a statement of their personal opinion/feeling/belief on the existence of PSI before any trial is conducted. Where this might seem an absurd or unnecessary suggestion to many skeptics, it is essential in my opinion. These trials are on the transfer/reception of thought, and to assume only the thoughts of those under test have any influence shows a blinkered lack of theoretical reasoning, or an a priori disrespect for the concept that thought transfer effects could be affected by the expectations of all observers.

I'd like to see tests involve natural psychics who have claimed evidential experiences since early childhood and are still convinced of the genuineness of their experiences. How many trials have used such an obvious choice? None I know of. Instead, researchers have often used university students who are probably studying psychology, science or something materialistic. The cynical researcher might give the impression PSI doesn't exist ('let's repeat this experiment to see what they did wrong') and the expectations of those under test follow on in a monotonous drudge towards failure. If PSI exists it is something at the edge of human ability; it requires enthusiasm.

The data is there for all to see. It isn't a question of opinion, but of whether or not there is evidence.
I don't think so. Hyman and Utts didn't agree on the interpretation of the CIA remote viewing data. In another trial, on animal telepathy, Sheldrake claimed a positive trial; Wiseman repeated it and claimed a negative trial, but their data was basically the same.

People choose their preferred interpretation of the data...
’ There are some myths about science and scientists that need to be dispelled. Science gets mistaken as a body of knowledge for its method. Scientists are regarded as having superhuman abilities of rationality inside objectivity. Many studies in the psychology of science, however, indicate that scientists are at least as dogmatic and authoritarian, at least as foolish and illogical as everybody else, including when they do science. In one study on falsifiability, an experiment was described, an hypothesis was given to the participants, the results were stated, and the test was to see whether the participants would say, "This falsifies the hypothesis". The results indicated denial, since most of the scientists refused to falsify their hypotheses, sticking with them despite a lack of evidence! Strangely, clergymen were much more frequent in recognizing that the hypotheses were false.
http://www.fiu.edu/~mizrachs/truzzi.html
I think Truzzi is referring to Mahoney's research of some years ago. It shows what can happen.

There is a very great point in presenting evidence here: You have a host of people, genuinely interested, who are more than willing to see if the results hold up to scrutiny.

OK, I will look forward to debating newer research in future, as it arrives.
 
Open Mind said:
I don't think so. Hyman and Utts didn't agree on the interpretation of the CIA remote viewing data. In another trial, on animal telepathy, Sheldrake claimed a positive trial; Wiseman repeated it and claimed a negative trial, but their data was basically the same.

Very true. What's more, things are very rarely settled once and for all by one experiment (no matter how well designed). It is the cumulative evidence (or lack of it) that eventually decides the matter.
 
davidsmith73 said:
This is the standard of the Edinburgh ganzfeld protocol today - http://moebius.psy.ed.ac.uk/Ganzfeld_H.html#tg
While I fail to see why the receiver can not pick the sent image rather than the experimenter, I think the procedures here look ok.

I had a look on the Edinburgh site for papers on the results obtained using this procedure. The only paper mentioning Ganzfeld concluded:

4. Summary of findings
· No evidence for telepathy….
· Opposite to prediction, there was a slightly inverse relationship between telepathy success and reactivity to magnetic field

Has an experiment ever been done using this procedure at Edinburgh, and what were the results?
 
Open Mind said:
I agree keen believers do remember hits better than misses; what also must be recognized is that a keen skeptic will remember misses better than hits.

This is not correct. Skeptics are more prone to remember the correct ratio.

Open Mind said:
If it is possible for such positive bias to leak into a tightly controlled positive PSI trial, we must consider that the negative expectation of a professional skeptic could leak into a controlled no-PSI result.

But skeptics don't - by definition - have a negative expectation. Skeptics expect nothing - skeptics judge from the data alone.

Open Mind said:
Yes, I am concerned about closed-minded skeptic researchers (who feel they already know the answer before the trial) assuming their attitude and careless choice of experimenters cannot possibly have an effect upon results.

Whaa?? You have to exemplify this. Let's see some evidence.

Open Mind said:
IMO ......

I would prefer evidence.

Open Mind said:
all PSI researchers and experimenters in any PSI trial should be required to sign a statement of their personal opinion/feeling/belief on the existence of PSI before any trial is conducted. Where this might seem an absurd or unnecessary suggestion to many skeptics, it is essential in my opinion. These trials are on the transfer/reception of thought, and to assume only the thoughts of those under test have any influence shows a blinkered lack of theoretical reasoning, or an a priori disrespect for the concept that thought transfer effects could be affected by the expectations of all observers.

We have seen here, on this board, that people will lie about their beliefs in the paranormal. Such signed confessions are worthless. What we should do is focus on the data. The experimental design. The protocols.

Open Mind said:
I'd like to see tests involve natural psychics who have claimed evidential experiences since early childhood and are still convinced of the genuineness of their experiences. How many trials have used such an obvious choice? None I know of.

Schwartz used only well-known psychics. Targ & Puthoff used Geller.

But you have a point: These self-proclaimed psychics are notoriously hard to drag into the lab. One can wonder why...

Open Mind said:
Instead, researchers have often used university students who are probably studying psychology, science or something materialistic. The cynical researcher might give the impression PSI doesn't exist ('let's repeat this experiment to see what they did wrong') and the expectations of those under test follow on in a monotonous drudge towards failure. If PSI exists it is something at the edge of human ability; it requires enthusiasm.

So, you want to exclude skeptics, whom you say have a negative attitude towards the paranormal, and instead focus on those who have "enthusiasm"? You are advocating that people should have a positive attitude?

If psi exists, then it is not at the edge of human ability: on the contrary, it is a huge part of reality, judging from the sheer number of psychics, healers, tarot-readers, astrologers, etc.

Open Mind said:
I don't think so. Hyman and Utts didn't agree on the interpretation of the CIA remote viewing data. In another trial, on animal telepathy, Sheldrake claimed a positive trial; Wiseman repeated it and claimed a negative trial, but their data was basically the same.

What about their results? Do you claim that Wiseman's results showed evidence of psi?

It isn't all up to opinion: if that were the case, then why bother to go into a lab at all?

Open Mind said:
People choose their preferred interpretation of the data...

Some people might do that. The rest of us don't.

Open Mind said:
I think Truzzi is referring to Mahoney's research of some years ago. It shows what can happen.

Sure, nobody denies it. So, what do we do? Throw our hands into the air and say "There's no way we can know, so let's all have our own, personal idea of what is real"?

Open Mind said:
OK, I will look forward to debating newer research in future, as it arrives.

Will you also be prepared to draw the consequences, if the results turn out to be negative?
 
CFLarsen said:
Oh, no, no, no...don't start with this "anomalous" crap. You cannot call possible experimental flaws "psi". Flaws have nothing to do with the paranormal.

Psi is defined as anomalous results, and as such there is evidence for such an anomaly. If there are flaws that are producing the effect then I would sure like to hear about them, and see proof that they are responsible. Otherwise we still have "psi" based on anomalous information acquisition.


Either there is an effect, or there is not.

There is. And it's an anomalous effect.


You don't understand: The people who were rating the pictures during the experiment did not know what the scale was. It's equivalent to asking you to grade someone's exam, but not telling you how to grade it.

I understand what you are saying. I don't understand how this is a flaw that affects the hypothesis.


Please explain how these faulty recordings were not due to psi.

Because they knew what the source of the artifacts was.


The difference between a "hit" and a "miss" was simply too small to be determined objectively.

How do you figure that? If the difference between the heart rates is very small then you use many trials.


Don't play word games: Was there evidence of precognition? If so, where?

In the statistically significant results. Anomalous precognition refers to the conditions within which the anomalous effects are observed, i.e., a response before the stimulus has been selected.



Cart before the horse. We have yet to see the existence of psi.

Here, there is evidence of the anomalous acquisition of a physiological response to remote stimuli.


But the experiment does not show evidence.

You have not explained why.


So, how can we determine the difference between the PH hypothesis and the fact that we get used to seeing bad pictures, so they upset us less?

For the precognitive habituation experiments, the participants are simultaneously shown two negatively arousing pictures (violence) and asked to choose which of the pair they prefer. After they choose, one of the pair is randomly selected by the computer and subliminally presented to them. According to the PH hypothesis, the subject should prefer the subliminally presented picture more often. In trials with positively arousing pictures, they should choose the picture which is not subliminally presented.
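
As a rough sketch of the scoring logic (my own illustration in Python, not code from the experiment), the null hypothesis here is just a 50% match rate between the participant's stated preference and the computer's later random selection:

    import random

    # One simulated precognitive-habituation trial under the null hypothesis:
    # the participant states a preference, and only THEN does the computer
    # randomly pick which picture of the pair will be flashed subliminally.
    def ph_trial():
        preferred = random.choice(["left", "right"])  # participant's preference
        target = random.choice(["left", "right"])     # computer's later random pick
        return preferred == target                    # a "hit" on negative trials

    hits = sum(ph_trial() for _ in range(100_000))
    print(hits / 100_000)  # ~0.50; the PH hypothesis predicts a rate above this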



Doesn't help. It still contradicts the previous experiment.

Really? How so?



I am saying that conducting a psi experiment at INS with believers is hardly the most reliable way to do it.

Why on earth not? As long as the experimenter remains objective and sceptical, using "believers" should pose no problems at all. In fact, it should be encouraged, in order to find out if this is a factor in performance.



Jeebus creebus, we have heard this for so long! How long can it take to replicate just one of those experiments that are claimed to show evidence of psi?
This doesn't happen - instead, we see yet another experiment, with a slightly changed protocol, invariably claiming "new" evidence of psi. And then, another. And then, another.

Not really. We see experiments that replicate previous findings as well as exploring new parameters in the experiment.


Oh, and...who was this skeptic, and what result did he get?

Which sceptic? Where?
 
Claus, do you have evidence for your claim that there is a correlation between diminishing effect size and study quality?
 
Open Mind said:
I doubt there is great point in presenting evidence here (Amherst did it, others have too); all that happens is that the typical CSICOP counterclaims get quoted. Remove these counterclaims, and is the skeptic argument as strong?

I think it may be stronger. Perhaps I'm going mad, but most of CSICOP's arguments are based on discrediting the meta-analyses without bothering to check if the data itself is correct. Honorton's oft-quoted ganzfeld meta-analysis relies on 28 pre-1981 ganzfeld experiments. I know of over 50. Do you think the results are the same with 28 or 50 experiments? If so, why?

There's no conspiracy.

Just a general reliance on what other people have said about what other people did.

As for the debate about effect size and study quality, that depends on what you consider to be "quality". If you think that ganzfeld is worse than autoganzfeld which is worse than digital ganzfeld, then yes, there is a demonstrable decline in effect size.
 
Dancing David said:
Davidsmith73,

I am very open to the possibility that psi might exist, I think it would be cool if it did.

But the ganzfeld effect studies are seriously flawed in ways that the meta-analysis cannot compensate for;

What flaws?


there is the possibility of confounding effects that are totally unmeasured and uncontrolled for.

True. However, one can never rule out the possibility that you haven't controlled for effects you don't know about, in any experiment. You can only reduce the likelihood of this as much as possible.


I don't know of any other area of social science where meta-analysis can compensate for flawed methodology.

If indeed the meta-analysis is compensating for flawed methodology. I do think that meta-analysis is far from the ideal method of demonstrating a replicable effect. Unfortunately, this is the situation at the moment for ganzfeld. The problem can be summed up nicely by Bem and Honorton:

"Given its larger effect size, the prospects for successfully replicating the psi ganzfeld effect are not quite so daunting, but they are probably still grimmer than intuition would suggest. If the true hit rate is in fact about 34% when 25% is expected by chance, then an experiment with 30 trials (the mean for the 28 studies in the original meta-analysis) has only about 1 chance in 6 of finding an effect significant at the .05 level with a one- tailed test. A 50-trial experiment boosts that chance to about 1 in 3. One must escalate to 100 trials in order to come close to the break even point, at which one has a 50-50 chance of finding a statistically significant effect (Utts, 1986). (Recall that only 2 of the 11 autoganzfeld studies yielded results that were individually significant at the conventional .05 level.) Those who require that a psi effect be statistically significant every time before they will seriously entertain the possibility that an effect really exists know not what they ask." (from Bem, Daryl J. and Honorton, Charles (1994). Does Psi Exist? Replicable Evidence for an Anomalous Process of Information Transfer, Psychological Bulletin, 115(1): 4-18 )



I read the paper on the photic stimulation that you presented in the More Evidence thread, and it suffers from a serious problem.

The standard deviation for the sample is higher than the effect size. Therefore all the supposed effect could be attributed to the random variation of the sample during the test period. This is especially relevant because of the small sample size.

I think you mean to say that the SD is higher than the mean. They use a Wilcoxon signed rank test, which is a distribution-free method of looking at the difference between two sets of paired measurements within the same population. This test is appropriate for small sample sizes.
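
For illustration (hypothetical numbers, not the paper's data), the test runs like this in scipy:

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    # Hypothetical paired measurements: each subject's EEG power during
    # stimulation epochs vs. matched control epochs (n = 15 pairs).
    stim = rng.normal(1.1, 1.0, size=15)
    ctrl = rng.normal(1.0, 1.0, size=15)
    stat, p = wilcoxon(stim, ctrl)  # distribution-free test on paired differences
    print(stat, p)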


I found the 'linked biological' theory very interesting, but again, if the standard deviation is higher than the effect size, then it could all be sampling error.

I don't think they conducted any statistics on the spatial distribution, so that's a possibility. Hopefully the follow-up experiment will include a large enough sample.


There is also the issue of removing overlapping cycles from the sample, which means that the sampling of the 'photic stimulation epochs' is not random and excludes all the samples that might occur during overlapping time periods. This is a serious sampling flaw.

I don't see why this means the photic stimulation sample is not random. Since both the original stimulation marks and the control marks were randomly generated, the samples taken out of the photic stimulation epochs will be random. If you take a random sample from a random sample, you are still left with a random sample, are you not?
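
As a toy check of that last claim (my own illustration, not taken from the paper), a random subsample of a uniformly random sample does keep the same distribution:

    import numpy as np

    rng = np.random.default_rng(1)
    marks = rng.uniform(0, 600, size=50_000)            # random epoch onsets (seconds)
    sub = rng.choice(marks, size=5_000, replace=False)  # random subsample of them
    print(marks.mean(), sub.mean())  # both ~300, the uniform mean on [0, 600]
    print(marks.std(), sub.std())    # both ~173, the uniform spread on [0, 600]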


There is also the elimination of anomalous data from the data sets. I think those should have been examined for random variation.

Do you mean the artifact removal? That's not anomalous data, since the source of the artifact is known: equipment failure. I believe this is standard practice in EEG analysis.


I see no reason to believe that there is an effect in that paper. There are some flaws in the method and the analysis that do not even make it suggestive until there is replication.


The paper supposedly replicates previous results:

Wackermann, J., Seiter, C., Keibel, H., & Walach, H. (2003). Correlations between brain electrical activities of two spatially separated human subjects. Neuroscience Letters, 336, 60-64.

I haven't read this paper yet.
 
Lothian said:
While I fail to see why the receiver can not pick the sent image rather than the experimenter, I think the procedures here look ok.

The receiver picks the sent image?! Was that a mistype?

(edited)
I think I see: you mean the receiver does the judging, yes? In some of the cases they will (it's mentioned on the webpage)


I had a look on the Edinburgh site for papers on the results obtained using this procedure. The only paper mentioning Ganzfeld concluded:

Has an experiment ever been done using this procedure at Edinburgh, and what were the results?

I don't know if there are any successful experiments done there with the fully updated methods. I'll have a trawl through the available papers.
 
davidsmith73 said:
The receiver picks the sent image?! Was that a mistype?

(edited)
I think I see: you mean the receiver does the judging, yes? In some of the cases they will (it's mentioned on the webpage)
Yes, that is what I meant. Sorry I missed it.

I don't know if there are any successful experiments done there with the fully updated methods. I'll have a trawl through the available papers.
Thanks, but it would be interesting to know about all the experiments (rather than just the successes :D). I can't believe they went to the trouble of designing experiments, along with maps of the actual rooms, without doing an actual experiment.
 
davidsmith73 said:
What flaws?

The most prominent ones were discussed ad nauseam in the thread I linked to earlier, but here are the highlights:
1. The way that the sample picture sets are generated in the first place.
2. The fact that there is no testing of the word lists generated to match a particular picture. This is crucial for two reasons: one, it would allow the experimenter to make sure that the sets of pictures do not have crossover on the 'hit' words for the target in a set; two, it would allow for the objective rating of whether a response was a hit.
3. The lack of any testing to see what the response would be to random sets of sender word strings for given pictures in a set, and in the total of all sets. This is a serious flaw because it could produce hit rates higher than expected, given the lack of cross-match elimination.
4. The method for selecting 'hits' could actually create an artifact in itself.

Those were the most serious flaws.

True. However, one can never rule out the possibility that you haven't controlled for effects you don't know about, in any experiment. You can only reduce the likelihood of this as much as possible.

This shows a gross misunderstanding of the methods of science, although the misunderstanding is more common in the social sciences. In designing an experimental study you are under the burden of trying to eliminate any possible confounding factors. This is not limited to parapsychology; it is true of all research.

Research requires that the experimenter try to control for as many confounding factors as possible; otherwise the results are meaningless.

It is absurd to say that you can't control for an effect you didn't know about; the elimination and calibration of confounding and linked effects is at the core of science. You may not have known about a possible effect, but that does not mean that you should not then try to test to see if it is there. That is at the core of science!

If indeed the meta-analysis is compensating for flawed methodology. I do think that meta-analysis is far from the ideal method of demonstrating a replicable effect. Unfortunately, this is the situation at the moment for ganzfeld. The problem can be summed up nicely by Bem and Honorton:

"Given its larger effect size, the prospects for successfully replicating the psi ganzfeld effect are not quite so daunting, but they are probably still grimmer than intuition would suggest. If the true hit rate is in fact about 34% when 25% is expected by chance, then an experiment with 30 trials (the mean for the 28 studies in the original meta-analysis) has only about 1 chance in 6 of finding an effect significant at the .05 level with a one- tailed test. A 50-trial experiment boosts that chance to about 1 in 3. One must escalate to 100 trials in order to come close to the break even point, at which one has a 50-50 chance of finding a statistically significant effect (Utts, 1986). (Recall that only 2 of the 11 autoganzfeld studies yielded results that were individually significant at the conventional .05 level.) Those who require that a psi effect be statistically significant every time before they will seriously entertain the possibility that an effect really exists know not what they ask." (from Bem, Daryl J. and Honorton, Charles (1994). Does Psi Exist? Replicable Evidence for an Anomalous Process of Information Transfer, Psychological Bulletin, 115(1): 4-18 )
The statistics are bogus. One cannot draw any conclusions about the chances of flawed data; flawed data is flawed data, and the chances are not computable when you haven't used a strict method and protocol.

The statistical significance of uncontrolled data is meaningless.

I think you mean to say that the SD is higher than the mean. They use a Wilcoxon signed rank test, which is a distribution-free method of looking at the difference between two sets of paired measurements within the same population. This test is appropriate for small sample sizes.

You misunderstand me completely. The 'effect' that they claim exists between their different groups falls below the level of the standard deviation. The difference between the sample means, or whatever it was that they claimed was the significant effect, is less than the standard deviation in the samples. Therefore the effect could have been produced through the variation of the levels in the samples, and not be an effect of 'pair matching'.
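
To illustrate the worry with a toy simulation (my own numbers, not the paper's): with small samples, pure noise quite often produces a mean difference that is a sizable fraction of the standard deviation:

    import numpy as np

    rng = np.random.default_rng(2)
    n, sd, runs = 12, 1.0, 100_000
    # Two groups drawn from the SAME distribution, i.e. no real effect at all.
    a = rng.normal(0.0, sd, size=(runs, n)).mean(axis=1)
    b = rng.normal(0.0, sd, size=(runs, n)).mean(axis=1)
    # How often does chance alone give a mean difference of half an SD or more?
    print(np.mean(np.abs(a - b) >= 0.5 * sd))  # ~0.22 with n = 12 per group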

I don't think they conducted any statistics on the spatial distribution, so that's a possibility. Hopefully the follow-up experiment will include a large enough sample.

Excuse me, but this is a crucial flaw in the study, in that they depend on results prior to the photic stimulation to raise the efficiency in a time-ordered paradigm. Quantum effects do not travel backwards in time; tachyons do, but not particles on this side of light speed.

I don't see why this means the photic stimulation sample is not random. Since both the original stimulation marks and the control marks were randomly generated, the samples taken out of the photic stimulation epochs will be random. If you take a random sample from a random sample, you are still left with a random sample, are you not?

Sorry, David, but that is more wishful thinking. You can't just eliminate data from the sets and then claim that they are random. They could have adjusted the interval spacing to eliminate this completely, and it is a methodological flaw. It is statistically possible that the intervals they chose to eliminate contained data that would have leveled out the effect completely.
Methodology is methodology; it is not kind to anyone who chooses to violate it.

Do you mean the artifact removal? That's not anomalous data, since the source of the artifact is known: equipment failure. I believe this is standard practice in EEG analysis.

They also state that they remove data for other things that can affect an EEG measurement, such as eye movement. It would be standard protocol to then discuss what objective criteria were used in the elimination of those data sets, in that there should be clear guidelines that any human could enforce. Then you have a sub-test for inter-rater reliability, to make sure that all raters remove the same data sets consistently.
There should be a record kept of what was removed for the equipment failures and what data was removed due to human judgement.
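
For example (hypothetical ratings; sklearn used purely for illustration), Cohen's kappa is a standard agreement statistic for exactly this kind of keep/remove decision:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical keep (0) / remove (1) decisions by two raters over ten epochs
    rater_a = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
    rater_b = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
    print(cohen_kappa_score(rater_a, rater_b))  # 1.0 would be perfect agreement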

The paper supposedly replicates previous results:

Wackermann, J., Seiter, C., Keibel, H., & Walach, H. (2003). Correlations between brain electrical activities of two spatially separated human subjects. Neuroscience Letters, 336, 60-64.

I haven't read this paper yet.

Replication generally requires twenty or more repetitions, if not hundreds!

I want to make clear that I am just as critical of non-parapsychological research. The social sciences are rife with bad methodology, messy protocols and lack of replication. I tear into psychology the same way I tear into parapsychology.
(Like the claims made in Talking to Prozac; they are seriously flawed in different ways.)
 
CFLarsen said:
This is not correct. Skeptics are more prone to remember the correct ratio.

To use a CF Larsen technique: 'Please prove that by a repeated controlled trial' ;) I'm kidding, save your time, if you don't have one close at hand.

But skeptics don't - by definition - have a negative expectation. Skeptics expect nothing - skeptics judge from the data alone.
Yes, true skeptics might not; skeptics with minds made up, the ones who view it as a political cause, most probably do. That means that probably most in CSICOP (with some exceptions, of course) are too biased for a neutral interpretation.

We have seen here, on this board, that people will lie about their beliefs in the paranormal.
Unfortunately people lie, so how do you know a professional skeptic wouldn't be economical with the truth? To them perhaps it's not significant lying; they are merely assisting their perceived correct conclusion for the benefit of science and the skeptic movement. ;)

Such signed confessions are worthless. What we should do is focus on the data. The experimental design. The protocols.
Good, but do it with researchers and experimenters enthusiastic to find PSI. To assume a bored skeptic going through the motions is going to find PSI is extraordinarily bad science, when PSI is supposed by proponents to be connected with thought/belief. The researchers must state their opinions on PSI beforehand; if some lie, so be it. It's still better than a professional skeptic showing interest in running a trial and yawning all the way through it. Some psychologists are doing PSI trials as research into 'self deception', not through any sincere effort to find PSI.

Schwartz used only well-known psychics. Targ & Puthoff used Geller.

But you have a point: These self-proclaimed psychics are notoriously hard to drag into the lab. One can wonder why...
I'm saying not to use professional psychics; I'm saying to use natural psychics (they have less motive to cheat or to defend a reputation), the ones who have claimed to have seen dead people, etc. since childhood. Put them in a friendly environment with strict controls. AFAIK it has not been done; most ESP trials have measured the general public. Student musicians and artists did slightly better in some trials.

So, you want to exclude skeptics, whom you say have a negative attitude towards the paranormal, and instead focus on those who have "enthusiasm"? You are advocating that people should have a positive attitude?
Yes, I think to not have a positive attitude during a PSI trial is a contradiction of what is under trial. Some trials claimed to have measured 'belief', to see if it increases PSI, by means of deception: one trial involved lying to the ones being tested, telling them they were doing well (and telling another group their correct results were wrong). This is also anti-PSI, because lying in essence is the opposite of telepathy. If 100% perfect telepathy existed (thank goodness it doesn't) it would probably mean no individuality of human mind, so lying/deception is the concealment of one mind from another: the opposite of telepathy. Tests involving deception aren't fair trials; they must be honest but strictly controlled attempts to find PSI with a positive attitude. Then I would welcome negative researchers repeating them, and probably failing more often :) In other words, the attitude of the researchers/experimenters must be known before and during the trial. That should be the next step in PSI research: an agreement on publishing the accompanying attitude (yes, no skeptic is going to sign 'this is absolute crap!' :) they will probably claim a modest 'very doubtful'; similarly, a proponent will probably downplay their belief, but I think science can take this into account with common sense analysis).

If psi exists, then it is not at the edge of human ability: on the contrary, it is a huge part of reality, judging from the sheer number of psychics, healers, tarot-readers, astrologers, etc.
Psychics, being human, tend to exaggerate their abilities. This is not necessarily conscious, but the underlying ego in every human that makes humans think they do things better than others see them do it. Yes, many believers are too keen to trust and to make things fit, and that, IMHO, is where the problem lies in making PSI appear absurd. The observation of this in no way proves real weak PSI doesn't exist.

With regard to tarot, astrology, palmistry, etc.: none of these actually require PSI ability. I have little interest in the sort of psychics who use such things. To me these are psychics who aren't really psychic and need toys. ;)

Spiritual healing? Let's leave that one for another topic; too complex an issue here.

What about their results? Do you claim that Wiseman's results showed evidence of psi?
According to Sheldrake: 'Wiseman is one of Britain's leading media sceptics. He is an informed sceptic, in the sense that he reads the literature, knows what's going on, and actually does experiments. However, he is a very committed sceptic who believes these things are basically impossible, and Wiseman went along to do these experiments with Pam Smart. He invented a criterion for the success or failure of the dog, which was that it should go to the window for no reason apparent to Wiseman ... in the first experiment it was 60 seconds. Then he changed the criterion to two minutes, for no apparent reason. If the dog went to the window for no apparent reason when she wasn't coming home, it failed the test. He published a paper in the British Journal of Psychology, saying it had failed the test. There's the paper. He put out a press release. It was in all the science correspondence ... However, if you plot Richard Wiseman's data on a graph, which he didn't do in his papers, even though I sent him graphs before he submitted it ... it's a self-reinforcing system, reinforced by a system of taboos and prejudices, which I think are a shame to science. I think that this is an outrage, really, that in a scientific world we have this kind of behaviour going on, which I think brings discredit on the whole of science. I think one of the things that really disillusions people with science is the feeling that science is not actually about evidence; it's about dogma. My view is that science needs to be about evidence, not dogma, and personally, I see telepathy as a test case for this very principle.'

Will you also be prepare to draw the consequences, if the results turn out to be negative?
Yes, but I'm not sure we would completely agree on the consequences.
 
Open Mind said:
Some trials claimed to have measured 'belief', to see if it increases PSI by means of deception: one trial involved lying to the ones being tested, telling them they were doing well (and telling another group their correct results were wrong)

Which study was this?
 
