

Are you defining progress as moving toward resolving the issue of whether psi exists?

Yes. Plus clarifying various types of anomalous information and starting to understand the details.

In the Bem, Broughton, and Palmer article, five studies share the same footnote, which reads: "Hit rate not reported. Estimated from z score." In conjunction with the footnote you quoted, I believe the correct interpretation is that those five studies (four of which were included in the original 30 studies analyzed by Milton and Wiseman) were the only ones of the 40 studies to have used "different outcome measures." If I am correct, the authors of both articles could have excluded those studies (a small percentage of the total) from their analyses and used binomial models. I will check with one of the authors to see whether my interpretation is correct.
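
For concreteness, that "estimated from z score" footnote presumably means something like the following: under a binomial (normal-approximation) model, the hit rate can be backed out of the reported z score. The 50-trial count below is made up, and the 0.25 chance rate assumes standard four-choice trials; neither is a figure from the table.

```python
import math

def hit_rate_from_z(z, n_trials, p_chance=0.25):
    """Recover an observed hit rate from a reported z score, assuming
    a normal approximation to the binomial under the chance rate
    p_chance (0.25 for four-choice trials)."""
    se = math.sqrt(p_chance * (1 - p_chance) / n_trials)
    return p_chance + z * se

# Hypothetical study: 50 trials, reported z = 1.60
print(hit_rate_from_z(1.60, 50))  # ~0.348, i.e. a 34.8% hit rate
```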

I see. It's okay for you to arbitrarily exclude as many studies as you like, but it's not okay for me to exclude a single study for just cause.

However, I think that I was wrong (and that you are right - those 5 represent all the studies whose outcomes weren't reported as 'hit' and 'miss'). I found many discrepancies between the reported hit rates and the z-scores in that table and assumed (incorrectly) that they weren't due to error, but rather that different information was going into the two numbers. Having now looked into a number of the studies, it appears the discrepancies are errors. Some of the errors were on the part of the authors of the individual papers, and some were on the part of Bem et al. and Milton and Wiseman. One is obvious from the table: the z-scores of the first two studies should be identical in magnitude and opposite in sign, because their hit rates are equidistant from the chance mean. But they're not.
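
Here is a quick check of that symmetry claim with made-up numbers: at 40 trials and a 0.25 chance rate, 13 hits and 7 hits sit equidistant from the expected 10, so their z scores should differ only in sign.

```python
import math

def binomial_z(hits, n_trials, p_chance=0.25):
    """z score for a hit count under the normal approximation to the
    binomial; p_chance = 0.25 for four-choice trials."""
    se = math.sqrt(n_trials * p_chance * (1 - p_chance))
    return (hits - n_trials * p_chance) / se

# 13 and 7 are equidistant from the expected 10 hits out of 40
print(binomial_z(13, 40))  # ~ 1.10
print(binomial_z(7, 40))   # ~ -1.10
```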

I don't know if this has been addressed already, but obviously this paper can't be used as is. Someone has to go through all the original papers and redo all the calculations. I find it odd that these errors persisted through the two years between the two meta-analyses and that they weren't picked up by Bem et al.

Sorry if I wasn't clear, but you stated: "The effect of publication bias is usually taken into consideration when doing meta-analyses in other fields." And I replied: "How so? Can you give an example or two?" You provided me with an article that concludes: "Assessment of publication bias using a funnel plot was attempted, but too few studies were available to allow any meaningful judgment." So I still see no evidence that publication bias is accounted for in a meaningful way.
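
For what it's worth, the funnel plot that article attempted is the standard device: each study's effect size is plotted against its standard error, and asymmetry (small, imprecise null studies missing from one side) is read as a sign of publication bias. A minimal sketch with invented numbers, not data from any of these studies:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented effect sizes and standard errors, one pair per study
effects = np.array([0.12, 0.30, 0.05, 0.45, 0.20, 0.33, 0.18])
std_errors = np.array([0.05, 0.15, 0.04, 0.22, 0.10, 0.18, 0.08])

plt.scatter(effects, std_errors)
plt.gca().invert_yaxis()  # most precise studies plotted at the top
plt.axvline(np.average(effects, weights=1 / std_errors**2),
            linestyle="--")  # fixed-effect pooled estimate
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Funnel plot: asymmetry suggests publication bias")
plt.show()
```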

Right. "I assume you are lying until proven otherwise." As I have already discovered, there is no evidence which you will consider acceptable proof. And that this assumption is made without evidence that I have ever lied to you further demonstrates the futility of my participation in this game.

Linda
 
I see. It's okay for you to arbitrarily exclude as many studies as you like, but it's not okay for me to exclude a single study for just cause.
Why is it arbitrary to exclude a handful of studies that used a fundamentally different approach than the great majority of the studies? And your "just cause" is based solely on the fact that the study in question is a statistical outlier, as opposed to a finding that it was undertaken improperly. But I'm fine with: (a) showing the results with and without the studies that used a fundamentally different approach, and (b) showing the results with and without the outlier study.
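
To illustrate (b): the with-and-without comparison is just the pooled estimate computed twice. A sketch with made-up effect sizes and inverse-variance weights (not the actual figures from either meta-analysis), where the fourth study plays the outlier:

```python
import numpy as np

# Made-up effect sizes and inverse-variance weights; the fourth
# study (0.90) is the hypothetical outlier
effects = np.array([0.10, 0.25, 0.05, 0.90, 0.15])
weights = np.array([20.0, 15.0, 30.0, 10.0, 25.0])

def pooled(eff, w):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    return np.sum(w * eff) / np.sum(w)

print("all studies:    ", pooled(effects, weights))       # 0.200
print("without outlier:", pooled(np.delete(effects, 3),
                                 np.delete(weights, 3)))  # ~0.122
```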

However, I think that I was wrong (and that you are right - those 5 represent all the studies whose outcomes weren't reported as 'hit' and 'miss'). I found many discrepancies between the reported hit rates and the z-scores in that table and assumed (incorrectly) that they weren't due to error, but rather that different information was going into the two numbers. Having now looked into a number of the studies, it appears the discrepancies are errors. Some of the errors were on the part of the authors of the individual papers, and some were on the part of Bem et al. and Milton and Wiseman. One is obvious from the table: the z-scores of the first two studies should be identical in magnitude and opposite in sign, because their hit rates are equidistant from the chance mean. But they're not.

I don't know if this has been addressed already, but obviously this paper can't be used as is. Someone has to go through all the original papers and redo all the calculations. I find it odd that these errors persisted through the two years between the two meta-analyses and that they weren't picked up by Bem et al.
The Bem article states: "Milton and Wiseman's (1999) own figures were used for the 30 studies in their analysis, and their statistical procedures were duplicated to the extent possible for the 10 new studies." So, I think Bem et al. simply accepted those procedures. They probably decided that there was no need to argue about the validity of those procedures because their analysis showed statistical significance even using those procedures.

Right. "I assume you are lying until proven otherwise." As I have already discovered, there is no evidence which you will consider acceptable proof. And that this assumption is made without evidence that I have ever lied to you further demonstrates the futility of my participation in this game.
Again, it's not a matter of lying, and I have never accused you of that. I just don't see that publication bias has been used to discredit research in fields outside of psi, so why use it to discredit psi research?
 
Why is it arbitrary to exclude a handful of studies that used a fundamentally different approach than the great majority of the studies? And your "just cause" is based solely on the fact that the study in question is a statistical outlier, as opposed to a finding that it was undertaken improperly. But I'm fine with: (a) showing the results with and without the studies that used a fundamentally different approach, and (b) showing the results with and without the outlier study.

Because there is no particular reason to think that the studies you want to exclude aren't measuring the same effect as all the other studies, whereas there is good reason to think that the one study isn't.

The Bem article states: "Milton and Wiseman's (1999) own figures were used for the 30 studies in their analysis, and their statistical procedures were duplicated to the extent possible for the 10 new studies." So, I think Bem et al. simply accepted those procedures. They probably decided that there was no need to argue about the validity of those procedures because their analysis showed statistical significance even using those procedures.

I wasn't talking about the validity of the procedures but about errors - things like writing down the wrong number, performing a calculation incorrectly, mixing up which columns to perform the operations on, etc.

Again, it's not a matter of lying, and I have never accused you of that. I just don't see that publication bias has been used to discredit research in fields outside of psi, so why use it to discredit psi research?

Right. You just treat what I say as though it is untrue. When I tell you that publication bias is treated seriously in other fields (i.e. it changes the conclusions that are drawn from meta-analyses), you don't believe me.

I don't object to someone asking for evidence/examples/further explanation. What I object to is your pretense that doing so would make any difference.

Linda
 