davidsmith73
Graduate Poster
- Joined: Jul 25, 2001
- Messages: 1,697
While I realise there are some important methodological issues with the PEAR work on RV, I wanted to present what I think the PEAR data shows, from this paper:
http://www.princeton.edu/~pear/pdfs/jse_papers/IU.pdf
From here on I'll assume that people have read the paper and are familiar with PEAR's methods.
Some people here, and the Skeptic Report, have put forward the claim that this paper demonstrates that PEAR's positive RV results were entirely due to subjective biases in the analytical judging procedure. I will argue against this claim.
The first chunk of data that does not support the subjective-bias claim is Table 4. They collect their data using the FIDO method and then run it through an array of different scoring methods, and each method produces approximately the same p-value. Notice that the binary scoring method (which produced highly significant results when the data was collected using the binary method) produces nothing here. If binary scoring were inflating the results, it should do so here too. But it doesn't; it produces worse results than the other methods.
Table 5 shows the same thing. They collect their data using a descriptor array of 10 possible answers to each question (distributive method) and then analyse the data using a variety of scoring methods. Again, scoring with the binary method produced the same results as scoring with the distributive method. If the positive results were due to the analysis method being more "subjective", then a supposedly true chance result, obtained by analysing the data with the distributive method, should have been inflated when the binary method was used. That did not happen.
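Just to make the binary vs distributive comparison concrete, here's a rough Python sketch of the general idea: score the same set of descriptor responses once on their graded scale and once after collapsing them to yes/no, then get a p-value for each by permuting the trial-target pairings. Everything in it (the data, the scoring functions, the permutation test) is my own illustrative stand-in, not the actual FIDO/distributive encodings or the Z-score analysis PEAR uses in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 trials, 30 descriptor questions, answered on a
# 10-level graded scale (0-9), with target encodings on the same scale.
# Purely illustrative -- not PEAR's data or descriptor set.
n_trials, n_descriptors = 50, 30
perceptions = rng.integers(0, 10, size=(n_trials, n_descriptors))
targets = rng.integers(0, 10, size=(n_trials, n_descriptors))

def score_distributive(p, t):
    # Use the graded answers directly: closer answers = better match.
    return -np.abs(p - t).sum(axis=1)

def score_binary(p, t, threshold=5):
    # Collapse the graded answers to yes/no first, then count agreements.
    return ((p >= threshold) == (t >= threshold)).sum(axis=1)

def permutation_p(score_fn, p, t, n_perm=2000):
    # Compare matched-pair scores against scores from shuffled pairings.
    observed = score_fn(p, t).mean()
    null = np.array([score_fn(p, t[rng.permutation(len(t))]).mean()
                     for _ in range(n_perm)])
    return (null >= observed).mean()

print("distributive p ~", permutation_p(score_distributive, perceptions, targets))
print("binary       p ~", permutation_p(score_binary, perceptions, targets))
```

With random data like this, both p-values come out near chance; the point is only that the same collected data can be fed through either scoring scheme and the resulting p-values compared, which is what Tables 4 and 5 do.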
Conclusion:
The drop in results is most likely due to the method of data collection and not the data analysis.