I saw what you tried to do; it is simply irrelevant, for the reasons stated: there is no optional stopping in the JREF tests, and the observed data are compared not to what the claimant expects but to chance.
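To be clear about the term itself: here is a minimal simulation sketch, with purely illustrative numbers (this is not any actual JREF protocol), of what optional stopping would do to a chance performer's pass rate, and hence why its presence or absence matters.

```python
import random

random.seed(2)

def fixed_run(n=100, cutoff=60):
    # Fixed-length design: the run always lasts n trials, and success
    # is judged only against a preset cutoff at the end.
    hits = sum(random.random() < 0.5 for _ in range(n))
    return hits >= cutoff

def peeking_run(n=100, lead=10):
    # Optional stopping: peek after every trial and declare success
    # the moment hits lead misses by `lead`.
    hits = misses = 0
    for _ in range(n):
        if random.random() < 0.5:
            hits += 1
        else:
            misses += 1
        if hits - misses >= lead:
            return True
    return False

RUNS = 10_000
fixed = sum(fixed_run() for _ in range(RUNS)) / RUNS
peek = sum(peeking_run() for _ in range(RUNS)) / RUNS
print(f"chance performer passes fixed design:  {fixed:.1%}")  # ~3%
print(f"chance performer passes with peeking:  {peek:.1%}")   # ~30%
```

The fixed design keeps a chance performer's pass rate near the preset level; letting the run stop whenever the score happens to look good inflates it roughly tenfold in this toy setup. That inflation is the bias the term refers to, and it is absent when the run length is fixed in advance.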
Let's break this down: just show me one example of an actual, not hypothetical, JREF test where optional stopping was agreed upon and, moreover, actually occurred. Just one. That would shut me up, and it would prove your argument has some worth.
Once again, and slowly: that was not the point. The point was that the challenge tests were smaller runs because the cutoffs were chosen based on the claim, not based on an assumption of chance performance. That alone, if you understand it, is sufficient to create the bias. My example was much more blatant; it distilled the problem into one easy-to-see case (see the sketch below).
The fact that you still do not see it, though, tells me that the argument failed in its purpose. Oh, well.
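For the record, that distilled example can be restated as a sketch. The numbers below are purely illustrative assumptions (claimant pool, run length, cutoff), not any organization's actual figures: short runs with claim-pegged cutoffs act as filters, and pooling only what clears a filter biases any later comparison against chance.

```python
import random

random.seed(1)

# 1,000 hypothetical claimants, all performing at pure chance (50%
# per trial). Each takes a short preliminary run of 20 trials; the
# pass cutoff of 16 hits is pegged to the claim ("I can hit 80%"),
# not to what chance alone would be expected to produce.
N, TRIALS, CUTOFF = 1000, 20, 16

passer_hits = []
for _ in range(N):
    hits = sum(random.random() < 0.5 for _ in range(TRIALS))
    if hits >= CUTOFF:
        passer_hits.append(hits)

# Each run is a perfectly fair filter, yet pooling only the runs that
# cleared the cutoff makes chance look like ability: the passers'
# pooled hit rate is at least 80% by construction.
if passer_hits:
    rate = sum(passer_hits) / (len(passer_hits) * TRIALS)
    print(f"{len(passer_hits)} of {N} chance performers passed by luck")
    print(f"pooled hit rate among passers: {rate:.0%}")  # far above 50%
```

The point is not that any single test was unfair; it is that data generated by a claim-based filter cannot later be compared to chance as though they came from a fixed, representative design.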
No, it does not necessarily imply that. It tells us about the characteristics of that particular sample of people.
Perhaps you had better correct your web page, then. It speaks of "what people believe in", not "what that small, self-selected sample of people believe in".
Yes, that is correct. Without data we can't say much.
Does the "yes" extend to understanding the point I was making? In the part of my post you chose not to quote, I suggested that your own questions are better answered using other methods.
A parapsychologist's data set is not data from preliminary tests done by skeptical organizations. If you wish to look at parapsychologists' data sets, you are welcome to do so; I agree they are fascinating, but they are off the topic of looking at data from tests done by skeptical organizations.
Well, then... given that the data are unable to answer the questions you have about "what people believe in...", what exactly is your motivation for focusing on the poorer data set?
Looking at data from a sample is ideal for telling us about that sample.
Yes. We have already looked at that sample, in the context of answering the questions that sample was intended to answer. Anything more is bad statistics and bad methodology.
If one has some motive for suggesting that data from skeptical organizations cannot possibly be analyzed, then one can, by all means, continue.
Cannot? No. Should not? For many reasons, yes. No ulterior motive, though; simply enough experience with statistics and methodology that their misapplication is irritating. Does one need more motivation than that to advocate against using flawed methods to draw conclusions? I thought we had a common goal of understanding the world and understanding these phenomena. If I pointed out that one microscope in the lab had a cracked lens, and noticed that you were advocating using that scope, would you need to suggest "some motive" for my actions?
Your suggestion is flawed. Drop it and walk away.