JREF Challenge Statistics

This is an additional question you did not pose to begin with. Methinks you just added this now to avoid doing the work of finding the answer yourself.

From my page
(bold added)

The JREF has been making strides in making information on "the claims received, the correspondences exchanged between the JREF and the applicant, and subsequent protocol negotiations and test results" electronic, but not, as far as I can tell, the numerical results of past preliminary tests, which in my opinion are just as, if not more, interesting and relate more to the science.

I've always been interested in data from the past as well.

So, again, why do you assume I did not calculate what you are demanding? And why do you think that what I did or did not calculate matters to proposing the idea of seeing interesting data?
 
They are necessarily bad when we use them for inference.

Who is doing inference?

You argue against doing that, and I agree.

Until you have a research question that is limited to the sample, I must conclude you are using them for inference.

That is incredibly flawed logic. I have not talked about koala bears, therefore you must conclude I am talking about koala bears.

Your demonstrated (ill)ogic is about as pertinent as your lack of imagination.
 
T'ai Chi,

Would it be possible for you to post for us the following information concerning your post history:

Date/Time of Post, Username used to Post with, JREF Board Posted in, Subject Category of Post, JREF User Replied To, and length (in characters) of post?

I think this would be very, very interesting information to have.

Thanks!

Thanks for sharin'.
 
from your page:
the numerical results of past preliminary tests, which in my opinion are just as, if not more, interesting and relate more to the science.
Relate how? What can this self-selected sample tell you about "the science"? Please, I lack imagination; tell me.
 
I don't mean to harp on this point, but it lies at the very basis of this discussion. Mercutio, CFLarsen, and others have tried to explain to you why data from many different tests cannot be combined into any meaningful meta analysis, but you ignore their knowledgeable attempts to help.

It was already explained, ad nauseam, that
(bold added)

One could also consider doing some type of meta-analysis on the data if appropriate.

That is, as explained, many times, I'm not saying meta-analysis is the way to go no questions asked.

The main point, as also pointed out several times, is to see some interesting data that skeptics should be interested in seeing.
 
from your page: Relate how? What can this self-selected sample tell you about "the science"? Please, I lack imagination; tell me.

I'm not on trial here. Your demands are ignored.

If you aren't willing to think how test results relate more to science than Kramer's emails, I cannot help you.
 
With a self-selected sample, what would the trend from year to year be able to tell you?

It all depends on what the specific data is. Obviously, something like the number of dowsing tests done per year would tell us the number of dowsing tests that were done per year, and whether that number is increasing or not.

You like to argue in hypotheticals, which are easily picked apart (no matter if the hypotheticals are yours or mine). I don't see the point of such avoiding-data-at-all-costs inquiries.

What would it be able to tell you that isn't better answered with a different, systematically collected, data set?

If you're interested in hypothetical as yet to be collected datasets, you're welcome to investigate those.
 
I'm not on trial here. Your demands are ignored.
It's a request, not a demand. I even said please. I have said before, and I mean it: if you can show a legitimate use for these data, I would gladly support your request. You may (and likely will) ignore my request, but frankly you act against your own interests in doing so.

I honestly think that the data are inappropriate for any use beyond their challenge purpose (for reasons of self-selection, and of the bias that may emerge in combination, as previously explained); if you honestly think they are of use, I have no clue why you would not say how. Claus dismisses you out of hand; I am more patient. I don't care what I might think of you; if you have good ideas, I am for them. If you do not share them, I cannot support them. Your loss.
If you aren't willing to think how test results relate more to science than Kramer's emails, I cannot help you.
Willing to think? Certainly. Willing to suspend what I know about biased samples? Sorry, no. Thinking about how test results relate means critically examining the possibilities. If the data don't work for a particular purpose, it is better to obtain good data than to argue about bad data.
 
It all depends on what the specific data is. Obviously, something like the number of dowsing tests done per year would tell us the number of dowsing tests that were done per year, and whether that number is increasing or not.
But of course, that number is meaningless. It does not tell us whether there are more dowsers, because we simply cannot know if the reasons for self-selection were constant, or whether there was some unrelated reason that there were more dowsing challenges. In fact, if we picked up an apparent trend of increasing dowsing challenges, it would tell us nothing; is it a representative sample? From a self-selected sample, we simply cannot know, not only why there were more dowsers, but even whether there were more dowsers. An ad on a dowsing website could easily throw off any claims of a "trend", and no one has control over that.
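The point above can be illustrated with a small sketch using entirely made-up numbers: if the population of dowsers stays fixed but the rate at which they self-select into the challenge rises (say, after that hypothetical ad on a dowsing website), the yearly applicant counts still show an apparent upward "trend". The count measures participation, not prevalence.

```python
# Sketch with invented numbers: POPULATION and the participation rates
# are assumptions for illustration, not real JREF figures.
POPULATION = 1000                                   # constant number of dowsers
participation = [0.01, 0.015, 0.02, 0.025, 0.03]    # rising self-selection rate

# Expected challenge applicants per year under these assumptions
counts = [round(POPULATION * rate) for rate in participation]
print(counts)  # [10, 15, 20, 25, 30] -- rising, yet POPULATION never changed
```

The "trend" in `counts` is driven entirely by the participation rate, which no one observes or controls, so nothing about the underlying population can be inferred from it.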
You like to argue in hypotheticals, which are easily picked apart (no matter if the hypotheticals are yours or mine). I don't see the point of such avoiding-data-at-all-costs inquiries.

If you're interested in hypothetical as yet to be collected datasets, you're welcome to investigate those.
Well, if I had a question to answer, I would have a much better idea of how to collect the data. That is pretty much the whole point of the "hypothetical as yet to be collected datasets"; they are collected to answer a particular question. Obviously, you have some reason to keep looking at the data which have already been collected, and I cannot fathom why. What question can be answered? You will not help with that, and I ...apparently...lack imagination.
 
It's a request, not a demand. I even said please.

But I'm not on trial here. I find your repeated questioning without a point tiresome and I will not tolerate it.

I have said before, and I mean it, if you can show a legitimate use for these data I would gladly support your request.

Your support, or lack thereof, is of no relevance to me.

Willing to suspend what I know about biased samples? Sorry, no.

Just like you're not willing to suspend what you know, I'm not willing to suspend what I know (Lohr and other resources and examples).

We're at a 'we both disagree with each other' place. Yup.
 
It does not tell us if there are more dowsers, because we simply cannot know if the reasons for self-selection were constant, or whether there was some unrelated reason that there were more dowsing challenges.

Of course it can tell you the basic fact of whether there were more dowsers tested in a certain year.

is it a representative sample?

This has been addressed plenty of times.

An ad in a dowsing website could easily throw off any claims of "trend", and no one has control over that.

And? No one can control the stock market, weather, and many other things that people get descriptive statistics on.

Obviously, you have some reason to keep looking at the data which have already been collected, and I cannot fathom why. What question can be answered?

This has been addressed plenty of times.
 
It was already explained, ad nauseam, that
(bold added)



That is, as explained, many times, I'm not saying meta-analysis is the way to go no questions asked.

The main point, as also pointed out several times, is to see some interesting data that skeptics should be interested in seeing.

And, as is your M.O., you have completely ignored the question that was put to you. What one thing does every single preliminary test ever conducted by the JREF have in common?
 
But I'm not on trial here. I find your repeated questioning without a point tiresome and I will not tolerate it.

Your support, or lack thereof, is of no relevance to me.

Just like you're not willing to suspend what you know, I'm not willing to suspend what I know (Lohr and other resources and examples).

We're at a 'we both disagree with each other' place. Yup.
I am sorry that you see scientific inquiry as the Spanish Inquisition. You see no point? Funny, I thought I was the one with no imagination.

Fine, agree to disagree. The offer is open. If you can produce questions for this dataset, do so. If you do not wish to contribute to a scientific community, then "agreeing to disagree" is as good an excuse as any.
 
Of course it can tell you the basic fact of whether there were more dowsers tested in a certain year.
This has been addressed. A faulty answer is as bad as, or worse than, no answer. What does the self-selected sample give you that a proper sample would not? There are, of course, many things that a proper sample would give you that a self-selected sample will not. Even your level of understanding of statistics and research methods will show that.
This has been addressed plenty of times.
Yes. It is not a representative sample.
And? No one can control the stock market, weather, and many other things that people get descriptive statistics on.
The data we collect on those are intended for different purposes. And are collected with those purposes in mind. Unless you wish to suggest that a self-selected "reportyourweather.com" will be a better source of data than systematically collected data, these examples are useless for your argument.
This has been addressed plenty of times.
So far, the answer is "no, there is no reason to look at these data."
 
What one thing does every single preliminary test ever conducted by the JREF have in common?

I'm not interested in every single test, only the ones that are statistical in nature, and not just the JREF's, but those of any skeptical organization that does similar tests.

Surely, if there are statistical tests of dowsing, it seems reasonable to look into the possibility of combining their data. And if that is not appropriate, then no biggie; don't do it. But seeing the data and calculating descriptive statistics are what is important.
 