
Meta-analysis of JREF tests

CFLarsen said:

You are very right - this is not about me, it's about you.


Actually, I'm pretty sure it is 100% about meta analysis.


Why are you unable to explain, in your own words, how you are going to do a meta-analysis on the JREF challenges?


I don't let myself be bullied. I've given you my answer already.


just go ahead and do this meta-analysis of yours.

What's stopping you?

We were discussing the appropriateness (or not) of a meta analysis. Now you seem to want to discuss why I am not currently doing a meta analysis. I'm not going to humor your topic change.
 
69dodge said:
Suppose it's not. What do you think we could conclude from that? How would it change your beliefs or your actions? In short, what difference would it make?

If the results would make no difference, there's no point in doing the meta-analysis to begin with.

Great questions, 69dodge. If the combined standardized score differs from what we would expect by chance, and the difference is statistically significant, then it would be interesting to see in which direction it is significant, and what that means for dowsing ability, for example, and for the experiments themselves.

If the combined standardized score is significant, we need to see if most of the standardized scores are positive, negative, or if they are all about average with 1 or 2 'superstar' standardized scores.

If most of the scores are positive or negative, it could mean there is systematic bias in the experiments that makes the participants score too high or too low on average. This knowledge could lead to improving the experiments, designing new ones, and, in general, learning more about dowsing and testing paranormal abilities.

I'd personally expect the combined standardized score to be around what chance predicts, but that is just my belief. The data itself could verify that, or not.
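The pooling being described here can be sketched with Stouffer's method: sum the per-test z-scores and divide by the square root of the number of tests. A minimal illustration (the scores below are made up for the example, and the method assumes independent tests whose standardized scores are roughly standard normal under chance):

```python
import math

def stouffer_combined_z(z_scores):
    """Combine per-test z-scores into one overall z (Stouffer's method).

    Assumes the tests are independent and each score is approximately
    standard normal under the null hypothesis of chance performance.
    """
    k = len(z_scores)
    return sum(z_scores) / math.sqrt(k)

# Hypothetical per-test scores: each one is individually non-significant
# (all below the two-sided 1.96 cutoff), but they all lean positive.
scores = [0.8, 1.1, 0.5, 0.9, 1.2, 0.7, 1.0, 0.6, 1.1]
print(round(stouffer_combined_z(scores), 2))  # 2.63
```

Each score here is below the usual 1.96 threshold, yet the combined score comes out around 2.63, which is significant; this is exactly the "many small positive scores" case, as opposed to the "1 or 2 superstar scores" case.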
 
dissonance said:
T'ai Chi, what's your statistics background? Graduate level? Undergrad? Interested layperson? We (OK, I) might have an easier time explaining why what you're asking is inappropriate if we knew where you were coming from on this.

Hi dissonance! I have graduate level (and yes, I did graduate :) ) and professional knowledge of statistics.
 
T'ai Chi,

"Bullied"? What are you talking about?

You want to do a meta-analysis. Why is it "bullying" to ask you how you are going to do it?

There is no change of topic. If you don't want to discuss how you are going to do it, then just go ahead and do it.

What's stopping you?
 
T'ai Chi said:


Hi dissonance! I have graduate level (and yes, I did graduate :) ) and professional knowledge of statistics.

So then, maybe you have adequate knowledge to perform a meta-analysis. Feel free to fly to Jacksonville and start anytime. The JREF's records are accessible by the public.

Oh, and we'd like to see how you performed the analysis as well.
 
CFLarsen said:
, then just go ahead and do it.

What's stopping you?

The fact that I am not currently doing a meta analysis is in no way at all related with discussing the appropriateness of doing a meta analysis.

I am also not carrying out an analysis of black holes, but I can still discuss your posts, I mean, discuss cosmology.

In fact, I can personally have no intention of ever doing a meta analysis, and still be, obviously, justified in discussing meta analysis.
 
T'ai Chi said:
The fact that I am not currently doing a meta analysis is in no way at all related with discussing the appropriateness of doing a meta analysis.

I am also not carrying out an analysis of black holes, but I can still discuss your posts, I mean, discuss cosmology.

In fact, I can personally have no intention of ever doing a meta analysis, and still be, obviously, justified in discussing meta analysis.

So, what is your point of this thread? Let's take a look at your opening post:

T'ai Chi said:
What does everyone think about this:

Would it be desirable to perform a meta-analysis (or analyses) of all JREF tests that were conducted in a similar (ideally identical) manner?

Yes/no? Because... ?

What do people think the result would be?

You asked for people's opinions. Not one supported you. All gave perfectly good reasons why such an analysis would be futile.

Now, you say - after many posts where you defend the viability of just such a meta-analysis - that you are not "currently" doing such an analysis.

May I ask: What is your point, then? You really are just a troll, aren't you?
 
T'ai Chi said:

Great questions 69dodge. If the combined standardized score is not what we expect by chance and it is statistically significant, then it would be interesting to see how it is statistically significant, and to see what that means in terms of dowsing ability, for example, and the experiments themselves.
This is where I get confused. I don't see how it is possible to take a group of scores that are not statistically significant, combine them, and somehow get a statistically significant answer. For the JREF challenges, certainly not a positive one. We know that no one has passed the preliminary test. This tells me that no one has been able to perform whatever task to a statistically significant level above chance.

Let's keep it easy and confine it to dowsers, as I think Randi has mentioned he gets a number of them. Let's say that chance would dictate they get 5 of 10 correct, and that in order to pass (to be statistically significant) they need to get 8 of 10 correct. (Note: I have no idea if 8 is the right number. It is not important to my argument whether it is or not). We know, as no one has passed, that no one has gotten 8 or more correct. If we combine all of these scores, and average them, it will still add up to less than 8 out of 10, and be non-statistically significant.

Maybe I am being too simple-minded about "meta analysis", but I don't see how you can combine a bunch of results that at best hover around the chance mark and come up with a positive, statistically significant number.
 
So where would the data be, then? It's a bit pointless arguing about whether a meta analysis would be useful (or even doable) unless you can actually look at the data.

I don't think there should be any problem with doing some sort of meta analysis. It's quite reasonable. I think maybe people are taking a stance against the suggestion because of the suggester.

A meta analysis is a tool of science.

The only problem is how comparable the data is and how easily analysable it is. It strikes me that it shouldn't be too hard... but it may take quite a while to do (compiling the data).

My background is that I am a biomedical scientist, and I am involved in medical and quality statistics as secondary functions of my job/profession.
 
Thanz said:

This is where I get confused. I don't see how it is possible to take a group of scores that are not statistically significant, combine them, and somehow get a statistically significant answer.

If we take one "coin flip" and get 8/10 "heads", that's on the edge of 5%.

If we take 10 such events, each one providing 8/10, we wind up with 80/100 which is, although I don't have my handy program up and running at the minute, WAY outside any normal definition of "random".

What does this mean? Well, for 'n' trials, the "noise" due to randomness is proportional to sqrt(n). The number of trials is n, so the total normalized noise is sqrt(n)/n, or 1/sqrt(n).

This means that the percentage (or per-trial) noise (as opposed to total counts) is proportional to 1/sqrt(n) when examining the whole data set.

In other words, 1% off random in 10 trials is meaningless. 1% off random in 100 trials is very normal, expected randomness. 1% off random in 1,000,000,000 trials, on the other hand, looks like a real effect.
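The 8/10 versus 80/100 point, and the 1/sqrt(n) scaling, can be checked directly with an exact binomial tail probability. A sketch, using the post's coin-flip assumption of a 0.5 chance rate:

```python
from math import comb, sqrt

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A single 8/10 run sits right at the edge of the 5% level...
print(round(binom_tail(8, 10), 4))   # 0.0547

# ...but ten pooled runs at the same hit rate (80/100) are far
# outside anything chance produces.
print(binom_tail(80, 100) < 1e-8)    # True

# Relative noise shrinks as 1/sqrt(n): the standard deviation of the
# *proportion* of hits is sqrt(p*(1-p)/n).
for n in (10, 100, 1_000_000):
    print(n, sqrt(0.25 / n))
```

This is the whole argument in miniature: the per-run hit rate never changes, but the band of "normal, expected randomness" narrows as the trial count grows, so a fixed deviation eventually becomes significant.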
 
T'ai Chi said:
What does everyone think about this:

Would it be desirable to perform a meta-analysis (or analyses) of all JREF tests that were conducted in a similar (ideally identical) manner?

Yes/no? Because... ?

What do people think the result would be?

First of all, you need access to the data. I understand this can all be viewed, but only if you go along to the JREF in Florida and ask.

I assume it'll be found in a filing cabinet, at the bottom of some cellar, behind a door with a sign that says "Beware of the Leopard". (Apologies to Mr Adams.)

Unless it's published and peer reviewed, that is. But, it's not.
 
Thanz said:

This is where I get confused. I don't see how it is possible to take a group of scores that are not statistically significant, combine them, and somehow get a statistically significant answer.


If the majority of the standardized scores, while non-significant themselves, are negative or positive, this could lead to a significant combined standardized score.
 
CFLarsen said:

You really are just a troll, aren't you?

Thanks for your concern, and the label, Claus.

This thread is about an analysis, not assigning cute labels for people. Further attempts at diverting the topic will simply be ignored.
 
My question would be: what measurements are we going to be subjecting to analysis?

The JREF data would be particularly problematic, since even the broadest measures (like 'hits') are defined individually for each test. It's a significant element of the challenge that each test is custom-tailored to the exact claim made. This is, I figure, much more variation than in a typical medical meta-analysis, where many measures are gathered using standard protocols, or where the same experiment is replicated with some variations. To me, it seems that adding 1 hit from a dowser to 1 hit from a blindfolded reader doesn't equal 2 hits in any meaningful way; it equals an apple and an orange, so to speak.

Can anyone think of any measures that would be meaningful across a broad range of trials? What specific conclusions could we draw from statistics on these measures?
 
Re: Re: Meta-analysis of JREF tests



First of all, you need access to the data. I understand this can all be viewed, but only if you go along to the JREF in Florida and ask.

I assume it'll be found in a filing cabinet, at the bottom of some cellar, behind a door with a sign that says "Beware of the Leopard". (Apologies to Mr Adams.)


It must be accessible to the public, in accordance with the law.


Unless it's published and peer reviewed, that is. But, it's not.


They are not scientific studies.
 
T'ai Chi said:


Thanks for your concern, and the label, Claus.

This thread is about an analysis, not assigning cute labels for people. Further attempts at diverting the topic will simply be ignored.

You are quite obviously a troll.

Your conduct in other threads has been quite scurrilous, you have demonstrated a most peculiar understanding of the process of science, and your argument tactics are visibly designed to suit emotion rather than logic.

Your objection to labelling, therefore, is purely propagandistic.

Your proposed analysis is utterly invalid: you can't combine an apple, two pears, and a milkweed pod, unless you're a Japanese flower arranger.
 
FutileJester said:
To me, it seems that adding 1 hit from a dowser to 1 hit from a blindfolded reader doesn't equal 2 hits in any meaningful way

Hi FutileJester,

I agree, that if a meta analysis were done, the studies would have to be combined in a way that makes sense.

Perhaps only the dowsing studies could be combined?
 
Here we go again...

The only paranormal ability I saw mentioned was dowsing, so let's use that, as it comes out extremely simply.

The individual can either do what they claim, or they cannot. Yes or no. 1 or 0. Black or white. Chevy or Ford. Pepsi® or Coke®. Either/or.

There is no middle ground.

Analyzing the data and subjecting it to conditions that were not in existence at the time of the test is BAD SCIENCE!!! It's rewriting the rules halfway through the game. That's not the way things are done, and you know it.

The individuals either have the paranormal ability, or they do not. What third option might a meta-analysis of the data show? Do you want to quantize it? Fine -- zero still equals zero.

What is it specifically that you think might be shown by this analysis? How do you hope to show some statistically significant result when there is NO DATA to support it?

You keep trying to put a number to all this. You can't, because YES and NO are not numbers!

You're trying to make something out of nothing here (literally). That's bad science. Stop it.
 
A quick note. This is clearly a troll. If you have some number of tests across some number of subjects, and the results for each subject are not significant, there is no way, short of woo-woo statistics, that you can suddenly get significance. The use of meta analysis is for diagnostics, not for bailing out failed experiments. Stimson did a nice treatment of it in this forum on my thread about the Banality of Paranormal research.
 