
Meta-analysis of JREF tests

T'ai Chi said:
They are probably similar enough to be combined, to see if the combined standardized score is near 0 and non-significant as we'd expect by chance.

"probably".

Um, dude, some of us do this kind of analysis, you know, and speaking as one such, I dare say "probably not" is a great deal more likely.
 
Diogenes said:

Ahhh.. But you can dismiss the possibility that it is significant.. Like an atom of arsenic in a gallon of water..

Actually, that particular example is a wee bit significant. We can't live without trace quantities of arsenic, you know, even if it is also a heavy metal poison... :D

Sorry, allow me to sneak away now:

Pedant, Pedant
Pedant
Pedant pedant pedant

(think pink panther)
 
In proper statistics, if you believe the effect you are testing for has not been found because the sample size is too small, you resolve that not by pooling the results of already-done tests, but by performing a NEW test with the correct sample size.

Anything else is cooking the books. Seriously, what if I set about doing this and found that the results of the last 100 tests were statistically insignificant, but the results of the last 50 were significant? I'd be awfully tempted to write my paper highlighting the last 50 tests, if I were biased towards the hypothesis. I might even do so without consciously meaning to be deceitful.

The only way to be sure this is not happening (by unconscious bias or malfeasance) is to always test again and get new data, if you intend to change the conditions.
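The cherry-picking worry above can be made concrete with a small simulation. This is only a sketch with made-up data: 100 simulated tests where nothing is going on, and a hypothetical analyst who reports only the best-looking window of 50 of them.

```python
import math
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# 100 independent tests under the null hypothesis: each z-score ~ N(0, 1).
z_scores = [random.gauss(0.0, 1.0) for _ in range(100)]

def stouffer(zs):
    """Combine z-scores with Stouffer's method: Z = sum(z_i) / sqrt(k)."""
    return sum(zs) / math.sqrt(len(zs))

# The honest analysis: combine ALL 100 tests.
z_all = stouffer(z_scores)

# The biased analysis: scan every window of 50 consecutive tests and
# report only the window with the largest combined score.
windows = [stouffer(z_scores[i:i + 50]) for i in range(51)]
z_best = max(windows)

print(f"all 100 tests combined: Z = {z_all:+.2f}")
print(f"best 50-test window:    Z = {z_best:+.2f}")
```

By construction the reported subset can only look as good as or better than any other 50-test window, so an analyst who is free to choose the subset after seeing the data can manufacture impressive-looking Z-scores from pure noise.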
 
Ed said:

Luci, your knowledge of stats is too poor to carry on. What is the probability of drawing 911 on a specific date again?

LOL, you hypothesizing I'm Lucianarchy?

You're simply resorting to an 'I don't know the topic so I will just name-call' tactic.

I'm very glad that all these messages are archived. :)
 
gnome said:
In proper statistics, if you believe the effect you are testing for has not been found because the sample size is too small, you resolve that not by pooling the results of already-done tests, but by performing a NEW test with the correct sample size.

Anything else is cooking the books.


I am not testing for any effect. I am simply wanting to see if the combined standardized score is near 0 and non-significant as we'd expect.
 
Maybe you are, maybe you are not. Your knowledge of applied stats is as bad as his so I thought .....

As far as staying on topic, I think that your question has been addressed, and that early on. You do see why what you are suggesting is silly, do you not?
 
T'ai Chi said:


I am not testing for any effect. I am simply wanting to see if the combined standardized score is near 0 and non-significant as we'd expect.

You might or might not expect this. You really don't get it, do you? And if you get a number (which you surely will), what then? How would you interpret it? On what basis would you pool the data?

Your absence of specifics and blase assertions sure look like Luci. Combined with the stats thing, I dunno.
 
T'ai Chi said:
LOL, you hypothesizing I'm Lucianarchy?

You're simply resorting to an 'I don't know the topic so I will just name-call' tactic.

I'm very glad that all these messages are archived. :)
You wouldn't know it was a skeptic board, would you? With skeptipsychics trying to divine sockpuppetness.
 
Ed said:

You might or might not expect this. You really don't get it do you? And if you get a number (which you surely will) what then? How would you interpret it?


It obviously depends on what the combined standardized score is. If it is non-significant, it is what we'd expect by chance. If it is significant, then the experiments could be improved, or there could be some small but present anomaly that would need to be investigated further.
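For concreteness, a "combined standardized score" of this kind would typically be computed with something like Stouffer's method. A minimal sketch, with invented z-scores standing in for the individual tests (whether pooling the JREF tests this way is legitimate is exactly what is in dispute in this thread):

```python
import math

def stouffer_z(z_scores):
    """Stouffer's method: combined Z = sum(z_i) / sqrt(k)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

def upper_tail_p(z):
    """One-sided p-value for a standard normal deviate."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical z-scores from five individual tests (made-up numbers):
zs = [0.3, -1.1, 0.8, 0.2, -0.4]
z_combined = stouffer_z(zs)
print(f"combined Z = {z_combined:+.3f}, one-sided p = {upper_tail_p(z_combined):.3f}")
```

Under the null, the combined Z is itself a standard normal deviate, so "near 0 and non-significant" has a precise meaning: the one-sided p-value stays above the chosen significance level.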
 
Ed said:
Maybe you are, maybe you are not. Your knowledge of applied stats is as bad as his so I thought .....


You've already addressed me as "Luci" by saying:


Luci, your knowledge of stats is too poor to carry on. What is the probability of drawing 911 on a specific date again?


so you obviously aren't thinking maybe here.

As far as staying on topic, I think that your question has been addressed, and that early on. You do see why what you are suggesting is silly, do you not?

Please, Ed, actually explain what you mean by "silly". So far no one has adequately addressed why seeing if the combined standardized score is near 0 and non-significant as we'd expect is in any way unscientific.
 
T'ai Chi said:

I am not testing for any effect. I am simply wanting to see if the combined standardized score is near 0 and non-significant as we'd expect.

Perhaps the term "effect" is not the best word here, but it doesn't change the argument that you are testing a hypothesis (that the standardized scores over several tests will be greater than chance levels) and you must use new data.

I should modify what I've been saying, however--it has been pointed out to me IRL that such meta-analysis might be useful in designing a new test, as long as the results of the meta-analysis are not held up as significant in themselves.
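As a sketch of that legitimate use: a pooled estimate of effect size could feed a standard sample-size calculation for the new test. The numbers below are hypothetical, and the formula is the usual one-sided normal-approximation power calculation, not anything specific to the JREF protocol.

```python
import math

def required_n(effect_size, z_alpha=1.6449, z_beta=0.8416):
    """Trials needed for a one-sided z-test at alpha = 0.05 with 80%
    power to detect a standardized effect of the given size:
    n = ((z_alpha + z_beta) / d)^2, rounded up."""
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

# If pooling hinted at a small effect (d = 0.2, a made-up figure),
# a properly powered new test would need on the order of:
print(required_n(0.2))   # -> 155 trials
print(required_n(0.5))   # -> 25 trials for a medium-sized effect
```

The point being that the pooled number informs the *design* of the next experiment; it is the new experiment, not the pooled number, that would carry any evidential weight.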
 
T'ai Chi said:

You've already addressed me as "Luci" by saying:

Please, Ed, actually explain what you mean by "silly". So far no one has adequately addressed why seeing if the combined standardized score is near 0 and non-significant as we'd expect is in any way unscientific.

You are not doing a meta-analysis, the way it is generally understood. You are pooling data from different "experiments" and deriving some number from them. You then say "if the number is big, design a new test". That is all OK, but it ain't science, and you are probably violating various assumptions (homogeneity of variance, possibly); you might or might not have experiments that are comparable in terms of method and control, who knows. You are getting a bunch of numbers is all, and any result is meaningless. Why would you go through this sort of crap and not design a simple experiment with an adequate sample in the first place? I'll suggest a reason. A well-controlled experiment will never be done when you can draw specious conclusions from crappy data and thus muddy the water. That is, I am afraid, SOP for paranormal research.

So go right ahead and z-score away; it is, in all probability, meaningless.

That is what I meant by silly. Something out of the Ministry of Irreproducible Results, headed by John Cleese.
 
Pyrrho said:

You wouldn't know it was a skeptic board, would you? With skeptipsychics trying to divine sockpuppetness.

Are you referring to me or the Luci entity that I am in communication with? If not a sock, then channeling, fer sure ... Could Luci have died and come back? How ironic.
 
Ed said:

You are not doing a meta analysis, the way it is generally understood. You are pooling data from different "experiments" and deriving some number from them. You then say "if the number is big, design a new test".


I'm not doing any test at all. I am simply discussing the possibility and the interpretation of the results from such a test if one were to occur.

If I did it, if the combined standardized score was significant, then I'd say there is something that we need to look into, because we'd expect it to be non-significant.


That is all ok, it ain't science ..


I don't think you have much or any justification for saying that. The discipline of Statistics is essentially the method of the scientific method.


..and you are probably violating various assumptions

..you might or might not have experiments that are comperable in terms of method and control,


I agree 100%. We don't know for sure, certainly, without seeing the JREF database of tests (and/or other skeptical organizations that test), of course.


You are getting a bunch of numbers is all, any result is meaningless.


If a sound meta-analysis is carried out, the results are anything but meaningless. Why do you think meta-analyses are done in the first place? I disagree 100% with the statement that "any result is meaningless". If the combined standardized score is significant, that would be meaningful.


Why would you go through this sort of crap and not design a simple experiment with an adequate sample in the first place?


JREF, and other skeptical organizations, already did the tests.


I'll suggest a reason. A well controlled experiment will never be done when you can draw specious conclusions from crappy data and thus muddy the water. That is, I am afraid, SOP for paranormal research.


From what I've read, there have been soundly designed experiments in anomalous mental phenomena with repeatable results. Also, Honorton, I believe, examined the common claim against psi research that 'poor experimental design led to the better scores' and found no significant relationship.


That is what I meant by silly. Something out of the Ministry of Irreproduceable Results, headed by John Cleese.

Everyone is entitled to their opinion.
 
T'ai Chi said:
The discipline of Statistics is essentially the method of the scientific method.

The axioms of the scientific method don't depend on statistics. Statistics are mathematical tools that support the practical pursuit of science. Regardless, the question is not whether or not stats are important, but whether or not this particular application of statistics should be expected to produce useful data.

If a sound meta analysis is carried out, the results are anything but meaningless. Why do you think meta analyses are done in the first place. I disagree 100% with the statement that "any result is meaningless". If the combined standardized score is significant, that would be meaningful.

Standardized score of what measure? Let's limit ourselves to dowsers for simplicity. What measure could we analyze and what information would be suggested by various scores? Is there a concrete example along the lines of "the score for measure X was much greater than Y, which suggests Z" that you can give?

I'll admit to some ignorance here, but it seems to me that 'useful' meta-analysis involves more than the principal measure of a test. The original aspirin studies presumably measured pain relief; but of course many measures are taken on participants in a medical study. The meta-analysis was able to show a connection between aspirin use and heart attacks because of these additional measures.

AFAIK we don't have any other additional measures for the JREF data. If we did, it's possible some interesting areas for study could be revealed. For example, it could be found that out of all categories of claimants, dowsers lived perceptibly longer. Or that clairvoyants have the strongest religious convictions. But without this additional data, we're basically stuck with hits. And I don't yet see any way that meta-analyzing the hits would be suggestive of anything concrete enough to focus a new study on.
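Given only hit counts, about the only per-test computation available is an exact binomial test against the chance rate, which is part of why the hits alone are so uninformative. A sketch with invented numbers (a hypothetical dowsing test with a 1-in-10 chance rate per trial):

```python
from math import comb

def binom_upper_tail(hits, trials, p_chance):
    """Exact one-sided p-value: probability of observing 'hits' or
    more successes in 'trials' attempts at chance rate p_chance."""
    return sum(comb(trials, k) * p_chance ** k * (1 - p_chance) ** (trials - k)
               for k in range(hits, trials + 1))

# Hypothetical dowsing test: 2 hits in 10 trials, chance rate 0.1.
p = binom_upper_tail(2, 10, 0.1)
print(f"p = {p:.3f}")   # -> p = 0.264, entirely unremarkable
```

A result like this says nothing beyond "consistent with guessing", and no amount of pooling such numbers creates the additional measures (health, beliefs, demographics) that made the aspirin meta-analyses fruitful.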
 
FutileJester said:


The axioms of the scientific method don't depend on statistics. Statistics are mathematical tools that support the practical pursuit of science. Regardless, the question is not whether or not stats are important, but whether or not this particular application of statistics should be expected to produce useful data.

Luci simply wants to lump data to muddy the water.
 
Ed said:

Luci simply wants to lump data to muddy the water.

Ed, you are on ignore now for obvious reasons. Unfortunately, you resorted to absurd claims of me being Luci.

If I am Luci, (I'm not), that has no bearing at all on the topic of meta analysis anyway, something which all skeptical thinkers should obviously know.
 
T'ai Chi said:


If I am Luci, (I'm not), that has no bearing at all on the topic of meta analysis anyway, something which all skeptical thinkers should obviously know.

True, but there is no reason to keep trying to reason with you, as you have completely ignored ALL relevant points as to why such a meta-analysis would be meaningless.

So, we may as well call you names, Whodini.
 
T'ai Chi said:


Ed, you are on ignore now for obvious reasons. Unfortunately, you resorted to absurd claims of me being Luci.

If I am Luci, (I'm not), that has no bearing at all on the topic of meta analysis anyway, something which all skeptical thinkers should obviously know.

Of course it does. It means that rational discussion is not possible.

Notice how you are Luci-like in ignoring every comment that is critical, keeping up the tedious mantra of standardized scores without the slightest notion of what you are talking about. Keep ignoring the questions; just keep on repeating.
 
