• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Check my methodology - prayer study

Blinding means that anyone involved in the trial cannot know which group any participant is assigned to. If you determine that the groups are equivalent by asking participants questions, you must know which groups they are in, and therefore you are not blinded, by definition.

Not so. You only gather the data. You don't compare the groups (and unblind them) until all data is collected.


Strange, the words are all English, but somehow it makes no sense. Are you seriously claiming that just because you have a random selection, any groups you choose will turn out to be identical?

"Choose"?

Yes, I am saying that two groups randomly selected from the same pool will be statistically identical. Prove me wrong. With math and monte carlo sims.


In the post I quoted you claimed that you were creating the equation before collecting the data. Clearly this is not true for all your tests, even if it is for some. Therefore your equation is extremely likely to be picked specifically to give a positive result. This is not good.

Not so. For each body of data analyzed, the equation is set before that data is gathered. The existence of other data is irrelevant because it's not part of the new set.


I agree, the end. If you refuse to make the test sufficient for anyone apart from yourself, you have no chance of ever being tested.

"For anyone apart from yourself"? WTF?

My understanding is that the JREF tests MY claim, not that of every theist worldwide. They can make their own applications if they so desire. So can you.


The JREF specifically says that the preliminary and final tests have the same protocol. If you change your measure this would not be the case.

Changing the measure is part of the protocol. JREF's only relevant concern is to ensure the protocol is methodologically proof against a false positive.


You may not be interested, but it is likely that the testers will be. They may accept the equation as given, but if you refuse to allow even the possibility of changing it I really doubt you will be tested. Consider that you could come up with an equation that guaranteed a positive result. If they can't alter this then you would win whatever the result. I am not suggesting that you will actually do this, but with a million dollars at stake you can bet that the JREF will not allow this option to be available.

Tell me how I could come up with such an equation, given that by definition it has no way of knowing which group any person is assigned to?


Do you really not see the problem with this? The whole point of a trial is that you test for one thing and control for all the other things you can think of. As I have said before, and you agreed, if you look for a change somewhere you will find one. This is the entire reason the measure is always specified in advance.

See above. That is why the Round 1 measure is determined in advance - but lacking a reason to determine it in a particular way, it is necessarily arbitrary.


You are attempting to apply the status of "JREF preliminary test" to the test which occurred in the past. QED. As I said, if everyone was allowed to choose which test was the official preliminary after they knew the result, everyone who took the test would pass.

Um, no. What test "occurred in the past"? If the first preliminary fails, then we try it again (with a better-informed SE). Quite simple.
 
Let me put it very simply. You asked us to point out any flaws with your test. We have done so. You have ignored pretty much everything we have said. The JREF is very likely to ask the same questions. If you don't address them, you will not be tested.
 
Not so. You only gather the data. You don't compare the groups (and unblind them) until all data is collected.

Your concept of blinding is flawed. Someone, usually the statistician, assigns a subject to either the control or treatment group, so this person is not "blind" to which group a subject is assigned. The blinding occurs when neither the treatment provider nor the subject knows which group the subject has been assigned to...

And yes, you do compare groups ahead of time to verify your randomization procedure didn't result in introducing additional bias...


Yes, I am saying that two groups randomly selected from the same pool will be statistically identical. Prove me wrong. With math and monte carlo sims.

And here we have the crux of your misunderstanding. You're confusing pools with populations. Subjects can be from the same pool but not from the same population unless they have the same mean and variance. Females typically are not from the same statistical population as males. Skin cancers are definitely not the same statistical population as brain cancers...

Mathematically, two populations have either different means, different variances, or both. When you mix populations into a pool and randomly select them, unless you're very lucky, you have to adjust for the things that might confound your outcome...

In addition, simple comparison tests like the t-test assume homogeneity of the variances within groups. You cannot make that assumption with a heterogeneous group sample (e.g., different types of cancers, demographics, etc.)...
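A minimal sketch (Python, with entirely made-up numbers) of the pool-versus-population point: mixing two sub-populations with different means inflates the spread of the recruited pool well beyond the variance within either sub-population, which is exactly the extra structure a simple comparison test then has to contend with.

[code]
# Toy illustration (hypothetical numbers): a "pool" built from two populations
# with different means has more variance than either population on its own
# (law of total variance).
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical sub-populations, e.g. two diagnoses with different typical outcomes.
pop_a = rng.normal(loc=50.0, scale=10.0, size=100_000)   # mean 50, sd 10
pop_b = rng.normal(loc=70.0, scale=10.0, size=100_000)   # mean 70, sd 10

pool = np.concatenate([pop_a, pop_b])  # the recruited pool is a 50/50 mixture

print("variance within each population:", pop_a.var(), pop_b.var())  # ~100 each
print("variance of the mixed pool:     ", pool.var())                # ~200
# The extra ~100 comes from the between-population difference in means.
[/code]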

Do you understand the criticisms now or are you still going to defend your position?
 
You misunderstand the setup. They are praying for one particular individual. They don't know anything at all about the rest.
I disagree. These are people who have volunteered to take part in a prayer study. It's reasonable to assume that they want the test to produce positive results. So no matter how hard they try to focus on healing the experimental group, there's always going to be an implied prayer that people in the control group stay sick.

Your argument is only valid if you're willing to also argue that every prayer for one person is a prayer against everyone else.
If I'm not taking part in a prayer study, then I don't have any reason to want strangers to stay sick. If the person I'm praying for gets better, then I'll be happy, but I certainly won't be unhappy if random strangers also recover.


(On the coin-flip experiment: )

No thanks; I am interested in whether intercessory prayer works for real-world medical usage in the way I've set up, not in the way you propose. Of course it's a perfectly valid test; it's just not the one I'm doing.
Well, OK. It's your time and money.

But I'm puzzled about why you want to test your hypothesis using a complicated experiment with hard-to-interpret results. You could test the same hypothesis using a simple experiment with easy-to-interpret results.

Really, if intercessory prayer has a measurable benefit, then you should see that benefit in the simplest possible experiment. The more complexity you add, the more likely it is that the benefit (or lack thereof) will get lost in the statistical noise.

And now that I think about it, the simple experiments have already been done and they've never shown any benefit. Do you have a theory as to why your experiment might work when so many other people have failed? What factor is genuinely new here?
 
And yes, you do compare groups ahead of time to verify your randomization procedure didn't result in introducing additional bias...

Would you mind showing the math for how the randomization procedure could "introduce" additional bias? Specifically, one that results in a false positive more than (p*100%) of the time? Include a monte carlo sim demonstrating the math with real numbers please.

'cause, y'know, that's what the p is for...
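For concreteness, here is a minimal sketch (Python, with invented numbers) of the kind of Monte Carlo sim being asked for: subjects drawn from a heterogeneous pool with no treatment effect at all, randomized into two groups, and compared with a t-test. Running it (and varying the assumed sub-populations, group size, and test) shows how the observed false positive rate compares with the nominal alpha.

[code]
# Minimal sketch: false positive rate of a t-test under random assignment from a
# heterogeneous pool, with NO real effect. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_per_group, alpha = 10_000, 25, 0.05

false_positives = 0
for _ in range(n_trials):
    # Hypothetical heterogeneous pool: half the subjects come from a sub-population
    # with a much better baseline outcome (e.g. a different diagnosis).
    baseline = np.where(rng.random(2 * n_per_group) < 0.5, 50.0, 70.0)
    outcome = baseline + rng.normal(0.0, 10.0, size=2 * n_per_group)

    # Random assignment; "prayer" does nothing in this simulation.
    rng.shuffle(outcome)
    treated, control = outcome[:n_per_group], outcome[n_per_group:]

    if stats.ttest_ind(treated, control).pvalue < alpha:
        false_positives += 1

print(f"observed false positive rate: {false_positives / n_trials:.3f} (nominal alpha = {alpha})")
[/code]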
 
I disagree. These are people who have volunteered to take part in a prayer study. It's reasonable to assume that they want the test to produce positive results. So no matter how hard they try to focus on healing the experimental group, there's always going to be an implied prayer that people in the control group stay sick.


If I'm not taking part in a prayer study, then I don't have any reason to want strangers to stay sick. If the person I'm praying for gets better, then I'll be happy, but I certainly won't be unhappy if random strangers also recover.

That's fine. If that is how prayer works, then evidently it works equally for everyone related to some particular group and not just the person you are in fact praying for.

Since that's not something that can be controlled for, it's an acceptable limit of the study.


But I'm puzzled about why you want to test your hypothesis using a complicated experiment with hard-to-interpret results. You could test the same hypothesis using a simple experiment with easy-to-interpret results.

I don't consider it particularly hard to interpret.

In any case, my question is not whether prayer works in some general sense, but whether it works for helping people who are sick. Thus your alternate experiment is irrelevant to answering my question.
 
Would you mind showing the math for how the randomization procedure could "introduce" additional bias? Specifically, one that results in a false positive more than (p*100%) of the time? Include a monte carlo sim demonstrating the math with real numbers please.

'cause, y'know, that's what the p is for...

No, until you understand and demonstrate to us that you know the difference between pools and populations anything else is a waste of time...
 
Saizai, to an outsider like me, this thread looks exactly like any other challenge thread. You are an academic; why did you stoop to arguing like any other crackpot about what JREF wants? If you back away now you lose face with the forum readers (with me anyway, I can only speak for myself), which may not be a big deal, but why bring this on yourself?

Now that you have got yourself into this predicament, IMO your best bet is to set aside your preparatory work and officially ask JREF to provide you with a test protocol that meets their criteria. Possible outcomes:

- JREF declines on the basis that the claim is too difficult to test properly, or asks you to pay a lot for the work they need to do, in which case you can withdraw without losing face. You do not get the million, but if your work is as sound as you believe, it will be vindicated by the peer reviews;

- JREF provides you with a reasonable protocol, therefore, as a genuine scientist, you should be happy to apply it;

- JREF provides you with an extremely stringent protocol that would cost a fortune to apply, and it is either:
- unreasonably demanding, and can be shown to be so to the scientific community, or
- a good justification for asking your sponsors to cough up the money if they are honestly looking for results that will stand up to scrutiny. If they don't, you might canvass widely for new sponsors; if this does not work either, you are clear.

Bear with me; all my life I have upset people by offering advice without being asked, and old habits are difficult to change... as seems to be true for many other opinionated people on this forum...
 
Here's something to think about, saizai.

Suppose you pick a scoring method that assigns to each receiver a random number between 0 and 100, entirely ignoring whether they got better or worse.

1) If prayer doesn't work, will the study be too likely to produce a positive result? (Answer: no.)

2) If the result of the study turns out to be positive---it probably won't, of course, but if it does---would this positive result constitute even the slightest evidence for the efficacy of prayer? (Answer: also no.)

So, something appears to be wrong with your repeatedly and vigorously stated position that all that's needed is to ensure a small probability of false positives.

What makes a particular result deserving of the title "positive" in the first place? It needs to be improbable assuming prayer doesn't work, but it also needs to be probable assuming prayer does work. Otherwise, the study isn't studying prayer at all.

Suppose I buy a lottery ticket, and decide that if I win, then prayer cures cancer. Clearly, this makes no sense at all. I probably won't win the lottery, but even if I should happen to win it, my win would obviously give me absolutely no reason to believe that prayer cures cancer. What has the lottery to do with cancer? But isn't it entirely true that if prayer doesn't cure cancer, I am unlikely to win the lottery? So, the probability of a false positive is extremely low? Yes. But what's also true is that if prayer does cure cancer, I'm still unlikely to win the lottery. And both of those probabilities matter.
 
69:

1. It will be exactly as likely as the statistical uncertainty allows. That in turn depends on the bell curve involved and the sample size.

2. A positive result would constitute evidence for the efficacy of prayer... at influencing the score equation decided upon. (E.g. a theist might argue that God wanted to show herself by demonstrating a positive result, and therefore the prayers went toward influencing the [pseudo-]random number generators used.)

Remember, I am claiming nothing whatsoever about mechanism. In fact, I have no theories about what mechanisms may or may not exist, nor any a priori basis to have one. So I have no reason to say that it will affect that and not something else. My choice of the HRQOL as the initial measure is largely arbitrary; a matter of personal taste, if you will. A better mechanism is to base the score equation on the results of the previous round (i.e. on what items tested appeared to show the highest effects), which is what I would do for round #2.


You cannot objectively say "well we will throw out this result because we think it's silly". You set the measurement, you gather the results, and whatever the statistical uncertainty of the result, that's what it is. If you get a positive result and don't believe it for some reason, you run the test again, with a larger sample size. But you don't get to toss the result; that is not proper protocol, and opens the door to all sorts of fallacy.


You must realize, this is true of ALL research of this paradigm (i.e. double-blind randomized controlled trials), of which there is a LOT. Probability dictates that p of it will in fact be false positive; p is usually <.05 for normal clinical trials, and further reduced through repetition, etc. But it still exists. It could still be wrong, and seemed to be true merely by an accident of chance.

The same is true of mine. If I get a positive result, there is a certain chance - p - that the result is false. But you cannot claim that it is other than p; e.g. that because the result is positive, the result is false, which is what you are arguing. If you doubt it, you simply run a trial again, or with greater numbers, until p is sufficiently low for your taste. The standard in academic research is p<.05 (5%); very high certainty is considered to be p<0.001 (.1%). Of course, important experiments get duplicated, and this further reduces the p overall if the experiments are similar enough to enable meta-analysis.
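As a side note, the "duplication reduces p" point can be made concrete. Here is a minimal sketch (Python, with invented p-values) using Fisher's method, one standard way of combining p-values from independent replications:

[code]
# Combining p-values from hypothetical replicated trials with Fisher's method.
import numpy as np
from scipy import stats

p_values = [0.04, 0.03, 0.08]   # made-up p-values from three independent replications

fisher_stat = -2.0 * np.sum(np.log(p_values))                   # Fisher's statistic
combined_p = stats.chi2.sf(fisher_stat, df=2 * len(p_values))   # chi-square, 2k df

print(f"combined p-value: {combined_p:.4f}")   # smaller than any single trial's p
# (scipy.stats.combine_pvalues(p_values, method='fisher') gives the same answer.)
[/code]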
 

The same is true of mine. If I get a positive result, there is a certain chance - p - that the result is false. But you cannot claim that it is other than p; e.g. that because the result is positive, the result is false, which is what you are arguing. If you doubt it, you simply run a trial again, or with greater numbers, until p is sufficiently low for your taste. The standard in academic research is p<.05 (5%); very high certainty is considered to be p<0.001 (.1%). Of course, important experiments get duplicated, and this further reduces the p overall if the experiments are similar enough to enable meta-analysis.

p is the probability of getting an outcome when the null hypothesis is true, not the probability THAT the null hypothesis is true.
 
2. A positive result would constitute evidence for the efficacy of prayer... at influencing the score equation decided upon. (E.g. a theist might argue that God wanted to show herself by demonstrating a positive result, and therefore the prayers went toward influencing the [pseudo-]random number generators used.)


I thought the study was supposed to test whether prayer helps improve the health of cancer patients, not whether prayer affects random number generators.

Remember, I am claiming nothing whatsoever about mechanism. In fact, I have no theories about what mechanisms may or may not exist, nor any a priori basis to have one. So I have no reason to say that it will affect that and not something else.


You can't design a study to test for something, if you haven't decided what you want to test for.

You don't have to claim that you personally and wholeheartedly believe prayer works, and that you believe it works in a particular way. But you have to decide what sort of prayer you want to test for---how it would work if it did work---because otherwise you won't be able to decide how to test for it.

Probability dictates that p of it will in fact be false positive;


What is the "it", p of which will be false positives? (Rhetorical question. I answer it below.)

The same is true of mine. If I get a positive result, there is a certain chance - p - that the result is false.


That's not what p is. This is what p is:

Suppose prayer doesn't work. And suppose you haven't done the study yet. What is the probability that, when you do the study, you will get a positive result? (Such a positive result would necessarily be a false positive, because we're supposing that prayer doesn't work.)​

The probability p is a probability that is based on the assumption that prayer definitely doesn't work.

Of course, we don't actually know whether prayer works. That's why we're doing a study. And when the study is done, we still won't know for sure whether prayer works, although we will have gotten some new information that will affect how likely we think it is to work. So, we're never in a situation where the assumption underlying p is known to be true. So, p is not as directly useful as one might imagine. It is relevant, to be sure, but not directly so.

But you cannot claim that it is other than p;


It is almost certainly other than p. By "it", I mean this probability:

Suppose, realistically and unlike before, that we don't know whether prayer works. Suppose the study has been done, and the result was positive. Now, what is the probability that prayer doesn't work? (If prayer doesn't work, the positive result was a false positive.)​

This is a different question from the previous one, and it has, in general, a different answer, even though both questions could be phrased, imprecisely, as, "What is the probability of a false positive?". In both, we're interested in the probability of the combination: prayer doesn't work and the study result is positive. However, in one, we already know that prayer doesn't work but we don't know whether the study result will be positive, while in the other, we already know that the study result was positive but we don't know whether prayer works.

What's the probability it's raining, if it's cloudy? What's the probability it's cloudy, if it's raining? Not the same thing.

e.g. that because the result is positive, the result is false, which is what you are arguing.


I'm arguing that, supposing the study's result turns out to be positive, knowing p isn't enough to decide whether that positive result is probably a true positive or probably a false positive. Not only must you know what the probability of a positive result was on the assumption that prayer doesn't work (i.e., p), you also must know what the probability of a positive result was on the assumption that prayer does work. And, therefore, you have to be specific enough about the sort of prayer you want to study, to be able to determine the latter probability.

(The prior probability that prayer works matters too, but that's not what I was talking about here.)
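Putting rough made-up numbers on all three of those quantities (the numbers below are purely illustrative; the prior in particular is an assumption, not a measurement):

[code]
# Sketch of why P(positive | prayer doesn't work) is not P(prayer doesn't work | positive).
prior_works = 0.01   # hypothetical prior probability that prayer works
alpha = 0.05         # P(positive | doesn't work) -- the study's false positive rate
power = 0.80         # P(positive | works) -- depends on effect size and design

# Bayes' rule: probability the null is true given a positive result.
p_positive = power * prior_works + alpha * (1 - prior_works)
p_null_given_positive = alpha * (1 - prior_works) / p_positive

print(f"P(positive | prayer doesn't work) = {alpha}")
print(f"P(prayer doesn't work | positive) = {p_null_given_positive:.2f}")
# With these invented numbers most positives would be false positives,
# even though each study's false positive rate is only 5%.
[/code]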
 
Sorry if someone has already mentioned this, but have any of you studied the work of Peter Fenwick? Here's an article about him I found: http://www.thepsychictimes.com/articles/fenwick.htm In addition to his NDE research, he carried out an experimental study of prayer.

I live near this guy and went to a lecture of his once. It was most interesting.
 
See http://www.prayermatch.org/ . It should be a complete description - methodology, goals, my intent / opinion, etc.

The backend programming isn't ready yet but the basics (i.e. user accounts and the public pages) are there. I intend to begin once a sufficient number of participants are signed up; the backend will be ready by then.

If you have a critique, please make sure that:
* you've read all the pages linked from the main page
* you can explain why your perceived flaw in my design would cause a false positive result, i.e. a statistically significant difference between the active and control groups of Recipients in the second and/or third round

I am aware that I have put limits on it that may cause false negatives, and am quite okay with that; my problem not yours. ;)

If I have left out anything it is probably by mistake (I only just finished writing the content); point it out and I'll correct it.

BTW, I have previously suggested this as a bona fide MDC, but the understanding I reached with Randi's representative was that they are only interested in things that can be proven in a small-scale, one-person fashion. I do not claim any such power or effect; if there is an effect, I only expect a small but statistically significant difference between the active and control groups.

Thanks!

P.S. Yes, I've read the rules and FAQ.

I have read through the pages at the site you have linked and understand what you are proposing. I have not read through all of the posts on this thread (although I will attempt to do so later), so I apologize if any or all of this has already been addressed.

My biggest concern is with respect to the ethics of performing this study. I realize that you only consider it your problem if the results are negative, but it is not that simple.

You have not explained what you hope to accomplish with this study and how it will be achieved. The general indication is that you wish to provide results that will push the research forward among serious researchers. That is, a positive result would be considered valid and worthy of further consideration by those who are currently unconvinced. There has already been a lot of research on this topic. What you need to explain is exactly what the methodologic concerns were from previous studies, how your study will overcome these concerns, and how your results can be presented in a way that they will be taken seriously. Normally, studies are published in peer-reviewed journals in order to make sure that there has been at least a minimum degree of oversight to assess validity. If you cannot accomplish that, it seems likely that the very people you wish to reach will not accept the results as valid. You do not mention any connection to an academic institution or ethical approval from an independent review board. Normally, both of those are required for consideration for publication.

Your study design sets you up for failure (I'll elaborate in a bit). I understand that you consider that only your problem; however, it should be considered unethical to waste the time of people who are already going through a very difficult period in their lives, and to add to their distress by providing false hope. By false hope, I don't mean the presupposition that the prayer itself will offer benefit, but the presupposition that participation in this study can advance this area of research - i.e. that participation in this study can be meaningful. How can it be meaningful if the chances of it finding a real effect are minuscule and if "positive" findings will probably be ignored by serious researchers?

Let me explain why your study sets you up for failure (I'm not assuming you don't know this, just wanting to make it explicit). Let's start by assuming that there is actually a real effect that you could find. What is the chance that you will actually find that real effect vs. the chance that you will find spurious effects? You are collecting a lot of data and it is vaguely defined. Once you sit down to analyze it, you will probably be able to find dozens of ways in which to compare the two groups (the "data mining" you refer to). Since you have set your p to <0.05, by chance you will find several outcome variables that are different between the two groups. We have also assumed that there is a real effect on some outcome(s). And one would assume that that outcome will also show a significant difference, except that your study is so under-powered that it is probably far more likely to miss the effect than it is to capture the effect. The variability on your variables is very high and previous studies have failed to demonstrate an effect, suggesting that the effect is (at the most) small. Taking that into consideration, I'd be surprised if your study has a power greater than 0.10 to detect a difference. What that means is that any differences you do detect are far more likely to be false-positives than they are to be true-positives. When you go on to repeat the study, focussing only on those variables, the results will almost certainly be negative because you failed to select the relevant outcomes (assuming that there are any to select). These ideas are discussed in greater detail in this paper.
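For concreteness, here is a minimal sketch (Python; the effect size and group size are illustrative assumptions, roughly the small effect and small groups described above) of the kind of power estimate this paragraph is pointing at:

[code]
# Estimate by simulation how often a two-group t-test detects a small true effect.
# Effect size and group size are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_per_group, alpha = 10_000, 25, 0.05
effect_size = 0.2          # small effect, in standard-deviation units

detections = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, size=n_per_group)
    treated = rng.normal(effect_size, 1.0, size=n_per_group)   # a real effect is present
    if stats.ttest_ind(treated, control).pvalue < alpha:
        detections += 1

print(f"estimated power: {detections / n_trials:.2f}")
# With a small effect and small groups, most runs of such a study miss the real effect,
# while the multiple comparisons described above still throw up occasional spurious ones.
[/code]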

Positive results will likely be subject to extra scrutiny for validity (without additional support from other research). Outcome measures whose validity has already been established (e.g. a visual analog scale for pain) should be used. Otherwise you don't know whether the answers to your questions reliably or validly measure anything of interest. The low number of participants makes unequal sorting of confounders likely, and if you don't know what you are measuring, it will be easy to miss this.

ETA: I accidentally edited this out.

It would be preferable to have a neutral third party analyze the results. The outcome measures should be coded (converted into a form suitable for analysis) blind.

Other minor quibbles that don't affect the study...

You refer to "data mining" as selection bias. Selection bias refers more to how you select the population from which your samples will be drawn (in this case, people aware of your site who have cancer, have an interest in prayer and healing, and make the effort to participate) which leads to issues of generalizability and confounding. Although, to be fair, I find that often selection bias as a term gets used as a catch-all for different kinds of bias - sample biases in particular - so its use may not really be constrained to a particular type of bias.

It is not "impossible to prove a negative". It is no more or less possible to prove a negative than it is a positive. However, I see this bit of "wisdom" repeated frequently, including on this forum, so that's probably a whole separate discussion.

Linda
 
Okay, I've had a chance to read through the thread. I see some of the issues I mentioned have already been raised.

Somehow I got the impression from my quick reading that you had already decided upon an N of 25, which is what my comment about being underpowered was based upon. I see that that number is still undetermined.

I mentioned confounders and this has also been discussed. With so many variables (likely mostly unmeasured) that can affect outcome, the concern is that unequal sorting of the confounders/independent variables could lead to a false positive, and that there is a chance that this unequal sorting could happen in the same direction for each study. Depending upon the strength of the association between confounder and outcome, the chance of this happening may be greater than the one in a thousand standard from JREF (i.e. the chance of confounding may be greater than the chance that the null should be rejected). Starz' Monte Carlo sim doesn't eliminate this concern as it tested different parameters than the ones that we are concerned about.

It isn't really the main area of concern, though.

Originally Posted by saizai:
(Frankly, this is a point that's always puzzled me about the practice of medicine: why not try to enhance the placebo effects as much as possible if your goal is to heal the patient? Obviously, one must control for them as a confounder [note that I'm using this to mean "thing which could result in a false positive"] when doing research for the effectiveness of treatments, but that doesn't mean that the placebo is ineffective; quite the opposite has been proven true repeatedly.)

That is a misconception. The "placebo effect" represents what was going to happen anyway, plus some changes in the subjective evaluation of symptoms/outcomes. A healing effect specific to placebo has not been demonstrated.

Linda
 
Linda,

It's not worth your time with this guy. To use an old title from a mid 80's Billy Bragg album, it's like "Talking with the Taxman about Poetry."

Many of us have already tried to educate him on basic statistical theory, sampling from different populations, confounding, and clinical trial design but his hubris blinds him to his ignorance...

Anyhow, I like your definition of the placebo effect, it's the most succinct one I've ever seen...

-digithead
 
p is the probability of getting an outcome when the null hypothesis is true, not the probability THAT the null hypothesis is true.

*laugh* Correct. Figures there'd be someone around who wants to be precise.

In any case, p is effectively the uncertainty factor, i.e. the chance that you haven't proven what you thought you had, which is all I was using it for in my argument.

I thought the study was supposed to test whether prayer helps improve the health of cancer patients, not whether prayer affects random number generators.

The study is supposed to test whether prayer creates an effect in this study. It is set up especially so that if it creates a HRQOL effect on people prayed for, that can get detected... but who am I to be picky? :P

You don't have to claim that you personally and wholeheartedly believe prayer works, and that you believe it works in a particular way.

I don't; I'm an agnostic and have never been a theist. I simply want to test it.

But you have to decide what sort of prayer you want to test for---how it would work if it did work---because otherwise you won't be able to decide how to test for it.

Per above, I'm testing for additive prayer on seriously ill patients. The choice of cancer patients is largely arbitrary; I'd be open to switching it to any group that has rapid (w/in 1 year) changes of health condition (so as to create a bell curve that would make group differences sensitive), is appealing to pray for, etc.

This is a different question from the previous one, and it has, in general, a different answer, even though both questions could be phrased, imprecisely, as, "What is the probability of a false positive?". In both, we're interested in the probability of the combination: prayer doesn't work and the study result is positive. However, in one, we already know that prayer doesn't work but we don't know whether the study result will be positive, while in the other, we already know that the study result was positive but we don't know whether prayer works.

What's the probability it's raining, if it's cloudy? What's the probability it's cloudy, if it's raining? Not the same thing.

Per above, this is certainly true. However, I think we are getting outside the range of valid objections to my protocol, and into simply general objections to all research, particularly speculative research. As it is not related to any particular flaw of *my* methodology, I'd rather not get into it.

Not only must you know what the probability of a positive result was on the assumption that prayer doesn't work (i.e., p), you also must know what the probability of a positive result was on the assumption that prayer does work.

And that's not possible to know. (Though theists may claim otherwise.)

Certainly it's not possible to even discuss what that p(positive | it works [in manner X]) is without discussing X. And I will not get into any discussion about X, as I consider it a waste of time given the dearth of valid non-contradictory evidence to determine X.

You have not explained what you hope to accomplish with this study and how it will be achieved. The general indication is that you wish to provide results that will push the research forward among serious researchers. That is, a positive result would be considered valid and worthy of further consideration by those who are currently unconvinced. There has already been a lot of research on this topic. What you need to explain is exactly what the methodologic concerns were from previous studies, how your study will overcome these concerns, and how your results can be presented in a way that they will be taken seriously. Normally, studies are published in peer-reviewed journals in order to make sure that there has been at least a minimum degree of oversight to assess validity. If you cannot accomplish that, it seems likely that the very people you wish to reach will not accept the results as valid. You do not mention any connection to an academic institution or ethical approval from an independent review board. Normally, both of those are required for consideration for publication.

While I thank you for your concern, that is not something I am interested in discussing, as it falls within the "what will you do with it if you win" category.

By false hope, I don't mean the presupposition that the prayer itself will offer benefit, but the presupposition that participation in this study can advance this area of research - i.e. that participation in this study can be meaningful. How can it be meaningful if the chances of it finding a real effect are minuscule and if "positive" findings will probably be ignored by serious researchers?

That is only one aspect of it.

You could just as well claim that serious researchers would never accept a positive finding, and that therefore any research is completely fruitless. I happen to disagree. However, per above, I do not want to discuss this further, as it is not related to a specific critique of (and preferably, improvement to) my methodology.

You are collecting a lot of data and it is vaguely defined.

It is not vaguely defined, except the comments section, which I have said that I do not intend to use for the purposes of conclusion-relevant analysis.

Once you sit down to analyze it, you will probably be able to find dozens of ways in which to compare the two groups (the "data mining" you refer to). Since you have set your p to <0.05, by chance you will find several outcome variables that are different between the two groups.

Certainly. Which is why I set the analysis before the data is collected, per standard rigorous protocol. No sharpshooter fallacy here. :)

These ideas are discussed in greater detail in this paper.

Thank you for the reference. I'll have to read it later, since I'm a bit busy at the moment.

Positive results will likely be subject to extra scrutiny for validity (without additional support from other research). Outcome measures whose validity has already been established (e.g. a visual analog scale for pain) should be used.

I tentatively intend to use the well-established SF36v2 HRQOL as a measure for round 1. What I use for round 2 will be decided after round 1 is complete.

It would be preferable to have a neutral third party analyze the results. The outcome measures should be coded (converted into a form suitable for analysis) blind.

I intend to collect all data by internet application, and only use paper for verification / signature collection. So the neutral third party is a computer program, with the relevant parts of it being open sourced.

You refer to "data mining" as selection bias. Selection bias refers more to how you select the population from which your samples will be drawn (in this case, people aware of your site who have cancer, have an interest in prayer and healing, and make the effort to participate) which leads to issues of generalizability and confounding. Although, to be fair, I find that often selection bias as a term gets used as a catch-all for different kinds of bias - sample biases in particular - so its use may not really be constrained to a particular type of bias.

Sorry for the lax use of terms. What I mostly was referring to is formally known as the Texas sharpshooter's fallacy.

It is not "impossible to prove a negative". It is no more or less possible to prove a negative than it is a positive. However, I see this bit of "wisdom" repeated frequently, including on this forum, so that's probably a whole separate discussion.

Indeed it is.

Argument from ignorance is generally valid only in some very limited circumstances, where you have proven the ability of the test to detect the thing tested for, and are claiming that the (new) negative results are therefore evidence that the thing tested for does not exist in the place it was newly tested for.

See my "Charlie the Treasure Hunter" analogy; should come up on a forum search.

Somehow I got the impression from my quick reading that you had already decided upon an N of 25, which is what my comment about being underpowered was based upon. I see that that number is still undetermined.

Correct. I would like 50<n<500 but it's primarily a pragmatic question, of how many qualified participants can be recruited.

I mentioned confounders and this has also been discussed. With so many variables (likely mostly unmeasured) that can affect outcome, the concern is that unequal sorting of the confounders/independent variables could lead to a false positive, and that there is a chance that this unequal sorting could happen in the same direction for each study.

How would it do so [at likelihood > p], given that the sorting into groups is random?

Depending upon the strength of the association between confounder and outcome, the chance of this happening may be greater than the one in a thousand standard from JREF (i.e. the chance of confounding may be greater than the chance that the null should be rejected). Starz' Monte Carlo sim doesn't eliminate this concern as it tested different parameters than the ones that we are concerned about.

If you believe that the sim is invalid, please propose an alternate sim so that we can test your hypothesis. :)


Again, thanks for the reference; will read later.

I don't think that this one affects my methodology, however.
 
That is a misconception. The "placebo effect" represents what was going to happen anyway, plus some changes in the subjective evaluation of symptoms/outcomes. A healing effect specific to placebo has not been demonstrated.

Linda

The interesting question is how medical science should evaluate subjective patient evaluations of pain reduction - and whether or not this can be said to have a physiological or psychological root - or indeed if it's a false dichotomy to separate psychological from physiological at all.....
 
Oof, let's please not get into the 'what is pain really' thing; I had more than enough of that from John Searle. :p
 
Oof, let's please not get into the 'what is pain really' thing; I had more than enough of that from John Searle. :p

lol

I won't derail your thread... I might start something in SMMT though... things normally get a bit heated when philosophy and science collide - could make an interesting topic :D
 
