Check my methodology - prayer study

"Willful" ignorance is an pretty strong claim. Are you psychic?

Certainly, I understand the benefits of experience, which is why I consult others. I don't feel that this in any way makes it okay for someone to insult me, nor for you to turn that into an ad hominem attack as well.

If you have comments about the study design, please restrict them to the methodology itself rather than making comments about me.
 
Saizai, for what it's worth I do apologize for the intemperance of my post. It is frustrating, though, to be posting essentially sympathetic and would-be-helpful comments and to get a series of hostile responses that are based on arbitrary misreadings of one's posts.

In general it seems to me that you are wedded to your design, and look at any and all criticism as simply something that needs to be explained away. As it happens I think your design--with a large enough sample--will probably work well enough to generate some kind of meaningful outcome (one to which the anti-prayer-healing folk will say "see?" and to which the pro-prayer-healing folk will say "well, of course, for prayer to really work you need such-and-such and this-and-so"). On the other hand, I think you'd probably make an even better study if you were a little more open to constructive criticism and a little less wedded to defending your current design.
 
Yoink - Thank you for your apology.

I am only wedded to the design insofar as I want to be able to test prayer in a feasible way, and do not want to claim any personal ability nor that the effect will be very large or completely reliable. Can you think of any other method? Can you think of any way to make the protocol I've described laxer (but still rigorous) to encompass more pro-prayer viewpoints? The only thing I can think of would be to allow contact or restrict to known people, and of course that as you have pointed out would be excessively difficult to run. So this is the only design I feel adequately meets my desired criteria.

I am a skeptic, too, you see. I will defend the design, though I am perfectly willing to be convinced otherwise... but just as with others' comments about things that are obviously not flaws though they were claimed to be (e.g., "what if someone doesn't pray? what if they have other people praying for them?"), I'm not going to blindly accept criticism that isn't properly justified.
 
FYI: I'm doing a server move now and debugging the result (why is it that whenever you have something that works on one server, it breaks when you move it to another?).

Anywho, hopefully it should be functional soon - and on a server that can actually have reliable uptime and load capacity, unlike my laptop.
 
FYI: The main site is not working yet, BUT http://forums.prayermatch.org is.

I've added a Skeptic's Corner forum on it, and will offer moderator status there to anyone who is a moderator here (PM me here with your username if you want me to mod you.)

You may notice some other parallels to the JREF forums. ;)
 
"Willful" ignorance is an pretty strong claim. Are you psychic?

Certainly, I understand the benefits of experience, which is why I consult others. I don't feel that this in any way makes it okay for someone to insult me, nor for you to turn that into an ad hominem attack as well.

If you have comments about the study design, please restrict them to the methodology itself rather than making comments about me.

Well, since all of my posts have been about your study and your continued dismissal of concerns that quite a few of us have raised, it appears to be "willful" ignorance to me...

Am I psychic? No. But I at least understand what ad hominem means, which is to attack the person rather than their ideas or actions. Since I characterized your actions, which are fair game, it wasn't ad hominem. I didn't call you names or suggest that you're stupid, I merely made the observation that you are willfully ignoring all of the problems with your study that others and I have raised...

And since you don't want to listen, I'll wish you luck in your study and be done with the matter...
 
I'd like to speak up in saizai's defense. Mind you, I think he has exactly the same chance of winning the million as I do of pulling a rabbit out of my hat. And I have questions about whether the protocol is secure. But he's asked a couple of times for clarification of why his randomization procedure is an inadequate safeguard. I don't think he's gotten a very clear answer.

If some stat type has the time, the following might be helpful. Perform a Monte Carlo consistent with saizai's "experimental design" that shows why his test of the null hypothesis is invalid. Then perhaps an explanation could be provided or perhaps even the code could be posted. This might be illuminating for those who aren't statisticians. It might also be useful for JREF before they proceed with any protocol.
 
I'd like to speak up in saizai's defense. Mind you, I think he has exactly the same chance of winning the million as I do of pulling a rabbit out of my hat. And I have questions about whether the protocol is secure. But he's asked a couple of times for clarification of why his randomization procedure is an inadequate safeguard. I don't think he's gotten a very clear answer.

If some stat type has the time, the following might be helpful. Perform a Monte Carlo consistent with saizai's "experimental design" that shows why his test of the null hypothesis is invalid. Then perhaps an explanation could be provided or perhaps even the code could be posted. This might be illuminating for those who aren't statisticians. It might also be useful for JREF before they proceed with any protocol.

Startz, this is done for your benefit because Saizai clearly has dismissed this as inconsequential, which he does at his own peril...

He is wrong because he's naively assuming that the typical confounders in any clinical trial or quality-of-life analysis will be taken care of solely by his simple random sampling. Type of disease, disease severity, current treatment, gender, race, religion, culture, education, and age are all confounders to some degree in these types of studies, but mainly I'm concerned about type of disease, its severity, and current treatment, because these can easily distort measures if they are not adjusted for in a statistical model. Another term for this is a lurking variable. Without adjusting for it, any statistical test will likely show a spurious result, either in the negative (it covers up the relationship) or, more likely, the positive (it makes the relationship look significant when it's not). There's no need to perform Monte Carlo simulations; these are basic clinical-trial design fundamentals...

More simply, by his randomization scheme he's assuming homogeneity across the groups, an assumption you cannot make without testing for it...
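For illustration, here's a minimal sketch in MATLAB of what such a baseline check might look like, assuming severity is recorded on some numeric scale (the variable names and the 1-to-5 scale are made up for the example):
Code:
% Hypothetical baseline-balance check: do the two randomly formed groups
% look comparable on a recorded covariate such as a disease-severity score?
n = 400;
severity = 1 + 4*rand(n,1);      % made-up severity scores on a 1-to-5 scale
prayFor  = rand(n,1) > 0.5;      % random assignment, as in the proposed design
m1 = mean(severity(prayFor));
m0 = mean(severity(~prayFor));
se = sqrt(var(severity(prayFor))/sum(prayFor) + var(severity(~prayFor))/sum(~prayFor));
disp(['Baseline severity difference, t = ', num2str((m1 - m0)/se)]);
% |t| > 1.96 would flag an imbalance worth adjusting for before comparing outcomes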

If you want a good reference, try Friedman, Furberg, and DeMets "Fundamentals of Clinical Trials", it's probably in its 5th edition now...

I also don't think his Likert scale proposal is all that great for what he's trying to measure, which is essentially a quality-of-life estimate. He should try something like the SF-36, which has norms that he can compare against. But this is an entirely separate matter from his randomization scheme...
 
I would quite like to see the result of Startz' suggestion.

Digithead - It's still ad hominem (and psychic), because you are asserting something about what I intend, in contravention of what I've explicitly said (i.e. that I am interested in criticism so long as it is justified). Or, perhaps, do you know what I want better than I do? That would be quite the power!

A bit difficult to prove in double-blind trials though... :p
 
In addition to my feeling that this "experiment" does not belong in this venue, I don't see how any study of prayer can possibly be considered valid, as it is impossible to effectively set up a control group. How do you tell people not to pray for someone who is sick? How do you know they are telling the truth if they claim not to have prayed? How do you know how many people outside of the study's purview did or did not pray for the subjects in the study? I don't see how randomisation can be an effective control in this area unless the sample size is huge.
 
Perhaps this is, if everyone will forgive the cliche, a teachable moment. (I hope not to bore everyone with statistical detail.) saizai believes that his idea is subject to a straightforward statistical test. Others claim you can't ignore "confounders." First, the general principle. Then, an illustration.

In general, "controlling for confounders" is a critically important step. Imagine your outcome was whether patients died in a hospital and you were trying to see if being given last rites made a difference. If you tested for differences in means without controlling for patient health (a confounder) you'd get a silly answer. That's because those given the last rites are probably a lot sicker.

BUT! It isn't absolutely necessary to control for confounders that are uncorrelated with your observed explanatory variables. It's okay to omit confounders if they are uncorrelated with selection into control vs treatment group and uncorrelated with the data gathering process.

To illustrate this I made up a computer example in which 400 observations are randomly divided into a "prayed for" and "not prayed for" group. An outcome y is determined by a strong confounder, a modest random effect, and no effect at all of prayer. Then I do a standard 2-tailed t-test for differences in means at the 0.05 level and see how often "prayer has no effect" is rejected.

I did this 10,000 times and found a significant effect of prayer 4.48 percent of the time. Just about what you'd expect.

This seems to illustrate that saizai's claim that confounders don't matter in his application is basically right. Or it may illustrate that people are talking at cross-purposes, in that my "model" of saizai's experiment misses an important element that would make a difference. I hope that showing this toy model will make it easier for others to point out the specifics of what's missing.

For those interested, here's the code (in Matlab).
Code:
%{
confoundedPrayer.m
Monte Carlo to illustrate testing for prayer efficacy
in presence of confounder
Dick Startz
August 2006
%}
rand('state',0);  %% reset random number generators
randn('state',0);
nMonte = 10000
n = 400
rejectSum = 0;
for iMonte = 1:nMonte
    prayFor = rand(n,1)>0.5;
    notPrayFor = ~prayFor;
    y = 2 + 3*rand(n,1) + 0*prayFor + randn(n,1);
    meanDif = mean(y(prayFor)) - mean(y(notPrayFor));
    stdErr = sqrt(var(y(prayFor))/sum(prayFor) + var(y(notPrayFor))/sum(notPrayFor));
    rejectSum = rejectSum + (abs(meanDif/stdErr)>1.96);
end
disp(['False rejections occurred ',num2str(100*rejectSum/nMonte),' percent of the time']);
 
I don't see anywhere in your code where you included disease type, disease severity, treatment protocol, or any of the other confounders I've listed. You are still assuming that the two groups are homogeneous, which you cannot do until you verify from your sample that it is true...

And confounders are not only correlated with the selection process; they are also correlated with other covariates and with the outcome, and need to be accounted for either in the selection process or in the statistical analysis...

If you were to focus on one and only one disease type, severity, and treatment, your assumptions would be correct, although you'd still have to test whether the groups are homogeneous with regard to gender, etc. Randi discussed this type of study here, where they looked at prayer and its effect on patients who had coronary bypass surgery:

http://www.randi.org/jr/2006-04/041406schwartz.html#i7

However, for the type of study Saizai wants to do, a matched-pair design would work just as well: randomly select an individual with given demographics, disease type, disease severity, and treatment into the treatment group, then match them with a similar individual in the control group...
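As a rough sketch, assuming the matching step has already produced pairs of comparable individuals, the analysis could then be a simple paired comparison (the pair-effect model and numbers below are made up):
Code:
% Hypothetical matched-pair analysis: each prayed-for subject is paired with
% a similar control, and the within-pair differences are tested.
nPairs = 200;
pairEffect = randn(nPairs,1);               % whatever the pair shares (severity, treatment, ...)
treated = pairEffect + randn(nPairs,1);     % made-up outcome for the prayed-for member
control = pairEffect + randn(nPairs,1);     % made-up outcome for the matched control
d = treated - control;                      % pair-level differences
tStat = mean(d) / (std(d)/sqrt(nPairs));    % paired t-statistic
disp(['Paired t-statistic: ', num2str(tStat)]);
% matching removes the shared pair effect, leaving only the treatment contrast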

As for Likert scale, see:

http://en.wikipedia.org/wiki/Likert_scale

And SF-36, see:

http://www.sf-36.org/
 
Here's the cardiac study at Pubmed:

http://www.ncbi.nlm.nih.gov/entrez/..._uids=16569567&query_hl=3&itool=pubmed_docsum

Note that they demonstrated that "Major events and 30-day mortality were similar across the 3 groups" rather than assuming this was true...

But the outcome is what's most interesting:

"CONCLUSIONS: Intercessory prayer itself had no effect on complication-free recovery from CABG, but certainty of receiving intercessory prayer was associated with a higher incidence of complications."
 
And looking at the bibliography in the article in greater detail, there are many studies of intercessory prayer. One would be better off doing a meta-analysis of the literature than reinventing the wheel, although at first glance I can guess what the result will be, given the titles of many of the articles...

Note: edited for grammar
 
I don't see anywhere in your code where you included disease type, disease severity, treatment protocol, or any of the other confounders I've listed. You are still assuming that the two groups are homogeneous, which you cannot do until you verify from your sample that it is true...

The relevant line in the code is
Code:
y = 2 + 3*rand(n,1) + 0*prayFor + randn(n,1);

where "rand(n,1)" represents confounders. This allows for the confounders to be stronger for one group than the other - but only by random chance.

And confounders are not only correlated with the selection process, they are correlated with other covariates and the outcome and need to be accounted for in either the selection process or in the statistical analysis...
>snip

I've supplied some evidence, albeit pretty simple-minded. Perhaps those who disagree might chip in with evidence too. All we need is a simple example that models the proposed protocol and comes up with false positives. I'm sure that such an example would be very helpful in letting JREF know what to watch out for.
 
As for the meta-analysis I've proposed, never mind. It's already been done:

Masters KS, Spielmans GI, Goodson JT (2006). Are there demonstrable effects of distant intercessory prayer? A meta-analytic review. Ann Behav Med 32(1):21-6.

It's conclusion: "There is no scientifically discernable effect for IP [Intecessory Prayer] as assessed in controlled studies. Given that the IP literature lacks a theoretical or theological base and has failed to produce significant findings in controlled trials, we recommend that further resources not be allocated to this line of research."

It also was "designed to provide a current meta-analytic review of the effects of IP and to assess the impact of potential moderator variables."

Moderator variables are another name for confounders...

So Saizai, I'm figuring that you need to do a serious literature review before you even embark on your study...
 
And Startz, for your code to be valid you need to use a multivariate random variable with a certain level of covariance. Your code "y = 2 + 3*rand(n,1) + 0*prayFor + randn(n,1);" is just an addition of two random variables, which creates only an additive distribution (N(mu1+mu2, sigma)) rather than a multivariate distribution (N(mu1, mu2, sigma1, sigma2, rho))...
 
And Startz, for your code to be valid you need to use a multivariate random variable with a certain level of covariance. Your code "y = 2 + 3*rand(n,1) + 0*prayFor + randn(n,1);" is just an addition of two random variables, which creates only an additive distribution (N(mu1+mu2, sigma)) rather than a multivariate distribution (N(mu1, mu2, sigma1, sigma2, rho))...

Okay, that's a very specific response. Is the following correct? If instead of
Code:
rand(n,1)
and
Code:
randn(n,1)
I used two correlated normal random variables with different means and variances, then I'll get too many false positives?
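For anyone who wants to run it, here's a minimal sketch of that modification, reusing the same loop; the means, variances, and rho below are arbitrary made-up values, and the resulting rejection rate is left for the reader to check:
Code:
% Hypothetical variant of the earlier Monte Carlo: two correlated normal
% confounders enter the outcome; there is still no true prayer effect.
nMonte = 10000; n = 400; rho = 0.7;
L = chol([1 rho; rho 1], 'lower');
rejectSum = 0;
for iMonte = 1:nMonte
    prayFor = rand(n,1) > 0.5;
    Z = randn(n,2) * L';                    % correlated standard normals
    c1 = 2 + 1.0*Z(:,1);                    % confounder 1: mean 2, sd 1 (made up)
    c2 = 5 + 2.0*Z(:,2);                    % confounder 2: mean 5, sd 2 (made up)
    y = c1 + c2 + 0*prayFor + randn(n,1);
    meanDif = mean(y(prayFor)) - mean(y(~prayFor));
    stdErr = sqrt(var(y(prayFor))/sum(prayFor) + var(y(~prayFor))/sum(~prayFor));
    rejectSum = rejectSum + (abs(meanDif/stdErr) > 1.96);
end
disp(['False rejections occurred ', num2str(100*rejectSum/nMonte), ' percent of the time']);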
 
