Non-Homeopathic Belladonna

According to whom? According to The American Heritage Dictionary, evidence is: "A thing or things helpful in forming a conclusion or judgment: The broken window was evidence that a burglary had taken place. Scientists weigh the evidence for and against a hypothesis."

As has already been demonstrated on this forum, one can have endless arguments about the definition of "evidence". However, in this particular setting, what I am talking about is information that leads to a specific conclusion, mostly by excluding the possibility of any other explanation. And that is really what we have been talking about all along. The information you have presented on Cayce allows for multiple explanations, so it is unable to persuade someone to switch from one explanation to another.

The "evidence" in evidence-based medicine refers to just that. The value in a randomized, double-blind, placebo-controlled trial is not that it provides confirmation. It is that the set-up excludes any other explanation for the result.

All you have really done is look for information that confirms your pre-conceived ideas - a notoriously unreliable way to find the truth.

This just emphasizes the subjective nature of evaluating evidence. I may be convinced by something that doesn't convince you and you may be convinced by something that doesn't convince me.

It isn't so much that the process of evaluating evidence is subjective, but rather that the starting points are subjective. A scientific approach starts with the assumption of a naturalistic explanation until the possibility is excluded. You are assuming a supernatural explanation until the possibility is excluded. It simply shifts the burden of what kind of evidence is necessary, which is why we find different things convincing.

I think you will have a hard time finding anybody who disagrees with the idea that "recommendations/decisions should be based on evidence."

You are quite mistaken about this, but that's a whole different discussion.

But the guidelines are necessarily subjective. If they weren't, there would be no disagreement.

What disagreement?

Linda
 
As has already been demonstrated on this forum, one can have endless arguments about the definition of "evidence". However, in this particular setting, what I am talking about is information that leads to a specific conclusion, mostly by excluding the possibility of any other explanation. And that is really what we have been talking about all along. The information you have presented on Cayce allows for multiple explanations, so it is unable to persuade someone to switch from one explanation to another.

The "evidence" in evidence-based medicine refers to just that. The value in a randomized, double-blind, placebo-controlled trial is not that it provides confirmation. It is that the set-up excludes any other explanation for the result.
So why is it that we often hear in the media that a randomized, double-blind, placebo-controlled trial has proven something or other, only to have that finding contradicted down the road by another randomized, double-blind, placebo-controlled trial?

All you have really done is look for information that confirms your pre-conceived ideas - a notoriously unreliable way to find the truth.
Your subjective opinion is noted.

It isn't so much that the process of evaluating evidence is subjective, but rather that the starting points are subjective. A scientific approach starts with the assumption of a naturalistic explanation until the possibility is excluded. You are assuming a supernatural explanation until the possibility is excluded. It simply shifts the burden of what kind of evidence is necessary, which is why we find different things convincing.
Okay, but what is your threshold -- that extraordinary claims require extraordinary evidence?

You are quite mistaken about this, but that's a whole different discussion.
Can you name a well-known person who disagrees with the idea that "recommendations/decisions should be based on evidence"?

What disagreement?

Linda
For example, the Wikipedia article on Evidence Based Medicine states: "Critics of EBM say lack of evidence and lack of benefit are not the same, and that the more data are pooled and aggregated, the more difficult it is to compare the patients in the studies with the patient in front of the doctor — that is, EBM applies to populations, not necessarily to individuals. In The limits of evidence-based medicine, Tonelli argues that 'the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand.' Tonelli suggests that proponents of evidence-based medicine discount the value of clinical experience."
 
So why is it that we often hear in the media that a randomized, double-blind, placebo-controlled trial has proven something or other, only to have that finding contradicted down the road by another randomized, double-blind, placebo-controlled trial?

Do we? Can you provide an example?

Your subjective opinion is noted.

Well, you've never provided evidence to the contrary despite being asked numerous times....

Okay, but what is your threshold -- that extraordinary claims require extraordinary evidence?

I contributed to this in a thread a little while ago. Basically, all claims require the same evidence. It's just that ordinary claims already have an extraordinary amount of evidence to back them up.

Can you name a well-known person who disagrees with the idea that "recommendations/decisions should be based on evidence"?

Isn't your wikipedia quote from below an example of that?

For example, the Wikipedia article on Evidence Based Medicine states: "Critics of EBM say lack of evidence and lack of benefit are not the same, and that the more data are pooled and aggregated, the more difficult it is to compare the patients in the studies with the patient in front of the doctor — that is, EBM applies to populations, not necessarily to individuals. In The limits of evidence-based medicine, Tonelli argues that 'the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand.' Tonelli suggests that proponents of evidence-based medicine discount the value of clinical experience."

I can see why the neutrality of that section is disputed.

I think this is a better description of EBM. The subjective component relates to the individual patients and doctors. The guidelines are meant to be an objective, consistent way to evaluate the evidence relevant to individual situations.

Linda
 
So why is it that we often hear in the media that a randomized, double-blind, placebo-controlled trial has proven something or other, only to have that finding contradicted down the road by another randomized, double-blind, placebo-controlled trial?


Because "the media" are often not very good at reporting science.
 
Do we? Can you provide an example?
I recall various studies seeming to show that eggs and coffee contributed to a variety of health problems. More recently, some studies claimed that coffee can prevent colorectal cancer. And now, there's an article that takes issue with those studies. See -- http://aje.oxfordjournals.org/cgi/content/abstract/163/7/638

Well, you've never provided evidence to the contrary despite being asked numerous times....
What kind of "evidence" are you looking for?

I contributed to this in a thread a little while ago. Basically, all claims require the same evidence. It's just that ordinary claims already have an extraordinary amount of evidence to back them up.
I disagree with your latter statement. In some cases, "ordinary claims" prove false. Also, I note that on the thread that you reference, you express your strong opinion about the Michelson-Morley experiment. For an alternative opinion, see http://www.alternativescience.com/ether.htm

Isn't your wikipedia quote from below an example of that?
I don't think so because, again, you have to distinguish between "evidence" and "Evidence Based Medicine."

I can see why the neutrality of that section is disputed.
But doesn't that prove my point about subjective opinions?

I think this is a better description of EBM. The subjective component relates to the individual patients and doctors. The guidelines are meant to be an objective, consistent way to evaluate the evidence relevant to individual situations.

Linda
I understand the intent, but don't think EBM always produces optimal results.
 
I recall various studies seeming to show that eggs and coffee contributed to a variety of health problems. More recently, some studies claimed that coffee can prevent colorectal cancer. And now, there's an article that takes issue with those studies. See -- http://aje.oxfordjournals.org/cgi/content/abstract/163/7/638

So, Rodney, exactly what part of either the original study or this later reanalysis employed double-blind, randomized exposure to the suspected toxicant or, in this case, safener? Take your time.

Also, I note that on the thread that you reference, you express your strong opinion about the Michelson-Morley experiment. For an alternative opinion, see http://www.alternativescience.com/ether.htm

For land's sake, Rodney, this article is as much hogwash as anyone could stand. The Michelson-Morley experiment was run to prove the existence of aether, not disprove it. But, disprove it, it did. Ain't science grand? The adoption of facts over "tradition" or commonly accepted beliefs? Even Maxwell, the physicist whose published mathematics unknowingly showed that light propagation needed no medium, believed in aether. Some errors die hard.

Rodney, I have no idea what kind of person you are but you really are over your head in dealing with science. You can't seem to distinguish evidence that is critical and easily verified from the BS foisted by people who want to sell you stuff, like your $49 membership to a group that should be revolutionizing health care if its main tenets were true.

I understand the intent, but don't think EBM always produces optimal results.

Of course not! You don't understand science, do you? Science is not some unalterable, know-everything body of knowledge that always comes up with the right answer. Science survives on research and investigation of facts, not made-up, untestable crapola like Cayce's visions. However, I will take evidence based medicine over anything you or your woo buddies ever concoct as I know I stand a much better chance with science.
 
I recall various studies seeming to show that eggs and coffee contributed to a variety of health problems. More recently, some studies claimed that coffee can prevent colorectal cancer. And now, there's an article that takes issue with those studies. See -- http://aje.oxfordjournals.org/cgi/content/abstract/163/7/638

I said "randomized, double-blind, placebo-controlled trial".

What kind of "evidence" are you looking for?

Addressing the disconfirming evidence that is presented to you.

I disagree with your latter statement. In some cases, "ordinary claims" prove false.

I don't think you understood what I meant, then. Of course ordinary claims can be false. I'm saying that it's a quite different matter to convince my neighbour that I have a cat than it is to convince her that I have a unicorn because the background evidence necessary to prove the claim of "cat" has already been established.

Also, I note that on the thread that you reference, you express your strong opinion about the Michelson-Morley experiment. For an alternative opinion, see http://www.alternativescience.com/ether.htm

What am I supposed to get from that? I already know that the internet allows people free rein to demonstrate their ignorance.

You never answered my previous question ("What's the alternative - ignoring the information and tossing a die?") so maybe I'll ask it here. Once I decide to abandon a rational approach to evaluating ideas, how do I choose which ideas to consider valid? Toss a die?

I don't think so because, again, you have to distinguish between "evidence" and "Evidence Based Medicine."

But the argument that he is presenting is that non-evidentiary knowledge should also be incorporated into clinical decision-making. The deficiency/criticism he has of EBM is based on the fact that it depends upon evidence.

But doesn't that prove my point about subjective opinions?

No, because the criticisms aren't directed at the evaluation of evidence, but rather the degree to which evidence should play a role. And my comment about neutrality was really referring to the mischaracterization about what EBM says on this point.

I understand the intent, but don't think EBM always produces optimal results.

Rodney, I sincerely doubt that you understand EBM.

If EBM doesn't produce optimal results, it is on the basis of whether adequate evidence exists (the points Tonelli was making as well), not because the evaluation of the existing evidence is subjective.

Linda
 
I think this is a better description of EBM. The subjective component relates to the individual patients and doctors. The guidelines are meant to be an objective, consistent way to evaluate the evidence relevant to individual situations.

Linda

I've also found this one to be very informative (and slightly easier to read - but that's me)
 
I said "randomized, double-blind, placebo-controlled trial".
Okay, so let's look at one of your favorite topics: OTC cough medicines. According to the article "Systematic review of randomised controlled trials of over the counter cough medicines for acute cough in adults" -- http://www.bmj.com/cgi/content/abst...9ee4dd03a3731c5f0787b1db&keytype2=tf_ipsecsha --

"Included studies: All randomised controlled trials that compared oral over the counter cough preparations with placebo in adults with acute cough due to upper respiratory tract infection in ambulatory settings and that had cough symptoms as an outcome.

"Results: 15 trials involving 2166 participants met all the inclusion criteria. Antihistamines seemed to be no better than placebo. There was conflicting evidence on the effectiveness of antitussives, expectorants, antihistamine-decongestant combinations, and other drug combinations compared with placebo."

Addressing the disconfirming evidence that is presented to you.
What, exactly, is that disconfirming evidence?

I don't think you understood what I meant, then. Of course ordinary claims can be false. I'm saying that it's a quite different matter to convince my neighbour that I have a cat than it is to convince her that I have a unicorn because the background evidence necessary to prove the claim of "cat" has already been established.
But your claim of having a cat is no more evidential than your claim of having a unicorn. Similarly, a research finding that establishes at the 1% level of statistical significance that a non-controversial hypothesis is correct should be treated no differently than a research finding that establishes at the 1% level of statistical significance that a controversial hypothesis is correct.

What am I supposed to get from that? I already know that the internet allows people free rein to demonstrate their ignorance.
What you're supposed to get is that what you were taught in school about the Michelson-Morley experiment was inaccurate: Michelson and Morley did not in fact obtain a null result in their original experiment, nor did that experiment irrefutably establish that there is no ether.

You never answered my previous question ("What's the alternative - ignoring the information and tossing a die?") so maybe I'll ask it here. Once I decide to abandon a rational approach to evaluating ideas, how do I choose which ideas to consider valid? Toss a die?
No, but a rational approach does not, in my opinion, mean throwing out evidence from 100 years ago on the basis that credible individuals were all mistaken or that a spontaneous healing must have occurred. Rather, a rational approach means broadly examining all evidence, even if it has not been tested in a laboratory or, indeed, is not even amenable to being tested in a laboratory.

But the argument that he is presenting is that non-evidentiary knowledge should also be incorporated into clinical decision-making. The deficiency/criticism he has of EBM is based on the fact that it depends upon evidence.

No, because the criticisms aren't directed at the evaluation of evidence, but rather the degree to which evidence should play a role. And my comment about neutrality was really referring to the mischaracterization about what EBM says on this point.

Rodney, I sincerely doubt that you understand EBM.

If EBM doesn't produce optimal results, it is on the basis of whether adequate evidence exists (the points Tonelli was making as well), not because the evaluation of the existing evidence is subjective.

Linda
I think I can better address your points if you answer this question: If you were Tommy House's doctor in February 1909, would you have administered Cayce's recommended treatment?
 
Okay, so let's look at one of your favorite topics: OTC cough medicines. According to the article "Systematic review of randomised controlled trials of over the counter cough medicines for acute cough in adults" -- http://www.bmj.com/cgi/content/abst...9ee4dd03a3731c5f0787b1db&keytype2=tf_ipsecsha --

"Included studies: All randomised controlled trials that compared oral over the counter cough preparations with placebo in adults with acute cough due to upper respiratory tract infection in ambulatory settings and that had cough symptoms as an outcome.

"Results: 15 trials involving 2166 participants met all the inclusion criteria. Antihistamines seemed to be no better than placebo. There was conflicting evidence on the effectiveness of antitussives, expectorants, antihistamine-decongestant combinations, and other drug combinations compared with placebo."

So, what's your point? Where is one randomized, double-blind study specifically contradicting another? "Conflicting evidence" means that the null hypothesis could not be rejected at the target significance level. The language of that article (not in a journal, by the way) does not even come close to supporting your argument that double-blind studies contradict each other. If that were true, this very expensive practice would be dropped overnight.

Similarly, a research finding that establishes at the 1% level of statistical significance that a non-controversial hypothesis is correct should be treated no differently than a research finding that establishes at the 1% level of statistical significance that a controversial hypothesis is correct.

You're finally getting it. So, what does that have to say about Cayce? Why does no one but your Cayce Society take him seriously? Methinks you just gave us a reason.

What you're supposed to get is that what you were taught in school about the Michelson-Morley experiment was inaccurate: Michelson and Morley did not in fact obtain a null result in their original experiment, nor did that experiment irrefutably establish that there is no ether.

Yes, it did. Your point to prove. If you can't prove, move on. What does it have to do with Cayce anyway?

No, but a rational approach does not, in my opinion, mean throwing out evidence from 100 years ago on the basis that credible individuals were all mistaken or that a spontaneous healing must have occurred. Rather, a rational approach means broadly examining all evidence, even if it has not been tested in a laboratory or, indeed, is not even amenable to being tested in a laboratory.

Where have you been? The evidence has been considered and rejected. Linda has patiently and painstakingly guided you through that process. So you expect medical science to repeat all they've done to debunk these claims again for your personal satisfaction.

And, Rodney, if it can't be tested, it's not worth the time of day. Falsifiability is key to science. Deal with it.

If you were Tommy House's doctor in February 1909, would you have administered Cayce's recommended treatment?

No.
 
Whilst the examples are based around the sponsoring companies' drugs, it doesn't invalidate the lesson.

Maybe the authors chose to focus on randomized controlled trials as the epitome of evidence (when in fact they would be a poor source of information when asking other types of questions) because they thought their target audience was mostly interested in evidence as it relates to treatment, and not so they could focus on an example that would serve as an advertisement for their sponsor. It's possible.

And I might even agree that it doesn't matter, except that several times I have been quite inappropriately asked for a "randomized controlled trial" to back up my assertions. And when I try to explain that an RCT is the wrong kind of evidence and provide a link to better evidence, I'm not believed. So I wouldn't mind a link that explains the whole picture instead of perpetuating the myth that RCTs are the answer.

I admit that I don't really know if this is a significant issue, since I realize that some people refuse to believe anything that contradicts their own opinion and may just be looking for an excuse. And I also admit that my link is harder to read (and aimed at a different audience) and so may not get read at all (something less than ideal is still better than nothing?). And that I am overly sensitive about ridding physicians of the notions that it is okay to let drug companies heavily subsidize educational matters and that we aren't being subtly influenced in thousands of ways by their involvement.

Linda
 
So, what's your point? Where is one randomized, double-blind study specifically contradicting another? "Conflicting evidence" means that the null hypothesis could not be rejected at the target significance level. The language of that article (not in a journal, by the way) does not even come close to supporting your argument that double-blind studies contradict each other. If that were true, this very expensive practice would be dropped overnight.


Well, not quite. Every now and then you may get a single properly conducted study that produces a false positive or false negative; it's just down to the way stats work (Linda explained this at some length in the "homeopathic cough medicine" thread, I think). This is why it is important that the results can be replicated by other similar studies.
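The point about "the way stats work" can be shown with a small simulation (my own sketch, with made-up trial sizes, not anything from Linda's thread): even a perfectly conducted trial of a treatment with no effect whatsoever will cross the conventional p < 0.05 threshold about 1 time in 20, purely by chance.

```python
import math
import random

def fake_trial(rng, n=200, p=0.5):
    """One 'trial' of a useless treatment: both arms have the same
    true response rate p, so any 'significant' difference is pure noise."""
    a = sum(rng.random() < p for _ in range(n))  # responders, treatment arm
    b = sum(rng.random() < p for _ in range(n))  # responders, placebo arm
    pooled = (a + b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    z = (a / n - b / n) / se if se > 0 else 0.0
    return abs(z) > 1.96  # two-sided two-proportion z-test at the 5% level

rng = random.Random(42)
trials = 2000
false_positives = sum(fake_trial(rng) for _ in range(trials))
print(false_positives / trials)  # close to 0.05
```

Which is exactly why a single positive result means little until it has been replicated: with thousands of trials run every year, occasional "significant" flukes are guaranteed.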

But Rodney wasn't asking, in this thread, about contradictory results from randomized, double-blind, placebo-controlled trials. He was asking about media reporting of the results of trials:
So why is it that we often hear in the media that a randomized, double-blind, placebo-controlled trial has proven something or other, only to have that finding contradicted down the road by another randomized, double-blind, placebo-controlled trial?


This opens up a whole different can of worms. "The media" are not very good at reporting science. Many journalists who report "science" stories appear ill-qualified to do so. Studies that are not particularly conclusive, or that haven't been replicated, are reported as if they're proof of something. Non-issues get blown up out of all proportion; solid science goes unreported if it's not considered sensational enough. If you want to read more about "the media" and science reporting, I can recommend Ben Goldacre's Bad Science blog.

Needless to say, Rodney considers newspaper reports of medical matters to be completely reliable, especially if they were written the best part of a century ago and supported by affidavits given years after the event by people with no medical qualifications.

Note also how in the first link he's using the newspaper story to support the reliability of the affidavit, while in the second link he says the affidavit was given to support the newspaper story. Neat, huh?
 
...a research finding that establishes at the 1% level of statistical significance that a non-controversial hypothesis is correct should be treated no differently than a research finding that establishes at the 1% level of statistical significance that a controversial hypothesis is correct.


If my neighbour told me she saw a cat run over in the shopping mall car park, the distressed look on her face would be sufficient to convince me that it happened. If she told me, with a similar distressed look, that she saw an alien spaceman run over in the shopping mall car park, I would need just a little more evidence.
 
Okay, so let's look at one of your favorite topics: OTC cough medicines. According to the article "Systematic review of randomised controlled trials of over the counter cough medicines for acute cough in adults" -- http://www.bmj.com/cgi/content/abst...9ee4dd03a3731c5f0787b1db&keytype2=tf_ipsecsha --

"Included studies: All randomised controlled trials that compared oral over the counter cough preparations with placebo in adults with acute cough due to upper respiratory tract infection in ambulatory settings and that had cough symptoms as an outcome.

"Results: 15 trials involving 2166 participants met all the inclusion criteria. Antihistamines seemed to be no better than placebo. There was conflicting evidence on the effectiveness of antitussives, expectorants, antihistamine-decongestant combinations, and other drug combinations compared with placebo."

None of those studies are an example of "a randomized, double-blind, placebo-controlled trial has proven something or other, only to have that finding contradicted down the road by another randomized, double-blind, placebo-controlled trial?" If you think that any one of those studies contradicts any of the others, specify which ones.

What, exactly, is that disconfirming evidence?

You (very conveniently) provide an example in the paragraph below.

But your claim of having a cat is no more evidential than your claim of having a unicorn.

Right. I need 100 points to establish both claims. 99 of those 100 are already available for the cat, so I only need to provide one more point. None are available for the unicorn, so I need to provide all 100.
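The "100 points" arithmetic can be put in Bayesian terms (the numbers below are my own illustration, not anything from the thread): the same piece of evidence enters as the same likelihood ratio, but it is applied to very different prior odds, so it settles the cat claim while leaving the unicorn claim essentially where it started.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same evidence (say, a sighting that is 10x likelier if the claim is true)...
evidence_lr = 10
# ...applied to a mundane claim and to an extraordinary one:
print(posterior(0.90, evidence_lr))   # cat: ~0.99 -- claim settled
print(posterior(1e-9, evidence_lr))   # unicorn: ~1e-8 -- barely moved
```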

Similarly, a research finding that establishes at the 1% level of statistical significance that a non-controversial hypothesis is correct should be treated no differently than a research finding that establishes at the 1% level of statistical significance that a controversial hypothesis is correct.

I agree that a research finding at the 1% level of statistical significance is independent of whether or not the underlying hypothesis is controversial. But I have already explained to you (in detail) that the use of that bit of information to establish whether or not a hypothesis is correct is a completely separate step. Ivor the Engineer recently started a thread on this very issue. And it is well-established and non-controversial that these are two separate steps. The controversy is only over the extent to which researchers are aware of and make a distinction between these two steps when reporting their conclusions.

And this is an example of you failing to address disconfirming evidence. I have explained to you in detail the difference between the two and provided links to supplementary information. You even had an independent statistician (Beth) weigh in on the subject who confirmed this. Yet you continue to completely ignore it, even though it is one of the most crucial pieces of evidence against the paranormal/supernatural.
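The "two separate steps" can be made concrete with the standard positive-predictive-value arithmetic behind "pre-study odds" arguments (the power and probability figures here are my own illustration): the chance that a statistically significant finding reflects a true effect depends not just on the significance threshold but on how plausible the hypothesis was before the study.

```python
def ppv(pre_study_prob, power=0.8, alpha=0.05):
    """Probability that a 'significant' finding is a true effect, given the
    pre-study probability that the hypothesis is true, the study's power,
    and its significance threshold alpha."""
    r = pre_study_prob / (1 - pre_study_prob)  # pre-study odds
    return (power * r) / (power * r + alpha)

# Identical significance thresholds, very different conclusions:
print(ppv(0.5))    # plausible hypothesis: ~0.94 chance the finding is real
print(ppv(0.01))   # long-shot hypothesis: ~0.14 chance the finding is real
```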

What you're supposed to get is that what you were taught in school about the Michelson-Morley experiment was inaccurate: Michelson and Morley did not in fact obtain a null result in their original experiment, nor did that experiment irrefutably establish that there is no ether.

I was never taught that in the first place. None of that supports the author's conclusions.

No, but a rational approach does not, in my opinion, mean throwing out evidence from 100 years ago on the basis that credible individuals were all mistaken or that a spontaneous healing must have occurred. Rather, a rational approach means broadly examining all evidence, even if it has not been tested in a laboratory or, indeed, is not even amenable to being tested in a laboratory.

I agree. Do you have any reason to think that I don't do that?

I think I can better address your points if you answer this question: If you were Tommy House's doctor in February 1909, would you have administered Cayce's recommended treatment?

My answer to that question will not allow you to address my points since 1) the EBM movement has had almost no influence on how I practice medicine (so my response cannot be taken as an example of EBM), and 2) you are perpetuating the same mischaracterization of EBM that the wikipedia article promotes and the article I linked to tries to counteract.

I realize that you wish to establish that the insistence on the use of evidence in medicine will lead to negative outcomes, as Tommy would have died under those circumstances. And whether or not that's true in Tommy's particular case doesn't matter, because the general point has already been established: there are occasions where decisions based on the best evidence available have turned out to be wrong and harmful. We can strive towards perfection, but a complex system imposes limits. Again I ask, what is your alternative? And do you have any reasonable justification for abandoning this system and for adopting your alternative?

Linda
 
None of those studies are an example of "a randomized, double-blind, placebo-controlled trial has proven something or other, only to have that finding contradicted down the road by another randomized, double-blind, placebo-controlled trial?" If you think that any one of those studies contradicts any of the others, specify which ones.
I don't have the time at the moment to analyze these studies, but if there was "conflicting evidence on the effectiveness of antitussives, expectorants, antihistamine-decongestant combinations, and other drug combinations compared with placebo," doesn't that indicate the studies were contradictory?

I agree that a research finding at the 1% level of statistical significance is independent of whether or not the underlying hypothesis is controversial. But I have already explained to you (in detail) that the use of that bit of information to establish whether or not a hypothesis is correct is a completely separate step. Ivor the Engineer recently started a thread on this very issue. And it is well-established and non-controversial that these are two separate steps. The controversy is only over the extent to which researchers are aware of and make a distinction between these two steps when reporting their conclusions.

And this is an example of you failing to address disconfirming evidence. I have explained to you in detail the difference between the two and provided links to supplementary information. You even had an independent statistician (Beth) weigh in on the subject who confirmed this. Yet you continue to completely ignore it, even though it is one of the most crucial pieces of evidence against the paranormal/supernatural.
I don't have time to read the other thread right now, but I assume you're referring to the "pre-study odds" idea addressed in the John Ioannidis article "Why Most Published Research Findings Are False." To me, that idea is just a way of discrediting hypotheses that challenge the conventional wisdom. And I think Beth came down somewhere in the middle between your position and mine.

I was never taught that in the first place. None of that supports the author's conclusions.
You were taught something different than me. ;)

I agree. Do you have any reason to think that I don't do that?
Not you personally, perhaps, but I think EBM is based on ignoring anecdotal evidence.

My answer to that question will not allow you to address my points since 1) the EBM movement has had almost no influence on how I practice medicine (so my response cannot be taken as an example of EBM), and 2) you are perpetuating the same mischaracterization of EBM that the wikipedia article promotes and the article I linked to tries to counteract.

I realize that you wish to establish that the insistence on the use of evidence in medicine will lead to negative outcomes, as Tommy would have died under those circumstances. And whether or not it's true in Tommy's case doesn't matter because it has already been established to be true. There are occasions where decisions based on the best evidence available have turned out to be wrong and harmful. We can strive towards perfection, but a complex system imposes limits. Again I ask, what is your alternative? And do you have any reasonable justification for abandoning this system and for adopting your alternative?

Linda
Again, I believe that the "best evidence available" includes more than just randomized, double-blind, placebo-controlled trials. My alternative is to look at evidence as broadly as possible and make decisions on a case-by-case basis.
 
Well, not quite. Every now and then you may get a single properly conducted study that produces a false positive or false negative; it's just down to the way stats work (Linda explained this at some length in the "homeopathic cough medicine" thread, I think). This is why it is important that the results can be replicated by other similar studies.
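To put a rough number on why replication matters: if each study independently has a false-positive rate of α, the chance that several studies all produce a false positive shrinks geometrically. This is my own illustrative sketch, and the key assumption is independence; a source of bias shared by all the studies breaks it:

```python
# Illustrative only: probability that k independent studies all yield a
# false positive, given per-study false-positive rate alpha.
# Assumes the studies share no common source of bias.
alpha = 0.05
for k in (1, 2, 3):
    print(k, round(alpha ** k, 6))   # 1 0.05 / 2 0.0025 / 3 0.000125
```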

Perhaps. I qualified my comment by assuming the conclusions met the target statistical confidence. But you are right, especially in the case where a source of bias was not identified in the first study but was controlled in the second.

But Rodney wasn't asking, in this thread, about contradictory results from randomized, double-blind, placebo-controlled trials. He was asking about media reporting of the results of trials.

Thank you. That escaped me. Yes, he was remarking on reading that stuff in newspapers, not on the subject itself. I took his complaint to be about sloppy writing. :wink: Maybe we should press him to cite one of these newspaper stories. So far, he's turned up a couple of non-starters on the web. Maybe he can send in his many clippings from the National Enquirer.

This opens up a whole different can of worms. "The media" are not very good at reporting science. Many journalists who report "science" stories appear ill-qualified to do so. Studies that are not particularly conclusive, or that haven't been replicated, are reported as if they're proof of something. Non-issues get blown up out of all proportion; solid science goes unreported if it's not considered sensational enough. If you want to read more about "the media" and science reporting, I can recommend Ben Goldacre's Bad Science blog.

You are absolutely right. Science reporting in the national media is usually subpar. There are notable exceptions, but these are generally recognized scientists, not reporters. I've even caught gaffes in the weekly mag from the American Chemical Society, and that really shook my confidence in them.

There's a lot of bad reporting. Some of it is in the study reports themselves! From time to time, I've been assigned the task of reviewing studies whose reported conclusions are contradictory to known fact and I've found that most such assertions are unsupported by the data and only a reflection of the investigator's bias. So, peer review and confirmation are two of science's saving graces.

Needless to say, Rodney considers newspaper reports of medical matters to be completely reliable, especially if they were written the best part of a century ago and supported by affidavits given years after the event by people with no medical qualifications.

Note also how in the first link he's using the newspaper story to support the reliability of the affidavit, while in the second link he says the affidavit was given to support the newspaper story. Neat, huh?

Rodney is truly a phenomenon. Whatever logic he applies to situations escapes me. Circular reasoning (as you've pointed out), denial of fact, selection of only confirmatory evidence, and so on are his hallmarks. Linda's patience is truly remarkable!

Thanks!
 
