
Scientists as "believers".

Stumpy said:

Agreed. However, what about the numerous examples of scientific fraud and junk science in more mainstream areas of science? Aren't these things driven by an "a priori" belief in a particular hypothesis? Things like the EPA's meta-analysis that concluded that environmental tobacco smoke causes cancer. This conclusion was only reached after deliberately ignoring studies that gave results contrary to the "desired" result and then manipulating the numbers. There are thousands of examples; aren't such dubious scientific endeavours driven by a pre-existing belief in the hypothesis that is being tested?

No. There are some -- I doubt if there are even hundreds -- examples of dubious scientific endeavours being driven by a desire by the funding agencies to find evidence supporting a particular point.

The point that you miss is that beliefs are not evidence, and scientists are trained to evaluate evidence in support of (or against) a particular hypothesis. As a practicing scientist, I may believe such-and-such (which is why I am studying it). However, my beliefs have almost no causal effect on the beliefs of other scientists; only when I can amass evidence will they start to share my belief.

The EPA's meta-analysis is actually a fairly good example, albeit of a deceptive (and therefore justly scandalized) practice; the EPA cannot adjust the actual evidence from the studies, and is therefore attempting to manipulate the perception of that evidence. It's no different than if a bench scientist were to fabricate his lab notes; it's simple misrepresentation of evidence. I.e., they're lying.

The thing about lying, though, is that the practice of science demands independent replication of studies (and the reporting of data), so things like this get caught --- not as a check on the "biases" but as a check on the fundamental honesty of the all-too-human scientists who are part of the system. Meta-analyses are particularly susceptible to manipulation, which is part of the reason that some scientists simply refuse to accept meta-analyses as "evidence," saying that they prove nothing. But the only reason you know about the EPA's manipulations is because the results and evidence were critically examined.
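The cherry-picking mechanism is easy to demonstrate. A fixed-effect meta-analysis pools study results as an inverse-variance-weighted mean, so dropping the studies that point the "wrong" way shifts the pooled estimate by construction. A toy sketch (all numbers invented, not drawn from the actual EPA studies):

```python
# Toy illustration (invented numbers): how excluding contrary studies
# can flip a meta-analytic conclusion.  Each study reports an effect
# estimate and its variance; the fixed-effect pooled estimate is the
# inverse-variance-weighted mean of the effects.

def pooled_effect(studies):
    """Fixed-effect (inverse-variance-weighted) pooled estimate."""
    weights = [1.0 / var for _, var in studies]
    total = sum(w * eff for (eff, _), w in zip(studies, weights))
    return total / sum(weights)

# (effect, variance) pairs -- a positive effect means "harm found"
all_studies = [(0.3, 0.04), (-0.2, 0.02), (0.1, 0.05),
               (-0.3, 0.03), (0.05, 0.06), (-0.1, 0.02)]

# Keep only the studies pointing the "desired" way
cherry_picked = [s for s in all_studies if s[0] > 0]

print(round(pooled_effect(all_studies), 3))    # ≈ -0.075 (near zero)
print(round(pooled_effect(cherry_picked), 3))  # ≈ 0.168 (clear "effect")
```

With all six invented studies the pooled effect is essentially null; restricted to the three "favourable" ones it looks like a clear positive finding, which is exactly why reviewers insist on seeing the study-inclusion criteria.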

The bigger problem is the press. The press, basically, doesn't critically analyze anything, and goes out of its way to blow anything and everything well out of proportion. That's also the reason for the "mindless health scares." It doesn't matter how fair, unbiased, and clear the scientist is, when the international press is practically guaranteed to misrepresent his words with the deliberate intent of creating a scare -- because scares sell papers.


Which isn't a problem if the ravens are actually in the football fields. You will be wasting just as much time looking in parks if there are no ravens there but you believe that there are.

Absolutely. And this is a point subject to empirical validation, if necessary. You look for ravens in football fields, I'll look for them in parks where ravens have previously been spotted, and we'll compare numbers later.

In general, this experiment has already been done. "The race is not always to the swift, nor the battle to the strong, but that's the way to bet."
 
drkitten said:
The bigger problem is the press. The press, basically, doesn't critically analyze anything, and goes out of its way to blow anything and everything well out of proportion. That's also the reason for the "mindless health scares." It doesn't matter how fair, unbiased, and clear the scientist is, when the international press is practically guaranteed to misrepresent his words with the deliberate intent of creating a scare -- because scares sell papers.
"Science is a good thing. News reporters are good things too. But it's never a good idea to put them in the same room."
Scott Adams.
 
drkitten said:
No. There are some -- I doubt if there are even hundreds -- examples of dubious scientific endeavours being driven by a desire by the funding agencies to find evidence supporting a particular point.


This site suggests otherwise, and that is only a small selection focused on a narrow field, usually health and environmental scares.

The point that you miss is that beliefs are not evidence, and scientists are trained to evaluate evidence in support of (or against) a particular hypothesis.

The point you miss is that beliefs frequently and undeniably produce flawed evidence and cloud objective judgement when it comes to evaluating evidence.

As a practicing scientist, I may believe such-and-such (which is why I am studying it). However, my beliefs have almost no causal effect on the beliefs of other scientists; only when I can amass evidence will they start to share my belief.

Your beliefs may well have an effect on the general population, especially if your beliefs are compromising your objectivity.


The thing about lying, though, is that the practice of science demands independent replication of studies (and the reportage of data), so things like this get caught --- not as a check on the "biases" but as a check on the fundamental honesty of the all too human scientists who are part of the system.

Science also demands impartiality, open-minded skepticism and objectivity in its approach (or at least it used to).

Meta-analyses are particularly susceptible to manipulation, which is part of the reason that some scientists simply refuse to accept meta-analyses as "evidence," saying that they prove nothing. But the only reason you know about the EPA's manipulations is because the results and evidence were critically examined.

I may know about it, you may know about it, the US Supreme Court may know about it, but how many of the general public know about it? As far as the public is concerned, environmental tobacco smoke causes cancer. The whole myth stems from the EPA's junk science, which was in turn born of an a priori belief that this was the case.

The bigger problem is the press. The press, basically, doesn't critically analyze anything, and goes out of its way to blow anything and everything well out of proportion. That's also the reason for the "mindless health scares." It doesn't matter how fair, unbiased, and clear the scientist is, when the international press is practically guaranteed to misrepresent his words with the deliberate intent of creating a scare -- because scares sell papers.

If the junk scientists weren't turning out the junk science in the first place, then the press wouldn't even come into the equation.
 
geni said:
There is a difference between of and about.

Apologies. Let me re-phrase my points:

What are the rules about "belief" and how do they relate to logic?

Also, is the church minister bound by the same rules about logic and belief, as the scientist, in his work? Is the artist bound by the same rules when painting an abstract piece of art for sale?

Scientists should be striving towards the truth; isn't there an almost inexhaustible supply of examples of how bias and "a priori" belief have got in the way of that?
 
Stumpy said:
Apologies. Let me re-phrase my points:

What are the rules about "belief" and how do they relate to logic?

Also, is the church minister bound by the same rules about logic and belief, as the scientist, in his work? Is the artist bound by the same rules when painting an abstract piece of art for sale?

Well I think so


Scientists should be striving towards the truth; isn't there an almost inexhaustible supply of examples of how bias and "a priori" belief have got in the way of that?

Yep, but there is also an almost inexhaustible supply of examples of how bias and "a priori" belief have speeded up the process.
 
geni said:
Well I think so

I don't see how; perhaps if you could tell me about the rules of belief and how they relate to the rules of logic it might become clearer. For instance, the faith-based discussions about how God fits into the recent Asian tsunami seem miles apart from the scientific explanations given.


Yep, but there is also an almost inexhaustible supply of examples of how bias and "a priori" belief have speeded up the process.

Can you give an example please?
 
Stumpy said:
I don't see how; perhaps if you could tell me about the rules of belief and how they relate to the rules of logic it might become clearer. For instance, the faith-based discussions about how God fits into the recent Asian tsunami seem miles apart from the scientific explanations given.

It may be that faith is logically indefensible (but then I concede the same about pretty much everything); this does not mean that the rules of logic should not be applied.




Can you give an example please?

Discovery of the moons of Mars.
 
Stumpy said:
This site suggests otherwise, this is only a small selection focussed on a narrow field, usually health and environmental scares.

I'm afraid that if you get your science news from a site like junkscience.com, you get no sympathy (or respect) from me or anyone else who actually studies the practice of science. Do you get your political news from the New York Post, too?





The point you miss is that beliefs frequently and undeniably produce flawed evidence and cloud objective judgement when it comes to evaluating evidence.

No, the point that you miss is that your claim that this happens is unfounded and relies on a misapprehension of scientific practice and the role of belief. I didn't miss your point. I simply reject it.

Let's take a (semi)-realistic example of how science is actually practiced, to see what's going on. Suppose, for whatever reason, I believe that daily doses of beet juice will extend lifespan. Presumably (as a scientist), I have some kind of evidence backing it up and haven't just conjured this belief out of thin air. Perhaps I visited a small village in Ruthenia where the inhabitants drink beet juice and regularly live to be 90, or perhaps I found some enzyme in beet juice that appears to have anagathic properties. Perhaps it's just because my great-grandmother swore that a glass of beet juice a day keeps the undertaker away. Whatever.

This is the first step in scientific research. If I didn't believe that there was an effect, I'd have no motivation to investigate beet juice in particular among all the other things there are (many of which taste a lot better). But at the same time, if I simply go around telling people to drink beet juice, I'm not a scientist, but merely a fruitcake. I have to look for evidence to support my beliefs.

Now, because I'm a competent researcher, I am aware of the necessity for actual data. So I'll get a couple of dozen lab rats, feed Purina rat chow and beet juice to half, and just the rat chow to the others. And, sure 'nough, the rats on the beet juice live (on average) longer. Is this "proof"?

Not at all, but it might be evidence. At this point, I've probably got enough "evidence" that I can approach the National Science Foundation and say "look, please give me enough money to run a 'real' experiment." (I want to repeat it with monkeys, which are biologically close to humans, but expensive.) The NSF is pretty cagey with its money, and it's not going to give it to me unless its tame panel of experts feels that there's a good chance I'm onto something (they're looking for low-risk, high-reward research here). But at this point, it doesn't matter what I believe, but what evidence I can amass. The experts will look at my statistics and confirm that, yes, in fact, the experimental rats did live longer. They'll look at my experimental setup to confirm that I ran the experiment fairly --- that I didn't cheat by assigning unhealthy rats to the control group, etc.

Now, you're absolutely right that it's possible for my biases to affect my interpretation of the experimental outcome. However, since I'm a competent scientist, I followed what are by now standard experimental practices to minimize the known effects of bias. For example, I picked a variable (lifespan) that is hard to fudge -- a rat is either alive or dead, and no amount of bias on my part will resurrect a dead rat. Second, I ran a "blinded" experiment, so I didn't even know which rats were in the control group vs. the experimental group. In other words, I specifically set it up so that my prior belief would have as little effect as possible on the experimental outcome. Furthermore, I used standard statistics and statistical methods so that my analyses would be as clear as possible to the reviewing panel and so that there's no possibility of cooking the data. Finally, I made all my data available to the reviewers so that, if there was any question about whether or not I was interpreting the numbers correctly, they could redo the analysis themselves and double-check me.
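The "standard statistics" step here can be as simple as a permutation test, which also makes the reviewers' re-analysis trivial to reproduce: shuffle the group labels and count how often chance alone produces a difference as large as the observed one. A minimal sketch, with invented lifespans (the variable names and numbers are illustrative, not real data):

```python
import random

def permutation_test(control, treated, n_iter=10_000, seed=0):
    """One-sided permutation test: the fraction of random relabellings
    of the animals that yield a group difference at least as large as
    the observed one.  A small p-value means the observed difference
    is unlikely to be a labelling fluke."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = control + treated  # new list; inputs are not mutated
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        t = pooled[:len(treated)]
        c = pooled[len(treated):]
        if sum(t) / len(t) - sum(c) / len(c) >= observed:
            count += 1
    return count / n_iter

# Invented lifespans in weeks, 12 rats per group
control = [98, 102, 95, 101, 99, 97, 100, 96, 103, 94, 98, 100]
treated = [105, 110, 102, 108, 104, 107, 103, 109, 106, 101, 105, 108]

print(permutation_test(control, treated))  # very small p-value
```

Because the test is just label-shuffling on the raw data, a skeptical reviewer can rerun it on the shared dataset in seconds, with no room for "cooking" hidden in the analysis.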

Alternatively, maybe I wasn't a competent researcher and failed to do the things mentioned above. In that case, the NSF experts (who expect, and usually get, their science in the form above) will notice that those things weren't done: that I ran a bad experiment and collected untrustworthy data, and they will therefore probably not believe what I claim to have shown. In other words, if I have a setup where my biases are likely to influence the data, the experts will spot it as potentially bad data. Because the data might not be trustworthy, it doesn't support the claim I wish to make, and my funding gets denied.

Alternatively, I write the result of the experiment up for a journal. Again, the journal editors will pass the writeup off to a panel of experts for evaluation, who will look at the data (and procedure) with the same critical eye that the NSF reviewers use. If my procedures and data are sound, they will accept the paper irrespective of my beliefs. If my procedures and data are unsound, they will probably reject my paper. This is one of the reasons that skeptics so often insist on "peer-reviewed" research: such research has been viewed (skeptically) by experts in the field who know many of the pitfalls that befall a careless (or sloppy) researcher. If my research is bad, then it won't pass peer review --- and if it doesn't get into the peer-reviewed journals, other scientists won't pick up on it and use it.

But if I didn't initially believe in the life-extending power of beet juice, I'd never have run the experiment in the first place. Having examined my evidence, the NSF panel probably now believes (or at least is willing to accept as a possibility) this life-extending power, thus I get my funding. And the whole reason I'm running the experiment with monkeys is that, although I still believe that it works on humans, I don't actually have evidence to support that belief, so I am being open-mindedly skeptical about the question "Well, it works on rats, but how about on primates?"


Science also demands impartiality, open-minded skepticism and objectivity in it's approach (or at least it used to).

I don't think that you know what these words mean when applied to scientific practice. In the hypothetical study above, I was impartial, open-minded, skeptical, and objective. In fact, the whole point of running the experiment in the first place was to generate objective numbers to test impartially a belief I already held. I'm being open-minded and skeptical enough to accept and to take seriously the possibility that my belief, or even my well-founded theory, might be wrong. Instead of saying "I believe this to be true," I'm saying "I believe this to be true, but let's check anyway."



If the junk scientists weren't turning out the junk science in the first place, then the press wouldn't even come into the equation.

Um, this is simply wrong. The press (in company with fools who rush to judgement on science without understanding it) turns good science into junk science through ignorance and fear.

In particular, notice that what I told the NSF is that I want to run an experiment on monkeys because I have shown that beet juice works on rats, but I believe that it works on monkeys (and will also work on humans). The NSF now believes that it works on rats (since there's the experimental evidence to back it up, evidence that I produced), but is skeptical enough to want to see that it works on monkeys before authorizing full-scale clinical trials on humans.

But at this point, Judy Journalist wanders into my lab and asks what I'm doing. And I tell her exactly what I have told you above. Which story do you think she will write?

[1] Beet Juice Extends Life, Scientist Finds

[2] Scientist Asks For Money for Monkey Diet Study

If you picked #2, you've obviously never been to journalism school. She picks #1, the one that will get attention, catch readers, get picked up by the newswire, and sell papers. It's also basically a lie. But so what? My good science has been turned into junk science, and an idiotic site like junkscience.com will eventually complain about how I'm deceiving the public and people like you who don't understand science will complain about how my scientific bias is "corrupting my objectivity."
 
drkitten said:
I'm afraid that if you get your science news from a cite like junkscience.com, you get no sympathy (or respect) from me or anyone else who actually studies the practice of science. Do you get your political news from the New York Post, too?


I don't get my science news from www.junkscience.com; I get my junk science news from there, of which there are mountains. What have my political views, or my source of them, got to do with this thread? Why are you bringing personalities into this?


No, the point that you miss is that your claim that this happens is unfounded and relies on a misapprehension of scientific practice and the role of belief. I didn't miss your point. I simply reject it.

You weren't rejecting it when we were in agreement about the EPA environmental tobacco smoke farce.

Let's take a (semi)-realistic example of how science is actually practiced, to see what's going on. Suppose, for whatever reason, I believe that daily doses of beet juice will extend lifespan. Presumably (as a scientist), I have some kind of evidence backing it up and haven't just conjured this belief out of thin air. Perhaps I visited a small village in Ruthenia where the inhabitants drink beet juice and regularly live to be 90, or perhaps I found some enzyme in beet juice that appears to have anagathic properties. Perhaps it's just because my great-grandmother swore that a glass of beet juice a day keeps the undertaker away. Whatever.

This is the first step in scientific research. If I didn't believe that there was an effect, I'd have no motivation to investigate beet juice in particular among all the other things there are (many of which taste a lot better).

Why not apply real science?

1. You make an observation, in this case about the unusual longevity of the inhabitants of a village. Assuming that you have done the necessary comparisons to confirm that the inhabitants are living extraordinarily long lives and confirmed that it is not some statistical fluke, then this requires no belief. However, the process of confirming that there is something worthy of investigation is going to require A LOT of work, in terms of confirming ages via documentary evidence, comparison with previous generations, and comparison with other societies around the world. After all, the natural life span in the USA may not match the natural life span in small Ukrainian villages. Your first mistake would be to BELIEVE that there is anything unusual going on just because the Ukrainian lifespan is longer than the USA lifespan.
2. You form a hypothesis about why they live so long. I simply don't see how your belief in the truth of that hypothesis will determine whether you test it or not. In fact, I can see how your belief is going to cloud your objectivity. Once the seed of belief is sown, then it is that much harder for it not to impinge on your results, albeit unconsciously.
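The "not a statistical fluke" check in step 1 can itself be made concrete. A minimal Monte-Carlo sketch (all figures invented for illustration): if the villagers' lifespans were ordinary draws from the national distribution, how often would a same-sized random sample average as high as theirs?

```python
import random

def longevity_surprise(village_ages, national_mean, national_sd,
                       n_sims=20_000, seed=1):
    """Fraction of simulated same-size samples from the national
    distribution whose mean is at least the observed village mean.
    A tiny fraction says the longevity is probably not a sampling
    fluke -- though it says nothing yet about WHY they live longer."""
    rng = random.Random(seed)
    n = len(village_ages)
    observed = sum(village_ages) / n
    hits = 0
    for _ in range(n_sims):
        mean = sum(rng.gauss(national_mean, national_sd) for _ in range(n)) / n
        if mean >= observed:
            hits += 1
    return hits / n_sims

# Invented ages at death for 30 villagers vs. a national mean of 78 (sd 10)
village = [88, 91, 93, 87, 90, 92, 89, 94, 86, 90] * 3
print(longevity_surprise(village, national_mean=78, national_sd=10))
```

Note that this only establishes that something is worth investigating; the comparison-with-documentary-evidence legwork described above still has to be done before the number means anything.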

But at the same time, if I simply go around telling people to drink beet juice, I'm not a scientist, but merely a fruitcake. I have to look for evidence to support my beliefs.

And it's at this point that your argument comes crashing down. Find me one University professor who will advocate that experimentation is nothing more than a search for evidence that supports your beliefs. Just find me one text book, one lecture note, one vaguely credible scientific resource that states that this should be the case. Most people involved in science would baulk at making such a statement, it is fundamentally and unarguably wrong.

I won't conduct a further detailed analysis of your "experiment", especially as you base your conclusion on an argument from authority spiced up with a bit of ad hominem. Suffice it to say it has all the hallmarks of junk science, right from the hypothesis through the methodology and the publishing process.

I don't think that you know what these words mean when applied to scientific practice. In the hypothetical study above, I was impartial, open-minded, skeptical, and objective.

I don't think I need a lecture on scientific practice from someone who uses experimentation to "look for evidence that supports their belief". Thanks anyway, I'll just trundle over to junkscience.com where they have very old fashioned ideas about such lofty principles.
 
Stumpy said:

2. You form a hypothesis about why they live so long. I simply don't see how your belief in the truth of that hypothesis will determine whether you test it or not. In fact, I can see how your belief is going to cloud your objectivity. Once the seed of belief is sown, then it is that much harder for it not to impinge on your results, albeit unconsciously.

This astonishes me. Having observed that the inhabitants of the village all drink beet juice, you don't see how my belief that the beet juice is the causally efficacious factor will influence what experiments I try to run?

But I can give you a real-life example of this stuff in action in the discovery of penicillin. Fleming observed that a mold contaminating one of his culture plates killed bacterial colonies, and concluded that there was probably some antibacterial activity going on. Based on this, what substances did he decide to test? The mold from that plate, or something random? Obviously, that's a rhetorical question; he had identified a possible antibacterial action in Penicillium mold, and proceeded to run a directed experiment along those lines to see if there was actually something in that particular mold that killed bacteria.

He obviously expected to find an antibacterial agent in the mold before he began cultivating it as part of the experiment -- he wasn't just cultivating molds at random on the off chance that they would have something interesting in them.



And it's at this point that your argument comes crashing down. Find me one University professor who will advocate that experimentation is nothing more than a search for evidence that supports your beliefs.

Find me one point where I claimed that experimentation is nothing more than a search for evidence that supports [one's] beliefs. Experimentation is a search for evidence. But there are many, many different spots to look for evidence, and human life is regrettably short. Experimenters look for evidence in places they expect to find it, and they look for evidence relevant to their beliefs. If I believe that mold kills bacteria, or that beet juice extends life, I wouldn't go out and run an experiment involving the strong anthropic principle and the local observatory.

Re-read what I actually wrote. I distinguished between making a claim based on mere belief, and making a claim based on supportive evidence. Assuming that I do not have supportive evidence, but I want to, then obviously I have to look for supportive evidence. You're assuming that I'm a sufficiently incompetent scientist that I would neither recognize nor accept contrary evidence, or that I'm sufficiently dishonest that I would stoop to manufacturing deliberately flawed evidence to support my belief --- but that's the whole point of the peer review process. If I were that incompetent or that dishonest, the peers will catch me. Alexander Fleming believed that Penicillium would kill bacteria, but (skeptically, open-mindedly) checked this belief by growing bacterial cultures in controlled environments. Whether you initially believed this antibacterial effect or not, the results of the experiment were pretty incontrovertible --- the mold destroyed the bacterial colonies. The fact that Fleming already expected this result doesn't negate it.



I won't conduct a further detailed analysis of your "experiment", especially as you base your conclusion on an argument from authority spiced up with a bit of ad hominem. Suffice it to say it has all the hallmarks of junk science, right from the hypothesis through the methodology and the publishing process.

Really? Please let me know what these hallmarks are. I've sat on NSF panels (and reviewed journal submissions), and I'd love to know what I'm doing wrong when I critically evaluate a submission to determine whether the evidence supports the claimed findings or expected impacts. I've never once seen a grant proposal where the experimenter didn't have an idea of what the expected results of the proposed research project were going to be -- in fact, it's a required part of most proposals. Check the NSF web site.

Here's a good example (one among many), from a recent biology grant proposal. Font changes added by me.

In previous work using Arabidopsis leaf hairs (trichomes) as a model for cell differentiation, the SIAMESE (SIM) gene was shown to be required to repress mitosis and establish endoreplication. The SIM gene has now been isolated by the PI's group, and the predicted gene product is a good candidate for a direct regulator of cyclin-dependent kinase activity, which plays a key regulatory role in the cell cycle.

The researcher obviously has a strong expectation of what the SIM gene (as previously isolated) does -- it's expected to play a key role in the cell cycle. However, the researcher (and the NSF) recognize that this belief is not evidence. But you don't really think that the researcher is picking SIM (out of all the other genes in the genome) for this particular study at random, do you? He's picking it specifically because it's likely to have the effect that he's testing -- or more accurately, because he believes it will have the effect he's testing.

This proposal, by the way, was funded.

You're welcome to tell me why the idea of looking for something in a way that's likely to find it is junk science. Alternatively, you're welcome to simply learn something about science.
 