Merged: Naive empiricism vs skeptical empiricism

!Kaggen

Who wrote this article: a scientist, journalist, historian, philosopher? In the past, I have seen some patent nonsense written by uninformed journalists in The Economist.
 
The Economist's authors are usually anonymous or pseudonymous unfortunately.

This is an op/ed piece that says little that is new. Most published papers don't get replicated. On the other hand: most published papers don't influence policy either. It's sort of a nonissue in that sense - "Publish or Perish" is not a new problem.

This article is probably responding to a recent news story about the quality of articles in subscription-based publications ([I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals]), which have increased exponentially over the past 20 years. Only a few subscription-based or open-access publications have good peer review.

ETA: not sure what this has to do with 'empiricism' ?
 
The fact that science must be reproducible does not mean or imply that all findings must be reproduced. This is trivially true to anyone in the field--simply reproducing someone else's work is about as likely to be published as it is to be used as toilet paper. The only way it can be published in our current system is if it demonstrates that some finding is NOT reproducible.

Most findings aren't reproduced, no. However, that does not mean that they are not tested. Take geologic maps. I test those constantly--I call such tests "preconstruction surveys". A chemist that finds a new chemical will have their findings reproduced every time someone fabricates that chemical in order to use it. Geologists have their work reproduced every time someone mines ore.

And often a single example is sufficient to demonstrate an idea. Strong Inference establishes exactly that sort of test (http://en.wikipedia.org/wiki/Strong_inference). Much of theoretical physics is akin to this--finding a single example of, say, a gravitational lens would be sufficient to demonstrate portions of general relativity. If we've proven the idea, what value is there in reproducing it?

The idea that most science needs to be reproduced to be valid also flatly contradicts how science is done. Most science is conducted under a particular paradigm, and to support that paradigm. Only a small number of scientific concepts are truly novel, meaning paradigm-shifting. In essence, most work by most scientists is reproduction of some aspect of previous work, though typically in a novel context.
 
Economist: science based on trust?!#$%

I was reading The Economist online, and saw this article as the number one most popular in a sidebar/top articles area. The opinion piece, titled, "How Science Goes Wrong," with no author listed, begins with this sentence:

A SIMPLE idea underpins science: “trust, but verify”.

What? Who says that? I've never heard the word "trust" used to describe the scientific method. After Googling that phrase, I found it was a favorite of that champion of science, Ronald Reagan. Also saw that that very same phrase was the title of another Economist article from the day before (neither here nor there, but what the hell).
 
1. It's the Economist.
2. Duplicate thread (Naive empiricism vs skeptical empiricism). Naughty jimtron.


:D
 
The problem with that article is that the author does not appear to be able to tell the difference between science being broken and science working exactly as intended.
A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers.
Yes, the results of much published work cannot be replicated. That's why work is published in the first place. The author appears to have completely forgotten about the "verify" part of "trust, but verify". We don't just blindly assume that all published work is correct; we simply take it as a starting point for attempts at replication and further study.

If anything, the problem is the exact opposite of what is claimed here. Far too many negative studies are not published at all. There should actually be far more published papers that are either not replicable themselves, or which show previous studies are not replicable. But even aside from funding issues, "It didn't work and we didn't see anything" is rarely seen as interesting material for publication.
 
Who wrote this article: a scientist, journalist, historian, philosopher? In the past, I have seen some patent nonsense written by uninformed journalists in The Economist.

Is "Idiot" taken? I think the article was written by one of those, probably a professional.
 
The Economist knows its audience - people who took PPE because they found science hard, low-status (in their own social milieu) and unrewarding (again, in terms they're familiar with).

Have they turned the same critical eye on their own "dismal science" recently? It's one in which the arrant failure of theories is no block to their continued acceptance (however often the failure is reproduced) as long as they're socially acceptable.
 
jimtron said:
What? Who says that? I've never heard the word "trust" used to describe the scientific method.
Where did you learn about science? I first heard that as an undergrad, during a meeting with the Paleo Club. It became a cliche in grad school. I went into industry after that, and trust is often a bit strained in the private sector, but it's still tossed around a bit.

Cuddles said:
The problem with that article is that the author does not appear to tell the difference between science being broken and science working exactly as intended.
I had to swallow fast to avoid spraying coffee over my laptop. :D That's what I describe as a critical failure right there.

We don't just blindly assume that all published work is correct, we simply take it as a starting point for attempts at replication and further study.
I've seen the view that published=infallible on this forum as well. There's this weird view in the public that anything that passes peer review is automatically canon and cannot be rejected by anyone, ever. Any practicing scientist, however, views rejection of published arguments as commonplace if not obligatory. "Published" means that some people thought the idea worth including in the conversation, and that's it. It says NOTHING about the validity of the research. And peer review is only the first and most minor hurdle to overcome. Once it gets past peer review the rest of us get to analyze it. And believe me, the rest of us consider ripping papers apart to be fun.

If anything, the problem is the exact opposite of what is claimed here. Far too many negative studies are not published at all.
Ain't that the truth. Most journals in geology reject negative studies out of hand. This leads to a lot of people re-inventing the wheel. I was just talking yesterday about an experiment a friend of mine wanted to run while she was in grad school (I don't recall what it was; I was there to talk about fish otoliths). The professor just happened to know a researcher who had run the experiment and demonstrated that it was an utter failure. Without that random connection, due to the lack of venue for publishing those types of research, my friend would have wasted weeks if not months.

If I ever became rich, the second thing I'd do would be to start a journal specifically for negative studies. We really need a place to publish all the ineffective techniques that scientists discover.
 
I've seen the view that published=infallible on this forum as well. There's this weird view in the public that anything that passes peer review is automatically canon and cannot be rejected by anyone, ever.
so whose fault is that? the public's or ours?

and who is responsible for the explosion in "number of papers" per author? (or number of citations, or H index). attempts to shortcut the evaluation of a scientist's work end up gumming up the works, to the detriment of the advance of science.
"Published" means that some people thought the idea worth including in the conversation, and that's it. It says NOTHING about the validity of the research.

i do not think peer review has slipped quite that far, yet. look at the rapid growth of "open source"/archived "non reviewed" literature...

but it is also worse than you suggest: when you constructively comment on and reject something for Nature, then get the same piece with the same typos to review for Science, and later see it appear in another journal (typos corrected, scientific flaws unchanged), you start putting less effort into reviewing. the value of the published literature drops and the entire process slides... one might even tend to start taking students and postdocs only from friends. less mixing of ideas. harder for the brightest young ones...

i'd argue we need many, many fewer papers. it is no longer merely that the noise level is too high; the sheer volume is unmanageable. what would one suggest to a young scientist: quality or volume?
 
The fact that science must be reproducible does not mean or imply that all findings must be reproduced. This is trivially true to anyone in the field--simply reproducing someone else's work is about as likely to be published as it is to be used as toilet paper. The only way it can be published in our current system is if it demonstrates that some finding is NOT reproducible.

That's an exaggeration, but true to a large degree. It is also a major contributor to the high false-positive rate among published studies. Consider the extreme: a hypothetical field that only tests false hypotheses, and whose journals only publish results that are statistically significant at the .05 level. Then 95% of the studies conducted in the field will go unpublished, and 100% of the published studies will be false. I leave it to the reader to judge how close various "scientific" fields are to this "hypothetical" example.
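The arithmetic of that extreme case is easy to check with a quick simulation (a sketch in Python, illustrative only): if every tested hypothesis is false, p-values are uniform on [0, 1], so about 5% of studies clear the .05 bar and get published, and every one of them is a false positive by construction.

```python
import random

random.seed(0)

n_studies = 100_000
alpha = 0.05

published = 0
for _ in range(n_studies):
    # Every hypothesis in this field is false, so under the null
    # each study's p-value is uniformly distributed on [0, 1].
    p_value = random.random()
    if p_value < alpha:
        published += 1  # journals only publish "significant" results

# Roughly 5% of studies get published...
print(f"fraction of studies published: {published / n_studies:.3f}")
# ...and since no tested hypothesis was true, 100% of the
# published literature in this field is false.
```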

The idea that most science needs to be reproduced to be valid also flatly contradicts how science is done.

No. The fact that most science needs to be reproduced to be considered valid, but replication isn't usually attempted or published, shows that the process is broken. When most published research in a field is false, then the system has failed.

"Published" means that some people thought the idea worth including in the conversation, and that's it. It says NOTHING about the validity of the research.

The only way that could be true is if the probability that a paper is rejected does not depend on the validity of the paper, which seems unlikely to be the case, and I doubt you could demonstrate that it is true.
 
jt512 said:
No. The fact that most science needs to be reproduced to be considered valid,
This view is not held by any scientist I have ever worked with. ReproducIBLE, yes. ReproducED, no.

but replication isn't usually attempted or published, shows that the process is broken.
No. It shows that the process works exactly as it was intended. Reproduction IS NOT limited to merely redoing exactly the same experiment. Every study that uses both radiometric dating and biostratigraphy reproduces the studies that demonstrated the Principle of Faunal Succession, though not a single one of them says that's what they intend. We take what others have done and build upon it; that's what science does. What you're describing is the cartoon version of science used in grade school; the real practice is much different but not flawed.

The only way that could be true is if the probability that a paper is rejected does not depend on the validity of the paper, which seems unlikely to be the case, and I doubt you could demonstrate that it is true.
This tells me that you haven't given five minutes' thought to this issue. But allow me to illustrate your flaw.

In 1969 Dibblee examined the Horned Toad Hills. He named and described the Horned Toad Formation, the remains of a Miocene/Pliocene tectonically controlled lake. He recognized three units--Members 1, 2, and 3, creatively enough. These represented the development and maturity of that lake.

In 2001, Paleontologia Electronica published a paper by May et al. which added Members 4 and 5, comprising sediments labeled as "Older Alluvium" by Dibblee. Those subunits were re-interpreted, based on 30 years of advancement in science, to represent the filling of that lake by sediment derived from the surrounding mountains. These sediments were still interpreted as alluvial, but much, much older alluvium than the overlying Pleistocene and Holocene alluvium in the Antelope Valley.

(Dibblee reference)
(May et al., 2001 reference)

Dibblee WAS WRONG. If you knew California geology, you'd understand how major a statement that was. At any rate, your argument demands that we dismiss Dibblee's work--and more, that Dibblee's work was unfit for publication at the time in which it was published. The facts haven't changed; the rock is what it is. If we must determine whether a paper is publishable by examining its accuracy, we must conclude that Dibblee's map in the above reference is unpublishable due to his rather large error (along with two or three other maps, I might add!).

In reality, this is not the case. Dibblee published his paper, based on the best knowledge he had at the time (and Dibblee's knowledge was, to my knowledge, the best there was at the time). His research conformed with all requirements for validity, and the results were rigorous and well-supported. I've read the original report; given the data he had, I'd have drawn the same conclusion. After examining the rocks, I can attest to the difficulty in differentiating between Member 5 and the Older Alluvium. The publication of Dibblee's work entered the hypothesis--and a geologic map IS a hypothesis--into the realm of scientific discourse. Later researchers re-examined the information based on shifted paradigms and new data, and found it to be wrong. To practicing scientists this is par for the course; we expect this to happen, with far more frequency than people like you would assume. In fact, we ourselves intend to do it; that's how we make a name for ourselves (well, one way, anyway). The report raised some questions; answering them provided us with a great deal more information. This in turn raised other questions, which researchers are working on currently. That's how science is done.

The data must be accurate to the abilities of the researcher, yes. If the data or conclusions in the report are completely off-base peer review should reject the paper. This is obvious to any rational person. However, the abilities of the researcher and the state of the field itself must be considered, and you haven't done so adequately. The end result is that even articles that have passed peer review are open to examination, and may be found wanting upon further examination.

lenny said:
so who's fault is that? the public's or our's?
The fault lies with the public. I know of no journal that claims to provide The Truth. They only claim to provide the most recent data.

and who is responsible for the explosion in "number of papers" per author?
Anyone who uses the number of publications as a criterion for evaluating any scientist, rather than the quality thereof.

i do not think peer review has slipped quite that far, yet.
You misunderstand me. The fact that peer review doesn't evaluate the validity of the claims made is a good thing. Remember, all science is conducted within a certain paradigm. That paradigm may shift in the future, casting all previous research into question. This is a VERY good thing! The fact that we are willing to question even our most fundamental assumptions is the very thing that prevents science from descending into dogmatism. Peer review can, in a world bereft of infallibility, only ever prove that a paper is good enough to discuss. Determining if the paper is true falls upon those of us who read and evaluate the paper. The only other option is that taken by many religions: dogmatism handed down from on high.

but it also worse than you suggest: when you constructively comment on and reject something for nature, only get the same piece with the same typo's to review for science, and then later see it appear in another journal (typos corrected, scientific flaws unchanged), then you start putting less effort into reviewing.
I can't comment on that. My papers have been reviewed, but I've yet to be asked to be a reviewer. The fact that I'm in the private sector has, ironically enough, contributed to this--the pool of potential reviewers seems to be academics only, which is a serious problem (many in my field "dodge dozers" as mitigation monitoring is called).

i'd argue we need many many fewer papers. it is no longer merely that the noise level too high, the sheer volume is unmanageable. what would one suggest to a young scientist: quality or volume?
You'll get no argument from me. I've got ideas for research, but several require decades of work. Imagine how likely it will be to find funding. "Publish or Perish" emphasizes ephemeral papers, rather than deep analysis.
 
This view is not held by any scientist I have ever worked with. ReproducIBLE, yes. ReproducED, no.

This gets back to the topic of the other thread. Few scientists understand the issue. They do an experiment, get a statistically significant result, and think they've proved their hypothesis. Not even close. There is no clear-cut relationship between a p-value and the probability that a research hypothesis is true. Hence you have whole fields, such as medicine, in which it is likely that most published research findings are irreproducible—that is, false. In a field with even less theoretical guidance, such as experimental psychology, I would not be at all surprised if 90% of published research is false. Reproducibility is at the heart of frequentist statistics. The fact that you don't realize this, or know any scientists who do, again, just shows how wrong you were in the other thread about the importance of understanding statistics.
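The gap between a p-value and the probability that a research hypothesis is true can be made concrete with Bayes' rule. A minimal sketch, using illustrative priors and power figures that are my assumptions, not numbers from the post:

```python
def prob_true_given_significant(prior, power, alpha=0.05):
    """P(hypothesis true | significant result), via Bayes' rule.

    prior: fraction of tested hypotheses that are actually true
    power: P(significant result | hypothesis true)
    alpha: P(significant result | hypothesis false)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# A well-powered field where 1 in 10 tested hypotheses is true:
# about a third of "significant" findings are still false.
print(round(prob_true_given_significant(0.10, 0.80), 3))  # 0.64

# A poorly powered field with the same prior: most published
# findings are false, despite every one having p < .05.
print(round(prob_true_given_significant(0.10, 0.20), 3))  # 0.308
```

The point of the sketch is that the same significance threshold yields wildly different false-finding rates depending on the field's prior and power, which is exactly why a p-value alone says little about whether a finding will replicate.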

No. It shows that the process works exactly as it was intended. Reproduction IS NOT limited to merely redoing exactly the same experiment.

Your first sentence above is false on its face. If the process worked, then few published studies would be false. However, many—most, in some fields—are false. Hence, the process is broken.

Your second sentence above is true. No one said replications have to be exact.

You misunderstand me. The fact that peer review doesn't evaluate the validity of the claims made is a good thing.

That is utter nonsense. Of course the peer review process evaluates the validity of the claims made in the article. That's the main point of peer review. :roll:
 
