
Fake science is out of control

Puppycow

Incentives matter.

‘The situation has become appalling’: fake scientific papers push research credibility to crisis point

Last year, 10,000 sham papers had to be retracted by academic journals, but experts think this is just the tip of the iceberg
“The situation has become appalling,” said Professor Dorothy Bishop of Oxford University. “The level of publishing of fraudulent papers is creating serious problems for science. In many fields it is becoming difficult to build up a cumulative approach to a subject, because we lack a solid foundation of trustworthy findings. And it’s getting worse and worse.”

The startling rise in the publication of sham science papers has its roots in China, where young doctors and scientists seeking promotion were required to have published scientific papers. Shadow organisations – known as “paper mills” – began to supply fabricated work for publication in journals there.

The practice has since spread to India, Iran, Russia, former Soviet Union states and eastern Europe, with paper mills supplying fabricated studies to more and more journals as increasing numbers of young scientists try to boost their careers by claiming false research experience. In some cases, journal editors have been bribed to accept articles, while paper mills have managed to establish their own agents as guest editors who then allow reams of falsified work to be published.
The problem is that in many countries, academics are paid according to the number of papers they have published.

“If you have growing numbers of researchers who are being strongly incentivised to publish just for the sake of publishing, while we have a growing number of journals making money from publishing the resulting articles, you have a perfect storm,” said Professor Marcus Munafo of Bristol University. “That is exactly what we have now.”

Paid according to the number, not the quality of the papers they publish. Is it any wonder that such an incentive system would result in a tsunami of garbage? Ten garbage papers are worth more than one quality paper, so why bother trying to do high-quality research?

Maybe such journals should just be ignored entirely? A large majority of it seems to be coming from one publishing group, according to the article.
 
Yeah, I have a slightly crackpot view that this problem has been going on for a while but getting worse over time, and is at least one of the causes of the decline in productivity in science (in per researcher terms, if not in absolute terms). This is as opposed to the "low hanging fruit" view.
 

The first part of your claim is almost certainly true. The second part raises the question for me of "How does one measure productivity in science?" Maybe there's an obvious answer to that, but it seems like a difficult, or at least complicated problem to me.

Clearly, it cannot be as simple as "number of papers published" or "words published" or anything so crude as that.
 
One place to look is in the productivity statistics. At least one marker of scientific progress is that the more we know, the more we can do. The scientific progress that went along with the industrial revolution saw a huge growth in productivity. The second industrial revolution (electrification, etc.) also saw similar growth. In general, scientific progress has led to technological progress, which increases productivity. But over the last 50 years the productivity growth rate has been declining (productivity has increased, but at a lower rate than in the past). Economic models (see Paul Romer, for instance) show that continued growth is pretty much dependent upon technological progress.

So, if we use productivity as a proxy for scientific progress, we see continued but slowing progress.

On the other hand, the total money spent on research, and even the percentage of GDP, and particularly the total number of researchers have increased dramatically during that period.

From what I've seen it's plausible that the rate of progress in science has actually stayed steady, rather than declining. But even if that's the case, the rate per researcher has declined significantly.
 

One could suppose that more and more money, resources and people are needed to make progress as the questions and required hardware become increasingly complex. It's no longer exceptional for three dozen authors from ten different institutions to contribute to a single paper.
 

Yep, that's the low-hanging fruit theory. Basically, we've already made all the easy discoveries, now what's left to discover requires more [people/time/resources]. Galileo could make a discovery by pointing his telescope at Jupiter, and seeing its moons. Now we need the JWST. Rutherford could do his experiments in a small lab, now we need the LHC and thousands of people and billions of dollars working together to expand the boundaries of human knowledge.

It's an entirely plausible viewpoint. I'm just not convinced it's the answer. Note, though, that if the low-hanging-fruit view is correct, the thing it's explaining is still real: the declining productivity per researcher.

Here's the somewhat famous paper that first demonstrated this issue pretty clearly:

https://web.stanford.edu/~chadj/IdeaPF.pdf

Are Ideas Getting Harder to Find?
By Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb

Long-run growth in many models is the product of two terms: the effective number of researchers and their research productivity. We present evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply. A good example is Moore’s Law. The number of researchers required today to achieve the famous doubling of computer chip density is more than 18 times larger than the number required in the early 1970s. More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find.

(the whole paper is there at the link for those interested)
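The paper's accounting can be sanity-checked with back-of-envelope arithmetic: in the Bloom et al. framework, idea output growth is research productivity times the number of researchers, so if the researcher headcount behind each Moore's Law doubling rose ~18x while the doubling rate stayed roughly constant, per-researcher productivity must have fallen ~18x. A minimal sketch (the ~44-year span is my assumption; the paper gives the exact figures):

```python
# Back-of-envelope reading of the Bloom et al. accounting:
#   idea output growth = research productivity * number of researchers
# If each Moore's-Law doubling now takes ~18x the researchers it took in
# the early 1970s, while the doubling rate stayed roughly constant,
# research productivity per researcher fell by a factor of ~18.

researcher_ratio = 18        # from the paper's abstract
years = 2015 - 1971          # rough span; an assumption for illustration

annual_decline = 1 - (1 / researcher_ratio) ** (1 / years)
print(f"Implied decline in research productivity: "
      f"{annual_decline:.1%} per year over {years} years")
```

Roughly a 6-7% per year decline in research productivity, compounding over decades, which is the scale of effect the paper reports.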
 
There are also many areas of science that don't in principle lead to any direct growth in productivity or any sort of direct economic productive value. Just growth in human knowledge. Like astronomy or cosmology or even particle physics. Do the findings of the experiments in the Large Hadron Collider lead to some sort of economically relevant increase in productivity?
 

Sure, but do you expect progress in those areas to be happening at a different rate from progress in the areas that do impact growth? The paper I linked looks at crop productivity, medical research, Moore's Law, and other areas, and finds basically the same trend in all of them. Is there some reason to think that trend exists only in places where we can measure it and not elsewhere? If the low-hanging fruit has been picked in medicine, but not in astronomy, that seems surprising.

I think the same researchers published something more recently that tried a different methodology less tied to productivity figures, but I've forgotten the details. Something about looking through Nobel prizes and what they were given for. Anyway, the upshot is that the results are the same. Sorry, I don't have the link, but if you're interested you might be able to find it.

Just on the "vibes" side of things, while the number of researchers has increased by an order of magnitude, it doesn't seem like the rate of progress in science has increased by an order of magnitude in the same time. I'd be pretty surprised if that turned out to be the case, so the results in that paper are pretty much in line with my intuitive sense of things. But again, the rate of progress staying constant means productivity per researcher is down (by an order of magnitude).

Of course, under the low-hanging-fruit view, this isn't anything about the researchers; it's just that the problems they are working on are getting harder. It's just diminishing returns. And that seems to be the most accepted view. I have some reasons for doubting it, or at least doubting that it's the whole picture, but it doesn't seem worth going into those if we don't even agree that the phenomenon it's explaining exists.

ETA: I didn't mean to hijack your thread, which is an interesting and important story. My comment was meant only as an aside, as it's related to some thoughts I've been having which aren't really fully formed yet. This does seem tangentially related, but this thread should focus on the issues raised in the OP.
 
One measure of how good a paper is, is its impact: that is, how often other researchers cite the paper in question. Ditto for journals, which have an impact factor.

More info here: https://researchguides.uic.edu/if/impact
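For the curious, the standard two-year journal impact factor is simple arithmetic: citations received this year to items the journal published in the previous two years, divided by the number of citable items it published in those years. A toy example (all numbers invented):

```python
# Two-year journal impact factor, per the Journal Citation Reports
# definition: citations received this year to items published in the
# previous two years, divided by the number of "citable items"
# (articles and reviews) published in those two years.
# All numbers below are made up for illustration.

citations_2023_to_2021_items = 320
citations_2023_to_2022_items = 280
citable_items_2021 = 150
citable_items_2022 = 170

impact_factor = (citations_2023_to_2021_items + citations_2023_to_2022_items) \
    / (citable_items_2021 + citable_items_2022)
print(f"2023 impact factor: {impact_factor:.2f}")   # 600 / 320 = 1.88
```

Because both the numerator and the papers doing the citing can be manufactured, the metric is exactly as gameable as the next post suggests.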

I would imagine that even that could be gamed to a certain extent when "paper mills" publish enough fake papers. Just have fake papers citing other fake papers and hey-presto, the papers now have "impact" too.
 

You can also get things like citing your own papers when not really relevant or reciprocal citing: I'll cite you if you cite me, which can inflate citation numbers.
 

True. But this can be detected and corrected. It is one of the problems Google has with ranking pages.
- If a limited number of people cite each other's work then they are all ignored.
- A cite from a good source is worth far more than one from an unknown source. And a paper that cites many other papers counts for less.
- Certain people have qualifications and experience. Anything they cite is worth heaps.
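That recursive idea, that a citation is worth as much as the standing of the source making it, is essentially how PageRank works. Below is a toy sketch (my own simplified implementation on invented data, not Google's or any real ranker's algorithm). Note what it shows: a paper-mill ring citing only itself does buy its members some rank, which is exactly why real systems also detect closed rings explicitly, but the ring can't catch up to papers the wider graph genuinely cites.

```python
# A sketch of recursive citation scoring (PageRank-style): a paper's
# score depends on the scores of the papers citing it. The graph below
# is toy data: five genuine papers citing each other, plus a
# paper-mill ring (F1, F2, F3) that no genuine paper ever cites.

def pagerank(cites, damping=0.85, iters=100):
    """cites maps each paper to the list of papers it cites."""
    papers = list(cites)
    n = len(papers)
    rank = {p: 1 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, targets in cites.items():
            if targets:                      # split rank among citations
                share = damping * rank[p] / len(targets)
                for t in targets:
                    new[t] += share
            else:                            # cites nothing: spread evenly
                for t in papers:
                    new[t] += damping * rank[p] / n
        rank = new
    return rank

citations = {
    # genuine papers, cited from across the graph
    "R1": [], "R2": ["R1"], "R3": ["R1", "R2"],
    "R4": ["R2", "R3"], "R5": ["R1", "R4"],
    # paper-mill ring: members cite each other (and pad with a real cite)
    "F1": ["F2", "R1"], "F2": ["F3", "R1"], "F3": ["F1", "R1"],
}

scores = pagerank(citations)
for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
```

Running it, the well-cited genuine papers land at the top, while the ring members cluster together well below them: mutual citation inflates the fakes above a never-cited paper, but not above work with real citations.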
 
There’s a really good book on this called Science Fictions by Stuart Ritchie.

He goes through a few different types of problem in publishing such as Fraud, Bias and Hype.

https://www.sciencefictions.org/p/book

Another one that I read recently is Calling BS by Carl Bergstrom and West:

https://calling********.org/

Good stuff! (although your second link seems to have fallen afoul of the autocensor).

Found this via the first link:
https://www.sciencefictions.org/p/science-fictions-links-january-2024

Which led me to this:
A prestigious cancer institute is correcting dozens of papers and retracting others after a blogger cried foul
And this:
Stop Hocusing Your Western Blots, Maybe (Derek Lowe)
And this (where the whistle was first blown)

And other interesting stuff.
So it's not just a problem in "those other countries".
 

Ah, interesting. Ritchie talks a lot about Western blots in his book.

I see Ritchie also mentions the Dan Ariely / Francesca Gino issue.

That was the subject of an episode of Freakonomics recently about academic fraud.

Link

Another group who get some exposure, in a good way (hopefully!) on that podcast episode is Data Colada.

The people involved in Data Colada have written papers showing just how easy it is to p-hack findings, and were among the people who demonstrated how big the replication crisis was.

As a result, they are not popular with some academics who have managed to fudge their figures enough to make their findings sound sexier and get around the publication bias that favours novel findings over, say, boring findings that either show a null result or merely support previous research. They are big advocates of open science.

The Open Science Framework is trying to create better accountability for research scientists by getting them to pre-register and to share data to allow other researchers to easily do their own analyses.

One of the issues that the Data Colada people found is something called "researcher degrees of freedom", which comes down to how researchers choose to analyse and display their findings, which may end up purposefully or perhaps unconsciously skewed to give more impressive results. If researchers have to pre-register how they will conduct their research, they have less opportunity to fudge the results. Presumably. Of course, there is always the possibility that determined bad actors will find ways around that too, but at least it would be clearer that fraud was committed, rather than incentives merely lining up with unconscious bias.
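The pre-registration logic can be illustrated with a toy simulation. Under a true null hypothesis, a valid test's p-value is uniformly distributed on (0, 1); give a researcher 20 unregistered analysis choices (outcome measures, subgroups, covariates) and "at least one significant result" becomes the likely outcome. The numbers here are illustrative, not taken from any paper:

```python
import random

# Under a true null, a valid test's p-value is Uniform(0, 1).
# With 20 unregistered analysis choices and the freedom to report
# whichever one comes out significant, the chance of at least one
# p < 0.05 by luck alone is 1 - 0.95**20, roughly 64%.

random.seed(1)
k, trials = 20, 100_000
false_positives = sum(
    min(random.random() for _ in range(k)) < 0.05
    for _ in range(trials)
)
print(f"Analytic:  {1 - 0.95**k:.1%}")
print(f"Simulated: {false_positives / trials:.1%}")
```

Pre-registration pins the analysis to one choice in advance, dropping that false-positive rate back toward the nominal 5%.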

Here is a paper they wrote on this.

There is a podcast episode here in which two academics discuss the paper.
 
Uri Simonsohn in the most recent blogpost on Data Colada (November 2023, but I guess he's been busy preparing for the Gino lawsuit) gives some suggestions for how to improve science:

Suggestions For a Better Future

1. Preregister or justify. It's time for journals to require authors to justify when an experiment is submitted without a pre-registration. Papers without compelling justifications could be desk rejected.

2. Journal checks. Journals should dedicate the 10 minutes per-paper that it takes to ensure that links to pre-registrations actually lead to (the public version of) a pre-registration.

3. The 20 second rule. Authors (and journals) should ask: "How long would it take a reader visiting the paper's repository to find the pre-registration for Study 2 and the data for Study 3"? If the answer is "more than 20 seconds", the paper is not ready for publication.

4. Single links. Toward that aim, I recommend that authors include in their papers a single link to a well organized repository with all files (code, data, materials, and pre-registrations). This is done by default at ResearchBox (an open research platform that is fully integrated with AsPredicted). See screenshots below. Authors posting to the OSF should spend the 20 minutes that it may take to properly organize and label their files.

Since not everyone is familiar with ResearchBox yet, below I include three screenshots of ResearchBox pages for three different published papers. They all look the same, which is why it takes very few seconds to know what's available and where.

Link

Another pre-registration site, that I had not heard of, is called AsPredicted.

The suggested use of ResearchBox also looks interesting.

Maybe as the culture of pre-registration spreads, scientific research might get better.
 
It’s been said that "publish or perish" was replaced with "publish rubbish and flourish".
Another way to boost publication numbers is not so much to publish rubbish as to publish one set of research as multiple papers. Make what was previously one paper into a quarterly serial. Each paper is then genuinely "good research", but contains only a fraction of the envelope-push.
 
There are also many areas of science that don't in principle lead to any direct growth in productivity or any sort of direct economic productive value. Just growth in human knowledge. Like astronomy or cosmology or even particle physics. Do the findings of the experiments in the Large Hadron Collider lead to some sort of economically relevant increase in productivity?

If there's something economically useful that requires the LHC to discover it, that might also require building an LHC to produce it. When scientific experiments become prohibitively expensive, that means it's probably less useful to any industry which doesn't have unlimited funds to invest (which is all of them). Don't get me wrong. I'm sure there could be exceptions to this rule. But when there are exceptions, we probably could have made the discoveries more economically, too.

So arguably, this mental model suggests that science is often outspending industry by far too much to be practically applicable, at least in the short term.
 

Related: https://marginalrevolution.com/marginalrevolution/2024/02/is-science-a-public-good.html

The question of whether science is a public good is not merely technical but has significant implications. If science is a public good, markets will likely underproduce it, making government subsidies to universities crucial for stimulating R&D and economic growth. Conversely, if ideas are embodied and thus closely tied to their application, government funding for university research might not only fail to enhance economic growth but could also hinder it. This occurs as subsidies draw scientists away from firms, where their knowledge directly contributes to product development, towards universities, where their insights risk becoming lost in the ivory tower. (Teaching scientists who then go on to careers in the private sector is much more likely to be complementary to productivity growth than funding research which pulls scientists away from the private sector.)

In a commentary on Arora et al., the Economist notes that growth in universities and government science has coincided with a slowdown in productivity.

The paper linked in that post:

https://www.nber.org/papers/w31899
We study the relationships between corporate R&D and three components of public science: knowledge, human capital, and invention. We identify the relationships through firm-specific exposure to changes in federal agency R&D budgets that are driven by the political composition of congressional appropriations subcommittees. Our results indicate that R&D by established firms, which account for more than three-quarters of business R&D, is affected by scientific knowledge produced by universities only when the latter is embodied in inventions or PhD scientists. Human capital trained by universities fosters innovation in firms. However, inventions from universities and public research institutes substitute for corporate inventions and reduce the demand for internal research by corporations, perhaps reflecting downstream competition from startups that commercialize university inventions. Moreover, abstract knowledge advances per se elicit little or no response. Our findings question the belief that public science represents a non-rival public good that feeds into corporate R&D through knowledge spillovers.
 
When this article came out, apologists for the religion of college were in full Kent Hovind spin-control mode.
 
Yeah, I have a slightly crackpot view that this problem has been going on for a while but getting worse over time, and is at least one of the causes of the decline in productivity in science (in per researcher terms, if not in absolute terms). This is as opposed to the "low hanging fruit" view.

The article blames paper mills in China and the former Soviet bloc countries, and notes that it has been increasing lately.

Watchdog groups – such as Retraction Watch – have tracked the problem and have noted retractions by journals that were forced to act on occasions when fabrications were uncovered. One study, by Nature, revealed that in 2013 there were just over 1,000 retractions. In 2022, the figure topped 4,000 before jumping to more than 10,000 last year.

Then there's this:
Of this last total, more than 8,000 retracted papers had been published in journals owned by Hindawi, a subsidiary of the publisher Wiley, figures that have now forced the company to act. “We will be sunsetting the Hindawi brand and have begun to fully integrate the 200-plus Hindawi journals into Wiley’s portfolio,” a Wiley spokesperson told the Observer.
One publisher seems to be responsible for the vast majority.
 
https://en.m.wikipedia.org/wiki/Gravity_Research_Foundation

There has always been crackpot science.
Babson wanted to abolish gravity because he blamed it for deaths in his family.

I put fake science in a different category from crackpot science.

A crackpot may be an honest crackpot. And they usually didn't make it past peer review. I don't know if that's still true, but these days a lot of garbage and fakery seems to be slipping past peer review.

In an essay titled Gravity – Our Enemy Number One, Babson indicated that his wish to overcome gravity dated from the childhood drowning of his sister. "She was unable to fight gravity, which came up and seized her like a dragon and brought her to the bottom", he wrote.[9]

Gravity can certainly kill, but without it life as we know it wouldn't be possible.
 

Look, if we didn't have water in 1889, Hitler would never have been born.

Are you one of those filthy Hydrophiles?
 
It is a real problem. In my own endeavors, there is a claim out there that a powdered metal application technology is capable of making extremely advanced alloys on the fly. Sure, there's some melting, but nothing that would lead to much more than jumbled intermetallics. We'll see, but there are things in materials science papers that are sounding fishy. Paper in question is from China.
 
I put fake science in a different category from crackpot science.

I was thinking about the difference between fake science and crackpottery, and here's my own thoughts on the difference:

The 'crackpot' is typically a contrarian. He (or she) has a 'big idea' that goes against the scientific mainstream. Einstein was wrong, for example, and he has a better theory. The crackpot thinks that "the establishment" is suppressing his ideas and sees himself as some sort of modern Galileo, challenging the orthodoxy and dogma of the establishment.

The fake scientist has a different motivation. He (or she) wants to be accepted by and rewarded by the scientific mainstream. In some cases, if they are successful, they may even achieve this. There have been several recent cases involving highly successful academic careerists. These people are not contrarians. They are often part of the mainstream, and their goal is to publish papers that appear to be legitimate science, but they cut corners on the science or data that they base these papers on.
 
Fake science vs crackpot (pseudoscience): there is a difference, but they are kind of on a spectrum.

The crackpot truly believes the thing; the fake is just making stuff up. Not all crackpots are just making stuff up; mostly they just don't understand what they are missing.

Even if the crackpot believes it, he may start faking evidence to prove it, and even if the fake starts by just making stuff up, he may eventually start believing his own lies.

The key difference, in my opinion, is whether the purveyor of the fake/pseudoscience started with evidence that they knew was fake. It's the same for a lot of woo. Some faith healers and psychics probably think their gifts are real and are as fooled as their clients. Most are just conmen.
 
A lot of Fake Science, just like Fake News, is based on the assumption that the presented "facts" are actually factual, but the author
- can't find them just this moment, or
- we haven't got the right technology yet, or
- they are being suppressed by T-H-E-M, or
- they could easily be discovered by a Special Counsel with unlimited Subpoena Power.

That is because Fake News and Fake Science "feel true" to their proponents.
 

I'd argue (not too vociferously) that that is the difference between fake science and pseudoscience. What you describe is more pseudoscience than fake.

It's the difference between misinformation and disinformation: it depends on whether the purveyor knows it's fake, or is just faking evidence for something they believe to be true.
 
Who fakes cancer research? Apparently, lots of people.

Dana-Farber was rocked this January by a blog post by Sholto David, a molecular biologist and internet data sleuth, in which he presented evidence of widespread data manipulation in cancer research published by leading researchers including the institute’s CEO and COO. David reportedly contacted the institute with concerns about 57 papers, 38 of which were ones for which the institute had “primary responsibility for the potential data errors.” The institute has requested retractions for 6 of them and initiated corrections for 31.
Being a data sleuth is deeply unrewarding, and even risky. David is currently unemployed and doing the work of flagging data manipulation in his free time between gigs, as he told the Guardian.

Many data sleuths have been threatened with lawsuits for exposing data fraud. “A lot of important science gets done not by big institutions questioning things but by independent people like this,” defamation lawyer Ken White told me last summer. The problem is that there’s no institutional process to review papers unless someone else brings problems to light — and most scientists don’t want to endanger their own careers to do that thankless, frustrating work.
 
https://www.nature.com/articles/d41586-024-01672-7

Research-integrity watchers are concerned about the growing ways in which scientists can fake or manipulate the citation counts of their studies. In recent months, increasingly bold practices have surfaced. One approach was revealed through a sting operation in which a group of researchers bought 50 citations to pad the Google Scholar profile of a fake scientist they had created.

The scientists bought the citations for US$300 from a firm that seems to sell bogus citations in bulk. This confirms the existence of a black market for faked references that research-integrity sleuths have long speculated about, says the team.
 
This is a big one:

Academic fraud endemic in published research, from photoshopped blots to AI slop

Eliezer Masliah, a prominent neuroscientist and top NIH official steering billions in federal grant money, has "fallen under suspicion" of extensive academic fraud, writes Charles Piller at Science.org. The understated measure of that quote collapses quickly into a basement stuffed with comically overwhelming evidence.

A Science investigation has now found that scores of his lab studies at UCSD and NIA are riddled with apparently falsified Western blots—images used to show the presence of proteins—and micrographs of brain tissue. Numerous images seem to have been inappropriately reused within and across papers, sometimes published years apart in different journals, describing divergent experimental conditions.

Fraud, so much fraud, writes Derek Lowe.

It seems like a strange thing to take someone with a long and respected career and subject them to what would essentially be a Western blot and photomicrograph audit before offering them a big position. But if the NIH had done that in 2016, they wouldn't be in the position they're in now, would they? How many people do we need to check? How many figures do we have to scrutinize? What a mess.
 