Haruko Obokata published two papers in January 2014 that described how regular blood cells could be turned into pluripotent stem cells.
At the time, this was a coup – it dramatically simplified a previously complicated process and opened up new vistas of medical and biological research, while neatly sidestepping the bioethical considerations of using human embryos to harvest stem cells.
Moreover, the process was straightforward: it involved applying a weak acid solution or mechanical pressure – oddly similar to how you'd clean a rust stain off a knife.
Within a few days, scientists noticed that some of the images in the papers were irregular, and a broader skepticism set in. Could it really be that simple?
As the experiments were simple and the biologists were curious, attempts to replicate the papers' findings began immediately. They failed. By February, Obokata's institute had launched an investigation. By March, some of the papers' co-authors were disavowing the methods. By July, the papers were retracted.
While the papers were clearly unreliable, it was not clear where the problem lay. Had the authors mislabelled a sample? Had they discovered a method that worked once but was inherently unreliable?
Had they simply made up the data? It took years longer, but the scientific community got an approximate answer when further related papers by Obokata were also retracted for image manipulation, data irregularities, and other problems.
The whole episode was a sterling example of science correcting itself. An important result was published; it was doubted, tested, investigated, and found wanting… and then it was retracted.
This is how we might hope the process of organized skepticism would always work. But it doesn't.
In the vast majority of scientific work, it is incredibly rare for other scientists to even notice irregularities in the first place, let alone marshal the global forces of empiricism to do something about them. The underlying assumption within academic peer review is that fraud is sufficiently rare or unimportant as to be unworthy of a dedicated detection mechanism.
Most scientists assume they will never come across a single case of fraud in their careers, and so even the thought of checking the calculations in papers they review, re-running analyses, or verifying that experimental protocols were properly followed is deemed unnecessary.
Worse, the accompanying raw data and analytical code often needed to forensically analyze a paper are not routinely published, and performing this kind of stringent review is often considered to be a hostile act, the kind of drudge work reserved only for the deeply motivated or the congenitally disrespectful.
Everyone is busy with their own work, so what kind of grinch would go to such extremes to invalidate someone else's?
Which brings us neatly to ivermectin, an anti-parasitic drug trialed as a treatment for COVID-19 after lab-bench studies early in 2020 showed it was potentially beneficial.
It rose sharply in popularity after a published-then-withdrawn analysis by the Surgisphere group showed a huge reduction in death rates for people who took it, triggering a massive wave of use of the drug across the globe.
More recently, the evidence for ivermectin's efficacy relied very substantially on a single piece of research, which was preprinted (that is, published without peer review) in November 2020.
This study, drawn from a large cohort of patients and reporting a strong treatment effect, was popular: read over 100,000 times, cited by dozens of academic papers, and included in at least two meta-analytic models that showed ivermectin to be, as the authors claimed, a "wonder drug" for COVID-19.
It is no exaggeration to say that this one paper caused thousands if not millions of people to get ivermectin to treat and/or prevent COVID-19.
A few days ago, the study was retracted amid accusations of fraud and plagiarism. A master's student who had been assigned to read the paper as part of his degree noticed that the entire introduction appeared to have been copied from earlier scientific papers, and further analysis revealed that the study's datasheet, posted online by the authors, contained obvious irregularities.
It is hard to overstate how monumental a failing this is for the scientific community. We proud guardians of knowledge accepted at face value a piece of research so full of holes that it took a master's student only a few hours to dismantle it entirely.
The seriousness accorded to the results was in direct contrast to the quality of the study. The authors reported incorrect statistical tests at multiple points, standard deviations that were extremely implausible, and a truly eye-watering degree of positive efficacy – the last time the medical community found a '90 percent benefit' for a drug on a disease, it was the use of antiretroviral medication to treat people dying of AIDS.
Yet no one noticed. For the better part of a year, serious, respected researchers included this study in their reviews, medical doctors used it as evidence to treat their patients, and governments acknowledged its conclusions in public health policy.
No one spent the five minutes required to download the data file that the authors had posted online and notice that it recorded numerous deaths occurring before the study had even begun. No one copy-and-pasted phrases from the introduction into Google, which is all it would have taken to see just how much of it was identical to already-published papers.
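To give a sense of how little effort that first check requires, here is a minimal sketch of the idea in Python. The filename, column name, and study start date below are purely illustrative assumptions, not details of the actual dataset; the real spreadsheet's layout would differ.

```python
# Minimal sketch: flag patients recorded as dying before the study began.
# All names and dates here are hypothetical placeholders.
import pandas as pd

STUDY_START = pd.Timestamp("2020-06-01")  # placeholder for the trial's start date

# Load the spreadsheet the authors shared (hypothetical filename).
df = pd.read_excel("trial_data.xlsx")

# Parse the (assumed) death-date column, tolerating blanks and malformed entries.
deaths = pd.to_datetime(df["date_of_death"], errors="coerce")

# Any death dated before the study began is an obvious irregularity.
impossible = df[deaths < STUDY_START]
print(f"{len(impossible)} patients recorded as dying before the study began")
```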
This inattention and inaction perpetuated the saga – when we remain studiously uninterested in the problem, we don't know how much scientific fraud there is or where it can readily be located and identified, and consequently we make no robust plans to address or ameliorate its effects.
A recent editorial in the British Medical Journal argued that it might be time to change our basic perspective on health research, and assume that health research is fraudulent until proven otherwise.
That is to say, not to assume that all researchers are dishonest, but to approach new findings in health research from a categorically different baseline of skepticism rather than blind trust.
This might sound extreme, but if the alternative is accepting that occasionally millions of people will receive medications based on unvetted research that is later withdrawn entirely, it may actually be a very small price to pay.
James Heathers is the CSO of Cipher Skin and a scientific integrity researcher.
Gideon Meyerowitz-Katz is an epidemiologist working in chronic disease in Sydney, Australia. He writes a regular health blog covering science communication, public health, and what that new study you've read about actually means.
Opinions expressed in this article don't necessarily reflect the views of ScienceAlert editorial staff.