Published studies are the backbone of medical understanding, both for healthcare professionals and the general public. And medical journals are the gatekeepers of that research, ensuring accuracy and integrity.
As the coronavirus continues to wreak havoc around the globe, medical studies offering any clarity or guidance are read with great interest. But faulty studies have led to confusion, conspiracy theories, and even death.
How medical journals vet studies
A good study is constructed and tested using the scientific method: A problem is found, data is gathered, and a hypothesis is put forward.
Then that hypothesis is tested under strict controls and the results, whether they confirm the hypothesis or not, are recorded and a conclusion is developed.
The barrier to entry for reputable medical journals is pretty high: The Journal of the American Medical Association (JAMA) accepts just 4 percent of the more than 5,300 research papers it receives annually.
"Two-thirds of papers are rejected without external peer-review and one-third of the papers go out for external peer-review," Howard Bauchner, JAMA's editor-in-chief, told Business Insider.
Peer-reviewed studies are analysed by experts in the field, who look at the validity of the results, as well as the originality and importance of the research.
Well-established journals like JAMA, The New England Journal of Medicine (NEJM), and The Lancet are peer-reviewed. Bauchner said many medical professionals want to review for JAMA, but their expertise must match the study closely.
After a paper has been peer-reviewed, it will be edited, fact-checked, and reviewed by an associate editor, deputy editor, and senior editor.
According to the JAMA website, the average time for an article to go from being submitted to being accepted is 30 days. It's then another 25 days from acceptance to online publication.
The hallmarks of a reputable medical journal
Medical journals should be judged by "the quality of peer-review that they conduct, the quality of the manuscript editing that they do, and the fact-checking with respect to numbers," Bauchner said.
"So that the numbers in the manuscript are the same as the numbers in the abstract."
Language describing results should be accurate, he added, including "whether or not results should represent causal or non-causal relationships."
A reliable medical journal will be indexed in the National Library of Medicine, according to physician William Li, whose research has been published in the NEJM and The Lancet.
It should also have an editorial board composed of experts in the field the journal covers, Li said. The ability to distribute a study both in print and electronically is also crucial.
A large and active social media presence is "an increasingly important hallmark for a high-quality journal," Bauchner added. JAMA tweets multiple times a day to its 367,000 Twitter followers.
It doesn't always matter who funds a study
There's often a lot of scrutiny around funding sources for medical studies, especially if they come from a pharmaceutical company or other industry groups.
But, Bauchner said, if the study is based on original research and uses randomised clinical trials, it's fairly easy for editors to verify the results.
"When studies are not clinical trials, when they're observational in nature, then readers need to be more cognisant of the funding source," Baucher said. "As do peer-reviewers and editors. That funding source may be relevant."
That's true of industry funding, as well as academic sources and think tanks, he added.
JAMA documents in detail any potential conflict of interest in funding, Bauchner said.
How bad studies get published
According to Nobel-winning pharmacologist Louis Ignarro, problems occur when "submitted studies are assigned by the scientific or medical journal to potential reviewers who are not true experts in the field."
"This makes for bad review and possible publication of a bad paper," Ignarro told Business Insider.
A bad study, he said, may have inadequate controls, unqualified subjects, or unblinded testing – meaning participants and researchers are aware of the specific test conditions.
"Hastily written conclusions not based entirely on the scientific and medical results" can also result in faulty findings, Ignarro said, as can poor statistical evaluation of the data.
The problem with preprint papers
A preprint is a full draft of a research paper that's shared publicly before being peer-reviewed or published. Preprints often appear on online platforms like arXiv (pronounced "archive"), medRxiv, and bioRxiv.
The practice allows researchers to share findings with colleagues and establish priority for their claims. But it can also lead to unverified information landing in the hands of journalists, laymen, and conspiracy theorists.
At the end of January, a preprint on bioRxiv pointed to an "uncanny similarity" between proteins in HIV and the novel coronavirus, The New York Times reported.
Some critics felt the study was feeding conspiracy theories about the coronavirus being engineered in a lab, STAT reported. Others complained of rushed methodology and mistaking coincidence for a valid connection.
"Had this manuscript undergone legitimate peer review, these flaws would have led to a swift rejection and it wouldn't be contributing to the conspiracy theories and fear surrounding this outbreak," Michael Shiloh, an infectious disease expert at the University of Texas Southwestern Medical Centre tweeted, according to STAT.
In a follow-up tweet, Shiloh worried about "the potential for abuse when content is rapidly disseminated and taken as 'fact' when it clearly isn't."
Reputable journals do host preprints: A disclaimer on The Lancet's website explicitly states that "preprints are not peer-reviewed and should not be used for clinical decision making or reporting of research to a lay audience" without making clear that the work is preliminary.
A bad study can have serious consequences
The pandemic has resulted in "a flood of research reports," Li said, "some that are important and well-designed and others that are poorly designed with sketchy conclusions."
With all the global anxiety, an unreliable coronavirus study could have catastrophic consequences.
In March, an optimistic French study in the International Journal of Antimicrobial Agents suggested chloroquine could help against COVID-19. The test group was very small, and six patients dropped out early on.
But the results were touted by President Trump and others, and several people poisoned themselves by self-medicating with chloroquine phosphate, a chemical used to clean aquariums.
The Centre for Evidence-Based Medicine has reported that studies of hydroxychloroquine and chloroquine's effectiveness showed immense discrepancies between the protocols they laid out and the methodologies they actually followed.
In April, the FDA stated that hydroxychloroquine has "not been shown to be safe and effective for treating or preventing COVID-19." A month later, a study in The Lancet linked the drug to higher mortality rates in COVID-19 patients.
Reputable journals are methodical in their approach specifically to avoid publishing poor research. NEJM receives between 100 and 150 coronavirus-related submissions a day, according to communications director Jennifer Zeis.
Since its coverage began, though, the journal has published only about 130 articles on the virus.
"We have an experienced team of manuscript editors, illustrators, proofreaders, and production staff, who work to ensure that every article meets exacting standards," Zeis said.
This article was originally published by Business Insider.