Peter Coy
The New York Times

How to Disseminate Science Quickly

Peer review is both the greatest strength and the greatest weakness of the scientific research system. It filters out bad work and makes good work better. But it can also slow the dissemination of new ideas, which is a big problem when it comes to tackling fast-changing challenges like the Covid-19 pandemic. And, of course, peer review can fail. Retraction Watch, a website, maintains a list of more than 100 Covid-19 research papers that were peer-reviewed, published and then retracted.

On the whole, the scientific community has done a good job of managing the peer review challenge. Much research today is first released in the form of preprints: articles that are made widely available even though they haven’t been peer-reviewed. The major scholarly journals, which used to insist on exclusivity, have agreed not to treat a preprint as prior publication, which means they will still consider the work for publication. That way the work gets out quickly and still has a chance to appear in a journal, which entails closer scrutiny (through peer review) and earns the authors a measure of academic repute.

But dissemination of scientific knowledge could be done better. “The limitation is that any idiot can publish any idiotic stuff on a platform that doesn’t have pre-publication peer review,” says Robert West, an emeritus professor of behavioral science and health at University College London and a past editor in chief of Addiction, a scholarly journal. The trick is to develop a system that keeps the speed while reducing the risk that bogus ideas such as treating Covid-19 with ivermectin or hydroxychloroquine will slip through.

In an early stab at collecting and publicizing early-stage work, the National Institutes of Health created a registry for preprints of biology research, but discontinued it in 1967 after the scholarly journals in the field refused (at the time) to consider submissions that had previously been posted as preprints, according to a 2020 article in JAMA.

The preprint system got its real start in physics, math and computer science: Paul Ginsparg, a theoretical physicist, created arXiv, pronounced “archive,” while working at Los Alamos National Laboratory in 1991. (He thought of replacing the ch with an X “while driving up to Taos for a holiday dinner,” he told an interviewer in 2002.) ArXiv is now housed at Cornell, where Ginsparg works in quantum field theory.

The success of arXiv in disseminating research rapidly and cheaply led to the creation of bioRxiv in 2013 and medRxiv in 2019. (The Rx embedded in their names is a nice touch.) In medicine, outbreaks of Ebola and Zika between 2013 and 2016 increased interest in preprints. In 2016 major journals and public health organizations issued a manifesto on the importance of sharing data in public health emergencies.

But it was the Covid-19 pandemic that brought preprints into the mainstream of medicine, says Michael E. Mullins, an associate professor of emergency medicine at the Washington University School of Medicine in St. Louis. Dr. Mullins is the editor of an open-access journal, Toxicology Communications, in which researchers pay for their articles to be published, and a review editor of a traditional journal, Clinical Toxicology, in which costs of publication are borne by subscribers.

With preprints, there’s no quality control apart from some rudimentary due diligence. MedRxiv, for example, says that “all manuscripts undergo a basic screening process for offensive and/or nonscientific content and for material that might pose a health risk and are checked for plagiarism,” but otherwise they aren’t vetted.

On the plus side, most scientists care about their reputations, so they don’t knowingly produce junk. Also, mistakes in preprints tend to be exposed by fellow scientists, even if those scientists haven’t been formally selected as reviewers. And journalists have gotten more careful about noting that preprints haven’t been peer-reviewed and therefore may be wrong.

What more could be done? In a November article for The Scientist, Dr. Mullins proposes that preprints “have a limited shelf life with a link that expires within 12 months” so that bad research doesn’t linger. If it’s good work, it should have found a publisher by then, he argues. Every page should have a digital watermark identifying it as not peer-reviewed, he says. And preprints should be digitally linked to the peer-reviewed articles they become to “motivate authors to complete the peer-review process,” he writes.
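
To make the mechanics concrete, here is a minimal sketch in Python of the life cycle those three proposals imply. The class, the field names and the redirect behavior are hypothetical illustrations of the expiring link, the watermark and the pointer to the eventual journal version; they are not any preprint server’s actual implementation.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Illustrative only: the class, field names and 12-month window model
# Dr. Mullins's proposals, not any real preprint server's behavior.
SHELF_LIFE = timedelta(days=365)  # "a link that expires within 12 months"

@dataclass
class Preprint:
    doi: str                             # identifier assigned at posting
    posted: date                         # date the preprint went up
    published_doi: Optional[str] = None  # set once a journal version appears
    watermark: str = "PREPRINT: NOT PEER-REVIEWED"  # stamped on every page

    def link_expired(self, today: date) -> bool:
        """The link lapses 12 months after posting unless the work was published."""
        return self.published_doi is None and today > self.posted + SHELF_LIFE

    def resolve(self, today: date) -> str:
        """Send readers to the peer-reviewed version when one exists."""
        if self.published_doi:            # digital link to the journal article
            return f"https://doi.org/{self.published_doi}"
        if self.link_expired(today):      # expired links keep bad work from lingering
            return "410 Gone: preprint expired without completing peer review"
        return f"https://doi.org/{self.doi}"
```

Under a scheme like this, a preprint that never finds a publisher stops resolving a year after posting, while one that does redirects readers to the journal version.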

Open-access journals are another solution to the speed-versus-quality challenge. In contrast to preprints, they are peer-reviewed. However, some are unreliable, Dr. Mullins warns: Low-quality journals have sprung up to collect payments from researchers who are desperate to get published. He says he needs a spam filter to stave off pitches from open-access journal publishers inviting him to pay to get his articles in.

I’m intrigued by new business models pioneered by the likes of F1000, Research Square and Qeios, which tweak the preprint publishing approach in various ways. On Qeios, which is based in London, researchers or their institutions don’t pay per article, but they do pay a monthly fee to post an unlimited number of articles. The articles go up right away, as on a preprint server such as medRxiv, but are later peer-reviewed, as in a journal.

Gabriele Marinello, the chief executive and co-founder of Qeios, dropped out of medical school in Italy in 2016, incorporated Qeios in 2017 and started it in 2019. Qeios is pronounced “chaos.” That sounds like an inauspicious name, but Marinello says it’s appropriate for something new because Chaos was the first of the Greek gods, at the beginning of the world.

Some corporate researchers have used Qeios as the final destination for their work, while university researchers have used it like a preprint service, with plans to eventually publish the work in a journal, Marinello says. To attract reviewers, a challenge for all outlets, Qeios publishes reviews in full on the site, a form of recognition for reviewers that conventional journals lack. The Qeios website cites work it has published by researchers from the California Institute of Technology, Cambridge, Harvard, Stanford and University College London.

In an email, Marinello wrote this about Qeios: “Though not perfect, it’s more transparent, inclusive, and you tend to spot and mark more research as good or bad, reducing the amount that is currently missed by journals’ scrutiny and, importantly, the immense frustration that hurried rejections create for countless great scientists.”

West says that researchers who have truly excellent work might skip a platform such as Qeios and take it straight to a major journal to avoid having to deal with random potshots from readers. Getting into a journal that has high impact, measured by citations of its articles, can determine whether a scholar gets tenure.
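
For context, the impact measure in question is typically the journal impact factor, a simple citation ratio; its standard two-year form (stated here for reference, not taken from the article) is:

```latex
% Standard two-year journal impact factor for year y
\[
\mathrm{IF}_y =
\frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}
     {\text{number of citable items published in years } y-1 \text{ and } y-2}
\]
```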

But more than 20,000 individuals and institutions have signed the 2012 San Francisco Declaration on Research Assessment, which aims to loosen the journals’ stranglehold. It says, in part, “Do not use journal-based metrics such as journal impact factors as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions or in hiring, promotion or funding decisions.” As that movement catches on, alternative platforms such as Qeios get more breathing room.

The challenge in scientific publication is to keep out all of the bad stuff while blocking none of the good stuff. It’s not easy. Fortunately, lots of smart people are tackling the problem.
