
Preprints are the Solution, Not the Problem

“It doesn’t matter. You can peer review something that’s a bad study.”
-Dr. Anthony Fauci, a leading member of the White House Coronavirus Task Force, July 31, 2020

Recently a reporter, writing an article on retracted preprints and the broader issue of non-peer-reviewed articles, asked me for comments on the use of this mechanism as a way to disseminate research findings. They were surprised at my answer. I shared with them that I think the way to make our research stronger is to get papers out as preprints rather than to rely on our current peer review system to distinguish “poor” science from “good” science. In this post I will elaborate on some of my reasoning, and I will end with some ideas, borrowed from other disciplines, that my colleague Dr. Lisa Singh, a computer scientist, and I have put together. These ideas may help all of our research disseminate more quickly, while also giving some confidence that what is shared is rigorous.

As a psychologist, I have been very frustrated with the number of retractions in my discipline that generally fall under the category of questionable research practices (QRPs). These practices include not making adequate adjustments for the number of models run, excluding data to increase the probability of finding a statistically significant outcome, changing the original hypothesis of a study to match the results of analyses, and many less obvious manipulations of data or hypotheses. The replication crisis is based on studies already published in peer-reviewed journals. Even when some of the most egregious studies are retracted, these papers remain in the electronic and printed journal record and still receive citations. Many of these QRPs were considered “normal” research practice for years, so peer reviewers did not catch them; there is now a large literature suggesting that they are poor research practices associated with poor science. Yet these practices persist, and papers using them continue to be published, which suggests that the current peer-review process is not capable of recognizing and rejecting papers that use them. Indeed, there are scientists (data sleuths) who have devoted their academic training and careers to identifying papers that should be retracted because they use poor research practices and make scientific claims that do not hold up to close scrutiny.

In the last five years, a surge of preprint services has emerged. These were generally intended as places to share non-journal-formatted “final” drafts of papers that had been accepted by a journal. This allowed for broader dissemination of the science while still meeting the journals’ copyright rules. These “final” drafts had been peer-reviewed and were simply awaiting formatting and scheduling for publication. During the COVID-19 crisis, however, in the desire to disseminate research and information quickly, these preprint servers became places to share “first” drafts of papers that had not yet been through the peer-review process. When some of these papers were found to be flawed, the community complained that the flaws stemmed from skipping the rigorous peer-review process. I argue, however, that these papers went through a more rigorous process once they were made “open” and available for the community of scholars to examine closely and to provide arguments and feedback about why they were flawed. This review happened relatively quickly compared with the anonymous and often time-consuming structure of the current peer-review system. Instead of three or four peer reviewers, these papers were often getting hundreds of comments (not all useful) on the integrity of the research. In this way, good peer review was occurring and doing what it was intended to do: catching poor science, or science done with poor methods, prior to publication. And it was doing this through an open and transparent process instead of the usual closed, anonymous review that has been heralded as the gold standard of research integrity. An open review should be seen as a positive step forward.

We do need to be concerned with incidents in which a high-profile study is distributed without review but with a press release. In such cases, a document that has been reviewed only by its authors is put out broadly as if it were equivalent to a reviewed and corrected study. This practice needs to be reconsidered because it leaves the larger community unclear about what has been reviewed by research peers and what is, in essence, an opinion piece with some data to support it. All research needs to be reviewed by other scientists to make sure it follows the rules of scientific integrity and discourse, and this system needs to be expanded, not removed. Providing preprints to the community of scholars for review is an important step in increasing our vigilance for robust science, but we currently lack an adequate way to do this while also signaling to the community that a paper is considered solid science by some percentage of the scholars who conduct research in the same area. It is time to look to other disciplines for models of review and dissemination.

Physics was the first discipline to develop an extensive online preprint culture, and no other discipline has integrated preprints as effectively. Preprints grew out of the practice of physicists mailing research results to colleagues and others in the field; once it became possible to do this more effectively online, the transition occurred. All other disciplines have been slower. Even though computer science has engaged with preprints for almost two decades, a decade ago only 1% of its articles had preprints. By 2017, 23% of articles had preprints, and the percentage continues to grow. The preprint culture is divided within computer science, where some subdisciplines share preliminary results more readily than others. Papers in theoretical computer science and in newer areas like deep learning and data science have more established preprint cultures than many other areas of computer science. It is not uncommon to see preprints listed on CVs, similar to working papers in other fields; it is a simple way to show what is in the pipeline. While conferences are the primary forum for community building in many fields, online communities can be just as important. Glimpsing research before official publication can speed up scientific progress and be especially important for junior faculty who are new to the field. Waiting until the official publication is released can delay the use or dissemination of research by well over a year. As the pace of research innovation continues to quicken, preprints are necessary to keep up with the latest research.

Using our understanding of the preprint cultures in these two disciplines, their successes and failures, we present a few suggestions for increasing and, potentially, improving the current preprint culture across all fields:

  • Instead of viewing preprints as a service for sharing research that will be ultimately published in a journal, we need to view them as an archival service around which research ecosystems emerge. An important goal of every researcher is to advance the understanding within an area of research through different intellectual contributions. Generally, these contributions take the form of a peer-reviewed journal publication, but there are many other forms of intellectual contributions that advance different fields, including white papers, opinion pieces, and data releases. Viewing a preprint service as a place to share any of these types of contributions will encourage others to participate. This was the foundation for the cultural shift in physics and computer science. 
  • It is important to know the “state” of the work being shared. Is it a first draft that has been submitted? Is it a final draft of a report or other white paper that will not be submitted? Extending preprint services to include categories for types of intellectual contributions and the state of the contribution can help others understand how far along the work is. To date, there is no standard way to categorize the completeness of the work, leading to citations of what may be viewed as preliminary work. 
  • Preprints are a great way to share null results, particularly in fields that do not have other avenues for them. These null results are important intellectual contributions that lead to important information sharing and possibly more impactful results. Getting them out to the community of scholars through preprints is a good way to adjust for publication bias of only positive results and to help add to the meta-scientific literature on a topic.
  • Ecosystems can only be sustained through active participation. Get in the habit of sharing one or two preprints each year and of looking at preprint services before publishing a paper. If there is a paper that you have a strong opinion or comment on, share it. Seeing how leaders in a field comment on work helps those new to the field understand how to critique work in their field. My computer science students and colleagues began using preprints before I did, and only recently have I realized the richness around some of my research areas. If a significant number of senior faculty engage in sharing preprints, the next generation of researchers will already be trained to do so; if leaders of an area use preprints and share them extensively, others within the field will do the same.

Given the speed of information dissemination now available to us as scientists, finding ways to share our scientific findings and discoveries relatively swiftly should be a general goal. The important component is to have ways to signal that the science has been properly scrutinized and meets the standards of rigor we hold in science. Currently, we continue to do this by elevating peer-reviewed papers in high-impact journals as the “gold standard” for science, and it is unclear that this is the best direction to keep going. By embracing and improving the preprint, or pre-peer-review, structure, we have an opportunity to change the direction of our published science and, hopefully, improve the quality of scholarly research across all disciplines.

One reply on “Preprints are the Solution, Not the Problem”

Great discussion. I agree that preprints have lots of potential to improve the dissemination of science. But right now there isn’t a great mechanism for providing substantive feedback. Plaudit is great, but it is just a check mark for “important, clear, and exciting.” And right now most preprints don’t get reviewed at all. We need full reviews and (as you say) markers of how far along a paper is in the process to be part of the preprint hosting sites. Signed reviews could count as publications, or people could be required to review twice for each paper they post.
