Peer review futures: Balancing efficiency and transparency to crack the issues in peer review

Sep 19, 2019 | Community

Peer Review Week is here, and it is a great time for us to reflect on what we do best at PeerJ! With seven peer-reviewed publications to our name, we know that detailed, thoughtful, constructive improvement of scientific work is what authors are looking for and what quality science depends on. Facilitating quality in peer review is what we are most proud of at PeerJ. We do this by focusing on efficiency, through a streamlined editorial system with clear guidelines for authors, reviewers, and editors, and on transparency, by sharing the full peer review history of all our articles and by encouraging reviewers to sign their names to their peer reviews (optional open peer review).

Our system of optional open peer review at PeerJ is unusual in that both the author and the reviewer retain a choice. At a time when changes are happening throughout academia, we think this is a good transitional path that is working well. Here are some stats on where we are so far. We look forward to encouraging more transparency and efficiency in peer review to improve the entire process, while recognizing that the culture of openness differs across countries and disciplines. Below, PeerJ Section Editors Andy Farke and Jennifer Vonk share their thoughts on quality in peer review.


Andy Farke, PeerJ Section Editor Paleontology and Evolutionary Science

Can you share a memorable experience receiving peer review in your career? How did this experience go on to shape your own approach or understanding of peer review?

For one of my early papers, I submitted the manuscript to the journal and waited for the reviews and editorial decision. And waited. And waited. I prodded the editor (reluctantly, because I don’t like to bug editors unless I absolutely have to!), and they told me they were still waiting on a reviewer to get back to them. So I waited some more. Something like six months went by, and I finally got the reviews. One of the two reviewers had a detailed and very constructive review. The other had about two sentences with some vague suggestions. I don’t know for certain, but I strongly suspect the short and unhelpful review came from the person who sat on it for six months, and they phoned it in just to get the editor off their back. This experience stuck with me in two ways–one, it’s important to be as timely as possible with reviews. A minor delay is OK, but if you say you’re going to review something, do it. A slow review has a cascading effect, especially on early career researchers; they might depend on this paper for a job! Plus, slow reviews just slow down the overall progress of science! Secondly, this experience affected how I work as an editor–I’m sympathetic to the delays that happen in the course of life, but I’ll almost never accept a review that is delayed by several months. It’s not fair to the authors! So, I’ll just move on and find another reviewer.

“A minor delay is OK, but if you say you’re going to review something, do it. A slow review has a cascading effect, especially on early career researchers; they might depend on this paper for a job! Plus, slow reviews just slow down the overall progress of science!”

In your field of research, what do you think is most important for peer reviewers to consider?
I can’t pick a single thing, but I have a few pet items that I like to highlight as a reviewer. First, are all the data accessible somewhere? It’s easy enough to provide supplemental information, or a CT scan archive link, so the days of “data available from author upon request” are long past. Next, I want to see that the authors made a good-faith effort to include relevant references. I’m not a fan of citation blizzards, but I do like to see a nice diversity of citations, not just things from high impact factor journals or the usual suspects of classic papers. Finally, good figures are a must. Especially for descriptive work, you have to be able to see the relevant anatomical features. With techniques like photogrammetry, it really is easier than ever, even if you don’t have a professional illustrator at hand!
What is most frustrating about how peer review is currently generally done? What needs to be improved?
We need more people to have an opportunity to look at a manuscript before acceptance for publication. In my experience, reviewers generally do a great job (truly!), but there are inevitably things they don’t know. Sometimes little details might slip through (e.g., a relevant paper or specimen that should be cited), and sometimes big things slip through (like a faulty statistical assumption). It’s really annoying to see a paper published (especially if I handled it as editor, or if I was a peer reviewer) and see a social media comment to the effect of, “How did *this* slip through the reviewers and editor?” I really believe that preprints, used in concert with formal review, are a good way to get as many eyes as possible on a manuscript, and maximize the quality of feedback. Formal solicited reviews are still important, but I recognize that they have limitations.
What is the future of peer review?
I want to see some way to crack the problem of anonymous peer review. On the one hand, I really get that there are legitimate reasons for wanting to stay anonymous when reviewing a paper, particularly for early career researchers, those who are marginalized for whatever reason, or those who are reviewing papers by notoriously cantankerous colleagues. However…and this is a big however…I don’t like that anonymous review is also abused by the powerful and privileged, who might have little to lose but still choose to hide behind anonymity while providing unhelpful and sometimes hurtful comments. I don’t have an easy answer, because there are trade-offs each way, but I’m not terribly satisfied when (often senior) people are suddenly and inexplicably concerned about early career researchers whenever the issue of open peer review comes up! I think change in this area will only happen with broader professional accountability, if there are genuine consequences for retaliation after a fair but negative peer review, or if there are genuine consequences for an unhelpful and disparaging review. That’s not something journals can necessarily do, though. It’s up to the professional communities, I think.

Jennifer Vonk, PeerJ Section Editor Zoological Science

In your field of research, what do you think is most important for peer reviewers to consider?
In comparative psychology, I think it’s really important to take into account the different settings in which research is conducted – you cannot exert the same control in zoo and field research that you can in a lab. You will not have access to the same numbers of animals if you are working with exotic species. Reviewers are often unforgiving with regard to the challenges of studying other species and need to note the unique challenges that do not apply to studying children (for example). But I think that reviewers also have to think about the fact that animals may not think like humans do and may be solving tasks using completely different cognitive skills.

I think the most frustrating thing is when editors make decisions based on a reviewer’s misreading or misinterpretation of an aspect of a study. Editors need to contribute more to the process than just summarizing the reviews.

Editors need to contribute more to the process than just summarizing the reviews.

I suppose, like most people, I remember the negative reviews the most. I remember a reviewer commenting on my lack of experience as a graduate student author. I think that graduate advisers really must do the work to provide guidance before having students submit their own papers. I was embarrassed but it did wake me up to the fact that writing a publishable paper is very different from getting a good course grade. I’m not sure if enough advisers talk to their mentees about that.
