My peer review process
As a junior scientist, I found the peer review process a total mystery, part of a so-called ‘hidden curriculum’ of science. The standards and expectations of peer review were only revealed to me when I began submitting papers and conducting reviews in my first year as a postdoc. Even now, as a fourth-year postdoc, I am still encountering new situations. While a small proportion of peer reviews are publicly accessible, it is usually only the final review output that is visible. I have not found many resources where researchers share or iterate on their review process. Most guides from journals have not been updated for the Open Science movement and mostly provide stylistic recommendations for the writing. I thought I would share my peer review process to help junior scientists gain insight into the review and publication process. (For the sake of focus, I won’t explicitly detail the benefits of doing a review for an early-career researcher here, but I’ll allude to them throughout this post.)
I also get the feeling that my ethos and approach to reviewing differ substantially from others’, and I wanted to invite commentary and comparison. I want to stress that I do not believe I am close to doing it the ‘best way’ and do not wish to set a standard that others should adhere to. Rather, I am opening my process to criticism, inviting scrutiny and excoriation, which may provide some impetus for discussion on aspects of the review process, or indeed the wider mechanisms for how science curates and publishes its outputs.1
My review process
1. Deciding whether to accept or decline a review
There are numerous factors that influence my decision to accept or decline a review, varying in how much they impact my decision with each invitation. I have made an attempt here to order them in priority:
My (perceived) current workload: If I do not have the bandwidth to take on the review, I expect that I will not provide a high-quality review, and so I will simply decline. I usually do not set a limit on how much time I will devote to each peer review, but I have spent on average about 10 hours on each first-round review of a manuscript (tracking my time with the Toggl app for the past 6 reviews I have completed). Most (but not all) of that time is usually outside of ‘lab hours’, typically during the evenings or weekends.
Some concrete indicators that go into my perceived load are any upcoming conferences and their submission deadlines, and the number of other reviews I am doing at the moment (I try not to juggle more than two at a time, but I do find it difficult to decline opportunities when they are presented to me). Less concrete, but perhaps more influential, are my current general stress and mood levels – I think these are an unappreciated reason for the current shortage of effective reviewers, and for the slow review rates that are impacting all of us. (That being said, I have completed 8 reviews in this calendar year in service of the visual working memory field.)
The journal venue:
- I do not accept any reviews from Elsevier journals. I do not wish to contribute to their profit-motivated systems – including exorbitant article processing charges for open-access publishing. The parent company, RELX, reported £942 million in profit at a margin of ~37% in 2018 from rent-seeking scientific publishing practices. With their profits and power, they have lobbied against Open Science initiatives (see a current list), have ties to the fossil fuel industry despite publishing climate change research, scraped data from unsuspecting scientists with a popular reference manager, and charge substantial article processing charges for open-access articles in their prestigious journals (see T.R. Shankar Raman’s article on why they won’t review or write for Elsevier and Alex Holcombe’s blogpost on scholarly publisher profit). I do not wish to support their business with my time and effort.
- I do not accept any reviews from Frontiers journals. I believe they are predatory and do not rigorously uphold the scientific peer review process. They have been reported to dismiss reviews that recommend ‘reject’ and to seek out favorable reviews, going so far as to remove the ‘reject’ decision and button from their editorial manager. They have also been reported to inundate those who have reviewed once for their journals with requests, rather than carefully selecting reviewers based on relevant expertise or otherwise. I don’t believe they can uphold a rigorous standard for scientific outputs when seemingly anyone can become a ‘review editor’ – the Frontiers in Psychology journal currently boasts over 14,000 editors at various tiers. To me, this appears to be a model that inflates their number of publications (and profits from article-processing charges) at the sacrifice of critical scientific rigor. (See these Twitter threads: 1, 2, 3, 4, 5, 6.)
- I will do my best to accept reviews from diamond open-access journals. Diamond open-access journals – those that do not charge any article processing charges or fees for publishing open-access – are typically smaller venues that cannot rely on prestige to recruit reviewers. I am much more willing to accept their review invitations to support their efforts.
- The journal is relevant to me. If I have published at the journal previously, or suspect that I will likely do so in future, I feel more obliged to accept the review to promote some parity between what I have provided to and received from the journal. This also means the content of the manuscript is more likely to be aligned with my expertise and relevant to my research.
- I have not reviewed for the journal yet. I am more likely to accept a review from a journal I have not previously reviewed for. This is for a fairly superficial reason – scientists include a list of the journals they have reviewed for on their academic CVs, presumably to indicate their service to the community and the breadth of that service. I don’t truly believe it to be a good metric of community service, nor do I believe it is ever really scrutinized, so it’s nothing more than gathering collectibles. (On service to the community, my current rule of thumb is that I should complete 3 first-round reviews of manuscripts for every 1 first-author publication, to maintain parity between what I give to and take from the system. I believe a 2:1 ratio is acceptable for junior scientists given their lack of invitations to review, lesser visibility, and inexperience. My current review-to-publication ratio is 7:1 – 21 completed first-round manuscript reviews to 3 published first-author submissions, but with 2 submissions soon, hopefully…)
The to-be-reviewed manuscript:
- The content: I read the abstract attached to the review request and make a judgment on its relevance to my research, as well as whether I will potentially gain something from reviewing it – a new perspective, a better understanding of a research method, a new empirical finding.
- My expertise: As I consider the relevance of the content, I make a judgment on whether I have the appropriate expertise to provide a high-quality review. I mainly consider this in two domains: my content expertise, which my own papers could speak to (say, theory-based), and my research method expertise (say, familiarity with the research paradigm and statistical analyses).
2. Attitude towards the purpose of peer review
I think it is really important to uphold an ethos around peer review in science. Peer review is the supposed quality-control mechanism of science, so I am foremost trying to be critical and to verify the rigor of the research – to me, this means scrutinizing the quality of the research and statistical methods, the transparency and openness of the research outputs, and the evidence for the scientific inferences being made.
But, just as importantly, I am trying to improve the research and manuscript with my comments, rather than solely judging whether it is acceptable for the journal. I find that if one is focused solely on making a judgment, there’s a risk of taking a sledgehammer to the manuscript and having the review become an unhelpful and unkind list of research flaws and conceptual disagreements. I believe that the intention to improve the manuscript lends itself to writing more effective reviews – one can still be critical but write the review constructively and kindly, rather than harshly. I am also of the belief that encouraging a more communal rather than combative attitude will likely help promote the participation of women, people of color, and other underrepresented populations in science (Murphy et al., 2020)2.
Further, one might start to bias one’s decision with the ‘prestige’ of the journal, risking gatekeeping certain work from publication due to a hidden internal value system (a bias that I do not wish to have in my review, and a value I do not wish to promote in science). I would rather not have the judgment as the sole priority, but instead aim to provide a thorough review that the action editor, who can consolidate all the reviews, can use to make the best decision. That being said, I will ultimately make a recommendation on whether the manuscript should be published. I believe the reviewer has an important role to play in curating what content is published, and as such, I do not think that making no decision, or being unselective with the recommendation (say, requesting “major revisions” for every manuscript), is productive for scientific progress. Not all papers deserve to be “published”, and by not being selective, we risk perpetuating academic urban legends (Rekdal, 2014).
3. Reading and proofing the manuscript
I will often read through the manuscript many times over the course of the review. I like to read it from beginning to end, Introduction to Discussion, the first time. As I read the manuscript through for the first time, I highlight and note down any relevant thoughts and feelings – passages where I get stuck or confused, questions or past studies that pop up in my mind, or even places where I disagree with or dislike claims. Recording these first impressions helps me understand any initial biases and positions I may have, so that, in turn, I can try to lessen their influence on my review.
I will sometimes include my position or understanding as a preface to my major comments in the review, because I think sharing my perspective can help provide context and help the authors or editor evaluate the critique. For example, I may have a different understanding of a paper that is referenced, or of a process within a model, and by sharing that, the authors can evaluate whether my comments were sensible or stemmed from a misunderstanding (I am certainly fallible!), and make clarifications in their review response and/or in the writing of their manuscript.
I do include, in the minor comments section of my review, any typographical errors that I have noticed and sometimes (fairly infrequently) suggestions for rephrasing. I have heard the perspective that these should not be included in the review, as they may disadvantage non-native English writers by negatively influencing the editorial decision, and journals do often have copy editors with this responsibility following the acceptance of the manuscript. Still, I find that I am thankful when reviewers of my own manuscripts have noted typos (I definitely wouldn’t want them to be missed and to then appear in the final proof!).
When I do make those ‘rephrasing’ comments, I’m not dissecting the syntax and grammar of the manuscript throughout; rather, I report where I experienced points of confusion as a naïve reader and make suggestions that I think better clarify the ideas at hand. These suggestions are offered to the authors in a ‘take it or leave it’ manner – I think it’s tough for authors to predict what a naïve reader may be thinking!
4. Checking preregistrations, open data and code
As an advocate for Open Science, I believe that one aspect of the research worth reviewing is its transparency and reproducibility such as through preregistration, open data and code. An often-made argument for open scholarship is that it enables reviewers to catch errors and prevent fraud, and so I do try to uphold that. I will do this regardless of the journal venue, and whether or not Open Science badges are awarded at the journal venue.
With preregistrations, I usually check the following (a minimal sketch of these checks appears after this list):
- The dates of the preregistration to ensure they do precede the data collection and analysis (if the datasets are available).
- That the sample sizes and their justifications match what is reported in the manuscript.
- That all analyses in the preregistration were conducted and reported as planned in the manuscript.
- That any additional analyses are reported as exploratory.
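To make the logic of these checks concrete, here is a minimal sketch in Python. I do these checks by hand against the preregistration, the manuscript, and the dataset’s timestamps; every date and number below is hypothetical, not from any real submission:

```python
from datetime import date

# All values below are hypothetical, transcribed by hand from the
# preregistration, the manuscript, and the earliest timestamp in the dataset.
prereg_registered = date(2021, 3, 1)   # when the preregistration was registered
first_session = date(2021, 3, 15)      # earliest data-collection date
prereg_n = 40                          # planned sample size (with justification)
manuscript_n = 40                      # sample size reported in the manuscript

# Check 1: the preregistration should precede data collection
if prereg_registered >= first_session:
    print("Flag: preregistration does not precede data collection")

# Check 2: planned and reported sample sizes should match
if prereg_n != manuscript_n:
    print(f"Flag: preregistered N = {prereg_n}, manuscript N = {manuscript_n}")
```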
With open data and code, I usually check the following (a toy example of these checks follows this list):
- Whether the data can be comprehended (say with a README to explain variable names or file names)
- Whether the sample sizes of the datasets match what is reported in the manuscript3
- Whether I can reproduce basic values like sample means, and reproduce the statistical results from any provided code
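As a toy example, here is roughly what such a reproducibility pass looks like in Python. Every file name, column name, and reported value below is hypothetical – they depend entirely on the shared dataset and the manuscript in question:

```python
import pandas as pd
from scipy import stats

# Hypothetical shared dataset; a README should explain these columns.
df = pd.read_csv("osf_download/experiment1_data.csv")

# Does the sample size match the manuscript (say it reports N = 40)?
n = df["participant_id"].nunique()
print(f"N in data: {n} (manuscript reports 40)")

# Can I reproduce basic values, like mean accuracy per condition?
print(df.groupby("condition")["accuracy"].mean())

# Can I reproduce a reported test, say a paired t-test between two conditions?
wide = df.pivot_table(index="participant_id", columns="condition", values="accuracy")
t, p = stats.ttest_rel(wide["set_size_2"], wide["set_size_6"])
print(f"t({len(wide) - 1}) = {t:.2f}, p = {p:.3f}")
```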
For those that do preregister their studies and/or share their data and code, I will often commend the authors explicitly in my review for making their research openly accessible, and then report the results of my checks along with suggestions for improving the accessibility of the data or code.
For those that do not, I often encourage the authors to make their research reproducible by sharing their data and code, noting that I wish I could have informed my review with access to them. There have been many times where I resolved my own misunderstanding about what was reported in the manuscript after looking at the data. I will often comment that the analysis code or the dataset would be very useful for others to learn from or to recreate in future studies!
5. Checking the statistical analyses and inferences
The first thing I do is run a quick check for errors in the reported test statistics through http://statcheck.io. It’s surprising how many times this quick check has found an error (usually typos)! I will also cross-check the values with any provided statistical outputs, such as an SPSS output file or the rare RMarkdown file, just to make sure they line up.
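For intuition, here is a simplified sketch of the kind of consistency check statcheck automates (statcheck itself is an R package and web app; the regex and the reported statistic below are illustrative only): a reported p-value should be recomputable from its test statistic and degrees of freedom.

```python
import re
from scipy import stats

# A made-up sentence standing in for a manuscript's results section.
text = "The effect was significant, t(28) = 2.31, p = .028."

# Extract (df, t, p) triplets and recompute the two-tailed p from t and df.
for df_, t_, p_ in re.findall(r"t\((\d+)\)\s*=\s*([\d.-]+),\s*p\s*=\s*(\.\d+)", text):
    recomputed = 2 * stats.t.sf(abs(float(t_)), int(df_))
    print(f"reported p = {p_}, recomputed p = {recomputed:.3f}")
```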
I always check the sample sizes and the number of trials for each experiment. In recent times, I have also conducted some parameter recovery and model recovery analyses (likely because there is substantial modelling work in my research field of visual working memory). I find that this gives me a sense of the effect sizes and statistical power of the design, helping me evaluate whether the reported effect is likely to be credible and reliable. Further, this helps me evaluate whether the inferences being made, and how they are being made, are consistent with the evidence of the findings.
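To give a flavor of what a parameter recovery analysis involves, here is a minimal sketch, assuming a toy von Mises ‘memory precision’ model (a common simplification in visual working memory; in a real review I would use the paper’s own model, trial counts, and design): simulate data from known parameters, refit the model, and check that the recovered parameters track the true ones.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

def neg_log_lik(kappa, errors):
    # Negative log-likelihood of recall errors under a von Mises distribution
    return -np.sum(stats.vonmises.logpdf(errors, kappa))

true_kappas = rng.uniform(1, 15, size=50)   # simulated "participants"
recovered = []
for kappa in true_kappas:
    errors = stats.vonmises.rvs(kappa, size=200, random_state=rng)  # 200 trials
    fit = optimize.minimize_scalar(neg_log_lik, bounds=(0.01, 50),
                                   args=(errors,), method="bounded")
    recovered.append(fit.x)

# A high true-recovered correlation suggests the design can identify the parameter
print(np.corrcoef(true_kappas, recovered)[0, 1])
```

A poor recovery here would make me cautious about interpreting reported differences in that parameter.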
I think all researchers (I have to admit, myself included) have a tendency to report evidence for a small but reliable empirical effect and then, in their excitement, draw large sweeping conclusions about a system, without ever stipulating the connection between theory and hypothesis (think underdetermination from a lack of specificity). That is, in my view, psychology researchers could do better to communicate some uncertainty in their papers, and be comfortable making careful claims rather than bold claims meant to be contentious or ostentatious. I think a good scientific paper states when it might be premature to make strong conclusions, and is explicit about limitations and unknowns. Even better, the paper notes that more independent replications may be needed to confirm the reported empirical effect, and then points to the repository for attempts to reproduce the research.
I want to explicitly note here that I do not readily evaluate the originality or novelty of the research that scientific journals usually covet – except perhaps when authors make the extraordinary claim that they are making an entirely novel contribution, so as to attach a new name to the discovery, without rigorously confirming the empirical phenomenon or while ignoring past research. I care about reproducible research and replications, I care about considered and thoughtful experiment designs, and I care about credible scientific inferences linked to current and relevant theory – not a splashy but probably spurious positive finding; we already have many of those.
6. Checking the figures
I think data visualizations are very important in research papers – done well, they can bring clarity to the statistical analyses being conducted, helping the reader better understand the results. If I find a figure confusing or difficult to interpret, I will usually comment on that with a suggestion that I believe will improve the figure. Most often, this has been to add data points to bar graphs to provide a sense of the overall distribution, or to add more precise axis labels and ticks to better establish and evaluate what is being presented or compared.
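As an illustration of that most common suggestion, here is a minimal matplotlib sketch (with made-up data for two hypothetical conditions) of a bar graph with the individual data points overlaid:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Made-up per-participant scores for two hypothetical conditions
data = [rng.normal(0.75, 0.10, 30), rng.normal(0.60, 0.12, 30)]
labels = ["Condition A", "Condition B"]

fig, ax = plt.subplots()
ax.bar(labels, [d.mean() for d in data], color="lightgray", edgecolor="black")
for i, d in enumerate(data):
    # Jitter points horizontally so the overall distribution is visible
    ax.scatter(i + rng.uniform(-0.15, 0.15, d.size), d, s=12,
               color="black", alpha=0.5)
ax.set_ylabel("Accuracy (proportion correct)")  # precise label with units
plt.show()
```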
7. Writing the review
Finally, having scrutinized all the elements of the manuscript and given it one last read-through, I’m ready to write the review. I typically include three sections in my review: summary, major comments, and minor comments.
The summary includes a brief overview of the research manuscript – the number of experiments, the key methods and measures, and the notable findings. Then I report my overall perspective on the paper, taking care to compliment any credible parts of the work and summarizing my comments and concerns in the review. I often note in this paragraph whether there are any gaps in my expertise, and anything else that I think is worth emphasizing upfront. Together, this serves a few purposes: to evidence that I have read and engaged with the manuscript, to provide context for my review for the editor, and to show the authors what I have emphasized or highlighted.
The major comments section includes thoughts or criticisms that I think the authors will have to address, and that, in being addressed, will strengthen the manuscript. The first comment of my numbered list is almost always about the preregistration, open data or code, and the results of my reproducibility attempts. For example, if there was no open data or code, I encourage the authors to consider making the data or code openly accessible if possible, at least to inform future review.
What follows are any concerns or requests for clarification about the research methods, statistical analyses, and research claims, organized in order of priority. With each comment I make in this section, I am sure to include my detailed justification or reasoning. I think it is not constructive to simply state what I think without explaining what went into that thinking, and what could be done to address the concerns.
The minor comments section includes comments that I am personally less confident about (and I say as much in the review), or suggestions that are more at the authors’ discretion, like changes to some phrases. I also include in this section any typos that I have picked up in the manuscript.
I don’t set a word or page limit on my reviews. They have been approximately 1000 words on average (according to Publons, across 25 verified reviews). I accept that this is likely on the longer end, perhaps too long, but I think the length reflects how thoroughly I conduct the review, especially when I report the results of any reproducibility checks. I do try to be concise and to be as constructive as possible, despite the length.
8. Signing my review
I have signed all my reviews by adding the following statement underneath my name at the end of the review:
I sign all my reviews, regardless of the recommendation to the editor. By signing this review, I affirm that I have made my best attempt to be polite and respectful while providing criticism and feedback that is hopefully helpful and reasonable.
I sign my reviews so that I am accountable for their contents, including the tone of the comments. While the purpose of peer review is to provide criticism, this does not license a reviewer to be callous in their comments. And while it takes effort and consideration, writing criticism to be productive and helpful makes it more likely that the authors will take those ideas on board. There is a lot of rejection in science, and it helps to promote kindness.
There are reasons not to sign a review – revealing one’s identity makes the reviewer vulnerable to retaliation. I get the feeling that I’m likely a more ‘difficult’ reviewer than most, so perhaps I have built a negative reputation as a result. I have yet to experience any direct retaliation personally, but I am afraid that it will happen someday. Nevertheless, I think that by being accountable and transparent in my review, what is gained by signing outweighs this personal risk.
9. Making a recommendation
There is some variation between journals in the decisions one can recommend, but typically they are: accept, minor revisions, major revisions, and reject. As I noted above, I do try to be selective with my recommendation rather than suggesting the same decision (say, major revisions) for every manuscript I review. I try to make this decision without the journal venue in mind, holding a consistent criterion as best I can – in actuality, I think that with a conscientious and thorough review, the recommendation is usually pretty straightforward. If I think the paper needs only easy-to-take steps to address my comments, and it would not require my attention again, then I will recommend accept or minor revisions. If my concerns are such that I think a significant rewrite is needed, or I would need to be convinced by secondary statistical analyses, then I will recommend major revisions. And if I find significant flaws in the research design, I will recommend reject.
Comments?
As I stated at the start of this post, my goal in sharing my review process was not to set a standard, but to invite scrutiny and discussion about parts of the review process. At the very least, I’m grateful for any criticism that will help me improve my contributions to science. For lack of a better forum for discussion, perhaps the best place to comment will be Twitter.
Thank you to Kim Meidenbauer and Priya Silverstein for comments on early drafts of this blogpost and their support.
1 Scientific journals seem more often to bungle the peer review process, despite their ownership of its oversight. Importantly, the Peer Community In (PCI) network has been striving to remedy this.
2 Thank you to Alison Ledgerwood who shared this reference as part of a discussion in the Journal Editors Discussion Interface (JEDI).
3 I have found differences between the dataset and the reported sample sizes in three of my reviews.
This blogpost has been uploaded as a PDF document to Zenodo here: https://zenodo.org/record/7502567, where it has been assigned a DOI: https://doi.org/10.5281/zenodo.7502567.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.