Peer-review is often referred to as the "gold standard" of scholarly publishing: a rigorous vetting process that ensures the quality of the research that makes its way into print. And although the system isn't fail-proof--crap can make its way into top journals and every couple of years there's an actual scandal involving falsified data or obviously tendentious analysis--I do believe that, in the long run, the traditional double-blind review system mostly works.
But "in the long run" is a pretty big caveat, and I know no one who hasn't beaten her head against the wall of peer-review at least a few times. There are genuine horror stories of reviewers who make it their mission to block any work that challenges their own or that uses a theoretical model they disdain, but mostly there's just pettiness and obtuseness: reviewers who on some level can't "hear" arguments that don't match their own.
Indeed, although the publishing world has been good enough to me over the years, I've encountered as many obstructionist reviewers as generous ones, starting with my very first submission. As a grad student, I submitted one of my dissertation chapters to a top journal and got back a thirteen-page, single-spaced review. It was clear that the reviewer had taken a strong dislike to me, and hostility oozed from every sentence. He appealed continually to what "everyone knew" about the subject, sometimes going so far as to say, of actual, documented facts, "this hardly seems likely." In one especially bizarre paragraph, he took it upon himself to lecture me on the Vietnam War and U.S. policy in Central America in the 1980s. (My essay was on Milton.)
Compared to later negative reviewers I've encountered, however, this one was easily sidelined: the journal editor told me to read the review carefully, make "whatever revisions you think necessary," and send it back. It went out to a second reviewer from a similar school of thought, but this time a professionally and intellectually generous one; I later met him at a conference, and as he shook my hand and introduced himself, he said, "I don't know if you could tell from my review, but I disagreed with pretty much every other sentence of your essay." He smiled, told me that it was a provocative and worthwhile argument, and added, "and boy, can you write."
You could say that this taught me to have faith in the system, and I do, but it's been tested routinely by both my own and others' subsequent experiences. The problem isn't so much with the bad behavior that anonymous review sometimes permits or with the way a single person can block good work for personal reasons. Those are problems, to be sure, but there are plenty of venues out there, and plenty of readers; good work will generally get published eventually, and the real test is its afterlife: how often it gets read and cited and grappled with once it's out there in the world.
No, the real problem lies with "eventually": scholarly time is always inefficient and unpredictable--it's hard to know whether the article you're writing will take six months or two years--but when your work is in your hands, at least you have some understanding of why it's taking so long and what comes next. This isn't true of peer review, which might as well be a black box: even the reports and the editorial decisions, once you get them, are not always self-interpreting. This is especially hard on junior scholars, who are always on a clock; they need that vita line ASAP because they're going on the job market, or they're approaching their third-year or their tenure review. Under those circumstances, even the usual delays--a reviewer takes six months instead of the promised four; a journal requires a second round of revisions--can be nerve-wracking, and the more capricious and unreasonable responses can damage careers.
There's not a solution here that I'm aware of; I don't believe that open-source peer review is a better answer--on the whole, I think it's likely to produce more conservative and crowd-pleasing rather than more innovative work--and I certainly don't advocate for the end of peer-review, but there are problems here that affect junior scholars disproportionately (although not exclusively: I've heard well-published full professors mention having written pieces that got savaged so badly they'd never had the heart to send them out again). I suppose one solution is, "submit early and often," but that too is hard on junior scholars, who tend to be focused on the One Big Thing that is their dissertation/first book and can't as easily work up little side articles.
Readers, what do you think? Are there ways to make the peer review process work more equitably and efficiently (beyond being an ethical and responsible reviewer oneself)--or do you have words of wisdom to give to grad students or recent PhDs stuck in peer-review hell?
12 comments:
I think at least part of the problem is a lack of professionalism in academia in general.
If a review is supposed to take four months, it should not take six. Period. Yes, things come up, and stuff takes longer than you thought it would, and no, it isn't a priority, etc. Don't we all sit around and bitch about students who pull that crap? We do. So why is it okay when we do it?
To the same point, if the assignment is to assess a certain set of criteria in an essay, a review that ignores the criteria and takes off on some hobbyhorse of the reviewer's is a failure and should be rejected as such. No student who ignored the assignment that way would receive a high grade from any of us, so why do we think we can get away with that?
And yet we all know that happens. My first reader's report was flat-out ad hominem, and that should never happen in a professional context. And yet that kind of thing happens all the time with very little if any consequence to the person displaying the bad behavior.
I recognize that reviewing is service and it's pretty thankless. I know professors are busy and overworked. I get it. That might just be the crux of the problem. It seems to me that editors are just so grateful that somebody who knows something agreed to review the thing that they'll put up with whatever that person produces, regardless of what it looks like.
This seems to me to be a major issue in all kinds of ways in academia. I have never in my life witnessed so much unchecked bad behavior among people who should know better. Rarely if ever does this kind of unprofessional behavior engender real consequences.
Anastasia:
To stick with the peer-review portion of what you describe, it seems to me that editors/editorial boards can and should act as a check on the most egregious kinds of abuses of anonymity. That's more or less what happened with the article of mine that I describe in this post, and I'm very grateful to the editor for exercising his own judgment.
Obviously this is harder when the subject of an essay is extremely specialized and the journal/press is dependent on just a couple of experts; sometimes there are also other pressures that prevent an editor from advocating as strongly as he or she might. For example, a friend of mine got a wildly split decision on her book MS--one reviewer LOVED it and thought it was a major advance while the other thought it was dangerously bad--and although the obvious move would have been to solicit a third reviewer, the editor was under pressure to limit the number of first books she took to the press's editorial board, and so didn't feel she could go to bat for anything with even one vicious review.
But, importantly, she explained the situation to my friend; what authors find frustrating (and I include myself in this) is when editorial decisions seem arbitrary and capricious. In general, it would be helpful if writers understood editors to be their advocates, and if editors acted as gatekeepers for the gatekeepers by not sending work to reviewers who have acted badly in the past.
(I hasten to add that most of the editors I've worked with have been terrific--efficient, professional, and good communicators. I see the author/editor relationship as collaborative. But many people, and junior scholars perhaps especially, understand the relationship as at least partly adversarial. And when something crappy happens to them in peer review. . . well, no wonder.)
The treatment of the first article I sent out (in grad school) was so unprofessional my adviser wrote to the journal. I've had a few other weird reviews over the years, but I think that's partly because my work has always been a bit sideways to the field, off on its own trajectory.
I do my best to do reviews in the time allotted, though sometimes things come up that reasonably stretch things out -- deaths in the family, illness, etc. I also do my best to be constructive: when I reviewed an essay that I thought was crazy this fall, I said I didn't buy the argument, but that I thought a central insight was actually quite useful, and could be developed.
I think it's hard to enforce timelines on reviewers precisely because it's a favor. What editors have to do is exactly what your first editor did -- tell you that ze understood it was a crazy review, and just to take what you could. And that goes for journal editors (often just as uncompensated as the reviewers) as well as book editors.
It's hard to believe how much of our profession depends on our voluntarily doing all sorts of hard work. It makes policing good behavior particularly difficult.
In other fields of work, it's well known that if you ask a pro to do something for free, you'll get what you pay for. If you think about it, people intent on protecting their own little hill are, if anything, more likely to accept opportunities to review, so they can keep doing their little gatekeeping activity.
My first experience with peer review was a bad one -- it took ages, there was just one reviewer, he did not address my argument, he was upset that I hadn't included a few footnotes on a side text I mentioned (I later read the books he thought were so important for me to cite, and while they are important books, they had nothing to do with the point I was making), and he made claims that were contrary to what we know about the history of the period. I wound up sitting on the article for longer than I should have, but then I sent it to a second journal, where I got two professional, critical, serious reports back in record time that really helped me make the work better.
Still, I'm pretty sure that my writer's block, when I suffer from it, is in large part due to that first reviewer. I'm so afraid of my work being savaged by someone like that in a context where there aren't a lot of second chances (not a lot of presses print books in my field), that it becomes hard to write altogether.
My current beef with peer review is not that it exists, but that it's so vaguely defined. In my dream world, journals (and books) would openly state how many reviewers are used for each article, and would differentiate between openly submitted articles and commissioned/invited ones in some kind of visible way. I just don't think the "prestige points" should be the same for pieces that are invited and very likely to get in, probably reviewed by people chosen to be nice, etc. etc., and pieces that are sent in independently and have to go through the full rigours of the process.
i:
Thanks for this. I had a really disproportionate reaction to my most recent R&R--which was genuinely capricious, obtuse, and obstructionist, but not vicious or nasty--despite the fact that there are still a number of top journals I could send this article to if the journal doesn't accept my revision. Cosimo helpfully pointed out that it was probably a kind of PTSD response to the capricious, obtuse, and obstructionist second reader of my book manuscript. . .and that press's resulting behavior.
I'm also 100% with you on the vagueness of what "peer review" actually consists of: if an editor selects your essay for inclusion in a collection, and the whole collection undergoes outside review. . . should that count in some way? If you're published in a peer-reviewed journal, but it's a special edition and the editors are reviewing the MSS rather than sending to outside reviewers. . . does that count? And we all know that some edited collections are higher-quality than some peer-reviewed journals (they have highly involved editors and a press with multiple stages of review), but they aren't classified the same way on a C.V. or in tenure review.
Again, though: these are problems that most affect junior scholars, who need to quantify and categorize their work in ways that really don't matter for those who are already established--our advisors could publish articles for the rest of their lives in collections edited by friends for OUP and CUP and it's all good.
I think the nasty first batch of reviews is in some respects unavoidable. I choose to wear mine as a badge of honor; after all, what's the point of new scholarship and junior scholars if they don't shake up old orthodoxies and piss off a few people?
Another reason they are unavoidable is that although many journals profess to do double-blind reviews, status and rank still count. And now, of course, in the age of Google, only a few keystrokes will separate you as a peer reviewer from the identity of the author of the article you've been asked to review. (I will say that I don't Google topics to try to find out whose scholarship I'm reviewing, because of my third point below.)
Finally, although most reviewers participate in peer review in good faith--remember Susan's excellent point that this is a favor to a journal, to a subfield, and maybe even to your entire profession--maybe the best way to ensure that peer reviews offer constructive advice is to ask reviewers to consider signing their reviews. I have never, ever failed to disclose my name & affiliation to an author, even when (once) I recommended not publishing. Most reviews I've done recommend revise & resubmit, so I make recommendations and then let the editor and the author decide how useful my advice is. Signing my reviews keeps me honest, and in almost every case, the author has contacted me independently to THANK me for giving their articles such a careful read & such helpful advice.
One final point: The role of the editor in all cases is really vital. This was going to be my original comment, but then Flavia addressed the question in the comments really well. Editors need to have the stones to refuse articles w/o sending them out for peer review, and/or after they send articles out for review, they need to help the authors read the peer reviews they get intelligently. Editors should not use peer review to outsource editorial decisions, but rather to guide and inform their own process. In the end, the final decision is always up to the editors.
I'm a little too close to this right now to really be able to think abstractly — definitely a case of the long run being too big a caveat for me at this phase of my career as a junior scholar who is definitely suffering the negative consequences of a really capricious editorial decision. But I'm starting to think that maybe one way around it is to make reviews at most single-blind (and maybe not blind at all). I think that would cut down on the nastiness and on the selection of inappropriate reviewers if the reviewer had to sign his or her name to it. And besides, as Historiann said, the reviews are already not blind, so why perpetuate the pretext? Just my two cents...
I'm really interested in Historiann and S.J.'s suggestion that we do away with reviewer anonymity. Since anonymity is often only a thin fiction and since external review is, at its core, about helping the field to advance, anything that encourages a reviewer to see him or herself as a mentor or sounding board seems like a good thing.
I'd love to hear what others think. The main objection, I suppose, is that reviewers would start pulling their punches so as not to offend friends or friends-of-friends. . . but seriously. We're all big kids (external reviewers are never going to be meaningfully junior to those they're reviewing), and we all grade papers for a living: we all know it's possible to admire someone's intelligence and the ambition of his/her work. . . and yet think a particular project isn't firing on all cylinders. And we all have experience giving targeted, incisive, yet charitable advice in those situations.
I'd also be interested in hearing what others think about S.J. Pearce's suggestion on her blog (password-protected) that one potential reform for peer review is to normalize simultaneous submissions.
I understand the objections, which are mostly practical and perhaps insurmountable: since everyone on the review end--often including the journal editors themselves--is a volunteer, doubling or tripling their workloads could be crushing (and/or there may simply not be enough experts available).
But for junior scholars caught in a serious time crunch, it could remove some of the randomness and unpredictability. (Think of the time wasted, as your tenure clock ticks down, at a journal that loses your submission, takes a year to get back to you, or sends it to a nasty, unhelpful jerk.)
I'll be lazy and just copy the paragraph from my post, which was a list of five ideas for improving peer review on the basis of my most recent revise-resubmit-reject experience, into this thread:
1) Allow simultaneous submission.
This is a no-brainer. If editors and reviewers know that they are competing for the best articles, the process will move more quickly. Scholars will have some options about where to place their work. And those whose work is difficult to place because it falls between disciplinary boundaries or deals with unusual groupings of languages or types of texts won't have to gamble six months away at the mercy of a single journal that may see the wisdom in her work but may just as easily err on the side of intellectual conservatism. For people whose work is really, truly interdisciplinary, this is especially important.
***
And I guess I'll just add here that yes, I understand the objection about simultaneous submission adding to workload for journals, but they're the ones who are in the positions of power in this equation, so for me that doesn't hold a lot of water.
S.J.:
Sorry no one seems interested in taking up the challenge! But since I just spent the last week re-revising and re-resubmitting what I've taken to calling the Article of Eternal Return, I feel ya.
"Top" journals need to reject a lot of papers--that's what defines "top" journals. The pressure to publish in "top" journals leads to a lot of quality submissions which need to be rejected. So you end up with vague and capricious reviews.
Removing anonymity from reviewers has been tried in medical research, but it didn't work well. The problem is a hyper-competitive job market that forces job and promotion candidates to enter the "top" journal lottery. Reviewers are pricks because that's their job and they can get away with it.