Please use this identifier to cite or link to this item: http://hdl.handle.net/10373/553

Title: Peer review: beyond the call of duty?
Authors: Griffiths, Peter
Baveye, Philippe C.
Affiliation: University of Abertay Dundee. Scottish Informatics, Mathematics, Biology and Statistics Centre
Keywords: Bibliometrics
Scientific publishing
Peer review
Research
Issue Date: Jan-2011
Publisher: Elsevier
Type: Journal Article
Refereed: peer-reviewed
Rights: This is the accepted manuscript version of this article © Elsevier. Published version available from ScienceDirect at DOI: 10.1016/j.ijnurstu.2009.12.013
Citation: Griffiths, P. and Baveye, P.C. 2011. Peer review: beyond the call of duty? International Journal of Nursing Studies. 48(1): pp.1-2. Available from DOI: 10.1016/j.ijnurstu.2009.12.013
Abstract: The number of manuscripts submitted to most scholarly journals has increased tremendously over the last few decades, and shows no sign of leveling off. Increasingly, a key challenge faced by editors of scientific journals like the International Journal of Nursing Studies (IJNS) is to secure peer reviews in a timely fashion for the manuscripts they handle. We hear from editors of some journals that it is not uncommon to have to issue 10–15 invitations before one can secure the peer reviews needed to assess a given manuscript, and although the IJNS generally fares better than this, it is certainly true that a high proportion, probably a majority, of review invitations are declined. Most often, researchers declining invitations to review invoke the fact that they are too busy to add yet another item to their already overcommitted schedule. Some reviewers respond that administrators at their university or research center are actively discouraging them from engaging in an activity that seems to bear no tangible benefits. Yet, however one looks at it, peer reviewing is a crucial component of the publishing process, and nobody has yet come up with a viable alternative. Therefore, we need to find a way to convince our colleagues to peer review manuscripts more often. This can be done with a stick or with various types of carrots.

One “stick”, occasionally envisaged by editors (e.g., Anon., 2009), is straightforward, at least to explain. For the peer-reviewing enterprise to function well, each researcher should review every year as many manuscripts as the number of reviews he or she receives for his or her own papers. So, someone submitting 10 manuscripts in a given year should be willing to review 20 or 30 manuscripts during the same timeframe (assuming that each manuscript is reviewed by 2 or 3 individuals, as is commonly the case). If this person did not meet the required quota of reviews, restrictions would be imposed on the submission of any new manuscript for publication. Boehlert et al. (2009) have advocated such a “stick” in the case of the submission of grant proposals. However, the implementation of such an automatic accounting of reviewing activities is fraught with difficulties. For one thing, it would not prevent reviewers from defeating the system by writing short, useless reviews just to make up the numbers. To eliminate that loophole, someone would have to assess whether reviews meet minimal standards of quality before they could be counted in the annual or running total. There would also need to be allowances, for example to let young researchers become established in their careers. This raises the prospect of a complex and potentially expensive system somewhat akin to carbon trading, in which credits for reviewing are granted and then traded, with a verification system to ensure that no one cheats.

An alternative approach, instead of sanctioning bad reviewing practices, would be to reward good ones. Currently the IJNS publishes the names of all reviewers annually. Other journals go a step further, for example by giving awards to outstanding reviewers (Baveye et al., 2009). The lucky few who are singled out by such awards see their reviewing efforts validated. But fundamentally, these awards do not change the unsupportive atmosphere in which researchers review manuscripts.
The problem has to be attacked at its root, in the current culture of universities and research centers, where administrators tend to equate research productivity with the number of articles published and the amount of extramural funding brought in. Annual activity reports occasionally require individuals to mention the number of manuscripts or grant proposals reviewed, but these data are currently unverifiable and are therefore generally assumed not to matter at all for promotions or salary adjustments. There may be ways out of this difficulty. All the major publishers have information on who reviews what, how long reviewers take to respond to invitations, and how long it takes them to send in their reviews. All it would take, in addition, would be for the editors or associate editors who receive reviews to assess and record their usefulness. One would then have a very rich data set which, if it were made available to universities and research centers in a way that preserves the anonymity of the peer-review process, could be used fruitfully to evaluate individuals’ reviewing performance and impact. Of course, one would have to agree on what constitutes a “useful” review. Pointing out typos and syntax errors in a manuscript is useful, but not hugely so. Identifying problems and offering ways to overcome them, proposing advice on how to analyze data better, or editing the text to increase its readability are all ways to make more substantial contributions. Generally, one might consider that there is a gradation of usefulness, from reviews focused on finding flaws in a manuscript to those focused on helping authors improve their text. Debate among scientists could result in a reliable set of guidelines on how to evaluate peer reviews.

Beyond making statistics available to decision makers, other options are also available to raise the visibility and recognition of peer reviews (Baveye, 2010). Rightly or wrongly, universities and research centers worldwide now rely more and more on some type of scientometric index, like the h-index (Hirsch, 2005), to evaluate the “impact” of their researchers. In other cases, such as the UK, the basis on which institutions are funded is linked to schemes which have measures such as the impact factor at their core (Nolan et al., 2008). While many researchers see bibliometric analysis as a legitimate tool to explore a discipline’s activities and knowledge sources (see, for example, Beckstead and Beckstead, 2006; Oermann et al., 2008; Urquhart, 2006), previous editorials in the IJNS have noted this trend and expressed disquiet at the distorting effect it could have on academic practice when used to pass judgments on quality (Ketefian and Freda, 2009; Nolan et al., 2008). Many of these indices implicitly encourage researchers to publish more articles, which in turn may deter them from engaging in peer reviewing. Certainly, none of the current indices encompass in any way the significant impact individuals can have on a discipline via their peer reviewing. But one could conceive of scientometric indices that would include some measure of peer-reviewing impact, calculated on the basis of some of the data mentioned earlier.
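For readers less familiar with it, the h-index mentioned above (Hirsch, 2005) can be stated compactly: a researcher’s h-index is the largest number h such that h of his or her papers have each been cited at least h times,

\[
h = \max \{\, k : \text{the researcher has at least } k \text{ papers with at least } k \text{ citations each} \,\}.
\]

By construction, such an index counts only publications and citations, and assigns no weight to reviewing activity.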
Clearly, such developments will not happen overnight. Before any of them can materialize, a necessary first step is for researchers to discuss with their campus administration, or the managers of their research institution, the crucial importance of peer reviewing and the need to have this activity valued in the same way that research, teaching, and outreach are. A debate along these lines is long overdue. Academic peer review is a necessary part of the publication process, but while publication is recognised and valued, peer review is not. Even without the pressures of reward based on publication-based measures, there is the potential for less civic-minded authors to benefit from, but not contribute to, the peer-review system. Current scientometrics actively encourage and reward such behavior in a way that is, ultimately, not sustainable. Once administrators perceive that there is a need in this respect, are convinced that it will not cost a fortune to give peer reviewing more attention, and formulate a clear demand to librarians and publishers to help move things forward, there is hope that this perverse incentive in the current system can be removed. Otherwise, the future of the current model of peer review looks bleak, and we may indeed have to look forward to a complex bureaucratic system in which review credits are traded. For now, although the IJNS can count itself lucky because the problem affects this journal less than many others, in common with other journals we must thank our peer reviewers, who are acting above and beyond the call of duty as it is perceived by many institutions. Without their efforts, journals like this could not maintain their high standards. It is time for us to lend our weight to calls for a wide-ranging debate to ensure that these efforts are properly acknowledged and rewarded when judging the extent and quality of an academic’s scientific contribution.
URI: http://hdl.handle.net/10373/553
ISSN: 0020-7489
Appears in Collections: SIMBIOS Collection

Files in This Item:

File: BayeveIntJouNursStuAcceptedManus2010.pdf (88.71 kB, Adobe PDF)
