A proposition for incentivizing peer-review
Srikanth Sugavanam, Baijayanta Roy
How do we motivate reviewers to be more actively involved in the peer review process, and also enhance the quality of the peer review they provide?
Critical, unbiased peer review is vital for scientific progress. It helps validate one’s ideas, elicit constructive criticism, initiate dialogue, uncover hidden biases in one’s thinking, and can even lead to new insights. There is no doubt amongst academics about the relevance and significance of this practice. It is viewed as an act of giving back to the community, and academics take part in it voluntarily. However, shifting paradigms in research and its associated practices require us to revisit and reflect on the effectiveness of the peer review framework.
Sadly, the race to enhance one’s h-index (publish or perish), fuelled further by the availability of numerous (in some cases, unsavoury) publishing venues, has led to an explosion of articles submitted for publication, many of sub-par scientific quality. A dearth of reviewers has exacerbated the situation, with journal editors often forced to turn to untested experts. Further, the reviewers themselves are hard-pressed, dividing their time between research, teaching, student supervision, proposal writing, and of course, family. Deadlines slip, and the quality of peer review dips with them. In one study, eight errors were deliberately inserted into a paper, which was then sent to 420 potential reviewers. Only 221 of them (53%) responded. More striking was the fact that “the median number of errors spotted was two, nobody spotted more than five, and 16% did not spot any”!
One way to get academics more actively involved in the peer review process is to provide a suitable incentive. Indeed, the question of how to effectively incentivize peer review has been pondered considerably. Publishing houses do make an effort to recognize the contributions of particularly active reviewers, but the nature of this recognition and the extent of its visibility vary across publishers. For instance, some offer a flat incentive in the form of discounted society memberships or free journal access, while others provide end-of-year certificates recognising committed contributions. Several strategies have been proposed and even adopted, but there is no one-size-fits-all solution yet.
The problem can thus be summarized as follows – how do we motivate reviewers to be more actively involved in the peer review process, and also enhance the quality of the peer review they provide? We feel the answer lies not in the nature of the incentive, but in the way the incentive is delivered to reviewers.
One can assume that academic reviewers are inherently altruistic – a fair assumption, given that they volunteer for the arduous task in the first place. Furthermore, if we consider Maslow’s hierarchy of needs, materialistic incentives address only the four lowest tiers of basic needs, which can safely be taken as largely fulfilled for any established academic. What most likely remains unfulfilled is the topmost need – self-actualization.
Here, we propose the establishment of a reviewer-centric metric, which we henceforth call the hR-index, to increase the visibility of an academic’s contribution to the peer review process. Under this metric, each reviewer is assigned an hR-score based on the reviews he or she performs. The scoring is similar to the h-index, with a slight modification – here, an hR-score of h indicates that the reviewer has performed at least h reviews in which the final decision made on the submission agreed with the reviewer’s recommendation.
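To make the definition concrete, here is a minimal sketch in Python of how such a score could be computed from a reviewer’s history. The record structure and field names (recommendation, final_decision) are purely illustrative assumptions, not part of any existing system.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One completed review; the field names are illustrative assumptions."""
    manuscript_id: str
    recommendation: str   # the reviewer's recommendation, e.g. "accept" or "reject"
    final_decision: str   # the editor's final decision on the submission

def hr_score(history):
    """The hR-score as proposed: the number of reviews in which the
    reviewer's recommendation agreed with the final editorial decision."""
    return sum(1 for r in history if r.recommendation == r.final_decision)

# A reviewer with three completed reviews, two concordant with the outcome:
history = [
    ReviewRecord("ms-001", "accept", "accept"),
    ReviewRecord("ms-002", "reject", "accept"),
    ReviewRecord("ms-003", "reject", "reject"),
]
print(hr_score(history))  # prints 2
```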
Tying the score to the final editorial decision ensures that it reflects the reviewer’s academic expertise. A high hR-index would then imply that the reviewer has taken part in several peer reviews in which his/her views had an impact on the field. The hR-index thus helps quantify the impact of the reviewer within his/her field. Of course, the h-index addresses this point too. However, we believe the adoption of hR-index scoring will go beyond functioning as a metric, and will help improve the quality of peer review itself.
In our suggested approach, the hR-index is incremented only if the reviewer’s recommendation agrees with the final decision made on the submitted work. A potential pitfall exists in this regard – in an effort to increase their hR-scores, reviewers may be tempted to play it safe, conform to established ideas, and reject radical yet completely valid work (consider Galileo’s plight, for instance). This can be avoided by including experts, i.e. academics with a high hR-index, in the reviewer panel. The important point is that all other reviewers on the panel are made aware of this inclusion. The effect can be envisioned as a positive spin on the Prisoner’s dilemma – a reviewer with a relatively lower hR-index will trust that the expert reviewers will perform their task with due diligence, and will be motivated to engage more fully with the review. Such a strategy may also be used to test the quality of novice reviewers. This translates directly into an improvement in the quality of the peer review provided.
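A rough sketch of how such a disclosed-expert panel could be assembled follows; the data shapes and the expert threshold of 20 are arbitrary illustrative assumptions, not a prescription.

```python
import random

def assemble_panel(candidates, size=3, expert_threshold=20):
    """Assemble a review panel guaranteed to contain at least one
    high-hR expert. 'candidates' is a list of (reviewer_id, hr_score)
    pairs; the threshold of 20 is an arbitrary illustrative cut-off."""
    experts = [c for c in candidates if c[1] >= expert_threshold]
    if not experts or len(candidates) < size:
        raise ValueError("cannot form a panel that includes an expert")
    expert = random.choice(experts)
    rest = [c for c in candidates if c is not expert]
    panel = [expert] + random.sample(rest, size - 1)
    # Every panellist would then be informed that a high-hR expert is
    # present (though not necessarily which one), creating the
    # positive Prisoner's-dilemma dynamic described above.
    return panel
```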
The proposed reviewer-centric hR-index will go beyond materialistic incentives, serving to identify the extent of an individual academic’s participation in the peer review process and increasing his/her visibility and recognition within the community. This increased visibility will attract more academics to peer review. The nature of the scoring will expand the pool of available, reliable reviewers, which will in turn feed back positively into the quality of peer review, and of science in the process. Indeed, the reliability of peer review itself increases as more reviewers take part. The incentive to perform better reviews will also motivate academics to undergo formal training in peer review – a dire need for both fresh and established reviewers – further improving review quality. Such an index would also be useful for identifying high-quality expert reviewers for the evaluation of research proposals and, more crucially, for the establishment of reliable and trustworthy governmental and inter-governmental panels that make decisions on global change, where the stakes are much higher.
The h-index is often used as a measure of a scientist’s standing within the community and beyond. However, the invaluable time academics invest in peer review largely goes unnoticed. The hR-index proposed above not only recognises this endeavour, but has the potential to motivate reviewers to engage more actively in peer review, improving its quality and advancing science in the process. The challenge now lies in adopting the practice across publishing houses, and in centralising it, as a repository of hR-scores would then have to be maintained – not a straightforward task, yet not beyond the grasp of current technology.
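As a rough illustration of what such a centralised repository might look like, here is a toy in-memory sketch. The schema, the reporting interface, and the use of a persistent identifier such as an ORCID iD to key reviewers are all our assumptions; no such service currently exists.

```python
from collections import defaultdict

class HRRegistry:
    """A toy in-memory registry of hR-scores, keyed by a persistent
    reviewer identifier (an ORCID iD, say). A real repository would
    need authenticated reporting from journals, not shown here."""

    def __init__(self):
        self._scores = defaultdict(int)

    def report_review(self, reviewer_id, recommendation, final_decision):
        # A journal reports a completed review; the score is
        # incremented only when recommendation and outcome agree.
        if recommendation == final_decision:
            self._scores[reviewer_id] += 1

    def hr_index(self, reviewer_id):
        return self._scores[reviewer_id]

registry = HRRegistry()
registry.report_review("0000-0002-XXXX-XXXX", "accept", "accept")  # placeholder iD
print(registry.hr_index("0000-0002-XXXX-XXXX"))  # prints 1
```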
Please feel free to leave your thoughts and views in the comments section below.
About the authors
Srikanth Sugavanam is a post-doctoral researcher in the area of photonics and fibre lasers. When not tinkering away in the lab, or preoccupied with EU deliverables, he indulges in making computer music and planning the next hike. You can reach him on twitter: @Srikanthislive.
Baijayanta is an audio analyst and senior developer at GP Robotics. He is also a freelance sound designer, location sound engineer for audiovisual productions, and a music producer, and has worked professionally in the independent film circuit for quite some time. He is also interested in AI, cognitive psychology, and the philosophy of science. You can reach him on twitter: @BaijayantaRoy
A proposition for incentivizing peer-review by Srikanth Sugavanam, Baijayanta Roy is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.