RESEARCH PROGRAM PEER REVIEW: PURPOSES, PRINCIPLES, PRACTICES, PROTOCOLS

DR. RONALD N. KOSTOFF
OFFICE OF NAVAL RESEARCH
800 N. QUINCY STREET
ARLINGTON, VA 22217
INTERNET: [email protected]
PHONE: 703-696-4198
FAX: 703-696-4274

(THE VIEWS IN THIS REPORT ARE SOLELY THOSE OF THE AUTHOR AND DO NOT REPRESENT THE VIEWS OF THE DEPARTMENT OF THE NAVY)

TABLE OF CONTENTS

I. ABSTRACT
II. EXECUTIVE SUMMARY - PEER REVIEW PRINCIPLES
III. INTRODUCTION, DEFINITIONS, AND BACKGROUND
IV. PEER REVIEW PURPOSES AND PRINCIPLES
V. PEER REVIEW PRACTICES
VI. PEER REVIEW PROTOCOLS
VI-A. APPENDIX I - REVIEW PANEL SELECTION APPROACHES
VI-B. APPENDIX II - PROGRAM PEER REVIEW PROTOCOL
VI-C. APPENDIX III - USE OF PUBLISHED PAPERS IN RESEARCH PROGRAM EVALUATION
VI-D. APPENDIX IV - NETWORK-CENTRIC PEER REVIEW
VI-E. APPENDIX V - SIGNIFICANT PEER REVIEW RESOURCES
    V-1. PROPOSAL PEER REVIEW
    V-2. PEER REVIEW GUIDE
    V-3. BIOMEDICAL PEER REVIEW CONGRESSES
VI-F. LARGE AGENCY PEER REVIEW
VI-G. DETAILED PEER REVIEW PROTOCOL
VII. BIBLIOGRAPHY AND RELATED REFERENCES

KEYWORDS: peer review; GPRA; evaluation criteria; metrics; research assessment; retrospective studies; roadmap; data mining; text mining; research merit; research approach; blind review; research evaluation; research impact; bibliometrics; decision aids.

I. ABSTRACT

The purposes, principles, practices, and protocols of research program peer review are described.
While the principles are fundamentally generic, and apply to peer review across the full spectrum of performing institutions, as well as to manuscript, proposal, and program peer review, the focus of this report is peer review of proposed and ongoing research programs in federal agencies. Following the self-contained Executive Summary of factors for high-quality peer reviews, the report addresses potential implications of the implementation of the Government Performance and Results Act of 1993 for federal agency research program peer review practices. The report then describes the strengths and weaknesses of major peer review components and issues, including: Objectives and Purposes of Peer Review; Quality of Peer Review; Impact of Peer Review Manager on Quality; Selection of Peer Reviewers; Selection of Evaluation Criteria; Secrecy (Reviewer and Performer Anonymity); Objectivity/Bias/Fairness of Peer Review; Normalization of Peer Review Panels; Repeatability/Reliability of Peer Review; Effectiveness/Predictability of Peer Review; Global Data Awareness; Costs of Performing a Peer Review; Ethical Issues in Peer Review; and Alternatives to Peer Review. The report then presents different federal agency peer review practices, along with sample protocols and processes for conducting a successful research program peer review. Some peer review variants, such as the Science Court and Network-Centric Peer Review, are described, and research requirements to improve peer review are discussed. The final section is an extensive bibliography of over 3000 references, which includes not only the text references but also related references for further reading.

II. EXECUTIVE SUMMARY - PEER REVIEW PRINCIPLES

The Government Performance and Results Act of 1993 (GPRA, 1993) requires federal agencies to develop strategic plans, annual performance plans, and performance measures to gauge progress toward their planned targets. A precursor paper in Science (Kostoff, 1997b) recommends that peer review be the dominant metric GPRA applies to basic research. However, for research program peer review to be used effectively and efficiently for GPRA, it must be understood, developed, and standardized well beyond its present status.

Program peer review should also be integrated seamlessly into an organization's business operations in general, and into its evaluation processes in particular. It should not be incorporated into management tools as an afterthought, which is today's common practice, but should instead be part of the organization's front-end design. This allows optimal matching among the requirements for generating, gathering, and reviewing data, and helps avoid the present practice of force-fitting evaluation criteria and processes to whatever data are produced from non-evaluation requirements.

This report focuses on the underlying principles necessary for high-quality peer review. Although targeted toward research program peer review, most of the principles this report enunciates apply to many kinds of peer review. The author's experience, based on examining the peer review literature, conducting many peer review experiments (e.g., Kostoff, 1988), and managing hundreds of peer reviews, leads to the following conclusions about the factors critical to high-quality peer review (Kostoff, 1995, 1997a, 2001b):

1) Senior Management Commitment

Senior management's commitment is the most important factor in the quality of an organization's S&T evaluations.
The relevant senior positions are those with evaluation decision authority, and their most significant contributions lie in the rewards and incentives they institute to encourage high-quality evaluation. Senior managers' commitment should include not only assurance that a credible need for the evaluation exists, but also a strong desire that the evaluation be structured to address that need as directly and completely as possible.

2) Evaluation Manager Motivation

The second most important factor is the operational evaluation manager's motivation to perform a technically credible evaluation. The manager: a) sets the boundary conditions and constraints on the evaluation's scope; b) selects the specific evaluation techniques to be used; c) selects the methodologies by which these techniques will be combined, integrated, and interpreted; and d) selects the experts who will interpret the data these techniques produce. In particular, if the evaluation manager does not follow, consciously or unconsciously, the highest standards in selecting these experts, the evaluation's final conclusions could be substantially determined before the evaluation process even begins. All the evaluation processes considered (peer review, retrospective studies, metrics, economic studies, roadmaps, data mining, and text mining) need experts, and this conclusion about expert selection holds for every one of them.

3) Statement of Objectives

The third most important factor is the transmission, to all participants, of a clear and unambiguous statement of the review's objectives and conduct, and of its potential impact and consequences. This statement should occur at the very beginning of the review process.

4) Competency of Technical Evaluators

The fourth most important factor is the quality of the technical evaluators themselves, specifically their role, objectivity, and competency. While the requirements for experts in peer review, retrospective studies, roadmaps, and text mining are obvious, there are equally compelling reasons for using experts in metrics-based evaluations. Metrics should not be used as a stand-alone diagnostic instrument (Kostoff, 1997b). Like lab tests in a medical exam, even quantitative metrics results from suites of instruments require expert interpretation to be placed in proper context and gain credibility. Evaluation resembles diagnosis more than it resembles accounting. The metrics results should make a subordinate contribution to an effective peer review of the technical area being examined. Thus, this fourth critical factor consists of the evaluation experts' competence and objectivity. All the experts should be technically competent in their subject area, and the competence of the total evaluation team should cover the multiple