WhoDo: Automating Reviewer Suggestions at Scale


Sumit Asthana, Microsoft Research India ([email protected])
B. Ashok, Microsoft Research India ([email protected])
Chetan Bansal, Microsoft Research India ([email protected])
Ranjita Bhagwan, Microsoft Research India ([email protected])
Christian Bird, Microsoft Research Redmond ([email protected])
Rahul Kumar, Microsoft Research India ([email protected])
Chandra Maddila, Microsoft Research India ([email protected])
Sonu Mehta, Microsoft Research India ([email protected])

ABSTRACT

Today's software development is distributed and involves continuous changes for new features, and yet the development cycle has to be fast and agile. An important component of enabling this agility is selecting the right reviewers for every code change - the smallest unit of the development cycle. Modern tool-based code review has proven to be an effective way to achieve appropriate review of software changes. However, the selection of reviewers in these code review systems is at best manual. As software and teams scale, this makes selecting the right reviewers a challenge, which in turn determines software quality over time. While previous work has suggested automatic approaches to code reviewer recommendation, it has been limited to retrospective analysis. We not only deploy a reviewer suggestion algorithm - WhoDo - and evaluate its effect, but also incorporate load balancing into it to address one of its major shortcomings: recommending experienced developers too frequently. We evaluate the effect of this hybrid recommendation + load-balancing system on five repositories within Microsoft. Our results examine various aspects of a commit and how code review affects them. We attempt to quantitatively answer questions that play a vital role in effective code review, and we substantiate our findings through qualitative feedback from partner repositories.

CCS CONCEPTS

• Human-centered computing → Empirical studies in collaborative and social computing; • Software and its engineering → Software configuration management and version control systems; Software maintenance tools; Programming teams.

KEYWORDS

software-engineering, recommendation, code-review

ACM Reference Format:
Sumit Asthana, B. Ashok, Chetan Bansal, Ranjita Bhagwan, Christian Bird, Rahul Kumar, Chandra Maddila, and Sonu Mehta. 2019. WhoDo: Automating reviewer suggestions at scale. In Proceedings of The 27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2019). ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ESEC/FSE 2019, 26–30 August, 2019, Tallinn, Estonia
© 2019 Association for Computing Machinery.
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00
https://doi.org/10.1145/nnnnnnn.nnnnnnn

1 INTRODUCTION

Large software projects have continuously evolving code-bases and an ever-changing set of developers. Making the development process smooth and fast while maintaining code quality is vital to any software development effort. As one approach to meeting this challenge, code review [1, 2] is widely accepted as an effective tool for subjecting code to scrutiny by peers and maintaining quality. Modern code review [3], characterized by lightweight tool-based reviews of source code changes, is in broad use across both commercial and open source software projects [17]. This form of code review provides developers with an effective workflow to review code changes and improve code, and the process has been studied in depth in the research community [5, 6, 11, 13, 17–20, 23].

One topic that has received much attention over the past five years is the challenge of recommending the most appropriate reviewers for a software change. Bacchelli and Bird [3] found that when the reviewing developer had a deep understanding of the code being reviewed, the feedback was "more likely to find subtle defects ... more conceptual (better ideas, approaches) instead of superficial (naming, mechanical style, etc.)". Kononenko et al. [12] found that selecting the right reviewers impacts quality. Thus, many have proposed and evaluated approaches for identifying the best reviewer for a code review [4, 8, 14, 15, 22, 24, 26] (see Section 2 for a more in-depth description of related work). At Microsoft, many development teams have voiced a desire for help in identifying the developers that have the understanding and expertise needed to review a given software change.

The large and growing size of the software repositories at many software companies (including Microsoft, the company involved in the evaluation of our approach) has created the need for an automated way to suggest reviewers [3]. One common approach that several projects have used is a manually defined set of groups that collectively identify experts in an area of code. These groups are used in conjunction with rules which trigger the addition of groups whenever files in a pre-defined part of the system change (e.g., add participants in the Network Protocol Review group whenever a file is changed in /src/networking/protocols/TCPIP/*/). This ensures that the appropriate group of experts is informed of the file change and can review it. However, such solutions are hard to scale, suffer from becoming stale quickly, and may miss the right reviewers even when rules and groups are manually kept up to date.

Motivated by the need for help in identifying the most appropriate reviewers and the difficulty of manually tracking expertise, we have developed and deployed an automatic code review system at Microsoft called WhoDo. In this paper, we report our experience and initial evaluation of this deployment on five software repositories. We leverage the success of previous works like Zanjani et al. [26], which demonstrated that considering the past history of code, in terms of authorship and reviewership with respect to the current change, is an effective way to recommend peer reviewers for a code change. Based on the positive metrics of our evaluation and favorable user feedback, we have proceeded to deploy WhoDo onto many additional repositories across Microsoft. Currently, it runs on 123 repositories and this number continues to grow rapidly.

As discussed in Section 2, reviewer recommendation has been the subject of much research. However, it is not clear which of these systems have been deployed in practice, and the evaluation of such recommendation systems has consisted primarily of historical comparison: determining how well the recommendations match what actually happened and who participated in the review. While such an offline evaluation [9] is useful, it may not provide an accurate picture of the impact of the recommendation system when used in practice. For example, there is an (often implicit) assumption that those who participated in the review were in fact the best people to review the change and that those who were not invited were not appropriate reviewers.

• Provide an evaluation of the recommendation system in live production.
• Present the results of a user study of software developers that used the recommendation system.

2 RELATED WORK

Tool-based code review is the adopted standard in both OSS and proprietary software systems [17], and many tools exist that enable developers to look at software changes effectively and review them. Reviewboard, Gerrit and Phabricator, the popular open source code review tools, and Microsoft's internal code review interface share some common characteristics:

• Each code change has a review associated with it, and almost all code changes (depending on team policies) have to go through the review process.
• Each review shows the associated changes in a standard diff format.
• Each review can be reviewed by any number of developers. Currently, reviewers are added by the author of the change or through manually defined rules.
• Each reviewer can leave comments on the change at any particular location of the diff, pinpointing errors or asking for clarifications.
• The code author addresses these comments in subsequent iterations/revisions, and this continues until all comments are resolved or one of the reviewers signs off.

One potential bottleneck of the above workflow is the addition of code reviewers by authors. This is a manual activity, and several social factors [7], such as developer relations and code knowledge, play a role in it. This system, while effective, is also biased towards interpersonal relations, which can lead to incorrect assignment in cases
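The manually defined path-based rules described in the introduction amount to a pattern-to-group lookup over changed file paths. The sketch below illustrates that mechanism only; the rule table, group names, and function are hypothetical examples, not Microsoft's actual rules or WhoDo's code.

```python
from fnmatch import fnmatch

# Hypothetical rule table: glob pattern over changed file paths -> reviewer group.
# The first pattern mirrors the paper's /src/networking/protocols/TCPIP/*/ example;
# the second is invented for illustration.
REVIEW_RULES = [
    ("src/networking/protocols/TCPIP/*", "Network Protocol Review"),
    ("src/storage/*", "Storage Review"),
]

def groups_for_change(changed_files):
    """Return the set of reviewer groups triggered by a list of changed paths."""
    groups = set()
    for path in changed_files:
        for pattern, group in REVIEW_RULES:
            # fnmatch's '*' matches across '/' too, so one pattern covers subtrees.
            if fnmatch(path, pattern):
                groups.add(group)
    return groups
```

The weakness the paper points out is visible here: `REVIEW_RULES` is static, so every reorganization of the source tree or the team requires someone to remember to edit it by hand.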
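The history-based recommendation idea the paper builds on (Zanjani et al. [26]) - scoring candidates by their past authorship and reviewership of the files being changed - combined with the load balancing WhoDo adds, can be sketched roughly as below. The weights, the division-based load penalty, and all data shapes are assumptions made for illustration; they are not WhoDo's published scoring formula.

```python
from collections import defaultdict

def score_reviewers(changed_files, history, open_reviews, penalty=0.5):
    """Rank candidate reviewers for a change by their past authorship and
    reviewership of the touched files, discounted by current review load.

    history: file path -> list of (developer, role), role in {"author", "reviewer"}.
    open_reviews: developer -> number of reviews currently assigned to them.
    The 1.0/0.5 weights and the penalty term are illustrative choices only.
    """
    scores = defaultdict(float)
    for path in changed_files:
        for dev, role in history.get(path, []):
            scores[dev] += 1.0 if role == "author" else 0.5
    # Load balancing: damp the score of developers who already carry many
    # open reviews, so experienced developers are not recommended every time.
    for dev in scores:
        scores[dev] /= 1.0 + penalty * open_reviews.get(dev, 0)
    return sorted(scores, key=scores.get, reverse=True)
```

Without the penalty, a prolific author of a hot file would top every ranking for changes to it; the discount trades a little relevance for a fairer spread of review load, which is the shortcoming the paper says pure recommendation suffers from.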
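The offline, historical-comparison evaluation the authors critique typically reduces to checking how often the reviewers who actually participated appear among the top-k recommendations. A minimal sketch of such a metric (the name and shape are generic conventions, not taken from the paper):

```python
def top_k_accuracy(recommendations, actual_reviewers, k=3):
    """Fraction of reviews where at least one actual reviewer appears in the
    top-k recommended list. Illustrates the retrospective matching described
    in the text; it inherits the assumption the paper questions, namely that
    the historical reviewers were the right ones."""
    hits = 0
    for recs, actual in zip(recommendations, actual_reviewers):
        if set(recs[:k]) & set(actual):
            hits += 1
    return hits / len(recommendations)
```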