Caesar: A Social Code Review Tool for Programming Education
Caesar: A Social Code Review Tool for Programming Education

by Mason Tang
S.B., Massachusetts Institute of Technology (2010)

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the Degree of Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology

September 2011

Copyright 2011 Mason Tang. All rights reserved. The author hereby grants to M.I.T. permission to reproduce and distribute publicly paper and electronic copies of this thesis document in whole and in part in any medium now known or hereafter created.

Author: Department of Electrical Engineering and Computer Science, August 22, 2011
Certified by: Robert C. Miller, Associate Professor of Computer Science and Engineering, Thesis Supervisor
Accepted by: Arthur C. Smith, Professor of Electrical Engineering, Chairman, Department Committee on Graduate Theses

Abstract

Caesar is a distributed, social code review tool designed for the specific constraints and goals of a programming course. Caesar is capable of scaling to a large and diverse reviewer population, provides automated tools for increasing reviewer efficiency, and implements a social web interface for reviewing that encourages discussion and participation.
Our system is implemented in three loosely coupled components: a language-specific code preprocessor that partitions code into small pieces, filters out uninteresting ones, runs static analysis, and detects clusters of similar code; an incremental task router that dynamically assigns reviewers to tasks; and a language-agnostic web interface for reviewing code. Our evaluation using actual student code and a user study indicates that Caesar provides a significant improvement over existing code review workflows and interfaces. We also believe that this work contributes a modular framework for code reviewing systems that can be easily extended and improved.

Thesis Supervisor: Robert C. Miller
Title: Associate Professor of Computer Science and Engineering

Acknowledgements

I would like to thank my advisor, Rob Miller, whose guidance has been instrumental in keeping me on track and on task throughout the entire course of this thesis. He has been patient, responsive, and consistently receptive to even the most outlandish ideas to come out of our numerous brainstorming sessions. Most valuably to me, he has taught me a great deal about research, software interfaces, software engineering, and myself.

While Prof. Miller gave me the guidance and direction for my thesis, I would not have made it very far without the constant and unwavering support of my family. My parents and my brother have always been a warm and loving home to me, whether that means a home-cooked meal, a friendly phone call, or even a short e-mail to see how my week has been. I am truly very lucky to have them, and I constantly look to their examples in an effort to become a better person. Equally supportive throughout this project has been my girlfriend, Elena Kon, who, despite living across the country and three timezones away, has consistently managed to be one of my greatest sources of comfort and strength.
She has been endlessly patient with me even at my moments of greatest stress, and has believed in me and my abilities even when I had managed to thoroughly convince myself otherwise. I would also like to thank my roommates and best friends, Greg and Katrina, for giving me advice, calming me down when I needed it, and putting up with some extra dirty dishes when deadlines were looming. They have been, in almost every respect, a second family to me.

This thesis would not have made it very far either without the inspiration, encouragement, and example of the rest of the UID group. It has been my pleasure and honor to find such a group of exceptionally talented researchers to call my colleagues and friends. A special thanks as well to Elena Tatarchenko for joining the project and helping me with the implementation.

Contents

1 Introduction
2 Related Work
  2.1 Code Review in Industry
  2.2 Code Review in Education
  2.3 Crowdsourced Workflows
  2.4 Static Code Analysis
3 Code Preprocessing
  3.1 Partitioning
    3.1.1 Design
    3.1.2 Implementation
  3.2 Chunk Filtering
    3.2.1 Design
    3.2.2 Implementation
  3.3 Automated Commenting
    3.3.1 Design
    3.3.2 Implementation
  3.4 Clustering
    3.4.1 Design
    3.4.2 Implementation
4 Task Routing
  4.1 Design Goals
  4.2 Implementation
5 Reviewing Interface
  5.1 Design Goals
  5.2 Features
    5.2.1 Streamlined Commenting Interface
    5.2.2 Discussion Support
    5.2.3 Code Display
    5.2.4 Activity Streams
    5.2.5 Dashboard
6 Evaluation
  6.1 Code Preprocessor
    6.1.1 Partitioning
    6.1.2 Chunk Filtering
    6.1.3 Automated Commenting
    6.1.4 Clustering
  6.2 Task Routing
  6.3 Reviewing Interface
7 Conclusion
  7.1 Future Work
    7.1.1 Additional Forms of Review Content
    7.1.2 Improved Chunk Filtering
    7.1.3 Improved Chunk Clustering
    7.1.4 Data-Driven Task Routing
    7.1.5 Similar Comment Clustering
    7.1.6 Community Building

List of Figures

5-1 Caesar's reviewing interface
5-2 Review Board comment discussion interface
5-3 Rietveld reviewing interface
5-4 Comment markers on a chunk
5-5 Comment display and discussion interface
5-6 New comment form for writing a comment
5-7 Interface for browsing submission code
5-8 User contribution activity stream
5-9 Dashboard interface with submissions and task assignments
6-1 Checkstyle comment density distributions
6-2 Chunk cluster size distributions

List of Tables

3.1 Checkstyle violations for example code in Listing 3.1
6.1 Assignment statistics
6.2 Pre-filtering chunk size distribution
6.3 Post-filtering chunk size distribution
6.4 Chunks removed per submission by preprocessor filters
6.5 Generated Checkstyle comment counts
6.6 Task assignment student co-reviewer interactions
6.7 Task assignment metrics by submission

Listings

3.1 Example Checkstyle input
4.1 Simplified code for task routing

Chapter 1

Introduction

Software code review is the seemingly simple practice of human examination of source code to improve software quality. It can range from informal "over the shoulder" code reviews to mandatory company-wide review policies supported by specialized code review tools. Regardless of its implementation, code review provides not only the direct effect of catching potential programmer errors, but also the indirect, and often more valuable, effect of spreading developer knowledge and expertise among those involved.

Code review as an existing practice in the software engineering industry and open source community generally assumes: that the unit of code being reviewed is a set of changes against an existing codebase (incremental code review), that reviewer assignment is an ad hoc decision made by the submitter, and that the reviewers involved are themselves developers familiar with the relevant code.

Meanwhile, code review as it exists in industry has found little traction in programming education. While other software engineering best practices from industry, such as the use of static analysis tools, version control systems, unit testing, pair programming, and rapid iterations, have all experienced at least some adoption in the classroom, peer review of student source code has remained relatively unexplored.
Much of this can be attributed to additional constraints encountered in a classroom setting that challenge or violate the assumptions inherited from the standard code review model, such as the possibility of plagiarism, the relative inexperience of students, and the task (or burden)