Proceedings of the 35th Annual Conference of The Pennsylvania Association of Computer and Information Science Educators 2020
Computing on the Edge
Hosted by West Chester University of Pennsylvania

TABLE OF CONTENTS

Acknowledgements
Best Paper Awards

Faculty Papers: Refereed
Reflection-Based Precise Auto-Grading, Chad Hogg
Teaching to Learn and Learning to Teach an Introductory Programming Course Using Learning Theories, Pratibha Menon
An Examination of the Ethics of Downloadable Content in Video Games, Brandon Packard
Solving a Minesweeper Board by Visually Parsing It, Brandon Packard
Assessing a Scholarship Program for Underrepresented Students in Computer Science & Information Technology, Patrick Earl, Lisa Frye, and Dale Parson
A Distributed Model-View-Controller Design Pattern for a Graphical Remote Control of a Multi-User Application, Dale E. Parson
Design and Implementation of 2D Game Sprites in Java, Krish Pillai

Undergraduate Student Papers: Refereed
CoNFET: An English Sentence to Emojis Translation Algorithm, Alex Day, Chris Mankos, Soo Kim, and Jody Strausser
Improving Code Readability by Transforming Loops to Higher Order Function Constructs, Anthony DePaul and Chen Huo
Singular Value Decomposition: Implementation, Analysis, and Optimization, Maxsim Melnichenko and William Killian
Instrument Identification Using Convolutional Neural Networks, Kevin Rutter and C. Dudley Girard
Does a News Article Exist Within a Web Page?, Adam Standke and C. Dudley Girard

Undergraduate Student Abstracts
Case Study: Headcount Survival Analytics for Employee Retention Using SAS, Janet Poague and Jay Annadatha

Birds of a Feather
Auto-Grading, William Killian and Chad Hogg

Student Posters
The First Journey: A Story Driven Adventure Game Written in 8086 Assembly, Devin Kaltenbaugh and Brandon Packard
A Graphical, Orbital Gravity Simulator for Planetarium, Angela Kozma and Todd Lichtenberg
Board of Directors
Author Index

GENERAL NOTE

This was the year of the conference that wasn't. The COVID-19 virus virtually shut down all face-to-face instruction at the PASSHE schools, resulting in the cancellation of the PACISE 2020 conference. Hosting a virtual conference was considered, but unfortunately did not come about. Nevertheless, prior to the onset of the virus, people had submitted papers for peer review, abstracts for presentations, posters, and birds-of-a-feather proposals. These submissions underwent the review process, and authors were notified of acceptance. Even though these folks were not able to present their work, their efforts will not fall by the wayside. These proceedings include all of the submissions that were accepted for the 2020 conference.

ACKNOWLEDGEMENTS

The PACISE Board of Directors and the PACISE Editorial Board would like to thank the following for their contributions to the PACISE 2020 conference.

Program Committee at West Chester University:
Conference Chair: Richard Burns
Assistant Conference Chair: Liu Cui
Programming Competition Chair: Linh Ng
Security Competition Chair: Si Chen
Robotics Competition Chair: C. Dudley Girard (Shippensburg)
Department Secretary/Registration: Kathleen Barimani

PACISE Editorial Board:
Lisa Kovalchick, California University
Robert Montante, Bloomsburg University
Dave Mooney (Chair), Shippensburg University

Reviewers:
Alawya Alawami, Clarion University
Jay Annadatha, Clarion University
Alice Armstrong, Shippensburg University
Jeyaprakash Chelladurai, East Stroudsburg University
Weifeng Chen, California University
MD Minhaz Chowhurdy, East Stroudsburg University
Dudley Girard, Shippensburg University
Nazli Hardi, Millersville University
Chad Hogg, Millersville University
Olaniyi Samuel Iylola, California University
Mark Jones, Lock Haven University
William Killian, Millersville University
Soo Kim, Clarion University
Sangkook Lee, Shippensburg University
Pratibha Menon, California University
Brandon Packard, Clarion University
Dale Parson, Kutztown University
Loreen Powell, Bloomsburg University
Stephanie Schwartz, Millersville University
Dylan Schwesinger, Kutztown University
Yili Tseng, Slippery Rock University
Carol Wellington, Shippensburg University

General Thanks: to everyone who contributed submissions to the conference, and especially to the conference committee at West Chester, who had to deal with a very adverse situation.

Dave Mooney, Chair, PACISE 2020 Editorial Review Board

BEST PAPER AWARDS

Faculty: Chad Hogg, Millersville University
CREATING PRECISE GRADING TESTS WITH REFLECTION

Undergraduate student: Alex Day, Chris Mankos, Soo Kim, and Jody Strausser, Clarion University
CoNFET: AN ENGLISH SENTENCE TO EMOJIS TRANSLATION ALGORITHM

REFEREED FACULTY PAPERS

REFLECTION-BASED PRECISE AUTO-GRADING
Chad Hogg
Millersville University
[email protected]

ABSTRACT

Automatic testing of student code is frequently used in programming-intensive computer science courses. However, typical techniques for writing tests cannot generate detailed and sensible feedback about the cause of a test failure, which introductory students need. This paper presents a technique using reflection to generate suites of tests for quickly determining the cause of every failure in a student-written class, provided that the design of the class is exhaustively specified.

1. Introduction

Students in early, programming-intensive computer science courses require frequent opportunities to practice the skills they are learning and timely, detailed feedback about the quality of their work. Creating good feedback is a very time-intensive task, as there are both many correct ways to solve a problem and infinitely many incorrect ways. Explaining why and how a program is broken requires both sufficient testing to discover all deficiencies and sufficient debugging to determine the root cause of each. As enrollments increase, it becomes infeasible to do this manually. To simplify this process, many faculty members rely on automated testing, which they either run themselves on student submissions while grading or make available to students through an auto-grading system so that students can follow a submit-improve-resubmit workflow.

Unit testing suites such as JUnit provide a quick and easy way to write test cases that are good at discovering deficiencies. When a professional developer encounters a failed test, she can read the source code of the test to determine the action that caused the test failure and the context in which that action was taken. Furthermore, she can attach a debugger to the test and step through it to determine exactly where and when anomalous behavior begins. This is not true for a student submitting their assignment to an auto-grading system, and it is too time consuming for an instructor grading tens or hundreds of assignments. Thus, in the grading environment these kinds of tests are not effective at quickly discovering the root causes of deficiencies.

A student submitting their code to an auto-grading system does not have direct access to the test case, only to a report of whether the test fails or succeeds. They will only get meaningful feedback about the failure if the designer of the test has built that feedback into it. Because novices make creative kinds of errors that experienced programmers would not even have considered within the realm of possible outcomes, it can be difficult to predict and plan for all potential failure modes.

It is advisable to design tests for students around the primary goal of isolating root causes and explaining the exact state of the system and the action taken in each test. This requires that only one action be taken per test, which in turn requires a mechanism for setting up a desired context without using the student code directly. It also requires the ability to observe all state changes within an object to determine whether or not the action is achieving its post-conditions and maintaining its invariants.

Fortunately, student assignments, particularly in introductory courses, are often fully specified and require students to solve problems in very specific ways. As such, it is not unreasonable to require that every student's class use the same fields, with the same names and types, and the same methods, with the same names and signatures. This means that reflection can be used to set up explicit, isolated test cases for each use case of every method in a class. Reflection also provides other benefits, such as the ability to selectively run tests against only the parts of a class that a student has written so far.

Using reflection to access private members of a class for testing purposes is not a novel idea, but neither is it in widespread use. The technique is not appropriate for professional developers writing production code, because in that environment tests should only verify the external, public interfaces of classes while ignoring their implementation details [1]. The contribution of this paper is to explain why this technique can be appropriate for testing introductory student assignments, and to provide a framework for doing so in Java.

2. An Example Assignment: Linked Sequence

We will be using an assignment from a CS2 course as an example of this technique. In it, students implement the Sequence abstract data type, as described by Michael Main [2]. This data type is an ordered collection of values with a single embedded pseudo-iterator called the cursor and the following operations:

3. Traditional Tests

The simplest tests for this class would construct an object, add and remove elements while
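To make the introduction's idea concrete, here is a minimal sketch of using reflection to set up private state directly, take a single action, and then inspect private state to check a postcondition. The StudentCounter class and its field are hypothetical stand-ins for a graded submission; this is not the paper's actual framework.

```java
import java.lang.reflect.Field;

// Hypothetical stand-in for a student-written class (not from the paper).
class StudentCounter {
    private int count = 0;
    public void increment() { count = count + 1; }
}

public class ReflectionTestSketch {
    public static void main(String[] args) throws Exception {
        StudentCounter submission = new StudentCounter();

        // Set up the desired context directly, without calling other
        // student methods that might themselves be broken.
        Field count = StudentCounter.class.getDeclaredField("count");
        count.setAccessible(true); // bypass private access, for grading only
        count.setInt(submission, 41);

        // Take exactly one action per test ...
        submission.increment();

        // ... then observe private state to check the postcondition,
        // reporting the exact context in the failure message.
        int observed = count.getInt(submission);
        if (observed == 42) {
            System.out.println("PASS: increment() raised count from 41 to 42");
        } else {
            System.out.println("FAIL: after increment() on count == 41, "
                    + "expected 42 but found " + observed);
        }
    }
}
```

Because the field is set and read reflectively, a failure message can state both the starting state and the single action taken, which is exactly the kind of feedback the paper argues novices need.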