A new Testing Framework for C-Programming Exercises and Online-Assessments

Dieter Pawelczak, Andrea Baumann, and David Schmudde
Faculty of Electrical Engineering and Computer Science, Universitaet der Bundeswehr Muenchen (UniBw M), Neubiberg, Germany

Abstract - Difficulties with learning a programming language are widespread in engineering education. The use of a single integrated programming environment for coding, debugging, automated testing and online assessment lowers the initial burdens for novice programmers. We have developed the Virtual-C IDE especially for learning and teaching the C programming language, with an integrated framework for program visualizations, programming exercises and online assessments. A new enhancement of the IDE is an xUnit-like testing framework that on the one hand allows larger sets of small, test-based programming exercises and on the other hand simplifies the development of programming assignments. The integration of the new testing framework in the assessment system gives students better and more direct feedback on their programming achievements and helps them to find syntactic and semantic errors in their source code.

Keywords: C-programming, teaching programming, unit testing, static code analysis, dynamic code analysis

1 Introduction

Difficulties with learning a programming language are a well-known challenge for students and lecturers in undergraduate courses [1]. As several studies show, continuously practicing programming by starting from small problems shows respectable success, e.g. see [2]. For small classes, training might be part of the lectures, and additional tutors can give a hand to prevent students from falling behind. Although the same could be done for large classes, training during the lecture becomes less effective due to the high diversity of previous knowledge among the students, and finding sufficient and appropriate tutors is an extensive task. Luckily, an advantage of learning programming is that – after managing an initial burden – students can directly grasp what happens by debugging or testing their programs. A direct integration of testing in an adequate IDE further lowers the burdens for novice programmers. Ideally, students work continuously on small programming exercises and receive direct feedback from the IDE with respect to syntactical and semantic issues; finally, students can submit larger programming assignments from that IDE to receive their credit points.

We have developed the Virtual-C IDE¹ (especially designed for learning and teaching the C programming language) over the last years [3]. We have been using the IDE for the automatic assessment and grading of programming assignments for three years now. However, the original aim to have many small accompanying programming exercises for self-learning could not be established yet, due to the high effort of writing tests. In this paper we present a new testing framework which enormously reduces the effort for test development. Although this framework allows students to write their own tests, we do not plan to integrate test writing in the primer C programming course at the moment, as our curriculum covers software testing in the software engineering courses in the major terms.

¹ https://sites.google.com/site/virtualcide/

2 Review of related work

Software testing is a core topic in computer science. Even though software testing is often taught in conjunction with software engineering, it becomes more and more important for programming courses: besides test-first approaches, tool-based testing is widely used today in programming primers [4]. The benefit of testing for students is obvious: with test programs provided by the lecturer, students can train programming outside the classroom and get immediate feedback on their exercises. For the Java programming language, jUnit tests are widespread. The language-independent testing concept is typically named xUnit tests [5]. Based on these, systems for the automated grading of programming assignments such as AutoGrader have evolved [6]. As xUnit testing is aimed more at professional developers, several tools that simplify test specification or the handling of tests have been introduced for teaching programming, for instance Web-CAT [7]. While Web-CAT additionally focuses on tests written by students, other platforms analyze, beyond unit testing, the data and control flow of programs, e.g. ProgTest [8]. AutoGradeMe works independently of unit testing and is based on static code analysis and flow analysis of Java programs [9].

An important aspect is the actual purpose of testing: while assessment tools typically test, evaluate and grade a program source after its submission, xUnit tests provide immediate feedback without formal grading. The new testing framework presented in this paper covers both: an xUnit-like test system for offline training and evaluation of programming exercises as well as an automated assessment system for programming assignments. As the test framework is directly integrated in the IDE, students benefit from a single environment for coding, debugging and testing.

3 Testing framework

The testing framework (TF) is generally based on the well-known xUnit frameworks and their test dialogs [5]. It is syntactically adapted from the Google C++ Testing Framework [10] with regard to the C programming language and the educational scope. A test suite (TS) is a single test file and consists of test cases (TC) based on one or more tests. The main differences (apart from the programming language) compared to the Google C++ Testing Framework are:

• Random test data via the macro ARGR.
• Simplified verification of output parameters with the ARG macro.
• Predefined tests: function and reference function tests (Section 3.4.2), performance tests (Section 3.4.3) and I/O tests (Section 3.4.4).
• No test fixture macro TEST_F. Instead, test fixtures are provided in the test prologue.
• Test and test case names can be string literals.
• Heap and data segment re-initialization per test case for full application tests, i.e. execution of main().
• Dynamic re-linking of C functions (Section 3.4.5).
• Automatic prototyping for functions under test.
• GUI-based selection/de-selection of test cases.
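As a concrete illustration of two of these ideas (random test data and reference function tests), the following plain-C sketch compares a student function against a reference implementation on random arguments. It is not the TF's own notation, which uses macros such as ARGR and ARG and is not shown in this excerpt; the names student_abs() and ref_abs() are hypothetical stand-ins for a function under test and its reference implementation.

    /* Plain-C sketch of the idea behind a reference function test with
     * random test data. Not the TF's syntax; student_abs() and ref_abs()
     * are hypothetical stand-ins. */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int student_abs(int x) { return x < 0 ? -x : x; } /* stand-in SUT */
    static int ref_abs(int x)     { return abs(x); }          /* reference    */

    int main(void)
    {
        srand(42);                       /* reproducible "random" test data */
        for (int i = 0; i < 100; ++i) {
            int x = rand() % 201 - 100;  /* random argument in [-100, 100]  */
            assert(student_abs(x) == ref_abs(x));
        }
        puts("all reference checks passed");
        return 0;
    }

A reference function test of this kind lets the test report show the concrete (random) input on failure without spelling out expected results in the test definition.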
3.1 Constraints for the educational scope

It is a challenge for the automated testing of students' source code on the one hand to support students in finding and fixing programming mistakes in their code, and on the other hand not to reveal too much information about the solution. The use of a reference implementation inhibits disclosure of the test definition. Still, a reference implementation is a common and efficient way to test students' code, compare e.g. [6] [8]. Revealing the test definition (as it is self-evident in software engineering) can help students understand the task description and train their testing skills, see e.g. [4]. The TF in general allows opening and editing test definitions, except for the automatic assessment of programming assignments, i.e. for graded submissions. Theoretically, a student can adapt her/his code according to the test results, thus creating a solution which fits the tests but not the task description. Likewise undesirable is that students add workarounds to their existing solution to fit the tests. This raises the question about the level of detail of the test results. Our former approach was to give a programming description with detailed specifications of the functions and the program's I/O. The tests performed for the assessment ran locally in the Virtual-C IDE with direct feedback. However, the test input was not revealed in the test results. Although most errors could be fixed easily by comparing the test report with the exercise description, students missed the test data. The new testing framework offers the opportunity to reveal test data in the test results, as we introduced random data. This highly in-

[Figure 1. Structure of the testing framework (TF): the C compiler compiles the source under test (SUT) and stores static source information in the SSID database; the test suite compiler (TSC) translates the test suite (testSuite.tsc) into testDef.c, which is compiled and linked with the SUT and the MOPVM-Extensions.lib into a test executable; the static and dynamic tests run in the MOPVM and produce the test results.]

3.2 Structure of the testing framework

Figure 1 shows the structure of the TF. The first step covers compiling the student's source file under test (SUT). In case of a compiler error, the test execution is aborted, as a syntactically correct file is expected. Static source code information is stored on a per-function basis in a database (SSID). The static information covers, for instance, the parameters, the number of integer or float operations, the maximum loop depth, recursion, etc. Afterwards, the test suite is compiled by the test suite compiler (TSC), which generates a C file with the corresponding test cases. This file is compiled and linked with the SUT together with the virtual machine (MOPVM) extensions library. Finally, the test is executed in the MOPVM and the results are displayed in the test dialog, see Section 4.1. The TSC is a stand-alone tool which is able to generate test files according to a test suite description; still, it relies on the SSID and the MOPVM features integrated in the Virtual-C IDE.
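As a rough sketch of the kind of per-function record the SSID might store, the following C struct is a hypothetical illustration derived only from the attributes listed above; the field names and types are assumptions, as the actual database schema is not described in this excerpt.

    /* Hypothetical per-function SSID record; field names and types are
     * assumptions based on the attributes named in the text. */
    typedef struct {
        char name[64];           /* function name                            */
        int  param_count;        /* number of parameters                     */
        char param_types[8][32]; /* textual parameter types, e.g. "int *"    */
        int  int_ops;            /* number of integer operations             */
        int  float_ops;          /* number of floating-point operations      */
        int  max_loop_depth;     /* maximum nesting depth of loops           */
        int  is_recursive;       /* non-zero if the function calls itself    */
    } ssid_function_info;

The TSC could consult such records, for example to verify that a function under test is declared and called with the expected number and types of arguments (cf. Section 3.3).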
3.3 Static tests

The compiler issues typical warnings during its semantic analysis phase with respect to typecasts, unused or uninitialized local variables, dead code, accessing NULL pointers, etc. Another important static test is performed by the TSC, as it checks whether the functions under test (FUT) are properly used in the SUT, e.g. whether the count and types of the arguments are correct. This measure prevents linker errors, which are typically difficult to trace for students. The results of the static tests are printed to the console and visualized in the test dialog, see Section 4.1.
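To make these findings concrete, the following hypothetical student source shows the kind of issues the static tests target; the assumed exercise asks for a function int sum(int a, int b), and the diagnostics in the comments are paraphrased illustrations, not actual Virtual-C IDE output.

    /* Hypothetical SUT illustrating findings of the static tests. The
     * expected FUT signature int sum(int a, int b) and the diagnostic
     * wording are assumptions for illustration. */
    #include <stdio.h>

    int sum(int a)                /* TSC finding: FUT 'sum' defined with 1  */
    {                             /* parameter, expected 2 (int, int)       */
        int result;               /* compiler warning: 'result' is used     */
                                  /* uninitialized                          */
        result += a;
        return result;
    }

    int main(void)
    {
        int unused;               /* compiler warning: unused local variable */
        printf("%d\n", sum(3));
        return 0;
    }

Reporting the signature mismatch at this stage, before linking the generated test code against the SUT, avoids the kind of linker error the text describes as hard for students to trace.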