HackEd: A Pedagogical Analysis of Online Vulnerability Discovery Exercises

Daniel Votipka, Tufts University ([email protected])
Eric Zhang and Michelle L. Mazurek, University of Maryland ([email protected], [email protected])

Abstract—Hacking exercises are a common tool for security education, but there is limited investigation of how they teach security concepts and whether they follow pedagogical best practices. This paper enumerates the pedagogical practices of 31 popular online hacking exercises. Specifically, we derive a set of pedagogical dimensions from the general learning sciences and educational literature, tailored to hacking exercises, and review whether and how each exercise implements each pedagogical dimension. In addition, we interview the organizers of 15 exercises to understand challenges and tradeoffs that may occur when choosing whether and how to implement each dimension.

We found that hacking exercises were generally tailored to students' prior security experience and supported learning by limiting extraneous load and establishing helpful online communities. Conversely, few exercises explicitly provide overarching conceptual structure or direct support for metacognition to help students transfer learned knowledge to new contexts. Immediate and tailored feedback and secure development practice were also uncommon. Additionally, we observed a tradeoff between providing realistic challenges and burdening students with extraneous cognitive load, with benefits and drawbacks at any point on this axis. Based on our results, we make suggestions for exercise improvement and future work to support organizers.

I. INTRODUCTION

Historically, the security community has used online hacking exercises to provide practical education, exposing participants to a variety of vulnerabilities and security concepts. In these exercises, participants demonstrate their security concept understanding by finding, exploiting, and sometimes fixing vulnerabilities in programs.
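To make this concrete, the sketch below is a constructed toy example (ours, not drawn from any exercise we reviewed) in the style of a beginner challenge: a flag-checking routine with a deliberately planted code-injection bug that a participant must find, exploit, and could then fix.

```python
# Toy CTF-style challenge (hypothetical). The goal: make check_guess()
# return True without knowing the flag.
FLAG = "flag{toy_example}"

def check_guess(guess: str) -> bool:
    # VULNERABILITY: the guess is passed to eval(), so it is executed
    # as code rather than compared as data. Submitting the text "FLAG"
    # evaluates to the flag itself, and the check passes. The fix is a
    # plain comparison: return guess == FLAG
    return eval(guess) == FLAG

if __name__ == "__main__":
    print(check_guess("FLAG"))           # True: the exploit
    print(check_guess("'flag{wrong}'"))  # False: an honest, wrong guess
```

Real challenges embed such bugs in larger programs, remote services, or compiled binaries, but the find-exploit-fix loop is the same.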
Exercises offer discrete practice sets that can be undertaken in a modular fashion, similar to the practice problems commonly included at the end of each chapter in mathematics textbooks. In fact, hacking exercises are commonly considered useful educational tools, with security experts often reporting that they rely on them for their education [1], bug bounty platforms directing those interested in security to start with these exercises [2], [3], and a significant amount of recent security-education work focusing on creating new exercises [4]–[10]. Further, prior work offers some evidence that hacking exercises can provide valuable immediate feedback to learners in academic settings [7], [11]–[14].

However, analysis of hacking exercises as educational tools is limited. First, many studies consider only a few exercises [4]–[10], [13], [15], [16], limiting understanding of the broad set of popular exercises. Prior work also focuses on a few specific measures of learning and engagement [5]–[8], [12], [17], making the evidence narrow. In particular, learning factors that are difficult to control for and measure are rarely considered. Overall, exercise organizers have limited guidance for building effective exercises, educators do not know which exercises provide the most effective learning, and researchers do not have a broad view of the landscape of current exercises. As a step toward expanding this analysis, we review online hacking exercises to address two main research questions:

• RQ1: Do currently available exercises apply general pedagogical principles suggested by the learning sciences literature? If so, how are these principles implemented?
• RQ2: How do exercise organizers consider which principles to implement?

To answer these questions, we performed an in-depth qualitative review of 31 popular online hacking exercises (67% of all online exercises we identified). As part of our analysis, we completed a sample of 313 unique challenges from these 31 exercises. We evaluated each exercise against a set of recommended pedagogical principles grounded in learning theory [18], [19]. We base our approach on previous curriculum evaluation efforts [20], tailoring the pedagogical principles we use for applicability to hacking exercises. Further, we interview the organizers of 15 exercises to understand how they consider which principles to implement.

We found that no exercise implemented every pedagogical principle, but most principles were implemented by at least some exercises, some in unique and creative ways. Notable exceptions include that many exercises do not provide structure to help students organize knowledge, or feedback to guide their progress through learning objectives. Few organizers had considered metacognition, i.e., helping students consider what and how much they have learned at a high level. We also found that some pedagogical principles are in tension with each other — such as balancing difficulty with realism — while others are in tension with the competitive origin of many exercises. Finally, we find that community participation brings many benefits, but must be carefully managed to ensure educational structures are maintained. From these results, we distill recommendations for improving exercises and future work to support organizers.

II. METHODS

To understand the current landscape of online hacking exercises, we performed a two-phase study: a qualitative review of popular online exercises and interviews with the organizers of these exercises. Here, we discuss how we selected exercises for review, our review process, and our interview protocol.

A. Exercise Selection

There are many kinds of resources available to security students, such as vulnerability write-ups, certifications, academic coursework, and books. To limit our inquiry's scope, we focus on online educational exercises—commonly recommended by security experts [1]—which meet the following criteria:

• Educational – Because we are evaluating the educational benefit of each exercise, we only include exercises which explicitly state education as a goal. We do not consider competitions, such as the DefCon Qualifiers, whose goal is to identify the "best" hackers.
• Hands-on – Exercises must include a hands-on component requiring students to actively practice security concepts. This component could be the central focus of the exercise—as in many CTFs—or auxiliary, e.g., presented after a series of associated lectures.
• Online and publicly accessible – We focused on online exercises so we could analyze them by actually participating, rather than making possibly incorrect assumptions based on an offline exercise's description.
• Popular – We opted to focus on popular exercises students are most likely to participate in. To estimate a site's popularity, we used its Tranco rank—a secure method for ranking sites based on user visits [21]—as of October 15th, 2019. Because Tranco only tracks the top one million sites, we used Alexa rankings if no Tranco ranking was available. Each site's rank is given in Table I. This lookup reduces to a simple fallback, sketched below.
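For illustration, the following sketch shows the fallback lookup. The file names and in-memory layout are our assumptions, though both lists are in fact distributed as CSV files of (rank, domain) pairs.

```python
import csv
from typing import Optional

def load_ranks(path: str) -> dict[str, int]:
    """Load a rank list stored as CSV rows of (rank, domain)."""
    with open(path, newline="") as f:
        return {domain: int(rank) for rank, domain in csv.reader(f)}

# Hypothetical local snapshots of the two published lists.
tranco = load_ranks("tranco_2019-10-15.csv")
alexa = load_ranks("alexa_top1m.csv")

def popularity_rank(domain: str) -> Optional[int]:
    # Prefer Tranco; fall back to Alexa when the site is outside
    # Tranco's top one million. None means it appears in neither list.
    return tranco.get(domain, alexa.get(domain))
```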
Because we focused on explicitly educational and popular exercises, many of the exercises we reviewed had funding support, either by offering a paid version of the exercise (e.g., HackEDU, HackTheBox, Mr. Code, Vulnhub), receiving funding through a parent company (e.g., Google supports gCTF and the SANS Institute supports Girls Go CyberStart), or through grant funding (e.g., picoCTF, BIBIFI). As a result, several organizers we interviewed could dedicate time and resources to improving students' educational experience, which is not necessarily common among CTFs run by professionals in their spare time or university student clubs [22].

1) Exercise Types: While almost all the exercises we identified were joinable year-round, many were initially designed as a live, short-term competition. We expected the initial participation context to affect exercise structure, so for comparison purposes, we assigned each exercise to one of two categories:

• Synchronous (N=13) – Designed for simultaneous participation over a short time period (i.e., a few days or weeks). This includes most capture-the-flag (CTF) competitions. Challenges from these exercises are made available after the competition for more students to try at their own pace.
• Asynchronous (N=18) – Designed for participation at any time at the student's pace; often referred to as "wargames."

2) Sample Selection: We identified 45 exercises meeting our criteria (18 Synchronous, 27 Asynchronous). To balance completeness with manual effort, we sampled about 66% for in-depth review. To focus on exercises reaching the most participants, we began with the top 30% (by popularity rank) in each group. We then randomly sampled the remaining exercises until we had selected about 66% of each group; we included these less-visited exercises to account for those still growing in popularity. The final list of exercises is given in Table I. Note that some authors are affiliated with BIBIFI, which was randomly selected during this phase. We did not exclude it, to ensure representation of attack-defense-style exercises. To expand this category beyond BIBIFI, we purposively added one more exercise (iCTF) and worked with its organizers to enable analysis despite its highly synchronous (not typically joinable at any time) structure, bringing the total set of reviewed exercises to 31.
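This procedure amounts to a two-stage stratified sample within each group. The sketch below is our reconstruction; the exact rounding behavior is an assumption, chosen so the quotas match the counts reported above.

```python
import math
import random

def sample_group(ranked: list[str], top_frac: float = 0.30,
                 target_frac: float = 0.66) -> list[str]:
    """Sample one group, ordered most- to least-popular: keep the top
    `top_frac` outright, then fill the rest of the ~`target_frac` quota
    uniformly at random from the remaining exercises."""
    n_top = math.ceil(top_frac * len(ranked))
    n_total = math.ceil(target_frac * len(ranked))
    return ranked[:n_top] + random.sample(ranked[n_top:], n_total - n_top)

# Under these assumptions: 27 asynchronous -> 9 top + 9 random = 18;
# 18 synchronous -> 6 top + 6 random = 12, and the purposively added
# iCTF brings the synchronous total to the 13 reviewed.
```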
B. Pedagogical Review (RQ1)

To identify pedagogical principles, we drew on previous efforts to synthesize major theoretical and empirical learning sciences and education research findings into actionable principles [18].