COBOLd: Gobblin' Up COBOL Bugs for Fun and Profit


COBOLd: Gobblin' Up COBOL Bugs for Fun and Profit

squaresLab*                                  Mr. squaresLab SpouseMan†
Institute for Software Research             The Private Sector
School of Computer Science                  Pittsburgh, PA 15213
Carnegie Mellon University
Pittsburgh, PA 15213

ABSTRACT

The cost and ubiquity of software bugs has motivated research in automated program repair (APR), a field dedicated to the automatic triage and patching of software defects. APR has typically focused on popular languages such as C and Java, but COBOL developers have suffered six decades of neglect and deserve a repair tool too. We present COBOLd, the first tool for automatically repairing buggy COBOL programs. We demonstrate COBOLd's effectiveness on a COBOL reimplementation of the infamous Zune leap-year bug. We also argue that "we got this" and other researchers should stay away so that we can fix all the COBOL bugs ourselves and thus be filthy, stinking rich.

CCS CONCEPTS

• Software and its engineering → Software evolution; Maintaining software; Search-based software engineering; • COBOL repair → $$$$$$$$$$$$$$$$$$$$$$$$$$$$;

KEYWORDS

COBOL, SBSE, Software Evolution, Genetic Programming, $$$$

ACM Reference format:
squaresLab and Mr. squaresLab SpouseMan. 2018. COBOLd: Gobblin' Up COBOL Bugs for Fun and Profit. In Proceedings of SIGBOVIK, Pittsburgh, PA USA, March 2018 (SIGBOVIK'18), 5 pages.
DOI: 10.475/123 4

* Chris Timperley, Deby Katz, Zack Coker, Rijnard van Tonder, Mauricio Soto, Afsoon Afzal, Cody Kinneer, Jeremy Lacomis, and Claire Le Goues
† Adam Brady

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
SIGBOVIK'18, Pittsburgh, PA USA
© 2018 Copyright held by the owner/author(s). 123-4567-24-567/08/06...$15.00
DOI: 10.475/123 4

1 INTRODUCTION

1.1 Automatic Program Repair (APR)

Automatic program repair (APR) is all about fixing bugs automatically because software maintenance is expensive. APR techniques take as input (1) a program with a defect and (2) a mechanism to validate correct and incorrect behavior in that program. The validation mechanism is typically a test suite: a bug to be repaired corresponds to one or more tests that are failing; one or more passing tests guard against the removal of desirable functionality. The goal is to produce a patch that modifies the original program such that failing test cases pass without breaking any of the originally passing test cases.

In this work, we automatically fix bugs in COBOL programs using COBOLd, which is just GenProg [10, 11, 17, 18] directly applied to COBOL programs. GenProg uses genetic programming to traverse a space of possible patches to a given buggy program. GenProg uses the test results to define an objective function that informs an evolutionary search [4, 8] over the space of candidate patches. We reuse GenProg because it is well-established and also we know its source code very, very well, having written a large proportion of it. It turns out that, given the appropriate configuration options, the code we already have can totally fix bugs in COBOL programs. Who knew?

2 BACKGROUND

Don't know COBOL? Don't worry. This section has all you need.

2.1 COBOL: Is it reasonable?

2.1.1 Yes. It is incredibly difficult to create a program that contains bugs in COBOL. This is because the design of the language makes it incredibly difficult to write any program in COBOL. The lack of advanced features such as passing parameters to procedures means that programmers are forced to think very carefully before authoring COBOL programs. Bugs can only exist if programs exist.

2.1.2 Counterpoint: Absolutely Not. The first specification of the language (COBOL-60) contained many logical errors and was impossible to interpret unambiguously [2]. This is unsurprising: no academic computer scientists were on the design committee. The committee invented a new metalanguage to define its syntax because no committee members had heard of Backus-Naur form [14]. Dijkstra once declared: "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence." [3] Also, the "Hello World" program for COBOL is considered non-compliant.¹

¹ The DISPLAY statement (i.e., print) is considered a dangerous pattern according to a COBOL static analyzer: https://rules.sonarsource.com/cobol/RSPEC-1279

2.2 (Inexplicably Recent) Prior Work on COBOL

Unfortunately for Dijkstra, COBOL is still lurking in academic conferences and college curriculums (These citations are all real‼! I know. We were also surprised!). Research papers on COBOL in the 21st century cover aspect-oriented programming [9, 15] and a (2017‼!) study of code clones [6]. Professors disagreeing with Dijkstra, or openly admitting to torturing their students, have published the colorfully titled papers "Teaching a language many CS students didn't want to learn: COBOL" [7] and "Bringing COBOL back to the college IT curriculum" [12]. Some of our favorite excerpts from these papers' abstracts include "As shocking as this may sound to some, there are many reasons beyond great need to bring COBOL back into the CIS curriculum" [12] and "Over 80 percent of error diagnosis was found to be inaccurate. Such feedback is not optimal for users, particularly for the learning user of Cobol." [13].

As far as static analysis tools, SonarQube² provides static analysis tools for COBOL. However, SonarQube can't automatically fix COBOL programs. But we can. So watch this and don't blink.

 1  IDENTIFICATION DIVISION.
 2  PROGRAM-ID. ZUNEBUG.
 3
 4  DATA DIVISION.
 5  WORKING-STORAGE SECTION.
 6  01 WS-YEAR PIC 9(9) VALUE 1980.
 7  01 WS-DAYS PIC 9(9).
 8  01 WS-Q PIC 9(9).
 9  01 WS-R1 PIC 9(9).
10  01 WS-R2 PIC 9(9).
11  01 WS-R3 PIC 9(9).
12
13  PROCEDURE DIVISION.
14  MAIN-PARAGRAPH.
15      DISPLAY 1 UPON ARGUMENT-NUMBER.
16      ACCEPT WS-DAYS FROM ARGUMENT-VALUE.
17      PERFORM WITH TEST BEFORE UNTIL WS-DAYS <= 365
18     *LEAP YEAR COMPUTATION
19          DIVIDE WS-YEAR BY 4 GIVING WS-Q REMAINDER WS-R1
20          DIVIDE WS-YEAR BY 100 GIVING WS-Q REMAINDER WS-R2
21          DIVIDE WS-YEAR BY 400 GIVING WS-Q REMAINDER WS-R3
22          IF (WS-R1 = 0 AND WS-R2 NOT = 0) OR WS-R3 = 0 THEN
23              IF WS-DAYS IS GREATER THAN 366 THEN
24                  SUBTRACT 366 FROM WS-DAYS
25                  ADD 1 TO WS-YEAR
26              END-IF
27          ELSE
28              SUBTRACT 365 FROM WS-DAYS
29              ADD 1 TO WS-YEAR
30          END-IF
31      END-PERFORM.
32      DISPLAY WS-YEAR.
33      STOP RUN.

(a) Buggy code: an infinite loop happens when WS-YEAR is a leap year and WS-DAYS is exactly 366.

The repaired program is identical except at lines 23-24; in context (lines 22-26):

22          IF (WS-R1 = 0 AND WS-R2 NOT = 0) OR WS-R3 = 0 THEN
23              SUBTRACT 366 FROM WS-DAYS
24              IF WS-DAYS IS GREATER THAN 366 THEN
25                  ADD 1 TO WS-YEAR
26              END-IF

(b) Zune bug, repaired by COBOLd (yes, seriously). Lines 23 and 24 are swapped in the original source.

Figure 1: COBOL implementation of the infamous Zune bug and repaired program generated by COBOLd. First six columns omitted to fit on one page.

3 APR FOR COBOL

3.1 Generate-and-validate.

One of the primary classes of approaches for APR is known as heuristic or generate-and-validate repair (G&V), which uses search-based software engineering techniques [5, 16] to generate many candidate patches for a bug, and then validate them using indicative workloads or test suites.

Algorithm 1 illustrates our novel G&V approach for fixing COBOL programs. The COBOLd function takes as input a textual COBOL program p.

Algorithm 1 A simplified G&V algorithm for automated COBOL repair.
1: function COBOLd(p)
2:     cands ← GenProg.getInitialPopulation(p)
3:     while !oracle(cands) do
4:         cands ← GenProg.getNextPopulation(cands)
5:     end while
6:     ???????????????????????????????
7:     PROFIT!
8: end function

… program text itself. This helps limit the generation stage to syntactically valid programs, and allows more advanced reasoning about program structure. Though this works for reasonable programming languages, COBOL is not specified with a real grammar (cf. Section 2.1.2), and the creation of a parser would require for-real work. So, the COBOLd function invokes GenProg [11] to modify the program text of p directly. This is particularly fortunate because we would hate to implement a parser for a language with 550 reserved words, are you #kidding me?³

The brunt of the work is done by GenProg, which produces …
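For readers without a COBOL toolchain to hand, the day-counting loop of Figure 1 can be mirrored in Python. This is our own sketch, not part of COBOLd: the `buggy` flag switches between the line ordering of Figure 1a and the swapped ordering of Figure 1b, and `max_iters` is an added safety valve so the hang can be observed rather than suffered.

```python
def zune_year(days, buggy=True, max_iters=10_000):
    """Mirror of Figure 1's day-counting loop (a sketch, not COBOLd output)."""
    year = 1980
    while days > 365:
        if max_iters <= 0:
            # Safety valve for the demo: the real loop would spin forever.
            raise RuntimeError("infinite loop: the Zune bug")
        max_iters -= 1
        # The three DIVIDE ... REMAINDER statements reduce to the usual rule:
        leap = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
        if leap:
            if buggy:
                # Figure 1a ordering: when days == 366, neither statement
                # runs, so days never shrinks and the loop never exits.
                if days > 366:
                    days -= 366
                    year += 1
            else:
                # Figure 1b ordering (lines 23 and 24 swapped): subtract
                # first, then guard only the year increment.
                days -= 366
                if days > 366:
                    year += 1
        else:
            days -= 365
            year += 1
    return year
```

With an input of 366 days (31 December of the leap year 1980), the buggy ordering trips the safety valve while the repaired ordering terminates and prints 1980; this is the same day-366 hang that famously froze Zune players on 31 December 2008. Whether the swapped ordering generalizes beyond the failing test is, in the best G&V tradition, a question for the test suite.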
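The generate-and-validate loop of Algorithm 1 itself fits in a few lines of Python. The sketch below is our toy reconstruction, not GenProg's actual API: `swap_mutation`, `cobold`, the seeded population, and the test-counting fitness function are all illustrative simplifications, with a GenProg-style swap operator applied to raw lines of program text and "all tests pass" as the oracle.

```python
import random

def swap_mutation(prog, rng):
    """A GenProg-style 'swap' operator applied to raw program text lines."""
    i, j = rng.sample(range(len(prog)), 2)
    prog = list(prog)
    prog[i], prog[j] = prog[j], prog[i]
    return tuple(prog)

def cobold(p, tests, pop_size=20, max_gens=1000, seed=0):
    """Toy Algorithm 1: evolve candidate patches until the oracle passes."""
    rng = random.Random(seed)

    def fitness(cand):
        return sum(1 for t in tests if t(cand))

    # Initial population: the original program plus random one-swap mutants.
    cands = [p] + [swap_mutation(p, rng) for _ in range(pop_size - 1)]
    for _ in range(max_gens):
        for cand in cands:
            if fitness(cand) == len(tests):  # oracle: every test passes
                return cand                  # line 7: PROFIT!
        # Next population: the fitter half survives, the rest are mutants.
        cands.sort(key=fitness, reverse=True)
        keep = cands[: pop_size // 2]
        cands = keep + [swap_mutation(rng.choice(keep), rng)
                        for _ in range(pop_size - len(keep))]
    return None  # no repair found within the budget

# The Figure 1 defect in miniature: two lines are transposed, and the
# "test suite" checks each line sits where the repaired program puts it.
buggy = ("IF WS-DAYS IS GREATER THAN 366 THEN",
         "SUBTRACT 366 FROM WS-DAYS",
         "ADD 1 TO WS-YEAR",
         "END-IF")
tests = [lambda c: c[0] == "SUBTRACT 366 FROM WS-DAYS",
         lambda c: c[1] == "IF WS-DAYS IS GREATER THAN 366 THEN",
         lambda c: c[2] == "ADD 1 TO WS-YEAR",
         lambda c: c[3] == "END-IF"]
```

Running `cobold(buggy, tests)` evolves the same line-swap patch that COBOLd found for Figure 1. Line 6 of Algorithm 1 is, of course, not reproducible here.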