State-Of-The-Art Resources (SOAR) for Software Vulnerability Detection, Test, and Evaluation


INSTITUTE FOR DEFENSE ANALYSES
IDA Paper P-5061, Log: H 13-001130

State-of-the-Art Resources (SOAR) for Software Vulnerability Detection, Test, and Evaluation

Gregory Larsen, Task Leader
E. Kenneth Hong Fong, Project Leader
David A. Wheeler
Rama S. Moorthy

July 2014. Approved for public release; distribution is unlimited.

Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882

About This Publication

This work was conducted by the Institute for Defense Analyses (IDA) under contract DASW01-04-C-0003, Task AU-5-3595, “State of the Art Resource for Hardware and Software Assurance,” for the Office of the Deputy Assistant Secretary of Defense for Systems Engineering; the Office of the Deputy CIO for Identity and Information Assurance (DCIO(IIA)), Director of Trusted Mission Systems and Networks (TMSN); and the Technical Director for the Center for Assured Software, National Security Agency. The views, opinions, and findings should not be construed as representing the official position of either the Department of Defense or the sponsoring organizations.

Copyright Notice

© 2014 Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882, (703) 845-2000. This material may be reproduced by or for the U.S. Government pursuant to the copyright license under the clause at DFARS 252.227-7013(a)(16) [Sep 2011].

Executive Summary

Nearly all modern systems depend on software. It may be embedded within the system, delivering capability; used in the design and development of the system; or used to manage and control the system, possibly through other systems. Software may be acquired as a commercial off-the-shelf component, custom developed for the system, or embedded within subcomponents by their manufacturers. Modern systems often perform the majority of their functions through software and can easily include millions of lines of code.

Although functionality is often created through software, that software can also introduce risk. Unintentional or intentionally inserted vulnerabilities (including previously known vulnerabilities) can give adversaries avenues to reduce system effectiveness, render systems useless, or even turn our systems against us. Department of Defense (DoD) software, in particular, is subject to attack, and analyzing DoD software to identify and remove weaknesses is a critical program protection countermeasure. Unfortunately, it can be difficult to determine what types of tools and techniques exist for analyzing software and where their use is appropriate.

The purpose of this paper is to assist DoD program managers (PMs) and their staffs in making effective software assurance (SwA) and software supply chain risk management (SCRM) decisions, particularly when they are developing their program protection plans (PPPs). A secondary purpose is to inform DoD policymakers who are developing software policies. This paper defines and describes the following overall process for selecting and using appropriate analysis tools and techniques for evaluating software:

1. Select technical objectives based on context.
   This paper identifies a set of 10 major technical objectives and subdivides them into up to 3 further levels of progressively more detailed objectives. For example, the major technical objective “counter unintentional-‘like’ weaknesses” is subdivided into a second level of 12 sub-categories, and some of these second-level objectives are subdivided still further. This multi-stage breakdown of technical objectives is captured in Appendix E, the Software State-of-the-Art Resources (SOAR) Matrix.

2. Select tool/technique types to address those technical objectives. This paper identifies 56 types of tools and techniques available for analyzing software. The supporting Software SOAR Matrix provides a detailed mapping between these tool/technique types and the technical objectives, to help readers identify and select the types of tools and techniques that meet their technical objectives (a small illustrative sketch of such a matrix lookup appears at the end of this summary).

3. Select tools/techniques. This paper identifies, in some cases, where additional information is available to help the selection process.

4. Summarize the selection as part of a Program Protection Plan. This paper provides guidance on how to summarize the information derived from the selection of tool/technique types, and later the planned use of the tools/techniques, in a PPP.

5. Apply the tools/techniques and report the results. Here the selected tools and techniques are applied, including the selection, modification, or risk mitigation of software based on tool/technique results. Reports are provided to support oversight and governance. Vignettes in Section 8 provide examples of this process.

This paper also describes some key gaps identified in the course of this study, including difficulties in finding unknown malicious code, obtaining quantitative data, analyzing binaries without debug symbols, and obtaining assurance of development tools. Additional challenges were found in the mobile environment; examples include a lack of maturity in many tools, expectations of time constraints that preclude in-depth analysis, and widespread use of a Software-as-a-Service (SaaS) model that limits data availability and applicability to DoD systems. These would be plausible areas to consider as part of a research program.

Appendices provide additional detail, including more information on each type of tool and technique. Appendix D, for example, describes how we believe analysis should be continuously applied and integrated into the entire software lifecycle, creating a feedback loop for better-informed risk management decisions.

The information provided here was gathered from a variety of sources, including many interviews with subject matter experts. These experts identified a number of key topics, some of which are also captured in this paper. This paper extends the earlier 2013 draft by adding information specifically focused on mobile platforms (e.g., smartphones and tablets running operating systems such as iOS and Android). It also includes various incremental improvements, including those suggested by reviewer comments from the sponsors, the Software Engineering Institute (SEI), and the MITRE Corporation. In particular, the set of technical objectives was expanded and slightly reorganized per reviewer comments.

Software analysis is a large and dynamic field, and this paper represents one step in capturing and organizing a wide range of diverse information. We hope that this material will be refined through feedback from the larger community.
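To make steps 1 and 2 concrete, the sketch below (ours, not the paper's) shows how a program office could query a machine-readable version of the SOAR matrix: each technical objective maps to the tool/technique types that address it, and the selection is the union over the chosen objectives. The objective and type names are illustrative placeholders, not the paper's actual taxonomy.

    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class SoarMatrixLookup {
      public static void main(String[] args) {
        // Illustrative fragment of an objective -> tool/technique-type matrix.
        Map<String, List<String>> matrix = Map.of(
            "counter known vulnerabilities", List.of("origin analyzer", "known-vulnerability scanner"),
            "counter unintentional-'like' weaknesses", List.of("source code weakness analyzer", "fuzzer"),
            "counter intentional-'like'/malicious logic", List.of("focused manual spot check", "binary analyzer"));

        // Step 1: technical objectives selected from the program's context.
        List<String> selectedObjectives = List.of(
            "counter known vulnerabilities",
            "counter unintentional-'like' weaknesses");

        // Step 2: union of the tool/technique types that address those objectives.
        Set<String> toolTypes = new LinkedHashSet<>();
        for (String objective : selectedObjectives) {
          toolTypes.addAll(matrix.getOrDefault(objective, List.of()));
        }
        System.out.println("Candidate tool/technique types: " + toolTypes);
      }
    }

In practice the matrix entries carry applicability and cost caveats rather than a simple yes/no, so a real selection aid would attach those attributes to each entry; the union shown here is only the skeleton of steps 1 and 2.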
We recommend piloting the approach described in this document to determine its utility and to evolve it.

Contents

Executive Summary
1. Introduction
2. Background
3. Overall Process for Selecting and Reporting Results from Appropriate Tools and Techniques
   A. General Approach
   B. Matrix to Help Select Tool/Technique Types to Address Technical Objectives
   C. Using the Matrix
4. Technical Objectives
   A. Technical Objectives’ Development Approach
   B. Technical Objectives – Main Categories
5. Types of Tools and Techniques
   A. Static Analysis
   B. Dynamic Analysis
   C. Hybrid Analysis
   D. Combining Tools and Techniques
6. Software Component Context
   A. General Factors
   B. PPP Contexts
7. Program Protection Plan Roll-up
8. Vignettes
   A. OTS Proprietary Software Critical Component ...
Recommended publications
  • Write Your Own Rules and Enforce Them Continuously
    Ultimate Architecture Enforcement: Write Your Own Rules and Enforce Them Continuously. SATURN, May 2017. Paulo Merson, Brazilian Federal Court of Accounts (TCU).

    Agenda: architecture conformance; custom checks lab; SonarQube; custom checks at TCU; lessons learned.

    Exercise 0 (setup): open www.dontpad.com/saturn17 and follow the steps for “Exercise 0”. Prerequisites for all exercises: JDK 1.7+, a Java IDE of your choice, and Maven.

    Consequences of lack of conformance: lower maintainability, mainly because of undesired dependencies; code becomes brittle, hard to understand and change. There are also possible negative effects on reliability, portability, performance, interoperability, security, and other qualities, caused by deviation from the design decisions that addressed those quality requirements.

    Factors that influence architecture conformance: how effective the architecture documentation is; turnover among developers; haste to fix bugs or implement features; size of the system; distributed teams (outsourcing, offshoring); accountability for violating design constraints.

    How to avoid code and architecture disparity? First, communicate the architecture to developers: create multiple views, pair structural diagrams with behavior diagrams, and capture rationale (not the focus of this tutorial). Second, automate architecture conformance analysis, often done with static analysis tools.

    Built-in checks and custom checks: static analysis tools come with many built-in checks. They are useful to spot bugs and improve your overall code quality, but they’re
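    The lab portion of this deck is about writing custom SonarQube rules in Java. As a hedged illustration only (not code from the tutorial), a minimal architecture-conformance rule written against the sonar-java custom-rules API might look like the sketch below; the package, rule key, and layer names are hypothetical.

        package com.example.rules; // hypothetical plugin package

        import java.util.Collections;
        import java.util.List;
        import org.sonar.check.Rule;
        import org.sonar.plugins.java.api.IssuableSubscriptionVisitor;
        import org.sonar.plugins.java.api.tree.IdentifierTree;
        import org.sonar.plugins.java.api.tree.ImportTree;
        import org.sonar.plugins.java.api.tree.MemberSelectExpressionTree;
        import org.sonar.plugins.java.api.tree.Tree;

        // Hypothetical architecture rule: flag direct imports of the persistence
        // layer. (A full rule would also check which layer the current file is in.)
        @Rule(key = "ArchNoDirectDaoImport")
        public class NoDirectDaoImportRule extends IssuableSubscriptionVisitor {

          @Override
          public List<Tree.Kind> nodesToVisit() {
            // Subscribe only to import declarations.
            return Collections.singletonList(Tree.Kind.IMPORT);
          }

          @Override
          public void visitNode(Tree tree) {
            ImportTree importTree = (ImportTree) tree;
            String imported = flatten(importTree.qualifiedIdentifier());
            if (imported.startsWith("com.example.dao.")) {
              reportIssue(tree, "Do not depend on the DAO layer directly.");
            }
          }

          // Rebuild the dotted name from the import's expression tree.
          private static String flatten(Tree tree) {
            if (tree.is(Tree.Kind.IDENTIFIER)) {
              return ((IdentifierTree) tree).name();
            }
            MemberSelectExpressionTree select = (MemberSelectExpressionTree) tree;
            return flatten(select.expression()) + "." + select.identifier().name();
          }
        }

    A rule like this is packaged into a plugin, added to a quality profile, and then enforced on every analysis, which is exactly the “write your own rules and enforce them continuously” workflow the talk advocates.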
  • Empirical Study of Vulnerability Scanning Tools for JavaScript (Work in Progress)
    Empirical Study of Vulnerability Scanning Tools for JavaScript (work in progress). Tiago Brito, Nuno Santos, José Fragoso. INESC-ID Lisbon, 2020. Presented at the GSD Meeting, 30/07/2020.

    Purpose of this WIP presentation: the work is to be submitted this year, and the goal is to gather feedback on the work so far, focusing on the approach and preliminary results.

    Motivation: JavaScript is hugely popular for web development, for both client- and server-side (NodeJS) development. Many critical vulnerabilities have been reported for software developed using NodeJS, including remote code execution (Staicu NDSS’18), denial of service (Staicu Sec’18), and cases where a small number of packages has a big impact (Zimmermann Sec’19). Developers need tools to help them detect these problems, since they are pressured to focus on delivering features.

    Problem: previous work focused on tools for vulnerability analysis in Java or PHP code (e.g., Alhuzali Sec’18), on very specific vulnerabilities in server-side JavaScript such as ReDoS and command injection (Staicu NDSS’18 and Staicu Sec’18), or on vulnerability reports in the NodeJS ecosystem (Zimmermann Sec’19). It is therefore still unknown which, and how many, of these tools can effectively detect vulnerabilities in modern JavaScript.

    Goal: assess the effectiveness of state-of-the-art vulnerability detection tools for JavaScript code by performing a comprehensive empirical study.

    Research questions: 1. [Tools] Which tools exist for JavaScript vulnerability detection? 2. [Approach] What approach do these tools use, and what are their main challenges in detecting vulnerabilities? 3.
  • Guidelines on Minimum Standards for Developer Verification of Software
    Guidelines on Minimum Standards for Developer Verification of Software. Paul E. Black, Barbara Guttman, Vadim Okun. Software and Systems Division, Information Technology Laboratory. July 2021.

    Abstract: Executive Order (EO) 14028, Improving the Nation’s Cybersecurity, 12 May 2021, directs the National Institute of Standards and Technology (NIST) to recommend minimum standards for software testing within 60 days. This document describes eleven recommendations for software verification techniques, as well as providing supplemental information about the techniques and references for further information. It recommends the following techniques:

      • Threat modeling to look for design-level security issues
      • Automated testing for consistency and to minimize human effort
      • Static code scanning to look for top bugs
      • Heuristic tools to look for possible hardcoded secrets
      • Use of built-in checks and protections
      • “Black box” test cases
      • Code-based structural test cases
      • Historical test cases
      • Fuzzing
      • Web app scanners, if applicable
      • Address included code (libraries, packages, services)

    The document does not address the totality of software verification but instead recommends techniques that are broadly applicable and form the minimum standards. The document was developed by NIST in consultation with the National Security Agency. Additionally, we received input from numerous outside organizations through papers submitted to a NIST workshop on the Executive Order held in early June 2021, through discussion at the workshop, and through follow-up with several of the submitters.

    Keywords: software assurance; verification; testing; static analysis; fuzzing; code review; software security.

    Disclaimer: Any mention of commercial products or reference to commercial organizations is for information only; it does not imply recommendation or endorsement by NIST, nor is it intended to imply that the products mentioned are necessarily the best available for the purpose.
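    As a rough illustration of two of the recommended techniques (our example, not taken from the NIST document), the sketch below contrasts a “black box” test case, derived from the specification alone, with a code-based structural test case, written after reading the code so that every branch executes. The function under test and all names are invented; the tests use JUnit 5.

        import static org.junit.jupiter.api.Assertions.assertEquals;

        import org.junit.jupiter.api.Test;

        class ClampPercentTest {

          // Code under test (invented for the example): clamps a value into [0, 100].
          static int clampPercent(int value) {
            if (value < 0) {
              return 0;
            }
            if (value > 100) {
              return 100;
            }
            return value;
          }

          // "Black box" test case: written from the spec alone
          // ("the result is always between 0 and 100"), without reading the code.
          @Test
          void blackBoxSpecExamples() {
            assertEquals(0, clampPercent(-5));
            assertEquals(100, clampPercent(250));
            assertEquals(42, clampPercent(42));
          }

          // Code-based structural test case: chosen by inspecting the code so that
          // each branch (value < 0, value > 100, fall-through) is exercised,
          // including both boundaries.
          @Test
          void structuralBranchCoverage() {
            assertEquals(0, clampPercent(Integer.MIN_VALUE));   // first branch
            assertEquals(100, clampPercent(Integer.MAX_VALUE)); // second branch
            assertEquals(0, clampPercent(0));     // fall-through, lower boundary
            assertEquals(100, clampPercent(100)); // fall-through, upper boundary
          }
        }

    Fuzzing would hit the same function with generated rather than hand-picked inputs, and the “historical test cases” technique would replay inputs from previously reported bugs.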
  • Efficient Feature Selection for Static Analysis Vulnerability Prediction
    Sensors article: Efficient Feature Selection for Static Analysis Vulnerability Prediction. Katarzyna Filus (1), Paweł Boryszko (1), Joanna Domańska (1, corresponding author), Miltiadis Siavvas (2), and Erol Gelenbe (1). (1) Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100 Gliwice, Poland. (2) Information Technologies Institute, Centre for Research & Technology Hellas, 6th km Harilaou-Thermi, 57001 Thessaloniki, Greece.

    Abstract: Common software vulnerabilities can result in severe security breaches, financial losses, and reputation deterioration, and they require research effort to improve software security. The acceleration of the software production cycle, limited testing resources, and the lack of security expertise among programmers require the identification of efficient software vulnerability predictors to highlight the system components on which testing should be focused. Although static code analyzers are often used to improve software quality, together with machine learning and data mining for software vulnerability prediction, work on the selection and evaluation of different types of relevant vulnerability features is still limited. Thus, in this paper, we examine features generated by the SonarQube and CCCC tools to identify those that can be used for software vulnerability prediction. We investigate the suitability of thirty-three different features to train thirteen distinct machine learning algorithms to design vulnerability predictors and to identify the most relevant features that should be used for training. Our evaluation is based on a comprehensive feature selection process based on correlation analysis of the features, together with four well-known feature selection techniques.
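    The correlation-based step of such a feature selection process can be pictured with a small sketch (our illustration, not the paper’s code): rank each static-analysis feature by the absolute Pearson correlation between its column and the vulnerability label, then train only on the strongest features. The feature names and data below are invented.

        import java.util.Comparator;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class CorrelationFeatureRanking {

          // Pearson correlation between a feature column and the 0/1 vulnerability labels.
          static double pearson(double[] x, double[] y) {
            int n = x.length;
            double meanX = 0, meanY = 0;
            for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
            meanX /= n; meanY /= n;
            double cov = 0, varX = 0, varY = 0;
            for (int i = 0; i < n; i++) {
              cov += (x[i] - meanX) * (y[i] - meanY);
              varX += (x[i] - meanX) * (x[i] - meanX);
              varY += (y[i] - meanY) * (y[i] - meanY);
            }
            return cov / Math.sqrt(varX * varY);
          }

          public static void main(String[] args) {
            // Invented example: rows are software components, columns are metrics.
            String[] features = {"lines_of_code", "cyclomatic_complexity", "comment_ratio"};
            double[][] columns = {
                {120, 900, 300, 1500},    // lines_of_code per component
                {4, 35, 9, 60},           // cyclomatic_complexity per component
                {0.30, 0.05, 0.20, 0.02}  // comment_ratio per component
            };
            double[] vulnerable = {0, 1, 0, 1}; // label from vulnerability reports

            Map<String, Double> ranking = new LinkedHashMap<>();
            for (int f = 0; f < features.length; f++) {
              ranking.put(features[f], Math.abs(pearson(columns[f], vulnerable)));
            }

            // List features from strongest to weakest; a predictor would train on the top-k.
            ranking.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue(Comparator.reverseOrder()))
                .forEach(e -> System.out.printf("%s -> |r| = %.2f%n", e.getKey(), e.getValue()));
          }
        }

    The paper pairs this kind of correlation analysis with four established feature selection techniques; the sketch shows only the simplest building block.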
  • End to End Quality with the Sonar Ecosystem and the Water Leak Metaphor
    End to End Quality with the Sonar Ecosystem and the Water Leak Metaphor. G. Ann Campbell (@GAnnCampbell); @SonarLint | @SonarQube | @SonarSource.

    The ecosystem: SonarLint, the leak period, the quality gate, and 20+ languages, with static analysis as the heart of the ecosystem.

    What is static analysis? Analyzing code without executing it. It is a means to an end: detecting bugs, vulnerabilities, and code smells.

    Why use static analysis? First, to catch new problems ASAP: the longer it takes to catch a bug, the more it costs; no one writes perfect code every time; and rule descriptions plus precise issue locations cut research time. Second, because changing A might have added bugs in B: peer review misses new issues in untouched code, whereas static analysis is machine-assisted code review that looks at every file every time. Third, to provide coaching on language best practices and team coding style.

    SonarSource’s toolbox, illustrated on the quote “Only two things are infinite, the universe and human stupidity, and I am not sure about the former”: lexical analysis sees only the tokens; syntactic analysis identifies subjects and verbs; semantic analysis works out what the words refer to (Albert E.).

    Beyond semantics, symbolic execution tracks the values a variable can hold along each feasible path:

        Object myObject = new Object();
        if (a) {
          myObject = null;     // on this path, myObject is null
        }
        ...
        if (!a) {
          ...
        } else {
          myObject.toString(); // reached exactly when a is true, where myObject
                               // was set to null: a NullPointerException
                               // (assuming the elided code changes neither value)
        }
  • AnalyzeD – Static Code Analysis for D, Stefan Rohe, 3rd May 2013, DConf
    AnalyzeD – Static Code Analysis for D. Stefan Rohe, 3rd May 2013, DConf.

    Overview: introduction and motivation; our D way; static code analysis; metrics; a D1-to-D2 example; Sonar; summary; outlook.

    Introduction: Funkwerk Information Technologies, headquartered in Munich, Germany, is one of the market leaders in passenger information systems for public transportation (http://www.funkwerk-itk.com).

    Motivation, starting from Sun Microsystems’ rationale for code conventions: 40%-80% of the lifetime cost of a piece of software goes to maintenance; hardly any software is maintained for its whole life by the original author; code conventions improve the readability of the software, allowing engineers to understand new code more quickly and thoroughly; and if you ship your source code as a product, you need to make sure it is as well packaged and clean as any other product you create (http://www.oracle.com/technetwork/java/codeconventions-150003.pdf).

    Our own motivation: we started with D (D1, Tango) in mid-2008; by 05/2013 we had 250 kLOC of D source and 8 developers. Writing clean code meant many code reviews, and it wastes time and is socially difficult to mark code that violates (computer-checkable) conventions, even more so when the conventions and their rationale haven't been written down anywhere. A growing team needed a way to effectively share knowledge about the conventions; we got Java/C++ programmers, and no courses or training were available. As Walter Bright put it at DConf: “You cannot always hire perfect engineers.”
  • Comparison of Java Programming Testing Tools
    [Gautam*, Vol. 5 (Iss. 2): February 2018] ISSN: 2454-1907. DOI: https://doi.org/10.29121/ijetmr.v5.i2.2018.147

    Comparison of Java Programming Testing Tools. Shikha Gautam, ICT Research Lab, Department of Computer Science, IIIT-L, University of Lucknow, Lucknow, 226007, India.

    Abstract: Various testing tools have been used to find defects and measure the quality of software developed in different languages. This paper provides an overview of various testing tools and analyzes Java testing tools in particular, Java being important because of its maturity as a language for developing software. The Java testing tools are analyzed against various quality attributes, and the analysis shows that the selection of a testing tool depends on the requirements at hand.

    Keywords: Software Testing; Software Quality; Java; Requirement.

    Cite This Article: Shikha Gautam. (2018). “Comparison of Java Programming Testing Tools.” International Journal of Engineering Technologies and Management Research, 5(2), 66-76. DOI: https://doi.org/10.29121/ijetmr.v5.i2.2018.147.

    1. Introduction: Software testing is the process of executing a program or application with the intent of finding errors and measuring the quality of the product. It is conducted once executable software exists, and it is used to find errors and fix them, thereby supporting and improving software quality. Testing checks that the software does everything it is supposed to do and also that it does not do anything it is not supposed to do [1]. Software testing is made harder by the vast array of programming languages, operating systems, and hardware platforms that have evolved, and there are many different types of software testing.
  • A Critical Comparison on Six Static Analysis Tools: Detection, Agreement, and Precision
    A Critical Comparison on Six Static Analysis Tools: Detection, Agreement, and Precision (preprint). Valentina Lenarduzzi, Savanna Lujan, Nyyti Saarimäki, Fabio Palomba.

    Abstract. Background: Developers use Automated Static Analysis Tools (ASATs) to control for potential quality issues in source code, including defects and technical debt. Tool vendors have devised quite a number of tools, which makes it harder for practitioners to select the most suitable one for their needs. To better support developers, researchers have been conducting several studies on ASATs to further the understanding of their actual capabilities. Aims: Despite the work done so far, there is still a lack of knowledge regarding (1) which source quality problems can actually be detected by static analysis tool warnings, (2) what their agreement is, and (3) what the precision of their recommendations is. We aim at bridging this gap by proposing a large-scale comparison of six popular static analysis tools for Java projects: Better Code Hub, CheckStyle, Coverity Scan, FindBugs, PMD, and SonarQube. Method: We analyze 47 Java projects and derive a taxonomy of warnings raised by the 6 state-of-the-practice ASATs. To assess their agreement, we compare them by manually analyzing, at line level, whether they identify the same issues. Finally, we manually evaluate the precision of the tools. Results: The key results report a comprehensive taxonomy of ASAT warnings and show little to no agreement among the tools and a low degree of precision. Conclusions: We provide a taxonomy that can be useful to researchers, practitioners, and tool vendors to map the current capabilities of the tools.
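    To make “warnings” and line-level “agreement” concrete (our example, not the paper’s), consider the snippet below: one ASAT may flag the unclosed reader as a resource leak, another may flag the ignored return value of delete(), and a third may raise only a naming or style warning, so at line level the tools often do not agree.

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.FileReader;
        import java.io.IOException;

        public class ReadFirstLine {
          static String firstLine(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            String line = reader.readLine(); // warning in some tools: reader is never closed
            new File(path).delete();         // warning in some tools: return value ignored
            return line;
          }
        }

    Which of these findings count as true positives is exactly the precision question the paper evaluates manually.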
  • Optimizing Quality Analysis to Deliver Business Value Amid Complexity: Code Visibility Cuts Software Risk
    IDC Technology Spotlight. Optimizing Quality Analysis to Deliver Business Value Amid Complexity: Code Visibility Cuts Software Risk. Adapted from Worldwide Automated Software Quality 2014–2018 Forecast and 2013 Vendor Shares: Some Growth in ASQ with Ongoing Adoption Projected for Mobile, Cloud and Embedded (IDC #251643) and Establishing Software Quality Analysis Strategies to Help Address Third Platform Complexity (IDC #253257). Sponsored by SonarSource. Melinda-Carol Ballou, Jennifer Thomson. January 2015.

    Introduction: understanding the impact of quality analysis gaps. The shift to a digital world, the impact of digital transformation, and the demand for continuous deployment across technology platforms are putting huge pressure on IT organizations as they address dynamically evolving business needs. Time to market of high-quality applications becomes critical, but delivering software releases and developing new customer-facing applications quickly is a growing challenge. This is particularly the case for large, multinational organizations that must contend with a complex web of changing, multimodal technologies combined with legacy systems and resources across thousands of geographically distributed users.

    For the CIO, the goal is not merely to improve IT agility; it is to use IT to enhance business agility, innovation, and customer experience across the “3rd Platform,” which ranges from mobile, to social systems of engagement, to the cloud, while incorporating Big Data analytics. At an operational level, this puts increased pressure on companies to restructure, modernize, and transform software development and testing practices, allowing faster delivery of enterprise and consumer applications at appropriate quality, risk, speed, and cost levels. Yet despite the obvious consequences of poor-quality software for customer access, revenue, and business reputation in these influential mobile and other 3rd Platform environments, many organizations have slipped into pitiful software hygiene habits.
  • Sifu - a Cybersecurity Awareness Platform with Challenge Assessment and Intelligent Coach. Tiago Espinha Gasiba, Ulrike Lechner, and Maria Pinto-Albuquerque
    Espinha Gasiba et al., Cybersecurity (2020) 3:24, https://doi.org/10.1186/s42400-020-00064-4. Research, Open Access.

    Sifu - a cybersecurity awareness platform with challenge assessment and intelligent coach. Tiago Espinha Gasiba, Ulrike Lechner, Maria Pinto-Albuquerque.

    Abstract: Software vulnerabilities, when actively exploited by malicious parties, can lead to catastrophic consequences. Proper handling of software vulnerabilities is essential in the industrial context, particularly when the software is deployed in critical infrastructures. Therefore, several industrial standards mandate secure coding guidelines and training for industrial software developers, as software quality is a significant contributor to secure software. CyberSecurity Challenges (CSC) form a method that combines serious-game techniques with cybersecurity and secure coding guidelines to raise the secure coding awareness of software developers in industry. These cybersecurity awareness events have been used with success in industrial environments; however, until now, these coached events took place on-site. In the present work, we briefly introduce CyberSecurity Challenges and propose a novel platform that allows these events to take place online. The introduced cybersecurity awareness platform, which the authors call Sifu, performs automatic assessment of challenges for compliance with secure coding guidelines and uses an artificial intelligence method to provide players with solution-guiding hints. Furthermore, due to its characteristics, the Sifu platform allows for remote (online) learning in times of social distancing. The CyberSecurity Challenges events based on the Sifu platform were evaluated during four online real-life CSC events. We report on three surveys showing that the Sifu platform’s CSC events are adequate for raising industry software developers’ awareness of secure coding.
  • The Forrester Wave™: Static Application Security Testing, Q1 2021. The 12 Providers That Matter Most and How They Stack Up. By Sandy Carielli, January 11, 2021
    The Forrester Wave™: Static Application Security Testing, Q1 2021. The 12 Providers That Matter Most And How They Stack Up. By Sandy Carielli, January 11, 2021.

    Why Read This Report: In our 28-criterion evaluation of static application security testing (SAST) providers, we identified the 12 most significant ones — CAST, Checkmarx, GitHub, GitLab, HCL Software, Micro Focus, Parasoft, Perforce Software, SonarSource, Synopsys, Veracode, and WhiteHat Security — and researched, analyzed, and scored them. This report shows how each provider measures up and helps security and risk professionals select the right one for their needs.

    Key Takeaways

    Veracode, Synopsys, Checkmarx, And Micro Focus Lead The Pack: Forrester’s research uncovered a market in which Veracode, Synopsys, Checkmarx, and Micro Focus are Leaders; HCL Software and CAST are Strong Performers; GitHub, Parasoft, GitLab, Perforce Software, and SonarSource are Contenders; and WhiteHat Security is a Challenger.

    Developer Enablement, New Architecture Support, And Accuracy Are Key Differentiators: As development speeds continue to increase and teams embrace new development methodologies, SAST solutions that build security into the software development lifecycle (SDLC), regardless of how and where the application is built, will lead the pack. Vendors that offer deep integration with the CI/CD pipeline; quickly expand to protect new architectures like containers, APIs, and infrastructure-as-code (IaC); and continuously improve on performance and accuracy, position
  • Static Analyzers and Potential Future Research Directions for Scala: An Overview
    Static Analyzers and Potential Future Research Directions for Scala: An Overview. Eljose E. Sajan (College of Natural Science and Mathematics, University of Houston), Yunpeng Zhang* (College of Technology, University of Houston), Liang-Chieh Cheng (College of Technology, University of Houston).

    Abstract: Static analyzers are tool-sets that are proving indispensable to modern programmers. They enable programmers to detect possible errors and security defects in the current code base within the implementation phase of the development cycle, rather than relying on a standalone testing phase. Static analyzers typically highlight possible defects within the ‘static’ source code and thus do not require the source code to be compiled or executed. The Scala programming language has been gaining wider adoption across various industries in recent years. Given this wider adoption, this paper presents an overview of the static analysis tools available, both commercial and open-source, for the Scala programming language. It discusses in detail the types of defects that each of these tools can detect and the limitations of these tools, and it also suggests potential research directions that could improve the current state of static analyzers for Scala.

    Keywords: Scala; static code analysis; error detection.

    I. INTRODUCTION: Software development is still a human-initiated task that requires extensive time and effort to plan and implement. Programmers and developers work under very short deadlines and are most concerned with delivering functionality rather than having application security as the primary objective.