
NIST Special Publication 500-265

Proceedings of Workshop on Software Security Assurance Tools, Techniques, and Metrics

Paul E. Black (workshop chair)
Michael Kass (co-chair)
Elizabeth Fong (editor)

Information Technology Laboratory
National Institute of Standards and Technology
Gaithersburg, MD 20899

February 2006

U.S. Department of Commerce
Carlos M. Gutierrez, Secretary

National Institute of Standards and Technology
William Jeffrey, Director

Disclaimer: Any commercial product mentioned is for information only; it does not imply recommendation or endorsement by NIST, nor does it imply that the products mentioned are necessarily the best available for the purpose.

Software Security Assurance Tools, Techniques, and Metrics, SSATTM'05
ISBN 1-59593-307-7/05/0011

ABSTRACT

These are the proceedings of a workshop held on November 7 and 8, 2005 in Long Beach, California, USA, hosted by the Software Diagnostics and Conformance Testing Division, Information Technology Laboratory, of the National Institute of Standards and Technology. The workshop, "Software Security Assurance Tools, Techniques, and Metrics," is one of a series in the NIST Software Assurance Measurement and Tool Evaluation (SAMATE) project, which is partially funded by DHS to help identify and enhance software security assurance (SSA) tools. The goals of this workshop were to discuss and refine the taxonomy of flaws and the taxonomy of functions; to come to a consensus on which SSA functions should first have specifications and standard tests developed; to gather SSA tool suppliers for "target practice" on reference datasets of code; and to identify gaps or research needs in SSA functions.

Keywords: Software assessment tools; software assurance; software metrics; software security; target practice; reference dataset; vulnerability

Foreword

The workshop on "Software Assurance Tools, Techniques, and Metrics" was held 7-8 November 2005 in Long Beach, California, USA, co-located with the Automated Software Engineering Conference 2005. The first day consisted of eleven paper presentations. The morning of the second day consisted of "target practice" and a review of the nature of the reference dataset.

The Program Committee consisted of the following:

Freeland Abbott, Georgia Tech
Paul Ammann, George Mason U.
Elizabeth Fong, NIST
Michael Hicks, U. of Maryland
Michael Koo, NIST
Richard Lippmann, MIT
Robert A. Martin, MITRE Corp.
W. Bradley Martin, NSA
Nachiappan Nagappan, Microsoft Research
Samuel Redwine, James Madison U.
Ravi Sandhu, George Mason U.
Larry D. Wagoner, NSA

These proceedings have five main parts:
• Summary
• Workshop Announcement
• Workshop Agenda
• Reference Dataset Target Practice, and
• Papers

We thank those who worked to organize this workshop, particularly Elizabeth Fong, who handled much of the correspondence, and Debra A. Brodbeck, who provided conference support. We appreciate the program committee's efforts in reviewing the papers. We are grateful to NIST, especially the Software Diagnostics and Conformance Testing Division, for providing the organizers' time. On behalf of the program committee and the whole SAMATE team, thanks to everyone for taking their time and resources to join us.

Sincerely,
Dr. Paul E. Black
Table of Contents

Summary .......... 6
Workshop Call for Papers .......... 7
Workshop Program .......... 9
SAMATE Reference Dataset "Target Practice" .......... 10
Where do Software Security Assurance Tools Add Value .......... 14
    David Jackson and David Cooper
Metrics that Matter .......... 22
    Brian Chess
The Case for Common Flaw Enumeration .......... 29
    Robert Martin, Steven Christey, and Joe Jarzombek
Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors .......... 36
    Katrina Tsipenyuk, Brian Chess, and Gary McGraw
A Taxonomy of Buffer Overflows for Evaluating Static and Dynamic Software Testing Tools .......... 44
    Kendra Kratkiewicz and Richard Lippmann
ABM: A Prototype for Benchmarking Source Code Analyzers .......... 52
    Tim Newsham and Brian Chess
A Benchmark Suite for Behavior-Based Security Mechanisms .......... 60
    Dong Ye, Micha Moffie, and David Kaeli
Testing and Evaluation of Virus Detectors for Handheld Devices .......... 67
    Jose Andre Morales, Peter J. Clarke, and Yi Deng
Eliminating Buffer Overflows Using the Compiler or a Standalone Tool .......... 75
    Thomas Plum and David M. Keaton
A Security Software Architecture Description Language .......... 82
    Jie Ren and Richard N. Taylor
Prioritization of Threats Using the K/M Algebra .......... 90
    Supreeth Venkataraman and Warren Harrison

Summary

These are the proceedings of the workshop on Software Security Assurance Tools, Techniques, and Metrics, held on November 7 and 8, 2005 in Long Beach, California, USA, co-located with the Automated Software Engineering Conference 2005. It was organized by the Software Diagnostics and Conformance Testing Division, Information Technology Laboratory, National Institute of Standards and Technology (NIST). Forty-two people attended, including people from government, universities, tool vendors and service providers, and research companies.

The workshop is one of a series in the NIST Software Assurance Measurement and Tool Evaluation (SAMATE) project, http://samate.nist.gov/. A previous workshop, on Defining the State of the Art in Software Security Tools, was held on August 10 and 11, 2005 at NIST in Gaithersburg, MD, USA.

The call for papers resulted in eleven accepted papers, which were presented on the first day of the workshop. The second day was devoted to discussion of the reference dataset and target practice with three SSA tool vendors, and included an invited presentation, "Correctness by Construction: The Case for Constructive Static Verification," by Rod Chapman.

The material and papers for the workshop were distributed to the participants on USB drives. The content of the USB drives was:
• Introduction,
• Workshop call for papers,
• Workshop agenda,
• Reference dataset target practice,
• Flaw taxonomies, and
• Accepted papers.

Here are summaries of the workshop conclusions:
• Today's SSA tools do not add much value to real, large software products.
• How to score (rate) the risk of a piece of code is still a challenging question.
• There is a need to harmonize the different taxonomies of vulnerabilities.
• Very substantive feedback was gathered on the shared reference dataset. See the write-up on the SAMATE Reference Dataset "Target Practice" in this document.
• There was consensus that the first SSA specifications and standard tests will be for source code scanner tools.
Workshop CALL FOR PAPERS (SSATTM'05)
-------------------------------------------------------------------------------------------
National Institute of Standards and Technology (NIST) workshop on
Software Security Assurance Tools, Techniques, and Metrics
7-8 November 2005
Co-located with ASE 2005, Long Beach, California, USA
-------------------------------------------------------------------------------------------

Funded in part by the Department of Homeland Security (DHS), the National Institute of Standards and Technology (NIST) started a long-term, ambitious project to improve software security assurance tools. Security is the ability of a system to maintain the confidentiality, integrity, and availability of information processed and stored by a computer. Software security assurance tools are those that help software be more secure by building security into software or determining how secure software is. Among the project's goals are:

(1) develop a taxonomy of software security flaws and vulnerabilities,
(2) develop a taxonomy of software security assurance (SSA) tool functions and techniques which detect or prevent flaws, and
(3) develop testable specifications of SSA functions and explicit tests to evaluate how closely tools implement the functions.

The test materials include reference sets of buggy code. These goals extend into all phases of the software life cycle, from requirements capture through design and implementation to operation and auditing.

The goal of the workshop is to convene researchers, developers, and government and industrial users of SSA tools to:
• discuss and refine the taxonomy of flaws and the taxonomy of functions, which are under development,
• come to a consensus on which SSA functions should first have specifications and standard tests developed,
• gather SSA tool suppliers for "target practice" on reference datasets of code, and
• identify gaps or research needs in SSA functions.

REFERENCE DATASET "TARGET PRACTICE"

Sets of code with known flaws and vulnerabilities, with corresponding correct versions, can serve as references for tool testing, making research easier and providing a standard of evaluation. Working with others, we will bring reference datasets of many types of code, like Java, C, binaries, and bytecode. We welcome contributions of code you've used. A minimal sketch of what such a dataset entry might look like appears after this announcement.

To help validate the reference datasets, we solicit proposals not exceeding 2 pages to participate in SSA tool "target practice" on the datasets. Tools can range from university projects to commercial products. Participation is intended to demonstrate the state of the art in finding flaws, consequently
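
To illustrate what a reference dataset entry might contain, here is a minimal, hypothetical sketch in C: a function with a classic unbounded-copy buffer overflow paired with its corrected counterpart. The function names and structure are illustrative assumptions, not taken from the actual SAMATE reference dataset.

    /* Flawed version (illustrative): unbounded string copy into a
       fixed-size stack buffer. Any input of 16 bytes or more
       overflows buf. */
    #include <string.h>

    void store_name_bad(const char *input) {
        char buf[16];
        strcpy(buf, input);   /* flaw: no bounds check on input length */
    }

    /* Corrected counterpart: bounded copy with guaranteed
       null termination. */
    void store_name_good(const char *input) {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
    }

In a paired entry like this, a tool under test would be expected to flag the flawed version while remaining silent on the corrected one, giving a simple measure of both detection and false-positive behavior.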
