DR.CHECKER: A Soundy Analysis for Linux Kernel Drivers

Aravind Machiry, Chad Spensky, Jake Corina, Nick Stephens, Christopher Kruegel, and Giovanni Vigna
{machiry, cspensky, jcorina, stephens, chris, [email protected]}
University of California, Santa Barbara

This paper is included in the Proceedings of the 26th USENIX Security Symposium, August 16–18, 2017, Vancouver, BC, Canada. ISBN 978-1-931971-40-9.
https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/machiry

Abstract

While kernel drivers have long been known to pose huge security risks, due to their privileged access and lower code quality, bug-finding tools for drivers are still greatly lacking both in quantity and effectiveness. This is because the pointer-heavy code in these drivers presents some of the hardest challenges to static analysis, and their tight coupling with the hardware makes dynamic analysis infeasible in most cases. In this work, we present DR.CHECKER, a soundy (i.e., mostly sound) bug-finding tool for Linux kernel drivers that is based on well-known program analysis techniques. We are able to overcome many of the inherent limitations of static analysis by scoping our analysis to only the most bug-prone parts of the kernel (i.e., the drivers), and by sacrificing soundness in only a very few cases to ensure that our technique is both scalable and precise. DR.CHECKER is a fully-automated static analysis tool capable of performing general bug finding using both pointer and taint analyses that are flow-sensitive, context-sensitive, and field-sensitive on kernel drivers. To demonstrate the scalability and efficacy of DR.CHECKER, we analyzed the drivers of nine production Linux kernels (3.1 million LOC), where it correctly identified 158 critical zero-day bugs with an overall precision of 78%.

1 Introduction

Bugs in kernel-level code can be particularly problematic in practice, as they can lead to severe vulnerabilities, which can compromise the security of the entire computing system (e.g., Dirty COW [5]). This fact has not been overlooked by the security community, and a significant amount of effort has been placed on verifying the security of this critical code by means of manual inspection and both static and dynamic analysis techniques. While manual inspection has historically yielded the best results, it can be extremely time consuming, and is quickly becoming intractable as the complexity and volume of kernel-level code increase. Low-level code, such as kernel drivers, introduces a variety of hard problems that must be overcome by dynamic analysis tools (e.g., handling hardware peripherals). While some kernel-level dynamic analysis techniques have been proposed [23, 25, 29, 46], they are ill-suited for bug finding, as they were implemented as kernel monitors, not code verification tools. Thus, static source code analysis has long prevailed as the most promising technique for kernel code verification and bug finding, since it only requires access to the source code, which is typically available.

Unfortunately, kernel code is a worst-case scenario for static analysis because of the liberal use of pointers (i.e., both functions and arguments are frequently passed as pointers). As a result, tool builders must make a tradeoff between precision (i.e., not reporting too many false positives) and soundness (i.e., reporting all true positives). In practice, precise static analysis techniques have struggled because they are either computationally infeasible (i.e., because of the state explosion problem) or too specific (i.e., they only identify a very specific type of bug). Similarly, sound static analysis techniques, while capable of reporting all bugs, suffer from extremely high false-positive rates. This has forced researchers to make a variety of assumptions in order to implement practical analysis techniques. One empirical study [14] found that users would ignore a tool if its false-positive rate was higher than 30%, and would similarly discredit the analysis if it did not yield valuable results early in its use (e.g., within the first three warnings).

Nevertheless, numerous successful tools have been developed (e.g., Coverity [14], Linux Driver Verification [36], APISan [64]), and have provided invaluable insights into both the types and locations of bugs that exist in critical kernel code. These tools range from precise, unsound tools capable of detecting very specific classes of bugs (e.g., data leakages [32], proper fprintf usage [22], user pointer dereferences [16]) to sound, imprecise techniques that detect large classes of bugs (e.g., finding all usages of strcpy [55]). One notable early finding was that a disproportionate number of errors in the kernel were found in the drivers, or modules. It was shown that drivers accounted for seven times more bugs than core code in Linux [19] and 85% of the crashes in Windows XP [49]. These staggering numbers were attributed to lower overall code quality in drivers and improper implementations of the complex interactions with the kernel core by the third party supplying the driver. In 2011, Palix et al. [39] analyzed the Linux kernel again and showed that while drivers still accounted for the greatest number of bugs, which is likely because drivers make up 57% of the total code, the fault rates for drivers were no longer the highest. Our recent analysis of mainline Linux kernel commit messages found that 28% of CVE patches to the Linux repository in the past year involved kernel drivers (19% since 2005), which is in line with previous studies [17]. Meanwhile, the mobile domain has seen an explosion of new devices, and thus new drivers, introduced in recent years. The lack of attention being paid to these drivers, and their potential danger to the security of the devices, has also not gone unnoticed [47]. Recent studies even purport that mobile kernel drivers are, again, the source of up to 85% of the reported bugs in the Android [48] kernel. Yet, we are unaware of any large-scale analysis of these drivers.

In this work, we present DR.CHECKER, a fully-automated static-analysis tool capable of identifying numerous classes of bugs in Linux kernel drivers. DR.CHECKER is implemented as a completely modular framework, where both the types of analyses (e.g., points-to or taint) and the bug detectors (e.g., integer overflow or memory corruption detection) can be easily augmented. Our tool is based on well-known program analysis techniques and is capable of performing both pointer and taint analysis that is flow-, context-, and field-sensitive. DR.CHECKER employs a soundy [31] approach, which means that our technique is mostly sound, aside from a few well-defined assumptions that violate soundness in order to achieve higher precision. DR.CHECKER is the first (self-proclaimed) soundy static-analysis-based bug-finding tool, and, similarly, the first static analysis tool capable of large-scale analysis of general classes of bugs in driver code. We evaluated DR.CHECKER by analyzing nine popular mobile device kernels, 3.1 million lines of code (LOC), where it correctly reported 3,973 flaws and resulted in the discovery of 158 [6–10] previously unknown bugs. We also compared DR.CHECKER against four other popular static analysis tools, where it significantly outperformed all of them both in detection rates and total bugs identified.

In summary, we claim the following contributions:

• We present the first soundy static-analysis technique for pointer and taint analysis capable of large-scale analysis of Linux kernel drivers.

• We show that our technique is capable of flow-sensitive, context-sensitive, and field-sensitive analysis in a pluggable and general way that can easily be adapted to new classes of bugs.

• We evaluated our tool by analyzing the drivers of nine modern mobile devices, which resulted in the discovery of 158 zero-day bugs.

• We compare our tool to the existing state-of-the-art tools and show that we are capable of detecting more bugs with significantly higher precision, and with high-fidelity warnings.

• We are releasing DR.CHECKER as an open-source tool at github.com/ucsb-seclab/dr_checker.

2 Background

Kernel bug-finding tools have been continuously evolving as both the complexity and sheer volume of code in the world increase. While manual analysis and grep may have been sufficient for fortifying the early versions of the Linux kernel, these techniques are neither scalable nor rigorous enough to protect the kernels that are on our systems today. Ultimately, all of these tools are developed to raise warnings, which are then examined by a human analyst. Most of the initial, and more successful, bug-finding tools were based on grep-like functionality and pattern matching [45, 55, 57]. These tools evolved to reduce user interaction (i.e., removing the need for manual annotation of source code) by using machine learning and complex data structures to automatically identify potentially dangerous portions of code [41, 59–63]. While these tools have been shown to return useful results, identifying a number of critical bugs, most of them were developed based on empirical observation, without strong formal guarantees.

Model checkers (e.g., SLAM [13], BLAST [27], MOPS [18]) provided much more context and formalization, resulting in the detection of more interesting flaws. However, these techniques soon evolved into more rigorous tools, capable of more complex analyses (e.g., path-sensitive ESP [22]), and the more recent tools are capable of extracting far more information about the programs being analyzed to
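To make the taint analysis described above concrete, the following is a minimal sketch, in toy form, of flow- and field-sensitive taint propagation over a small three-address IR. It is our illustration only, not DR.CHECKER's actual LLVM-based implementation: the instruction names, the `ioctl`-style taint source, and all variable names are invented for this example. It flags the classic driver bug pattern the paper targets, where a user-controlled length reaches a kernel copy unchecked.

```python
def analyze(instructions):
    """Propagate taint through a list of (op, dst, srcs) tuples.

    Field-sensitivity: struct fields are tracked as separate cells,
    so 'req.len' is distinct from 'req.buf'.
    Flow-sensitivity: instructions are processed in program order, so
    overwriting a variable with an untainted value clears its taint.
    """
    tainted = set()
    warnings = []
    for op, dst, srcs in instructions:
        if op == "taint_source":
            # e.g. a value copied from user space in an ioctl handler
            tainted.add(dst)
        elif op == "assign":
            # dst = f(srcs): taint propagates if any source is tainted;
            # otherwise a strong update clears dst's taint
            if any(s in tainted for s in srcs):
                tainted.add(dst)
            else:
                tainted.discard(dst)
        elif op == "memcpy_len":
            # copy into buffer dst with length srcs[0]: a tainted,
            # unchecked length is a potential memory-corruption bug
            if srcs[0] in tainted:
                warnings.append(
                    f"tainted length '{srcs[0]}' used in copy to '{dst}'")
    return warnings

# A toy ioctl handler: the user-controlled request struct's `len` field
# flows, unchecked, into a kernel copy (warning), while a later copy with
# a constant size is correctly left alone.
prog = [
    ("taint_source", "req.len", []),            # from user space
    ("taint_source", "req.buf", []),
    ("assign",       "n",       ["req.len"]),   # n inherits taint
    ("memcpy_len",   "kbuf",    ["n"]),         # unchecked use: warns
    ("assign",       "n",       ["FIXED_SIZE"]),  # overwritten, untainted
    ("memcpy_len",   "kbuf2",   ["n"]),         # flow-sensitive: no warning
]

for w in analyze(prog):
    print("WARNING:", w)
```

Running the sketch reports exactly one warning, for the first copy; the second copy is silent because the strong update on `n` cleared its taint, which is the flow-sensitivity that pattern-matching tools like the grep-style checkers above cannot provide.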
