PLDI'06 Tutorial T1: Enforcing and Expressing Security with Programming Languages
Andrew Myers, Cornell University
http://www.cs.cornell.edu/andru

Computer security
- Goal: prevent bad things from happening:
  - Clients not paying for services
  - Critical services unavailable
  - Confidential information leaked
  - Important information damaged
  - System used to violate laws (e.g., copyright)
- Conventional security mechanisms aren't up to the challenge

Harder & more important
- In the '70s, computing systems were isolated:
  - software updates were done infrequently, by an experienced administrator
  - you trusted the (few) programs you ran
  - physical access was required
  - crashes and outages didn't cost billions
- The Internet has changed all of this:
  - we depend upon the infrastructure for everyday services
  - you have no idea what programs do
  - software is constantly updated, sometimes without your knowledge or consent
  - a hacker in the Philippines is as close as your neighbor
  - everything is executable (e.g., web pages, email)

Language-based security
- Conventional security treats the program as a black box:
  - Encryption
  - Firewalls
  - System calls / privileged mode
  - Process-level privilege and permissions-based access control
- This prevents addressing important security issues:
  - Downloaded and mobile code
  - Buffer overruns and other safety problems
  - Extensible systems
  - Application-level security policies
  - System-level security validation
- Languages and compilers to the rescue!

Outline
- The need for language-based security
- Security principles
- Security properties
- Memory and type safety
- Encapsulation and access control
- Certifying compilation and verification
- Security types and information flow
- Handouts: copy of slides
- Web site: updated slides, bibliography: www.cs.cornell.edu/andru/pldi06-tutorial

Security principles

Conventional OS security
- Model: the program is a black box
  - The program talks to the OS only via a protected interface (system calls)
- Multiplex hardware
- Isolate processes from each other
- Restrict access to persistent data (files)
- + Language-independent, simple, limited
- [Diagram: user-level program running on the operating system kernel, with hardware memory protection]

Access control model
- The classic way to prevent "bad things" from happening
- Requests to access resources (objects) are made by principals
- A reference monitor (e.g., the kernel) permits or denies each request
- [Diagram: Principal --request--> Reference Monitor --> Object (Resource)]

Authentication vs. Authorization
- The abstraction of a principal divides enforcement into two parts:
  - Authentication: who is making the request?
  - Authorization: is this principal allowed to make this request?

1st guideline for security
- Principle of complete mediation: every access to every object must be checked by the reference monitor
- Problem: OS-level security does not support complete mediation
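To make the reference-monitor picture concrete, here is a minimal sketch in C of complete mediation over a toy access-control policy. The struct, the policy table, and reference_monitor are illustrative names invented for this sketch, not an API from the tutorial; a real monitor (an OS kernel, or an application-level monitor such as the Java security manager) is far richer.

    #include <stdio.h>
    #include <string.h>

    /* A request names a principal, an object (resource), and an operation. */
    struct request {
        const char *principal;   /* authenticated identity making the request */
        const char *object;      /* resource being accessed                   */
        const char *operation;   /* e.g., "read", "write"                     */
    };

    /* A toy access-control policy: who may perform what on which object. */
    static const struct request policy[] = {
        { "alice", "/secret/plan", "read"  },
        { "bob",   "/public/log",  "write" },
    };

    /* The reference monitor: every access must pass through this check
       (complete mediation).  Returns 1 to permit, 0 to deny.            */
    static int reference_monitor(const struct request *r)
    {
        for (size_t i = 0; i < sizeof policy / sizeof policy[0]; i++) {
            if (strcmp(policy[i].principal, r->principal) == 0 &&
                strcmp(policy[i].object,    r->object)    == 0 &&
                strcmp(policy[i].operation, r->operation) == 0)
                return 1;
        }
        return 0;   /* default deny */
    }

    int main(void)
    {
        struct request req = { "alice", "/secret/plan", "read" };
        printf("%s %s %s: %s\n", req.principal, req.operation, req.object,
               reference_monitor(&req) ? "permitted" : "denied");
        return 0;
    }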
OS: Coarse-grained control
- The operating system enforces security at the system-call layer
- It is hard to control an application when it is not making system calls
- Security enforcement decisions are made with regard to large-granularity objects:
  - Files, sockets, processes
- Coarse notion of principal:
  - If you run an untrusted program, should the authorizing principal be "you"?

Need: fine-grained control
- Modern programs make security decisions with respect to application abstractions:
  - UI: access control at the window level
  - mobile code: no network send after file read
  - E-commerce: no goods until payment
  - intellectual property rights management
- Need an extensible, reusable mechanism for enforcing security policies
- Language-based security can support an extensible protected interface, e.g., Java security

2nd guideline for secure design
- Principle of Least Privilege: each principal is given the minimum access needed to accomplish its task. [Saltzer & Schroeder '75]
- Examples:
  + Administrators don't run day-to-day tasks as root, so "rm -rf /" won't wipe the disk.
  - fingerd runs as root so it can access different users' .plan files. But then it can also "rm -rf /".

Least privilege problems
- OS privilege is coarse-grained: user/group
- Applications need finer granularity:
  - Web applications: principals unrelated to OS principals
- Who is the "real" principal?
  - Trusted program? Full power of the user principal
  - Untrusted? Something less
  - Trusted program with an untrusted extension: ?
  - Untrusted program accessing a trusted subsystem: ?
- Requests may filter through a chain of programs or hosts
  - Loss of information is typical
  - E.g., client browser → web server → web app → database
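The fingerd example is the classic least-privilege failure: the daemon keeps root authority for its whole lifetime even though it needs it only briefly, if at all. One standard mitigation is to drop privileges as soon as the privileged work is done. The following is a minimal sketch assuming a POSIX system and a hypothetical unprivileged account "nobody"; real daemons must also handle supplementary groups, chroot or capabilities, and the other details this sketch omits.

    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* ... do any work that genuinely requires root here
           (e.g., bind a privileged port), then drop privileges ... */

        struct passwd *pw = getpwnam("nobody");   /* hypothetical unprivileged user */
        if (pw == NULL) {
            perror("getpwnam");
            return EXIT_FAILURE;
        }

        /* Order matters: drop the group first, then the user id; once the
           user id is gone we no longer have the right to change groups.  */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            return EXIT_FAILURE;
        }

        /* From here on, a bug (or an "rm -rf /") runs with the authority
           of "nobody", not root: least privilege in action.              */
        printf("now running as uid %d\n", (int)getuid());
        return EXIT_SUCCESS;
    }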
3rd guideline: Small TCB
- Trusted Computing Base (TCB): the components whose failure compromises the security of a system
- Example: the TCB of an operating system includes the kernel, the memory protection system, the disk image
- Small/simple TCB ⇒ TCB correctness can be checked/tested/reasoned about more easily ⇒ more likely to work
- Large/complex TCB ⇒ TCB contains bugs enabling security violations

Small TCB and LBS
- Conventional wisdom (c. 1975): "the operating system is small and simple, the compiler is large and complex"
  - The OS is a small TCB, the compiler a large one
- c. 2003: OS (Win2k) = 50M lines of code, compiler ~100K lines of code
- Hard to show the OS is implemented correctly:
  - Many authors (untrustworthy: device drivers)
  - Implementation bugs often create security holes
- Problem: a modern OS is huge, impossible to verify
- Can now prove compilation and type checking correct:
  - Easier than an OS: smaller, functional, not concurrent

The Gold Standard [Lampson]
- Authenticate: every access/request is associated with the correct principal
- Authorize: complete mediation of accesses
- Audit: recorded authorization decisions enable after-the-fact enforcement and identification of problems

When to enforce security
- Possible times to respond to security violations:
  - Before execution: analyze, reject, rewrite
  - During execution: monitor, log, halt, change
  - After execution: roll back, restore, audit, sue, call the police
- Language-based techniques can help

Language-based techniques
- A complementary tool in the arsenal: programs don't have to be black boxes! Options:
  1. Analyze programs at compile time or load time to ensure that they are secure
  2. Check analyses at load time to reduce the TCB
  3. Transform programs at compile/load/run time so that they can't violate security, or to log actions for auditing

Maturity of language tools
- Some things have been learned in the last 25 years:
  - How to build a sound, expressive type system that provably enforces run-time type safety ⇒ protected interfaces
  - Type systems expressive enough to encode multiple high-level languages ⇒ language independence
  - How to build fast garbage collectors ⇒ trustworthy pointers
  - On-the-fly code generation and optimization ⇒ high performance

Caveat: assumptions and abstraction
- Arguments for security always rest on assumptions:
  - "the attacker does not have physical access to the hardware"
  - "the code of the program cannot be modified during execution"
  - "no one is monitoring the EM output of the computer"
- Assumptions are vulnerabilities
  - Sometimes known, sometimes not
- Assumptions arise from abstraction:
  - security analysis is only tractable on a simplification (abstraction) of the actual system
  - abstraction hides details (assumption: they are unimportant)
- Caveat: language-based methods often abstract aspects of computer systems
  - Need other runtime and hardware enforcement mechanisms to ensure the language abstraction isn't violated: a separation of concerns

A sampler of attacks

Attack: buffer overruns

    char buf[100];
    …
    gets(buf);

- The attacker gives a long input that overwrites the function's return address and local variables
- [Diagram: stack layout showing the program, the stack pointer sp, buf, the saved return address, and the attacker's payload]
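The call to gets cannot be told how large buf is, so any input longer than 100 bytes runs past the buffer and overwrites adjacent stack memory, including the saved return address. A minimal repair, shown here as a sketch rather than a complete defense, is a bounded read such as fgets:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[100];

        /* fgets reads at most sizeof buf - 1 bytes and NUL-terminates, so an
           overly long line is truncated instead of smashing the stack. */
        if (fgets(buf, sizeof buf, stdin) == NULL)
            return 1;

        buf[strcspn(buf, "\n")] = '\0';   /* strip the trailing newline, if any */
        printf("read: %s\n", buf);
        return 0;
    }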
Execute-only bit?
- Stack smashing executes code on the stack: mark the stack non-executable?
- A return-to-libc attack defeats this:

    void system(char * arg) {
        r0 = arg;
        execl(r0, ...);   // "return" here with r0 set
        ...
    }

- Not all dangerous
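Marking memory non-executable is something a runtime can also do explicitly, not only via OS and hardware defaults. The sketch below assumes a Linux-like system with mmap and mprotect and a hypothetical code-buffer scenario: it removes execute permission from a page, which blocks injected code but, as the slide points out, does nothing against return-to-libc attacks that only reuse code already marked executable. On hardened systems the initial writable-and-executable mapping may itself be refused.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);

        /* Map one page that is readable, writable, and executable. */
        void *p = mmap(NULL, (size_t)pagesize, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        /* An injected payload copied here could be jumped to ...            */

        /* ... unless execute permission is removed: jumping into this page
           now faults, while calls into existing libc code are unaffected.   */
        if (mprotect(p, (size_t)pagesize, PROT_READ | PROT_WRITE) != 0) {
            perror("mprotect");
            return EXIT_FAILURE;
        }

        printf("page at %p is now non-executable\n", p);
        munmap(p, (size_t)pagesize);
        return EXIT_SUCCESS;
    }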