UC San Diego Previously Published Works

Total Pages: 16

File Type: PDF, Size: 1020 KB
Title: Liquid information flow control
Permalink: https://escholarship.org/uc/item/0t20j69d
Journal: Proceedings of the ACM on Programming Languages, 4(ICFP)
ISSN: 2475-1421
Authors: Polikarpova, Nadia; Stefan, Deian; Yang, Jean; et al.
Publication Date: 2020-08-02
DOI: 10.1145/3408987
Peer reviewed

eScholarship.org, Powered by the California Digital Library, University of California

Type-Driven Repair for Information Flow Security
(arXiv:1607.03445v1 [cs.PL], 12 Jul 2016)

Nadia Polikarpova, Massachusetts Institute of Technology
Jean Yang, Carnegie Mellon University
Shachar Itzhaky, Massachusetts Institute of Technology
Armando Solar-Lezama, Massachusetts Institute of Technology

Abstract

We present LIFTY, a language that uses type-driven program repair to enforce information flow policies. In LIFTY, the programmer specifies a policy by annotating the source of sensitive data with a refinement type, and the system automatically inserts access checks necessary to enforce this policy across the code. This is a significant improvement over current practice, where programmers manually implement access checks, and any missing check can cause an information leak.

To support this programming model, we have developed (1) an encoding of information flow security in terms of decidable refinement types that enables fully automatic verification and (2) a program repair algorithm that localizes unsafe accesses to sensitive data and replaces them with provably secure alternatives. We formalize the encoding and prove its noninterference guarantee. Our experience using LIFTY to implement a conference management system shows that it decreases policy burden and is able to efficiently synthesize all necessary access checks, including those required to prevent a set of reported real-world information leaks.

[Figure 1. Type-driven repair for policy-agnostic programs. Pipeline: a core program with policy specifications is verified; unsafe accesses become program holes, and the repair phase fills each hole with a policy check, producing a program with checks. Legend: specification, policy check, tool component.]

1. Introduction

Programs that compute over sensitive data are becoming more complex. In addition to directly displaying sensitive values, applications often support functionality such as search and aggregation. To protect users' privacy and prevent information from being disclosed to unauthorized users, programmers must implement access checks across the program. Any missing check may result in an information leak.

Traditional approaches to information flow control can detect leaks that result from missing access checks, but the amount of programmer effort required to write secure code remains high. Techniques for language-based information flow security [43] statically verify the absence of such leaks, but they require the programmer to first correctly implement information flow checks across the program and then additionally provide specifications of permitted flows. While dynamic approaches [9, 27, 54] decrease the annotation burden, the programmer remains responsible for correctly implementing checks in order to avoid exceptions or silent failures.

We take a policy-agnostic approach [7, 52], factoring out information flow policies from the rest of the program to mitigate programmer burden. In prior work on policy-agnostic programming [7, 52, 53], the programmer implements each information flow policy once, associated with data definitions, rather than as checks across the program. The remainder of the program may be free of checks, and the language runtime is responsible for steering dynamic behavior to adhere to policies. While this programming model makes it impossible to leak information through missing checks, the dynamic solution unfortunately involves potentially prohibitive overheads and unpredictable runtime behavior.

We take a static approach to avoid the pitfalls of runtime techniques for policy-agnostic programming.
Generating information flow checks seems like a perfect application for program synthesis, as what we want is to synthesize small code snippets corresponding to policy checks. To make the synthesis problem tractable, however, we need to be able to synthesize each check independently. Thus, we need a way to decompose the global repair problem into local synthesis problems. This is challenging because information flow checks may depend on both sources and sinks for sensitive data, and there may be complex computations in between.

To address these challenges, we propose a type-driven solution for policy-agnostic programming. At the core of our technique is the insight that we can use refinement types—types decorated with decidable predicates [41, 49]—to statically enforce a policy-compliant semantics. Refinement types have the advantages that (1) they support expressive policies, (2) type-checking is decidable, and (3) type inference can provide error localization while reducing annotation burden. In our solution, the programmer implements each information flow policy only once, as a type annotation, rather than as repeated checks across the program; the type checker is responsible for identifying unsafe flows, and a subsequent repair phase prevents these flows by automatically inserting conditional expressions implementing the policy checks into the code. The approach is fully static and requires no runtime analysis.

In this paper, we present (1) a policy-agnostic, security-typed language called LIFTY (Liquid Information Flow TYpes) and (2) a compiler that repairs LIFTY programs by inserting information flow checks. In Fig. 1 we show an overview of LIFTY and its compilation process. In LIFTY, the programmer specifies information flow policies as refinement types associated with sources of sensitive data. LIFTY's verifier uses type inference techniques inspired by liquid types [41] to produce a program with holes, where each hole corresponds to an unsafe data access paired with a local policy specification. The repair phase builds on an existing type-based program synthesis technique [39] to produce a policy check for each hole.
We formalize a core language for LIFTY, prove a noninterference property, and demonstrate the practical promise of our approach using an implementation of a LIFTY-to-Haskell compiler. This paper makes the following contributions:

• Static, type-based approach to policy-agnostic programming. We present a programming model that supports the implementation of information flow policies as refinement types, separately from other functionality. The compiler, rather than the programmer, becomes responsible for generating code to implement the policies.

• Verification of expressive type-based policies. In order to support policy-agnostic programming, we developed a verification technique for information flow security based on liquid type inference. The technique is sound, has minimal annotation overhead, and supports expressive policies and program constructs (such as recursion and higher-order functions).

• Formalization and proof of security guarantee. We formalize our verification approach in terms of refinement types [41, 49] and prove a non-interference property. Our proof technique uses a phantom encoding to reduce the problem of proving non-interference to the problem of proving type safety, within the same language.

• Multistage program ...

    showPaper w pid =
      let u   = getCurrentUser w
          out = do title   <- getPaperTitle w pid
                   authors <- getPaperAuthors w pid
                   return (title ++ ": " ++ show authors)
      in print w u out

Figure 2. Excerpt from a conference management server.

... and reviewers. The function showPaper in Fig. 2 gets as argument a paper ID pid, retrieves the paper's title and author information from the persistent store, and displays it to the current user. Note that the function has both read effects (retrieval from the store) and write effects (output to the user). For simplicity, we capture these effects by propagating a single additional argument w (of type World) through the code. We assume that a World value encapsulates both the state of the persistent store and the observations made by the users.

The body of showPaper accesses potentially sensitive data using accessor functions getCurrentUser, getPaperTitle, and getPaperAuthors. For example, implementing double-blind review requires hiding the list of paper authors from ordinary reviewers and making it visible only to the program chair. One way to enforce this policy using conventional programming paradigms is to guard the call to getPaperAuthors with a conditional that checks whether the session user is allowed to see the author list, and otherwise returns a constant default value. Note that the check is specific not only to the data element being read, but also to the eventual viewer of the result. (In this case it happens to be the session user, but this is not always the case.) Because of this, the problem cannot be solved simply by delegating the checks to accessor functions. Instead, a potentially different access check has to appear in every computation that involves getPaperAuthors, which quickly becomes tedious and error-prone.

LIFTY obviates the need for writing policy checks: as long as the programmer correctly annotates getPaperAuthors with the desired policy, the compiler will automatically guard each invocation of this function with an appropriate policy check. To this end, the compiler has to propagate the information about where a sensitive ...
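The guarded alternative described above is easy to see in code. Below is a minimal, self-contained Haskell sketch; the World record, the accessor bodies, and the Chair/Reviewer types are hypothetical stand-ins rather than LIFTY's actual syntax or API. It shows the conditional that LIFTY would synthesize around getPaperAuthors: only the program chair sees the author list, and every other viewer gets a constant default.

    module Main where

    data User = Chair | Reviewer String deriving (Eq, Show)

    -- Hypothetical in-memory stand-in for the persistent store and session state.
    data World = World
      { currentUser  :: User
      , paperTitle   :: String
      , paperAuthors :: [String]
      }

    getCurrentUser :: World -> User
    getCurrentUser = currentUser

    getPaperTitle :: World -> Int -> String
    getPaperTitle w _pid = paperTitle w

    -- Source of sensitive data; in LIFTY this is where the policy annotation lives.
    getPaperAuthors :: World -> Int -> [String]
    getPaperAuthors w _pid = paperAuthors w

    -- The access check LIFTY would insert at each use of getPaperAuthors:
    -- only the chair may see the author list; other viewers get a default value.
    guardedAuthors :: World -> User -> Int -> [String]
    guardedAuthors w viewer pid =
      if viewer == Chair then getPaperAuthors w pid else ["<anonymized>"]

    showPaper :: World -> Int -> String
    showPaper w pid =
      let u = getCurrentUser w
      in getPaperTitle w pid ++ ": " ++ show (guardedAuthors w u pid)

    main :: IO ()
    main = putStrLn (showPaper (World (Reviewer "r1") "Liquid IFC" ["Alice", "Bob"]) 42)

In LIFTY itself the programmer never writes guardedAuthors; the conditional is generated from the refinement-type annotation on getPaperAuthors.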
Recommended publications
  • Gradual Liquid Type Inference
    Gradual Liquid Type Inference
    NIKI VAZOU, IMDEA, Spain; ÉRIC TANTER, University of Chile, Chile; DAVID VAN HORN, University of Maryland, USA
    Refinement types allow for lightweight program verification by enriching types with logical predicates. Liquid typing provides a decidable refinement inference mechanism that is convenient but subject to two major issues: (1) inference is global and requires top-level annotations, making it unsuitable for inference of modular code components and prohibiting its applicability to library code, and (2) inference failure results in obscure error messages. These difficulties seriously hamper the migration of existing code to use refinements. This paper shows that gradual liquid type inference—a novel combination of liquid inference and gradual refinement types—addresses both issues. Gradual refinement types, which support imprecise predicates that are optimistically interpreted, can be used in argument positions to constrain liquid inference so that the global inference process effectively infers modular specifications usable for library components. Dually, when gradual refinements appear as the result of inference, they signal an inconsistency in the use of static refinements. Because liquid refinements are drawn from a finite set of predicates, in gradual liquid type inference we can enumerate the safe concretizations of each imprecise refinement, i.e., the static refinements that justify why a program is gradually well-typed. This enumeration is useful for static liquid type error explanation, since the safe concretizations exhibit all the potential inconsistencies that lead to static type errors. We develop the theory of gradual liquid type inference and explore its pragmatics in the setting of Liquid Haskell. To demonstrate the utility of our approach, we develop an interactive tool, GuiLT, for gradual liquid type inference in Liquid Haskell that both infers modular types and explores safe concretizations of gradual refinements.
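    A rough illustration of the idea follows. It is only a sketch: the {-@ ... @-} annotations are comments to GHC, and the "?" gradual refinement follows the paper's notation rather than stock Liquid Haskell syntax. A gradual refinement in an argument position of a library function leaves a hole that inference can concretize into a modular precondition.

        module GradualSketch where

        -- Library function: a gradual refinement "?" on the divisor lets liquid
        -- inference search for a safe static concretization, e.g. {d:Int | d /= 0},
        -- instead of requiring a global, whole-program analysis.
        {-@ divide :: Int -> {d:Int | ?} -> Int @-}
        divide :: Int -> Int -> Int
        divide n d = n `div` d

        -- Client: gradually well-typed if some concretization of "?" justifies this use.
        clientOk :: Int
        clientOk = divide 10 2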
  • Relatively Complete Refinement Type System for Verification of Higher-Order Non-Deterministic Programs
    Relatively Complete Refinement Type System for Verification of Higher-Order Non-deterministic Programs
    HIROSHI UNNO, University of Tsukuba, Japan; YUKI SATAKE, University of Tsukuba, Japan; TACHIO TERAUCHI, Waseda University, Japan
    This paper considers verification of non-deterministic higher-order functional programs. Our contribution is a novel type system in which the types are used to express and verify (conditional) safety, termination, non-safety, and non-termination properties in the presence of ∀-∃ branching behavior due to non-determinism. For instance, the judgement ⊢∀∀ e : (u:int | ϕ(u)) says that every evaluation of e either diverges or reduces to some integer u satisfying ϕ(u), whereas ⊢∃∀ e : (u:int | ψ(u)) says that there exists an evaluation of e that either diverges or reduces to some integer u satisfying ψ(u). Note that the former is a safety property whereas the latter is a counterexample to a (conditional) termination property. Following the recent work on type-based verification methods for deterministic higher-order functional programs, we formalize the idea on the foundation of dependent refinement types, thereby allowing the type system to express and verify rich properties involving program values, branching behaviors, and the combination thereof. Our type system is able to seamlessly combine deductions of both universal and existential facts within a unified framework, paving the way for an exciting opportunity for new type-based verification methods that combine both universal and existential reasoning. For example, our system can prove the existence of a path violating some safety property from a proof of termination that uses a well-foundedness termination argument.
  • Refinement Types for ML
    Refinement Types for ML
    Tim Freeman and Frank Pfenning, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3890
    Abstract. We describe a refinement of ML's type system allowing the specification of recursively defined subtypes of user-defined datatypes. The resulting system of refinement types preserves desirable properties of ML such as decidability of type inference, while at the same time allowing more errors to be detected at compile-time. The type system combines abstract interpretation with ideas from the intersection type discipline, but remains closely tied to ML in that refinement types are given only to programs which are already well-typed in ML.
    1 Introduction. Standard ML [MTH90] is a practical programming language with higher-order functions, polymorphic types, and a well-developed module system. [...] tended to the full Standard ML language. To see the opportunity to improve ML's type system, consider the following function which returns the last cons cell in a list:

        datatype α list = nil | cons of α * α list
        fun lastcons (last as cons(hd,nil)) = last
          | lastcons (cons(hd,tl)) = lastcons tl

    We know that this function will be undefined when called on an empty list, so we would like to obtain a type error at compile-time when lastcons is called with an argument of nil. Using refinement types this can be achieved, thus preventing runtime errors which could be caught at compile-time. Similarly, we would like to be able to write code such as

        case lastcons y of
          cons(x,nil) => print x
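    For comparison, here is a hedged analogue of that motivation in Liquid-Haskell-style notation; the paper itself refines SML datatypes with datasorts, not this syntax. Giving lastcons a type that demands a non-empty list rules out the call on nil at compile time.

        module LastCons where

        {-@ measure len @-}
        len :: [a] -> Int
        len []       = 0
        len (_ : xs) = 1 + len xs

        -- Only defined on non-empty lists; the refinement makes the nil case unreachable.
        {-@ lastcons :: {v:[a] | len v > 0} -> {v:[a] | len v > 0} @-}
        lastcons :: [a] -> [a]
        lastcons [x]      = [x]
        lastcons (_ : xs) = lastcons xs
        lastcons []       = error "unreachable when the refinement is enforced"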
  • UML Profile for Schedulability, Performance, and Time Specification
    UML Profile for Schedulability, Performance, and Time Specification. September 2003, Version 1.0, formal/03-09-01. An Adopted Specification of the Object Management Group, Inc. Copyright © 2001, ARTiSAN Software Tools, Inc. Copyright © 2001, I-Logix, Inc. Copyright © 2003, Object Management Group Copyright © 2001, Rational Software Corp. Copyright © 2001, Telelogic AB Copyright © 2001, TimeSys Corporation Copyright © 2001, Tri-Pacific Software USE OF SPECIFICATION - TERMS, CONDITIONS & NOTICES The material in this document details an Object Management Group specification in accordance with the terms, conditions and notices set forth below. This document does not represent a commitment to implement any portion of this specification in any company's products. The information contained in this document is subject to change without notice. LICENSES The companies listed above have granted to the Object Management Group, Inc. (OMG) a nonexclusive, royalty-free, paid up, worldwide license to copy and distribute this document and to modify this document and distribute copies of the modified version. Each of the copyright holders listed above has agreed that no person shall be deemed to have infringed the copyright in the included material of any such copyright holder by reason of having used the specification set forth herein or having conformed any computer software to the specification. Subject to all of the terms and conditions below, the owners of the copyright in this specification hereby grant you a fully-paid up, non-exclusive, nontransferable,
  • Refinement Session Types
    MENG INDIVIDUAL PROJECT, IMPERIAL COLLEGE LONDON, DEPARTMENT OF COMPUTING. Refinement Session Types. Supervisor: Prof. Nobuko Yoshida. Author: Fangyi Zhou. Second Marker: Dr. Iain Phillips. 16th June 2019.
    Abstract. We present an end-to-end framework to statically verify multiparty concurrent and distributed protocols with refinements, where refinements are in the form of logical constraints. We combine the theory of multiparty session types and refinement types and provide a type system approach for lightweight static verification. We formalise a variant of the λ-calculus, extended with refinement types, and prove their type safety properties. Based on the formalisation, we implement a refinement type system extension for the F# language. We design a functional approach to generate APIs with refinement types from a multiparty protocol in F#. We generate handler-styled APIs, which statically guarantee the linear usage of channels. With our refinement type system extension, we can check whether the implementation is correct with respect to the refinements. We evaluate the expressiveness of our system using three case studies of refined protocols.
    Acknowledgements. I would like to thank Prof. Nobuko Yoshida, Dr. Francisco Ferreira, Dr. Rumyana Neykova and Dr. Raymond Hu for their advice, support and help during the project. Without them I would not be able to complete this project. I would like to thank Prof. Paul Kelly, Prof. Philippa Gardner and Prof. Sophia Drossopoulou, whose courses enlightened me to explore the area of programming languages. Without them I would not be motivated to discover more in this area. I would like to thank Prof. Paul Kelly again, this time for being my personal tutor.
  • Generative Contracts
    Generative Contracts, by Soheila Bashardoust Tajali, B.C.S., M.C.S. A Thesis submitted to the Faculty of Graduate and Postdoctoral Affairs in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science. Ottawa-Carleton Institute for Computer Science, School of Computer Science, Carleton University, Ottawa, Ontario, November 2012. © Copyright 2012, Soheila Bashardoust Tajali.
    Library and Archives Canada, Published Heritage Branch, 395 Wellington Street, Ottawa ON K1A 0N4, Canada. ISBN: 978-0-494-94208-6.
    NOTICE: The author has granted a non-exclusive license allowing Library and Archives Canada to reproduce, publish, archive, preserve, conserve, communicate to the public by telecommunication or on the Internet, loan, distribute and sell theses worldwide, for commercial or non-commercial purposes, in microform, paper, electronic and/or any other formats. The author retains copyright ownership and moral rights in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.
  • Local Refinement Typing
    Local Refinement Typing
    BENJAMIN COSMAN, UC San Diego; RANJIT JHALA, UC San Diego
    We introduce the Fusion algorithm for local refinement type inference, yielding a new SMT-based method for verifying programs with polymorphic data types and higher-order functions. Fusion is concise as the programmer need only write signatures for (externally exported) top-level functions and places with cyclic (recursive) dependencies, after which Fusion can predictably synthesize the most precise refinement types for all intermediate terms (expressible in the decidable refinement logic), thereby checking the program without false alarms. We have implemented Fusion and evaluated it on the benchmarks from the LiquidHaskell suite totalling about 12KLOC. Fusion checks an existing safety benchmark suite using about half as many templates as previously required and nearly 2× faster. In a new set of theorem-proving benchmarks Fusion is both 10-50× faster and, by synthesizing the most precise types, avoids false alarms to make verification possible.
    Additional Key Words and Phrases: Refinement Types, Program Logics, SMT, Verification
    ACM Reference format: Benjamin Cosman and Ranjit Jhala. 2017. Local Refinement Typing. Proc. ACM Program. Lang. 1, 1, Article 26 (September 2017), 31 pages. https://doi.org/10.1145/3110270
    1 INTRODUCTION. Refinement types are a generalization of Floyd-Hoare logics to the higher-order setting. Assertions are generalized to types by constraining basic types so that their inhabitants satisfy predicates drawn from SMT-decidable refinement logics. Assertion checking can then be generalized to a form of subtyping, where subtyping for basic types reduces to checking implications between their refinement predicates [Constable 1986; Rushby et al. ...]
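    A small sketch of the annotation discipline this enables (Liquid-Haskell-style comments; the function and names are hypothetical): the programmer writes a refinement signature only for the exported top-level function, and a Fusion-style algorithm infers the refinements of the local helper and intermediate terms.

        module AbsSum (absSum) where

        -- The only refinement the programmer writes: the exported signature.
        {-@ absSum :: [Int] -> {v:Int | v >= 0} @-}
        absSum :: [Int] -> Int
        absSum xs = foldr step 0 xs
          where
            -- No annotation here: local inference can synthesize, e.g.,
            -- step :: Int -> {acc:Int | acc >= 0} -> {v:Int | v >= 0}
            step x acc = abs x + acc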
  • Substructural Typestates
    Substructural Typestates
    Filipe Militão (Carnegie Mellon University & Universidade Nova de Lisboa), Jonathan Aldrich (Carnegie Mellon University), Luís Caires (Universidade Nova de Lisboa)
    Abstract. Finding simple, yet expressive, verification techniques to reason about both aliasing and mutable state has been a major challenge for static program verification. One such approach, of practical relevance, is centered around a lightweight typing discipline where types denote abstract object states, known as typestates. In this paper, we show how key typestate concepts can be precisely captured by a substructural type-and-effect system, exploiting ideas from linear and separation logic. Building on this foundation, we show how a small set of primitive concepts can be composed to express high-level idioms such as objects with multiple independent state dimensions, dynamic state tests, and behavior-oriented usage protocols that enforce strong information hiding.
    ... protocols, of the kind captured by typestate systems [6, 16, 40, 41], are a considerable obstacle to producing reliable code. For instance, in Java a FileOutputStream type represents both the open and closed states of that abstraction by relying on dynamic tests to detect incorrect uses (such as writing to a closed stream) instead of statically tracking the changing properties of the type. A diverse set of techniques (such as alias types [2, 13, 15, 39], typestate [6, 16, 34], behavioral types [10], and others [20, 26, 44, 45, 47]; discussed in more detail in Section 4) have been proposed to tackle this problem, with various degrees of expressiveness. In this work, we reconstruct several core concepts introduced in practical typestate systems (e.g., [6]), by modeling them in a ...
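    The FileOutputStream example can be sketched even in plain Haskell by encoding the two states as phantom types. This is an assumed encoding, far weaker than the paper's substructural system, which additionally controls aliasing of the stream value; it only shows the core idea that writing to a closed stream becomes a type error instead of a dynamic test.

        {-# LANGUAGE DataKinds, KindSignatures #-}
        module TypestateSketch where

        data StreamState = Open | Closed

        newtype Stream (s :: StreamState) = Stream FilePath

        openStream :: FilePath -> Stream 'Open
        openStream = Stream

        writeStream :: Stream 'Open -> String -> IO ()
        writeStream (Stream p) s = appendFile p s

        closeStream :: Stream 'Open -> Stream 'Closed
        closeStream (Stream p) = Stream p

        demo :: IO ()
        demo = do
          let h = openStream "out.txt"
          writeStream h "hello\n"      -- accepted: h is in the Open state
          let _c = closeStream h
          -- writeStream _c "again"    -- rejected at compile time: 'Closed is not 'Open
          return ()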
  • Refinement Type Inference via Horn Constraint Optimization
    Refinement Type Inference via Horn Constraint Optimization
    Kodai Hashimoto and Hiroshi Unno, University of Tsukuba
    Abstract. We propose a novel method for inferring refinement types of higher-order functional programs. The main advantage of the proposed method is that it can infer maximally preferred (i.e., Pareto optimal) refinement types with respect to a user-specified preference order. The flexible optimization of refinement types enabled by the proposed method paves the way for interesting applications, such as inferring most-general characterizations of inputs for which a given program satisfies (or violates) a given safety (or termination) property. Our method reduces such a type optimization problem to a Horn constraint optimization problem by using a new refinement type system that can flexibly reason about non-determinism in programs. Our method then solves the constraint optimization problem by repeatedly improving a current solution until convergence via template-based invariant generation. We have implemented a prototype inference system based on our method, and obtained promising results in preliminary experiments.
    1 Introduction. Refinement types [5, 20] have been applied to safety verification of higher-order functional programs. Some existing tools [9, 10, 16-19] enable fully automated verification by refinement type inference based on invariant generation techniques such as abstract interpretation, predicate abstraction, and CEGAR. The goal of these tools is to infer refinement types precise enough to verify a given safety specification. Therefore, types inferred by these tools are often too specific to the particular specification, and hence have limited applications. To remedy the limitation, we propose a novel refinement type inference method that can infer maximally preferred (i.e., Pareto optimal) refinement types with respect to a user-specified preference order.
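    A toy illustration of the reduction follows (hypothetical annotations in Liquid-Haskell-like notation; P is the unknown refinement the solver must find): the typing constraints of a two-function program become Horn clauses over P, and an optimizing solver picks a maximally preferred solution among those that satisfy them.

        module HornSketch where

        {-@ inc :: x:Int -> {v:Int | P v x} @-}   -- P: unknown refinement predicate
        inc :: Int -> Int
        inc x = x + 1

        {-@ client :: {n:Int | n >= 0} -> {v:Int | v > 0} @-}
        client :: Int -> Int
        client n = inc n

        -- Horn constraints generated for P (informally):
        --   v = x + 1           ==>  P v x       (from the body of inc)
        --   n >= 0 && P v n     ==>  v > 0       (from the use of inc in client)
        -- Both P v x := (v = x + 1) and the weaker P v x := (v > x) satisfy the clauses;
        -- a preference order over such solutions is what the optimization chooses among.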
  • 1 Refinement Types
    1 Refinement Types
    This section is primarily based on Principles of Type Refinement, Noam Zeilberger, OPLSS 2016.
    The concept of refinement types is quite general. So general, in fact, that it is not immediately obvious that the various publications claiming to present refinement type systems are founded upon a common framework. Nonetheless, Noam Zeilberger recently published a tutorial presenting such a framework. Before shifting to a more historical perspective, we will examine Zeilberger's framework so that we understand the presented systems as refinement type systems rather than ad-hoc ones.
    Definition 1.1. Refinement type system: an extra layer of typing on an existing type system.
    Remark 1.2. Type refinement systems are conservative: they preserve the properties of the underlying type system. The framework to be presented assumes the existing type system does not include subtyping.
    Definition 1.3 (Type syntax).
      T, S, U (types)        ::= Int | Bool | T → T
      ρ, π, σ (refinements)  ::= rInt | rBool | ρ → ρ
    Remark 1.4. A refinement type system decomposes each base type of the underlying system into a preordered set of base refinement types. In our example system, these sets (rInt and rBool) happen to be lattices.
    Definition 1.5 (Base refinements). The preorder ⟨rInt, ≤Int⟩ is the lattice with top 1Int (all integers), NN (non-negative) and NP (non-positive) below it, Z (zero only) below both of those, and bottom 0Int (empty); the preorder ⟨rBool, ≤Bool⟩ is the lattice with top 1Bool, True and False below it, and bottom 0Bool.
    Remark 1.6 (Type semantics). Recall that a type T can be interpreted as a set of closed values; e.g., ⟦Bool⟧ = {true, false}, where true and false are values in a term language. For distinct S and T the underlying system should have ⟦T⟧ ∩ ⟦S⟧ = ∅.
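    The base-refinement preorder of Definition 1.5 is small enough to write down directly; the following sketch (assumed constructor names) implements it as subset inclusion of the denoted sets of integers.

        module BaseRefinements where

        -- 0Int, Z (zero only), NN (non-negative), NP (non-positive), 1Int.
        data RInt = BotInt | Zero | NonNeg | NonPos | TopInt deriving (Eq, Show)

        leqInt :: RInt -> RInt -> Bool
        leqInt BotInt _      = True      -- 0Int denotes the empty set
        leqInt _      TopInt = True      -- 1Int denotes all integers
        leqInt Zero   NonNeg = True      -- {0} is contained in the non-negatives
        leqInt Zero   NonPos = True      -- ... and in the non-positives
        leqInt r      r'     = r == r'

        -- The most precise refinement a literal inhabits.
        refineLit :: Int -> RInt
        refineLit n
          | n == 0    = Zero
          | n > 0     = NonNeg
          | otherwise = NonPos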
  • Refinement Types for Secure Implementations
    Refinement Types for Secure Implementations
    Jesper Bengtson (Uppsala University), Karthikeyan Bhargavan (Microsoft Research), Cédric Fournet (Microsoft Research), Andrew D. Gordon (Microsoft Research), Sergio Maffeis (Imperial College London and University of California at Santa Cruz)
    Abstract. We present the design and implementation of a typechecker for verifying security properties of the source code of cryptographic protocols and access control mechanisms. The underlying type theory is a λ-calculus equipped with refinement types for expressing pre- and post-conditions within first-order logic. We derive formal cryptographic primitives and represent active adversaries within the type theory. Well-typed programs enjoy assertion-based security properties, with respect to a realistic threat model including key compromise. The implementation amounts to an enhanced typechecker for the general purpose functional language F#; typechecking generates verification conditions that are passed to an SMT solver. We describe a series of checked examples. This is the first tool to verify authentication properties of cryptographic protocols by typechecking their source code.
    ... automatically extract models from F# code and, after applying various program transformations, pass them to ProVerif, a cryptographic analyzer [Blanchet, 2001, Abadi and Blanchet, 2005]. Their approach yields verified security for very detailed models, but also demands considerable care in programming, in order to control the complexity of global cryptographic analysis for giant protocols. Even if ProVerif scales up remarkably well in practice, beyond a few message exchanges, or a few hundred lines of F#, verification becomes long (up to a few days) and unpredictable (with trivial code changes leading to divergence).
    Cryptographic Verification meets Program Verification
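    As a loose analogue of the pre-/post-condition style (Liquid-Haskell-flavored comments on plain Haskell, not the paper's F#/F7 syntax; canRead is an assumed, uninterpreted policy predicate), the refinement on the third argument demands static evidence that the principal may read the file, so no dynamic access check is needed in the body.

        module AccessControlSketch where

        type User = String
        type File = String

        -- Assumed uninterpreted policy predicate living in the refinement logic.
        {-@ measure canRead :: User -> File -> Bool @-}

        -- Precondition expressed as a refinement: callers must prove canRead u f.
        {-@ readFor :: u:User -> f:File -> {ok:() | canRead u f} -> IO String @-}
        readFor :: User -> File -> () -> IO String
        readFor _u f _evidence = readFile f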
  • Refinement Types for Secure Implementations
    Refinement Types for Secure Implementations
    Jesper Bengtson (Uppsala University), Karthikeyan Bhargavan (Microsoft Research), Cédric Fournet (Microsoft Research), Andrew D. Gordon (Microsoft Research), Sergio Maffeis (Imperial College London)
    Revision of November 2010. Technical Report MSR-TR-2008-118. Microsoft Research, Roger Needham Building, 7 J.J. Thomson Avenue, Cambridge, CB3 0FB, United Kingdom.
    Abstract. We present the design and implementation of a typechecker for verifying security properties of the source code of cryptographic protocols and access control mechanisms. The underlying type theory is a λ-calculus equipped with refinement types for expressing pre- and post-conditions within first-order logic. We derive formal cryptographic primitives and represent active adversaries within the type theory. Well-typed programs enjoy assertion-based security properties, with respect to a realistic threat model including key compromise. The implementation amounts to an enhanced typechecker for the general purpose functional language F#; typechecking generates verification conditions that are passed to an SMT solver.
    ... minimize the gap between executable and verified code. They automatically extract models from F# code and, after applying various program transformations, pass them to ProVerif, a cryptographic analyzer [Blanchet, 2001, Abadi and Blanchet, 2005]. Their approach yields verified security for very detailed models, but also demands considerable care in programming, in order to control the complexity of global cryptographic analysis for giant protocols. Even if ProVerif scales up remarkably well in practice, beyond a few message exchanges, or a few hundred lines of F#, verification becomes long (up to a few days) and unpredictable ...