Towards Short-Lived Certificates


Emin Topalovic∗, Brennan Saeta∗, Lin-Shung Huang†, Collin Jackson† and Dan Boneh∗
∗Stanford University, {emint, saeta, [email protected]}
†Carnegie Mellon University, {linshung.huang, [email protected]}

Abstract—The Online Certificate Status Protocol (OCSP) is as good as dead. It imposes a massive performance penalty on web traffic and has failed to mitigate the recent high-profile security breaches of certificate authorities. Citing these fundamental protocol flaws, Google Chrome, one of the world’s most popular browsers, is permanently disabling OCSP and taking direct ownership of certificate revocation. We will soon be living in a post-OCSP world in which Google has become a single point of failure for certificate validation. We argue that certificate authorities should reassert control over the certificate revocation process by issuing certificates with a very short lifetime. These certificates complement browser-based revocation by allowing certificate authorities to revoke certificates without the cooperation of browser vendors, and without imposing a performance penalty on web traffic.

We have implemented a prototype certificate authority and a certificate-update plugin for Apache that demonstrate the feasibility of short-lived certificates. We have also implemented client-side pinning to short-lived certificates in the Chromium browser. Finally, we show that short-lived certificates complement browser-based revocation and address its major limitations; the two mechanisms can be combined to achieve a secure, performant, and backwards-compatible browsing experience on the web.

I. INTRODUCTION

X.509 certificates are widely used by web browsers to authenticate destination servers [1]. It is current standard practice for certificate authorities (CAs) to issue certificates with a relatively long validity period, such as a year or longer. Ideally, certificates are expected to be used for their entire validity period, but unfortunately a certificate can go bad before its expiration date for many reasons: the private key corresponding to the certificate can be stolen, the certificate could have been issued fraudulently, or the certificate could simply have been reissued (e.g., due to a change of name or association). For these reasons, X.509 defines mechanisms, including certificate revocation lists (CRLs) [1] and the Online Certificate Status Protocol (OCSP) [2], for revoking a certificate prior to its expiration date.

Notably, recent security breaches of certificate authorities (i.e., Comodo [3] and DigiNotar [4]) have resulted in fraudulently issued certificates exposed in the wild. However, current revocation mechanisms have been ineffective, as discussed in the next section, and browser vendors were forced to issue software updates to block bad certificates instead of relying on revocation checks. In fact, Google has announced plans to disable OCSP altogether in Chrome — one of the world’s most popular browsers [5] — and instead reuse its existing software-update mechanism to maintain a list of revoked certificates on its clients. For space considerations, Google’s global CRL is not exhaustive, and can exclude revocations that happen for purely administrative reasons. Network filtering attacks that block updates are still possible, but would require constant blocking from the time of revocation. Google’s decision is mainly due to the ineffectiveness of soft-fail revocation checks, as well as the obvious costs in performance and user privacy [6].

While revocation by software update may work well for Chrome, it can be quite difficult on a number of platforms such as smartphones, where issuing a software update involves multiple entities and can take many months to reach wide deployment. Even now, several smartphone vendors have not yet issued software updates to block the DigiNotar certificates. Revocation by software update also takes control of revocation out of the hands of the CAs, where it naturally belongs, and puts it in the hands of software and hardware vendors, who may have less of an incentive to issue a software update every time a certificate is revoked.

An Old Proposal Revisited

In this paper we propose to abandon the existing revocation mechanisms in favor of an old idea [7], [8] — short-lived certificates — that puts revocation control back in the hands of the CAs. Short-lived certificates are far more efficient than CRLs and OCSP, and require no client-side changes, although a minor client change can strengthen the proposal significantly.

A short-lived certificate is identical to a regular certificate, except that its validity period is a short span of time, such as a few days. Such certificates expire quickly and, most importantly, fail closed after expiration on clients, without the need for a revocation mechanism. We suggest a certificate lifetime as short as four days, matching the average length of time that an OCSP response is cached [9].

In our proposal, when a web site purchases a year-long certificate, the CA’s response is a URL that can be used to download on-demand short-lived certificates. The URL remains active for the year, but issues certificates that are valid for only a few days. Every day, a server-side element fetches a new certificate that is active for the next few days. If this fetch fails, the web site is not harmed, since the certificate obtained the previous day remains active for a few more days, giving the administrator and the CA ample time to fix the problem. In this way the burden of validating certificates is taken off the critical path of HTTPS connection establishment and is instead handled offline by the web site. We emphasize that short-lived certificates do not change the communication pattern between clients and web servers. Moreover, since clients typically fail closed when faced with an expired certificate, this approach is far more robust than the existing OCSP- and CRL-based approaches.

Although it is conceptually simple, many issues need to be addressed for this approach to work. First, we hope to use short-lived intermediate certificates, but this requires some additional steps at the CA. Second, we need to ensure that installing a new certificate on a web server does not force a web-server restart. Third, for web sites with many servers, we need an architecture that ensures only one request per day is issued from the site to the CA (as opposed to one request per server). Finally, a small optional change to the client can provide additional defense in depth against attacks on the CA. We discuss these issues in the following sections.

We analyze the security of this proposal in Section V, but briefly mention here that when a web site’s key is compromised by a server-side breach, the certificate can be used for a few days, but becomes worthless once it expires. In real life this is equivalent to, or better than, the security guarantees offered by OCSP — OCSP responses are usually valid for a few days [9], and therefore revocation via OCSP also needs a few days to take effect.

II. BACKGROUND

We begin by surveying the existing standards for removing trust from a valid certificate before its expiration date, and discuss the deficiencies that have caused Chrome to abandon them.

A. Certificate Revocation Lists

One solution for dealing with certificates that go bad is the certificate revocation list (CRL) [1]. When a certificate goes bad, its identifying serial number is published to a list that is signed and timestamped by a certificate authority (CA). To trust a certificate, a user must not only verify the signature and expiration date, but also ensure that the certificate is not listed in any CRL.

For CRLs to be effective, one assumes that (1) up-to-date lists are published frequently by the CA, (2) the most recent [...] intervals, there is a risk of not blocking recently revoked certificates in time.
• Since CRLs themselves can grow to be megabytes in size, clients often employ caching strategies; otherwise, large transfers would be incurred every time a CRL is downloaded. This introduces cache-consistency issues, where a client uses an out-of-date CRL to determine revocation status.
• Browsers have historically been forgiving of revocation failures (a.k.a. “fail open”) so as not to prevent access to popular web sites in case their CAs are unreachable [11]. In practice, they either ignore CRLs by default or do not show clear indications when revocation fails [12]. Unfortunately, this lets a network attacker defeat revocation by simply corrupting revocation requests between the user and the CA.
• It should also be noted that the location of the CRL (indicated by the CRL distribution point extension) is a non-critical component of a certificate description, according to RFC 5280. This means that for certificates without this extension, it is up to the verifier to determine the CRL distribution point itself. If it cannot, CRLs may be ignored [13].

B. Online Certificate Status Protocol

The Online Certificate Status Protocol (OCSP), an alternative to CRLs proposed in RFC 2560 [2], allows client software to obtain current information about a certificate’s validity on a certificate-by-certificate basis. When verifying a certificate, a client sends an OCSP request to an OCSP responder, which responds whether the certificate is valid or not. Typically, clients are instructed to cache the OCSP response for a few days [9]. OCSP responders themselves are updated by CAs as to the status of the certificates they handle.

In theory, it is possible for CAs to issue OCSP responses with short validity periods (since an OCSP response is smaller than a CRL); however, there are many real-world constraints that make this approach infeasible:
• OCSP validation increases client-side latency because verifying a certificate is a blocking operation, requiring a round trip to the OCSP responder to retrieve the revocation status (if no valid response is found in the cache). A previous study [9] indicates that 91.7% of OCSP lookups are costly, taking more than 100 ms to complete, thereby delaying HTTPS session setup.
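As a concrete illustration of the blocking lookup just described, the sketch below models a client that pays the responder round trip only on a cache miss. It is a toy model under invented assumptions: the responder is simulated, the serial numbers are made up, and the fixed four-day cache lifetime stands in for the caching behavior cited from [9].

```python
# Toy model of OCSP checking on the critical path: the client blocks on
# a responder round trip unless a still-valid cached response exists.
# The responder below is simulated; serials and TTL are illustrative.
CACHE_TTL = 4 * 24 * 3600           # responses cached for ~four days [9]
cache = {}                          # serial -> (status, fetched_at)

def ocsp_status(serial, now, responder):
    entry = cache.get(serial)
    if entry is not None and now - entry[1] < CACHE_TTL:
        return entry[0]             # cache hit: no network round trip
    status = responder(serial)      # cache miss: blocking round trip
    cache[serial] = (status, now)
    return status

calls = []                          # record each simulated round trip
def responder(serial):
    calls.append(serial)
    return "good"

ocsp_status(1, now=0, responder=responder)       # miss: one round trip
ocsp_status(1, now=3600, responder=responder)    # hit: served from cache
print(len(calls))                                # 1
```

Only the first lookup for a given serial incurs the round trip; the cost reappears whenever the cached response expires, which is exactly the window a revoked certificate remains trusted.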
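The soft-fail behavior criticized in the CRL discussion above can be captured in a few lines. The function name and serial numbers below are hypothetical; the point is that, to a fail-open client, a blocked revocation lookup is indistinguishable from a clean certificate.

```python
# Minimal model of "fail open" revocation checking: if the CRL (or an
# OCSP response) cannot be fetched, the certificate is accepted anyway.
# Serial numbers are invented for illustration.
def revocation_check(serial, crl, fail_open=True):
    if crl is None:                  # download failed or was blocked
        return fail_open             # browsers have historically accepted here
    return serial not in crl

revoked = {0x1A2B, 0x3C4D}           # parsed from a (possibly stale) CRL
print(revocation_check(0x9999, revoked))   # True: not on the list
print(revocation_check(0x1A2B, revoked))   # False: revoked
# A network attacker who blocks the CRL download defeats revocation:
print(revocation_check(0x1A2B, None))      # True under fail-open
```

Flipping `fail_open` to `False` restores security but makes every CA outage a site outage, which is precisely the trade-off that has kept browsers forgiving.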
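By contrast, the fail-closed property that short-lived certificates rely on is just the ordinary expiration check every existing client already performs. A minimal sketch, assuming the four-day lifetime suggested in the text (the helper name is ours):

```python
from datetime import datetime, timedelta, timezone

# The only check a client needs: is "now" inside the validity window?
# Expired short-lived certificates are rejected by every existing
# client, with no CRL or OCSP lookup involved.
def is_within_validity(not_before, not_after, now=None):
    now = now or datetime.now(timezone.utc)
    return not_before <= now <= not_after

issued = datetime.now(timezone.utc) - timedelta(days=1)
expiry = issued + timedelta(days=4)          # four-day lifetime

print(is_within_validity(issued, expiry))    # True
# Five days after issuance the certificate fails closed:
print(is_within_validity(issued, expiry, now=issued + timedelta(days=5)))  # False
```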
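The daily server-side fetch described in the proposal, including its tolerance of a failed fetch, can be sketched as below. This is our illustrative reconstruction, not the authors' Apache plugin: the class and function names are invented, and the CA endpoint is simulated by a callable.

```python
from datetime import datetime, timedelta, timezone

class CertRenewer:
    """Illustrative server-side renewal element: once a day it asks the
    CA-provided URL for a fresh short-lived certificate; on failure it
    keeps serving the previous one, which is valid a few more days."""

    def __init__(self, fetch, lifetime=timedelta(days=4)):
        self.fetch = fetch            # callable returning the new cert
        self.lifetime = lifetime
        self.cert = None
        self.not_after = datetime.min.replace(tzinfo=timezone.utc)

    def daily_tick(self, now):
        try:
            self.cert = self.fetch()  # e.g. an HTTPS GET of the CA URL
            self.not_after = now + self.lifetime
        except OSError:
            pass                      # CA unreachable: keep the old cert
        # Serve the certificate only while it is inside its window.
        return self.cert if now <= self.not_after else None

# Simulate a CA that answers on day 0 and is unreachable afterwards.
day0 = datetime(2012, 1, 1, tzinfo=timezone.utc)
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] > 1:
        raise OSError("CA unreachable")
    return "CERT-PEM"

renewer = CertRenewer(flaky_fetch)
for d in range(4):                    # days 0-3: the site keeps working
    print(renewer.daily_tick(day0 + timedelta(days=d)))   # CERT-PEM
# Day 5: the last good certificate has expired, so we fail closed.
print(renewer.daily_tick(day0 + timedelta(days=5)))       # None
```

The window between a failed fetch and expiry is the "ample time to fix the problem" the text refers to: a transient CA outage never takes the site down immediately.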