An Architecture to Repair and Strengthen Certificate-Based Authentication

Total Pages: 16

File Type: PDF, Size: 1020 KB

TrustBase: An Architecture to Repair and Strengthen Certificate-based Authentication

Mark O'Neill, Scott Heidbrink, Scott Ruoti, Jordan Whitehead, Dan Bunker, Kent Seamons, Daniel Zappala
Brigham Young University
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

arXiv:1610.08570v1 [cs.CR] 26 Oct 2016

Abstract—We describe TrustBase, an architecture that provides certificate-based authentication as an operating system service. TrustBase enforces best practices for certificate validation for all applications and transparently enables existing applications to be strengthened against failures of the CA system. The TrustBase system allows simple deployment of authentication systems that harden the CA system. This enables system administrators, for example, to require certificate revocation checks on all TLS connections, or require STARTTLS for email servers that support it. TrustBase is the first system that is able to secure all TLS traffic, using an approach compatible with all operating systems. We design and evaluate a prototype implementation of TrustBase on Linux, evaluate its security, and demonstrate that it has negligible overhead and universal compatibility with applications. To demonstrate the utility of TrustBase, we have developed six authentication services that strengthen certificate validation for all applications.

I. INTRODUCTION

Server authentication on the Internet currently relies on the certificate authority (CA) system. While establishing a secure SSL/TLS session with a server, clients receive a digital certificate that vouches for the identity of the server. This certificate is signed by a certificate authority, and the signature is validated against a list of certificate authorities that is shipped with the client or operating system. This provides assurance that the client is connected to a legitimate server and not one controlled by an attacker.

Unfortunately, certificate validation is challenged by three significant problems:

• Applications frequently do not properly validate the server's certificate [20], [17], [6]. This is caused by failure to use validation functions, incorrect usage of libraries, and also developers who disable validation during development and forget to enable it upon release [18]. These implementation mistakes deceive the user into believing her session is secure, while leaving the application vulnerable to man-in-the-middle (MITM) attacks.

• The CA system is vulnerable to being hijacked even when applications are implemented correctly. This is largely due to the fact that most CAs are able to sign certificates for any host, reducing the strength of the CA system to that of the weakest CA [12]. This weakness was exploited in 2011 when DigiNotar's servers were hacked and more than 500 certificates were fabricated by the intruder, including a certificate for Gmail that allowed the intruder to access stored email for 300,000 Iranians [27]. This happened despite the fact that Gmail does not use DigiNotar to sign its certificates. This problem is exacerbated by CAs that do not follow best practices [34], [11] and governmental ownership and access to CAs [13], [42].

• Improvements to the CA system have difficulty being widely deployed. Protocols to enhance the certificate validation process have been developed, but the majority of applications have not yet integrated these improvements. Even relatively simple fixes, such as certificate revocation, are beset with problems [31]. There is no widely-adopted platform assisting the deployment of improvements to the CA system, meaning researchers and developers have to individually modify applications to make advances, which severely limits their deployment.

In this paper, we introduce TrustBase, an architecture for certificate authentication that solves each of these three problems. TrustBase is designed to (1) secure existing applications, (2) strengthen the CA system, and (3) provide simple deployment of improved authentication systems.

To address these problems, TrustBase complements the existing certificate validation performed by applications with additional validation methods that are enforced by the operating system. This approach has several advantages. First, it protects against insecure applications—too much evidence shows that developers make mistakes, and this will only get more difficult as additional authentication methods such as Certificate Transparency [29], [41], pinning [16], [33], and DANE [22] become more widely available. Second, it transparently enables existing applications to be strengthened against failures of the CA system. For example, a browser that validates the EV cert of a bank is doing the best it currently can, but it is still vulnerable to a CA that is hacked, allowing a man-in-the-middle (MITM) to present fake but valid certificates. TrustBase provides complementary authentication services that protect against these situations, such as checking notaries to ensure other hosts across the Internet are seeing the same certificate for the bank (certificates for a notary service can be pinned in advance, so they are not vulnerable to the MITM). Third, it enables a system administrator to ensure best security practices are properly followed. TrustBase allows the administrator for an organization to enforce consistent certificate validation across all applications, such as requiring revocation checking or mandating that pinning is used to protect against MITM attacks. TrustBase provides administrators with a choice of authentication services that can be used to harden the CA system.

We identify a number of design goals for TrustBase, including full application coverage, universal deployment, and negligible overhead. To meet these goals, TrustBase uses traffic interception between the socket layer and the transport layer, an approach that is general enough to work on all major operating systems, both desktop and mobile. TrustBase detects the initiation of TLS connections, finds the handshake information, validates the server's certificate using a variety of configurable authentication systems, and then allows or blocks the connection based on the results of this additional validation. This allows TrustBase to harden the certificate validation of all applications, regardless of which libraries they use. For new or modified applications, TrustBase provides a simple certificate validation API that can be called directly.

Our contributions include:

• An architecture for enforcing best practices for certificate validation on all applications: TrustBase requires standard certificate validation procedures and optionally adds additional authentication services, both of which are enforced by the operating system and controlled by the administrator. This repairs broken validation for poorly-written applications and optionally strengthens the validation done by well-written applications. Applications are allowed to be more strict, but not less strict than TrustBase in their validation decisions.

• Simplified deployment of authentication systems that strengthen the CA system: TrustBase provides a plugin API and a policy engine that dynamically loads authentication systems based on administrator preferences. They can be written in either C or Python, with the ability to add support for additional languages. The administrator can use policies to define how multiple authentication systems cooperate, for example using unanimous consent or threshold voting.

• A research prototype of TrustBase for Linux: We demonstrate the ability of TrustBase to fix applications that do not validate hostnames or skip certificate validation altogether.

• An evaluation of TrustBase: We evaluate the TrustBase prototype for performance, compatibility, and utility. (1) We show that TrustBase has negligible performance overhead, with no measurable impact on latency. (2) We demonstrate that TrustBase enforces correct certificate validation on all popular Linux libraries and tools and on the most popular Android applications. (3) We describe six authentication services that we have developed and report on how simple and straightforward it was to develop these services in TrustBase.

II. RELATED WORK

The typical goal of an attacker who wishes to subvert TLS is to perform a MITM attack on unsuspecting clients, substituting his own certificate in place of the original. In this section we discuss flawed implementations, weaknesses of the CA system that make these attacks possible, and various alternatives to the CA system that have been developed. We then discuss related projects that try to address the problems affecting applications and the CA system.

A. Flawed Implementations

Several studies demonstrate widespread vulnerabilities in client-side validation of certificates, in both mobile and desktop environments. Georgiev et al. [20] discovered that many critical software applications outside the browser rely on completely broken TLS validation libraries, primarily due to poorly designed APIs. Brubaker et al. [6] conducted a large-scale experiment of TLS implementations using millions of "frankencerts", random certificates generated from portions of real certificates. They discovered hundreds of flaws in popular TLS libraries and browsers. Fahl et al. [17] analyzed 13,500 popular Android apps and determined that 8% were vulnerable to TLS MITM attacks. Lucky et al. [37] experimented with 100 popular Android apps and found that several accept all certificates and all hostnames, along with other unsafe practices.
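The plugin and policy-engine design described in the introduction lends itself to a short illustration. The sketch below is not the actual TrustBase API; it is a minimal Python mock-up (the plugin names, verdict values, and aggregate function are invented for illustration) of how a policy engine could combine plugin verdicts under the unanimous-consent and threshold-voting policies the paper mentions.

    # Hypothetical sketch, not the TrustBase plugin API: aggregating verdicts
    # from several authentication plugins under two example policies.
    from dataclasses import dataclass
    from typing import Callable, List

    VALID, INVALID, ABSTAIN = "valid", "invalid", "abstain"

    @dataclass
    class Plugin:
        name: str
        check: Callable[[str, bytes], str]   # (hostname, cert chain) -> verdict

    def aggregate(plugins: List[Plugin], host: str, chain: bytes,
                  policy: str = "unanimous", threshold: int = 2) -> bool:
        """Return True if the connection should be allowed under the given policy."""
        votes = [p.check(host, chain) for p in plugins]
        approvals = sum(v == VALID for v in votes)
        rejections = sum(v == INVALID for v in votes)
        if policy == "unanimous":
            # every plugin that voted must approve, and at least one must vote
            return rejections == 0 and approvals > 0
        if policy == "threshold":
            # assumed semantics: no explicit rejection and enough approvals
            return rejections == 0 and approvals >= threshold
        raise ValueError("unknown policy: " + policy)

    plugins = [
        Plugin("ca_validation", lambda host, chain: VALID),   # stand-in for standard CA checks
        Plugin("notary_stub", lambda host, chain: ABSTAIN),   # a real notary would compare observations
    ]
    print(aggregate(plugins, "bank.example", b"<DER bytes>", policy="unanimous"))  # True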
Recommended publications
  • Email Transport Encryption STARTTLS Vs. DANE Vs. MTA-STS
    Email Transport Encryption STARTTLS vs. DANE vs. MTA-STS. Delimitation: This document deals with the encrypted transport of messages between two email servers. Encryption during transport is crucial for a basic level of security for the exchange of messages. The exchange of messages between an email client and a server, or the end-to-end encryption of messages, is not covered in this article. If the aim is to create a secure overall system, these aspects should be considered in addition to the recommendations in this article. Initial situation: When the first standard for the Simple Mail Transfer Protocol (in short: SMTP) was adopted, encryption of the transport was not included. All messages were exchanged as plain text between the servers. They were thus basically readable for anyone who could tap into the data exchange between the two servers. This only changed with the introduction of the SMTP Service Extensions for Extended SMTP (ESMTP for short), which made it possible to choose encrypted transport as an opportunistic feature of data exchange. Opportunistic, because it had to be assumed that not all servers would be able to handle transport encryption. They should only encrypt when it is opportune; encryption is not mandatory. To indicate the basic ability to encrypt to a sending server, it was agreed that the receiving server should send the keyword STARTTLS at the beginning of an ESMTP transport session. STARTTLS: STARTTLS can encrypt the transport between two email servers. The receiving server signals to the sending server that it is capable of encryption after a connection is established, within the ESMTP session.
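As a concrete illustration of the STARTTLS exchange described above, here is a minimal sketch using Python's standard smtplib module; the host name is a placeholder, and real mail servers perform the same dance with each other during delivery.

    # Minimal, opportunistic STARTTLS client sketch (placeholder host name).
    import smtplib
    import ssl

    with smtplib.SMTP("mail.example.org", 25, timeout=10) as smtp:
        smtp.ehlo()                          # ask which ESMTP extensions the server offers
        if smtp.has_extn("starttls"):        # server advertised the STARTTLS keyword
            context = ssl.create_default_context()
            smtp.starttls(context=context)   # upgrade the existing connection to TLS
            smtp.ehlo()                      # re-issue EHLO over the encrypted channel
        else:
            # Opportunistic behaviour: without STARTTLS the session simply
            # continues in plain text, exactly the fallback described above.
            pass
        # ... send mail here ...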
  • MTA-STS: Improving Email Security
    Improving Email Security with the MTA-STS Standard. By Brian Godiksen. An Email Best Practices Whitepaper. CONTENTS: Executive Overview (03); Why Does Email Need Encryption in Transit? (04); The Problem with "Opportunistic Encryption" (07); The Anatomy of a Man-in-the-Middle Attack (08); The Next Major Step with Email Encryption: MTA-STS (10); What Steps Should Senders Take to Adopt MTA-STS? (11); About SocketLabs (12). Brian has been helping organizations optimize email deliverability since joining SocketLabs in 2011. He currently manages a team of deliverability analysts that consult with customers on best infrastructure practices, including email authentication implementation, bounce processing, IP address warm-up, and email marketing list management. Brian leads the fight against spam and email abuse at SocketLabs by managing compliance across the platform. He is an active participant in key industry groups such as M3AAWG and the Email Experience Council. You can read more of Brian's content on the SocketLabs website. ©2019 SocketLabs. Executive Overview: The Edward Snowden leaks of 2013 opened many people's eyes to the fact that mass surveillance was possible by intercepting and spying on email transmissions. Today, compromised systems, database thefts, and technology breaches remain common fixtures in news feeds around the world. As a natural response, the technology industry is intensely focused on improving the security and encryption of communications across all platforms. Since those early days of enlightenment, industry experts have discussed and attempted a variety of new strategies to combat "pervasive monitoring" of email channels. While pervasive monitoring assaults can take many forms, the most prominent forms of interference were man-in-the-middle (MitM) attacks.
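As background for the MTA-STS topic this whitepaper previews, the sketch below shows roughly how a sending system might fetch a domain's MTA-STS policy from the RFC 8461 well-known URL, using only the Python standard library. The domain is a placeholder, and a real implementation would also check the _mta-sts DNS TXT record and cache the policy.

    # Rough sketch: fetch and parse an MTA-STS policy (RFC 8461 well-known URL).
    import urllib.request

    def fetch_mta_sts_policy(domain: str) -> dict:
        url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
        with urllib.request.urlopen(url, timeout=10) as resp:
            text = resp.read().decode("utf-8")
        policy = {}
        for line in text.splitlines():
            if ":" in line:
                key, value = line.split(":", 1)
                policy.setdefault(key.strip(), []).append(value.strip())
        # e.g. {"version": ["STSv1"], "mode": ["enforce"], "mx": [...], "max_age": [...]}
        return policy

    # print(fetch_mta_sts_policy("example.com"))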
  • HTTP2 Explained
    HTTP2 Explained. Daniel Stenberg, Mozilla Corporation, [email protected]. ABSTRACT: A detailed description explaining the background and problems with current HTTP that have led to the development of the next generation HTTP protocol: HTTP 2. It also describes and elaborates around the new protocol design and functionality, including some implementation specifics and a few words about the future. This article is an editorial note submitted to CCR. It has NOT been peer reviewed. The author takes full responsibility for this article's technical content. Comments can be posted through CCR Online. Keywords: HTTP 2.0, security, protocol. 1. Background: This is a document describing http2 from a technical and protocol level. It started out as a presentation I did in Stockholm in April 2014. I've since gotten a lot of questions about the contents of that presentation from people who couldn't attend, so I decided to convert it into a full-blown document with all details and proper explanations. Credits to everyone who helps out! I hope to make this document better over time. This document is available at http://daniel.haxx.se/http2. 1.3 License: This document is licensed under the Creative Commons Attribution 4.0 license: http://creativecommons.org/licenses/by/4.0/. 2. HTTP Today: HTTP 1.1 has turned into a protocol used for virtually everything on the Internet. Huge investments have been made in protocols and infrastructure that take advantage of this. This is taken to the extent that it is often easier today to make things run on top of HTTP rather than building something new on its own. 2.1 HTTP 1.1 is Huge: When HTTP was created and thrown out into the world it was probably perceived as a rather simple and straight-forward protocol.
  • Opportunistic Keying As a Countermeasure to Pervasive Monitoring
    Opportunistic Keying as a Countermeasure to Pervasive Monitoring. Stephen Kent, BBN Technologies. Abstract: This document was prepared as part of the IETF response to concerns about "pervasive monitoring" (PM) [Farrell-pm]. It begins by exploring terminology that has been used in IETF standards (and in academic publications) to describe encryption and key management techniques, with a focus on authentication and anonymity. Based on this analysis, it proposes a new term, "opportunistic keying", to describe a goal for IETF security protocols, in response to PM. It reviews key management mechanisms used in IETF security protocol standards, also with respect to these properties. The document explores possible impediments to and potential adverse effects associated with deployment and use of techniques that would increase the use of encryption, even when keys are distributed in an unauthenticated manner. 1. What's in a Name (for Encryption)? Recent discussions in the IETF about pervasive monitoring (PM) have suggested a desire to increase use of encryption, even when the encrypted communication is unauthenticated. The term "opportunistic encryption" has been suggested as a term to describe key management techniques in which authenticated encryption is the preferred outcome, unauthenticated encryption is an acceptable fallback, and plaintext (unencrypted) communication is an undesirable (but perhaps necessary) result. This mode of operation differs from the options commonly offered by many IETF security protocols, in which authenticated, encrypted communication is the desired outcome, but plaintext communication is the fallback. The term opportunistic encryption (OE) was coined by Michael Richardson in "Opportunistic Encryption using the Internet Key Exchange (IKE)", an Informational RFC [RFC4322].
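The preference ordering Kent describes (authenticated encryption, then unauthenticated encryption, then plaintext as a last resort) can be sketched as a toy Python client. This is purely illustrative, not an IETF-specified mechanism; the host and port are placeholders.

    # Toy client: authenticated TLS > unauthenticated TLS > plaintext.
    import socket
    import ssl

    def opportunistic_connect(host: str, port: int = 443):
        try:
            ctx = ssl.create_default_context()              # verifies certificate and hostname
            return ctx.wrap_socket(socket.create_connection((host, port), timeout=10),
                                   server_hostname=host), "authenticated"
        except ssl.SSLCertVerificationError:
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE                 # encrypt, but do not authenticate
            return ctx.wrap_socket(socket.create_connection((host, port), timeout=10),
                                   server_hostname=host), "unauthenticated"
        except (ssl.SSLError, OSError):
            return socket.create_connection((host, port), timeout=10), "plaintext"

    # sock, level = opportunistic_connect("example.com")
    # print(level)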
  • Software-Defined Networking: Improving Security for Enterprise and Home Networks
    Worcester Polytechnic Institute, Digital WPI, Doctoral Dissertations (All Dissertations, All Years), Electronic Theses and Dissertations, 2017-04-24. Software-defined Networking: Improving Security for Enterprise and Home Networks. Curtis Robin Taylor, Worcester Polytechnic Institute. Follow this and additional works at: https://digitalcommons.wpi.edu/etd-dissertations Repository Citation: Taylor, C. R. (2017). Software-defined Networking: Improving Security for Enterprise and Home Networks. Retrieved from https://digitalcommons.wpi.edu/etd-dissertations/161 This dissertation is brought to you for free and open access by Digital WPI. It has been accepted for inclusion in Doctoral Dissertations (All Dissertations, All Years) by an authorized administrator of Digital WPI. For more information, please contact [email protected]. Software-defined Networking: Improving Security for Enterprise and Home Networks, by Curtis R. Taylor. A Dissertation Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science, May 2017. APPROVED: Professor Craig A. Shue, Dissertation Advisor; Professor Craig E. Wills, Head of Department; Professor Mark Claypool, Committee Member; Professor Thomas Eisenbarth, Committee Member; Doctor Nathanael Paul, External Committee Member. Abstract: In enterprise networks, all aspects of the network, such as placement of security devices and performance, must be carefully considered. Even with forethought, network operators are ultimately unaware of intra-subnet traffic. The inability to monitor intra-subnet traffic leads to blind spots in the network where compromised hosts have unfettered access to the network for spreading and reconnaissance. While network security middleboxes help to address compromises, they are limited in only seeing a subset of all network traffic that traverses routed infrastructure, which is where middleboxes are frequently deployed.
  • Ten Strategies of a World-Class Cybersecurity Operations Center
    Ten Strategies of a World-Class Cybersecurity Operations Center conveys MITRE's accumulated expertise on enterprise-grade computer network defense. It covers ten key qualities of leading Cybersecurity Operations Centers (CSOCs), ranging from their structure and organization, to processes that best enable effective and efficient operations, to approaches that extract maximum value from CSOC technology investments. This book offers perspective and context for key decision points in structuring a CSOC and shows how to: • Find the right size and structure for the CSOC team • Achieve effective placement within a larger organization that enables CSOC operations • Attract, retain, and grow the right staff and skills • Prepare the CSOC team, technologies, and processes for agile, threat-based response • Architect for large-scale data collection and analysis with a limited budget • Prioritize sensor placement and data feed choices across enterprise systems, enclaves, networks, and perimeters. If you manage, work in, or are standing up a CSOC, this book is for you. It is also available on MITRE's website, www.mitre.org. The MITRE Corporation is a not-for-profit organization that operates federally funded research and development centers (FFRDCs). FFRDCs are unique organizations that assist the U.S. government with scientific research and analysis, development and acquisition, and systems engineering and integration. We're proud to have served the public interest for more than 50 years.
  • How Can We Protect the Internet Against Surveillance?
    How can we protect the Internet against surveillance? Seven TODO items for users, web developers and protocol engineers. Peter Eckersley, [email protected]. Okay, so everyone is spying on the Internet. It's not just the NSA... lots of governments are in this game! Not to mention the commercial malware industry. These guys are fearsome, octopus-like adversaries. Does this mean we should just give up? No. Reason 1: some people can't afford to give up. Reason 2: there is a line we can hold. So, how do we get there? TODO #1: Users should maximise their own security. Make sure your OS and browser are patched! Use encryption where you can! In your browser, install HTTPS Everywhere (https://eff.org/https-everywhere). For instant messaging, use OTR (easiest with Pidgin or Adium, but be aware of the exploit risk tradeoff). For confidential browsing, use the Tor Browser Bundle. Other tools to consider: TextSecure for SMS, PGP for email (UX is terrible!), SpiderOak etc. for cloud storage. Lots of new things are in the pipeline. TODO #2: Run an open wireless network! (openwireless.org) How to do this securely right now? Chain your WPA2 network on a router below your open one. TODO #3: Site operators... deploy SSL/TLS/HTTPS, and DEPLOY IT CORRECTLY! This, miserably, is a lot harder than it should be. TLS/SSL authentication attacks are usually specialist, narrowly targeted attacks (but that's several entire other talks... we're working on making HTTPS more secure, easier and saner!). In the meantime, here's what you need: a valid certificate, HTTPS by default, secure cookies, no "mixed content", Perfect Forward Secrecy, and a well-tuned configuration. How do I make HTTPS the default? Firefox and Chrome: redirect, set the HSTS header. Safari and IE: sorry, you can't (!!!). What's a secure cookie? Go and check your site right now...
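For the "HTTPS by default" and HSTS items in the slide text above, here is a minimal sketch using Flask (an arbitrary choice for illustration; the talk does not prescribe a framework or these handler names): redirect plain HTTP to HTTPS and attach the Strict-Transport-Security header to every response.

    # Illustrative sketch only; requires the third-party Flask package.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def force_https():
        if not request.is_secure:
            # 301 so browsers remember the HTTPS location
            return redirect(request.url.replace("http://", "https://", 1), code=301)

    @app.after_request
    def add_hsts(response):
        # Tell browsers to use HTTPS only for the next year, including subdomains
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response

    @app.route("/")
    def index():
        return "hello over HTTPS"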
  • A Systematic Study of Cache Side Channels Across AES Implementations
    A Systematic Study of Cache Side Channels across AES Implementations. Heiko Mantel (1), Alexandra Weber (1), and Boris Köpf (2). (1) Computer Science Department, TU Darmstadt, Darmstadt, Germany, [email protected], [email protected]; (2) IMDEA Software Institute, Madrid, Spain, [email protected]. Abstract: While the AES algorithm is regarded as secure, many implementations of AES are prone to cache side-channel attacks. The lookup tables traditionally used in AES implementations for storing precomputed results provide speedup for encryption and decryption. How such lookup tables are used is known to affect the vulnerability to side channels, but the concrete effects in actual AES implementations are not yet sufficiently well understood. In this article, we analyze and compare multiple off-the-shelf AES implementations with respect to their vulnerability to cache side-channel attacks. By applying quantitative program analysis techniques in a systematic fashion, we shed light on the influence of implementation techniques for AES on cache-side-channel leakage bounds. 1 Introduction: The Advanced Encryption Standard (AES) is a widely used symmetric cipher that is approved by the U.S. National Security Agency for security-critical applications [8]. While traditional attacks against AES are considered infeasible as of today, software implementations of AES are known to be highly susceptible to cache side-channel attacks [5, 15, 18, 19, 32]. While such side channels can be avoided by bitsliced implementations [6, 23], lookup-table-based implementations, which aim at better performance, are often vulnerable and widespread. To understand the vulnerability to cache side-channel attacks, recall that the 128-bit version of AES relies on 10 rounds of transformations.
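A toy model (in the spirit of classic first-round analyses, not the paper's own quantitative method) makes the leakage concrete: in a table-based implementation, the first-round lookup index for byte i is plaintext[i] XOR key[i], so the set of cache lines touched depends directly on key bytes. The 64-byte line size and 4-byte entry size below are illustrative assumptions.

    # Toy model of first-round T-table accesses in table-based AES.
    import secrets

    ENTRIES_PER_LINE = 64 // 4  # assumed 64-byte cache lines, 4-byte table entries

    def first_round_lines(plaintext: bytes, key: bytes) -> set:
        assert len(plaintext) == len(key) == 16
        # first-round index for byte i is plaintext[i] XOR key[i]
        return {(p ^ k) // ENTRIES_PER_LINE for p, k in zip(plaintext, key)}

    key = secrets.token_bytes(16)
    # An attacker who can observe which lines were loaded for chosen plaintexts
    # learns the high bits of p XOR k, i.e. the high nibble of each key byte.
    for _ in range(2):
        pt = secrets.token_bytes(16)
        print(sorted(first_round_lines(pt, key)))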
  • The BEAST Wins Again: Why TLS Keeps Failing to Protect HTTP
    The BEAST Wins Again: Why TLS Keeps Failing to Protect HTTP. Antoine Delignat-Lavaud, Inria Paris. Joint work with K. Bhargavan, C. Fournet, A. Pironti, P.-Y. Strub. INTRODUCTION. Outline: Introduction; Cookie Cutter; Virtual Host Confusion; Crossing Origin Boundaries; Shared Session Cache; Shared Reverse Proxies; SPDY Connection Pooling; Triple Handshake; Conclusion. Why do we need TLS? 1. Authentication – must be talking to the right guy. 2. Integrity – our messages cannot be tampered with. 3. Confidentiality – messages are only legible to participants. 4. Privacy? – can't tell who we are and what we talk about. Authentication and integrity defend against active attacks (MitM), while confidentiality and privacy defend against passive attacks (wiretapping). What websites expect of TLS: A web attacker controls malicious websites; the user visits honest and malicious sites in parallel; web/MitB attacks include CSRF, XSS, and redirection. A network attacker captures (passive) and tampers with (active) packets, and is strictly stronger. What websites expect of TLS: if a website W served over HTTP is secure against a web attacker, then serving W over HTTPS makes it secure against a network attacker.
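To make the "secure cookie" notion touched on in this deck concrete: it is a cookie carrying the Secure attribute (sent only over HTTPS), ideally together with HttpOnly and SameSite. A small sketch with Python's standard http.cookies module (attribute values are illustrative):

    # Building a cookie with the Secure, HttpOnly, and SameSite attributes.
    from http import cookies

    jar = cookies.SimpleCookie()
    jar["session"] = "opaque-random-value"
    jar["session"]["secure"] = True      # never sent over plain HTTP
    jar["session"]["httponly"] = True    # not readable from JavaScript
    jar["session"]["samesite"] = "Lax"   # limits cross-site sending (Python 3.8+)

    print(jar["session"].OutputString())
    # e.g. session=opaque-random-value; Secure; HttpOnly; SameSite=Lax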
  • The Danger of the New Internet Choke Points
    The Danger of the New Internet Choke Points. Authored by: Andrei Robachevsky, Christine Runnegar, Karen O'Donoghue and Mat Ford. FEBRUARY 2014. Introduction: The ongoing disclosures of pervasive surveillance of Internet users' communications and data by national security agencies have prompted protocol designers, software and hardware vendors, as well as Internet service and content providers, to re-evaluate prevailing security and privacy threat models and to refocus on providing more effective security and confidentiality. At IETF 88, there was consensus to address pervasive monitoring as an attack and to consider the pervasive attack threat model when designing a protocol. One area of work currently being pursued by the IETF is the viability of more widespread encryption. While there are some who believe that widely deployed encryption with strong authentication should be used extensively, many others believe that there are practical obstacles to this approach, including a general lack of reasonable tools and user understanding as to how to use the technology, plus significant obstacles to scaling infrastructure and services using existing technologies. As a result, the discussion within the IETF has principally focused on opportunistic encryption and weak authentication. "Weak authentication" means cryptographically strong authentication between previously unknown parties without relying on trusted third parties. In certain contexts, and by using certain techniques, one can achieve the desired level of security (see, for instance, Arkko and Nikander, "Weak Authentication: How to Authenticate Unknown Principals without Trusted Parties", Security Protocols Workshop, volume 2845 of Lecture Notes in Computer Science, pages 5-19, Springer, 2002). "Opportunistic encryption" refers to encryption without authentication. It is a mode of protocol operation where the content of the communication is secure against passive surveillance, but there is no guarantee that the endpoints are reliably identified.
  • Searching for Trust
    SEARCHing for Trust. Scott Rea (1), TBA GOV (2), TBA EDU (3). (1) DigiCert Inc, Lindon, UT, U.S.A., [email protected]; (2) Gov Agency, TBA, Australia, [email protected]; (3) University or Corporation, TBA, Australia, [email protected]. ABSTRACT: The security of the X.509 "oligarchy" Public Key Infrastructure for browsers and SSL web servers is under scrutiny in response to Certification Authority (CA) compromises which resulted in the circulation of fraudulent certificates. These rogue certificates can and have been used to execute Man-in-the-Middle attacks and gain access to users' sensitive information. In the wake of these events, there has been a call for change, to the extent of either securing the current system or altogether replacing it with an alternative design. This panel will review the results of the research paper to be published that will explore the following proposals which have been put forth to replace or improve the CA system with the goal of aiding in the prevention and detection of MITM attacks and improving the trust infrastructure: Convergence, Perspectives, Mutually Endorsed Certification Authority Infrastructure (MECAI), DNS-Based Authentication of Named Entities (DANE), DNS Certification Authority Authorization (CAA) Resource Records, Public Key Pinning, Sovereign Keys, and Certificate Transparency. In the paper, a new metric is described that ranks each proposal according to a set of well-identified criteria and gives readers an idea of the costs and benefits of implementing the proposed system and the potential strengths and weaknesses of the design. The results of the research and the corresponding impacts for eResearchers and Government collaborators will be discussed by the panel.
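One of the mechanisms in the list above, CAA resource records, can be inspected with a few lines of Python. This sketch assumes the third-party dnspython package and a placeholder domain; any DNS library would work similarly.

    # Sketch: list which CAs a domain authorizes via CAA records.
    import dns.resolver  # pip install dnspython

    def caa_records(domain: str):
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [(r.flags, r.tag.decode(), r.value.decode()) for r in answers]

    # for flags, tag, value in caa_records("example.com"):
    #     print(flags, tag, value)   # e.g. 0 issue "letsencrypt.org"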
  • Whither Deprecating TCP-MD5? a Light Dose of Reality Vs
    Deprecating MD5 for LDP, draft-nslag-mpls-deprecate-md5-00.txt. The IETF MPLS and PALS WG Chairs. Our Problem: • Control plane protocols are often carried over simple transport layers such as UDP or TCP. • Control planes are good targets for attack, and their disruption or subversion can have serious operational consequences. • TCP RST attacks against BGP routers were the original motivation for RFC 2385, TCP-MD5. • LDP runs over TCP. • It currently uses TCP-MD5 for authentication, which is no longer considered secure (see RFC 5925). • This is frequently pointed out to us when our documents go to the IESG for publication. Small survey among operators and vendors – I: • The survey was totally unscientific, and just a small number of vendors and operators were asked. Questions could be better formulated. • Operators were asked: If TCP-AO were available in products, would you use it? Are you planning to deploy it? • Vendors were asked: Do you have TCP-AO? • We will consider making a bigger and more scientific survey to send out to "everybody". Small survey among operators and vendors – II: • Operators answered: No plan to deploy TCP-AO as long as vendors support their MD5 implementations. Very few authenticated LDP sessions. There is a cost to deploy TCP-AO. • Vendors answered: No, we don't have TCP-AO in our products. One vendor said that it will be available later this year. We will not implement it until we hear from the operators that they need it. What we need: • A security suite that is more secure than MD5 when used over the long-lived sessions that support routing.