Security Usability Fundamentals

An important consideration when you’re building an application is the usability of the security features that you’ll be employing. Security experts frequently lament that security has been bolted onto applications as an afterthought, but the security community has committed the exact same sin in reverse, placing usability considerations in second place behind security, if they were considered at all. As a result, we spent the 1990s building and deploying security that wasn’t really needed, and now that we’re experiencing widespread phishing attacks, with viruses and worms running rampant, and the security is actually needed, we’re finding that no-one can use it.

To understand the problem, it’s necessary to go back to the basic definitions of functionality and security. An application exhibits functionality if things that are supposed to happen, do happen. Similarly, an application exhibits security if things that aren’t supposed to happen, don’t happen. Security developers are interested in the latter; marketers and management tend to be more interested in the former.

Ensuring that things that aren’t supposed to happen don’t happen can be approached from both the application side and the user side. From the application side, the application should behave in a safe manner, defaulting to behaviour that protects the user from harm (a short sketch of this principle appears at the end of this introduction). From the user side, the application should act in a manner that meets the user’s expectations of a safe user experience. The following sections look at some of the issues that face developers trying to create a user interface for a security application.

Security (Un-)Usability

Before you start thinking about potential features of your security user interface, you first need to consider the environment into which it’ll be deployed. Now that we have 10-15 years of experience in (trying to) deploy Internet security, we can see, both from hindsight and because in the last few years people have actually started testing the usability of security applications, that a number of mechanisms that were expected to Solve The Problem don’t really work in practice [1]. The idea behind security technology is to translate a hard problem (secure/safe communication and storage) into a simpler problem, not just to shift the complexity from one layer to another. This is an example of Fundamental Truth No.6 of the Twelve Networking Truths, “It is easier to move a problem around than it is to solve it” [2]. Security user interfaces are usually driven by the underlying technology, which means that they often just shift the problem from the technical level to the human level. Some of the most awkward technologies not only shift the complexity but add an extra level of complexity of their own (IPsec and PKI spring to mind).

Figure 1: Blaming the user for security unusability

The major lesson that we’ve learned from the history of security (un-)usability is that technical solutions like PKI and access control don’t align too well with usability conceptual models. As a result, calling in the usability people after the framework of the application’s user interface has been set in concrete by purely technology-driven considerations is doomed to failure, since the user interface will be forced to conform to the straitjacket constraints imposed by the security technology rather than being able to exploit the full benefits of years of usability research and experience. Blaming security problems on the user when they’re actually caused by the user interface design (Figure 1) is equally ineffective. This chapter covers some of the issues that affect security user interfaces, and looks at various problems that you’ll have to deal with if you want to create an effective user interface for your security application.
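To make the “defaulting to safe behaviour” point above concrete, here is a minimal sketch in Python. The open_connection() helper and its parameters are invented for illustration rather than taken from any particular product; the point is simply that a caller who does nothing gets the protective behaviour, and switching it off requires a loud, explicit step at the call site.

import socket
import ssl

def open_connection(host: str, port: int = 443, *, verify: bool = True) -> ssl.SSLSocket:
    """Open a TLS connection, verifying the server certificate by default."""
    context = ssl.create_default_context()    # hostname and certificate checks on
    if not verify:
        # The caller has deliberately opted out of safety; the decision is
        # visible in the code rather than being a silent default.
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)

# A caller who does nothing gets the safe behaviour:
#   conn = open_connection("example.com")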
Theoretical vs. Effective Security

There can be a significant difference between theoretical and effective security. In theory, we should all be using smart cards and PKI for authentication. However, these measures are so painful to deploy and use that they’re almost never employed, making them far less effectively secure than basic usernames and passwords. Security experts tend to focus exclusively on the measures that provide the best (theoretical) security, but often these measures provide very little effective security because they end up being misused, turned off, or bypassed. Worse yet, when they focus only on the theoretically perfect measures, they don’t even try to get lesser security measures right.

For example, passwords are widely decried as being insecure, but this is mostly because security protocol designers have chosen to make them insecure. Both SSL and SSH, the two largest users of passwords for authentication, will connect to anything claiming to be a server and then hand over the password in plaintext after the handshake has completed. No attempt is made to provide even the most trivial protection through some form of challenge/response protocol, because everyone knows that passwords are insecure and so it isn’t worth bothering to try to protect them.

This problem is exemplified by the IPsec protocol, which after years of discussion still doesn’t have any standardised way to authenticate users based on simple mechanisms like one-time passwords or password-token cards. The IETF even chartered a special working group, IPSRA (IPsec Remote Access), for this purpose. The group’s milestone list calls for an IPsec “user access control mechanism submitted for standards track” by March 2001, but six years later its sole output remains a requirements document [3] and an expired draft. As the author of one paper on effective engineering of authentication mechanisms points out, the design assumption behind IPsec was “all password-based authentication is insecure; IPsec is designed to be secure; therefore, you have to deploy a PKI for it” [4]. The result has been a system so unworkable that both developers and users have resorted to doing almost anything to bypass it: using homebrew (and often insecure) “management tunnels” to communicate keys, hand-carrying static keying material to IPsec endpoints, or avoiding IPsec altogether in favour of mechanisms like SSL-based VPNs, which were never designed for tunnelling IP traffic but are being pressed into service because users have found that almost anything is preferable to having to use IPsec. (This has become so pressing that there’s now a standard, datagram TLS or DTLS, for transporting TLS over UDP to fill the gap that IPsec couldn’t [5].)

More than ten years after SSL was introduced, support for a basic password-based mutual authentication protocol was finally (reluctantly) added, and even then only under the guise of enabling use with low-powered devices that can’t handle the preferred PKI-based authentication; the topic of allowing anything other than certificates for user authentication led to prolonged arguments on the SSL developers list whenever it came up [6]. SSH, a protocol specifically created to protect passwords sent over the network, still operates in its standard mode of authentication in a manner in which the recipient ends up in possession of the plaintext password rather than performing a challenge-response authentication. This practice, which goes under the technical label of a tunnelled authentication protocol, is known to be insecure [7][8][9] and is explicitly warned against in developer documentation like Apple’s security user interface guidelines, which instruct developers to avoid “handing [passwords] off to another program unless you can verify that the other program will protect the data” [10], and yet both SSL and SSH persist in using it. What’s required for proper password-based security for these types of protocols is a cryptographic binding between the outer tunnel and the inner authentication protocol, which TLS’ recently-added mutual authentication finally performs, but to date very few TLS implementations support it.
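As a rough sketch of what the last two paragraphs describe, the fragment below shows a challenge/response exchange whose response is also bound to the outer tunnel. It is an illustration, not SRP or any standardised protocol (a real password-authenticated key exchange does considerably better, in particular against offline guessing by a fake server); the function names are invented, and channel_binding is assumed to be a value unique to the tunnel instance, such as keying material from a TLS exporter.

import hashlib
import hmac
import secrets

def derive_key(password: str, salt: bytes) -> bytes:
    # A deliberately slow password hash, so a captured verifier resists
    # brute force; the server stores this key, never the password itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

def make_challenge() -> bytes:
    # Fresh random nonce from the server for every authentication attempt.
    return secrets.token_bytes(32)

def respond(password: str, salt: bytes, challenge: bytes,
            channel_binding: bytes) -> bytes:
    # The client proves knowledge of the password without sending it: only
    # a MAC over the challenge and the tunnel-unique value crosses the wire.
    key = derive_key(password, salt)
    return hmac.new(key, challenge + channel_binding, hashlib.sha256).digest()

def verify(stored_key: bytes, challenge: bytes, channel_binding: bytes,
           response: bytes) -> bool:
    expected = hmac.new(stored_key, challenge + channel_binding,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

Even this crude scheme never puts the plaintext password on the wire, and because the response is keyed to the tunnel, relaying it through a different tunnel (the standard attack on tunnelled authentication protocols) fails verification.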
A nice example of the difference between theory and practice, from the opposite point of view, is what its author describes as “the most ineffective CAPTCHA of all time” [11]. Designed to protect his blog from comment spam, it requires submitters to type the word “orange” into a text box when they provide a blog comment. This trivial speed-bump, which would horrify any (non-pragmatist) security expert, has been effective in stopping virtually all comment spam by changing the economic equation for spammers, who can no longer auto-post blog spam as they can on unprotected or monoculture-CAPTCHA-protected blogs [12][13]. On paper it’s totally insecure, but it works because spammers would have to expend manual effort to bypass it, and keep expending effort each time the author counters their move, which is exactly what spam’s economic model doesn’t allow (a sketch of just how little code such a check requires appears at the end of this section).

A lot of this problem arises from security’s origin in the government crypto community. For cryptographers, the security must be perfect — anything less than perfect security would be inconceivable. In the past this has led to all-or-nothing attempts at implementing security such as the US DoD’s “C2 in ‘92” initiative (a more modern form of this might be “PKI or Bust”), which resulted in nothing in ’92 or at any other date — the whole multilevel-secure (MLS) operating system push could almost be regarded as a denial-of-service attack on security, since it largely drained security funding in the 1980s and was a significant R&D distraction. As security god Butler Lampson observed when he quoted Voltaire, “The best is the enemy of the good” (“Le mieux est l’ennemi du bien”) — a product that offers generally effective (but less than perfect) security will be panned by security experts, who would prefer to see a theoretically perfect but practically unattainable or unusable product instead [14].
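As promised above, here is roughly what the “orange” check amounts to. The source doesn’t show the blog’s actual code, so the word, field name, and wiring below are purely illustrative:

CAPTCHA_WORD = "orange"

def comment_allowed(form_fields: dict) -> bool:
    # Accept a comment only if the human-verification box contains the
    # expected word. Deliberately trivial: its value comes from being
    # site-specific, not from being hard for a machine to solve.
    answer = form_fields.get("human_check", "")
    return answer.strip().lower() == CAPTCHA_WORD

The entire “security mechanism” is a single string comparison; everything that makes it work is the economics discussed above.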