Secure Systems Editors: Patrick McDaniel, [email protected]; Sean W. Smith, [email protected]


Security and Cognitive Bias: Exploring the Role of the Mind

Sean W. Smith | Dartmouth College

1540-7993/12/$31.00 © 2012 IEEE. Copublished by the IEEE Computer and Reliability Societies. September/October 2012.

Computer security aims to ensure that only “good” behavior happens in computer systems, despite potential action by malicious adversaries. Consequently, practitioners have focused primarily on the technology to prohibit “bad” things—according to some set of rules—and to a lesser extent on the structure of such rules.

Unfortunately, fieldwork and anecdotes report how we continue to get the rules wrong. We keep hearing that security is hard to use and gets in the way. In the workplace, writing down passwords on Post-it notes hidden under keyboards, under tables, or in desk drawers is endemic, because humans have too many to remember—and perhaps also because the IT system forces an authentication system that doesn’t meet users’ needs. (Recently, a school system secretary was lambasted for misusing the superintendent’s password to change grades—but no one seemed to think it odd that she knew the password in the first place.1) IT staffs know that keeping software updated is important to patch holes, but balancing those updates while keeping mission-critical applications running unimpaired is tricky—many users just give up. (Stuxnet was lauded for the number of 0-day holes it used, but five-year-old holes would suffice to penetrate much of our information infrastructure.) Savvy home users, trying to (legally) share music files with another household computer, will struggle over drop-down menu options attempting to open only the proper holes in the network perimeter. Developers might know that advanced protection technology, such as SELinux, will help keep programs in the bounds of secure behavior, but they have no easy way of formally telling the system what those bounds are.

So, it’s hard to create and configure security technology and hard to use it after deployment. However, the charter of this department is to look at the broader “system” context of security—and the human mind is a component in both security creation and use. The human mind is the arena in which security engineers translate “goodness” to machine rules; it’s where users experience frustration and is the medium through which that frustration is conveyed. While we practitioners have spent the last 40 years building fancier machines, psychologists have spent those decades documenting ways in which human minds systematically (and predictably) misperceive things. Minds are part of the system, and cognitive bias can tell us how minds get things wrong. (For quick introductions to this field, see Rational Choice in an Uncertain World, an undergraduate-level textbook;2 Cognitive Illusions, a graduate-level book;3 or Stumbling on Happiness, more casual reading.4 A pioneer in this space, Daniel Kahneman—a Nobel laureate—also has a new book out, Thinking, Fast and Slow, for a general audience.5)

To What Extent Might This Affect the Usable Security Problem?

Consider the creation of security policies—the formal rules stating whether subject S can perform action A on object O right now (let’s call this time t1). It’s tempting to imagine that an omniscient deity hovers in the computer, looking at a request’s full context and implications and making the wisest possible decision. However, in reality, this decision was probably made much earlier in time (at a time t0 ≪ t1) by a security officer trying to imagine what S would be doing in the future and whether action A would be consistent with the organization’s goals and values. We can pretend that the policy rules came from the deity at

t1, but it was all in the officer’s head at t0. Cognitive bias can tell us how these rules might differ. If we don’t pay attention to this difference, we risk creating incorrect policies.

Alternatively, consider the case of a subject S complaining about unusable security features (or, for that matter, other unusable aspects of IT). It’s tempting to imagine that an omniscient deity is hovering in S’s mind, who recorded this bad experience at time t1. However, in reality, we have a security engineer hearing S’s recollections, at time t2 ≫ t1, of how S felt at t1. We can pretend these recollections are the same as the deity’s observations, but they were all filtered through S’s head. Cognitive bias can tell us how the recollections and observations might differ. In this case, if we don’t pay attention to this difference, we risk “fixing” the wrong thing.

The Dual-Process Model

In my lab at Dartmouth, my colleagues and I have performed some initial exploration into how two sources of cognitive bias—the dual-process model and the empathy gap—affect security policy creation.

The dual-process model partitions the mind into two parts: an intuitive, nonverbal, and almost nonconscious system 1, and a verbal, introspective, conscious system 2. Some tasks are better done by one system or the other, and the systems can interfere with each other. However, this isn’t just abstract theory; what makes the last few decades of this science so interesting for people like me is that these theories are reinforced by experiments. We can use the theories to make predictions that are borne out in practice!

For example, psychologists Timothy Wilson and Jonathan Schooler carried out some experiments regarding jam (and by “jam,” I mean the sweet condiment one puts on toast, not an obscure security acronym).6 Trained taste experts ranked a set of jams. One set of test subjects ranked the jams without thinking (that is, using system 1); their rankings closely correlated with the experts’ rankings. However, other sets of test subjects were asked to think carefully while ranking the jams—and their rankings were very different. Nonexperts could do the task with system 1 but not with system 2. For jam, introspection inhibits intuition.

Sticky jam made me think of sticky security policy problems. We technologists build elaborate sets of knobs—drop-down menus, check boxes, access control lists—and expect users to figure out how to map their notion of “goodness” to a setting of the knobs, perhaps moving a system 1 goal to a system 2 task. Might we see the same inhibition phenomenon here? To test this, my team created a fictional social network. Users had various categories of personal information, and the GUI told users the various levels of connection they had with each friend. We presented one group of test subjects a sequence of friends and asked them to decide which information they’d share with each friend. Another group was asked to think about various social network privacy issues, and then given the same choices. The second group made significantly different choices—but to our surprise, the difference was one-sided: the group asked to think about privacy gave more information away!7 Perhaps introspection inhibits intuition also when it comes to security policy. (In hindsight, I wonder whether the cognitive bias toward dissonance reduction might have been at play; maybe the results would have differed if we didn’t call them “friends.”)

The Empathy Gap

We also examined what psychologists call the empathy gap: the very different decisions people make, even about dry factual things such as an estimated selling price for a coffee mug, when they are in the situation themselves versus when they are speculating about themselves in the future or about someone else.8–10 In our fieldwork in access control in large enterprises, we kept hearing how users needed to work around the access control system because the policy didn’t allow them to do what they needed to perform their jobs. In the case of healthcare IT, some researchers have even reached the conclusion that the problem is a dearth of clinicians among the policy makers.

Could the empathy gap be playing a role here? To examine this question, we recruited nearly 200 clinicians and staff members at a large hospital and partitioned them into two groups.11 We gave one group a series of access control scenarios we developed with a medical informatics specialist. These scenarios were all phrased in an abstract, role-based way, as is often found in security policies (for example, “Should a physician be able to see information I about patient A in this particular context?”). We gave the other group the same scenarios but instead phrased them in a way that put the test subject directly in the setting; each wildcard became specific (for example, “You are a physician treating patient Alice...”).

For two-thirds of the scenarios, the direct-experience group made significantly looser judgments than the policy maker group, suggesting that even experienced medical staff will make access control policies that experienced medical staff will find overly constraining. (However, in some of the other scenarios, the direct-experience group made significantly tighter decisions, oddly.) Maybe the problem with policy creation isn’t the policy makers’ backgrounds but the cognitive bias built into human minds.
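The gap between rule authoring at t0 and a situated request at t1 can be made concrete with a toy sketch: a role-based policy is a lookup over (subject role, action, object type) triples frozen when the officer writes them, so a legitimate request the officer did not anticipate is simply denied. The roles, actions, and rules below are hypothetical illustrations, not drawn from the hospital study.

```python
# Toy sketch of role-based policy creation and evaluation.
# At time t0, a security officer freezes a set of abstract rules;
# at time t1, the system evaluates a concrete request against them.
# All roles, actions, and object types here are made up for illustration.

# Rules authored at t0: each entry permits (role, action, object_type).
POLICY = {
    ("physician", "read", "record_of_own_patient"),
    ("nurse", "read", "record_of_own_patient"),
}

def authorized(role: str, action: str, object_type: str) -> bool:
    """Return True iff some rule written at t0 covers this request at t1."""
    return (role, action, object_type) in POLICY

# A request the officer imagined at t0 is allowed...
print(authorized("physician", "read", "record_of_own_patient"))    # True
# ...but a covering physician who needs another patient's chart in an
# emergency is denied: the situated need at t1 was never in the rule set,
# creating exactly the workaround incentive the fieldwork reports.
print(authorized("physician", "read", "record_of_other_patient"))  # False
```

The point of the sketch is only that the rule set encodes the officer’s t0 imagination of future requests, not the requests themselves; any mismatch surfaces as a denial the user experiences as the system “getting in the way.”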

Bounded Rationality and the Anchoring Effect

At the University of Southern California, Milind Tambe has also been looking at the role of cognitive bias, but in the context of optimizing system defense against human adversaries. In these scenarios, defenders have a limited amount of resources to distribute across various targets. Before mounting their attack, the adversaries can make repeated observations of the defenders’ actions. For instance, defenders distribute guards across a certain number of airport terminals, and the adversary can quietly scope things out and see that, perhaps, the guards go to the odd-numbered terminals on odd-numbered days and even-numbered terminals on even-numbered days.

The branch of mathematics called game theory analyzes these scenarios as a special type of Stackelberg game. Formalized treatments establish a set of possible adversaries, under a known distribution, and assume that adversaries choose the attack strategies that maximize their expectations of success. Under this formalized model, with “perfectly rational” adversaries, optimal strategies exist for the defenders.

In the real world, it might be nice to pretend that adversaries are perfectly rational—but in fact, they’re human, with minds subject to biases and distortions. In a 2009 project, James Pita and colleagues considered the implications of two of these biases: bounded rationality and the anchoring effect.12

Computer scientists like to think about how problems are solved by precise, thorough algorithms. The concept of bounded rationality (attributed to Herb Simon, whom the computer science field claims as one of its own) arises from the annoying observation that, when approaching many of these problems, human minds show no evidence of actually carrying out these algorithms, and so are perhaps doing something much simpler and less correct. Tambe’s group allowed for the adversaries’ bounded rationality by allowing them to have only approximately optimal choices.

Looking back on the inspiration for his pioneering work in cognitive bias, Kahneman tells how he and his colleague would consistently misestimate statistical probabilities—but would misestimate the same way! (So maybe we can predict how humans get these things wrong.) The anchoring effect describes one type of distortion here: generally put, human minds like to make basic assumptions about probability distributions and only slowly change them on the basis of observation.

As noted, formal treatments of the defender game assume adversaries make the best possible choice against the defender’s strategy. An omniscient adversary sees the defender’s strategy exactly; however, human adversaries can only act on their perceptions of the strategy. Tambe’s group modeled this effect by initially anchoring the adversaries’ perceptions on uniformity in the defender’s resource distribution, regardless of what the defender was doing.

Making these changes in the adversary model leads to defender strategies that differ from what was previously considered mathematically optimal. The punch line? When evaluated in large-scale experiments against human adversaries, these new strategies outperformed the mathematically optimal ones! In subsequent work, Pita’s group further improved their model by taking into account prospect theory, which describes how human minds tend to distort estimated probabilities of actions depending on how good or bad the perceived outcome is.13

Some Other Cognitive Bias Techniques

These study results stemmed from looking at a few basic ways the mind gets things wrong. However, the literature on cognitive biases provides a veritable wonderland of additional techniques. Here are just a few:

■ Peak end (for example, see “End Effects of Rated Life Quality”14). Rather than considering the net amount of goodness over time, human minds measure the quality of an event with duration by considering just the maximum value and the end value. Humans judge a short, happy life to be better than the same life with a longer but not quite as happy tail. Perhaps we can make an unusable security system appear more usable just by making it end well.

■ Immune neglect (for example, see “The Peculiar Longevity of Things Not So Bad”15). Scenarios exist in which less-bad events can have a longer negative impact (when recalled by human minds) than worse events. Perhaps we can make an unusable security system appear more usable (afterward) by making things go really wrong when they start to go wrong. Rather than simply reject a password, maybe we should crash the browser.

■ Preview-based forecasting (for example, see “Why the Brain Talks to Itself”16). Humans evaluate future choices by “previewing”
their consequences in their heads. However, psychologists have identified various sources of systematic error in such previews. Perhaps this can tell us how to make a security policy tool (predicting the goodness of future actions) that creates a policy users are less likely to circumvent.

■ “Infernal” internal logic (for example, see “Supposition and Representation in Human Reasoning”17). Human minds have interesting ways of drawing incorrect conclusions from a set of assertions and observations (for example, Google the “Wason selection task”). Perhaps this might shed light on how even shrewd Unix users have trouble setting file and directory permissions correctly for various scenarios. (Think of “access” as “conclusion,” and “rules/settings” as “assertions and observations.”)

■ Moral cognition (for example, see “The Emotional Dog and Its Rational Tail”18). Human minds have interesting ways of reasoning about moral and immoral actions. Perhaps this work can shed light on why some security officers pound fists and insist that the enterprise firewall must block all recreational browsing—even though studies show that such browsing increases productivity.

One could teach a whole course on this—in fact, I’ve tried to.

Why should human minds behave this way? To paraphrase Tom Lehrer, that’s not our department. But that’s how they seem to behave, and because human minds are part of the system of usable and effective security, we’d be wise to take into account the strange ways they work.

References
1. E. Protalinski, “Mom Accessed School System 110 Times to Change Kids’ Grades,” ZDNet, 19 July 2012; www.zdnet.com/mom-accessed-school-system-110-times-to-change-kids-grades-7000001230.
2. R.K. Hastie and R.M. Dawes, Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making, 2nd ed., Sage, 2009.
3. R.F. Pohl, Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Psychology Press, 2005.
4. D. Gilbert, Stumbling on Happiness, Vintage Books, 2007.
5. D. Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011.
6. T.D. Wilson and J.W. Schooler, “Thinking Too Much: Introspection Can Reduce the Quality of Preferences and Decisions,” J. Personality and Social Psychology, vol. 60, no. 2, 1991, pp. 181–192.
7. S. Trudeau, S. Sinclair, and S. Smith, “The Effects of Introspection on Creating Privacy Policy,” Proc. 8th ACM Workshop on Privacy in the Electronic Society (WPES 09), ACM, 2009, pp. 1–10.
8. E.W. Dunn and S.A. Laham, “Affective Forecasting: A User’s Guide to Emotional Time Travel,” Affect in Social Thinking and Behavior, J. Forgas, ed., Psychology Press, 2006.
9. E. Pronin, C. Olivola, and K. Kennedy, “Doing unto Future Selves as You Would Do unto Others: Psychological Distance and Decision Making,” Personality and Social Psychology Bulletin, vol. 34, no. 2, 2007, pp. 224–237.
10. L. Van Boven, D. Dunning, and G. Loewenstein, “Egocentric Empathy Gaps between Owners and Buyers: Misperceptions of the Endowment Effect,” J. Personality and Social Psychology, vol. 79, no. 1, 2000, pp. 66–76.
11. Y. Wang, S.W. Smith, and A. Gettinger, “Access Control Hygiene and the Empathy Gap in Medical IT,” HealthSec, Usenix Assoc., 2012; https://www.usenix.org/conference/healthsec12/access-control-hygiene-and-empathy-gap-medical-it.
12. J. Pita et al., “Effective Solutions for Real-World Stackelberg Games: When Agents Must Deal with Human Uncertainties,” Proc. 8th Int’l Conf. Autonomous Agents and Multiagent Systems, Int’l Foundation for Autonomous Agents and Multiagent Systems, 2009; http://teamcore.usc.edu/papers/2009/COBRA.pdf.
13. R. Yang et al., “Improving Resource Allocation Strategy against Human Adversaries in Security Games,” Int’l Joint Conf. Artificial Intelligence (IJCAI 11), AAAI, 2011; http://teamcore.usc.edu/papers/2011/ijcai11_paper148_cameraready.pdf.
14. E. Diener, D. Wirtz, and S. Oishi, “End Effects of Rated Life Quality: The James Dean Effect,” Psychological Science, vol. 12, no. 2, 2001, pp. 124–128.
15. D. Gilbert et al., “The Peculiar Longevity of Things Not So Bad,” Psychological Science, vol. 15, no. 1, 2004, pp. 14–19.
16. D.T. Gilbert and T.D. Wilson, “Why the Brain Talks to Itself: Sources of Error in Emotional Prediction,” Philosophical Trans. Royal Soc. B, vol. 364, no. 1521, 2009, pp. 1335–1341.
17. S.J. Handley and J. Evans, “Supposition and Representation in Human Reasoning,” Thinking and Reasoning, vol. 6, no. 4, 2000, pp. 273–311.
18. J. Haidt, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Rev., vol. 108, no. 4, 2001, pp. 814–834.

Sean W. Smith is a professor of computer science at Dartmouth College. Contact him at sws@cs.dartmouth.edu.
