Comparative Strategy

ISSN: 0149-5933 (Print) 1521-0448 (Online) Journal homepage: http://www.tandfonline.com/loi/ucst20

Cyber deterrence and critical-infrastructure protection: Expectation, application, and limitation

Alex Wilner

To cite this article: Alex Wilner (2017) Cyber deterrence and critical-infrastructure protection: Expectation, application, and limitation, Comparative Strategy, 36:4, 309-318

To link to this article: http://dx.doi.org/10.1080/01495933.2017.1361202

Published online: 01 Nov 2017.


Cyber deterrence and critical-infrastructure protection: Expectation, application, and limitation

Alex Wilner Norman Paterson School of International Affairs, Carleton University, Ottawa, Canada

ABSTRACT Linking deterrence theory to cybersecurity policy and critical-infrastructure protection is easier said than done. Recent cybersecurity incidents involving the United States, China, Russia, and North Korea illustrate the yawning gap between cyber deterrence expectations, applications, and results. This article draws on classical deterrence theory to illustrate how the logic of deterrence applies to cybersecurity policy and strategy. By differentiating between physical and digital critical infrastructure protection, the article explores the promises and pitfalls of cyber deterrence in practice. Seven limitations are explored in detail, including: denying digital access, commanding cyber retaliation, observing deterrence failure, thwarting cyber misfits, addressing the cyber power of weakness, attributing cyber attacks, and solidifying red lines.

On November 21, 2016, U.S. President-elect Donald J. Trump took to YouTube to release a three-minute video detailing his ambitions for the coming months. The video provided Trump with an opportunity to describe a "list of executive actions" that he would pursue on "day one" of his presidency. On national security, Trump explained that he would develop "a comprehensive plan to protect America's vital infrastructure against cyberattacks and all other forms of attack."1 Cybersecurity was an issue Trump returned to often during the 2016 presidential election. His website dedicated an entire subsection to the issue, where, among other things, it stated Trump's intention to "develop the offensive cyber capabilities we [the U.S. Government] need to deter attacks by both state and non-state actors and, if necessary, to respond appropriately."2 Given current trends, there is little doubt that cybersecurity will help define the Trump presidency, perhaps as much as international terrorism has helped define both the George W. Bush and Obama administrations. Unfortunately for President Trump, linking deterrence theory to cybersecurity policy and critical infrastructure protection is easier said than done. Deterrence theory has come a long way since the heydays of the Cold War, but in both theory and practice cyber deterrence is not yet well understood. Nor is cyber deterrence properly theorized. All processes of deterrence, new and old, are based on several theoretical and logical prerequisites that help dictate how deterrence is put into practice. Deterrence does not just happen; it is something that you do to an adversary in order to change its behavior to your liking. What follows is an exploration of the promises and pitfalls of cyber deterrence as it relates to contemporary critical infrastructure protection. The article begins with a brief discussion of the nuts and bolts of deterrence theory and practice.
It then makes the distinction between hardware and software in thinking through the application of deterrence to infrastructure protection. Seven dilemmas, or limitations, to applying deterrence to cyber infrastructure protection are then explored.

CONTACT Alex Wilner [email protected]

© 2017 Taylor & Francis

Deterrence theory: Logical prerequisites and practical dimensions

From the literature on classical deterrence, four central prerequisites present themselves.3 First, deterrence prompts voluntary changes in behavior. The goal is not to force an adversary to act in a certain way by destroying its ability to act any other way. There is a distinction between what deterrence scholars call brute force—which is destroying an adversary such that he cannot harm you—and deterrence—which is about convincing a capable adversary not to harm you. Deterrence involves a choice; it is not about incapacitation.4 Second, for deterrence to work, adversaries must be sufficiently influenced by the costs and benefits of their actions, such that some form of threat will alter their behavior. Political scientists and economists alike call this rationality. Only rational actors that weigh the costs and benefits of their actions can be coerced. Third, deterrence involves at least two actors: the defender—the actor doing the deterring and protecting itself from aggression—and the challenger—the actor contemplating an aggressive move.5 For deterrence to work, the defender must define unwanted behaviors to the challenger, and communicate or signal a willingness to punish violations. If states want to defend their interests and assets by practicing deterrence, they must tell their adversaries how they will respond to different types of aggression. Red lines must be drawn, communicated, and defended. Deterrence communication hinges on a capability to act as promised, to punish or deny as threatened. States need to show resolve to carry out their threats. Finally, deterrence is best practiced against a known or suspected adversary. Who is it, precisely, we are trying to deter? In cases where the identities of adversaries are unknown or purposefully obfuscated, coercive threats may miss their mark.
To practice deterrence, it helps a great deal to have someone or something to hold accountable and threaten appropriately. From theory, we can move to practice. Deterrence rests on convincing an adversary that the costs of taking a particular action outweigh the potential benefits. Deterrence is fundamentally about manipulating another's behavior in ways that suit your own goals. It is about influencing what economists call the cost-benefit calculus of decision making. The deterrence most familiar to casual observers is deterrence by punishment, or deterrence by retaliation.6 This type of deterrence was the basis for the Cold War. Here, a defender threatens to retaliate against an aggressor in the event the aggressor carries out an unwanted action: If you strike me, I will strike you back, tit for tat. The threat carries with it a cost to the aggressor. Mutual Assured Destruction—or MAD—relied on the threat of a U.S. nuclear exchange with Soviet Russia in order to deter Russia's use of nuclear weapons against the United States and its allies in the first place. Threatening nuclear annihilation is taking deterrence to an obvious extreme, but the larger point is that actors can use a combination of threats, like conventional or nuclear attack, military intervention, economic sanctions, and diplomatic pressure, to shape an adversary's behavior. Deterrence does not only involve punishment, however. The flip side of retaliation is denial. Deterrence by denial shapes an adversary's behavior by threatening it with failure. Actors usually weigh both the costs and the benefits of an action. Punishment adds to the cost; denial subtracts from the benefits. Both manipulate behavior, but from different ends. With deterrence by denial, when an action becomes too difficult or too risky to conduct, an opponent may choose not to act in the first place.
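The cost-benefit calculus just described can be sketched as a toy expected-utility model. Every probability and payoff below is invented purely for illustration; the point is only to show how punishment raises the expected cost of acting while denial lowers the expected benefit, reaching the same outcome from opposite ends.

```python
# Toy model of a challenger's decision to attack (illustrative values only).

def expected_utility(p_success, benefit, p_attribution, punishment):
    """Expected payoff of attacking: gains from success minus expected retaliation."""
    return p_success * benefit - p_attribution * punishment

def deterred(p_success, benefit, p_attribution, punishment):
    """Deterrence holds when attacking carries a non-positive expected payoff."""
    return expected_utility(p_success, benefit, p_attribution, punishment) <= 0

# Baseline: likely success, modest punishment -> the challenger attacks.
baseline = deterred(p_success=0.8, benefit=100, p_attribution=0.5, punishment=60)

# Deterrence by punishment: raise the expected cost of retaliation.
by_punishment = deterred(p_success=0.8, benefit=100, p_attribution=0.9, punishment=120)

# Deterrence by denial: lower the odds the attack succeeds at all.
by_denial = deterred(p_success=0.2, benefit=100, p_attribution=0.5, punishment=60)

print(baseline, by_punishment, by_denial)
```

Note that both coercive routes flip the same inequality: punishment works on the subtracted term, denial on the added one.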
With denial, the presumed benefits of an action fall below its costs, influencing behavior. For illustration, city-wide Green Zones, like those built in Iraq, and bulwarks around certain buildings, like embassies, restrict easy access to certain targets. This makes some types of attack more difficult to conduct, and potentially less likely to happen as a result. In this example, the primary intent is defense, but the subsequent effect on adversarial decision-making is denial. Another version of denial involves resilience. Resilience is the ability to bounce back, to mitigate the effects of an attack, to recover quickly after getting hit. A resilient power grid, for example, redistributes electricity even if certain nodes within the system are shut down. Resilience is primarily about recovery, but from a deterrence standpoint it also robs would-be aggressors of their objectives and strategic success. When even tactically successful attacks barely harm or disrupt a victim—the attack's presumed intention—the payoff to the aggressor is diminished, potentially altering its calculus and behavior. In summary, deterrence by punishment or retaliation threatens harm, adding costs to certain behavior; deterrence by denial or resilience threatens failure, subtracting benefits from certain behavior.
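The resilience point can be made concrete in a few lines of code. The sketch below uses an entirely invented grid topology and checks whether every load can still reach a generator after nodes fail; a resilient design survives a single failure that would partition a less redundant network.

```python
from collections import deque

# Hypothetical grid: generators, substations, and loads (topology invented).
grid = {
    "gen_a": {"sub_1", "sub_2"},
    "gen_b": {"sub_2", "sub_3"},
    "sub_1": {"gen_a", "load_x"},
    "sub_2": {"gen_a", "gen_b", "load_x", "load_y"},
    "sub_3": {"gen_b", "load_y"},
    "load_x": {"sub_1", "sub_2"},
    "load_y": {"sub_2", "sub_3"},
}

def still_served(grid, downed, generators, loads):
    """Check whether every surviving load can still reach a generator
    after the nodes in `downed` are knocked out (breadth-first search)."""
    alive = {n: nbrs - downed for n, nbrs in grid.items() if n not in downed}
    seen = set(g for g in generators if g not in downed)
    queue = deque(seen)
    while queue:
        node = queue.popleft()
        for nbr in alive.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return all(l in seen for l in loads if l not in downed)

gens, loads = {"gen_a", "gen_b"}, {"load_x", "load_y"}
print(still_served(grid, {"sub_2"}, gens, loads))           # redundant paths survive
print(still_served(grid, {"sub_1", "sub_2"}, gens, loads))  # load_x is cut off
```

A grid whose loads ride through the loss of its busiest substation denies an attacker the blackout that presumably motivated the strike.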

Protecting critical infrastructure: Hardware and software

With the discussion of deterrence theory and practice serving as a backdrop, we can turn next to exploring the particularities of critical infrastructure protection. For our purposes, it is useful to think of infrastructure as two separate things: hardware and software. The hardware are the engines, machines, systems, facilities, and processes that comprise infrastructure platforms. These are the physical attributes of infrastructure. Hardware is the working definition that is baked into the U.S. National Infrastructure Protection Plan. The U.S. government classifies 16 critical infrastructure sectors, including, for instance, manufacturing, defense industries, dams, the financial sector, food and agriculture, and the chemical and nuclear sectors. Put together, these disparate classifications serve to illustrate the scope and breadth of contemporary physical critical infrastructure.7 The software side to infrastructure is different. It incorporates the digital side of infrastructure: the code, computer networks, programs, information systems, SCADA (supervisory control and data acquisition) control systems, digital links, data, and web-based platforms that underpin and help operate critical infrastructure. These are the intangible attributes of contemporary infrastructure. Interestingly, digital infrastructure is a bit of a paradox: it is its own hardware platform, perhaps best captured within the communications and information technology sectors, but it also has a cross-cutting element, given that digital infrastructure runs through and informs all the other categories of infrastructure.
With this distinction between hardware and software in mind, we can think of both physical deterrence—deterring terrorist bombings or other attacks against nuclear facilities, for example—and digital deterrence—safeguarding the personnel records of nuclear facility staff and scientists, for example—in critical infrastructure protection. Importantly, this dichotomy between physical and digital critical infrastructure deterrence provides us with an opportunity to illustrate and assess the limitations of applying deterrence to cyber infrastructure protection writ large. Seven high-level observations, dilemmas, and limitations become apparent.

Denying access

First, tightening defenses around and within physical infrastructure to deny aggressors entry—a form of deterrence by denial—is more easily accomplished than denying aggressors entry into digital infrastructure. There are only so many ways you can physically get into the U.S. Embassy in Paris, for instance. But there appear to be many more opportunities for infiltration within cyberspace and within digital infrastructure. Backdoor portals and zero-day software vulnerabilities seemingly pile up. Even air-gapping secure networks from unsecured networks—which can involve physically separating internal digital space from online digital space—is not foolproof. Emerging technology, like artificial intelligence and more sophisticated cyber espionage techniques, introduces new digital vulnerabilities.8 From a deterrence perspective, this discrepancy between denying physical entry and denying digital entry may inform an adversary's calculus and behavior. Aggressors learn from their successes and failures, just as defenders do. So as defenders shut physical doors to would-be challengers, some may start emphasizing digital attacks over physical attacks to achieve comparable goals and objectives. If so, in thinking about the future of infrastructure protection, states must find the right balance between their physical and digital defensive needs, and preempt the fact that actions in one domain may produce counter-actions among adversaries in the other.
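A bit of back-of-envelope arithmetic clarifies why the proliferation of digital entry points undermines denial. If each of n independent entry points is breached with probability p, the chance that at least one is breached is 1 - (1 - p)^n. The figures below are invented; the shape of the curve is the point.

```python
# Probability that at least one of n independent entry points is breached.
def breach_probability(p, n):
    return 1 - (1 - p) ** n

# A guarded building has a handful of doors; sprawling digital
# infrastructure may expose dozens of portals and vulnerabilities.
physical = breach_probability(0.05, 3)
digital = breach_probability(0.05, 60)

print(f"physical entry points: {physical:.2f}")  # ~0.14
print(f"digital entry points:  {digital:.2f}")   # ~0.95
```

Hardening each individual portal helps, but with enough portals the aggregate odds of entry stay high, which is precisely the denial problem this section describes.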

Command and control in cyber retaliation

A second observation has to do with retaliation. What government agency or department is expected to engage in cyber deterrence? To date, the division of labor remains uncertain. Take the United States as an example. A U.S. department of cybersecurity does not yet exist. Instead, multiple departments and agencies hold different components of the broader cyber file. The U.S. Department of Justice, with the FBI and the Cyber Investigative Joint Task Force, leads investigations into criminal intrusions that affect

national security. But the Department of Homeland Security helps private firms respond to breaches of their private networks. And a multitude of intelligence agencies are tasked with gathering information to identify a perpetrator's identity. It is not yet clear, however, which department will carry out a cyber deterrent operation. There is a likely role for the NSA—the National Security Agency—which is preeminently capable in cyberspace. But at its core, the NSA is an intelligence platform, perhaps structurally ill-suited for cyber offense. Otherwise, U.S. Cyber Command, a Department of Defense structure established in 2009 but housed within the NSA, may be tasked to respond. It is not clear, however, if Cyber Command has a role to play in protecting both military and civilian cyber infrastructure. It may chiefly respond to attacks on the former, despite the fact that civilian cyber infrastructure appears far more vulnerable than military infrastructure to cyber attack.9 And even then, Cyber Command is a defensive structure, meant to defend military cyber assets rather than to conduct offensive cyberwar. Deterrence works best when adversaries understand the risks they run, and part of that requires that they appreciate who is likely to respond to their aggression, and how. For now, U.S. leadership in responding to cyber attacks and aggression is not clear. In his January 2017 confirmation hearing before the U.S. Senate Committee on Armed Services, Retired General James Mattis, President Trump's nominee for Secretary of Defense, reiterated that the United States needed "to develop a clear whole-of-government policy"—a comprehensive "cyber doctrine"—for responding to "cyber aggression" that unified the American response across a number of government departments and agencies.10 Under questioning, Mattis suggested that the doctrine might be ready within 18 months (July 2018).
Until then, and until a clearer division of labor is developed that illustrates how the United States will follow through with cyber deterrence in military, civilian, and private infrastructure protection, deterrence will remain uncertain and weak. The same holds true for all states eager to communicate a willingness and ability to punish attacks on and violations of their cyber infrastructure. Leadership is needed to bolster the credibility of threats.

Deterrence failure

A third observation involves deterrence failure. In physical space, you often know when your defenses are being tested. Attacks are evident and known: A fortress wall or national border is breached, a missile is launched, or a bomb goes off. But in cyberspace, this is not the case. Cyber attacks on infrastructure—even when they are spectacularly successful—often go unnoticed, even for years. Stuxnet—the malicious worm allegedly developed by the United States and Israel to degrade Iran's nuclear enrichment efforts—was apparently unleashed sometime in 2007.11 Only three years later, in 2010, and well after Stuxnet had managed to penetrate Iran's nuclear facilities and destroy some centrifuges, was it discovered and defeated. The United States provides another, more recent, example of the same phenomenon. In 2015, cybersecurity experts identified a Chinese-sponsored hack—two separate hacks, in fact—of

the U.S. Office of Personnel Management (OPM). Sensitive data, including personal information like health and financial history, Social Security numbers, even fingerprints, of millions of Americans who had gone through U.S. government background checks, was surreptitiously siphoned off and exfiltrated. But American investigators now believe that the malware penetrated U.S. cyber defenses sometime in early 2014.12 Not only did the United States fail to realize that it was under attack, but the attack continued unnoticed for months. The OPM breach likely provided China with data useful for identifying U.S. intelligence personnel stationed abroad, and potentially for blackmailing other Americans. The CIA was reported to have recalled some of its personnel from the U.S. Embassy in Beijing as a precaution.13 Attacks that go undetected and ignored are a conundrum for cyber infrastructure protection. Deterrence occupies the middle ground between defense and offense. Classical interpretations of deterrence theory suggest that deterrence has failed when defenses are tested or breached, such that offensive retaliation may be justified. Contemporary interpretations of deterrence, like "cumulative deterrence," provide a more flexible interpretation of deterrence failure, suggesting instead that some attacks are inevitable and that retaliation is meted out, piecemeal, in hopes of delimiting and influencing a rival's behavior over the long term.14 But regardless of where the line is drawn between deterrence success and failure, knowledge of a breach or attack is a prerequisite to the next step of retaliation. That equation

does not easily exist in cyberspace. If states do not realize that their cyber infrastructure has been compromised, they cannot appreciate that their deterrence has failed and that retaliation may be warranted and necessary.
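The detection problem can be put in rough quantitative terms. Treating discovery as a geometric process in which a breach is spotted in any given month with probability q (a simplifying assumption, with invented values), the expected dwell time before discovery is 1/q, which shows how quickly modest detection odds stretch into multi-year lags of the kind seen with Stuxnet and OPM.

```python
# Expected months a breach goes unnoticed if it is detected in any
# given month with probability q (geometric distribution, mean 1/q).
def expected_dwell_months(q):
    return 1 / q

for q in (0.25, 0.05, 0.02):
    print(f"monthly detection odds {q:.0%}: ~{expected_dwell_months(q):.0f} months undetected")
```

Even a defender with a one-in-twenty chance of catching an intrusion each month should, on average, expect well over a year of silent compromise before deterrence failure becomes visible at all.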

Cyber misfits and miscreants

A fourth observation involves rationality. As noted, deterrence works best against rational adversaries. Physical attacks on infrastructure are often launched by nuanced, if not always purely rational, adversaries, who calculate gain in conducting attacks. In cyberspace, however, some malicious actors target infrastructure for fun. Quite literally. LulzSec, a hacking collective comprising half-a-dozen computer geeks, drew its name from LOL, texting shorthand for "laughing out loud." Its motto was "Laughing at your security since 2011!" In June of that year it took down the CIA's webpage. It announced this accomplishment on Twitter: "Tango Down—CIA.gov—For the Lulz."15 LulzSec was eventually traced and dismantled. But other such groups have taken its place: actors willing to test the limits of infrastructure cybersecurity for bragging rights. For the glory. This is a unique condition, not usually evident in physical security and traditional conflict, that deterrence has yet to properly address. But as the cyber domain greatly empowers disparate individuals and groups motivated by any number of potential grievances, all the while seemingly diminishing the personal and physical harm cyber aggressors face, deterrence will have to learn to creatively respond to thrill-seeking cyber attackers. Until coercive strategies that match a miscreant's specific motivation can be established—making hacking less fun, in this case—these less-rational cyber adversaries will continue to pose a challenge to states eager to practice cyber deterrence.

The [cyber] power of weakness

A fifth observation has to do with asymmetry. In physical space, symmetrical harm is usually feasible, in that deterrence can threaten an equal measure of destruction to shape an adversary's behavior. If you destroy this piece of physical infrastructure, the message goes, we might do the same in kind. But in cyberspace, symmetry is often lacking. Cyberspace provides traditionally weaker states, non-state actors, collectives, and individuals disproportionate power over traditionally powerful states. The more digitally connected a state is, the more vulnerable it may be to crippling cyber attacks. The United States has a lot to lose in cyberspace; North Korea, not so much. Militants fighting on behalf of the Islamic State's "Virtual Caliphate," even less so. This digital asymmetry in power and vulnerability alters traditional coercive balances.16 It may compel weak actors to invest their limited resources in digital weaponry, to help offset their weaknesses in other domains. But it may also force strong actors to avoid digital retaliation altogether, favoring non-cyber, cross-domain forms of response, if only to prevent escalating a cyber conflict in ways in which they have more to lose than their weaker opponents. The concomitant risk

of purposefully avoiding escalation in cyberspace in the face of certain challenges, however, is damage to one's coercive credibility over time.17 A paradox, then, emerges in cyberspace: technologically advanced states are both the most able to conduct offensive cyber attacks and the most susceptible to such attacks. Cyber deterrence, including cross-domain deterrence, will have to address these discrepancies.
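At bottom, the asymmetry argument is arithmetic about exposure. The sketch below uses invented figures for the value of each side's digitally reachable assets; in a tit-for-tat cyber exchange of equal effectiveness, the more connected state simply has more to lose.

```python
# Invented exposure figures: value of digitally reachable assets (arbitrary units).
EXPOSURE = {"highly_connected_state": 1000, "barely_connected_state": 10}

def losses_in_exchange(strike_effect=0.1):
    """Loss each side suffers if both land equally effective strikes,
    modeled as a fixed fraction of digitally exposed assets."""
    return {state: value * strike_effect for state, value in EXPOSURE.items()}

print(losses_in_exchange())
# Symmetric capability, wildly asymmetric stakes: the connected
# state loses 100 units to the other's 1.
```

The toy numbers capture why a heavily networked state may prefer cross-domain responses to a like-for-like cyber exchange it would lose on net.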

The attribution dilemma

A sixth problem for deterrence is attribution. This can be a problem with some physical attacks, too, especially with clandestine acts of sabotage or terrorism, but in physical space victims usually have a solid idea of their aggressor's identity. Not so in cyberspace. The problem of attribution—whom to blame for an attack and whom to retaliate against as a result—is especially knotty in digital space. The ease of digital obfuscation, the complexity of multistage cyber attacks, and the challenge of uncovering the physical source of a cyber attack can complicate how victims of cyber attack construct and communicate retaliatory threats. And as Joseph Nye reminds us, "knowing the true location of a machine" used in a cyber attack "is not the same as knowing the ultimate instigator of an attack."18 If cyber adversaries believe that their identities are protected, even robust threats may fail to shape their behavior. Digital

footprints certainly exist in infrastructure attacks, but perpetrators often appear able to mask their identities well enough to stay a victim's retaliatory hand. Without near-certain attribution, coercive threats will remain suspect.19
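Attribution under uncertainty is often framed as Bayesian inference: technical indicators update, but rarely settle, a probability distribution over suspects. The sketch below uses entirely invented priors and likelihoods for hypothetical actors to show how even telling evidence can leave a victim well short of the near-certainty that retaliation demands.

```python
# One round of Bayesian updating over a fixed list of hypothetical suspects.
def posterior(priors, likelihoods):
    """Posterior over suspects after one piece of evidence (Bayes' rule)."""
    joint = {s: priors[s] * likelihoods[s] for s in priors}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

priors = {"state_x": 0.4, "state_y": 0.3, "criminal_group": 0.3}

# Evidence: the malware reuses code previously tied to state_x. Reused
# code can be stolen or planted (a false flag), so no likelihood nears 1.
likelihoods = {"state_x": 0.7, "state_y": 0.2, "criminal_group": 0.2}

post = posterior(priors, likelihoods)
print(post)  # state_x ends up around 0.7: the leading suspect, hardly a certainty
```

A 70 percent posterior may justify quiet countermeasures, but it is a shaky foundation for the public, proportionate retaliation that credible deterrence requires.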

Shifting red lines

A seventh observation is this: threats of retaliation are more easily understood in physical space than they are in digital space. States have had hundreds of years of practice turning their military threats into deterrence and coercion. Norms of engagement are generally understood, at least among governments, and are also codified in strategic and military doctrine. Physically attacking infrastructure usually crosses a well-understood red line. This is not the case in cyberspace. Cyberspace, as a domain, is itself relatively new; the logic of deterrence has only just begun catching up. It is not clear how states will respond to different types of digital attacks on their infrastructure. For starters, such attacks are relatively rare. But digital red lines are also fuzzy. Coercive communication is muddled as a result. The United States, in particular, is trying to fix this, but the process has proven difficult. In July 2016, the White House, under President Barack Obama, issued a presidential directive on cyber retaliation. The president distinguished between run-of-the-mill "cyber incidents" and more "significant cyber incidents." The idea is that the latter might invite American retaliation, codifying cyber red lines, norms, and expectations with regard to cyber deterrence over time.20 Dilemmas persist, however. In the first case, "significant cyber incidents" are defined as attacks that harm national security or economic interests; foreign relations; public confidence; or the health, safety, and civil liberties of the American people. This definition is so broad that it risks diluting the coercive process altogether, diminishing the potency of coercive communications. In the second case, the way the United States has responded recently to three separate Russian, Chinese, and North Korean cyber attacks confuses U.S. coercive messaging in cyberspace. The latest cybersecurity incident, involving Russian hacking of U.S. presidential election data, is a case in point.
In early January 2017, the American intelligence community published a declassified summary report concluding that President Vladimir Putin ordered Russian intelligence agencies to disrupt the U.S. election by hacking, exfiltrating, manipulating, and releasing sensitive data.21 The Russian influence campaign targeted Democratic National Committee (DNC) data and the emails of Democratic Party personnel, including John Podesta, chairman of Hillary Clinton's presidential campaign. Thousands of emails and documents were stolen. Some were subsequently leaked online, in real and doctored form, via WikiLeaks and DCLeaks (a Russian-linked website). Many of the emails embarrassed Democratic officials, and several subsequently resigned. Confusion remains, however, as to whether and how the United States, under President Trump, will respond. Most important, for the purposes of thinking through the establishment of deterrence red lines in

cyberspace, is the way Washington interpreted the leaks, but not necessarily the original theft of the data, as an attempt to influence the 2016 U.S. presidential election. The U.S. intelligence report is blunt: "Russia's goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton [the US Democratic Party nominee], and harm her electability and potential presidency. We further assess Putin and the Russian Government developed a clear preference for President-elect Trump."22 Subsequent testimony in 2017 by several U.S. officials, notably former FBI Director James Comey and NSA Director Admiral Mike Rogers, has repeated the assertion that Moscow probed the U.S. election in hopes of swaying the result.23 Cumulatively, these assessments bumped the incident into the "significant" category. Obama's CIA Director, John Brennan, was left with the difficult task of explaining why spying on U.S. political institutions was considered fair game, but the subsequent release of that material to manipulate the democratic process was considered a "new level of malicious activity."24 Hacking the Democrats' emails was considered legitimate statecraft; releasing the data had crossed a line. Some American officials called for classifying the U.S. electoral system as a new category of national critical infrastructure. Speaking of the DNC hack, Shawn Henry, who previously led the FBI's cyber division and worked to defend the DNC's infrastructure during the breach, noted: "This is not a mom-and-pop delicatessen or a local library. This is a critical piece of the U.S. infrastructure because it relates to our

electoral process, our elected officials, our legislative process, our executive process."25 President Obama agreed. Through executive order, the United States retaliated against Russia on December 29, 2016: it imposed sanctions on several Russian individuals and entities responsible for "tampering, altering, or causing the misappropriation of information with the purpose or effect of interfering with or undermining election processes or institutions," expelled 35 Russian diplomats, and closed two U.S.-based estates used by Russian intelligence services.26 With characteristic aplomb, rather than reciprocating in kind, Russian President Vladimir Putin invited the children of American diplomats to celebrate New Year's festivities (along with the Russian Orthodox Christmas) at the Kremlin. Putin had little to lose: Obama had less than a month left in his term. The Russian hacking episode put China's earlier hacking of the U.S. Office of Personnel Management, mentioned above, into some context. The Chinese stole as many as 20 million personnel records of U.S. government workers, a massive trove of data orders of magnitude greater than was exfiltrated during the DNC hacks. But the OPM records were never published. Nor were they released or leaked. China, it seems, held on to the data. That prompted James Clapper, Director of National Intelligence, to label the Chinese hack "not an attack" on the United States, but rather simply good espionage. Given the chance, he added, the United States "would have done the same thing."27 Another set of related lessons can be derived from North Korea's attack on Sony Pictures Entertainment in late 2014. That attack was prompted by Sony's production of a spy comedy film, The Interview, in which a pair of American journalists are recruited by the CIA to assassinate North Korean leader Kim Jong-un. The movie was to be released on December 25, but on November 22, a coordinated cyber attack destroyed 70 percent of Sony's computing power.
Embarrassing emails written by Sony officials, along with personal information, were subsequently released. On the surface, none of that concerned Washington. But when Sony later opted against releasing the movie altogether (and major American movie chains agreed not to screen the film), that struck a particular nerve with President Obama. He responded: "We cannot have a society in which some dictator someplace can start imposing censorship in the United States."28 A threshold had been crossed. American retaliation was swift and impressive. On December 22, 2014, North Korea's internet went dark for half a day. Two weeks later, by executive order, Obama slapped sanctions on several North Korean organizations and individuals.29 Sony's response to the North Korean hack—not the hack itself—turned the incident into a violation of U.S. freedom of expression, bumping the episode into the significant cyber incident category and inviting U.S. cyber retaliation. And the Russian manipulation and release of DNC data—but, again, not the hack itself—was similarly interpreted as a threat to the U.S. electoral and democratic processes. In both cases, the United States understood these two attacks as if they had been directed against critical U.S. national infrastructure. Retaliation followed suit. But in the Chinese case, the OPM hack was interpreted differently, as an example of legitimate espionage and statecraft. "You have to kind of salute the Chinese for what they did," Director of National Intelligence Clapper explained. The collective American response in this particular case was to strong-arm and otherwise compel Chinese President Xi Jinping to join Obama in a bilateral accord to curb economic (though not political) cyber espionage. The agreement was reached in September 2015, during President Xi's visit to the United States.30 In sum, while the United States is attempting to develop and communicate cybersecurity norms that will bolster its deterrent posture and ultimately better protect its cyber assets, the process will take time to mature and harden. In the meantime, contradictory messaging—in terms of classifying types of attacks and the nature of American response—may altogether weaken coercive processes along the way.

Back to Trump: Toward a comprehensive cyber doctrine

Up until the night of the U.S. election on November 8, 2016, many cybersecurity analysts—the author included—assumed that President Hillary Clinton would follow her predecessor’s strategic trajectory in solidifying American cyber red lines for protecting national infrastructure. It was further assumed that Madame President would retaliate, firmly, against Russia for its meddling in the U.S. election, a veiled threat both Obama and former Vice President Joe Biden issued in the weeks leading to the election and one Obama ultimately carried out, in some form, on December 29.31 But Trump’s surprise electoral victory complicates this scenario a great deal. Not only has Trump repeatedly rebuffed U.S. intelligence assessments concerning Russia’s culpability, but it also appears that his electoral victory itself may have been at least a byproduct of Russia’s cyber-influence operation. As of June 2017, several officials and individuals close to the Trump administration, including Michael Flynn, Trump’s disgraced National Security Advisor, Paul Manafort, a high-level Trump adviser, Jared Kushner, Trump’s son-in-law turned key advisor, and Attorney General Jeff Sessions, are suspected of having played a role in the evolving imbroglio.32 That will cast a shadow on Trump’s presidency, and may ultimately dictate the way the U.S. government, under Trump’s leadership, comes to engage cybersecurity issues in the coming years. General Mattis, during his January 2017 Senate confirmation hearing, argued, in responding to a question on cyber deterrence, that it was “important that [American] adversaries know what we will absolutely not tolerate. And by making that clear, you are less apt to have somebody stumble into a situation where, now, we are forced to take action.”33 Doctrine leads to credibility. And credibility, to deterrence. Perhaps with Mattis’s logic in mind, President Trump signed a long-delayed cybersecurity executive order on May 11, 2017. (An earlier draft of the order, ready in January 2017, was scrapped.) Above all else, the executive order lays out a months-long plan to review and assess all elements of cybersecurity from across the U.S. government.
In terms of deterrence specifically, the order calls upon various agencies and departments to jointly submit, within 90 days (i.e., August 2017), a report “on the Nation’s strategic options for deterring adversaries and better protecting the American people from cyber threats.”34 Developing a coherent American cyber deterrence doctrine may be a necessity, but a range of hurdles and pitfalls nonetheless present themselves. Developing, communicating, and upholding red lines—Mattis’s suggestion—is but one requirement. The United States will also have to grapple with the effects coercion in one domain (physical or digital) might have on an adversary’s behavior and motivation in another. It will have to streamline how the United States, and its various complementary agencies and departments, responds to different forms of cyber aggression in a systematic and unified manner. The United States will further need to improve its ability to deny adversaries access to sensitive cyber platforms, establish more rapid awareness of when its cyber defenses are being tested and breached, and construct greater attribution capabilities that together provide Washington with an ability to respond to cyber attacks in a timely and appropriate manner. The United States will also have to find ways to build cyber deterrence into its existing alliance structures.35 American cyber deterrence strategies will also have to be flexible enough to adapt to threats while making use of opportunities that emerging technologies, including those linked to the Internet of Things, blockchain—a digital, distributed ledger useful for recording and sharing data—and artificial intelligence, introduce in the coming years and decades. Finally, the United States will have to develop and issue clear guidelines for how it plans to address different forms of cyber aggression, from political and economic espionage, to subversion, and to attacks that result in kinetic effect, conducted by different forms of cyber aggressors, ranging from hostile governments and antagonistic state-based actors, to international terrorists and crime syndicates, to thrill-seeking saboteurs and miscreants. Getting deterrence right in cyberspace requires a comprehensive approach to cyber deterrence in practice that is rooted in the principles, prerequisites, and paradoxes of deterrence theory. Piecemeal approaches and haphazard responses to cyber aggression—the norm to date—only serve to undermine, weaken, and confuse U.S. cyber deterrence in the long run.

Notes

1. Michael Shear and Julie Hirschfeld Davis, “Trump, on YouTube, Pledges to Create Jobs,” New York Times, November 21, 2016. 2. Donald J. Trump, “Cybersecurity,” Campaign Website, 2016, https://www.donaldjtrump.com/policies/cyber-security (accessed December 2016). 3. The literature on deterrence is expansive. A selection of the books and volumes that highlight the field—both old and new—include: Glenn Snyder, Deterrence and Defense: Toward a Theory of National Security (Princeton, NJ: Princeton University Press, 1961); Thomas Schelling, Arms and Influence (New Haven: Yale University Press, 1966);

Alexander George and Richard Smoke, Deterrence in American Foreign Policy: Theory and Practice (New York: Columbia University Press, 1974); Patrick Morgan, Deterrence: A Conceptual Analysis (Beverly Hills, CA: Sage Publications, 1977); Robert Jervis, Richard Ned Lebow, and Janice Gross Stein, Psychology and Deterrence (Baltimore: Johns Hopkins University Press, 1985); Keith Payne, Deterrence in the Second Nuclear Age (Lexington: University of Kentucky Press, 1996); Robert Art and Patrick Cronin, eds., The United States and Coercive Diplomacy (Washington, DC: U.S. Institute of Peace, 2003); Patrick Morgan, Deterrence Now (Cambridge: Cambridge University Press, 2003); Lawrence Freedman, Deterrence (Malden, MA: Polity Press, 2004); T. V. Paul, Patrick M. Morgan, and James J. Wirtz, eds., Complex Deterrence: Strategy in the Global Age (Chicago: University of Chicago Press, 2009); Martin Libicki, Cyberdeterrence and Cyberwar (Washington, DC: RAND, 2009); Martin Libicki, Conquest in Cyberspace (Cambridge, UK: Cambridge University Press, 2007); Benjamin Sutherland, ed., Modern Warfare, Intelligence, and Deterrence (Hoboken, NJ: John Wiley & Sons, 2011); Andreas Wenger and Alex Wilner, eds., Deterring Terrorism: Theory and Practice (Stanford, CA: Stanford University Press, 2012); P. W. Singer and Allan Friedman, Cybersecurity and Cyberwar (Oxford: Oxford University Press, 2014); Alex Wilner, Deterring Rational Fanatics (Philadelphia: University of Pennsylvania Press, 2015); Brian Mazanec and Bradley Thayer, Deterring Cyber Warfare (New York, NY: Palgrave, 2015); Anne-Marie Slaughter, The Chessboard and the Web: Strategies of Connection in a Networked World (New Haven: Yale University Press, 2017); and Robert Mandel, Optimizing Cyberdeterrence: A Comprehensive Strategy for Preventing Foreign Cyberattacks (Washington, DC: Georgetown University Press, 2017). 4.
Alex Wilner, “Contemporary Deterrence Theory and Counterterrorism: A Bridge Too Far?” New York University Journal of International Law and Politics 47 (2015): 451–452. 5. Other forms of deterrence, like extended deterrence and triadic deterrence, add a third necessary player—the protégé, proxy, or ally. Alex Wilner, “The Dark Side of Extended Deterrence: Thinking through the State Sponsorship of Terrorism,” Journal of Strategic Studies 40, no. 1 (2017): 4–5; Franklin D. Kramer, Robert J. Butler, and Catherine Lotrionte, “Cyber, Extended Deterrence, and NATO,” Atlantic Council Issue Brief (May 2016): 5–6; Boaz Atzili and Wendy Pearlman, “Triadic Deterrence: Coercing Strength, Beaten by Weakness,” Security Studies 21, no. 2 (2012): 302–305. 6. In a 2017 report on cyber deterrence, the U.S. Department of Defense’s Defense Science Board rebranded deterrence by punishment as “deterrence by cost imposition.” Different title; roughly the same definition. U.S. Department of Defense, Defense Science Board, Task Force on Cyber Deterrence (Washington, DC: Author, February 2017), 3. 7. Government of the United States, The National Infrastructure Protection Plan: Partnering for Critical Infrastructure Security and Resilience (Washington, DC: Author, 2013); The White House, “Presidential Policy Directive: Critical Infrastructure Security and Resilience,” Presidential Policy Directive/PPD-21 (Washington, DC: Author, February 2013). 8. Benjamin Wittes and Gabriella Blum, Future of Violence: Germs, Hackers, and Drones (New York: Basic Books, 2015), 17–44; Cheryl Pellerin, “DARPA: Autonomous Bug-Hunting Bots Will Lead to Improved Cybersecurity,” DOD News, August 7, 2016; and John Markoff and Matthew Rosenberg, “China’s Intelligent Weaponry Gets Smarter,” New York Times, February 3, 2017. 9. Symantec, Internet Security Threat Report, 21 (April 2016): 8–9. 10. U.S.
Senate Confirmation Hearing, Advanced Policy Questions for James Mattis, January 12, 2017; CSPAN, Defense Secretary Nominee General James Mattis, Archived Videos, published January 15, 2017, https://archive.org/details/CSPAN_20170115_201400_Defense_Secretary_Nominee_General_James_Mattis_Says_Russia_Trying_to_Break…/start/2700/end/2760. 11. Jon Lindsay, “Stuxnet and the Limits of Cyber Warfare,” Security Studies 22, no. 3 (2013): 379–380.

12. Brendan Koerner, “Inside the Cyberattack that Shocked the US Government,” Wired, October 23, 2016. 13. Ellen Nakashima and Adam Goldman, “CIA Pulled Officers from Beijing after Breach of Federal Personnel Records,” Washington Post, September 29, 2015. 14. For the latest cyber iteration of cumulative deterrence, see Uri Tor, “‘Cumulative Deterrence’ as a New Paradigm for Cyber Deterrence,” Journal of Strategic Studies 40, no. 1 (2017): 92–117. 15. Ellen Nakashima, “CIA Web site Hacked,” Washington Post, June 15, 2011. 16. Amir Lupovici, “Cyber Warfare and Deterrence: Trends and Challenges in Research,” Military and Strategic Affairs 3, no. 3 (2011): 52. 17. Defense Science Board, Task Force, 13–14. 18. Joseph S. Nye, Jr., “Deterrence and Dissuasion in Cyberspace,” International Security 41, no. 3 (2016/17), 50. 19. David Clark and Susan Landau, “Untangling Attribution,” Harvard Law School National Security Journal 2 (2011), http://harvardnsj.org/wp-content/uploads/2011/02/Vol-2-Clark-Landau.pdf; Jon Lindsay, “Tipping the Scales: The Attribution Problem and the Feasibility of Deterrence against Cyberattack,” Journal of Cybersecurity 1, no. 1 (2015): 56; Thomas Rid and Ben Buchanan, “Attributing Cyber Attacks,” Journal of Strategic Studies 38, nos. 1/2 (2015): 5–8. 20. The White House, “Presidential Policy Directive: United States Cyber Incident Coordination,” July 26, 2016. 21. U.S. National Intelligence Council, “Assessing Russian Activities and Intentions in Recent US Elections,” Intelligence Community Assessment, January 6, 2017; Matthew Rosenberg et al., “Obama Administration Rushed to Preserve Intelligence of Russian Election Hacking,” New York Times, March 1, 2017. 22. Ibid., ii.

23. Washington Post, “Full Transcript: FBI Director James Comey Testifies on Russian Interference in 2016 Election,” March 20, 2017. 24. David Sanger, “US Wrestles with How to Fight Back against Cyberattacks,” New York Times, July 30, 2016. 25. Eric Lipton, David Sanger, and Scott Shane, “The Perfect Weapon: How Russian Cyberpower Invaded the US,” New York Times, December 13, 2016; Eric Lipton, “How We Identified the D.N.C. Hack’s ‘Patient Zero,’” New York Times, December 20, 2016. 26. U.S. Department of the Treasury, “Issuance of Amended Executive Order 13694; Cyber-Related Sanctions Designations,” December 2016; Neil MacFarquhar, “Vladimir Putin Won’t Expel US Diplomats as Russian Foreign Minister Urged,” New York Times, December 30, 2016. 27. Sanger, “US Wrestles”; Mike Levine, “China is ‘Leading Suspect’ in Massive Hack of US Government Networks,” ABC News, June 25, 2016. 28. The White House, “Remarks by the President in Year-End Press Conference,” December 19, 2014. 29. BBC News, “Sony Cyber-Attack: North Korea Faces New US Sanctions,” January 3, 2015. 30. Julie Hirschfeld Davis and David Sanger, “Obama and Xi Jinping of China Agree to Steps on Cybertheft,” New York Times, September 25, 2015; David Sanger, “Chinese Curb Cyberattacks on US Interests, Report Finds,” New York Times, June 20, 2016. 31. Lipton et al., “The Perfect Weapon.” 32. Maggie Haberman et al., “Kushner Is Said to Have Discussed a Secret Channel to Talk to Russia,” New York Times, May 26, 2017. 33. CSPAN, archive videos, 2017. 34. The White House, Presidential Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, May 11, 2017. 35. Following allegations of Russian interference in the U.S. election, France, The Netherlands, Germany, Britain, and Norway all later fingered Russia for similarly meddling in their own national and domestic affairs.
Andrew Higgins, “Fake News, Fake Ukrainians: How a Group of Russians Tilted a Dutch Vote,” New York Times, February 16, 2017; Melissa Eddy, “After a Cyberattack, Germany Fears Election Disruption,” New York Times, December 8, 2016; Rachel Donadio, “Why the Macron Hacking Attack Landed with a Thud in France,” New York Times, May 8, 2017.

Notes on contributor

Alex Wilner ([email protected]) is an assistant professor of international affairs at the Norman Paterson School of International Affairs (NPSIA), Carleton University, Ottawa, Canada. He teaches graduate classes on intelligence, international affairs, terrorism, and strategic foresight. His books include Deterring Rational Fanatics (Philadelphia: University of Pennsylvania Press, 2015) and Deterring Terrorism: Theory and Practice, edited with Andreas Wenger (Stanford, CA: Stanford University Press, 2012), and he has published articles in International Security, NYU Journal of International Law and Politics, Security Studies, Journal of Strategic Studies, Comparative Strategy, and Studies in Conflict and Terrorism. Prior to joining NPSIA, Professor Wilner held a variety of positions at Policy Horizons Canada (Government of Canada), the Munk School of Global Affairs at the University of Toronto, the National Consortium for the Study of Terrorism and Responses to Terrorism (START) at the University of Maryland, and the ETH Zurich, Switzerland. He received a prestigious SSHRC Insight Development Grant in 2016 from the Government of Canada to explore cyber deterrence at the state and non-state level. This article stems from Prof. Wilner’s 2016 SSHRC Grant.