
Breaking the Myth of Cyber Doom: Securitization and Normalization of Novel Threats

Miguel Alberto Gomez and Christopher Whyte

The prospect of physical damage resulting from cyber operations continues to reinforce the “cyber doom” narrative across societies dependent on information and communication technologies. This is paradoxical given the absence of severe, lasting consequences from cyber operations and the relative restraint exercised by cyber-capable actors. Moreover, the mass adoption of vulnerable digital systems raises questions about whether individuals’ dread of cyber insecurity is as severe as we are often asked to believe. Employing a survey experiment, we find that the assumptions of the "cyber doom" narrative are misleading. While sensitivity to cybersecurity threats is shaped by negative information, the onset of panic and dread is not a given. The impact of novel environmental circumstances on opinion formation is shaped by individuals’ embeddedness in modern digital society. Consequently, long-term exposure to any invasive development mitigates the emotional response associated with it, normalizing novel threats over time. We present evidence suggesting that the unique characteristics of a development (i.e., web-technology proliferation) matter in opinion formation, as sensitivity to digital threats to the polity is grounded in personal threat sensitivity. Thus, policymakers can expect to see public responses to new national security threats manifest through the lens of prevailing social and political narratives.

1. Introduction

According to some, advancements in technological and organizational capabilities among capable state and state-affiliated actors over the past decade increase the likelihood that offensive cyber operations (OCOs) might soon produce destructive physical effects (Healey 2016; Saltzman 2013). Expectations of real-world damage inflicted through cyberspace reinforce the “cyber doom” narrative that digital insecurity might result in a massive failure of social and economic processes across societies dependent on new information technologies and that dread of such failure permeates public perspectives on cyber issues (Hansen and Nissenbaum 2009).

For those who study public opinion surrounding foreign policymaking, the “cyber doom” type of narrative is not especially uncommon. Environmental circumstances of sufficient visibility and meaning, such as the trauma-inducing experiences of 9/11 or the Cold War, often take on a life of their own and affect opinion formation independently of individuals’ priors or the cues of elites. Oddly, however, the oft-referenced notion of “cyber doom” appears paradoxical, even in light of the link some scholars make between OCOs and physical effects.

Presently, conflict in cyberspace is characterized by persistent-yet-limited effects and a condition of apparent restraint exercised by cyber-capable actors (Fischerkeller and Harknett 2018a; Maness and Valeriano 2016). Moreover, the unabated integration of vulnerable information systems across all aspects of modern societies raises the question of whether the sense of dread associated with the exploitation of cyberspace is as severe as commonly portrayed (Jarvis, Macdonald, and Whiting 2017). Most damningly, the idea that “cyber doom” figures in Western national experiences chiefly as a fear appeal employed by politicians to galvanize support for policy in no way explains these curious logical shortcomings.

At the heart of the “cyber doom” narrative is the assumption that information about cyber attacks released to the public – particularly information about sophisticated cyber operations of foreign countries and organized crime1 – produces anxiety about a person’s digital health and security. Ironically, this assumed relationship between cyber operations, their portrayal in public-facing media, and individual impact is discussed by scholars in far less clear terms than is the determining role of techno-strategic conditions.

Despite the recent turn by some to consider the societal impact of OCOs (Lin and Kerr 2017; Whyte 2020; Lindsay 2020), most research on cyber conflict continues to emphasize logic-of-the-domain explanations for the behavior of cyber-capable actors.2 This makes a certain amount of sense because the domain is human-made and malleable. However, it is also puzzling given the scope of cyberspace and the degree to which digital action impacts both private industry and civil society across numerous levels. Though the mechanisms of interaction may be less precise than is the case with other forms of state power, the literature on public opinion, morale, and psychology in foreign policymaking tells us that popular perceptions of threat are shaped by a host of factors that then impact the formulation and implementation of state security policy.

In this article, we take aim at the “cyber doom” narrative logic as an initial step towards clarifying the relationship between cyber conflict, its portrayal, and public thinking about digital insecurity. Consequently, we align with critics of the narrative itself but argue that such criticisms make overly simplistic assumptions about public opinion and national security that do little to enrich and undergird evolving cyber conflict research. The logic of the core argument about digital disaster aside, the broader “cyber doom” argument – i.e. that the rhetorical and cognitive prospect of doom has some effect on a population – is undertheorized and understudied. This point is particularly important because scholarship aimed at explaining the sources of state public policy on cyberspace makes the curious misstep of holding domestic population preferences constant whilst focusing on third image determinants of strategy development. Authors argue that publics cyclically react with some fear to emergent threats and that, therefore, public policy is best explained by the

1 Though the “cyber doom” narrative generally refers to major digital disruptions in various forms, the most common uses of the term to either describe citizen reactions or a particular communication strategy used by elites to trigger popular responses invariably involve major intentional attacks on national infrastructure by dedicated, sophisticated foreign attackers (i.e. the now-generally-debunked “cyberwar” scenario). As such, we discuss the narrative primarily in reference to OCOs prosecuted by capable, politically motivated competitors going forward as a proxy for the full spectrum of cyber disruption that might be assumed to cause the presumed onset of dread.
2 As suggested in numerous recent works, such as Kreps and Schneider (2019).

incidence of cyber conflict or steps taken by state peers. Given that such assumptions are clearly far from safe on the merits, this article aims to ascertain whether negativity among the general public associated with malicious behavior in cyberspace is as salient as broadly claimed. We add evidence to the argument that the “cyber doom” narrative is unrealistic (Lawson 2013) by showing that the assumptions found therein are misleading.

Our study finds that sensitivity to cybersecurity threats is situationally shaped by exposure to negative reporting but that the onset of associated dread is not a given. Instead, it is influenced by expectations of the role of technology in modern society. Respondents dependent on such technologies are not as prone to negative affect as those less deeply embedded in the fabric of digital society. Both groups become more sensitive to cybersecurity threats to their person as the information they consume becomes more negative, and this personal concern facilitates a heightened sensitivity to threats to the polity. However, the dread predicted by the “cyber doom” narrative is only weakly predictive of this dynamic and has no effect on the threat sensitivity of those who do not respond emotionally. Finally, in both cases, heightened concern for society is not a clear result of negative information so much as it is the result of initial sensitivity to threats at the personal level.

Consequently, we make two contributions. First, we show that the impact of novel environmental circumstances on individual opinion formation is shaped by issue embeddedness, suggesting that long-term exposure to any invasive development mitigates the affective response it is associated with. Second, we present evidence suggesting nevertheless that the unique characteristics of such a development matter in opinion formation, as sensitivity to digital threats to the polity is clearly premised on personal threat sensitivity.

These findings suggest that “cyber doom” is not only strategically and functionally unrealistic, but that the effects of the idea’s securitization are also minimal and prone to diminishment over time. In doing so, they speak to the broad research program on public opinion and audience dynamics in foreign policymaking. More specifically, in line with recent work (Kertzer and Zeitzoff 2017), our study suggests that citizens are far more capable than often assumed of responding to threat stimuli absent elite cues. Significantly, our work joins research that locates responsiveness to policy issues in the interaction of cognitive priors and social context with incoming information about new events. Judgment is rarely as linear as the “cyber doom” narrative suggests in its linking of negative reporting, fearful response, and sensitivity to threat inflation. Instead, individuals are conditioned by social circumstances such that even novel threats are incorporated into the horizon of issues the public encounters.

The remainder of this article is divided into four sections. The first introduces the theoretical framework supporting the underlying claims investigated and adopts existing frameworks to account for the effects of

continued exposure to cyber threats. The second discusses the experimental design employed. Though common across political science research, the methodology has only recently been employed in cybersecurity and cyber conflict scholarship in response to (1) difficulties related to obtaining observational data and (2) the growing interest in individual-level behavior as impactful in digital affairs (Gross, Canetti, and Vashdi 2017; Gomez 2019b; Jensen and Valeriano 2019). This is followed by the presentation and analysis of results. Finally, an in-depth discussion is offered that further develops the theoretical and policy implications of the findings. These are not limited to the validity of the core “cyber doom” narrative and the general disposition of non-elites towards cybersecurity issues. Instead, we contribute to the body of scholarship on public opinion in foreign policymaking and speak to ongoing research linking decision-making and the modern digital information environment.

2. Cyber Effects and Domestic Ripples: The Case Against Cyber Doom

2.1. Proliferating Cyber Threats, Limited Effects and the Enduring Narrative of Doom

State-sponsored OCOs are an increasingly common feature of the international system in the 21st century. In the past decade, incidents caused by these operations have taken a number of forms, and each year has brought a dramatic overall uptick in the number of significant and publicly disclosed incidents. Simultaneously, there has been an immense diversification of those elements of government, industry and civil society targeted; a trend that is apparent even in data that often misrepresents the impact of cyber events on civil society actors (Maschmeyer, Deibert, and Lindsay 2020). Cyber operations increasingly target prominent political figures and organizations in aid of influence campaigns. Moreover, beyond the routine theft of confidential information, malware designed to tamper with the operation of industrial control systems across power generation, factory production, and even dam infrastructure is found with some regularity. Herein lies the broad assertion that physical effects from OCOs seem more likely today than ever before.

Interestingly, even as cyber conflict evolves and becomes commonplace, professional narratives and media representations remain somewhat static (Dunn Cavelty 2013; Gomez and Villar 2018; Dunn Cavelty 2008). Since at least 2007, when distributed denial of service (DDoS) operations targeting Estonia constituted the first major cyber assault on a NATO member state, the potential for catastrophic damage resulting from malicious digital behavior has surfaced as a recurring narrative in Western punditry and popular commentary. Despite such depictions, the growing empirical record suggests the limited efficacy of publicly disclosed OCOs. Only a handful of deployments of malicious code captured to date were intended to cause the physical effects that are so often the substance of doomsayer reporting. Of these, most did not execute.

As an instrument of foreign policy, Iasiello (2013) and others argue that cyber operations often fall short of achieving their objectives despite advances in capabilities. Similarly, the latest version of the Dyadic

Cyber Incident and Dispute Data (DCID), which tracks major interstate cyber conflict incidents between 2000 and 2016, illustrates the limited coercive potential of OCOs by identifying the rarity of concession-generating incidents (Valeriano and Maness 2014). The absence of empirical support for the revolutionary potential of cyber operations encourages criticism of those who oversell the strategic utility of the method (Gartzke 2013; Lindsay 2013; Maness and Valeriano 2016; Borghard and Lonergan 2017; Kostyuk and Zhukov 2017), with the result that the real-world impact of cyber conflict remains somewhat unclear even as the landscape of incidents becomes more densely populated.

This growing skepticism surrounding the exercise of cyber power,3 however, continues to run against how the issue is framed in broader scholarship and in the public eye. Scholars still regularly cite the assumptions of “cyber doom” as they present their work, particularly in technical research (Lawson et al. 2016; Dunn Cavelty 2009). Furthermore, Jarvis, Macdonald, and Whiting (2017) observe that media reporting continues to depict OCOs as existential threats to cyber-dependent societies.

Continued references to this narrative, even among otherwise skeptical researchers, are puzzling. While the six-year period Jarvis et al. (2017) study encompasses some of the most prominent cybersecurity incidents,4 it also permits observation of their limited effects. Cyber operations that reflect the prevalent narrative are typically inaccessible to the majority of state actors (Pytlak and Mitchell 2016). Moreover, there is reason to think that prominent operations that could affect public perceptions of the shape of global cyber conflict should be few and far between. For one thing, the ability to inflict damage on critical infrastructure requires significant financial, scientific, and organizational resources (Slayton 2017). For another, if concessions are to be obtained, sustained pressure is typically required, which is challenging with OCOs relative to other mechanisms of state power (Borghard and Lonergan 2017). Lastly, strategic considerations matter as well. As most interactions in cyberspace are regionally bound and may involve rivalries, restraint is necessary to manage escalatory risk (Valeriano and Maness 2015; Fischerkeller and Harknett 2017). Unlike conventional capabilities, cyber operations and their corresponding tools do not communicate intent well and increase the risk of misperception (Buchanan 2017).

It seems clear that while much conceptual and empirical clarification about cyber conflict exists in scholarly and professional settings, there remains a distorted view of the impact of information about cyber conflict on the average citizen. This is problematic for a range of reasons, perhaps most notably because the diversification of cyber conflict has increasingly brought questions about the nature, integrity, and function of public information environments to the fore of strategic discussions about digital insecurity.

3 Meaning the use of OCOs to achieve strategically favorable effects both in and outside of cyberspace.
4 Such as the denial-of-service campaigns in Estonia (2007) and Georgia (2008), Operation Orchard (2007), the Buckshot Yankee incident (2007) and the Olympic Games campaign resulting in the sabotage of an Iranian uranium enrichment facility (2010).

2.2. The Under-Explored Lateral Effects of Cyber Conflict

While the consequences of cyber operations for political and military elites continue to generate scholarly interest (Macdonald and Schneider 2017), their effects on non-elites remain underexplored. This is paradoxical because the polities most likely affected by OCOs that match the prevalent narrative are liberal-democratic regimes (Hare 2010; Whyte 2016). Their publics are more likely to have first-hand experience with disruptive behavior in cyberspace and are likely to be willing to express a policy preference across a range of digital security issues. This notion makes sense given that OCOs appear to have discernable psychological effects (Gross, Canetti, and Vashdi 2017; Cheung-Blunden and Ju 2016).

The lack of research on how information about cyber operations impacts non-elites (i.e., citizens who are not decision-makers or key stakeholders traditionally associated with cyber conflict) is odd in light of the groundswell of research linking cyber conflict to societal effects. Whyte (2020), for instance, argues that peer competitors might achieve coercive effects in the exercise of cyber power by influencing domestic conditions. Among other effects, the narrow targeting of some domestic actors in a democratic society might tacitly coerce others who observe the initial effect to adjust their behavior to avoid being targeted. Likewise, targeting of processes critical to national function in times of change or crisis may incentivize national leaders to invoke audience costs in laying out more assertive foreign policy responses. Furthermore, OCOs that affect public or industry concerns may also prompt assertiveness on the part of national governments as a way to preempt the calls for retaliation that risk runaway escalation, for instance as a result of hack-back or demands by the population for a cross-domain response (Borghard and Lonergan 2019). In short, the effects of continued exposure to cyber conflict incidents and information about them remain undetermined.

2.3. Support for Cyber Conflict Policymaking: Risk, Disposition and Experience

Support for policies aimed at securing cyberspace is associated with the perceived risk held by the public. While the cybersecurity literature highlights risks associated with increased dependence on cyberspace as the determinant for its securitization, little is said regarding how non-elites utilize this to frame their respective judgments. Fortunately, research in the fields of cognitive psychology, risk management, and public opinion provides mechanisms to understand this phenomenon.

Prospect Theory (Kahneman and Tversky 1979) holds that risk preferences are a function of issue framing. Given a loss frame, individuals engage in risky endeavors to avoid further losses, while those in a gain frame refrain from risky endeavors to protect existing gains. Risks are thus not objectively evaluated: (potential) losses are overweighted relative to (potential) gains. While several studies observe this mechanism across different issue areas (Reinhardt 2017; Ehrlich and Maestas 2010; Mandel 2001), the prominence of framing overemphasizes the role of elites in communicating the issue under

evaluation while limiting the importance of dispositional and experiential attributes of the non-elites with whom these appeals are expected to resonate.
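For readers who want this loss-gain asymmetry stated formally, the canonical value function from the prospect theory literature (a standard formalization drawn from that literature, not reproduced in the works cited here) is:

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \geq 0, \\
-\lambda(-x)^{\beta} & \text{if } x < 0,
\end{cases}
$$

where $0 < \alpha, \beta \leq 1$ capture diminishing sensitivity to gains and losses and $\lambda > 1$ is the loss-aversion coefficient. Tversky and Kahneman's commonly cited estimates ($\alpha \approx \beta \approx 0.88$, $\lambda \approx 2.25$) imply that a loss weighs roughly twice as heavily as a gain of equal magnitude.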

Ehrlich and Maestas (2010) observe that dispositional risk preferences interact with issue framing when it comes to complex policy debates such as free trade. Similarly, Eckles and Schaffner (2011) note that prior risk preferences influence endorsement of military intervention. When primed to consider risk (i.e., the consequences of intervention), risk-averse individuals are more likely to withdraw support than their risk-acceptant counterparts. The importance of these priors extends beyond underlying risk preferences and may include other dispositional traits. Rathbun et al. (2016) illustrate how personal values align with specific policy positions. Rather than holding distinct sets of values that are mutually exclusive across issue areas, their research identifies the presence of a unified value system applied to both personal and polity-wide issues.

Aside from dispositional traits, the experiential aspect of risk also contributes to the formation of specific attitudes. Reinhardt (2017) observes that hypothetical disaster induces stronger reactions among those who have not experienced it first-hand. He argues that a critical difference exists between perception and experiential realities such that “one’s personal experience with a particular hazard pre-disposes the individual to disregard information coming from other sources as unreliable or overblown.” Relatedly, Saunders (2017) demonstrates that experience is a crucial component that affects risk assessment and, consequently, policy preferences among decision-makers. In other words, experience colors judgment.

While the cybersecurity literature appears to validate the importance of the dispositional and experiential dimensions of risk (Kostyuk and Wayne 2020; Gross, Canetti, and Vashdi 2017; Gomez 2019b; Gomez, Valeriano, and Jensen 2019), the former tends to be emphasized in the available literature. Specifically, research on elite decision-making in cyberspace underscores the importance of dispositional traits as a determinant of policy preferences (Schneider 2017). In contrast, the experiential dimension is left understudied. However, its importance in assessing the efficacy of cybersecurity policies should not be ignored. Hansen and Nissenbaum (2009) note that inexperience with cyber issues contributes to the hypersecuritization of cyberspace. Inexperience with cybersecurity incidents may also explain the behavior of individuals under experimental conditions who depend on priors as a heuristic mechanism. For instance, analogical reasoning may be invoked when first-hand experience of an incident is absent (Axelrod 2014; Goldman and Arquilla 2014). This tendency is observed in studies that aim to understand elite behavior in cyberspace (Schneider 2019; Jensen and Valeriano 2019).

To demonstrate the significance of experience in risk assessment, Kostyuk and Wayne (2020) observe that while publics (i.e., non-elites) do not exhibit elevated levels of concern towards cybersecurity, direct exposure to threats moderates risk perception towards greater awareness and support for risk-avoidance.

This finding is supported by Blum, Silver, and Poulin (2014), who note that experience influences risk perception in two ways, altering schemas about safety in one’s environment and biasing an individual’s understanding of life experiences. At best, experience encourages the critical assessment of a situation. At worst, it serves as a basis for evaluating future events irrespective of fit and suitability.

Given the limited effects of cyber operations, it seems fair to argue that publics lack direct experience with the consequences of these operations. What little experience exists is the result of indirect exposure through media reporting. Jarvis, Macdonald, and Whiting (2017) cite the prevalence of the “cyber doom” narrative in media depictions of cybersecurity incidents. This, along with limited domain expertise, inflates perceived risk (Gigerenzer 2006). Moreover, continued consumption of news that sensationalizes the consequences of these incidents increases the possibility of recollection bias that further aggravates misperception and inflates the perceived likelihood of these threats. Consequently, risk is inflated beyond levels projected by experts, promoting negativity bias among individuals who consume this information (Viscusi and Zeckhauser 2017).

Negativity bias is defined as the tendency to overweight negative (i.e., bad) information relative to positive (i.e., good) information. Johnson and Tierney (2019) argue that the strength of this bias is affected by (1) the target of the assessment, (2) the availability of information, and (3) timing, ideology, and agency. When the target of an assessment involves others and the wider world, negative information is overweighted relative to positive information. Inversely, when the target is one’s self, this pattern is reversed (Baumeister et al. 2001). Cyber operations are often framed as a threat to society and the nation as a whole, adversely impacting critical infrastructure and the socio-political processes that society depends on. This increases the possibility of negativity bias in the context of cybersecurity.

The tendency to overvalue negative information, however, is tempered by its availability. Unfortunately, a significant amount of positive information is required to counteract the effects of negative information (Johnson and Tierney 2011). As previously stated, cyberspace is disproportionately portrayed as a threatening environment. Although positive aspects occasionally surface, these are often overshadowed by reports of malicious behavior. Consequently, skewed perceptions are likely to encourage the emergence of negativity bias. Collectively, then, it can be argued that:

H1. Unfavorable narratives about cyberspace increase negative perceptions of cyberspace among the general population.

H2. Societal dependence on cyberspace increases negative perceptions of cyberspace among the general population.

H3. Continued exposure to cybersecurity incidents through the media increases negative perceptions of cyberspace among the general population.

While the former two attributes (i.e., target and information availability) govern the strength and direction of the negativity bias (i.e., positive or negative), the latter influences its manifestations. Of interest for this article is threat sensitivity. One aspect influencing threat sensitivity is the temporality of an event. Adverse events are perceived as increasingly salient as they approach (Rozin and Royzman 2001). For cyberspace, dependence on a vulnerable domain narrows the distance between threats and individuals who depend on the domain for their day-to-day lives (Hansen and Nissenbaum 2009).

Moreover, dependence on cyberspace is a cornerstone with which supporters of the cyber revolution thesis justify the existential threat posed by malicious cyber actors. As Saltzman (2013) asserts, the unique characteristics of cyberspace encourage capable actors to engage in coercion. Moreover, Forsyth and Pope (2014) note that the growing frequency of OCOs induces a demonstration effect that encourages other actors to exploit this domain further, increasing the likelihood of instability. While the continued absence of strategically significant cyber operations tempers these concerns, it has resulted in increased familiarity with, and behavioral expectations of, certain actors.

Valeriano and Maness (2014) note that at least five actors could develop OCOs with potential physical effects.5 Furthermore, such interactions may emerge under broader rivalrous contexts (Valeriano and Maness 2015). This situation raises the possibility that publics, lacking first-hand experience of severe OCOs, may frame expectations using previously observed behavior. This is not unlikely, as constant exposure to negative behavior, especially in a rivalry environment, reinforces pre-existing beliefs (Dreyer 2010). This is empirically demonstrated across cyber and non-cyber incidents (Gomez 2019b; Bar-Joseph and Kruglanski 2003) and validates the argument that individuals gravitate towards threats attributed to a specific actor or group (Morewedge 2009). In the context of cyberspace, growing familiarity with known malicious actors allows observers to ascribe agency to specific groups. This increases the possibility of biased perceptions stemming from constructs, such as enemy images, that inflate perceived threats (Herrmann et al. 1997; Gomez 2019a).

Although the preceding discussion focused exclusively on agency to account for negativity, this does not challenge arguments from Johnson and Tierney (2019) who also recognize the importance of timing and ideology. Instead, this is a constraint imposed on publics due to the availability of information. Given the opaque nature of cyber operations, reports by the media of cybersecurity incidents are sparse on details, often emphasizing outcomes and the identities of the suspected aggressors. While research notes the importance of timing (Axelrod and Iliev 2014; Edwards et al. 2017) and ideological preferences for OCOs (Hare 2010), these details are beyond the reach of investigators and are likely to remain unreported. Consequently, publics that consume the available information are likely to gravitate towards available

5 The United States, Russia, North Korea, Israel, and China.

information (e.g., the suspected actor responsible) and to utilize existing cognitive tools to formulate judgments concerning the incident.

Building on the earlier discussion, continuous malicious behavior by familiar actors may invoke a sense of insecurity among the public, giving rise to the belief that cyberspace is inherently vulnerable and readily exploitable. Consequently, one can argue that cyberspace and the narratives that characterize it facilitate negativity bias among information consumers and increase overall threat sensitivity. Therefore:

H4. An increase in negative perceptions of cyberspace among the general population further inflates threat sensitivity.

3. Research Design

To test the proposed framework, an Internet-based survey experiment is employed.6 The use of experimental designs in cybersecurity scholarship continues to expand given an interest in micro-level attributes and their behavioral implications. Internet-based platforms such as MTurk and Prolific provide access to large participant pools compared to conventional lab environments while delivering comparable results (Casler, Bickel, and Hackett 2013). To recap and clarify the validity of our approach as it speaks to the detail and context of the “cyber doom” narrative, we choose to focus on the public-at-large for three reasons. First, the narrative itself does not differentiate in its expected effects across elite and non-elite populations. Second, even where some existing critics of “cyber doom” might express most concern about the reaction of elites to major digital disruptions, it is the democratic relationship between public dispositions and government policy that enduringly justifies continued focus on the narrative’s assumptions as significant. Finally, discussions of elite reactions to cyber incidents, while certainly worthy of study, are often generic and do not consider the diverse contexts in which specialists encounter information about digital disruption. As such, it is exceedingly difficult to test fundamental assumptions about the psychology of disaster response for elite populations in such a way that generalization is possible. The opposite is not true.

For the experiment, participants enact the role of citizens from the fictitious country of Aldoria participating in a national survey to determine whether or not the government should prioritize cybersecurity. While it would be easy to critique the degree to which the vignette is capable of eliciting the cognitive and emotive processes of interest in this study, a similar challenge may be leveled against any research design that employs fictitious scenarios. As noted by Perla and McGrady (2011), the success of observing these processes is grounded in the ability of the narrative to invoke the suspension of disbelief among its

6 For complete details, please refer to the supplementary materials.

participants. The suspension of disbelief is contingent on the differences between automatic and systematic cognitive processes. The former is associated with adaptive processes that immediately evaluate a given situation with minimal cognitive resources. The latter, in contrast, engages in more effortful cognition. These are akin to the System 1 and System 2 thinking advanced by Kahneman (2011). Consequently, the suspension of disbelief is predicated on the suppression of systematic or System 2 processes. Neurological findings suggest that systematic processes are prompted by stimuli that require real-world action. As such, the plausibility of a fictitious narrative is enough to allow participants to behave in a realistic manner without having to question its validity (Holland 2008; Gerrig 2018). Consequently, the vignette is patterned after real-world events to maintain a degree of mundane realism.

Three (3) randomly assigned treatments are applied: dependence, content, and exposure. A summary of these treatment groups is shown in Figure 2 below. Dependence represents the extent to which Information and Communication Technologies (ICT) is integrated into Aldorian society. Participants in the treatment group are informed that ICT is a cornerstone of the social and economic life in Aldoria. In contrast, those in the control are told that ICT is yet to have socio-economic relevance for Aldoria. This treatment provides the necessary contextual background. Furthermore, dependence is presented such that cyberspace is depicted as a relevant but not necessarily existential aspect of Aldorian society to avoid invoking a “cyber doom” bias.

Figure 1 Sample Headlines

After reading the vignette, participants are informed that they will be shown headlines that provide them with a snapshot of the cybersecurity environment faced by Aldoria7. Both content and exposure are applied during this phase of the experiment. Content refers to the tone of the headlines. Those in the treatment group are exposed to adverse cybersecurity incidents while the control is shown headlines with neutral content. As mentioned in the previous section, first-hand experience with the consequences of cyber operations is mostly indirect for publics. The use of negative headlines attempts to replicate this in an experimental setting (see Figure 1).

Finally, exposure represents the number of headlines shown to participants. Those in the treatment group are shown ten headlines, while the control is shown only five. These values were derived from the average number of publicly disclosed cyber operations in the DCID dataset. The decision to utilize five headlines as the control is an attempt to maintain mundane realism within the experiment. As the DCID dataset is

7 Adapted from real-world cybersecurity news headlines.

constructed using publicly available sources (i.e., media reports), it is likely that participants would encounter a similar number of headlines involving cybersecurity. Doubling this value to ten to serve as the treatment reflects findings in the literature on cognitive and emotional desensitization, wherein constant exposure reduces the perceived “unexpectedness” of an incident and tempers individuals’ reactions to it given repeated experiences (Funk et al. 2004; Nussio 2020). Noting that it is difficult, if not impossible, to predict the growth of such incidents, the decision to double the average value was adopted.

Figure 2 Treatment Group Summary

Besides these treatments, the experiment incorporates additional covariates that include trust in computing systems (trust), preferences in cybersecurity accountability (responsibility), domain expertise (knowledge), exposure to cybercrime (crime), and risk preference (risk). The relationship between these covariates and the dependent variables is discussed in a succeeding subsection.

3.1. Dependent Variables

The above treatments are expected to affect the extent to which participants negatively perceive cyberspace, which, in turn, influences their corresponding threat sensitivity. These are represented by the negativity, threat.self, and threat.polity variables. Negativity is constructed using self-reported measures of valence and arousal, while

threat.self and threat.polity represent shifts in the self-reported measures of threat sensitivity at the level of the individual and the broader polity.8

Valence and arousal are captured using the Affective Slider (AS) developed by Betella and Verschure (2016). The Affective Slider employs emoticons to help participants report valence and arousal. To measure valence, participants move a slider towards one of two emoticons that best represents their feeling towards a given headline. Values range from 0.000 to 1.000, with those closer to zero representing greater negative valence. Similarly, the level of arousal is captured when participants move a slider towards one of two emoticons that best represents their interest towards a headline presented to them. Values range from 0.000 to 1.000, with values closer to zero representing a lesser degree of arousal.

To measure negativity among participants, the product of inverted valence and arousal is computed as (1 – valence) x arousal, resulting in an index ranging from 0.000 to 1.000. Values closer to 1.000 represent participants who are both aroused and worried by the headlines presented to them. Values closer to the midpoint (0.500) represent participants that exhibit negative valence towards these headlines but are less aroused. Lastly, values below the midpoint represent participants who do not have a negative view of cyberspace.

Finally, threat sensitivity is measured at the individual and polity level. For the former, participants are asked to evaluate the importance of cybersecurity in their day-to-day lives using a 7-point Likert scale. The latter, in contrast, measures the extent to which the Aldorian government should prioritize cybersecurity. These measures are taken pre- and post-treatment. After this, both threat.self and threat.polity are computed by obtaining the difference between the pre- and post-treatment values. Positive values represent an increase in threat sensitivity, while negative values suggest a decrease. A caveat to this approach is that it does not permit tracing the formation of threat sensitivity incrementally.
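To make these constructions concrete, the short sketch below recomputes all three dependent variables from hypothetical raw responses. The column names and values are illustrative assumptions and are not drawn from the study's replication materials.

```python
import pandas as pd

# Hypothetical raw responses: Affective Slider values in [0, 1] and
# pre-/post-treatment 7-point Likert ratings of cybersecurity importance.
df = pd.DataFrame({
    "valence":     [0.20, 0.65, 0.40],  # 0 = most negative feeling
    "arousal":     [0.80, 0.30, 0.55],  # 0 = least aroused
    "self_pre":    [4, 5, 3],           # personal importance, pre-treatment
    "self_post":   [6, 5, 4],
    "polity_pre":  [5, 4, 4],           # government priority, pre-treatment
    "polity_post": [6, 4, 5],
})

# Negativity index: (1 - valence) * arousal, bounded between 0 and 1.
# Values near 1 indicate participants who are both worried and aroused.
df["negativity"] = (1 - df["valence"]) * df["arousal"]

# Threat-sensitivity shifts: post- minus pre-treatment ratings, so
# positive values indicate heightened sensitivity after treatment.
df["threat_self"] = df["self_post"] - df["self_pre"]
df["threat_polity"] = df["polity_post"] - df["polity_pre"]

print(df[["negativity", "threat_self", "threat_polity"]])
```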

3.2 Covariates

To account for alternative explanations for the observed outcome, the design incorporates measures for relevant covariates. These include trust in computing systems (trust), cybersecurity accountability (responsibility), domain expertise (knowledge), exposure to cybercrime (crime), and risk preference (risk). These are selected based on their influence on participant behavior identified in other studies (Kostyuk and Wayne 2020; Gomez 2019a).

Trust in computing systems may moderate threat sensitivity and is measured using the instrument developed by Shaft, Sharfman, and Wu (2004). This consists of eight positive and negative word-pairs that describe

8 The interaction between affect and threat sensitivity speaks to an increasingly active research program in which negative emotions (e.g., anger) in response to cybersecurity incidents are seen to facilitate the willingness of publics to endorse policy responses such as retaliation (Shandler et al. 2020; Backhaus et al. 2020).

computer use. Participants move a slider towards one of two words that best fits their conception of computing technology. Negative descriptions have values closer to one, while positive assessments are closer to seven. Trust in computing systems is derived by taking the mean value across the eight pairs and scaling it between 0.000 (little to no trust) and 1.000 (significant to complete trust).

Taking into consideration the influence of beliefs, the assignment of responsibility for cybersecurity is equally relevant. A preference for off-loading accountability to government agencies may result in reduced threat sensitivity at a personal level due to trust in government. Consequently, participants are presented with three different scenarios representing typical instances of cyber operations (Valeriano, Jensen, and Maness 2018). For each, participants are asked to indicate whether the incident should be addressed by the government or by private industry by moving a slider in the direction of either of these two entities on a seven-point Likert scale. Greater government responsibility is represented by values closer to one, while greater private sector accountability is represented by values closer to seven. The indicator is computed by taking the mean value across the three scenarios and scaling this between 0.000 (greater government responsibility) and 1.000 (greater private sector responsibility).

Domain expertise (knowledge) and exposure to cybercrime (crime) may directly affect negativity and threat sensitivity. Knowledge is measured using a questionnaire that captures familiarity with cybersecurity incidents and concepts. The indicator is computed by summing the number of correct answers and dividing this value by six (i.e., the number of questions). Scores closer to 0.000 represent limited knowledge, while those closer to 1.000 represent above-average knowledge. Relatedly, crime is measured using a three-item questionnaire that ascertains whether participants have experienced cybercrime. This consists of three questions with binary (Yes/No) responses. All answers in the affirmative are summed and divided by three. Scores closer to 1.000 represent higher victimization.

Risk preference (risk) may impact perceptions of negativity and threat sensitivity. Risk is measured using a seven-item questionnaire developed by Kam and Simas (2010). Participants either indicate their preferred course of action for a given situation or verify the extent to which a statement best describes themselves. Participants respond using a seven-point Likert scale from one to seven. The indicator is computed by taking the mean value across the items and scaling this between 0.000 and 1.000, with values closer to the former indicating risk aversion and those approaching the latter indicating risk acceptance.
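All five covariates follow the same scoring pattern: aggregate the item responses, then rescale to the unit interval. A minimal sketch, with item values invented for illustration:

```python
def scale_unit(value, lo=1, hi=7):
    """Rescale a Likert-type score from [lo, hi] to [0, 1]."""
    return (value - lo) / (hi - lo)

# Hypothetical item-level responses for a single participant.
trust_items = [6, 5, 7, 6, 5, 6, 7, 5]  # eight word-pairs, scored 1-7
resp_items = [3, 5, 4]                  # three scenarios, scored 1-7
quiz_correct = 4                        # correct answers out of six
crime_yes = 1                           # affirmative answers out of three
risk_items = [3, 2, 4, 3, 3, 2, 4]      # seven items, scored 1-7

# Mean of the word-pairs, rescaled; higher values = greater trust.
trust = scale_unit(sum(trust_items) / len(trust_items))

# Mean across scenarios; 0 = government, 1 = private sector responsibility.
responsibility = scale_unit(sum(resp_items) / len(resp_items))

# Proportions of correct answers and of affirmative cybercrime items.
knowledge = quiz_correct / 6
crime = crime_yes / 3

# Mean of the risk items, rescaled; higher values = more risk-acceptant.
risk = scale_unit(sum(risk_items) / len(risk_items))

print(trust, responsibility, knowledge, crime, risk)
```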

3.3. Selection Criteria

Although Internet-based experiments typically feature a larger participant pool, their monetary benefits may skew the balance in favor of certain demographic groups. To ensure consistency among respondents with respect to culture, political sensitivities, and technological usage, the experiment is offered exclusively to nationals of the United States or the United Kingdom. These criteria aim to reduce variance due to

underlying perceptions towards cybersecurity. To ensure comprehension, only individuals fluent in English and without literacy difficulties (e.g., dyslexia) are invited to participate, as the instrument is currently only available as text written in English.

To address concerns raised with Internet-based experiments, namely that of attention and naivete, only individuals with an approval rate of 100% and those who have not participated in previous studies conducted by the authors are recruited. Unlike lab environments where enforcing attention is easier, Internet-based participants are exposed to more distractions. While the experiment contains attention checks that measure participant focus, recruitment is also contingent on their approval rates. In this case, only those whose work was consistently accepted on Prolific are invited to participate. The growing popularity of Internet-based experiments also raises questions of participant naivete. As these tasks serve as an attractive source of income, individuals may already be familiar with treatments used in a variety of experiments. Consequently, only participants who have not previously participated in studies conducted by the authors are recruited.

4. Analysis

In August 2019, 1,055 individuals were recruited to participate. After pre-processing, a total of 1,016 samples were deemed suitable for further analysis. The majority are citizens of the United Kingdom (88.09%), with the remaining participants being U.S. nationals (11.91%). Given the imbalance between the two nationalities, the U.S. population is omitted from this analysis, resulting in a final sample size of 895. Doing so results in a more homogenous dataset but requires replication studies to rule out the effects of cultural confounders.

The participants reflect a mean age of 37 years and consist predominantly of women (69.5%). The reported annual income ranges from 10,000 to 59,000 USD, which is to be expected with the sample. The majority of participants possess either a high school (43.24%) or a bachelor’s degree (39.23%) in areas of study not directly related to either political science or computer science.

With respect to the covariates, participants hold a positive view of computing technology – unsurprising given their presence on this platform. The mean of trust is 0.764 (SD = 0.143). However, while trust is high, domain expertise is noticeably low, with a mean of only 0.367 (SD = 0.244). Relatedly, experience with cybercrime (crime) is also limited, given a mean of 0.344 (SD = 0.284). This combination of limited expertise and indirect experience with malicious actors in cyberspace may account for the reported level of trust. It may also explain the observed responsibility mean of 0.491 (SD = 0.206), which suggests a balanced view and may signify the importance of context when it comes to assigning responsibility. Finally, participants appear to be risk-averse, with a mean of 0.418 (SD = 0.168) for risk.

For the assignment of the treatments, the 2 x 2 x 2 design yields eight unique treatment groups with an average size of 111 samples. Balance tests rule out the undue influence of the covariates and signify the success of randomization.
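The form of these balance tests is not specified in the text; one common approach, sketched here on simulated data purely as an assumption, is to test each covariate across the eight cells of the 2 x 2 x 2 design:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n = 895  # final sample size reported above

# Simulated random assignment to the 2 x 2 x 2 design, plus one covariate
# drawn with the moments reported in the text (mean 0.764, SD 0.143).
df = pd.DataFrame({
    "dependence": rng.integers(0, 2, n),
    "content": rng.integers(0, 2, n),
    "exposure": rng.integers(0, 2, n),
    "trust": rng.normal(0.764, 0.143, n),
})

# One-way ANOVA of the covariate across the eight treatment cells; a
# non-significant F statistic is consistent with successful randomization.
cells = [g["trust"].values
         for _, g in df.groupby(["dependence", "content", "exposure"])]
f_stat, p_value = stats.f_oneway(*cells)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```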

For the outcome variables, negativity has a mean of 0.303 (SD = 0.146) for the sample as a whole, reflecting the absence of a strongly negative view of cyberspace and its associated threats. With regards to shifts in threat perception for the individual and the broader polity, participants do not exhibit large changes in response to the treatments, with the former shifting by an average of 0.380 (SD = 1.117) and the latter by 0.326 (SD = 1.009). Both are positively and significantly correlated with one another (0.458; p = 0.000).

4.1. Causal Analysis

Mediation analysis is conducted using the specifications shown in Figure 3 (primary model) to test the proposed framework. Readers should note that a second model (alternative model) expressing an alternative explanation is also specified. Whereas the primary model asserts that threat sensitivity at the individual and polity level is a function of negativity, its effects at the polity level may instead be referenced against individual-level assessments (see Figure 4) (Ehrlich and Maestas 2010).

Figure 3 Primary Model Causal Path


Figure 4 Alternative Model Causal Path
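The estimation procedure behind the two models is not spelled out in the text. The sketch below illustrates one standard product-of-coefficients approach to the primary model using ordinary least squares on synthetic data; the simulated coefficients only loosely echo those reported, and a full analysis would add the covariates and bootstrapped inference for the indirect effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 895

# Synthetic stand-in for the experimental dataset.
df = pd.DataFrame({
    "dependence": rng.integers(0, 2, n),
    "content": rng.integers(0, 2, n),
    "exposure": rng.integers(0, 2, n),
})
df["negativity"] = 0.24 + 0.12 * df["content"] + rng.normal(0, 0.10, n)
df["threat_self"] = (0.60 * df["negativity"] + 0.23 * df["content"]
                     - 0.45 * df["dependence"] + rng.normal(0, 1.0, n))
df["threat_polity"] = (0.49 * df["negativity"] + 0.34 * df["content"]
                       - 0.36 * df["dependence"] + rng.normal(0, 1.0, n))

# Path a: treatments -> mediator (negativity).
med = smf.ols("negativity ~ dependence + content + exposure", data=df).fit()

# Path b and direct effects: mediator and treatments -> outcomes.
out_self = smf.ols(
    "threat_self ~ negativity + dependence + content + exposure", data=df).fit()
out_polity = smf.ols(
    "threat_polity ~ negativity + dependence + content + exposure", data=df).fit()

# Product-of-coefficients estimate of the indirect effect of content on
# personal threat sensitivity, acting through negativity.
indirect = med.params["content"] * out_self.params["negativity"]
print(f"Indirect effect of content via negativity: {indirect:.3f}")
```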

Figure 5 illustrates the coefficients obtained for the primary model and shows that only content exhibits a statistically significant effect on negativity at the 0.05 level, increasing it by 0.120. While both dependence (-0.012) and exposure (-0.011) function inversely to content, these are not statistically significant. Concerning threat sensitivity, both threat.self and threat.polity grow in response to an increase in negativity, the former by 0.596 (p = 0.051) and the latter by 0.489 (p = 0.040). Furthermore, content and dependence have direct effects on threat.self and threat.polity. Content increases threat.self by 0.226 (p = 0.005) and threat.polity by 0.336 (p = 0.000). In contrast, dependence reduces them by 0.447 (p = 0.000) and 0.359 (p = 0.000), respectively.

Figure 6 illustrates the coefficients obtained for the alternative model. As with the previous specification, only content exhibits a statistically significant effect on negativity, increasing it by 0.120 (p = 0.000). Concerning threat sensitivity, an increase in negativity corresponds to an increase in threat.self of approximately 0.544 (p = 0.030). As with the primary model, direct effects from both content and dependence are observed for threat.self. The former results in an increase of 0.294 (p = 0.000) while the latter leads to a decrease of 0.404 (p = 0.000). Unlike the primary model, a slight mediation effect exists, with content acting through negativity to influence threat.self (0.065, p = 0.031).

As suspected, threat.self serves as a reference for threat.polity, with an increase in the former resulting in a corresponding increase of 0.970 (p = 0.000) in the latter. Moreover, content is also found to have an effect on threat.polity as it is mediated through threat.self (0.063, p = 0.026). As suggested by Ehrlich and Maestas (2010), support for public policies is not dependent solely on framing by elites but may also depend on individual-level assessments.


Figure 5 Primary Model Mediation Analysis9

Figure 6 Alternative Model Mediation Analysis

5. Discussion

In much research, public opinion about foreign policy is treated as a process inherently driven by the actions and dispositions of elites. However, public discourse and personal expression often do not reflect the cues offered by elites, even in cases like that of the “cyber doom” narrative, where non-elite opinion is often assumed to be a useful tool of elite politicking. Though specialized thinking on foreign policy issues may be necessary to the function of government and the discovery of prudent equilibria in political discussions, the limited ability of the median voter to analyze such issues does not imply an inability to respond absent elite cues. Even where issues are multi-faceted and complex, prior perspective and issue experience gained from environmental osmosis can drive unique modalities of situational response.

9 Solid lines indicate statistical significance.

Our work here joins a growing body of research that locates the response of individuals towards the full gamut of public policy issues in social and societal context. Individuals’ cognitive orientations and issue-specific knowledge affect how different types of incoming information about the world around them are interpreted. As Kertzer and Zeitzoff (2017) and others have argued, the result of such contextual judgment is often a process of decision-making that defies conventional, linear assumptions about the relationship between strategic events and public effects.

Our results further support the assertion that societal context not only impacts individual affect regarding cyber threats but also works to detach such affect from individuals’ abilities to assess and prioritize threats. In other words, the feeling of dread predicted by the pervasive “cyber doom” narrative is indeed produced by exposure to negative portrayals of cyber conflict, and that negative sentiment is linked to more acute threat perception. However, dread is not a determinant of a person’s sensitivity to the threat itself. This is particularly significant given that our findings show that sensitivity to threats at a personal level drives sensitivity to threats for the wider polity. As we see it, the primary implication of the “cyber doom” narrative – that cyber events induce dread and that such dread can drive public support for assertive policy response – holds no water.

By contrast, we see that threat sensitivity is situationally shaped by exposure to negative reporting but that the onset of associated dread is influenced by expectations of the role of ICT in modern society. Those who acknowledge the normalcy of societal reliance on ICT for all manner of sociopolitical and economic activity are less affectively impacted by negative representations of cyber operations but still become more sensitive to the threat in line with variation in the information portrayed. There are numerous implications of this dynamic, not least of which is that the public may be far more prudent in assessing cyber threat issues on the merits of information appearing in media than previously assumed. However, the dynamic also suggests that specific demographics may be particularly prone to threat inflation and fear appeal messaging.

5.1. Societal Dependence and Cyber Threat Assessment

Perhaps the most substantial finding identified is the palliative effect of dependence. This contradicts the commonly held assumption that increasing societal integration of ICT is an underlying cause of anxiety with respect to threats from cyberspace. However, it would be disingenuous to suggest that this finding is especially surprising given the existing sociological and psychological literature on risk.

Before proceeding further, it should be noted that perceptions of societal dependence on cyberspace as discussed in this article do not pertain to an objective assessment of the domain’s importance. Given the limited domain expertise (knowledge) observed in this experiment and others (Kostyuk and Wayne 2020; Shandler, Gross, and Canetti 2021), it is likely that perceptions of dependence among participants and the

broader public stem from a subjective evaluation of the environment facilitated by readily accessible information (Jarvis, Macdonald, and Whiting 2017). Consequently, this may result in an exaggerated belief in the degree to which modern society is dependent on ICT.

As noted by the late Charles Perrow (1984), “no matter how effective conventional safety devices are, there is a form of accident that is inevitable.” The growing complexity of cyberspace introduces vulnerabilities that are both unwanted and inevitable. For instance, it is estimated that software that does not adhere to secure development practices is likely to exhibit between 15 and 50 flaws per 1,000 lines of code (Mayer 2012); by that estimate, even a modest application of 100,000 lines could contain 1,500 to 5,000 flaws.

Far from technologically deterministic assertions, it is worth acknowledging that this gives rise to a sense of normalcy in the everyday consideration of technology by the median citizen. Self-reported levels of trust in computing technology observed in the experiment reflect this. Between the dependence treatment and control groups, the difference in trust is not statistically significant (p = 0.304). Consequently, one cannot argue that this predisposition explains the observed effect of the dependence treatment. Instead, reminding participants of such a dependency may reinforce an ingrained sense of normalcy that tempers perceived negativity towards the environment. This is especially salient for the population sampled, as these individuals willingly participate in this environment.

This argument is also linked with the concept of control in the risk literature. Earlier forays into this topic note that an individual’s ability to mitigate the consequences of risk reduces negative perceptions (Nordgren, Van Der Pligt, and Van Harreveld 2007). However, a distinction must be made between voluntary exposure to risks (i.e., the volition to engage in risky activity) and command over outcomes (i.e., control over the consequences). Volition increases perceived risk, while control over outcomes minimizes it. Taking into consideration the pervasiveness of these technologies, voluntary exposure to cyber threats becomes less tenable as we increasingly depend on varying manifestations of this domain. Moreover, campaigns that promote safe practices online introduce a sense of control that further mitigates perceived risk and, in turn, affects our corresponding threat perception. Consequently, dependence anchors participant judgments within the experimental scenario to real-world experiences and expectations of control, resulting in the observed outcome.

Apart from perceptions of control, domain expertise also accounts for this sanguine approach to cybersecurity. The literature supports the possibility of individuals remaining oblivious to the implications of technologies they depend on. Kostyuk and Wayne (2020) observe that familiarity with safe online practices remains quite low. Building on this argument, Huang et al. (2011) find that both domain knowledge and first-hand experience contribute to perceptions of security in cyberspace. This is reinforced by Kostyuk and Wayne, who observe that first-hand experience is likely to encourage greater awareness of the

20 consequences of these incidents. However, neither domain knowledge nor experience explains the observed outcomes. Knowledge (p = 0.574) and crime (p = 0.798) are statistically indistinguishable between the dependence treatment and control. Consequently, one cannot argue that a positive perception of cyberspace is linked to greater familiarity with the inner workings of the domain and the consequences of malicious behavior. The possibility that the observed behavior is a result of the normalization of the threat across the sample seems the most viable explanation.

5.2. Individual- and Polity-Level Differences

A defining feature of the above experiment is that it measures not only individual perceptions of threat but also their influence at the polity level. Specifically, the results speak to the existence of mechanisms that drive loss aversion among participants. While loss-aversion research focuses on impacts at the level of the individual, the consequences of cyber operations have clear social implications that extend beyond it. Moreover, as social interactions become increasingly dependent on these interconnected systems, the effects of malicious behavior emerge as cascading consequences that affect the wider polity.

Recently, Osmundsen and Petersen (2020) proposed two logics to explain how these assessments travel across levels: empathy and dominance. Under a logic of empathy, individual assessments are applied at the societal level based on the importance of the common good. High-need individuals10 operating under this logic reason that the well-being of the wider society guarantees the security of their own needs and thus merits their support. In contrast, a logic of dominance holds that low-need individuals are likely to support policies that safeguard their social status, while those in the periphery (i.e., high-need individuals) are less likely to offer their endorsement given their existing priorities.

Although Osmundsen and Petersen experimentally demonstrate that a logic of dominance is at work in the context of both health and economic threats, this is not the case in the present experiment. For the logic of dominance to function, different groups utilizing cyberspace must enjoy disproportionate benefits before the emergence of a threat. While such disparities exist outside the experimental scenario given persistent digital divides, the scenario itself makes no such distinctions. Where high dependence on ICT is presented, participants are informed that a “blanket 3% income tax cut” is granted to all citizens. Similarly, where dependence is low, there is no mention of any single group enjoying the benefits of digitization. Given this portrayal, it is unlikely that support for policies securing cyberspace is grounded in the need to maintain some form of social status. Instead, the logic of empathy explains the link between individual- and polity-level assessments.

10 Socially or economically disadvantaged individuals and groups.

Although no specific measures for empathy and level of need are provided, the statistically significant effect of threat.self on threat.polity suggests a process whereby the former serves as a reference for the latter. The direction and magnitude of these indicators, coupled with the uniform distribution of benefits in the scenario, reinforce the existence of this altruistic mechanism.
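The relationship described here could be estimated with a simple regression of polity-level threat perception on personal threat perception. The article reports only that the effect is significant; the OLS specification below is an assumption, and the data file is hypothetical (the column renaming reflects that formula syntax cannot use the dotted labels threat.self and threat.polity directly):

```python
# Sketch of the individual-to-polity relationship: regressing polity-level
# threat perception on personal threat perception. The OLS specification
# and data file are assumptions, not the article's reported model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment.csv")  # hypothetical data file
# Rename the article's dotted variable labels for use in a patsy formula.
df = df.rename(columns={"threat.self": "threat_self",
                        "threat.polity": "threat_polity"})

model = smf.ols("threat_polity ~ threat_self", data=df).fit()
print(model.summary())  # a positive, significant slope on threat_self is
                        # consistent with personal threat anchoring
                        # polity-level judgments
```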

5.3. Domestic and International Implications

As discussed earlier, the normalcy of threats in cyberspace and the benefits resulting from digitalization influence threat sensitivity at both the individual and polity levels. This suggests that cyberspace and its associated insecurity are not as novel as routinely portrayed. That is to say, individuals appear to have accepted the risks associated with continued investment in cyberspace in exchange for the benefits that it provides. While this tempers the “cyber doom” narrative, it brings its own set of implications for both domestic and international behavior.

At the domestic level, alarmist narratives appear to be an ineffective means of strengthening cybersecurity initiatives. However, where the benefits offered by these technologies are unevenly distributed, fear appeals may still succeed in winning support from groups that fear losing those benefits. Consequently, one could argue that the efficacy of fear-based appeals is inversely related to how evenly the benefits of cyberspace are distributed. That is to say, as the digital divide closes, individuals become less likely to support calls for cybersecurity that tap underlying fears of being disadvantaged. Instead, support is rooted in the need to advance policies that recognize the interdependencies among individuals that contribute to the shared benefits offered by the domain.

At the international level, this sense of normalcy calls into question the efficacy of coercive cyber operations. For the most part, popular depictions of coercion highlight the potential utility of punishment and risk strategies against adversaries. While recent scholarship argues that these are problematic given systemic and technical realities (Slayton 2017; Borghard and Lonergan 2017), the underlying attitudes of publics further mitigate their efficacy. If the consequences of a vulnerable cyberspace are internalized alongside a pervasive sense of control rooted in best practices, instilling fear in hopes of gaining concessions becomes even more challenging, and the need to inflict first- and/or second-order effects becomes increasingly problematic. While advances in offensive capabilities have been observed over the past decade, these have yet to yield strategically viable results (Healey 2016). Moreover, given the apparent restraint exercised by capable actors, this interaction of systemic, technological, and socio-psychological factors moderates perceptions of the coercive utility of cyber operations and further weakens fear appeals.

5.4. Implications for Political Elites

The experimental design, specifically its recruitment strategy, is oriented towards understanding non-elite behavior. However, most cybersecurity research involving threat perception addresses how elites are expected to respond to salient developments within cyberspace (Liff 2012; Gartzke and Lindsay 2015; Fischerkeller and Harknett 2018b). Consequently, the degree to which these findings speak to elite perceptions and preferences depends on (1) psychological differences between the two populations, (2) the extent to which information is available and processed, and (3) the degree of influence that elites exert in shaping public opinion.

First, setting aside access to additional information, there is no reason to assume that cognitive and affective differences exist between elites and non-elites. A meta-analysis of 162 paired treatments from paired experiments, together with elite and public opinion data spanning 43 years, finds that perceived differences are overstated (Kertzer 2020). At most, these populations differ in the magnitude, but not the direction, of their behavior.

Second, while it is fair to argue that elites have greater access to information, this does not guarantee either expertise or comprehension. As Hansen and Nissenbaum (2009) observe, limited expertise among elites contributes to the hypersecuritization of cyberspace. Given the socio-cognitive constraints imposed upon these individuals (Kruglanski and Webster 1996), a lack of expertise may result in threat inflation that colors both individual judgement and collective policy-making (Schneider 2017).

Third, the degree to which elites shape public opinion needs further consideration. While several authors note the importance of elite signaling, preference formation by individuals and groups must also be considered (Kertzer and Zeitzoff 2017). If the latter is found to have greater influence in shaping public opinion, then escalatory narratives from elites may matter less than how publics perceive these threats.

6. Conclusions

In attempting to address digital insecurity and fashion effective security regimes, policymakers and strategic planners must begin with the reality that cyberspace is an environmental feature of 21st century society, not a gimmick or novelty. Specifically, policymaking based on the idea that cybersecurity and related phenomena will endure as exceptions to normal societal operation would be fundamentally flawed. For instance, fear appeals grounded in the idea of “cyber doom” would be an ineffective means of eliciting support for cybersecurity initiatives or broader public and foreign policy actions, at least outside of specific demographics. Rather, researchers and practitioners must increasingly be aware that environmental perturbations, even on the scale of the global proliferation of Internet technologies, have a shelf life in terms of their intrinsic affective impact. This stands in some contrast to the assumption of securitization theory that the effectiveness of instrumental threat exaggeration is not, absent some exogenous shock, substantially affected by the nature of the threat itself. Here, while the continued integration of cyberspace into our social, economic, and political lives carries risk, the consequences appear to have been internalized, leading to a sense of normalcy and control. This process has produced publics that, while cognizant of the consequences of malicious behavior, are willing to accept them as the cost of the benefits enjoyed.

Contrary to the negativity that is often the norm, this dynamic encourages a degree of optimism about digital challenges still in their infancy at the time of writing. Much has been written about fake news borne and spread via the Internet (Woolley and Howard 2018), particularly in the context of new artificial intelligence-aided methods and techniques for disinformation such as the deepfake. Likewise, there is growing concern about the societal impacts that may follow as existing methods for securing information are rapidly made obsolete by advancing processing capabilities, prospectively leading to widespread compromise of personal data in what one scholar has labeled the “quantum cryptocalypse” (Lindsay 2020). Even if such developments do trigger emotional reactions across large elements of national populations (Backhaus et al. 2020), the effect is not likely to last indefinitely. Just as some political scientists have shown that individuals are remarkably capable assessors even without specialized knowledge or relevant elite cues, this study shows that individuals adapt to environmental conditions. While major traumatic or otherwise meaningful national developments have clear effects on the public’s outlook, those effects are neither cumulative nor permanently debilitating.

The results of this study do not suggest that ongoing attempts to institute safe cybersecurity practices and resolve digital insecurity in 21st century democracies are doomed to failure. Rather, they call attention to the need to abandon the recurring narrative that emphasizes the consequences of malicious behavior. In its place, appeals by elites to publics should emphasize the collective benefits enjoyed through participation in a safe and secure digital environment. At the same time, greater effort should be spent on bridging existing digital divides to ensure that such benefits are enjoyed as broadly as possible while mitigating the indifference that may emerge from disenfranchised sectors. Naturally, this suggested approach is not a novel one. Organizations such as ASEAN, for instance, tightly link economic progress to that of a safe(r) cyberspace (ASEAN 2015). However, the quest for an informed public must recognize that messaging intended to empower national audiences is often engulfed by tides of dystopian narratives that appeal readily to our innate tendency to attend to negative information about unfamiliar threats. Greater emphasis on articulating the faults in such narratives is a necessary part of messaging that successfully mitigates threat sensationalism and ensures that citizens are able to form preferences free of knee-jerk emotional reactions.

References

ASEAN. 2015. "ASEAN ICT Masterplan 2020." ASEAN. Accessed 12.05. https://www.trc.gov.kh/wp-content/uploads/2016/10/1.pdf.
Axelrod, Robert. 2014. "A Repertory of Cyber Analogies." Cyber Analogies.
Axelrod, Robert, and Rumen Iliev. 2014. "Timing of cyber conflict." Proceedings of the National Academy of Sciences 111 (4): 1298-1303.
Backhaus, Sophia, Michael Gross, Israel Waismel-Manor, Hagit Cohen, and Daphna Canetti. 2020. "A cyberterrorism effect? Emotional reactions to lethal attacks on critical infrastructure." Cyberpsychology, Behavior, and Social Networking 23 (9): 595-603.
Bar-Joseph, Uri, and Arie W. Kruglanski. 2003. "Intelligence failure and need for cognitive closure: On the psychology of the Yom Kippur surprise." Political Psychology 24 (1): 75-99. https://doi.org/10.1111/0162-895x.00317.
Baumeister, Roy F., Ellen Bratslavsky, Catrin Finkenauer, and Kathleen D. Vohs. 2001. "Bad is stronger than good." Review of General Psychology 5 (4): 323-370.
Betella, Alberto, and Paul Verschure. 2016. "The affective slider: A digital self-assessment scale for the measurement of human emotions." PloS One 11 (2): e0148037.
Blum, Scott C., Roxane Cohen Silver, and Michael J. Poulin. 2014. "Perceiving risk in a dangerous world: Associations between life experiences and risk perceptions." Social Cognition 32 (3): 297-314.
Borghard, Erica D., and Shawn W. Lonergan. 2017. "The Logic of Coercion in Cyberspace." Security Studies 26 (3): 452-481.
Borghard, Erica D., and Shawn W. Lonergan. 2019. "Cyber operations as imperfect tools of escalation." Strategic Studies Quarterly 13 (1): 122-145.
Buchanan, Ben. 2017. The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations. London: Hurst & Company.
Casler, Krista, Lydia Bickel, and Elizabeth Hackett. 2013. "Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing." Computers in Human Behavior 29 (6): 2156-2160.
Cheung-Blunden, Violet, and Jiarun R. Ju. 2016. "Anxiety as a Barrier to Information Processing in the Event of a Cyberattack." Political Psychology 37 (3): 387-400. https://doi.org/10.1111/pops.12264.
Dreyer, David R. 2010. "Issue conflict accumulation and the dynamics of strategic rivalry." International Studies Quarterly 54 (3): 779-795.
Dunn Cavelty, Myriam. 2008. Cyber-Security and Threat Politics: US Efforts to Secure the Information Age. New York: Routledge.
Dunn Cavelty, Myriam. 2009. "National Security and the Internet: Distributed Security through Distributed Responsibility." International Studies Review 11 (1): 214-218.
Dunn Cavelty, Myriam. 2013. "From Cyber-Bombs to Political Fallout: Threat Representations with an Impact in the Cyber-Security Discourse." International Studies Review 15 (1): 105-122.
Eckles, David L., and Brian F. Schaffner. 2011. "Risk tolerance and support for potential military interventions." Public Opinion Quarterly 75 (3): 533-544.
Edwards, Benjamin, Alexander Furnas, Stephanie Forrest, and Robert Axelrod. 2017. "Strategic aspects of cyberattack, attribution, and blame." Proceedings of the National Academy of Sciences: 201700442.
Ehrlich, Sean, and Cherie Maestras. 2010. "Risk orientation, risk exposure, and policy opinions: The case of free trade." Political Psychology 31 (5): 657-684.
Fischerkeller, Michael P., and Richard J. Harknett. 2017. "Deterrence is not a credible strategy for cyberspace." Orbis 61 (3): 381-393.

Fischerkeller, Michael P., and Richard J. Harknett. 2018a. "Persistent Engagement and Tacit Bargaining: A Path Toward Constructing Norms in Cyberspace." Lawfare (blog). 09.11.2018. https://www.lawfareblog.com/persistent-engagement-and-tacit-bargaining-path-toward-constructing-norms-cyberspace.
Fischerkeller, Michael P., and Richard J. Harknett. 2018b. "Persistent Engagement, Agreed Competition, Cyberspace Interaction Dynamics and Escalation." Orbis (Summer 2017) 61 (3): 381-393.
Forsyth Jr, James Wood, and Billy E. Pope. 2014. "Structural Causes and Cyber Effects: Why International Order is Inevitable in Cyberspace." Strategic Studies Quarterly 8 (4).
Funk, Jeanne B., Heidi Bechtoldt Baldacci, Tracie Pasold, and Jennifer Baumgardner. 2004. "Violence exposure in real-life, video games, television, movies, and the internet: is there desensitization?" Journal of Adolescence 27 (1): 23-39.
Gartzke, Erik. 2013. "The myth of cyberwar: bringing war in cyberspace back down to earth." International Security 38 (2): 41-73.
Gartzke, Erik, and Jon R. Lindsay. 2015. "Weaving Tangled Webs: Offense, Defense, and Deception in Cyberspace." Security Studies 24 (2): 316-348.
Gerrig, Richard. 2018. Experiencing Narrative Worlds. Routledge.
Gigerenzer, Gerd. 2006. "Out of the frying pan into the fire: Behavioral reactions to terrorist attacks." Risk Analysis 26 (2): 347-351. https://doi.org/10.1111/j.1539-6924.2006.00753.x.
Goldman, Emily, and John Arquilla. 2014. Cyber Analogies. Monterey: Naval Postgraduate School.
Gomez, Miguel Alberto. 2019a. "Past behavior and future judgements: seizing and freezing in response to cyber operations." Journal of Cybersecurity 5 (1): tyz012.
Gomez, Miguel Alberto. 2019b. "Sound the alarm! Updating beliefs and degradative cyber operations." European Journal of International Security: 1-19.
Gomez, Miguel Alberto, Brandon Valeriano, and Benjamin Jensen. 2019. "Revisionist Actors in Cyberspace: Experimenting with Power Imbalances and Digital Aggression." International Studies Association, Toronto.
Gomez, Miguel Alberto, and Eula Bianca Villar. 2018. "Fear, Uncertainty, and Dread: Cognitive Heuristics and Cyber Threats." Politics and Governance 6 (2): 61-72.
Gross, Michael L., Daphna Canetti, and Dana R. Vashdi. 2017. "Cyberterrorism: its effects on psychological well-being, public confidence and political attitudes." Journal of Cybersecurity 3 (1): 49-58. https://doi.org/10.1093/cybsec/tyw018.
Hansen, Lene, and Helen Nissenbaum. 2009. "Digital Disaster, Cyber Security, and the Copenhagen School." International Studies Quarterly 53 (4): 1155-1175.
Hare, Forrest. 2010. "The Cyber Threat to National Security: Why Can't We Agree?" Conference on Cyber Conflict, Proceedings 2010: 211-225.
Healey, Jason. 2016. "Winning and losing in cyberspace." In 2016 8th International Conference on Cyber Conflict, edited by Nikolaos Pissanidis, Henry Rõigas and Matthijs Veenendaal, 37-49. Tallinn: IEEE.
Herrmann, Richard K., James F. Voss, Tonya Y. E. Schooler, and Joseph Ciarrochi. 1997. "Images in international relations: An experimental test of cognitive schemata." International Studies Quarterly 41 (3): 403-433. https://doi.org/10.1111/0020-8833.00050.
Holland, Norman N. 2008. "Spider-Man? Sure! The neuroscience of suspending disbelief." Interdisciplinary Science Reviews 33 (4): 312-320.

Huang, Ding-Long, Pei-Luen Patrick Rau, Gavriel Salvendy, Fei Gao, and Jia Zhou. 2011. "Factors affecting perception of information security and their impacts on IT adoption and security practices." International Journal of Human-Computer Studies 69 (12): 870-883.
Iasiello, Emilio. 2013. "Cyber attack: A dull tool to shape foreign policy." In 2013 5th International Conference on Cyber Conflict, edited by Karlis Podins, Jan Stinissen and Markus Maybaum, 451-470. Tallinn: IEEE.
Jarvis, Lee, Stuart Macdonald, and Andrew Whiting. 2017. "Unpacking cyberterrorism discourse: Specificity, status, and scale in news media constructions of threat." European Journal of International Security 2 (1): 64-87.
Jensen, Benjamin, and Brandon Valeriano. 2019. "The Cyber Character of Crisis Escalation." International Studies Association Annual Convention, Toronto, 27.03.2019.
Johnson, Dominic, and Dominic Tierney. 2011. "The Rubicon theory of war: how the path to conflict reaches the point of no return." International Security 36 (1): 7-40.
Johnson, Dominic, and Dominic Tierney. 2019. "Bad World: The Negativity Bias in International Politics." International Security 43 (3): 96-140.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. 1st ed. New York: Farrar, Straus and Giroux.
Kahneman, Daniel, and Amos Tversky. 1979. "Prospect theory: An analysis of decision under risk." Econometrica 47 (2): 263-291.
Kam, Cindy D., and Elizabeth N. Simas. 2010. "Risk orientations and policy frames." The Journal of Politics 72 (2): 381-396.
Kertzer, Joshua D. 2020. "Re-Assessing Elite-Public Gaps in Political Behavior." American Journal of Political Science.
Kertzer, Joshua D., and Thomas Zeitzoff. 2017. "A bottom-up theory of public opinion about foreign policy." American Journal of Political Science 61 (3): 543-558.
Kostyuk, Nadiya, and Carly Wayne. 2020. "The Microfoundations of State Cybersecurity: Cyber Risk Perceptions and the Mass Public." Journal of Global Security Studies.
Kostyuk, Nadiya, and Yuri M. Zhukov. 2017. "Invisible Digital Front: Can Cyber Attacks Shape Battlefield Events?" Journal of Conflict Resolution: 0022002717737138.
Kreps, Sarah, and Jacquelyn Schneider. 2019. "Escalation firebreaks in the cyber, conventional, and nuclear domains: moving beyond effects-based logics." Journal of Cybersecurity 5 (1). https://doi.org/10.1093/cybsec/tyz007.
Kruglanski, Arie W., and Donna M. Webster. 1996. "Motivated closing of the mind: 'Seizing' and 'freezing.'" Psychological Review 103 (2): 263.
Lawson, Sean. 2013. "Beyond cyber-doom: Assessing the limits of hypothetical scenarios in the framing of cyber-threats." Journal of Information Technology & Politics 10 (1): 86-103.
Lawson, Sean, Sara K. Yeo, Haoran Yu, and Ethan Greene. 2016. "The cyber-doom effect: The impact of fear appeals in the US cyber security debate." 2016 8th International Conference on Cyber Conflict (CyCon), Tallinn.
Liff, Adam P. 2012. "Cyberwar: a new ‘absolute weapon’? The proliferation of cyberwarfare capabilities and interstate war." Journal of Strategic Studies 35 (3): 401-428.
Lin, Herbert, and Jaclyn Kerr. 2017.
Lindsay, Jon. 2013. "Stuxnet and the Limits of Cyber Warfare." Security Studies 22 (3): 365-404.
Lindsay, Jon. 2020. "Demystifying the Quantum Threat: Infrastructure, Institutions, and Intelligence Advantage." Security Studies 29 (2): 335-361.
Macdonald, Julia, and Jacquelyn Schneider. 2017. "Presidential risk orientation and force employment decisions: The case of unmanned weaponry." Journal of Conflict Resolution 61 (3): 511-536.
Mandel, David R. 2001. "Gain-loss framing and choice: Separating outcome formulations from descriptor formulations." Organizational Behavior and Human Decision Processes 85 (1): 56-76. https://doi.org/10.1006/obhd.2000.2932.

Maness, Ryan C., and Brandon Valeriano. 2016. "The Impact of Cyber Conflict on International Interactions." Armed Forces & Society 42 (2): 301-323.
Maschmeyer, Lennart, Ronald J. Deibert, and Jon R. Lindsay. 2020. "A tale of two cybers - how threat reporting by cybersecurity firms systematically underrepresents threats to civil society." Journal of Information Technology & Politics: 1-20.
Mayer, Dan. 2012. "Ratio of Bugs Per Line of Code." Continuously Deployed (blog). 06.12. https://www.mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio.
Morewedge, Carey K. 2009. "Negativity bias in attribution of external agency." Journal of Experimental Psychology: General 138 (4): 535.
Nordgren, Loran F., Joop Van Der Pligt, and Frenk Van Harreveld. 2007. "Unpacking perceived control in risk perception: The mediating role of anticipated regret." Journal of Behavioral Decision Making 20 (5): 533-544.
Nussio, Enzo. 2020. "Attitudinal and Emotional Consequences of Islamist Terrorism. Evidence from the Berlin Attack." Political Psychology.
Osmundsen, Mathias, and Michael Bang Petersen. 2020. "Framing Political Risks: Individual Differences and Loss Aversion in Personal and Political Situations." Political Psychology 41 (1): 53-70.
Perla, Peter P., and Ed McGrady. 2011. "Why wargaming works." Naval War College Review 64 (3): 111-130.
Perrow, Charles. 1984. Normal Accidents: Living with High-Risk Technologies. Princeton, N.J.: Princeton University Press.
Pytlak, Allison, and George E. Mitchell. 2016. "Power, rivalry, and cyber conflict: an empirical analysis." In Conflict in Cyber Space: Theoretical, Strategic and Legal Perspectives, edited by Karsten Friis and Jens Ringsmose, 65-82. London: Routledge.
Rathbun, Brian C., Joshua D. Kertzer, Jason Reifler, Paul Goren, and Thomas J. Scotto. 2016. "Taking Foreign Policy Personally: Personal Values and Foreign Policy Attitudes." International Studies Quarterly 60 (1): 124-137. https://doi.org/10.1093/isq/sqv012.
Reinhardt, Gina Y. 2017. "Imagining worse than reality: comparing beliefs and intentions between disaster evacuees and survey respondents." Journal of Risk Research 20 (2): 169-194. https://doi.org/10.1080/13669877.2015.1017827.
Rozin, Paul, and Edward B. Royzman. 2001. "Negativity bias, negativity dominance, and contagion." Personality and Social Psychology Review 5 (4): 296-320.
Saltzman, Ilai. 2013. "Cyber posturing and the offense-defense balance." Contemporary Security Policy 34 (1): 40-63.
Saunders, Elizabeth N. 2017. "No Substitute for Experience: Presidents, Advisers, and Information in Group Decision Making." International Organization 71: S219-S247. https://doi.org/10.1017/s002081831600045x.
Schneider, Jacquelyn. 2017.
Schneider, Jacquelyn. 2019. "Persistent Engagement: Foundation, Evolution and Evaluation of a Strategy." Lawfare (blog). 02.07. https://www.lawfareblog.com/persistent-engagement-foundation-evolution-and-evaluation-strategy.
Shaft, Teresa M., Mark P. Sharfman, and Wilfred W. Wu. 2004. "Reliability assessment of the attitude towards computers instrument (ATCI)." Computers in Human Behavior 20 (5): 661-689.
Shandler, Ryan, Michael Gross, Sophia Backhaus, and Daphna Canetti. 2020.
Shandler, Ryan, Michael L. Gross, and Daphna Canetti. 2021. "A fragile public preference for cyber strikes: Evidence from survey experiments in the United States, United Kingdom, and Israel." Contemporary Security Policy: 1-28.

Slayton, Rebecca. 2017. "What Is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment." International Security 41 (3): 72-109.
Valeriano, Brandon, Benjamin Jensen, and Ryan C. Maness. 2018. Cyber Strategy: The Evolving Character of Power and Coercion. New York: Oxford University Press.
Valeriano, Brandon, and Ryan C. Maness. 2014. "The dynamics of cyber conflict between rival antagonists, 2001-11." Journal of Peace Research 51 (3): 347-360.
Valeriano, Brandon, and Ryan C. Maness. 2015. Cyber War Versus Cyber Realities: Cyber Conflict in the International System. Oxford; New York: Oxford University Press.
Viscusi, Kip, and Richard J. Zeckhauser. 2017. "Recollection Bias and Its Underpinnings: Lessons from Terrorism Risk Assessments." Risk Analysis 37 (5): 969-981. https://doi.org/10.1111/risa.12701.
Whyte, Christopher. 2016. "Ending cyber coercion: Computer network attack, exploitation and the case of North Korea." Comparative Strategy.
Whyte, Christopher. 2020. "Poison, Persistence, and Cascade Effects." Strategic Studies Quarterly 14 (4): 18-46.
Woolley, Samuel, and Philip Howard. 2018. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. New York: Oxford University Press.
