Towards A Computational
Unified Homeland Security Strategy:
An Asset Vulnerability Model
by
Richard White
M.S. CS, Old Dominion University, 1990
B.S. History, Southern Illinois University at Edwardsville, 1983
A dissertation submitted to the Graduate Faculty of the
University of Colorado at Colorado Springs
in partial fulfillment of the
requirement for the degree of
Doctor of Philosophy
Department of Computer Science
2013
© Copyright by Richard White 2013
All Rights Reserved
This dissertation for
Doctor of Philosophy Degree by
Richard White
has been approved for the
Department of Computer Science
By
______Dr. C. Edward Chow, Chair
______Dr. Terrance Boult, Co-Chair
______Dr. Xiaobo Zhou
______Dr. Scott Trimboli
______Dr. Stan Supinski
______Date
White, Richard (Ph.D. in Engineering, Focus in Security)
Towards a Computational Unified Homeland Security Strategy:
An Asset Vulnerability Model
Dissertation directed by Professors C. Edward Chow and Terrance Boult
The attacks of September 11, 2001, exposed the vulnerability of critical infrastructure as a means of precipitating domestic catastrophic attack. In the intervening decade, the Department of Homeland Security (DHS) has struggled to develop a coherent infrastructure protection program but has been unable to formulate a risk measure capable of guiding strategic investment decisions. Most risk formulations use a threat-driven approach, which suffers from a dearth of data and thus cannot support robust statistical analysis. This research examines prevailing challenges to propose criteria for developing an adequate strategic risk formulation. Key insights include 1) the viability of an asset-driven approach, 2) reducing threat prediction to threat localization, 3) eschewing complexity for transparency and repeatability, 4) addressing the five phases of emergency management, and 5) capturing the national impact of consequences. Accordingly, an
Asset Vulnerability Model (AVM) is proposed based on these criteria. AVM provides baseline analysis, cost-benefit analysis, and decision support tools compatible with the
DHS Risk Management Framework to 1) convey current risk levels, 2) evaluate alternative protection measures, 3) demonstrate risk reduction across multiple assets, and
4) measure and track improvements over time. AVM capabilities are unique among the twenty-two models compared. AVM risk formulation is predicated on Θ, an attacker’s probability of failure, derived from earlier work in game theory that found a coordinated defense more efficient than an uncoordinated one. This suggests that all means of domestic catastrophic attack should be protected collectively, both critical infrastructure and chemical, biological, radiological, and nuclear stockpiles. This research proposes a national policy framework supporting AVM extension to collectively defend all assets that may precipitate domestic catastrophic attack. This research concludes by using
AVM to evaluate seven alternative risk reduction strategies: 1) Least Cost, 2) Least
Protected, 3) Region Protection, 4) Sector Protection, 5) Highest DTheta (protective gain), 6) Highest Consequence, and 7) Random Protection. AVM simulations indicate that the Highest Consequence strategy is most effective across varying probabilities of attack, attacker perceptions, and different attack models. These simulations demonstrate the computational power of AVM, and how, with an appropriate supporting policy structure, AVM can objectively guide the nation towards a unified homeland security strategy.
In great appreciation for the patience, guidance, and understanding of my Advisory
Committee, family, and friends. This wouldn’t have been possible without your support.
Table of Contents
CHAPTER
I. INTRODUCTION ...... 1
1.1 Motivation and Problem Description ...... 1
1.2 Objectives and Scope ...... 3
1.3 Outline of Dissertation ...... 4
II. DOMESTIC CATASTROPHIC ATTACK ...... 8
2.1 An Unprecedented Threat ...... 8
2.2 Critical Infrastructure Vulnerability ...... 9
2.3 The WMD Threat ...... 13
2.4 The CI Threat ...... 16
2.5 WMD Protection ...... 30
2.6 CI Protection ...... 38
2.7 Assessing Protection Efforts ...... 42
2.8 Risk Analysis ...... 47
2.9 Risk Management ...... 49
2.11 Summary ...... 56
2.12 Contributions...... 56
III. AN ASSET VULNERABILITY MODEL ...... 58
3.1 Overview ...... 58
3.2 Design Criteria ...... 59
3.3 AVM Description...... 70
3.4 AVM Instantiation ...... 82
3.5 AVM Sensitivity Analysis ...... 91
3.6 Model Comparisons ...... 97
3.7 Summary ...... 100
3.8 Contributions...... 101
IV. AVM IMPLEMENTATION ...... 104
4.1 Overview ...... 104
4.2 Strategy Coordination ...... 105
4.3 AVM/RMF ...... 112
4.4 Analyzing Investment Strategies ...... 115
4.5 Summary ...... 144
4.6 Contributions...... 145
V. CONTRIBUTIONS AND FUTURE WORK ...... 146
5.1 Research Contributions ...... 146
5.2 Future Research ...... 148
5.3 Conclusion ...... 150
REFERENCES ...... 152
APPENDICES
A. GLOSSARY ...... 161
B. CI PROTECTION MODELS ...... 166
B.1 BIRR...... 166
B.2 BMI ...... 168
B.3 CAPRA...... 171
B.4 CARVER2 ...... 172
B.5 CIMS ...... 173
B.6 CIPDSS ...... 175
B.7 CIPMA ...... 176
B.8 CommAspen ...... 178
B.9 COUNTERACT ...... 181
B.10 DECRIS ...... 183
B.11 EURACOM ...... 184
B.12 FAIT ...... 185
B.13 MIN ...... 187
B.14 MDM ...... 188
B.15 N-ABLE ...... 189
B.16 NEMO ...... 191
B.17 NSRAM ...... 192
B.18 RAMCAP-Plus ...... 194
B.19 RVA ...... 195
B.20 SRAM...... 196
B.21 RMCIS ...... 197
C. AVM INSTALLATION AND CONFIGURATION...... 199
C.1 AVM Baseline Functional Description ...... 199
C.2 AVM Baseline Execution...... 201
C.3 AVM Strategy Simulation ...... 202
C.4 AVM Simulation Data Analysis ...... 210
C.5 Summary ...... 216
D. AVM STATISTICAL ANALYSIS ...... 218
D.1 Introduction ...... 218
D.2 Data Extraction ...... 218
D.3 Excel Chi-Square Test ...... 219
D.4 Excel Kruskal-Wallis Test ...... 221
D.5 Excel Modified Tukey HSD Analysis ...... 222
D.6 Summary ...... 224
END NOTES ...... 225
List of Tables
Table 2-1: Critical Infrastructure Sectors ...... 17
Table 3-1: Homeland Security Risk Analysis Challenges/Criteria ...... 64
Table 3-2: Domestic Catastrophic Threats (Infrastructure) ...... 66
Table 3-3: Risk Formulation Criteria ...... 70
Table 3-4: AVM Application Components ...... 83
Table 3-5: Asset Record Values and Bounds for Simulated Baseline Analysis ...... 84
Table 3-6: Average Percent Change in Θ with Standard Deviation ...... 96
Table 3-7: Critical Infrastructure Risk Assessment Models ...... 97
Table 3-8: Comparison of CI Models to Risk Criteria ...... 99
Table 4-1: Protective Improvement Investment Strategies ...... 118
Table 4-2: AVM Investment Strategy Simulation Program Models ...... 124
Table 4-3: AVM Investment Strategy Simulation Supporting Batch Files ...... 125
Table 4-4: AVM Investment Strategy Simulation Data Analyses Programs ...... 130
Table 4-5: AVM18 Summary Totals ...... 133
Table 4-6: AVM19 Summary Totals ...... 134
Table 4-7: AVM20 Summary Totals ...... 136
Table 4-8: Results from Kruskal-Wallis Test of Damage Results ...... 137
Table 4-9: Results from Modified Tukey HSD Pairwise Comparisons ...... 138
Table 4-10: Results from Pairwise Comparison of Damage Results ...... 139
Table 4-11: Tabulation of Pairwise Comparison Damage Results ...... 140
Table 4-12: Results from AVM19 Chi-Square Tests ...... 141
Table C-1: CBA Output File Record Format ...... 200
Table C-2: AVM Strategy Simulation Program Files ...... 209
Table C-3: AVM Strategy Simulation Result Files ...... 211
Table C-4: MAP Batch Execution & Output Files ...... 212
Table C-5: AVM Data Sensitivity Analysis Files ...... 213
Table C-6: DSA Batch Execution & Output Files ...... 214
Table C-7: DAM Batch Execution & Output Files ...... 215
List of Figures
Figure 2-1: CWMD Strategy Framework ...... 31
Figure 2-2: DHS Risk Management Framework ...... 39
Figure 3-1: Sandler & Lapan Model ...... 62
Figure 3-2.1: AVM Baseline Analysis Depicting Current Risk Profile ...... 80
Figure 3-2.2: Baseline AVM Data Identifying Assets from Least to Most Vulnerable ...... 80
Figure 3-2.3: Baseline AVM Data Identifying Vulnerabilities by Sector ...... 80
Figure 3-2.4: Baseline AVM Data Identifying Vulnerabilities by Region ...... 80
Figure 3-3.1: Cost-Benefit Analysis Identifying Improvements in Order of Benefit ...... 81
Figure 3-3.2: Cost-Benefit Analysis Identifying Improvements by Cost ...... 81
Figure 3-3.3: Cost-Benefit Analysis Identifying Improvements by Sector ...... 81
Figure 3-3.4: Cost-Benefit Analysis Identifying Improvements by Region ...... 81
Figure 3-3: BAM Sample Output ...... 84
Figure 3-4: PIM Sample Output ...... 87
Figure 3-5: Risk Reduction Worth vs. Θ ...... 93
Figure 3-6.1: Risk Reduction Worth (Interval) for AVM Risk Formulation ...... 94
Figure 3-6.2: Risk Reduction Worth (Ratio) for AVM Risk Formulation ...... 94
Figure 3-6.3: Fussell-Vesely Measure of Importance for AVM Risk Formulation ...... 94
Figure 3-6.4: Sensitivity Analysis for AVM Risk Formulation ...... 94
Figure 3-7: Average Percent Change in Θ with Standard Deviation ...... 95
Figure 4-1: Asset Identification ...... 113
Figure 4-2: AVM Investment Strategy Simulation Program Architecture ...... 118
Figure 4-3: Probable Attack Model with Clauset & Woodard Attack Estimations ...... 122
Figure 4-4: AVM Investment Strategy Simulation for Single Dataset over Ten Years ...... 127
Figure 4-5: AVM Investment Strategy Simulation for Single Asset over Ten Years ...... 128
Figure 4-6.1: AVM18 Cumulative Successful Attacks Across Investment Strategies ...... 131
Figure 4-6.2: AVM18 Cumulative Damages Across Investment Strategies ...... 131
Figure 4-6.3: AVM18 Cumulative Expenditures Across Investment Strategies ...... 131
Figure 4-6.4: AVM18 Cumulative Protective Purchases Across Investment Strategies ...... 131
Figure 4-7.1: AVM19 Cumulative Successful Attacks Across Investment Strategies ...... 134
Figure 4-7.2: AVM19 Cumulative Damages Across Investment Strategies ...... 134
Figure 4-7.3: AVM19 Cumulative Expenditures Across Investment Strategies ...... 134
Figure 4-7.4: AVM19 Cumulative Protective Purchases Across Investment Strategies ...... 134
Figure 4-8.1: AVM20 Cumulative Successful Attacks Across Investment Strategies ...... 135
Figure 4-8.2: AVM20 Cumulative Damages Across Investment Strategies ...... 135
Figure 4-8.3: AVM20 Cumulative Expenditures Across Investment Strategies ...... 135
Figure 4-8.4: AVM20 Cumulative Protection Purchases Across Investment Strategies ...... 135
Figure 4-9.1: AVM18 Data Sensitivity Analysis Across Investment Strategies ...... 136
Figure 4-9.2: AVM19 Data Sensitivity Analysis Across Investment Strategies ...... 136
Figure 4-9.3: AVM20 Data Sensitivity Analysis Across Investment Strategies ...... 136
Figure 4-10.1: AVM18 Comparison of Cumulative Damages Between Strategies ...... 140
Figure 4-10.2: AVM19 Comparison of Cumulative Damages Between Strategies ...... 140
Figure 4-10.3: AVM20 Comparison of Cumulative Damages Between Strategies ...... 140
Figure 4-11: Comparison of Damages Between Attack Models ...... 142
Figure C-1: AVM Baseline Program Architecture ...... 201
Figure C-2: Sample AVM Baseline Execution ...... 201
Figure C-3: AVM Baseline Extension for Investment Strategy Simulation ...... 203
Figure C-4: Extended AVM Sample Execution ...... 203
Figure C-5: Ten-Year AVM Strategy Simulation ...... 205
Figure C-6: Multiple AVM Strategy Simulations over Varying Probabilities ...... 207
CHAPTER 1
INTRODUCTION
1.1 Motivation and Problem Description
The Department of Homeland Security began operation on January 24, 2003. Yet after ten years and $542 billion, it cannot answer Congress's three questions: 1) how safe are we, 2) how much safer can we be, and 3) how much will it cost? DHS was created in the wake of the September 11, 2001 attacks on New York and Washington, DC, which killed nearly 3,000 people and caused over $41.5 billion in damages. On September 11, 2001, nineteen men hijacked four aircraft and turned them into guided missiles, inflicting as much damage as the Imperial Japanese Navy on December 7, 1941. 9/11 exposed the vulnerability of the nation's critical infrastructure to small groups or individuals seeking to inflict catastrophic damage. Prior to 9/11, most experts considered weapons of mass destruction the primary avenue for domestic catastrophic attack. The essential vulnerability of the nation's critical infrastructure is that millions of lives depend on a network that is not fully understood, riddled with weaknesses, and susceptible to malicious tampering. Accordingly, the 2002 Homeland Security Act made critical infrastructure protection a primary mission of the new department. In 2009, DHS
released the National Infrastructure Protection Plan establishing a Risk Management
Framework to 1) catalog critical assets, 2) assess their risk, and 3) prioritize protective investments. Risk is measured and protective improvements prioritized by the risk formula R = f(C, V, T), where C is the consequence of subverting the asset, V is its susceptibility to attack, and T is the probability the asset will be attacked. A number of internal reviews and external audits have since revealed serious flaws in the implementation of the Risk Management Framework, indicating it is fragmented and uncoordinated. Moreover, a 2010 review by the National Research Council concluded that DHS' risk formulation is inadequate for guiding strategic investment decisions. The Government Performance and Results Act of 1993 requires all federal agencies to use performance measures to justify their taxpayer expenditures to Congress. In the absence of a guiding metric, DHS cannot say where we are, where we are going, or even how we will get there.
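The risk formula R = f(C, V, T) described above is commonly instantiated as a simple product of its three factors. The sketch below ranks a few notional assets this way; the multiplicative form is an assumption for illustration, not the DHS formulation, and all asset names and values are invented.

```python
# Hypothetical illustration of the risk formula R = f(C, V, T),
# instantiated multiplicatively. All asset data are invented.

def risk(consequence: float, vulnerability: float, threat: float) -> float:
    """Multiplicative risk: consequence x vulnerability x threat."""
    return consequence * vulnerability * threat

# Notional assets: (consequence in $M, P(attack succeeds), P(attack))
assets = {
    "power substation": (500.0, 0.40, 0.02),
    "water treatment":  (200.0, 0.25, 0.01),
    "rail bridge":      (800.0, 0.10, 0.005),
}

# Rank assets by risk to prioritize protective investment.
ranked = sorted(assets.items(), key=lambda kv: risk(*kv[1]), reverse=True)
for name, (c, v, t) in ranked:
    print(f"{name}: R = {risk(c, v, t):.2f}")
```

Note how strongly the ranking depends on T, the probability of attack: this is precisely the factor the National Research Council found hardest to estimate, which motivates the asset-driven alternative developed later.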
Developing an appropriate metric is essential to guiding homeland security strategy and ensuring efficient allocation of scarce national resources. According to the
2010 National Research Council report, the DHS risk formulation suffers from two well-known problems affecting all threat-driven risk approaches: 1) statistical outliers called
“black swans”, and 2) insufficient historical data. The report goes on to list ten challenges describing an ill-behaved problem whose risk calculations can fluctuate wildly depending on input that is not well understood. The National Research Council report concludes that DHS’ risk formulation and framework are “seriously deficient and in need of major revision” [1, p. 11].
1.2 Objectives and Scope
The purpose of this research is to develop a risk formulation to guide homeland security strategy in making informed national resource investments by 1) indicating current protection status, 2) demonstrating incremental improvement, and 3) assessing associated costs. The research begins by establishing criteria for an appropriate risk measure and its formulation. The chosen metric is Θ, an attacker’s probability of failure suggested by earlier work in game theory by Sandler and Lapan. The proposed risk formulation is derived from criteria suggested in the 2010 National Research Council report assessing current homeland security risk methodologies. An Asset Vulnerability
Model (AVM) is introduced that works with the DHS Risk Management Framework to
1) provide a baseline estimation of critical infrastructure protection status, 2) perform cost-benefit analysis identifying optimum resource investments, and 3) offer decision support tools helping decision makers at all levels make informed choices when investing scarce national resources. As its name implies, AVM is an asset-driven risk methodology, avoiding the problems of the threat-driven approaches employed by most other methodologies. Still, AVM addresses the problem of threat prediction through threat localization by scoping the target set. AVM formulations are transparent and repeatable, employing data that is mostly already available. AVM formulations are comprehensive, capturing the broader effects of a disaster at a national level beyond the immediate damage. AVM decision support tools offer flexible presentation of results to inform decisions at different levels of management. Sensitivity analysis of the AVM risk formulation demonstrates that it is stable.
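The baseline analysis described above can be pictured as a ranking of assets by Θ, the attacker's probability of failure. In the minimal sketch below, each asset simply carries an assumed Θ value in [0, 1]; the actual formulation of Θ is developed in Chapter 3, and all names and numbers here are hypothetical.

```python
# Sketch of an AVM-style baseline: rank assets by Theta, the attacker's
# probability of failure. Theta values are assumed, not computed; the
# real formulation is developed in Chapter 3.

assets = [
    {"name": "dam",        "sector": "Water",  "theta": 0.90},
    {"name": "refinery",   "sector": "Energy", "theta": 0.55},
    {"name": "substation", "sector": "Energy", "theta": 0.35},
]

# Baseline risk profile: least protected (lowest Theta) first,
# mirroring a "least to most vulnerable" presentation.
baseline = sorted(assets, key=lambda a: a["theta"])
for a in baseline:
    print(f'{a["name"]:<10} {a["sector"]:<7} Theta={a["theta"]:.2f}')
```

The same sorted records could equally be grouped by sector or region, which is the flexibility the decision support tools are meant to provide.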
Critical infrastructure protection, though, is only half the homeland security problem; WMD protection is the other half. WMD protection is the responsibility of federal agencies outside the Department of Homeland Security and beyond the current reach of the Risk Management Framework. A coordinated defense among all targets is more efficient than an uncoordinated one. Consequently, a policy framework is proposed that will accommodate combined AVM analysis of CI and WMD risks.
With the advantage of AVM, it becomes possible to evaluate alternative risk investment strategies. Seven possible strategies are examined: 1) Least Cost, 2) Least
Protected, 3) Region Protection, 4) Sector Protection, 5) Highest DTheta (protective gain), 6) Highest Consequence, and 7) Random Protection. These are evaluated against varying probabilities of attack, attacker perceptions, and attack models. Strategy simulations using AVM determined that the Highest Consequence strategy produced the least damage over a ten-year simulation period. These simulations demonstrate the computational power of AVM and how, with an appropriate supporting policy structure, AVM can objectively guide the nation towards a unified homeland security strategy.
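The strategy comparison described above can be sketched as a small simulation loop: each year the chosen strategy picks one asset to harden, then a random attack is resolved against the asset population. Everything below is invented for illustration (the strategy logic, asset data, improvement size, and attack resolution); it conveys the shape of the comparison, not AVM's actual simulation programs.

```python
import random

# Hedged sketch of an investment-strategy comparison. All parameters
# (improvement size 0.1, one attack attempt per year, etc.) are assumed.

def simulate(strategy, assets, years=10, attack_prob=0.5, seed=0):
    rng = random.Random(seed)
    assets = [dict(a) for a in assets]   # work on a copy
    damage = 0.0
    for _ in range(years):
        target = strategy(assets)        # strategy picks one asset to harden
        target["theta"] = min(1.0, target["theta"] + 0.1)
        if rng.random() < attack_prob:   # one attempted attack per year
            victim = rng.choice(assets)
            if rng.random() > victim["theta"]:   # attack succeeds
                damage += victim["consequence"]
    return damage

def highest_consequence(assets):
    return max(assets, key=lambda a: a["consequence"])

def least_protected(assets):
    return min(assets, key=lambda a: a["theta"])

assets = [
    {"name": "A", "theta": 0.3, "consequence": 900.0},
    {"name": "B", "theta": 0.5, "consequence": 100.0},
]
for label, strat in [("Highest Consequence", highest_consequence),
                     ("Least Protected", least_protected)]:
    print(label, simulate(strat, assets))
```

Running many such simulations over varying attack probabilities and attack models, then comparing cumulative damages statistically, is the pattern the dissertation's Chapter 4 analysis follows.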
1.3 Outline of Dissertation
Chapter 2 lays out the case for domestic catastrophic attack. It reviews current literature and primary sources to frame the homeland security threat, assess current protection efforts, and review related research in the area. Chapter 2 begins with the threat of critical infrastructure exposed by the 9/11 attacks. It describes how CI and
WMD may be subverted or employed for domestic catastrophic attack. It then examines federal efforts to protect WMD and CI. An evaluation of DHS' risk management approach to CI protection reveals that it is fragmented, uncoordinated, and inadequate for guiding strategic investment decisions. A review of risk analysis and risk management methodologies reveals the general weakness of the threat-driven approach. Despite much research in terrorism risk modeling since 9/11, most methodologies suffer similar deficiencies or have not yet been successfully adapted to guide strategic decisions.
Chapter 3 introduces the Asset Vulnerability Model for CI protection. It begins by developing design criteria for selection of an appropriate metric and risk formulation.
The key criterion in choosing a metric is the ability to determine acceptable risk. Earlier research in game theory by Sandler and Lapan developed an attacker-defender model which determined that the optimum defense strategy is to protect all targets equally, not necessarily maximally. The key determinant in their model was a value they designated θ, an attacker's probability of failure, though they provided no formulation for its computation.
Sandler and Lapan’s θ, however, represented an attacker’s perception. In its place, AVM is a computational model that uses Θ representing the defender’s true knowledge of the probability of attack failure. The switch from θ to Θ was precipitated by risk formulation criteria derived from the 2010 National Research Council review. The 2010 report listed ten challenges to developing a well-behaved risk model. These were evaluated to derive seven bounding criteria to help condition the problem. The seven criteria are summarized as 1) asset-driven approach, 2) threat localization, 3) transparency & reliability, 4) qualified results, 5) comprehensive scope, 6) national impact, and 7) applicable results.
AVM develops a corresponding risk formulation for Θ within these bounding criteria.
AVM works with the DHS Risk Management Framework to provide 1) baseline analysis,
2) cost-benefit analysis, and 3) decision support tools. Baseline analysis produces a risk profile of all critical assets based on Θ. Cost-benefit analysis finds the optimum combination of protective improvement measures for each asset, as determined by their protective improvement benefit (DTheta) and its associated cost. Decision support tools graphically portray the results of baseline and cost-benefit analysis in a flexible manner that best informs resource investment decisions. AVM program models were instantiated and evaluated using simulated data; critical infrastructure data collected by DHS is protected by the 2002 Homeland Security Act, making it exempt even from release under the
Freedom of Information Act. AVM program models and their parameters are described towards the end of this chapter. The chapter concludes by conducting sensitivity analysis on the AVM risk formulation demonstrating its stability.
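The cost-benefit analysis described in this chapter weighs each candidate improvement's protective gain (DTheta) against its cost. A simple way to picture that trade-off is a greedy ordering by gain per unit cost, shown below. This is only an illustrative sketch under a fixed budget; AVM's actual optimization is developed in Chapter 3, and the improvements, gains, and costs here are invented.

```python
# Illustrative cost-benefit ordering: rank candidate protective
# improvements by DTheta gained per dollar, then buy greedily within
# a budget. All figures are hypothetical.

improvements = [
    # (asset, measure, delta_theta, cost in $M)
    ("substation", "perimeter fencing", 0.05,  2.0),
    ("substation", "SCADA hardening",   0.20, 10.0),
    ("refinery",   "blast barriers",    0.10,  8.0),
]

def cost_benefit(items, budget):
    """Greedily select improvements in order of DTheta per unit cost."""
    chosen, spent = [], 0.0
    for item in sorted(items, key=lambda i: i[2] / i[3], reverse=True):
        if spent + item[3] <= budget:
            chosen.append(item)
            spent += item[3]
    return chosen, spent

chosen, spent = cost_benefit(improvements, budget=12.0)
for asset, measure, dtheta, cost in chosen:
    print(f"{asset}: {measure} (+{dtheta:.2f} Theta, ${cost}M)")
```

A greedy ratio ordering is only one possible design choice; an exact optimum over improvement combinations is a knapsack-style problem, which is why presenting the ranked options to a decision maker, as AVM's decision support tools do, is often more useful than a single answer.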
Chapter 4 extends the AVM implementation to address all sources of domestic catastrophic attack, including WMD. AVM works with the DHS Risk Management
Framework to protect critical infrastructure. WMD protection is the responsibility of federal agencies outside the Department of Homeland Security, currently beyond the reach of RMF. However, coordination between federal agencies is directed from the
National Security Council to achieve national objectives specified in the National
Security Strategy. This chapter examines the policy framework needed to bring WMD into the AVM fold. It makes the case that the current homeland security definition and strategy objectives are unnecessarily restricted by their focus on terrorism. Terrorism is but one motive that may precipitate domestic catastrophic attack. It proposes a new homeland security definition and strategy statement to focus attention on the CI and
WMD threats. Additional oversight by the National Security Council will be necessary for this implementation. The chapter concludes by using AVM to examine alternative risk investment strategies, demonstrating how, with an appropriate supporting policy structure, AVM can objectively guide the nation towards a unified homeland security strategy.
Chapter 5 summarizes contributions and suggests areas for future research. This dissertation makes broad contributions to understanding the homeland security threat, developing a viable risk formulation, supporting informed risk management, and objectively directing national efforts towards a unified strategy. While it endeavors to present a comprehensive framework for defining and improving homeland security, some important implementation details remain for future research, including: 1) developing a comprehensive taxonomy of critical infrastructure, 2) assessing the gains of proposed improvement measures, 3) validating results, and 4) accounting for psychological impacts.
CHAPTER 2
DOMESTIC CATASTROPHIC ATTACK
2.1 An Unprecedented Threat
The United States was profoundly changed by the attacks of September 11, 2001.
9/11 instigated the longest war in US history,¹ challenges to our civil liberties,² and the most extensive government reorganization since World War II.³ On September 11, 2001, hijackers exploited elements of the aviation infrastructure to attack the World Trade
Center and the Pentagon, representing the seats of US economic and military power [2, p. 8].
They killed nearly 3,000 people and caused more than $41.5B in damage⁴ [3, p. 2]. On
September 11, 2001, nineteen men inflicted as much damage on the United States as the
Imperial Japanese Navy did on December 7, 1941.⁵ While by no means as threatening as
Japan’s act of war, the 9/11 attack was in some ways more devastating. It was carried out by a tiny group of people, not even enough to fill a platoon, dispatched by an organization based in one of the poorest, most remote, and least industrialized countries on earth [4, pp. 339-340]. Most of the attackers spoke English poorly, some hardly at all. In groups of four or five, carrying with them only small knives, box cutters, and cans of Mace or pepper spray, they hijacked four planes and turned them into deadly guided
missiles [4, p. 2]. Measured on a governmental scale, the resources behind it were trivial⁶ [4, pp. 339-340]. 9/11 was a “wake-up call” to the catastrophic potential of critical infrastructure [2, p. 5], and the unprecedented threat of small groups and individuals capable of wielding destructive power on a scale once only achievable through the combined might of a nation.
2.2 Critical Infrastructure Vulnerability
Prior to 9/11, most experts considered weapons of mass destruction (WMD) the primary avenue for domestic catastrophic attack [5]. WMD encompasses a class of
Chemical, Biological, Radiological, and Nuclear (CBRN) agents. These concerns were not unfounded. In 1995, the Japanese cult Aum Shinrikyo manufactured sarin gas, a chemical nerve agent, and deployed it on the Tokyo subway, killing 12 people and injuring hundreds more. Five people were killed in the weeks following 9/11 when envelopes containing anthrax were mailed to Washington DC, New York, and Florida [6, p. 1]. While nuclear and radiological devices have yet to be employed, documents and interrogations from military operations in Afghanistan have reinforced the assessment that the Taliban sought, and al-Qaeda continues to seek, radioactive material for a radiological weapon [7, p. 3]. Experts cite the difficulty of acquiring, manufacturing, and deploying WMD as the main reason terrorists rely on conventional weapons [7, p. 4].
However, for those wishing to inflict catastrophic damage on the US, critical infrastructure offers an attractive alternative to WMD.
Critical infrastructure is defined by the 2001 USA PATRIOT Act as: “Systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” The vast majority of critical infrastructure is owned and operated by the private sector. The great diversity and redundancy of the nation’s critical infrastructure provides significant physical and economic resilience. However, this vast and diverse aggregation of highly interconnected assets, systems, and networks may also present an attractive array of targets to terrorists, and greatly magnify the potential for cascading failure [8, p. 11].
The essential vulnerability of today’s critical infrastructure is that little of it was centrally planned or designed, and virtually none of it was built to withstand deliberate attack. The result is that millions of lives depend upon a network that’s not fully understood, riddled with weaknesses, and susceptible to malicious tampering. A key concern of the federal government since 9/11 is that terrorists will target critical infrastructure to achieve three general types of effects: 1) Direct infrastructure effects:
Cascading disruption or arrest of the functions of critical infrastructure through direct attacks on a critical node, system, or function; 2) Indirect infrastructure effects:
Cascading disruption and financial consequences for government, society, and economy through public- and private-sector reactions to an attack; or 3) Exploitation of infrastructure: Exploitation of elements of a particular infrastructure to disrupt or destroy another target. [2, p. viii]
A cyber attack against the national electric grid is a very unsettling prospect. Electric utilities rely on supervisory control and data acquisition (SCADA) systems to manage the nation’s power generation, transmission, and distribution networks. While generally protected from intrusion, SCADA systems operate over the Internet. The move to
SCADA boosts efficiency at utilities because it allows workers to operate equipment remotely. But this access to the Internet exposes these once-closed systems to cyber attacks [9].
In 2006, the Department of Energy (DOE) and Department of Homeland Security jointly conducted Project Aurora to assess the potential vulnerability of the national electric grid. In a dramatic videotaped demonstration, engineers at Idaho National Labs showed how weaknesses could be exploited to cause a generator to physically self-destruct [9]. As a critical node in the infrastructure network, a cyber attack on the electric grid would produce cascading effects across nearly every other sector of critical infrastructure [10, p. 9].
Cascading failure within the electric grid itself can disrupt broad geographic regions, affecting millions. On July 2, 1996, a cascade event left 2 million customers in 11 states and 2 Canadian provinces without power [10, p. 9]. The August 2003 blackout of the northeastern United States and parts of Canada has been invoked as indicative of the potential effects of a successful terrorist cyber attack on electrical utility control systems [10, p. 10]. While widespread, these outages were not long term. The same might not be true of a targeted attack. The physical damage of certain system components (e.g., extra-high-voltage transformers) on a large scale could result in prolonged outages, as procurement cycles for these components range from months to years [11, p. 11]. Such an outage would be unprecedented, and the impact would be as great if not greater than those from CBRN agents.
Admittedly, a cyber attack on the electric grid, or any other critical infrastructure would not be easy. Yet, the sophistication required to infiltrate and compromise such systems has been amply demonstrated by the Stuxnet worm. Using thumb drives to bypass the Internet, a malicious software program known as Stuxnet infected computer systems that were used to control an Iranian nuclear power plant. Once inside the system,
Stuxnet had the ability to degrade or destroy the software on which it operated. It was also able to spread throughout multiple countries worldwide [12, p. 2].
Many consider Stuxnet the type of risk that could seriously damage critical infrastructure basic to the functioning of modern society. A successful attack by a
Stuxnet-like worm could inflict long-term damage by ordering control systems to overload or otherwise destroy critical mechanical components. Long lead times to repair or replace such components could leave millions without life sustaining or comforting services for a long period of time. The resulting damage to the nation’s critical infrastructure could threaten many aspects of life, including the government’s ability to safeguard national security interests [12, p. 2].
The worst-case terrorist attack envisioned in the 2004 Planning Scenarios estimates that 450,000 people would be displaced by the detonation of a 10-kiloton improvised nuclear device in a major US city [13, pp. 1-1]. By comparison, a targeted terrorist attack on the national electric grid could disrupt millions of lives and have national strategic consequences.
9/11 exposed the vulnerability of critical infrastructure to anybody with the means, motive, and opportunity to inflict catastrophic damage on the US. Both weapons of mass destruction and critical infrastructure provide ready avenues of attack for small groups and individuals wishing to inflict catastrophic consequences.
2.3 The WMD Threat
Weapons of mass destruction are highly destructive. The homeland security concern is that they will fall into the hands of malicious agents and be employed inside the
US. Worldwide, the likelihood of non-state actors being capable of producing or obtaining WMD may be growing due to looser controls of stockpiles and technology in the former Soviet states specifically, and the broader dissemination of related technology and information in general. However, WMD remain significantly harder to produce or obtain than what is commonly depicted in the press. The Central Intelligence Agency believes most hostile actors will continue to choose conventional explosives over WMD, but warns that the al-Qaeda network has made obtaining WMD capability a very high priority [7, p. 2].
2.3.1 Chemical Agents
Toxic industrial chemicals such as chlorine or phosgene are easily available and do not require great expertise to be adapted into chemical weapons. Aerosol or vapor forms are the most effective for dissemination, which can be carried out by sprayers or an explosive device. Nerve agents are more difficult to produce, and require a synthesis of multiple precursor chemicals. They also require high-temperature processes and create dangerous
by-products, which makes their production unlikely outside an advanced laboratory.
Blister agents such as mustard can be manufactured with relative ease, but also require large quantities of precursor chemicals. The production and transfer of precursor chemicals is internationally monitored under the Chemical Weapons Convention and the informal international export control regime of the Australia Group, providing some degree of control over their distribution [7, p. 6].
2.3.2 Biological Agents
Experts say malicious agents working outside a state-run laboratory would have to overcome extraordinary technical and operational challenges to effectively and successfully weaponize and deliver a biological agent to cause mass casualties. While many biological agents can be obtained or grown with relative ease, several significant steps remain on the way to weaponization and effective use of these agents. The main challenge is effective dissemination, which requires an aerosol form. The formulation of agents for airborne dispersal requires dissolving optimal amounts of agent in a specific combination of different chemicals (with each agent requiring a unique formulation).
Moreover, aerosol disseminators need to be properly designed for the agent used, and suitable meteorological conditions must be present to carry out a successful biological mass casualty attack. Of particular concern is the threat of highly contagious diseases, especially smallpox. Anthrax is not contagious from person to person; consequently, its spread can be relatively easily contained. With a disease like smallpox, however, contagion can spread very rapidly. The breath or coughing of an infected person at the fever stage of the disease is sufficient to infect those around him or her. The
disease has an incubation period of 12-14 days, during which an infected person experiences no symptoms. Consequently, a clandestine smallpox release in a major transportation hub could infect hundreds, and would, in two weeks’ time, result in disease outbreaks wherever the passengers eventually traveled. Smallpox has been eradicated as a naturally occurring disease, and the only two known existing cultures of the virus are held by the United States and Russia. Even so, concerns over the security of the Russian samples and the possibility of unknown samples have kept smallpox in the forefront of threat considerations. Though the probability of hostile agents gaining access to the virus may be very low, the severity of the potential consequences has nevertheless led the federal government to stockpile 300 million smallpox vaccine doses [7, pp. 5-6].
2.3.3 Radiological Agents
Explosive-driven “dirty bombs” are an often-discussed type of radiological dispersion device (RDD), though radioactive material can also be dispersed in other ways.
Radioactive material is the necessary ingredient for an RDD. This material is composed of atoms that decay, emitting radiation. Some types and amounts of radiation are harmful to human health. Terrorist groups have shown some interest in RDDs. They could use them in an attempt to disperse radioactive material to cause panic, area denial, and economic dislocation. While RDDs would be far less harmful than nuclear weapons, they are much simpler to build and the needed materials are used worldwide. Accordingly, some believe RDDs are more likely to be employed than nuclear weapons. RDDs could contaminate areas with radioactive material, increasing long-term cancer risks, but would probably kill few people promptly compared to nuclear weapons which could destroy
much of a city, kill tens of thousands of people, and contaminate much larger areas with fallout. Cleanup costs after an RDD attack could range from less than a billion dollars to tens of billions of dollars, depending on the area contaminated, the decontamination technologies used, and the level of cleanup required [14, p. 2]. Despite the seeming ease of launching a successful RDD attack, to date this has not been done. The reasons are necessarily speculative, but may include difficulties in handling radioactive material, lack of sufficient expertise to fabricate material into an effective weapon, a shift to smaller-scale but simpler attacks using standard weapons and explosives, and improved security. Of course, such factors cannot guarantee that no attack will occur [14, p. 2].
2.3.4 Nuclear Agents
While a nuclear weapon is the most destructive of all WMD, obtaining one poses the greatest difficulty for hostile non-state actors. The key obstacle to building such a weapon is the availability of a sufficient quantity of fissile material—either plutonium or highly enriched uranium. Some experts believe that if allowed access to the necessary quantities of fissile material, extraordinarily capable groups could build a crude nuclear weapon [7, p. 4].
2.4 The CI Threat
By itself, critical infrastructure is not destructive. Only through subversion by degrading, destroying, or diverting its intended function might it become destructive. At present, the government recognizes 16 distinct sectors of critical infrastructure as enumerated in Presidential Policy Directive #21 (PPD-21) [15].
Table 2-1: Critical Infrastructure Sectors
1. Chemical
2. Commercial Facilities
3. Communications
4. Critical Manufacturing
5. Dams
6. Defense Industrial Base
7. Emergency Services
8. Energy
9. Financial Services
10. Food & Agriculture
11. Government Facilities
12. Healthcare and Public Health
13. Information Technology
14. Nuclear Reactors, Materials, & Waste
15. Transportation Systems
16. Water and Wastewater Systems
2.4.1 Chemical Plants
The potential harm to public health and the environment from a sudden release of hazardous chemicals has long concerned the US Congress. The sudden, accidental release in December 1984 of methyl isocyanate in an industrial incident at the Union Carbide plant in Bhopal, India, and the attendant loss of thousands of lives and widespread injuries spurred legislative proposals to reduce the risk of chemical accidents in the
United States [16, p. 1]. Potential hostile acts against chemical facilities might be classified roughly into two categories: direct attacks on facilities or chemicals on site, or efforts to use business contacts, facilities, and materials (e.g., letterhead, telephones, computers, etc.) to gain access to potentially harmful materials. In either case, agents may be employees (saboteurs) or outsiders, acting alone or in collaboration with others. In the case of a direct attack, traditional or nontraditional weapons may be employed, including explosives, incendiary devices, firearms, airplanes, computer programs, or weapons of mass destruction (nuclear, radiological, chemical, or biological). A hostile agent may intend to use chemicals as weapons or to make weapons, including but not limited to explosives, incendiaries, poisons, and caustics. Access to chemicals might be gained by
physically entering a facility and stealing supplies, or by using legitimate or fraudulent credentials (e.g., company stationery, order forms, computers, telephones, or other resources) to order, receive, or distribute chemicals [16, p. 2].
2.4.2 Commercial Facilities
Some 460 skyscrapers are among the commercial facilities of concern for attack [2, p. 9]. As was seen on 9/11, the collapse of the Twin Towers in New York City killed thousands of people and caused billions of dollars in damage. As places of aggregation, national symbols, and sources of potential collateral damage, large facilities make lucrative targets.
2.4.3 Communications
The communications infrastructure comprises 2 billion miles of cable [2, p. 9] providing essential connectivity across the nation and around the globe. The central concern is the impact that disrupting this network could have on all other infrastructures that depend on it. Damage to or degradation of the communications network could have grave repercussions for the national economy.
2.4.4 Critical Manufacturing
Lean production systems based on just-in-time deliveries make manufacturers sensitive to supply disruptions. Attacks on critical manufacturing nodes could have cascading effects severely damaging the national economy.
2.4.5 Dams
The federal government has built hundreds of water projects, primarily dams and reservoirs for irrigation development and flood control, with municipal and industrial water use as an incidental, self-financed project purpose. Many of these facilities are critically entwined with the nation’s overall water supply, transportation, and electricity infrastructure. The largest federal facilities were built and are managed by the Bureau of
Reclamation (Reclamation) of the Department of the Interior and the U.S. Army Corps of
Engineers (Corps) of the Department of Defense. Reclamation reservoirs, particularly those along the Colorado River, supply water to millions of people in southern California,
Arizona, and Nevada via Reclamation and non-Reclamation aqueducts. Reclamation’s inventory of assets includes 471 dams and dikes that create 348 reservoirs with a total storage capacity of 245 million acre-feet of water. Reclamation projects also supply water to 9 million acres of farmland and other municipal and industrial water users in the 17 western states. The Corps operates 276 navigation locks, 11,000 miles of commercial navigation channel, and approximately 1,200 projects of varying types, including 609 dams. It supplies water to thousands of cities, towns, and industries from the 9.5 million acre-feet of water stored in its 116 lakes and reservoirs throughout the country, including service to approximately 1 million residents of the District of Columbia and portions of northern Virginia. The largest Corps and Reclamation facilities also produce enormous amounts of power. For example, Hoover and Glen Canyon dams on the Colorado River represent 23% of the installed electrical capacity of the Bureau of Reclamation’s 58 power plants in the West and 7% of the total installed capacity in the Western United
States. Similarly, Corps facilities and Reclamation’s Grand Coulee Dam on the Columbia
River provide 43% of the total installed hydroelectric capacity in the West (25% nationwide). Still, despite its critical involvement in such projects, especially in the West, the federal government is responsible for only about 5% of the dams whose failure could result in loss of life or significant property damage. The remaining dams belong to state or local governments, utilities, and corporate or private owners. Attacks resulting in physical destruction to any of these systems could include disruption of operating or distribution system components, power or telecommunications systems, electronic control systems, and actual damage to reservoirs and pumping stations. Further, destruction of a large dam could result in catastrophic flooding and loss of life [17, p. 2].
2.4.6 Defense Industrial Base
The Defense Industrial Base includes 250,000 firms in 215 distinct industries that manufacture and supply military equipment [2, p. 9]. These too are susceptible to supply chain disruptions that could affect military readiness, or more insidiously, introduce vulnerabilities exploitable by an adversary.
2.4.7 Emergency Services
These encompass more than 87,000 tribal, state, county, and municipal First
Responder jurisdictions [2, p. 9]. Few are prepared to adequately cope with large-scale deployment of CBRN agents. The increased likelihood of catastrophic attack requires an unprecedented level of coordination and cooperation across jurisdictions. The ability of
First Responders to operate together in a contaminated environment will greatly affect their capacity to save lives.
2.4.8 Energy
As was noted previously, electricity is vital to the commerce and daily functioning of the United States. The modernization of the grid to accommodate today’s uses is leading to the incorporation of information processing capabilities for power system controls and operations monitoring. The “Smart Grid” is the name given to the evolving electric power network as new information technology systems and capabilities are incorporated. While these new components may add to the ability to control power flows and enhance the efficiency of grid operations, they also potentially increase the susceptibility of the grid to computer-related attack since they are built around microprocessor devices whose basic functions are controlled by software programming
[18, p. 2]. Industrial control systems are particularly vulnerable to cyber attack because of their intelligence and communications capabilities. Industrial control systems perform a number of functions in the electrical grid, ranging from microprocessor-based control systems which control the actuation and operation of one or more devices, to more sophisticated industrial control systems which can manage entire industrial processes or automated systems. SCADA systems are a well-known industrial control application used to remotely monitor and control electric transmission system components. While cyber-intrusions into the US grid have been reported in recent years, no impairment or other damage has been publicly reported as a result of the attacks. By exploiting loopholes in cyber security, cyber attackers could breach the privacy of customers’ power usage data and access large numbers of meters, perhaps sending deliberately misleading information to the grid. This could potentially overload systems or cause grid operators to respond to false readings. However, concerns exist as to the potential damage that could result in the
future from malware left behind by such intrusions or doorways created in systems which could be exploited. The revelation of the Stuxnet worm and the alleged targeting of the control systems of a nuclear power plant in Iran have raised additional concerns about the vulnerability of electric power systems worldwide [18, p. 6].
2.4.9 Financial Services
Long before 9/11, analysts identified financial sector system vulnerabilities as elements of national economic security in the work of the President’s Commission on
Critical Infrastructure Protection in 1996 and 1997. Financial institutions operate as intermediaries — accepting funds from various sources and making them available as loans or other investments to those who need them. The test of their collective operational effectiveness is how efficiently the financial system as a whole allocates resources among suppliers and users of funds to produce real goods and services.
America has grown far beyond a bank-centered financial economy: financial value has largely become resident on computers as data rather than physical means of payment.
This element of the financial system is an area of particular vulnerability. Financial institutions face two categories of emergencies that could impair their functioning. The first is directly financial: danger of a sudden drop in the value of financial assets, whether originating domestically or elsewhere in the world, such that a global financial crisis might follow. The second is operational: failure of physical support structures that underlie the financial system. Either could disrupt the nation’s ability to supply goods and services and alter the behavior of individuals in fear of the disruption (or fear of greater disruption). They could reduce the pace of economic activity, or at an extreme, cause an
actual contraction of economic activity. Financial regulators generally address the former set of problems through deposit insurance and other sources of liquidity to distressed institutions, safety and soundness regulation, and direct intervention. They address the latter, operational, set through remediation (as with the Y2K problem), redundancy, and other physical security. Under the worst case scenarios, the Federal Reserve (Fed) can relieve the economic effects of either set by acting as lender of last resort to supply liquidity to the financial system, employing monetary policy to expand domestic demand
(as it did following the 9/11 terrorist attacks). In the Terrorism Risk Insurance Act of
2002 (TRIA), Congress expanded the Fed’s ability to act as lender of last resort to the financial and real economies. Congress may also legislate direct federal assistance to protect the financial infrastructure. It has done so to prevent troubled entities such as
Chrysler, the Farm Credit System, and New York City from defaulting, thus harming their lenders, and potentially causing failure in major parts of the financial system and the economy. Collapse of one prominent entity could evoke a contagion effect, in which sound financial institutions become viewed as weak — today’s equivalent of a bank run, in which panicked customers withdraw funds from many entities, probably causing others to fail as well [19, pp. 1-2].
2.4.10 Food & Agriculture
The potential for malicious attacks against agricultural targets (agroterrorism) is increasingly recognized as a national security threat, especially after 9/11. Agroterrorism is a subset of bioterrorism, and is defined as the deliberate introduction of an animal or plant disease with the goal of generating fear, causing economic losses, and/or
undermining social stability. The goal of agroterrorism is not to kill cows or plants. These are the means to the end of causing economic damage, social unrest, and loss of confidence in government. Human health could be at risk if contaminated food reaches the table or if an animal pathogen is transmissible to humans (zoonotic). While agriculture may not be the first choice of attack because it lacks the “shock factor” of more traditional targets, many analysts consider it a viable secondary target. Agriculture has several characteristics that pose unique vulnerabilities. Farms are geographically dispersed in unsecured environments. Livestock are frequently concentrated in confined locations, and transported or commingled with other herds. Many agricultural diseases can be obtained, handled, and distributed easily. International trade in food products often is tied to disease-free status, which could be jeopardized by an attack. Many veterinarians lack experience with foreign animal diseases that are eradicated domestically but remain endemic in foreign countries [20, p. 2].
2.4.11 Government Facilities
The US government owns and operates some 3,000 facilities, including the places where state and national representatives meet to conduct the people’s business [21, p. 9].
Government facilities present an attractive target mostly for their symbolic value. While the catastrophic loss of a substantial portion of government would significantly affect the country, it would not lead to the collapse of the nation. Constitutions at all levels provide for succession of leadership, and plans for re-establishing central authority developed during the Cold War survive and are maintained to this day.
2.4.12 Healthcare & Public Health
This sector encompasses 5,800 registered hospitals [2, p. 9]. The situation here is similar to that of First Responders: the increased likelihood of catastrophic attack requires an unprecedented level of coordination and cooperation between medical facilities, and their ability to treat contaminated or contagious victims will greatly affect their capacity to save lives.
2.4.13 Information Technology
The FBI reports that cyber attacks attributed to hostile actors have largely been limited to unsophisticated efforts such as e-mail bombing of ideological foes, denial-of-service attacks, or defacing of websites. However, it says, their increasing technical competency is resulting in an emerging capability for network-based attacks. The FBI has predicted that hostile agents will either develop or hire hackers for the purpose of complementing large conventional attacks with cyber attacks. The Internet, whether accessed by a desktop computer or by the many available handheld devices, is the medium through which a cyber attack would be delivered. However, for a targeted attack to be successful, the attackers usually require that the network itself remain more or less intact, unless the attackers assess that the perceived gains from shutting down the network entirely would offset the accompanying loss of their own communication. A future targeted cyber attack could be effective if directed against a portion of the US critical infrastructure, and if timed to amplify the effects of a simultaneous conventional physical or chemical, biological, radiological, or nuclear terrorist attack. The objectives of a cyber attack may include 1) loss of integrity, such that information could be modified
improperly; 2) loss of availability, where mission-critical information systems are rendered unavailable to authorized users; 3) loss of confidentiality, where critical information is disclosed to unauthorized users; and 4) physical destruction, where information systems create actual physical harm through commands that cause deliberate malfunctions [22, pp. 5-6].
2.4.14 Nuclear Reactors, Materials, & Waste
Protection of nuclear power plants from land-based assaults, deliberate aircraft crashes, and other hostile acts has been a heightened national priority since 9/11 [23, p. 1]. The major concerns in operating a nuclear power plant are controlling the nuclear chain reaction and assuring that the reactor core does not lose its coolant and “melt down” from the heat produced by the radioactive fission products within the fuel rods. US plants are designed and built to prevent dispersal of radioactivity, in the event of an accident, by surrounding the reactor in a steel-reinforced concrete containment structure, which represents an intrinsic safety feature. Three major accidents have taken place in power reactors: at Three Mile Island (TMI) in 1979, at Chernobyl in the Soviet Union in 1986, and at the Fukushima Daiichi Nuclear Power Plant in Japan. Although these accidents resulted from a combination of operator error and design flaws, TMI’s containment structure effectively prevented a major release of radioactivity from a fuel meltdown caused by the loss of coolant. In the Chernobyl accident, the reactor’s protective barriers were breached when an out-of-control nuclear reaction led to a fierce graphite fire that caused a significant part of the radioactive core to be blown into the atmosphere [23, p.
4]. In 2011, a combined earthquake and tsunami damaged cooling systems at the
Fukushima Daiichi power plant causing reactor cores to melt and explosions to breach containment facilities, contaminating the surrounding area out to 20 km. US nuclear power plants were designed to withstand hurricanes, earthquakes, and other extreme events. But deliberate attacks by large airliners loaded with fuel, such as those that crashed into the World Trade Center and Pentagon, were not analyzed when design requirements for today’s reactors were determined. A taped interview shown September
10, 2002, on the Arab TV station al-Jazeera, containing a statement that al-Qaeda initially planned to include a nuclear plant among its 2001 attack sites, intensified concern about aircraft crashes. According to former Nuclear Regulatory Commission (NRC)
Chairman Nils Diaz, NRC studies “confirm that the likelihood of both damaging the reactor core and releasing radioactivity that could affect public health and safety is low.”
Even so, NRC announced in April 2007 that it would issue a proposed rule requiring license applicants for new reactors to assess potential design improvements that would improve protection against impact by large commercial aircraft. However, even if the reactor is secure from aircraft impact, the same may not be true for spent fuel. When no longer capable of sustaining a nuclear chain reaction, “spent” nuclear fuel is removed from the reactor and stored in a pool of water in the reactor building and, at some sites, later transferred to dry casks on the plant grounds. Because both types of storage are located outside the reactor containment structure, particular concern has been raised about the vulnerability of spent fuel to attack by aircraft or other means. If hostile agents could breach a spent fuel pool’s concrete walls and drain the cooling water, the spent fuel’s zirconium cladding could overheat and catch fire [23, p. 5]. The National
Academy of Sciences released a report in April 2005 that found that “successful terrorist
attacks on spent fuel pools, though difficult, are possible,” and that “if an attack leads to a propagating zirconium cladding fire, it could result in the release of large amounts of radioactive material” [23, p. 6].
2.4.15 Transportation Systems
The nation’s air, land, and marine transportation systems are designed for accessibility and efficiency, two characteristics that make them highly vulnerable to attack. Aviation security has been a major focus of transportation security policy following 9/11. In the aftermath of these attacks, the 107th Congress moved quickly to pass the Aviation and Transportation Security Act (ATSA; P.L. 107-71) creating the
Transportation Security Administration (TSA) and mandating a federalized workforce of security screeners to inspect airline passengers and their baggage. The act gave the TSA broad authority to assess vulnerabilities in aviation security and take steps to mitigate these risks. The July 2005 bombing of trains in London and the bombings of commuter trains and subway trains in Madrid and Moscow in 2004 highlighted the vulnerability of passenger rail systems to attacks. The volume of ridership and number of access points make it impractical to subject all rail passengers to the type of screening airline passengers undergo. A leading issue with regard to securing truck, rail, and waterborne cargo is the desire of government authorities to track a given freight shipment at a particular time. Most of the attention with regard to cargo vulnerability concerns the tracking of marine containers as they are trucked to and from seaports. Security experts believe this is a particularly vulnerable point in the container supply chain. Hazardous materials (HAZMAT) transportation raises numerous security issues. There are issues
regarding routing of hazmat through urban centers, and debate persists over the pros and cons of rerouting high hazard shipments [24, p. 3].
2.4.16 Water and Wastewater Systems
Damage to or destruction of the nation’s water supply and water quality infrastructure by hostile attack or natural disaster could disrupt the delivery of vital human services, threatening public health and the environment, or possibly causing loss of life. Across the country, water infrastructure systems extend over vast areas, and ownership and operation responsibility are both public and private, but are overwhelmingly non-federal. Since the attacks, federal dam operators and local water and wastewater utilities have been under heightened security conditions and are evaluating security plans and measures. There are no federal standards or agreed-upon industry practices within the water infrastructure sector to govern readiness, response to security incidents, and recovery. Efforts to develop protocols and tools are ongoing since the 9/11 terrorist attacks [17, p. 1]. A fairly small number of large drinking water and wastewater utilities located primarily in urban areas (about 15% of the systems) provide water services to more than 75% of the U.S. population. Arguably, these systems represent the greatest targets of opportunity for terrorist attacks, while the larger number of small systems that each serve fewer than 10,000 persons are less likely to be perceived as key targets by those who might seek to disrupt water infrastructure systems. However, the more numerous smaller systems also tend to be less protected and, thus, are potentially more vulnerable to attack. A successful attack on even a small system could cause widespread panic, economic impacts, and a loss of public confidence in water supply
systems [17, p. 2]. Since 9/11, many water and wastewater utilities have switched from using chlorine gas for disinfection to alternatives which are believed to be safer, such as sodium hypochlorite or ultraviolet light. However, some consumer groups remain concerned that many wastewater utilities, including facilities that serve heavily populated areas, continue to use chlorine gas. Damage to a wastewater facility prevents water from being treated and can impact downriver water intakes. Destruction of containers that hold large amounts of chemicals at treatment plants could result in the release of toxic chemical agents, such as chlorine gas, which can be deadly to humans if inhaled and, at lower doses, can burn eyes and skin and inflame the lungs [17, pp. 4-5].
2.5 WMD Protection
US efforts to keep WMD out of the hands of hostile agents are collectively termed
Counter-WMD (CWMD) strategy. CWMD strategy is a shared responsibility between the Department of Defense (DOD), Department of Energy (DOE), and the Department of
State (DOS), in coordination with the Department of Justice (DOJ) and the Department of
Homeland Security [25, p. 3]. Strategy coordination between executive departments is conducted at the highest level in the National Security Council (NSC) [26]. The NSC has responsibility to “assess and appraise the objectives, commitments, and risks of the
United States” and to “consider policies on matters of common interest to the departments and agencies of the Government concerned with the national security” [27, p. 25]. The NSC has four statutory members (the President, Vice President, Secretary of
State and Secretary of Defense) and also includes the National Security Advisor, the
Secretary of the Treasury, the Director of National Intelligence, the Chairman of the Joint
Chiefs of Staff and other department secretaries, agency directors and advisors designated by the President. The NSC staff numbers 240 personnel located within the Executive
Office of the President (EOP) and is managed by the President’s National Security
Advisor, also known as the Assistant to the President for National Security Affairs [26].
The NSC is supported by a hierarchical collection of committees designed to screen policy matters and oversee their implementation. The Principals Committee (PC) is the
“senior interagency forum for consideration of policy issues affecting national security” while the Deputies Committee (DC) “reviews and monitors the work of the NSC interagency process” and is “responsible for day-to-day crisis management.” Further management of the development and implementation of national security policies are overseen by Interagency Policy Committees (IPCs) that can be established to address specific issues [28, pp. 23-24]. At the highest levels of government, CWMD strategy is coordinated by the WMD-Terrorism Principals Committee [27, p. 18].
Figure 2-1: CWMD Strategy Framework [25, p. 3]

Current CWMD strategy is rooted in National Security Strategy [25, p. 3]. The 2010 National Security Strategy established the objective of reversing the spread of nuclear and biological weapons, and securing nuclear materials to preclude their use by terrorist agents and rogue states [29, p. 23]. The 2002 National
Strategy to Combat Weapons of Mass Destruction provides the basic blueprint for pursuing this national security objective [25, p. 3]. CWMD strategy is founded on three pillars: nonproliferation (NP), counterproliferation (CP), and consequence management
(CM). Nonproliferation seeks to dissuade or impede both state and non-state actors from acquiring CBRN weapons. Counterproliferation seeks to develop both active and passive measures to deter and defend against the employment of CBRN weapons. Consequence management seeks to develop measures to quickly respond and recover against a domestic CBRN attack [30, p. 2]. CWMD strategy is further refined in the 2010 Nuclear
Posture Review Report and the 2009 National Strategy for Countering Biological Threats supporting the overall National Biodefense Strategy (HSPD-10/NSPD-33). These national-level documents provide strategic guidance for US government departments and agencies to develop goals and objectives, identify capability requirements, and ultimately provide material and nonmaterial solutions for CWMD [25, p. 3].
2.5.1 Nonproliferation
Nonproliferation employs an array of international agreements, multilateral organizations, national laws, regulations, and policies to prevent the spread of dangerous weapons and technologies. The nuclear nonproliferation regime is presently the most extensive, followed by those dealing with chemical and biological weapons [31, p. 1].
The nuclear nonproliferation regime encompasses several treaties, extensive multilateral and bilateral diplomatic agreements, multilateral organizations and domestic
agencies, and the domestic laws of participating countries [32, p. xix]. Although the
Nuclear Nonproliferation Treaty (NPT) is perhaps the most visible aspect of the nuclear nonproliferation regime, the success of nonproliferation efforts relies on the functioning of national export control laws and effective inspections conducted by the International
Atomic Energy Agency (IAEA). Technical assistance in the peaceful uses of nuclear energy is also an important tool for nuclear nonproliferation [31, p. 16].
The cornerstone of international efforts to prevent biological weapons proliferation and terrorism is the 1972 Biological Weapons Convention (BWC). This treaty bans the development, production, and acquisition of biological and toxin weapons and the delivery systems specifically designed for their dispersal [32, p. xviii]. In 1969, the United States declared a unilateral end to its offensive biological weapons program
[31, p. 27]. But because biological activities, equipment, and technology can be used for good as well as harm, BW-related activities are exceedingly difficult to detect, rendering traditional verification measures ineffective. In addition, the globalization of the life sciences and technology has created new risks of misuse by states and terrorists [32, p. xviii]. The 2008 WMD Commission report concluded that terrorists are more likely to be able to obtain and use a biological weapon than a nuclear weapon [32, p. xv].
Prohibitions against the use of chemical weapons date back to the International
Peace Conferences that met at the Hague in 1899 and 1907; these pre-World War I prohibitions were reaffirmed in the 1919 Versailles Treaty and further expanded in the
1925 Geneva Protocol [31, p. 26]. Controls on exports of chemical and biological agents with military applications have been regulated under the Arms Export Control Act
(AECA) of 1968, and their dual-use technologies have been regulated under the Export
Administration Act (EAA) of 1979 and its predecessors [31, p. 1].
A key aspect of all nonproliferation regimes is their attempt to control exports of sensitive goods and technologies through supplier agreements. These are the Nuclear
Suppliers Group and the Zangger Committee for nuclear technology, and the Australia
Group for chemical and biological weapons technology [31, p. 1]. Although proliferation control regimes are a useful tool in preventing dangerous technology transfers, several factors undermine their effectiveness. One is the difficulty of addressing underlying motivations of countries to acquire WMD. Regional security conditions as well as the desire to compensate for other countries’ superior conventional or unconventional forces have been common motivations for WMD programs. Some countries may want WMD to dominate their adversaries. Prestige is another reason why certain countries seek WMD. Another factor working against the regimes is the steady diffusion of technology over time—much of the most significant WMD technology is 50 years old, and growing access to dual-use equipment makes it easier for countries or groups to build their own WMD production facilities from commonly available civilian equipment [31, pp. 1-2].
2.5.2 Counterproliferation
Since 9/11, significant US government interest has focused on counterproliferation programs—that is, military measures against weapons of mass destruction [31, p. 20]. Defense cooperation and arms transfers to US allies can ease concerns about security that can lead them to consider acquiring WMD, and also signal
potential adversaries that acquisition or use of WMD may evoke a strong military response. US conventional and nuclear military capabilities and the threat of retaliation help deter WMD attacks against US forces, territory, and allies. One key tool of counterproliferation has been interdiction of WMD-related equipment shipments at sea, on land, and by air through the Proliferation Security Initiative [31, p. 5].
In recent years, counterproliferation capabilities were expanded to include more advanced “passive” and “active” defense measures. Passive counterproliferation tools include protective gear such as gas masks and detectors to warn of the presence of WMD.
Active measures include missile defenses to protect US territory, forces, and allies; precision-guided penetrating munitions and special operation forces to attack WMD installations; and intelligence gathering and processing capabilities [31, p. 5]. Although counterproliferation is a main pillar of CWMD strategy, political and technical hurdles
(hidden underground bunkers, locations near civilians, etc.) tend to make counterproliferation a method of last resort, after other options have failed [31, p. 5].
2.5.3 Consequence Management
Consequence Management is government preparations to respond to the use of
WMD on US territory, against US forces abroad, and to assist friends and allies [33, p.
5]. Pursuant to the Homeland Security Act of 2002 and Homeland Security Presidential
Directive #5 (HSPD-5), the Secretary of Homeland Security is the Principal Federal
Official for domestic incident management [34, p. 6]. In accordance with the 1988
Stafford Disaster Relief and Emergency Assistance Act, the Federal Emergency
Management Agency (FEMA) is the coordinating agency within the Department of
Homeland Security for federal disaster response [35]. The DHS guide for responding to all-hazard incidents is the National Response Framework (NRF) [34, p. 2]. The NRF organizes federal response capabilities into 15 Emergency Support Functions (ESFs).
ESFs align categories of resources and provide strategic objectives for their use [34, p.
29]. FEMA coordinates response support across the federal government by calling up, as needed, one or more of the 15 ESFs [34, p. 57]. ESF requirements for specific incidents are anticipated in NRF Incident Annexes. The NRF currently includes annexes for 1)
Biological Incidents, 2) Catastrophic Incidents, 3) Food and Agriculture Incidents, 4)
Mass Evacuation, 5) Nuclear/Radiological Incidents, 6) Cyber Incidents, and 7)
Terrorism Incident / Law Enforcement and Investigation [36].
The Catastrophic Incident Annex to the National Response Framework (NRF-
CIA) establishes the context and overarching strategy for implementing and coordinating an accelerated, proactive national response to a catastrophic incident [37]. A catastrophic incident is characterized as any natural or manmade incident, including terrorism, that results in extraordinary levels of mass casualties, damage, or disruption severely affecting the population, infrastructure, environment, economy, national morale, and/or government functions [34, p. 42]. The NRF-CIA establishes protocols to pre- identify and rapidly deploy key essential resources (e.g., medical teams, search and rescue teams, transportable shelters, medical and equipment caches, etc.) that are expected to be urgently needed to save lives and contain incidents [38, pp. CAT-1].
Presumably such incidents include chemical attack as the annex incorporates victim decontamination under ESF #8, Public Health and Medical Services [38, pp. CAT-4].
Biological, radiological, and nuclear attacks are separately addressed under different annexes.
The Biological Incident Annex outlines actions, roles, and responsibilities associated with response to a human disease outbreak of known or unknown origin requiring federal assistance. This annex outlines biological incident response actions including threat assessment notification procedures, laboratory testing, joint investigative/response procedures, and activities related to recovery [39]. A key component of the government’s strategy is the Strategic National Stockpile (SNS). The
Project BioShield Act of 2004 authorized the Secretary of Health and Human Services
(HHS), among other things, to procure medical countermeasures in advance of a potential infectious disease outbreak [40, p. 1]. The Centers for Disease Control and Prevention
(CDC) maintains the SNS, and when requested, delivers it to state and local governments for distribution. State and local governments are responsible for developing and exercising distribution plans [41, pp. 13-14].
The Nuclear/Radiological Incident Annex (NRIA) to the National
Response Framework addresses federal response to a nuclear or radiological incident including: (1) inadvertent or otherwise accidental releases and (2) releases related to deliberate acts. The second category includes, but is not limited to, deliberate attacks with radiological dispersal devices (RDDs), nuclear weapons, or improvised nuclear devices (INDs) [42, pp. NUC-2]. The
Department of Defense is expected to play a significant role in response to a domestic
IND or RDD incident. In 2008, Secretary of Defense Robert M. Gates launched a plan to train and equip three CBRNE Consequence Management Response Forces (CCMRFs) for domestic response [43, p. 4]. The first CCMRF had only just become operational in 2010 when plans were scrapped in favor of a new CBRN Enterprise replacing the three CCMRFs with a single Defense CBRN Response Force (DCRF). The DCRF is a 5,200-person federal military rapid response force composed of two force packages. The first force package comprises 2,100 personnel prepared to deploy within 24 hours. The second force package consists of 3,100 personnel prepared to deploy within 48 hours of notification. The new structure also includes two Command and Control CBRN
Consequence Response Elements (C2CRE) composed of 1,500 personnel each. The
DCRF may be deployed to augment one of 10 regional Homeland Response Forces
(HRFs) [44] comprised of National Guardsmen. HRFs provide regional capabilities able to respond within 6 to 12 hours, and function alongside other National Guard assets belonging to state governors, including 57 WMD civil support teams (WMD-CSTs) and 17 CBRNE enhanced response force packages (CERFPs). This new construct is considered more responsive than the CCMRFs, with a better match of lifesaving capabilities that allows for an improved balance between state and federal control [43, pp. 4-5].
2.6 CI Protection
The Secretary of Homeland Security is vested by statute and Presidential directive with coordinating national efforts to secure and protect critical infrastructure and key resources, which the Department does currently through the National Protection and
Programs Directorate (NPPD) [45, p. 7]. Current critical infrastructure protection traces its roots back to President Clinton’s PDD-63, which set a national goal to protect the nation’s critical infrastructure from intentional attacks, both physical and cyber, by the year 2003 [46, p. 4]. In December 2003, the Bush Administration released HSPD-7 establishing a critical infrastructure protection policy framework based on PDD-63 but with added emphasis on physical infrastructure [46, p. 11]. In June 2006, the Bush
Administration released a National Infrastructure Protection Plan (NIPP) outlining the process by which the Department of Homeland Security would work with private industry to identify and protect the nation’s critical infrastructure. In 2009, the Obama
Administration released an updated version of the National Infrastructure Protection Plan, and, for the most part, continues to follow the basic organizational structures and strategy of prior administrations [46, p. Summary].
Figure 2-2: DHS Risk Management Framework [8, p. 4]
According to DHS, the NIPP meets the requirements set forth in HSPD-7 and provides the overarching approach for integrating the Nation’s many critical infrastructure protection initiatives into a single national effort [8, p. i]. The NIPP classifies critical infrastructure into sectors and assigns a federal Sector-Specific Agency
(SSA) to represent each [8, pp. 18-19]. The cornerstone of the NIPP is its Risk
Management Framework. The RMF is an iterative process comprised of six steps: 1) Set
Goals and Objectives, 2) Identify Assets, Systems, and Networks, 3) Assess Risks, 4)
Prioritize, 5) Implement Programs, and 6) Measure Effectiveness [8, p. 27]. Step Two is supported by the National Critical Infrastructure Prioritization Program (NCIPP) which conducts an annual data call to state and federal partners to identify and update an inventory of high priority infrastructure [47, p. 14]. Step Three is supported by the
Enhanced Critical Infrastructure Protection (ECIP) program whereby DHS Protective
Security Advisors (PSAs) conduct voluntary security surveys and vulnerability assessments across the designated sectors [47, pp. 3-4]. Beginning in FY2009, DHS expanded its vulnerability assessments to consider collections of infrastructure, or clusters, as part of its Regional Resiliency Assessment Program (RRAP) [46, p. 25]. Risk is assessed as a function of consequence, vulnerability, and threat, according to the formulation R = f (C,V,T) [8, p. 32]. Step Four is also supported by the NCIPP which classifies infrastructure as priority level 1 or level 2 based on the consequence to the nation in terms of loss of life or economic impact [47, p. 2]. In Step Five, federal resources are spent in a number of ways, including agencies’ internal budgets for operations and programs, grants to states and localities, and research and development funding for universities and industry [46, p. 28]. Working with public and private industry representatives, SSAs are responsible for applying the Risk Management
Framework to develop Sector Specific Plans (SSPs) [8, p. 27] to implement risk reduction activities, with timelines and responsibilities identified, and tied to resources
[46, p. 22].
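The NIPP does not publish a concrete functional form for R = f(C,V,T); a common illustrative instantiation treats the three factors multiplicatively. The sketch below, with entirely hypothetical asset names and values, shows how such a formulation would support the asset prioritization called for in Step Four:

```python
# Illustrative sketch only: the NIPP does not specify a concrete form for
# R = f(C, V, T). A common instantiation treats risk as the product of
# consequence (C), vulnerability (V, probability an attack succeeds given
# attempt), and threat (T, probability an attack is attempted). All asset
# names and values below are hypothetical.

def risk(consequence, vulnerability, threat):
    """Multiplicative risk: expected consequence = C * V * T."""
    return consequence * vulnerability * threat

# Hypothetical asset inventory: (name, C in $M, V in [0,1], T in [0,1])
assets = [
    ("Asset A", 500.0, 0.30, 0.010),
    ("Asset B", 120.0, 0.80, 0.050),
    ("Asset C", 900.0, 0.10, 0.002),
]

# Step Four of the RMF: prioritize assets by assessed risk, highest first.
ranked = sorted(assets, key=lambda a: risk(a[1], a[2], a[3]), reverse=True)
for name, c, v, t in ranked:
    print(f"{name}: R = {risk(c, v, t):.3f}")
```

Note that a high-consequence asset (Asset C) can rank last if its vulnerability and threat likelihood are low; this interplay is exactly what a risk formulation is meant to capture.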
In conjunction with the RMF, SSAs participate in a Strategic Homeland
Infrastructure Risk Assessment (SHIRA) to build a National Risk Profile (NRP) [8, p.
33]. Each year, the NRP identifies the highest relative risks to critical infrastructure and
serves as the foundation of the National Critical Infrastructure and Key Resources
Protection Annual Report (NAR) [8, p. 42]. The NAR provides a summary of national infrastructure protection priorities and requirements and makes recommendations for prioritized resource allocation across the federal government [48]. The NAR is submitted to the Executive Office of the President together with the DHS budget, on or before
September 1st each year [8, p. 99]. Funding for DHS critical infrastructure protection programs is identified within the DHS budget as Infrastructure Protection and Information Security (IPIS) programs. Funding for all CIKR protection programs across the federal government is categorized by the Office of Management and Budget (OMB) as programs to “protect the American people, our critical infrastructure and key resources” [49, pp. 33-36].
Presumably, the NIPP guides critical infrastructure protection investments towards the HSPD-8 National Preparedness Goal (NPG) [47, pp. 7-8], first enunciated in
2005: “To engage federal, state, local, and tribal entities, their private and nongovernmental partners, and the general public to achieve and sustain risk-based target levels of capability to prevent, protect against, respond to, and recover from major events in order to minimize the impact on lives, property, and economy” [50, p. 3]. In 2011,
HSPD-8 was replaced by PPD-8, [51, p. 1] and a new National Preparedness Goal formulated: “A secure and resilient Nation with capabilities required across the whole community to prevent, protect against, mitigate, respond to, and recover from the threats and hazards that pose the greatest risk” [52, p. 1]. PPD-8 also directed development of a
National Preparedness System (NPS) to achieve the National Preparedness Goal. The
FEMA National Preparedness Directorate administers NPS [53]. The National
Preparedness System provides an incremental development plan for achieving the
National Preparedness Goal by building twenty “core capabilities” across five mission areas: prevent, protect, mitigate, respond, and recover [52, p. 2]. The core capabilities were derived from a 2007 Target Capabilities List (TCL) of 37 capabilities representing a
“consensus of the community” of more than 120 national associations, non-governmental organizations, and the private sector [54, p. 2]. The National Preparedness System entails its own process to 1) identify and assess risk, 2) estimate current capabilities, 3) build and sustain capabilities, 4) plan to deliver capabilities, and 5) validate capabilities [55, p. 1].
Steps One and Two are supported by a web-based Threat and Hazard Identification and
Risk Assessment (THIRA) system [56, p. 2]. Step Three is supported by FEMA State and
Local Grant Programs (SLGP), which in 2012 Congress was asked to repackage as the
National Preparedness Grant Program [57, p. 5]. Step Four requires federal agencies to develop and maintain a family of plans to deliver identified capabilities. Step Five proposes validating capabilities through the FEMA-administered National Exercise Program (NEP) [55, pp. 4-5].
2.7 Assessing Protection Efforts
Assessing WMD protection efforts is beyond the scope and classification of this research. For now, suffice it to say that responsible agencies maintain CBRN weapons and materials under tight security to prevent them from falling into the wrong hands.
Assessing CI protection efforts is also limited by security concerns, as infrastructure data collected by DHS is protected under the 2002 Homeland Security Act from disclosure, even under the Freedom of Information Act. CI analysis is further complicated by the fact that CI protection has been in constant evolution since the founding of DHS, making some programs, such as the National Preparedness System, too new to gauge performance. However, a series of internal audits and Congressionally mandated reviews provide some insight and indicate serious flaws, starting with the basic task of identifying critical infrastructure.
Taking an inventory of one’s assets is a standard first step for most risk management processes used to prioritize the protection of those assets [49, p. 7]. Step
Two in the NIPP Risk Management framework specifies “a comprehensive inventory of assets, systems, and networks that make up the Nation’s critical infrastructure and key resources” [8, p. 29]. Efforts to develop such a database trace their origin to Operation
Liberty Shield, which developed a list of 160 “critical sites” as part of a national plan to protect the homeland during the 2003 US invasion of Iraq. Starting from this list, in 2003
DHS issued a data call to states yielding information on 28,368 more assets, from which it derived a Protected Measures Target List of 1,849 items judged to be in most need of additional protection from attack. In 2004 DHS issued another data call to states yielding even more information such that by 2006, DHS had compiled a list of 77,069 assets which it called the National Asset Database (NADB). Contained within the NADB were
4,055 malls, shopping centers, and retail outlets, 224 racetracks, 539 theme parks and 163 water parks, 1,305 casinos, 234 retail stores, 514 religious meeting places, 127 gas stations, 130 libraries, 4,164 educational facilities, 217 railroad bridges, and 335 petroleum pipelines. Notably missing from the NADB were many other items from the defense industrial base, postal and shipping, banking and finance, and food and agriculture sectors. At the time, DHS officials acknowledged that only about 600 assets
in the NADB were considered truly critical to the nation. The DHS Inspector General
(IG) concluded that the NADB contained “many unusual or out-of-place assets whose criticality is not readily apparent, and too few assets in essential areas and may represent an incomplete picture.” The problem was attributed to the difficulty in defining critical infrastructure. Different definitions yielded widely varying results in the 2003 and 2004 data calls. As a result, additional assets of questionable national significance were added to the database [49, pp. 1-7].
DHS responded to the IG report saying that the NADB was an inventory of assets from which critical assets could be drawn, and that it would not be possible or useful to develop a single definitive prioritized list of critical assets [46, p. 26]. In response,
Congress mandated the establishment of both a National Asset Database and a Prioritized
Critical Infrastructure List [58, p. Sec. 1001]. Afterwards, the NADB evolved first into the Infrastructure Data Warehouse, then into the Infrastructure Information Collection
Project. In 2009, the DHS IG released a follow-up report determining that while opportunities for improvement exist, DHS efforts were well conceived and maturing [46, pp. 26-27]. Be that as it may, the follow-up report does not explain what happened next.
In 2012, the Government Accountability Office (GAO) was asked to examine the
DHS Enhanced Critical Infrastructure Protection program. Among its tasks was to identify how many security surveys and vulnerability assessments were being conducted on
DHS identified high-priority critical infrastructure, essentially Step Three in the Risk
Management Framework. According to GAO, DHS conducted about 2,800 combined surveys over a two-year period from 2009 to 2011. Of these, GAO was able to identify
179 assessments conducted on high-priority infrastructure. Because of discrepancies
between lists, GAO acknowledged another 129 assessments might also have been done on high-priority infrastructure. Even giving DHS the benefit of the doubt, only 11% (308) of DHS’ assessments were conducted on high-priority assets [47, p. Summary]. These results seem contrary to the NIPP assertion that “DHS works to ensure that appropriate vulnerability assessments for nationally critical CIKR” [8, p. 36]. GAO acknowledged that DHS has little control over industry participation in the voluntary ECIP program, but it also noted that DHS 1) has not developed institutional performance goals to measure owner/operator participation, nor 2) is it positioned to assess why some high-priority asset owners and operators decline to participate [47, p. 14]. Granted, this problem may only be programmatic, but other reports indicate NIPP problems that are more systemic.
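As an arithmetic aside, GAO’s 11% figure follows directly from the counts cited above:

```python
# Reproducing GAO's 11% figure from the counts cited in the text.
confirmed = 179   # assessments confirmed on high-priority infrastructure
possible = 129    # additional assessments that might also qualify
total = 2800      # approximate combined surveys, 2009-2011

high_priority = confirmed + possible   # 308
share = high_priority / total          # 0.11
print(f"{high_priority} of {total} = {share:.0%}")
```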
The NIPP identifies the risk formula R = f (C,V,T) as an important component of vulnerability analysis conducted in Step Three of the Risk Management Framework. Yet, its one reported application seems more appropriate to Step Four, prioritization of assets.
In 2004, the allocation of Homeland Security Grant Program (HSGP) funds raised debate when Wyoming received $28.34 per capita compared to $4.10 and $3.73 per capita allocated to New York and California respectively. The rationale behind the disbursement seemed counterintuitive. After the 9/11 Commission weighed in on the issue, and spurred on by Congressional legislation, DHS undertook to develop a more risk-based approach for determining Homeland Security Grant Program allocations.
Accordingly, the FY2007 HSGP grant guidance announced the adoption of the risk formula R=T*V*C where T is the likelihood of an attack occurring, and V*C is the relative impact of an attack. The problem with the formula as applied, however, was that
DHS was unable to differentiate vulnerability across areas and states, and consequently
assigned vulnerability a constant value of one [59, pp. 2-7]. In effect, DHS treated all assets as equally vulnerable to make resource decisions about reducing vulnerability. As observed in one report: “While understandable at some level, this essentially eviscerates any interplay between vulnerability and consequence…” [59, p. 18].
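The evisceration the report describes is easy to demonstrate. The sketch below, with entirely hypothetical numbers, compares two notional areas under the FY2007 formula, once using their actual vulnerabilities and once with V fixed at one, as applied:

```python
# Sketch of the effect described above: with vulnerability fixed at V = 1,
# R = T * V * C collapses to T * C, so differences in actual vulnerability
# have no effect on rankings or allocations. All numbers are hypothetical.

def risk(t, v, c):
    return t * v * c

# Two notional areas with identical threat and consequence but very
# different true vulnerability (e.g., one has invested in hardening).
areas = {
    "hardened area":   {"t": 0.02, "v": 0.10, "c": 1000.0},
    "unhardened area": {"t": 0.02, "v": 0.90, "c": 1000.0},
}

for name, a in areas.items():
    true_r = risk(a["t"], a["v"], a["c"])  # risk using actual V
    dhs_r = risk(a["t"], 1.0, a["c"])      # risk as applied, V = 1
    print(f"{name}: true R = {true_r:.1f}, V=1 R = {dhs_r:.1f}")
# Both areas receive the same V=1 score despite a ninefold difference
# in true risk -- vulnerability reduction buys no credit in the formula.
```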
The previous episode also points to another systemic problem: inadequate coordination within DHS. The establishment of the National Preparedness System in 2011 effectively divided infrastructure protection between the National Protection and Programs Directorate and FEMA. The NPPD is responsible for the NIPP, while FEMA is responsible for the NPS. NPPD works with public and private institutions to improve infrastructure protection “inside the perimeter”, while FEMA works with state, local, and nonprofit agencies to improve infrastructure protection “outside the perimeter”. Yet, even before the NPS was established, the programs were split when authority for State and Local Grant Programs was transferred from NPPD to FEMA by the 2006 Post-
Katrina Emergency Management Reform Act [60, p. Sec. 505]. According to a 2011
Congressional Research Service report, “it is not clear to what extent the NIPP process influences the allocation of resources to states and localities. DHS states that information contained in its list of high-priority sites is reviewed when making these grant allocation decisions. However, these grants are managed by FEMA which apparently assesses risk independent of the NIPP. Similarly, port security grants, while managed by FEMA, are influenced largely by the review of vulnerability assessments and security plans performed by the Coast Guard.” [46, p. 29] In short, the NIPP is fragmented.
Perhaps the biggest indictment of DHS CI protection efforts is that the American public doesn’t know what it’s getting for its taxes. Between 2001 and 2008, DHS gave
approximately $12 billion to state and local governments to prepare for and respond to terrorist attacks and other disasters. Yet, as the report notes: “A central question that may be asked is what has been the rate of return, as defined by identifiable and empirical returns on this $12 billion investment?” [59, p. 14]. The first homeland security strategy stipulated that plans should “ensure that the taxpayers’ money is spent only in a manner that achieves specific objectives with clear performance-based measures of effectiveness”
[61, p. xiii]. The 2009 National Infrastructure Protection Plan affirms this commitment by stating it will “track progress toward a strategic goal by measuring beneficial results or outcomes” [8, p. 47]. What’s more, it’s the law: the 1993 Government Performance and
Results Act requires that all federal agencies develop performance measures with respect to achieving their goals. But according to the same report, those metrics don’t exist, and nobody knows how much security was gained by the $12 billion investment. One DHS official attributed the lack of metrics to an “absence of methodology” [59, p. 14].
2.8 Risk Analysis
Risk analysis seeks to inform complex and difficult choices among possible measures to mitigate risk [59, p. 16]. Risk implies uncertain consequences. It is defined as the probability of loss or damage and its impact [62, p. 9]. Accordingly, risk formulations comprise two elements: probability and magnitude. Combined, they provide an estimate of likelihood for a specified loss. Risk analyses may be roughly classified into two groups based on their probability component: 1) event- or threat-driven methodologies, or 2) asset-driven (asset-based) methodologies. Event-driven methodologies begin with a predefined set of initiating events. Asset-driven methodologies begin with the inherent susceptibilities of the system being evaluated, focusing on finding and correcting vulnerabilities regardless of what type of event occurs. A 2001 study indicated that 80% of risk assessment models are of the event- or threat-driven variety [63, pp. 15-16]. The
DHS risk formulation, f(T,V,C), includes both a threat and a vulnerability component, but is properly classified as an event-driven methodology because T is the probability that an asset will be subjected to a particular threat or hazard [8, pp. 32-33]. DHS estimates threat probabilities from attack scenarios developed through a formal elicitation process with subject matter experts based on analytic reports and past records [64, p. 33]. There are two well-known deficiencies to this approach: 1) black swans, and 2) insufficient historical data.
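The contrast between the two methodology families can be sketched schematically; the scenario names, probabilities, and vulnerability scores below are hypothetical:

```python
# Schematic contrast of the two methodology families described above.
# All scenario names, probabilities, losses, and severities are hypothetical.

# Event-driven: risk is accumulated only over a predefined scenario list,
# so any attack mode absent from the list contributes zero risk.
scenarios = [
    {"name": "vehicle bomb", "p": 0.010, "loss": 200.0},
    {"name": "cyber intrusion", "p": 0.050, "loss": 80.0},
]
event_driven_risk = sum(s["p"] * s["loss"] for s in scenarios)

# Asset-driven: start from the asset's inherent susceptibilities and rank
# vulnerabilities for correction, regardless of which event exploits them.
vulnerabilities = [
    {"name": "unscreened perimeter access", "severity": 0.9},
    {"name": "single power feed", "severity": 0.7},
    {"name": "unpatched control system", "severity": 0.8},
]
fix_order = sorted(vulnerabilities, key=lambda v: v["severity"], reverse=True)

print(f"event-driven expected loss: {event_driven_risk:.1f}")
print("asset-driven fix order:", [v["name"] for v in fix_order])
```

The event-driven total is only as complete as its scenario list, which is precisely the black-swan weakness discussed next; the asset-driven ranking requires no event probabilities at all.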
A “black swan” is a high-impact, large-magnitude attack that is a rare, hard-to-predict statistical outlier. Indeed, DHS recognizes that the use of generic attack scenarios based on today’s knowledge can leave risk analyses vulnerable to the unanticipated “never before seen” attack scenario, a black swan, or to being behind the curve in emerging terrorist tactics and techniques [64, p. 59]. Event-driven approaches are appropriate for studying initiating events that are well understood and whose rate of occurrence can be reliably predicted from historical data; however, they fail to consider emerging or unrecognized threats by an innovative adversary or those naturally occurring events for which there is no human record [63, pp. 15-16]. In the insurance or financial sectors, the assessment of risk benefits from a rich and voluminous set of data which can be mined for patterns of historical behavior. While there are various governmental and non-governmental databases on terrorism, these data sources are relatively less robust
[59, pp. 16-17]. The National Research Council observed that: “with respect to
exceedingly rare or never-observed events, the historical record is essentially nonexistent, and there is poor understanding of the sociological forces from which to develop assessment techniques.” They concluded: “Thus, it will rarely be possible to develop statistically valid estimates of attack frequencies (threat) or success probabilities
(vulnerability) based on historical data” [64, pp. 45, 47].
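The statistical difficulty the Council describes can be illustrated with the standard “rule of three”: when zero events are observed in n independent trials, the approximate 95% upper confidence bound on the per-trial probability is 3/n, so sparse observation yields only very loose bounds:

```python
# Illustration (not from the source) of why sparse event data yields
# statistically weak rate estimates. By the "rule of three," observing
# zero events in n independent trials gives an approximate 95% upper
# confidence bound of 3/n on the underlying per-trial probability.

for n in (10, 100, 1000):
    upper = 3 / n
    print(f"n = {n:4d} observation-years, 0 attacks: "
          f"95% upper bound on annual rate ~ {upper:.3f}")
```

Even a century of attack-free observation only bounds the annual probability below about 3%, far too loose to discriminate among assets or scenarios.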
2.9 Risk Management
Risk analysis supports risk management. Risk management is a continual process or cycle in which risks are identified, measured, and evaluated; countermeasures are then designed, implemented, and monitored to see how they perform, with a continual feedback loop for decision-makers’ input to improve countermeasures and consider tradeoffs between risk acceptance and avoidance [59, p. 16]. While the DHS Risk
Management Framework ostensibly embraces these precepts, the 2010 National Research
Council report faulted the absence of a viable cost-benefit analysis capability to adequately inform resource investment decisions [64, p. 68]. Risks can be reduced in a number of ways: by reducing threats (e.g., through eliminating or intercepting the adversary before he strikes); by reducing vulnerabilities (e.g., harden or toughen the asset to withstand the attack); or, by reducing the impact or consequences (e.g., build back-ups systems or isolate facilities from major populations). For each potential countermeasure, the benefit in risk reduction should be determined. More than one countermeasure may exist for a particular asset, or one countermeasure may reduce the risk for a number of assets. Multiple countermeasures should be assessed together to determine their net effects. The cost of each countermeasure must also be determined. Once a set of
countermeasures has been assessed and characterized by its impact on risk and cost
(assuming they are feasible), priorities can be set [62, p. 11].
Even if a truly effective risk assessment tool were to be created to help decision makers, we are reminded that “management of risk is not elimination of risk”. Tools that attempt to quantify risk from human actions will always be inexact. “However, sound data, a well thought-out formula, and consistent application of the methodology are important when attempting to measure terrorism risk to the U.S. and systematically buy down the risk to a particular location or asset” [59, p. 23].
2.10 Related Research
Since 9/11, much research has been undertaken in terrorism risk modeling to help predict and subsequently deter or defeat terrorist actions. This modeling has taken various forms: 1) deterministic modeling, 2) stochastic games, 3) network analysis, and 4) game theory [65].
Deterministic modeling has long been used by the insurance industry to assess risk. For example, insurance companies calculate Probable Maximum Loss (PML) for earthquakes by 1) identifying the fault posing the greatest threat, 2) assigning the maximum credible earthquake to the fault, and 3) calculating portfolio loss assuming this size event occurs on this fault. PML estimation amounts to a series of problems in the domain of engineering, physical, chemical, and biological sciences. The same method can be applied to terrorist attacks such as evaluating the blast effect of a bomb detonation, the extent of fire from a fuel tanker explosion, the radiation fall-out from a radiological dispersal device, the spread of contagion from a smallpox outbreak, etc. These problems
may still be technically complex and challenging, but the core mathematical models for blast analysis, conflagration, atmospheric dispersion, pollution transport, epidemiology, etc. are well established [65, p. 2]. Notable is the research carried out by the RAND
Center for Terrorism Risk Management Policy, which is a joint project of the RAND
Institute for Civil Justice, RAND Public Safety and Justice, and Risk Management
Solutions (RMS). A detailed RAND study based on the RMS model has developed an approach for making allocation decisions robust against uncertainty in model parameterization. A considerable volume of terrorism risk research has also been undertaken to support national public policy, notably at the University of Southern
California’s Center for Risk and Economic Analysis of Terrorism Events (CREATE), a
DHS University Center of Excellence [65, p. 7]. Another method, Probabilistic Risk
Analysis, initially developed to assess the safety of nuclear reactors, has also been applied to terrorism risk modeling [66, p. 11]. Deterministic models, however, are packed with assumptions, and often resort to expert judgment to assign probabilities to terrorist attack scenarios, introducing a range of variability and subjectivity to the results. Furthermore, the deterministic approach largely removes the human behavioral component from estimation. The uncertainty introduced by assumptions is one reason why a deterministic approach can only be partially satisfactory
[65, p. 3].
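The PML procedure just described can be reduced to a short sketch. Everything below (fault names, magnitudes, asset values, and damage ratios) is an invented illustration, not insurance data:

```python
# Hypothetical faults and their maximum credible earthquake magnitudes.
faults = [("Fault A", 6.8), ("Fault B", 7.4), ("Fault C", 7.1)]

# Hypothetical insured portfolio: (asset value in $M, damage ratio by fault),
# where the damage ratio is the fraction of value lost if that fault's
# maximum credible event occurs.
portfolio = [(120.0, {"Fault A": 0.05, "Fault B": 0.30, "Fault C": 0.10}),
             (450.0, {"Fault A": 0.02, "Fault B": 0.15, "Fault C": 0.25})]

def probable_maximum_loss(faults, portfolio):
    """Deterministic PML: assume each fault's worst credible event occurs,
    total the portfolio loss, and report the fault posing the greatest threat."""
    losses = {name: sum(value * ratios[name] for value, ratios in portfolio)
              for name, _magnitude in faults}
    worst = max(losses, key=losses.get)
    return worst, losses[worst]

worst_fault, pml = probable_maximum_loss(faults, portfolio)
```

The same skeleton applies to the terrorism analogues above: only the loss model inside the summation changes (blast effect, conflagration, dispersion, or contagion), not the deterministic worst-case structure.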
Unlike naturally occurring or accidental events, such as floods, earthquakes, or system failures, terrorism is fundamentally adversarial and adaptive [66, p. 5].
Randomness plays a significant part in any human conflict. But there are causal factors as well, which shape the conflict landscape, including the temporal pattern of successful
attacks [65, p. 4]. Stochastic games were first introduced to the literature by Lloyd S.
Shapley in 1953. The first paper on stochastic games considers two-person zero-sum stochastic games. Two-person indicates that there are two players, and zero-sum denotes that a player’s gain is the cost to the other player. Play proceeds in stages, from one state to another according to transition probabilities controlled jointly by the two opponents. The game consists of states and actions associated with each player. Once in a state, each player chooses their respective action. The play then moves into another state with some probability that is determined by the actions chosen and by the state in which they are chosen. Once both opponents have made their decisions in a given stage, each player incurs a cost. An opponent discounts his projected cost by a factor Beta. The usual interpretation of Beta is that decision makers consider that costs incurred in future stages have less value in the present stage. Another interpretation of
Beta in homeland security applications is as an interest rate: the return on investment that could have been earned had the decision maker not invested the funds in security [66, p. 7]. Similarly, the time development of the al-Qaeda conflict is a stochastic process which may be described by a controlled
Markov chain model.13 At any moment in time, the predator (i.e., al-Qaeda) is in some specific state of attack preparedness, while the prey (i.e., USA) is in some corresponding state of defense preparedness. In a democracy, there are rigorous checks and balances imposed on law enforcement and security services. Accordingly, the counterterrorism response has to be commensurate with the terrorism threat: draconian measures (e.g., detention without trial) are only tolerable when the threat level is high. Democracies are prevented constitutionally from mounting an unlimited war on terrorism. Whatever state
al-Qaeda occupies, police and security forces counter the prevailing threat with actions which aim to control terrorism. Because of these controlling counter-actions, the frequency of attack occurrence is not Poissonian,14 as is generally assumed for natural hazards. In mathematical terms, these counter-actions are termed Markov feedback policy15 [65, p. 4]. Stochastic processes and Markov chains, however, require continual adjustment and parameter tuning to reflect observed behavior. The question of their utility, therefore, is whether they are best utilized as explanatory models rather than predictive tools.
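Shapley's construction can be illustrated with a small value-iteration sketch over an invented two-state game. The states, stage payoffs, transition probabilities, and discount factor Beta below are all hypothetical, and the 2x2 solver is the standard textbook formula rather than anything specific to the cited models:

```python
def matrix_game_value(m):
    """Value of a 2x2 zero-sum matrix game for the row (maximizing) player."""
    (a, b), (c, d) = m
    maximin = max(min(a, b), min(c, d))       # best guaranteed row payoff
    minimax = min(max(a, c), max(b, d))       # best guaranteed column cap
    if minimax - maximin < 1e-12:             # saddle point: pure strategies
        return maximin
    return (a * d - b * c) / (a + d - b - c)  # mixed-strategy game value

def shapley_value_iteration(rewards, trans, beta=0.9, tol=1e-9):
    """Iterate v(s) = value of the stage game whose payoffs fold in the
    Beta-discounted continuation values, until the values stop changing."""
    v = {s: 0.0 for s in rewards}
    while True:
        new_v = {s: matrix_game_value(
            [[rewards[s][i][j] + beta * sum(p * v[s2] for s2, p in trans[s][i][j].items())
              for j in range(2)] for i in range(2)]) for s in rewards}
        if max(abs(new_v[s] - v[s]) for s in v) < tol:
            return new_v
        v = new_v

# Invented two-state example: stage payoffs to the attacker (row player)
# and action-dependent transitions between "low" and "high" threat states.
rewards = {"low":  [[0.0, 2.0], [3.0, 1.0]],
           "high": [[5.0, 1.0], [2.0, 4.0]]}
trans = {"low":  [[{"low": 0.9, "high": 0.1}, {"low": 0.6, "high": 0.4}],
                  [{"low": 0.7, "high": 0.3}, {"low": 0.8, "high": 0.2}]],
         "high": [[{"low": 0.3, "high": 0.7}, {"low": 0.5, "high": 0.5}],
                  [{"low": 0.4, "high": 0.6}, {"low": 0.2, "high": 0.8}]]}
v = shapley_value_iteration(rewards, trans)  # discounted game value per state
```

The dependence on continual re-parameterization noted above shows up directly here: the rewards and transition tables must be re-estimated whenever observed behavior changes.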
Network analysis seeks to gain insight into the dynamics of a terrorist network by looking inside it and analyzing the social network of interconnections between nodes corresponding to individual terrorists. Al Qaeda has shown flexibility in adapting to counterterrorism action, and its adaptability has been compared to the ability of a virus to mutate faster than its environment changes. This adaptation process can be simulated by evolving the social network according to a set of basic rules. Nodes communicate with one another to exchange information and financial and logistical resources, subject to the risk that any communication might be detected by security services. Local cells are autonomous to a substantial degree, and recruit attack team members and carry out target reconnaissance.
Large-scale attacks are planned, but the larger and more ambitious an attack becomes, the higher the chance of its being compromised by one of the attack team. If any node is removed from the network, there is a chance that any node connected to it might also be identified and removed. Thus the more hierarchical the network, the greater the chance of destabilization through the arrest of senior leaders [65, pp. 7-8]. The opportunity for surveillance experts to spot a community of terrorists, and gather
sufficient evidence for courtroom convictions, increases nonlinearly with the number of operatives; above a critical number, the opportunity improves dramatically. This nonlinearity emerges from analytical studies of networks using modern graph theory methods. Below the tipping point, the pattern of terrorist links may not necessarily betray much of a signature to the counterterrorism services. However, above the tipping point, a far more obvious signature may become apparent in the guise of a large connected network cluster of dots, which reveals the presence of a form of community. As exemplified by the audacious attempted replay in 2006 of the Bojinka plot,16 too many terrorists spoil the plot. Intelligence surveillance and eavesdropping of terrorist networks thus constrain the pipeline of planned attacks that logistically might otherwise seem almost boundless. For example, in the three years before the 7/7/05 London attack,17 eight plots were interdicted. Thanks to the diligence of the security services, which deter the planning of large numbers of attacks, and interdict most of those that are planned, the frequency of successful terrorist attacks is kept low. Only a small proportion of attacks succeed, and these tend to be those involving fewer operatives [65, pp. 5-6]. Such network analysis, though, has to cope with the problem of missing data. Massive amounts of uncertainty and a dearth of data plague network analysis [65, p. 8]. The amount and type of data required to support network analysis comes at the cost of personal civil liberty.18 As happened after 9/11 and 7/7, after each major terrorist attack, democracies will respond by rebalancing the desire for liberty with the need for security
[65, p. 6].
Unlike above methods, game theory incorporates human behavior directly into its mathematical analysis. The two fundamental precepts underlying game theory are 1) that
protagonists are rational and 2) that they reason intelligently and strategically. In applying game theory to terrorism, it is important to leave behind popular notions of rationality, and to return to formal mathematical definitions of rational behavior, namely that actions are taken in accordance with a specific preference relation called “utility”. There is no requirement that a terrorist’s preference relation should involve economic advantage or financial gain. Nor is it necessary that a terrorist’s preference relation conform to those of society at large. Game theory is not restricted to any one cultural or religious perspective. The test of any mathematical risk model is its explanatory and predictive capability. Among its insights, game theory predicts that, as prime targets are hardened, rational terrorists will tend to substitute softer, lesser targets. Explicit admission of this soft target strategy has since come from Khalid Sheikh Mohammed, the al-Qaeda operations chief and mastermind behind 9/11, after his capture in March 2003. As with burglar alarms, self-protection has the externality of shifting risk to one’s neighbors.
Further validation of the terrorism target prioritization model is provided by analysis of the Irish Republican Army campaign in Ulster and England,19 and the GIA campaign in France.20 The success of this game theory model illustrates the future potential for quantitative terrorism model development [65, p. 7].
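The soft-target substitution effect that game theory predicts can be shown with a toy calculation. The target names, values, and failure probabilities below are invented for illustration:

```python
def attacker_choice(targets):
    """A rational attacker picks the target maximizing expected utility:
    (probability of success) x (value of the target to the attacker)."""
    return max(targets, key=lambda name: (1 - targets[name]["theta"]) * targets[name]["value"])

# Hypothetical target set; theta is the attacker's probability of failure.
targets = {"landmark": {"theta": 0.2, "value": 100.0},
           "transit":  {"theta": 0.3, "value": 60.0}}
first = attacker_choice(targets)    # landmark: 0.8 * 100 = 80 beats 0.7 * 60 = 42

# Harden the prime target, and the rational attacker substitutes the softer one.
targets["landmark"]["theta"] = 0.7  # theta raised by protective investment
second = attacker_choice(targets)   # transit: 42 now beats 0.3 * 100 = 30
```

Note that the utility here is deliberately not monetary: the "value" column can encode any preference relation the attacker holds, consistent with the formal definition of rationality above.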
The 2010 National Research Council report recommends some of the preceding methods for potentially improving DHS risk analysis [1, p. 5], but apparently none have yet been successfully applied. The closest comparable research is a Critical Asset and
Portfolio Risk Analysis for Homeland Security (CAPRA) by McGill. CAPRA is a threat-driven risk methodology comprising six phases: 1) scenario identification, 2) consequence and severity assessment, 3) overall vulnerability assessment, 4) threat
probability assessment, 5) actionable risk assessment, and 6) benefit-cost analysis [63, p.
44]. Aside from the general difficulty of a threat-driven approach, which his research recognizes, CAPRA does not appear to meet the criteria for transparency or national impact that will be presented in the next chapter.
2.11 Summary
9/11 exposed the nation’s vulnerability to domestic catastrophic attack by small groups with limited means. Catastrophic damage may be inflicted within the US by employing weapons of mass destruction or subverting critical infrastructure. While DHS has adopted a risk management framework to protect critical infrastructure, an examination of its efforts reveals that they are uncoordinated and uninformed by supporting metrics. A 2010 Review of the Department of Homeland Security’s Approach to Risk
Analysis by the National Research Council of the National Academies determined that
“DHS’s operationalization of that framework—its assessment of individual components of risk and their integration into a measure of risk—is in many cases seriously deficient and is in need of major revision” [1, p. 11].
2.12 Contributions
This chapter has provided a comprehensive review of the homeland security threat represented by critical infrastructure and weapons of mass destruction.
Furthermore, it has drawn on a wide variety of sources to provide a consolidated view of what is being done to protect them. Similarly, it has provided a comprehensive
analysis of DHS critical infrastructure protection programs that have been in a continual state of evolution since the department was created. The challenge of this research was in presenting a fair and balanced assessment based on authoritative sources addressing problems that remain relevant to current DHS operations.
CHAPTER 3
AN ASSET VULNERABILITY MODEL
3.1 Overview
The purpose of this research is to provide a quantitative risk methodology that will effectively and efficiently guide national investments protecting critical infrastructure and WMD agents that may be targeted to precipitate a domestic catastrophic attack. This chapter will focus on the CI component. The next chapter will examine integrating WMD into the methodology proposed here.
Recall from the previous chapter that the 2002 Homeland Security Act made critical infrastructure protection a priority mission for the Department of Homeland
Security. DHS has since adopted a risk management approach to buy down risk through protective measure investments. A 2010 Review of the Department of Homeland
Security’s Approach to Risk Analysis by the National Research Council of the National
Academies concluded that “with the exception of risk analyses for natural disaster preparedness, the committee did not find any DHS risk analysis capabilities and methods that are yet adequate for supporting DHS decision making… Moreover, it is not yet clear
that DHS is on a trajectory for development of methods and capability that is sufficient to ensure reliable risk analyses other than for natural disasters” [1, pp. 2-3].
This chapter introduces an Asset Vulnerability Model (AVM) designed to work with the DHS Risk Management Framework to 1) provide a baseline estimation of critical infrastructure protection status, 2) perform cost-benefit analysis identifying optimum resource investments, and 3) offer decision support tools helping decision makers at all levels make informed choices when investing scarce national resources. This chapter begins by proposing design criteria based upon analysis of current DHS efforts.
It then derives a risk formulation and supporting methodology based on the design criteria. It examines an instantiation of the model and assesses its performance. It concludes by comparing AVM with other critical infrastructure risk assessment methodologies against criteria developed in this section.
3.2 Design Criteria
The Government Performance and Results Act (GPRA) of 1993 seeks greater accountability of taxpayer expenditures by requiring federal agencies to set goals and use performance measures for management and budgeting. GPRA requires agencies to develop long-term general goals and strategic plans, to set specific performance goals (targets) based on those general goals, and to report annually on actual performance compared to the targets [67, pp. ii, 5]. DHS critical infrastructure protection programs are predicated on the risk formulation R=f(C,V,T) where C is the consequence that disrupting or destroying an asset will have on public health, safety, economy, and government; V is the
vulnerability of an asset to exploitation, disruption, or destruction, expressed as a probability it will succumb to a particular threat or hazard; and T is the probability an asset will be subjected to a particular threat or hazard [8, pp. 32-33]. This formula is central not only to the allocation of DHS grant programs, but to all of the department’s activities [59, p. 22], yet no such risk assessment is included in the annual DHS performance report accompanying its budget justification to Congress [46, p. 27].21 The
GPRA requirement for accountability suggests three overarching requirements for any corresponding performance measure: 1) it must provide an indication of current status;
2) it must be able to demonstrate incremental improvement; and 3) it must include associated costs. The choice of metric and corresponding risk formulation will be key to meeting these requirements.
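For illustration, the multiplicative reduction of this formulation, R = C*V*T, which DHS has used in practice and which the National Research Council criticizes for ignoring dependencies among the terms, can be sketched with invented asset figures:

```python
def risk_score(consequence, vulnerability, threat):
    """Multiplicative reduction of R = f(C, V, T). This form ignores
    dependencies among the terms, a known criticism of the approach."""
    return consequence * vulnerability * threat

# Hypothetical assets: (C in $M, V as P(success | attack), T as P(attack)).
assets = {"dam":      (500.0, 0.4, 0.01),
          "refinery": (200.0, 0.7, 0.05),
          "bridge":   ( 80.0, 0.9, 0.02)}
ranked = sorted(assets, key=lambda name: risk_score(*assets[name]), reverse=True)
# refinery (7.0) ranks above dam (2.0) and bridge (1.44)
```

A ranking like this satisfies the first GPRA requirement (an indication of current status) only superficially: it yields a score, but as discussed later in this chapter, the inputs and their interactions remain the hard part.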
3.2.1 Choosing A Metric
Insofar as measuring risk is concerned, it is essential to identify the primary drivers of risk and collect the most appropriate data to quantify those risks. Collecting and measuring data that is readily available but not central to risk analysis yields quantifiable risk scores, but the results are indefensible and meaningless.
Accordingly, three questions frame the choice of an appropriate metric: 1) what is the risk to, 2) what is the risk from, and 3) how much risk is acceptable [59, pp. 17-18].
In answer to the first question, the 2002 Homeland Security Act establishes that the risk is to critical infrastructure, making its protection a priority mission of the Department of
Homeland Security [68]. In answer to the second question, Homeland Security
Presidential Directive #7 (HSPD-7), Critical Infrastructure Identification, Prioritization,
and Protection, establishes that the risk is from attacks that could exploit or destroy infrastructure creating effects comparable to those from the use of a weapon of mass destruction [2]. In answer to the third question we turn to earlier work in game theory.
Game theory has yielded fundamental insights into processes where indeterminate choice, either conscious or unconscious, is a critical element. Game theory derives its power by framing choice as an optimizing factor between competing agents with dependent relationships. The strength of this approach is that it reduces the many components comprising choice into a singular goal representing agents’ interests, or
“utility”. Thus “winning” becomes the optimizing factor in strategy games, “profit” becomes the optimizing factor in economics, and “propagation” becomes the optimizing factor in evolution. Game theory lends understanding and prediction to fields that were once thought unknowable and chaotic. Thus, it was not unexpected that game theory should yield valuable insights when it started to be applied to the problem of terrorism in the 1970s [65, p. 7].
In 1988, Todd Sandler and Harvey Lapan used game theory to examine the strategic relationship between terrorists’ choice of targets and targets’ investment decisions. They discovered that an investment decision by one target had a direct impact on the vulnerability of, and likelihood of attack on, the other. From this insight they concluded that 1) a coordinated defense policy among all targets is more efficient than an uncoordinated one, and 2) the optimum defense strategy is to protect all targets equally, not necessarily maximally [69, pp. 249-254]. Sandler and Lapan’s findings, later confirmed and extended by the Center for Risk and Economic Analysis of Terrorism
Events (CREATE) [70, pp. 20-24], were dependent on a particular value representing the terrorist’s probability of attack failure, which they designated as θ.
Sandler and Lapan’s research suggests a metric based on θ, an attacker’s probability of failure. If critical infrastructure is the designated target, then θ answers the questions of what the risk is to, what the risk is from, and how much risk is acceptable.
Assessing critical infrastructure in terms of its ability to withstand attack also satisfies the overarching requirement of determining current protection status. How to make this assessment is the next question.
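Sandler and Lapan's equal-protection result can be illustrated with a greedy water-filling sketch. The discrete budget units, the linear gain per unit, and the θ figures below are simplifying assumptions made here for illustration:

```python
def equalize_theta(thetas, budget_units, gain=0.01):
    """Greedy water-filling: each unit of protection budget raises theta
    (the attacker's probability of failure) at the currently weakest asset,
    implementing 'protect all targets equally, not necessarily maximally'."""
    thetas = dict(thetas)                      # do not mutate the caller's copy
    for _ in range(budget_units):
        weakest = min(thetas, key=thetas.get)  # asset a rational attacker would pick
        thetas[weakest] = min(1.0, thetas[weakest] + gain)
    return thetas

protected = equalize_theta({"a": 0.2, "b": 0.5, "c": 0.4}, budget_units=40)
# all three assets end up equally protected at roughly theta = 0.5
```

The coordination point is visible in the outcome: no single asset is hardened maximally; instead the budget flows to whichever target the attacker would currently prefer, leveling θ across the portfolio.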
Figure 3-1: Sandler & Lapan Model [69, p. 250]
3.2.2 Formulating Risk
In its 2010 Review of the Department of Homeland Security’s Approach to Risk
Analysis, the National Research Council enumerated the many challenges to developing risk formulas amenable to homeland security in general, and critical infrastructure in particular. These challenges are summarized in Table 3-1 [64, p. 51]. The first two challenges address the difficulty of reliably estimating threat as part of any formulation.
The third and fourth challenges indicate the formulation should include some indication of tolerance or confidence in the results, and properly account for data dependencies within the calculation. The fifth, sixth, and seventh challenges require that the formulation be comprehensive in scope, addressing all significant consequence factors across human, temporal, and spatial dimensions. And the last three challenges stipulate that results must be meaningful to a wide variety of stakeholders in such a manner that will help them make appropriate decisions at different levels of management. The challenges listed by the National Research Council describe an ill-behaved problem whose risk calculation can fluctuate wildly depending on input that is not well understood. As with any ill-behaved problem, conditioning can help tame the calculations and provide insight through approximate results. Accordingly, the National Research Council’s list of challenges suggests bounding criteria that may help condition an appropriate risk analysis.
Table 3-1: Homeland Security Risk Analysis Challenges/Criteria
1. Availability and reliability of data
2. Modeling the decision making and behaviors of intelligent adversaries
3. Appropriately characterizing and communicating uncertainty in models, data inputs, and results
4. Methodological issues around implementing risk as a function of threats, vulnerabilities, and consequences
5. Modeling cascading risks across infrastructures and sectors
6. Incorporating broader social consequences
7. Dealing with different perceptions and behaviors about terrorism versus natural hazards
8. Providing analyses of value to multiple, distributed decision makers
9. Varying levels of access to necessary information for analysis and decision making
10. Developing risk analysis communication strategies for various stakeholders
3.2.2.1 Asset-Driven Approach
As detailed in the previous chapter, an event-driven approach to risk analysis for malicious anthropic threats, such as that posed by the DHS risk formulation R=f(C,V,T), suffers from two well-known problems: 1) statistical outliers called “black swans”, and
2) insufficient historical data to support robust statistical analysis. These shortcomings render traditional methods such as event trees, event sequence diagrams, fault trees, and reliability block diagrams inadequate for accounting for human actions. Indeed, according to one report, “the threat element of the risk reduction formula is what differentiates terrorism from all other hazards” [59, p. 28]. While the problem may be unique, the situation is not. In its 2010 report the National Research Council noted a similar inability for scientists to predict earthquakes. In response, local governments within affected regions have adopted an asset-driven risk approach implementing stringent seismic standards within their building codes [1, p. 46]. The results have been palpable: “Prior to the implementation of codes, magnitude 6 earthquakes (on the Richter
scale) were a major source of risk, whereas now the potential for loss given their occurrence is much less. Thus, from some decision makers’ point of view, a magnitude 6 earthquake afflicting southern California is no longer a threat despite having been so in the not too distant past” [63, p. 13]. The observation here is that an asset-driven approach to risk analysis focusing on vulnerability can prove effective when an event-driven approach focusing on threat is not feasible.
3.2.2.2 Threat Localization
As noted by the National Research Council, “Terrorism risk analysis is hampered by datasets that are too sparse and targeted events that are too situation specific” [1, p.
50]. By comparison, natural hazards have amassed a great deal of data and been subject to extensive statistical analysis. Even with this advantage, forecasters still cannot predict with certitude where or when a natural disaster will strike. The primary benefit of statistical analysis to hazard prediction has been in localizing hazard effects. Using statistical analysis, forecasters can assign reasonable probabilities that a given location will experience a given disaster over a given period. Localization is important to optimizing resource allocation by directing protection measures where they are most needed. Thus, for example, while earthquakes are a national phenomenon, California incorporates more stringent seismic standards in its building codes than Connecticut. For similar reasons localization is also important to critical infrastructure protection: resources should be directed where they are most needed. From this perspective, a large historical database is unnecessary. The potential target set has already been identified in
HSPD-7: infrastructure that may be exploited or destroyed to create effects comparable
to those from the use of a weapon of mass destruction. Of the sixteen infrastructure sectors currently recognized by the federal government, the nine listed in Table 3-2 may be targeted to create catastrophic effects on such a scale.
Table 3-2: Domestic Catastrophic Threats (Infrastructure)
1. Chemical Plants
2. Dams
3. Energy
4. Financial Services
5. Food & Agriculture
6. Information Networks
7. Nuclear Reactors, Materials, & Waste
8. Transportation Systems
9. Water & Wastewater Systems
Not included in this list are commercial facilities, communications, critical manufacturing, defense industrial base, emergency services, government facilities, and healthcare and public health. As described in the previous chapter, commercial facilities include 460 skyscrapers, the loss of two of which proved particularly deadly on 9/11.
But the collapse of the Twin Towers was due to subversion of the transportation sector turning passenger jets into guided missiles. The buildings themselves did not pose a catastrophic threat and had withstood a conventional bombing attack in 1993. The criticality of large buildings rests in their value as secondary targets where large numbers of people congregate. By themselves they cannot initiate catastrophic attack. The same holds true for government facilities: their attack would be mostly symbolic and, as previously described, government would survive. Similarly, emergency services and healthcare and public health do not present targets for precipitating catastrophic attack.
Their criticality rests with saving lives following an attack by preparing to coordinate mass rescue and care in a potentially contaminated environment. Conversely, an attack on communications, critical manufacturing, and the defense industrial base could have
cascading effects with broader consequences to the national economy and national defense. While certainly disruptive, such attacks are unlikely to cause large-scale deprivation of essential services comparable to a successful attack on the federal banking system. Their subversion is unlikely to be catastrophic, whereas a successful attack on the sectors listed in Table 3-2 could be.
Localization has three significant implications for homeland security. First, and worth repeating, it overcomes prevailing concerns about developing strong statistical models similar to those for natural hazards. Second, it shifts attention from protecting the US population as the target of direct attack to protecting it as the target of indirect attack. Third, as will be seen in the next chapter, it overcomes past problems with developing a definitive inventory of critical assets.
3.2.2.3 Transparency & Repeatability
In its 2010 Review of the Department of Homeland Security’s Approach to Risk
Analysis, the National Research Council cautioned that a bad metric can be worse than no metric [1, p. 66]. In this respect the National Research Council decried the oversimplification of the DHS risk formula reducing R=f(C,V,T) to R=C*V*T, ignoring inherent dependencies among the terms [1, p. 46]. By the same token, the National
Research Council warned about undue complexity undercutting transparency, making it difficult to validate proposed formulations. The problem with developing high fidelity models is the same lack of historical data that troubles threat estimation. In the absence of hard data, assumptions must be made. The more complex the model, the more assumptions must be made, compounding potential errors. The middle ground,
recommended by the National Research Council, is to “ensure that vulnerability and consequence analyses for infrastructure protection are documented, transparent, and repeatable” [1, pp. 64-65].
3.2.2.4 Qualified Results
In a similar fashion, the National Research Council expressed concern about fostering a blind faith in numbers. A metric without any qualification on its value can also lead to bad decisions. For example, a measure projecting a 50% improvement appears meaningfully better than one projecting a 45% improvement; but if the first measure carries a +/- 5% margin of error, the difference is not statistically significant.
To preclude such errors in false precision, the National Research Council recommended that “DHS should ensure that models undergo verification and validation—or sensitivity analysis at the least” [1, p. 12].
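The recommended sensitivity analysis can be sketched as a simple Monte Carlo exercise; the risk formula, the 5% input uncertainty, and the asset figures below are invented for illustration:

```python
import random

def risk_score(c, v, t):
    return c * v * t  # simplified multiplicative form, used here only for illustration

def sensitivity(c, v, t, rel_err=0.05, trials=10_000, seed=1):
    """Monte Carlo sketch: jitter every input by up to +/- rel_err and
    report the resulting spread of risk scores, exposing false precision."""
    rng = random.Random(seed)
    scores = [risk_score(c * rng.uniform(1 - rel_err, 1 + rel_err),
                         v * rng.uniform(1 - rel_err, 1 + rel_err),
                         t * rng.uniform(1 - rel_err, 1 + rel_err))
              for _ in range(trials)]
    return min(scores), max(scores)

low, high = sensitivity(200.0, 0.7, 0.05)  # nominal score is 7.0
```

Even 5% uncertainty in each input spreads the score by roughly 15% in either direction, so two assets whose nominal scores differ by less than that spread cannot honestly be ranked against each other.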
3.2.2.5 Comprehensive Scope
While endorsing the DHS risk formulation R=f(C,V,T), the National Research
Council contends that the DHS implementation is deficient in scope omitting key contributing factors to vulnerability. Specifically, the report states that “vulnerability is much more than physical security; it is a complete systems process consisting at least of exposure, coping capability, and longer term accommodation or adaptation” [1, p. 62]. In other words, vulnerability needs to consider mitigating factors both before and after an attack; or put more colloquially, “to the left of the boom and the right of the boom”. This suggests that risk analysis draw on lessons from the Federal Emergency Management
Agency (FEMA) Integrated Emergency Management System (IEMS). IEMS was derived from the Comprehensive Emergency Management (CEM) system proposed by the National Governors Association in 1978. CEM was based on observations by the
Disaster Research Center that both manmade and natural catastrophes placed similar response-generated demands on society. CEM consequently proposed a common framework for addressing all phases of all types of disaster. The current IEMS identifies those phases as prevent, protect, mitigate, respond, and recover [71, pp. 23-26]. A comprehensive vulnerability analysis should accordingly address mitigating factors across the five phases of emergency management.
3.2.2.6 National Impact
The National Research Council also faulted deficiencies in DHS’ current assessment of consequence. “The fundamental challenge for analyzing the consequences of a terrorist event is how to measure the intangible and secondary effects. DHS’s consequence analyses tend to limit themselves to deaths, physical damage, first-order economic effects, and in some cases, injuries and illness. Other effects, such as interdependencies, business interruptions, and social and psychological ramifications, are not always modeled, yet for terrorism events these could have more impact than those consequences that are currently included” [1, p. 51]. Put more succinctly, consequence needs to consider the broader effects of a disaster beyond the immediate damage, both in time and space. As homeland security concerns impacts to the nation’s welfare, it suggests that consequence should be assessed in similar terms. Currently, the federal government assesses the welfare of the nation in terms of economic vitality and individual longevity. The two key measures are Gross Domestic Product (GDP) and
national mortality. In their detailed formulation by the Department of Commerce and the Centers for Disease Control and Prevention, these measures capture the broader effects of impacts to national welfare in both time and space. Indeed, in 2001 the gross domestic product dropped 47% and the national homicide rate increased 20%, registering the impact of 9/11.
3.2.2.7 Applicable Results
Ultimately, the effectiveness of any risk analysis is judged by its usefulness to decision makers in managing resources. According to the National Research Council, the attributes of a good risk analysis include the ability to 1) convey current risk levels, 2) support cost-benefit analysis, 3) demonstrate risk reduction effects across multiple assets at different levels of management, and 4) measure and track investments and improvement in overall system resiliency over time [1, pp. 68-70].
Table 3-3: Risk Formulation Criteria
1. Asset-Driven Approach
2. Threat Localization
3. Transparency & Repeatability
4. Qualified Results
5. Comprehensive Scope
6. National Impact
7. Applicable Results
3.3 AVM Description
An Asset Vulnerability Model is now introduced to 1) reasonably define and measure risk, 2) provide a means for measuring how developing capabilities are reducing that risk, and 3) illustrate how to identify specific capability gaps which might serve as an input for the allocation of homeland security grants. AVM comprises 1) baseline
analysis, 2) cost-benefit analysis, and 3) decision support tools. AVM analysis is predicated on a risk measure designated Θ, representing an attacker’s probability of failure, based on the Sandler and Lapan value θ. The two values differ in that the Sandler and Lapan θ represents an attacker’s perception while the AVM Θ represents the defender’s known understanding. AVM is not concerned with whether or not the attacker knows the true value of Θ. The switch in perspective was made to accommodate the transparency and repeatability criteria, thus reducing error margins.
3.3.1 Baseline Analysis
Baseline analysis produces a risk profile of all critical assets based on Θ. The
Greek capital theta (hereafter referred to as “theta”) represents the probability of attack failure on a given asset. Theta is calculated in an asset-based risk formula (criteria 1) addressing the five phases of emergency management (criteria 5). A separate Θ is calculated for every critical infrastructure asset that may be exploited or destroyed to create WMD effects, as listed in Table 3-2 (criteria 2). The proposed risk formulation for Θ is as follows:
Θ = P(dis)*P(def)*P(den)*P(dim)*%(dam) (3-1)
where:
P(dis) = Probability an attack can be detected/disrupted (3-1.1)
P(def) = Probability an attack can be defeated (3-1.2)
P(den) = Probability a worst case disaster can be averted (3-1.3)
P(dim) = Probability 100% of survivors can be saved (3-1.4)
%(dam) = % decrease in GDP × % increase in mortality rate (3-1.5)
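The formulation in Eq. 3-1 is a simple product of its five terms and can be sketched directly. The following Python function is a minimal illustration; the function name and the sample input values are hypothetical, not drawn from the dissertation's data.

```python
def theta(p_dis, p_def, p_den, p_dim, pct_dam):
    """Attacker's probability of failure for one asset, per Eq. 3-1.

    The first four inputs are probabilities in [0, 1]; pct_dam is the
    magnitude term %(dam) from Eq. 3-1.5, also expressed as a fraction.
    """
    return p_dis * p_def * p_den * p_dim * pct_dam

# Illustrative values only (not real assessments):
print(theta(0.6, 0.7, 0.5, 0.8, 0.02))
```

Because Θ is multiplicative, a zero in any single phase drives the whole score to zero, which is why the text later insists on a nonzero default for unassessed terms.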
3.3.1.1 Probability of Disrupting an Attack
P(dis) corresponds to the “prevent” phase of emergency management. As noted by former Secretary Chertoff, “…our best solution is a solution that prevents a terrorist act before it actually comes about” [59, p. 27]. P(dis) is the probability an attack can be detected and disrupted before it is launched. P(dis) differentiates threat warning from threat prediction. Threat warning is short term, based on current indicators, whereas threat prediction is long term, based on historical data. The distinction is analogous to the difference between hurricane warning and hurricane prediction, except that, while it is impossible to stop a hurricane, it may be possible to stop an attack. As noted by
Secretary Chertoff, “a critical element in that is our early warning system, which is intelligence…” [59, p. 27]. P(dis) may be calculated from historical intelligence data on past attempts to attack domestic critical infrastructure. The basic calculation divides the number of attempts that were thwarted by the total number of attempts, both thwarted and executed. The calculation should be specific to each CI sector.
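The basic calculation described above can be sketched as follows. This is a hedged illustration: the function name and the sample counts are hypothetical, and the handling of a sector with no recorded attempts is an assumption (the text does not specify a default for that case).

```python
def p_dis(thwarted, executed):
    """P(dis) for one CI sector: the share of known attack attempts
    that were detected and disrupted before launch, i.e.
    thwarted / (thwarted + executed). Counts would come from
    historical intelligence data for that sector."""
    total = thwarted + executed
    if total == 0:
        # Assumption: with no historical attempts, claim no credit.
        return 0.0
    return thwarted / total

# Illustrative sector counts (not real data):
print(p_dis(thwarted=8, executed=2))  # prints 0.8
```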
3.3.1.2 Probability of Defeating an Attack
P(def) corresponds to the “protect” phase of emergency management. P(def) is the probability an attack that is launched can be defeated. This term examines the amount and type of security protecting the CI target in question. P(def) may be derived from the Protective Measure Index (PMI) assessed from security surveys and vulnerability assessments currently conducted by DHS. Trained Protective Security
Advisors (PSAs) work with owners and operators to examine more than 1,500 variables covering six major components – information sharing, security management, security force, protective measures, physical security, and dependencies – as well as 42 more specific subcomponents within those categories [47, p. 9]. Vulnerability analysis is predicated on about 25 attack scenarios generated for each sector. The attack scenarios are developed through a structured process by intelligence analysts drawing on experts, previous attacks, and reporting [1, p. 33]. Again, the threat estimation is not being used to predict the probability of attack but rather the type of attack in order to assess current defensive measures. While this too may be imperfect, it has the advantage of consistency
(criteria 3) as survey results are turned over to the Argonne National Laboratory to produce a Protective Measures Index score ranging from 0 (low protection) to 100 (high protection) [47, p. 9]. The PMI is normalized for use in the AVM risk formulation. Of course, it will take time to conduct vulnerability assessments on all assets, making a default PMI necessary for baseline analysis. The default P(def) value may be calculated from historical data by dividing the number of thwarted attacks by the total number of attacks on a particular type of asset.
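A sketch of the two paths described above: normalize an available PMI score, or fall back to the historical default for that asset type. The simple linear normalization (PMI/100) is an assumption for illustration; the text says only that the PMI "is normalized."

```python
def p_def(pmi=None, thwarted=None, total=None):
    """P(def): probability a launched attack is defeated.

    If an Argonne PMI score (0-100) is available, normalize it to
    [0, 1] (linear scaling is an assumption, not stated DHS practice).
    Otherwise fall back to the historical default for this asset type:
    thwarted attacks divided by total attacks.
    """
    if pmi is not None:
        return pmi / 100.0
    if total:
        return thwarted / total
    return 0.0  # assumption: no data at all yields no defensive credit
```

For example, a surveyed asset with PMI 75 gets P(def) = 0.75, while an unsurveyed asset of a type with 3 thwarted attacks out of 10 gets the default 0.3.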
3.3.1.3 Probability of Denying Success
P(den) corresponds to the “mitigate” phase of emergency management. P(den) is the probability that the worst case disaster can be averted, even if an attacker successfully breaches an asset’s security. P(den) examines the failure mode and/or redundancy built into the asset. An asset that is failsafe or redundant, as applicable, may be assigned a
P(den) of 1, meaning that no consequences result; the attack was for naught. Lesser values may be assigned to P(den) using the same data and methodologies used for producing the Protective Measure Index (see 3.3.1.2). The default value should be some constant greater than zero.
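The assignment rule above is simple enough to state as code. The sketch below follows the text's three cases; the particular default constant (0.05) is an illustrative assumption, since the text requires only "some constant greater than zero."

```python
# Illustrative default for unassessed assets; the text requires only
# a constant greater than zero, so 0.05 is an assumption.
DEFAULT_P_DEN = 0.05

def p_den(failsafe_or_redundant, surveyed_value=None):
    """P(den): probability the worst case disaster is averted even if
    an asset's security is breached. Failsafe or redundant assets get
    1.0; otherwise use a surveyed value (derived as for the PMI), or
    the nonzero default when no assessment exists."""
    if failsafe_or_redundant:
        return 1.0
    if surveyed_value is not None:
        return surveyed_value
    return DEFAULT_P_DEN
```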
3.3.1.4 Probability of Diminishing Consequences
P(dim) corresponds to the “response” phase of emergency management. P(dim) is the probability that 100% of survivors can be saved from the consequences of the worst case disaster. In 2012, FEMA deployed a web-based Threat and Hazard Identification and Risk Assessment (THIRA) system and made it a requirement for states and territories competing for funds under the Homeland Security Grant Program [56]. THIRA assists states and territories with examining their “core capabilities” established in the 2011
National Preparedness Goal. THIRA collects data on regional response capabilities in critical transportation, environmental response, fatality management services, mass care services, mass search and rescue operations, on-scene security and protection, operational communications, public and private services and resources, and public health and medical services [72, pp. 3-4]. This database could assist in assessing regional capabilities within 72 hours of an incident, the critical window for saving lives [73, p. 11]. P(dim) may be calculated many different ways, but the central concept is to divide available capacity by the number of anticipated casualties. The calculation may take into account a reduction in capacity due to losses among the responding agencies caused by the incident. A default value may be calculated from historical data on similar size incidents independent of cause.
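One possible realization of the central concept (capacity divided by anticipated casualties, optionally discounted for responder losses) is sketched below. The text explicitly allows many calculations, so this is one hedged variant; the function name, the responder-loss parameter, and the sample numbers are assumptions.

```python
def p_dim(capacity, casualties, responder_loss_fraction=0.0):
    """P(dim): probability that 100% of survivors can be saved,
    proxied by regional response capacity (e.g., drawn from THIRA
    data) divided by anticipated casualties, capped at 1.0.
    responder_loss_fraction discounts capacity for losses among the
    responding agencies caused by the incident itself."""
    if casualties <= 0:
        return 1.0  # no casualties anticipated: nothing to save
    effective = capacity * (1.0 - responder_loss_fraction)
    return min(1.0, effective / casualties)

# Illustrative numbers only:
print(p_dim(capacity=500, casualties=1000, responder_loss_fraction=0.1))
```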
3.3.1.5 Percent Damages
The %(dam) parameter simultaneously represents the “recovery” phase of emergency management and the magnitude component of the risk assessment formula. It captures the national impact on lives and the economy for incidents of both mass destruction and disruption (criteria 6) by tapping the extensive capabilities already invested by the federal government to collect this information. Just as important, both sets of data can be expressed as percentages, facilitating mathematical manipulation while avoiding awkward value judgments comparing the loss of lives and property.
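Since both damage measures are percentages, Eq. 3-1.5 reduces to a single multiplication. The sketch below assumes both inputs are expressed as fractions (e.g., 0.02 for a 2% drop); that convention is an assumption for illustration, as is the function name.

```python
def pct_dam(gdp_decrease_pct, mortality_increase_pct):
    """%(dam), per Eq. 3-1.5: the percentage decrease in GDP times the
    percentage increase in the national mortality rate attributable to
    the worst case disaster. Inputs are fractions (0.02 means 2%)."""
    return gdp_decrease_pct * mortality_increase_pct
```

Expressing both terms as percentages keeps the magnitude component dimensionless, which is what allows it to multiply cleanly into the probability terms of Eq. 3-1.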
3.3.2 Cost-Benefit Analysis
AVM baseline analysis identifies CI assets with highest risk scores. The next step is to identify protective measures expected to result in the greatest reduction of risk for any given investment. Protective measures should include actions that can prevent, deter, or mitigate a threat, reduce a vulnerability, minimize the consequences, or enable timely and efficient response and recovery [62, p. 17]. Cost-benefit analysis finds the optimum combination of protective improvement measures proposed for each asset.
Cost-benefit analysis is conducted using ΔΘ and D(ΔΘ) estimations for each improvement measure. Delta theta (ΔΘ) is the estimated increase in Θ for the proposed security measure. Delta theta is provided in component form as ΔP(dis), ΔP(def), ΔP(den), and ΔP(dim). The magnitude component of the Θ formulation, %(dam), remains constant for Θ and ΔΘ as it represents the worst case disaster if the asset is subverted or disrupted. An associated cost component is provided for each ΔΘ in the form of D(Δdis), D(Δdef), D(Δden), and D(Δdim). Each proposed security improvement has an associated set of paired ΔΘ and D(ΔΘ) data tuples. Only the ΔP(def) and ΔP(den) tuples are directly associated with an asset. ΔP(dis) and ΔP(dim), representing national and regional improvement proposals, are proportionally assigned to affected assets. The given ΔΘ and D(ΔΘ) values are discrete, representing specific capabilities for purchase. The choice whether to buy them or not is also discrete; there are no fractional solutions. For this reason, values are provided in tabular format as opposed to polynomial curves. The data sets associated with each improvement measure are also independent. This stipulation simplifies the estimation of ΔΘ by eliminating dependency analysis. Estimating ΔΘ will be sufficiently difficult using either expert analysis or computer modeling. Consistency will be key (criteria 3), suggesting that ΔΘ should be estimated by a central source. Precision can be gradually improved by comparing estimated and actual performance data over time. Thus, each asset may have a number of tables representing corresponding improvement measures.
Cost-benefit analysis evaluates each proposed measure to determine which provides the greatest return on investment. First, the combined ΔΘ and D(ΔΘ) is calculated for each measure according to the formulations shown in 3-2 and 3-3. Then a proportional value is calculated for each measure by dividing ΔΘ by D(ΔΘ). The measure with the highest proportional value represents the “biggest bang for the buck” and is nominated for that asset.
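The nomination step reduces to ranking discrete measures by their risk-reduction-to-cost ratio. A minimal sketch follows; the tuple layout (name, delta theta, cost) and the sample proposals are illustrative assumptions, not AVM's actual data schema.

```python
def nominate(measures):
    """Given proposed improvement measures as (name, delta_theta, cost)
    tuples, return the name of the measure with the highest
    proportional value delta_theta / cost, i.e. the greatest risk
    reduction per dollar ('biggest bang for the buck')."""
    best = max(measures, key=lambda m: m[1] / m[2])
    return best[0]

# Hypothetical proposals for one asset (values are illustrative):
proposals = [
    ("fence upgrade",  0.010,  50_000),   # ratio ~2.0e-7
    ("sensor network", 0.030, 100_000),   # ratio ~3.0e-7
    ("redundant feed", 0.025, 200_000),   # ratio ~1.25e-7
]
print(nominate(proposals))  # prints "sensor network"
```

Because the purchase decisions are discrete with independent data sets, a simple per-measure ratio comparison suffices here; no fractional or combinatorial optimization is needed at this step.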