Managing Strategic Surprise
The scope and applicability of risk management has expanded greatly over the past decade. Banks, corporations, and public agencies employ its new technologies both in their daily operations and in their long-term investments. It would be unimaginable today for a global bank to operate without such systems in place. Similarly, many areas of public management, from NASA to the Centers for Disease Control, have recast their programs using risk management strategies. It is particularly striking, therefore, that such thinking has failed to penetrate the field of national security policy. Venturing into uncharted waters, Managing Strategic Surprise brings together risk management experts and practitioners from different fields with internationally recognized national security scholars to produce the first systematic inquiry into risk and its applications in national security. The contributors examine whether advanced risk assessment and management techniques can be successfully applied to address contemporary national security challenges.

Paul Bracken is Professor of Management and Political Science at Yale University. He is a member of the Council on Foreign Relations and works with private equity and hedge funds on using scenarios for investment strategies.

Ian Bremmer is President of Eurasia Group, the world’s leading political risk consultancy. He is also Senior Fellow at the World Policy Institute and Contributing Editor of The National Interest. His research focuses on states in transition, global political risk, and US national security.

David Gordon is Director of Policy Planning at the US Department of State. He previously served as Vice-Chairman of the National Intelligence Council (NIC) in the Office of the Director of National Intelligence (DNI) and is the former Director of the CIA’s Office of Transnational Issues (OTI). He has directed major analytic projects on country-level economic and financial crises, emerging infectious disease risks, global demographic trends, and the changing geopolitics of energy.
Managing Strategic Surprise
Lessons from Risk Management and Risk Assessment
Edited by

Paul Bracken, Yale University
Ian Bremmer, Eurasia Group
David Gordon, United States Department of State

Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo
Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521883153
© Cambridge University Press 2008
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2008
ISBN-13 978-0-511-42316-1 eBook (EBL)
ISBN-13 978-0-521-88315-3 hardback
ISBN-13 978-0-521-70960-6 paperback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents
List of figures
List of tables
List of contributors
Acknowledgements

1 Introduction
   Paul Bracken, Ian Bremmer and David Gordon
2 How to build a warning system
   Paul Bracken
3 Intelligence management as risk management: the case of surprise attack
   Uzi Arad
4 Nuclear proliferation epidemiology: uncertainty, surprise, and risk management
   Lewis A. Dunn
5 Precaution against terrorism
   Jessica Stern and Jonathan B. Wiener
6 Defense planning and risk management in the presence of deep uncertainty
   Paul K. Davis
7 Managing energy security risks in a changing world
   Coby van der Linde
8 What markets miss: political stability frameworks and country risk
   Preston Keat
9 The risk of failed-state contagion
   Jeffrey Herbst
10 Conclusion: managing strategic surprise
   Paul Bracken, Ian Bremmer and David Gordon

Index

Figures
2.1 A framework for warning
2.2 Risk management framework – example
2.3 Two risk management profiles for US defense
2.4 A warning value chain
4.1 WMD proliferation triad
6.1 DoD’s enduring decision space
6.2 An illustrative point scenario
6.3 Schematic of uncertainty-sensitive planning
6.4 Defense planning in a portfolio management framework
6.5 Exploratory analysis in a scenario space
6.6 Success tree for stylized defense of an ally
6.7 Exploratory analysis results showing benefits of a new capability option
6.8 Steps in using a portfolio analysis tool
6.9 Composite cost-effectiveness as a function of view
6.10 An illustrative spider plot comparing options along multiple dimensions
6.11 Schematic of a parametric outcome of exploratory analysis using a capabilities model
6.12 Schematic use of DynaRank to build a program as a function of cumulative cost savings desired, relative to a baseline program
6.13 Creating portfolio views tuned to high-level decision making
6.14 Summary of where decision makers want to be
7.1 Determinants of the risk landscape
8.1 The variables of Eurasia Group’s Global Political Risk Index
8.2 Strength of government in Russia 1997–2000
8.3 Strength of opposition in Russia 1997–2000
8.4 Environment for the private sector and social tension
8.5 Stability index economy section variables for Russia 1997–2000
8.6 Price of Urals blend crude (FOB Med)
8.7 Index government scores and bond prices in Brazil
8.8 GDP growth, 5-year average 2002–2006
8.9 Average of government and society scores
8.10 Global Political Risk Index composite scores, November 2006

Tables
5.1 Costs of the Iraq war: Nordhaus estimates
5.2 Costs and benefits of the Iraq war: Davis et al. estimates
5.3 Costs and benefits of the Iraq war: Wallsten and Kosec estimates
5.4 Deaths in selected wars
6.1 Balancing degrees of conservatism to manage risk
6.2 Illustrative surprises in foreign policy
6.3 Illustrative military shocks from WWII until now
6.4 An illustrative (notional) top-level portfolio-analysis display
6.5 “Explanation” of higher-level result (capabilities for 2012 in Table 6.4)
Contributors
Editors

Paul Bracken is Professor of Management and Political Science at Yale University. He teaches the required core course at the Yale School of Management on the Strategic Environment of Management; Business, Government, and Globalization, which covers international political risk and its implications for business; and a Seminar on Grand Strategy. A member of the Council on Foreign Relations, he was a visiting professor at Beijing University. Professor Bracken works with private equity and hedge funds on using scenarios for investment strategies. Before joining the Yale faculty he was on the senior staff of the Hudson Institute for ten years, where he directed the management consulting arm of the Institute. Professor Bracken received his PhD from Yale University in Operations Research and his BS from Columbia University in Engineering.

Ian Bremmer is President of Eurasia Group, the political risk consultancy. An expert on US foreign policy, states in transition, and global political risk, Dr. Bremmer’s five books include The J Curve: A New Way to Understand Why Nations Rise and Fall (2006), selected by The Economist as one of the Best Books of the Year. In 2001, Bremmer authored Wall Street’s first global political risk index, now the GPRI (Global Political Risk Index), a joint venture with investment bank Citigroup. Bremmer has also published over 200 articles and essays in The Harvard Business Review, Survival, The New Republic, Fortune, The Los Angeles Times, The Washington Post, The Wall Street Journal, The Financial Times, and The New York Times. He is a regular contributor to The International Herald Tribune and the webzine Slate, contributing editor at The National Interest, and a political commentator on CNN, Fox News and CNBC.
Bremmer has spent much of his time advising world leaders on US foreign policy, including US presidential candidates from both the Democratic and Republican parties, Russian Prime Minister Sergei Kiriyenko, and Japanese Prime Minister Shinzo Abe. Bremmer received his PhD in Political Science from Stanford University in 1994. He went on to the faculty of the Hoover Institution where, at 25, he became the Institution’s youngest ever National Fellow. He has held research and faculty positions at Columbia University (where he presently teaches), the EastWest Institute, Lawrence Livermore National Laboratory, and the World Policy Institute, where he has served as Senior Fellow since 1997. He lives in New York.

David Gordon is Director of Policy Planning at the US Department of State. He previously served as Vice-Chairman of the National Intelligence Council (NIC) in the Office of the Director of National Intelligence (DNI) and is the former Director of the CIA’s Office of Transnational Issues (OTI), an office that covers a broad array of critical national security issues, including global energy and economic security, corruption and illicit financial activity, foreign denial and deception programs, and societal and humanitarian conflicts. Dr. Gordon joined the CIA in May 1998, when he was appointed National Intelligence Officer for Economics and Global Issues on the NIC. He directed major analytic projects on country-level economic and financial crises, emerging infectious disease risks, global demographic trends, and the changing geopolitics of energy, as well as provided leadership for the NIC’s seminal “Global Trends 2015” report. Prior to his earlier service on the NIC, Dr. Gordon was Senior Fellow and Director of the US Policy Program at the Overseas Development Council. He also served as a senior staff member on the International Relations Committee of the US House of Representatives, and as the regional economic policy advisor for the US Agency for International Development, based in Nairobi, Kenya. In the 1980s, Dr. Gordon pursued an academic career with a joint appointment at the University of Michigan and Michigan State University. He has also taught at the College of William and Mary, Princeton University, Georgetown University and the University of Nairobi. Dr. Gordon is a graduate of Bowdoin College and undertook graduate studies in both Political Science and Economics at the University of Michigan, where he received his PhD in 1981.

Contributors

Uzi Arad is Director of the Institute for Policy and Strategy (IPS) and Professor of Government at the Lauder School of Government, Strategy and Diplomacy at Herzliya’s Interdisciplinary Center. Concurrently, he serves as Advisor to the Knesset Foreign Affairs and Defense Committee. Between 1975 and 1999 Professor Arad served with Israel’s foreign intelligence service, the Mossad, in senior positions both in Israel and overseas. Among these he held the posts of Director of the Intelligence Division and National Security Advisor to Prime Minister Benjamin Netanyahu. Professor Arad obtained his PhD and MA degrees from Princeton University, to which he came as a Fulbright Scholar, and is a graduate of advanced executive courses at Harvard University. Prior to joining the Mossad he was a Professional Staff member of the Hudson Institute in New York. His areas of specialization include foreign and security affairs, intelligence and policy making.
He is a co-author of Sharing Global Resources, written for the New York Council on Foreign Relations.

Paul K. Davis is a Senior Scientist and Research Leader at RAND, and a Professor of Policy Analysis in the Pardee RAND Graduate School. His research areas include defense planning, planning under deep uncertainty more generally, deterrence theory, and advanced methods of analysis and modeling. Dr. Davis has published recent books on capabilities-based planning, effects-based operations, the deterrence and influence components of counterterrorism, model composability, and virtual collaboration. He also serves on several national panels dealing with planning, and with modeling, simulation, and analysis. Dr. Davis holds a BS from the University of Michigan and a PhD (Chemical Physics) from the Massachusetts Institute of Technology.

Lewis A. Dunn is a Senior Vice-President of Science Applications International Corporation. Dr. Dunn served as Assistant Director of the US Arms Control and Disarmament Agency from 1983 to 1987 (appointed by President Reagan and confirmed by the US Senate with the rank of Assistant Secretary) and as Ambassador to the 1985 Nuclear Non-Proliferation Treaty Review Conference. Prior to joining the Reagan Administration, he was a member of the staff of the Hudson Institute. From 1969 to 1974, he taught Political Science at Kenyon College. Dr. Dunn is the author of Controlling the Bomb (1982) and of “Containing Nuclear Proliferation,” Adelphi Paper No. 263 (1992).
Other recent publications are: “Must Acquisition Equal Employment: Can al-Qaeda be Deterred from Using Nuclear Weapons?” (National Defense University monograph, 2005); “Rethinking Deterrence: A New Logic to Meet Twenty-First Century Challenges,” in Stephen J. Cimbala (ed.), Deterrence and Nuclear Proliferation in the Twenty-First Century; (with Victor Alessi) “Arms Control by Other Means,” Survival, Vol. 42, No. 4 (Winter 2000–01); “Coordinated Security Management,” Survival, Vol. 43, No. 3 (Autumn 2001); and “The Case for an Enforceable Consensus against NBC First Use,” The Nonproliferation Review (Fall/Winter 2002). He has a PhD in Political Science from the University of Chicago. He is a member of the International Institute for Strategic Studies and the Council on Foreign Relations.

Jeffrey Herbst is Provost and Executive Vice-President for Academic Affairs at Miami University. His primary research interests are in the politics of sub-Saharan Africa, the politics of political and economic reform, and the politics of boundaries. He is the author of States and Power in Africa: Comparative Lessons in Authority and Control (2000) and several other books and articles. He has also taught at the University of Zimbabwe, the University of Ghana, Legon, the University of Cape Town, and the University of the Western Cape. He is a Research Associate of the South African Institute of International Affairs and was a Fellow of the John Simon Guggenheim Memorial Foundation. He received his PhD from Yale University.

Preston Keat is a Director of Research and head of the Europe & Eurasia Practice Group at Eurasia Group. He holds a PhD in Political Science from UCLA, an MSc from the London School of Economics, and a BA from the College of William and Mary. Preston is an emerging Europe and EU analyst, and he also played a key role in the development of the Deutsche Bank Eurasia Group Stability Index methodology, a cutting-edge tool for global market risk analysis. He previously worked for the German Marshall Fund of the US in Washington DC, where he worked on the Fund’s programs for political and economic development in Poland, Hungary, the Czech Republic, Slovakia, Romania, Bulgaria, and Albania. Preston has spent several years living in the region, most recently as a Fulbright Scholar in Poland, and has conducted extensive field research in Poland, Hungary, the Czech Republic, Turkey, Russia, and Slovakia. His academic research focuses on the process of economic reform and enterprise restructuring, and he has profiled numerous firms in a range of industrial sectors, including automobiles, chemicals, coal, food processing, shipbuilding, steel, and textiles. He has presented papers at numerous venues, including the Annual Meetings of the American Political Science Association, the Wharton School, and US government agencies. Preston also teaches courses in Political Risk Assessment and Management as a visiting professor at Columbia University (SIPA).

Jessica Stern is Lecturer in Public Policy at Harvard University’s Kennedy School of Government, where she teaches courses on terrorism and on religion and conflict. She is the author of Terror in the Name of God: Why Religious Militants Kill (2003), based on surveys and interviews of terrorists around the world. She is also the author of The Ultimate Terrorists (2001), and of numerous articles on terrorism and the proliferation of weapons of mass destruction. She served on President Clinton’s National Security Council Staff in 1994–1995.
She has held fellowships at the Council on Foreign Relations and at Stanford University’s Hoover Institution, and has worked as an analyst at Lawrence Livermore National Laboratory. Dr. Stern previously worked in Moscow, first as Assistant to the Commercial Attaché, and later as a representative of a US company. She has a BA from Barnard College in Chemistry, an MA from MIT in Technology Policy (chemical engineering), and a doctorate from Harvard University in Public Policy.

Coby van der Linde is Director of the Clingendael International Energy Programme at the Netherlands Institute for International Relations (“Clingendael”) and Professor of Geopolitics and Energy Management at Groningen University in the Netherlands. Her research areas include energy diplomacy, international oil and gas markets, energy policy, and the political economy of energy producing countries. Dr. van der Linde recently completed a study on Energy Security of Supply and Geopolitics for the European Commission. She has also written on the influence of the state in the oil market, and various articles on energy markets and energy relations in the world. She is a member of both the Energy Council, an advisory board to the Dutch government, and the advisory board to the Chairman of the International Gas Union (IGU). She holds an MA in Political Science (International Relations) and a PhD in Economics from the University of Amsterdam.
Jonathan B. Wiener is William R. and Thomas L. Perkins Professor of Law at Duke Law School, Professor of Environmental Policy at the Nicholas School of the Environment & Earth Sciences, and Professor of Public Policy Studies at the Sanford Institute of Public Policy, at Duke University. He also served as the founding Faculty Director of the Duke Center for Environmental Solutions from 2000 to 2005. Since 2002 he has been a University Fellow of Resources for the Future (RFF), the environmental economics think-tank. In 2003, he received the Chauncey Starr Young Risk Analyst Award from the Society for Risk Analysis (SRA) for the most exceptional contributions to the field of risk analysis by a scholar aged 40 or under. In 1999, he was a visiting professor at Harvard Law School, and in 2005–2006 he was a visiting professor at l’Ecole des Hautes Etudes en Sciences Sociales and at the environmental economics think-tank CIRED in Paris. Before coming to Duke, he worked on US and international environmental policy at the White House Council of Economic Advisers, at the White House Office of Science and Technology Policy, and at the United States Department of Justice, serving in both the first Bush and Clinton administrations. He attended the Rio Earth Summit in 1992. He has written widely on US and international environmental law and risk regulation, including numerous articles and the books The Reality of Precaution (forthcoming 2007), Reconstructing Climate Policy (2003, with Richard B. Stewart) and Risk vs. Risk (1995, with John D. Graham).
Acknowledgements
Managing Strategic Surprise was the work of many people, without whom it would not have been possible to launch such an ambitious exploratory effort. We would especially like to express our sincere gratitude to the many individuals who contributed to this endeavor, both those who attended our meetings and those who met with us to give us ideas about the science and practice of risk management.

First and foremost, thank you to the authors, who came to this project with open minds and a willingness to trespass in fields outside their own well-established expertise. In order to traverse unfamiliar terrain with confidence, we relied upon the knowledge and counsel of the many individuals listed above as Members of the Conversation on Risk and National Security. We are very grateful to have had the input of these individuals, who offered valuable information, insight and feedback at various stages of the project as we explored ways of applying concepts and methods from their fields to our own. We would especially like to thank Michael Sherman, Martin Shubik, Adm. Henry Gehman (USN, ret.), Jacques Aigrain, Ken Knight, Dan Esty, Ken Minihan, Alf Andreassen, Sam Forman, and Garry Brewer. Special respect must also be paid to Ross Schaap, Alexsandra Lloyd and others at Eurasia Group, who managed this endeavor throughout its various phases.

Finally, we are very fortunate to have had the support of our editor, John Haslam, at Cambridge University Press, who recognized the significance of this book at its early stages and made it possible for us to contribute this work to the dialogue on national security and risk.

The views expressed in this book belong solely to the editors and chapter authors and do not represent the organizations with which they are affiliated.

Paul Bracken
Ian Bremmer
David Gordon
1 Introduction

PAUL BRACKEN, IAN BREMMER AND DAVID GORDON
The timing couldn’t be better for a book on risk management and international affairs. Risks from weapons of mass destruction (WMD) proliferation, terrorism, energy availability, failed states, and from other sources are growing. The failure to anticipate major risks in the Iraq war has had enormous consequences, to say the least. And the continuing debate about how the intelligence community and the executive branch of government assess risk make it central to any discussion of foreign and defense policy. For all of these reasons it is an opportune time to focus on how risk is assessed and managed in international affairs. But there is a second reason why the timing is right for a book on this subject. Separate from all of the above considerations is the emergence of risk man- agement as a distinctive field of study which has transformed one discipline after another, in finance, business, engineering, environ- mental protection, and epidemiology. Today, it would be unthinkable for a company to invest money without first putting it through a risk “screen” to see what could go wrong. Assessment of an epidemic, likewise, entails a thorough-going risk analysis to see where interven- tions to stop it should be made. And analysis of engineering failures like the Columbia shuttle crash make a lot more sense when looked at from a risk management framework than from the customary practice of finding someone to blame it on. Yet thinking systematically about risk has barely touched the world of national security and international affairs. Whether in the intelligence or defense communities, or in energy policy, non-proliferation, or terro- rism, the systematic consideration of risk has hardly advanced beyond truisms. This project brings together for the first time these two clusters of thinking: the risks of international affairs, and the risk management frameworks which have transformed so many other disciplines. The need for better risk management in international affairs is acknowledged by virtually everyone. We find no disagreement either
that risk management is an important, indeed central, framework for thinking about problems in fields like finance, business, epidemics, or power grid crashes. The rub comes in the next step: that some of the ideas from these fields might have application in international affairs. On this point there is major controversy and resistance. We believe that the resistance to such intellectual trespassing, trying to import ideas from one field into another where they have never been tried before, is itself interesting and revealing.

Our take is that the actual practice of how risks are handled in international affairs by the United States has been in decline since the 1990s. Before that, the stark dangers of the Cold War and the threat of nuclear annihilation enforced a kind of discipline on Washington, and on the international system. On big issues it paid to be cautious. In the 1990s, with the disappearance of the Soviet Union, this “fear constraint” was lifted. The United States in the 1990s was by far the most powerful country in the world. And this one-sided power led to a sloppiness when it came to managing risks. Across ideological lines it was thought that whatever might go wrong, US power could easily make it right. Whether in dollars or military action, power was mistakenly thought to be a substitute for good thinking about risk.

The other source of resistance to importing concepts from risk management into international affairs comes from the natural tendency of international affairs specialists to stick with what they already know and to hone it toward ever greater specialization.1 New conceptual frameworks, broadly speaking, are not very welcome. In recent years risk management in international affairs, beyond simple-minded truisms, has become almost an alien concept. In fact, in the many meetings and conversations we had on this project, more time was spent with international affairs experts on why risk management “can’t possibly work” in their particular field than on trying to understand how these approaches might be usefully applied.

1 An exception here is the interesting use of risk concepts in M. V. Rasmussen, The risk society at war: terror, technology and strategy in the twenty-first century (Cambridge University Press, 2007).

But the purpose of this book isn’t to criticize anyone. Rather, it’s to start a productive conversation on how risk management can be applied to international challenges in the twenty-first century. For nearly two years we worked with domain experts in various international security fields to introduce them to risk management concepts and frameworks from outside fields. The intent was to see how risk management thinking might change the frameworks used in their areas of domain expertise. But before we describe the mechanics of the project, it’s necessary to understand what we mean by the term risk management.
Risk management defined

One of our earliest discoveries in this project was that risk management means different things to different people. More, that there is relatively little cross-fertilization among the specialized fields which do risk management, like finance, environmental protection, and epidemiology. Engineers studying the safety of nuclear power plants have developed a high art of risk management. They look at complex processes, flows of information and materiel, through large networks of pipes and reactors. Wall Street financial analysts have a different notion of risk management. They focus on changes in currency values and stock prices using probability and stochastic processes. Each does risk management. And each has its own frameworks, vocabulary, and set of distinctions.

There is nothing wrong with this. Each field, whether engineers interested in nuclear plant safety or Wall Street analysts worried about the value of their portfolios, has certain recurrent tasks that it has to manage, and each develops techniques and distinctions which work for it. Yet this diversity makes defining risk management across disciplines an important thing to get right if we are to raise the level of conversation about risk in international affairs.

Our solution to this problem was to go back to the historical development of risk management, because all of the specialized risk management done today in finance, engineering, and environmental protection emerged from the same intellectual roots. Modern risk management grew out of the application of statistical methods to mass production in the 1920s and 1930s.2 It developed further in World War II, with the application of mathematical concepts to the military effort, in what came to be called operations research. By the 1950s a distinct discipline of decision sciences had developed, and within this a common conception of risk management emerged.3

2 A classic book in this regard was W. A. Shewhart, Statistical method from the viewpoint of quality control (Washington, DC: Graduate School of the Department of Agriculture, 1939). (Republished in 1986 by Dover Publications, Mineola, NY.)

3 See, for example, L. J. Savage, The foundations of statistics (New York: Wiley & Sons, 1954); and R. D. Luce and H. Raiffa, Games and decisions: introduction and critical survey (New York: Wiley & Sons, 1957).

Stated simply, risk is defined as the product of two things: likelihood and consequences. Risk separates out the likelihood that some event will take place from the consequences if it does. This is the definition of “risk” used throughout the book.4

4 In economics there is a distinction between risk and uncertainty. Risk is used if there is a known probability distribution over outcomes. Uncertainty describes cases where there isn’t such a distribution. We do not take this as our fundamental definition, although we think it an important distinction and use it in the project.

This definition allows for three conversations. One about likelihood. One about consequences. And a third about the management of the two. Each of these conversations can quickly get complicated. But since the world is complicated, this isn’t much of a surprise. Still, the simple act of recognizing that there are three conversations has proven to be extraordinarily useful. It means that a financial institution doesn’t focus its risk management attention only on predicting currency and stock prices. The track record for doing this is poor, and has been known to be so for decades. Instead a financial institution will bundle its total exposure into a portfolio, and then stress test this against different shocks to see what the overall effect is on its value. One method for doing this is called value at risk (VaR). But the choice of methods for doing these calculations is less interesting for our purposes than is borrowing the insight from finance that there are better ways to manage risk than trying to predict the future. We think that bundling a number of foreign policy strategies together, and subjecting them to stress testing, is a very useful insight. It would highlight interactions. It would focus attention on important consequences, leaving aside for the moment their likelihood, which is often a matter of dispute. And it would provide an overall way to structure alternatives, which are rarely clear in advance.

Risk management necessarily involves how risk is perceived, and how it’s processed by individuals, groups, and organizations. This is a very complicated and interesting subject. Not only do different individuals assess likelihood in different ways, they often also see the consequences of what could take place differently. No methodology will ever overcome these tendencies. But being able to lay them out for clear discussion, with an appropriate vocabulary, is a step toward a more productive discussion.
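The likelihood/consequence definition and the stress-testing idea can be made concrete with a small numerical sketch in Python. Every scenario name, probability, and loss figure below is invented for illustration only; a real value-at-risk calculation rests on far richer models and data than this.

```python
# Minimal sketch of the "three conversations": likelihood, consequence,
# and management. All numbers are notional, for illustration only.

# Conversations 1 and 2: a set of shock scenarios, each with an assessed
# annual likelihood and a consequence (loss to the portfolio, in $M).
scenarios = {
    "currency crisis": {"likelihood": 0.05, "loss": 400.0},
    "oil supply cut":  {"likelihood": 0.10, "loss": 150.0},
    "regional war":    {"likelihood": 0.02, "loss": 900.0},
}

# Risk as the product of likelihood and consequence (expected loss).
for name, s in scenarios.items():
    risk = s["likelihood"] * s["loss"]
    print(f"{name:>15}: expected loss = ${risk:.1f}M per year")

# Conversation 3: rather than predicting which shock will occur, stress
# test the bundled portfolio against each shock and examine the worst case.
portfolio_value = 2000.0  # $M, notional total exposure
worst = max(s["loss"] for s in scenarios.values())
print(f"worst-case stressed value: ${portfolio_value - worst:.1f}M "
      f"({worst / portfolio_value:.0%} drawdown)")
```

The point is the separation of questions: the loop prices likelihood and consequence for each scenario separately, while the stress test at the end sidesteps prediction altogether and asks only what a shock would do to the bundled position.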
Our project

Recognizing the need to bring risk management into some very important fields, we conceived the idea of connecting the diverse areas of risk management to the fields of national security and international affairs. With financial support from the National Intelligence Council, part of the Office of the Director of National Intelligence (DNI), we held meetings over two years bringing together experts in risk management and domain experts from various fields of international affairs. Small group conversations were held as well. Meetings were held in New York, New Haven, Washington, DC, and Tel Aviv.

Paul Bracken, Professor of Management and Political Science at Yale University, brought management and operations research skills to the project. Ian Bremmer, President of the Eurasia Group and an academic political scientist by training, brought expertise on emerging markets and global political risk as it applies to financial, corporate, and government entities. And David Gordon, from the national intelligence community, brought real world experience in national security risk management to the table. To identify risk management concepts we spent many days in meetings and discussions with experts, selecting those concepts that might be salient to security problems.

A word about objectives is in order. Using risk management in international affairs is an exceedingly ambitious goal, and we recognize this. Our view was that a search for the solution to the myriad challenges in international affairs was futile. We had no expectation of finding a computing formula for stopping the spread of WMD or for stopping terrorist attacks. Rather, we believe that it is possible to understand the processes associated with these dynamics better, and to define alternatives for managing them.

Our goal was to raise the level of conversation about important subjects using a risk management framework. Major decisions always have an element of risk in them, and decision makers and their staffs acknowledge this. But too frequently only the lightest consideration is given to its systematic assessment and management. Reference is often made, for example, to taking “calculated risks,” or to the “risk of not acting.” One of our favorite questions in carrying out this project came from hearing these two truisms so many times; it was to ask decision makers, and their staffs, to show us the calculations that underlie their calculated risks. Usually there were none. The casual invocation of the “calculated risk” is often a cover for not thinking about risk at all.

Likewise, frequent reference to “the greater risk of not taking any risk” is often a mask for actions a decision maker is going to take anyway. It often represents a thinly disguised justification for going ahead with an action with little or no consideration of its upside or downside consequences. We have no doubt that not taking any risk can be a great mistake. On the other hand, we believe that the blanket application of this truism regardless of context represents a serious misunderstanding of how risks should be assessed and managed.

The purpose of project meetings was cross-fertilization: to have security domain experts listen to and speak with risk management experts drawn from finance, operations research, political risk, epidemiology, and environmental risk. We leaned toward conversation rather than PowerPoint briefs.
In addition, articles drawn from risk management disciplines were circulated to the international affairs experts, and the three of us interposed ourselves into each of the experts’ fields to keep the conversation going.

One of the key findings coming from our conversations with risk management practitioners can be described as follows: risk management is about insight, not numbers. It isn’t the predictions that matter most but the understanding and discovery of the dynamics of the problems.

Another way of saying this was nicely put by one mathematically inclined risk analyst, an authority on the reliability of engineering systems: you don’t need data to think statistically. Statistics is valuable for the terminology, distinctions, and frameworks that it introduces. In the real world, even in a field where rigorous data exists, one often finds that the data is unavailable, or too messy to put much stock in. Still, formulating the problem as if one had data is an extraordinarily useful exercise.

The charge given to each of the international affairs experts was to “think like risk management” in describing the current issues and challenges in their fields. They were to sample from what they had learned about risk management and apply its line of thinking to their subject. We felt that it was important to allow each of these experts to make their own judgment about which concepts to use, because one of the lessons of good risk management is that it is as much an art as a science. Rather than applying rigid methodologies to subjects for which they might not be appropriate, the authors were free to pick and choose risk management concepts that fit their problem. Instead of making the problem fit risk management, we tried to make risk management fit the problem.
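A stylized sketch of “thinking statistically without data”: since any numbers here would be guesses, the exercise below treats the unknown shock probability as a parameter to sweep rather than a quantity to predict. All names and figures are hypothetical.

```python
# Hypothetical sketch: formulate the decision as if we had data, then
# sweep the unknown probability instead of pretending to know it.

HEDGE_COST = 50.0   # $M per year: notional cost of a precautionary policy
SHOCK_LOSS = 800.0  # $M: notional loss if the shock hits an unhedged position

def expected_cost(p: float, hedged: bool) -> float:
    """Expected annual cost under an assumed shock probability p."""
    return HEDGE_COST if hedged else p * SHOCK_LOSS

# The sweep shows where the preferred policy flips, which is often the
# only question that matters: is p plausibly above HEDGE_COST / SHOCK_LOSS?
for p in (0.01, 0.03, 0.0625, 0.10, 0.20):
    choice = "hedge" if expected_cost(p, True) < expected_cost(p, False) else "accept the risk"
    print(f"assumed p = {p:.4f}: {choice}")
```

No prediction is produced; the statistical formulation simply turns a vague dispute about likelihood into a sharper one about whether the probability exceeds a break-even threshold (here 50/800, or about 6 percent).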
Related perspectives

Over the past few years a large literature devoted to the subject of risk has appeared. Some of it overlaps with our project in that it tries to tackle “big” problems. Seeing how our project fits in with these efforts gives a useful intellectual positioning to what we are trying to do.

One strand of work, from Kahneman, Slovic and Tversky, as well as others, gives many examples where reaction to both likelihood and consequences depends less on actual probabilities than it does on behavioral factors.5 Insights drawn from psychology are used in place of the assumption that decision makers behave rationally according to the laws of economics. In other words, most people don’t maximize their expected utility using probability. They hang on to investments too long even when they shouldn’t, a tendency so prevalent that experts in behavioral finance have given it a name: the disposition effect. What this literature points to are systematic ways of making bad decisions, e.g., hanging on to an investment too long. These patterns of bad decisions seem to be especially prevalent for high and low probability events. In addition, an individual’s initial approaches to a problem have a powerful, enduring influence on their later decisions. Generally, people stick with these predispositions for too long.

5 A classic book here is D. Kahneman, P. Slovic and A. Tversky, Judgment under uncertainty: heuristics and biases (Cambridge University Press, 1982), which has inspired a wide-ranging follow-on literature.

Across the problems analyzed in our project we found these tendencies to be pervasive. This raises some controversial issues which are better dealt with by the domain experts in the individual chapters. Suffice it to say here that this line of thinking adds an important dimension to risk management in international affairs, namely that there really are systematic patterns in making good and bad risk judgments. While it can often be difficult to apply this insight operationally, simply knowing that it is the case can provide a useful checklist of errors to keep in mind.

A second literature, from sociology, explores how late modern societies such as Western Europe and the United States have become increasingly structured around the ideas of risk and risk management. For Anthony Giddens (1991),6 the concept of risk gained its centrality due to the great increase in human security in the modern world. “It is a society increasingly preoccupied with the future (and also with safety), which generates the notion of risk.”7 The development of new technologies and drugs and the existence of strong markets and states have resulted in longer life spans and the reduction of basic dangers, while at the same time generating a new class of unknown or unknowable “manufactured” risks.

6 See A. Giddens, Modernity and self-identity: self and society in the late modern age (Cambridge: Polity Press, 1991).

7 A. Giddens, “Risk and responsibility,” Modern Law Review, 62, No. 1 (1999), 3.

For Ulrich Beck, the “risk society” is precisely concerned with mitigating the risks and uncertainty generated by modernization and globalization.8 These “manufactured” risks are argued to be “reflexive,”9 meaning that they are inadvertently caused by modernity’s attempts at mitigating older, classical risks, such as disease, market fluctuations or strategic issues. “Manufactured” risks, with a low probability but potentially catastrophic consequences, are becoming the main concern of all the modern industrialized societies, which are increasingly transforming themselves into “risk societies.”10

8 U. Beck, Risk society: towards a new modernity, trans. M. Ritter (London: Sage, 1992), p. 26.

9 See U. Beck, W. Bons and C. Lau, “The theory of reflexive modernization: problematic, hypotheses and research programme,” Theory, Culture & Society, 20, No. 2 (2003), 1–33.

10 Beck, Risk society, as cited in note 8.

The “risk society” approach has been increasingly applied to the field of international relations and national security by a number of writers, especially in the context of terrorism,11 contemporary warfare,12 and security in the West, especially in relation to NATO.13 Faced with asymmetrical risks, such as terrorism, governments can no longer aim for “the concept of complete security.”14 In the past, national security dealt with meeting security threats, a finite process in which the aim was to eliminate the threats faced.15 However, risks, as opposed to threats, can only be managed or controlled. In practical terms, this means that modern states are learning to cope with problems,16 rather than aiming for a solution, so risks tend to be of long duration (if not infinite), and managing one risk often gives rise to a set of others, given the reflexivity of the risk society.17 In the case of post-9/11 terrorism, the proponents of the “risk society” generally argue that the main development has been the rise of pre-emptive governmental action.
11 See C. Aradau and R. Van Munster, “Governing terrorism through risk: taking precautions, (un)knowing the future,” European Journal of International Relations, 13, No. 1 (2007); U. Beck, “The terrorist threat: world risk society revisited,” Theory, Culture & Society, 19, No. 4 (2002); M. V. Rasmussen, “Reflexive security: NATO and the international risk society,” Millennium: Journal of International Studies, 30, No. 2 (2001); Rasmussen, The risk society at war, as cited in note 1; K. Spence, “World risk society and war against terror,” Political Studies, 53, No. 2 (2005).

12 See U. Beck, “War is peace: on post-national war,” Security Dialogue, 36, No. 1 (2005); M. Shaw, “Risk-transfer militarism, small massacres and the historic legitimacy of war,” International Relations, 16, No. 3 (2002); M. Shaw, The new western way of war: risk transfer and its crisis in Iraq (Cambridge: Polity Press, 2005); Y.-K. Heng, “The ‘transformation of war’ debate: through the looking glass of Ulrich Beck’s World risk society,” International Relations, 20, No. 1 (2006); C. Coker, “Security, independence and liberty after September 11: balancing competing claims,” introductory paper presented to the 21st Century Trust, Klingenthal Castle, near Strasbourg, France, 12–18 May 2002, www.21stcenturytrust.org/post911.htm (accessed 22 January 2008); C. Coker, Waging war without warriors? The changing culture of military conflict, IISS Studies in International Security (London: Lynne Rienner, 2002); V. Jabri, “War, security and the liberal state,” Security Dialogue, 37, No. 1 (2006).

13 See C. Coker, “Globalisation and insecurity in the twenty-first century: NATO and the management of risk,” Adelphi Papers, No. 345 (London: International Institute for Strategic Studies, 2002); Rasmussen, “Reflexive security,” as cited in note 11.

14 Aradau and Van Munster, “Governing terrorism through risk,” 93, as cited in note 11.

15 See Rasmussen, “Reflexive security,” as cited in note 11; Heng, “The ‘transformation of war’ debate,” as cited in note 12.

16 See Spence, “World risk society and war against terror,” as cited in note 11.

17 See Rasmussen, “Reflexive security,” as cited in note 11.
Overall, the “risk society” approach to national security tends to be highly conceptual, given that its origins are in the theoretical debate between “modernity” and “post-modernity”.18 A significant part of the literature is aimed at a methodological re-conceptualization of international affairs as a “transnational science”.19 The literature is also often driven by normative concerns, be they critiques of the neo-liberal underpinnings of globalization and a desire for the formation of “cosmopolitan states,”20 or a desire to reinforce pacifist positions and delegitimize certain types of warfare.21 That said, with a few exceptions,22 the literature does not offer concrete solutions to policy makers, and it is unclear how the literature on “risk society” can be practically employed by policy makers for dealing with risks and strategic surprises.

A much smaller literature deals with the way organizations, as distinct from individuals, process information about risk. Partly in response to 9/11, a number of studies have focused on the shape of the US intelligence community, including the report of the 9/11 Commission itself.23 Organizations turn out to be different from people, and understanding their dynamics in processing risk is critically important. This literature, and the following two chapters in this book (by Bracken and Arad), take organizations as central for improving risk management. In finance, epidemiology, and the environment, the systems built to support risk management – the warning, communication, and IT systems – have become extremely important. Factoring them into risk management is critical from this perspective.
18 See M. Shaw, “The development of ‘common risk’ society: a theoretical overview,” paper delivered at seminar on “Common Risk Society,” Garmisch-Partenkirchen (1995), www.sussex.ac.uk/Users/hafa3/crisksocs.htm (accessed 8 June 2007).

19 Beck, “The terrorist threat,” 53, as cited in note 11.

20 Ibid., 13.

21 See Shaw, “Risk-transfer militarism, small massacres and the historic legitimacy of war,” as cited in note 12; Shaw, “The development of ‘common risk’ society,” as cited in note 18.

22 See C. Coker, “NATO as a post-modern alliance,” in S. P. Ramet and C. Ingebritsen (eds.), Coming in from the Cold War: changes in US–European interactions since 1980 (Lanham, MD: Rowman and Littlefield, 2002).

23 For examples see C. Perrow, The next catastrophe: reducing our vulnerabilities to natural, industrial, and terrorist disasters (Princeton University Press, 2007); R. A. Posner, Preventing surprise attacks: intelligence reform in the wake of 9/11 (Lanham, MD: Rowman and Littlefield, 2005); D. Vaughan, The Challenger launch decision: risky technology, culture, and deviance at NASA (University of Chicago Press, 1996); and A. B. Zegart, Spying blind: the CIA, the FBI, and the origins of 9/11 (Princeton University Press, 2007).
Finally, there is a recent literature which focuses on the hubris, and in some cases even the chicanery, of making predictions.24 These works provide reminders of how difficult it is to make accurate predictions, especially about the future. But we have a problem understanding why, after many decades, it is still necessary to discuss prediction at all. We think that it is just as important to guard against what can be called “sophisticated cynicism”. This is the tendency to deny that any progress is possible on these matters. There is no silver bullet that will solve the major challenges of national security and international affairs.25 In terms of our project, we view prediction and discovery as fundamentally different activities.26 Confusing the two is a mistake. The problems discussed in this book have no all-embracing solutions. But this hardly means that we can’t do better at managing them. To do this we have to discover more about them, and here risk management can provide a very helpful framework.

24 See N. N. Taleb, The black swan: the impact of the highly improbable (Random House, 2007); P. E. Tetlock, Expert political judgment: how good is it? How can we know? (Princeton University Press, 2005).

25 We have often wondered what the fields are where there are silver bullets.

26 We are indebted to Professor Garry Brewer of Yale University for putting the matter so succinctly.

Chapter overviews

As part of a broad approach to seeing how risk management could serve as a framework for better understanding important national security and international affairs problems, a wide range of topics was selected. Our intent was to be provocative, pressing the limits. We freely admit that the consistency of approach varies considerably, but because of our conviction that this also applies to risk management in general, we see this as a small limitation compared to the greater gain of understanding the problems better.

The first two substantive chapters, written by Paul Bracken and Uzi Arad, place risk assessment and management in its organizational context. Both essays make an essential point, one that we feel has been neglected in too many instances. It is that uncertainty needs to be understood not only in terms of things we don’t know about the world. It also needs to include what is and isn’t known about the way an organization processes the uncertainties extant in the world. It isn’t just that we don’t know how many nuclear weapons North Korea has; it’s that there are major uncertainties inside the US government about how this information will be processed.

Bracken’s and Arad’s chapters share a number of additional insights. Bracken argues that warning system design must reflect the underlying strategy whose failure the system is supposed to warn of. That is, warning has to be considered in a broader context of alternative ways of dealing with uncertainty, of which there are only a limited number.

Uzi Arad uses his considerable experience as a senior official in the Israeli intelligence community to argue against the customary view that surprise attack is an inevitable and insurmountable challenge. He does not argue that prediction of attack is possible, but makes a far more powerful case that this way of conceiving of the problem is itself misguided. In its place Arad suggests putting the marginal investment dollar into information collection systems, not better analysis. This conclusion has enormous consequences, and at a minimum should inform discussions of the multi-billion-dollar budgets that intelligence agencies spend.

Lewis Dunn, in his chapter on “Nuclear proliferation epidemiology: uncertainty, surprise, and risk management,” explores different ways that the global non-proliferation regime might rapidly come apart. Even five years ago this chapter might have been considered alarmist. Today, the fear of such a development is palpable. Epidemiology provides a heuristic model, a lens, for analyzing it. Dunn extends the traditional “supply and demand” analysis of WMD to include key individuals serving as vectors for transmitting nuclear and other know-how across borders. Broadening the framework for analyzing proliferation produces a number of insights, which on the face of it seem convincing. For example, adding key people to the standard models and considering them as “vectors” of nuclear know-how reveals the strong interdependencies among countries. This, in turn, offers one path for understanding how large numbers of countries could turn to WMD in a relatively short period of time. Dunn contrasts this with the slow-motion spread of WMD suggested by supply and demand models of proliferation. He concludes that there is a real danger that such a rapid spread of WMD could come about and, in the process, that current non-proliferation norms are unlikely to be effective against such a development.

Jessica Stern and Jonathan B. Wiener take on the topic of terrorism from a risk management perspective. Their key contribution is to borrow one of the central concepts from environmental risk assessment – the precautionary principle – and apply it to terrorism. The application will be controversial. The precautionary principle says that in the face of scientific uncertainty about the pathogenic effects of certain toxins, it is prudent to eliminate them from the marketplace.
The analogy with terrorism is that, in the face of lack of proof that terrorists are about to strike, or even that a group of people are in fact terrorists, it may be prudent to take action. The precautionary principle raises important ethical and practical policy issues. Will such a policy develop as a response to terrorism? Whether it should or not, Stern and Wiener argue that it nonetheless might happen in response to escalating levels of terrorist attacks.

Paul Davis looks at how the US Department of Defense (DoD) handles risk and uncertainty. His conclusion points to our earlier discussion about the limits of prediction. Davis concludes that formulating the problem as one of “making predictions” is a mistake, and that the DoD long ago abandoned any such efforts. Davis develops in detail what the DoD has actually done in the face of the need to make decisions under what he calls “deep uncertainty.” This is a level of uncertainty as to what kinds of wars will be fought in the future, who the enemy might be, what the relevant weapons and technology investments might be, and so on. Davis concludes that investments in agility, flexibility, and robustness are what the DoD has actually made. Indeed, he goes on to describe how these investments have been institutionalized, and how there is deep skepticism within the US military toward conceiving of the challenge of deep uncertainty as one of betting on the accuracy of predictions.

Coby van der Linde explores how risk management shapes our view of energy security. Her paper uses concepts developed by psychologists and behavioral risk analysts which underscore the importance of context in shaping how probabilities of things like an oil cut-off are assessed. Asking whether or not Moscow would cut off energy flows to Europe, without addressing the context, isn’t the right question. Instead, pinning down which of the competing contexts various nations use for making energy security decisions is key. Her chapter contrasts the US preferred strategic context of globalization with what she argues is the European context of loose, or limited, globalization. With loose globalization, market forces do not automatically deliver the needed energy supplies. Additional management decisions involving foreign policy, aid, coercion, stockpiling, and the like have to be included. Her conclusion gives a very different picture of the international energy risk map. For example, she believes Western Europe will have to subordinate its claims for emphasizing human rights, environmental protection, and so on to more pragmatic and extra-market action to ensure energy supplies. As with proliferation of WMD, even formulating the energy security issue in these terms would have been unthinkable five years ago. But energy has dramatically increased in importance over the last few years.

Preston Keat analyzes three cases – Russia, Brazil, and Hungary – where conventional approaches to sovereign credit risk failed to capture key political dynamics that drove economic outcomes. He outlines a state stability framework that captures many of the social and political explanatory factors that traditional market analysis overlooks. The fascinating case of Russian decision making in its international financial dealings in the late 1990s is illustrative.
Keat points out that the Russian decision to default on its sovereign bonds was political, in the sense that the Russian government had the money and could have paid international creditors. The Russian bond default brought down a major US hedge fund and forced the US Federal Reserve to intervene, pressuring the fund’s lenders to pump in more liquidity to avoid an international financial crisis. Discovering the dynamics of how this happened would appear to be highly important. Yet we still come across many analysts who stick to the view that political decisions are “soft” and have no place in understanding credit risk. This is a short-sighted view, to say the least. As globalization increases international financial coupling, it is also an increasingly risky view.

Jeffrey Herbst looks at the challenges of Africa. Failed states, AIDS, corruption, genocide, and other maladies all seem to plague this continent. What can risk assessment and management possibly say about this enormous challenge? Herbst makes a simple yet key insight: that Africa is not uniform; more, that the differences between countries on many scores are increasing. We think this is an excellent example of our dictum that you don’t need data to think statistically. For Herbst is saying “look at the variance, not the average performance.” Focusing attention on “the average” gives a picture of overwhelming misery that masks the very significant differences on the continent. It creates a sense of hopelessness which paralyzes action. Focusing attention on the range of African states, and the way some are doing much better than others, not only inspires hope, it also points to what the better performers are doing that could be a model for the others.
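Herbst’s “look at the variance, not the average” point is easy to illustrate numerically. The governance scores below are fabricated for illustration: two notional regions share the same mean, but the dispersion tells very different stories.

```python
from statistics import mean, stdev

# Fabricated governance scores (0-100) for two notional regions.
region_a = [48, 50, 49, 51, 52, 50]   # uniformly middling performers
region_b = [15, 85, 20, 80, 25, 75]   # strugglers alongside strong performers

for name, scores in (("region A", region_a), ("region B", region_b)):
    print(f"{name}: mean = {mean(scores):.1f}, std dev = {stdev(scores):.1f}")

# Both means are 50.0, yet region B's large spread points to a handful of
# much better performers whose practices the others might emulate.
```

Averaging collapses exactly the information Herbst wants policy makers to see.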
PAUL BRACKEN
Warning is one critical way to avoid strategic surprise. To some degree it is used in all fields and by nearly all organizations. There are many specialized studies of it in different fields, including epidemiology, finance and national security. Some of the ideas in these fields can be usefully applied to the others. For example, risk analysis and Bayesian networks, developed in operations research and finance, have been imported into the warning programs of the intelligence community. But there is a more basic prior question that has been given little attention. How does someone actually build a warning system? I mean this in the sense of how it fits in with other important factors, like other ways to deal with risk that do not rely on warning, and with overall strategy. This question is becoming more pressing. Disasters like September 11, 2001, the Asian tsunami and African famine certainly involve elements of warning. But they involve a great deal more as well. Getting good warning is only the beginning of a process that has many other political and socio-bureaucratic elements to it. Ignoring this larger setting almost guarantees that warning will not perform well, for the simple reason that no one will pay attention to it. A related issue is that hundreds of billions of dollars are spent on warning technology – IT, satellites, software and sensors. This technology has transformed the structure and behavior of already complex organizations. Yet too often it seems that the added cost of these systems does not pay off in better warning performance. The argument of this chapter is that there cannot be a general theory of how to build a warning system that does not account for local problems and context. And since local detail and context will vary tremendously, even in a single field like national security, the tendency is for a mass of particulars to swamp important general design principles. It may seem hopeless that warning performance can ever be improved. But the more important claim of this paper is that warning performance can be improved. The way to do this is twofold. First, what
is needed is a coherent way to talk about warning. This leads to what management theorist Peter Senge calls having a productive conversation. A productive conversation can begin to organize the mass of local complexity involved in building real systems if it supplies the builders, and senior executives, with the concepts, vocabulary, and distinctions that describe major design alternatives in understandable ways. One of the striking findings from reviewing the literature on intelligence warning is that while it contains many ideas, it has far fewer that are managerially useful for actually building a warning system, in that it fails to provide a vocabulary and concepts for a productive conversation about building a better system. Instead, the literature is gripped by an obsession with failure. It focuses on failure chains, rather than success chains. As a result it is very difficult to advance the conversation toward building better systems. This chapter is aimed at getting a better vocabulary, distinctions and frameworks in place to allow for a productive conversation about warning. Fit, variance of performance, formalization, warning value added, common operational picture, horizontal management, surprise as a function of (your own) complexity, loose and tight coupling, system integration, and other terms and concepts are used to advance the discussion about warning, and, importantly, where warning fits in with larger issues of strategy and managing uncertainty. Our purpose is to raise the level of conversation and analysis so that the mass of local particulars that will vary from one case to the next can be absorbed in a way that does not overwhelm decision makers with a flood of information. The need for a conceptual framework for warning cannot be emphasized too strongly. Top policy makers who do not have such a conceptual framework will not have a way to impose a sense of direction on their staffs, or on the large technology investments that increasingly go into building a warning system. The tendency will be to define policy about warning as a series of compromises among different administrative proposals. But if the need for a conceptual framework about warning is great, so is the challenge. The approach of this chapter is to strive for a framework that lies between the two extremes of a rigorous academic theory and a collection of insider war stories. I do not believe a real theory of warning exists, in academia or anyplace else. Nor is one close to being developed. On the other end, while insider practitioner accounts of what went wrong and how the system works are useful, they tend to describe small pieces of a larger problem. They are not systematic because they do not focus on how different parts of these large, complex organizations, and the people and technology in them, behave. The second purpose of this chapter is to embed warning in a larger risk management framework. Too often warning is treated in narrow ways as if it were the only way to deal with uncertainty. It is not. There are other ways (six in fact) that I will outline in this chapter. Warning can complement, or substitute for, these, creating a trade space among them. Embedding warning in risk management links it with larger questions of strategy and risk, and therefore makes it tangible to the world of policy makers.
The way to build a warning system suggested in this chapter can be summarized as, first, develop a managerially useful vocabulary and set of distinctions that allow a productive conversation about what the warning system is supposed to do. Second, embed warning in the larger set of considerations about strategy and risk management. Looked at this way, policy makers can better understand how warning, risk and strategy all fit together.
Some definitions

Warning is an advance notification of an event or development that would seriously affect some aspect of an organization. Usually, warning is about harmful events or developments. But this is not necessary to the definition. A warning system is an interacting set of parts that acts to produce a warning. The system involves people, technology, organizations and processes. At some level nearly every organization has a warning system. The CEO who reads the newspaper for business insights is functioning as a very simple warning system. Formal warning systems, as distinct from informal ones (like the CEO reading the paper), focus on specialized or named dangers, and are governed by prescribed rules and regulations for collecting, analyzing and distributing the information about them. A bank's foreign exchange trading desk is a formal warning system because there are rules about what to do when certain events occur. The dangers are specified – the yen drops against the euro – and there are rules about what is to be done and who is to be notified. A missile launch detection system is also a formalized warning system. Sensors see the launch, relay the data to ground stations where the missile is identified, and this information is passed on to decision makers. The sensors, assessment templates, and communication channels are all officially approved in advance and function as rules for dealing with the potential dangers. The distinction between formal and informal warning systems is one of degree. In even the most highly formalized warning systems recipients of information will understand it in terms of informal background information, whether from the news or other sources. A fundamental question in building a warning system is to ask whether it should be formalized or whether it can be left informal. Most organizations do not have formal warning systems. They rely on current information produced by executives doing their day-to-day jobs. This is still a warning system, and it may be a very good one. Dedicated formal warning systems can get expensive. But they can also pay for themselves many times over. It all depends on the underlying problem structure, the importance of critical events to the organization and the budget. In many cases there may be no alternative to building a formal warning system. Barings Bank was put out of business because it was in a highly volatile business, derivatives trading, and relied on highly informalized monitoring of its own traders. One large company I know relies on the day-to-day executives who run its divisions for warning. But it employs one additional person who reports directly to the CEO, and does not report to the division chiefs. This individual looks out for dangers that the division chiefs might miss, or that they might choose not to report to the CEO. The distinctions used here can be useful as a first step in mapping out an organization's warning system. As obvious as this is, there are many corporate examples (Barings, Enron, Arthur Andersen) where senior managers did not do this. They had little idea of whether or not warning of disaster would reach them in time for it to be averted. There are also government examples where officials were unaware of how their warning systems functioned, as the 9/11 Commission Report makes clear.27
27 National Commission on Terrorist Attacks upon the United States, The 9/11 Commission Report: final report of the National Commission on Terrorist Attacks upon the United States, Authorized Edition (New York: W. W. Norton, 2004).
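To make the formal/informal distinction concrete, here is a minimal sketch, in Python, of what "prescribed rules for collecting, analyzing and distributing" information might look like. Everything in it – the dangers, thresholds and recipients – is invented for illustration; no real trading desk or launch detection system works from code this simple.

```python
# A minimal sketch of a formal warning system: named dangers, prescribed
# assessment rules, and fixed notification paths. All thresholds and
# recipients below are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    danger: str                          # the specified, named danger
    triggered: Callable[[dict], bool]    # prescribed assessment rule
    notify: str                          # who must be told, by prior agreement

RULES = [
    Rule(danger="yen drops sharply against the euro",
         triggered=lambda m: m.get("jpy_eur_daily_change", 0.0) < -0.02,
         notify="head of FX trading"),
    Rule(danger="possible missile launch",
         triggered=lambda m: m.get("launch_plumes_detected", 0) > 0,
         notify="national command authority"),
]

def warning_cycle(measurements: dict) -> list[str]:
    """One pass through collect -> assess -> distribute."""
    return [f"WARN {rule.notify}: {rule.danger}"
            for rule in RULES if rule.triggered(measurements)]

print(warning_cycle({"jpy_eur_daily_change": -0.03, "launch_plumes_detected": 0}))
```

An informal system, by contrast, has no agreed table of rules: whether anything counts as a warning, and who hears about it, depends on whoever happens to be reading the news that morning.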
Generic approaches to warning

Three broad approaches to warning have tended to predominate in recent years. People often fall into one of these default modes of thinking about warning with little conscious recognition of doing so:

criticism and cynicism
psychological approaches
"connecting the dots"
Criticism and cynicism

Criticism is one of the chief approaches to warning, especially after a disaster like Pearl Harbor or 9/11. Critical post mortems of warning failure are essential to improved future performance. The US, UK, Australia and Israel launched no fewer than six official studies of 9/11 and the absence of WMD in Iraq. The suggestions of these and other studies are very important because they document what went wrong. But there is a tendency for criticism to degenerate into a treasure hunt for dysfunctional behavior. Opening up any big organization and exposing its inner workings will invariably reveal such behavior. Demolishing bureaucracy, house cleaning and finding the guilty can be therapeutic, as well as useful. But criticism alone is not helpful when offered in such negative terms. Consider the vocabulary used to describe the CIA's performance before 9/11. "Broken corporate culture," "poor management that won't be solved by more money and personnel," "dysfunctional bureaucracy," "groupthink," "overly bureaucratic," "structurally defective," and even "hoodwinked"28 have all been used. Building a warning system with this type of vocabulary is not likely to be very productive. The cynic's approach is, if anything, worse. It is skeptical about any improvement. Even if you shake up the bureaucracy, and get rid of the old crew in charge, surprise will not be eliminated. Their view can be summed up as, "Pearl Harbor, the 1973 Middle East War and 9/11 are examples of an inevitable pattern of failure, and this can be expected to continue into the future whatever changes are made."
28 Each of these terms is taken from criticism of the CIA either by official commissions or respected commentators on intelligence.
Surprise cannot be eliminated. This is true. Let us get this out on the table at the outset. But the cynic's counsel suffers from the same drawback as the critic's. It places a vocabulary of failure at the center of the framework. Policy makers – senior leaders and politicians – are by temperament and inclination unlikely to respond well to this way of approaching problems. They want to know what they can do and how they should do it. Their focus is not so much on what is true, but on what will work. Many of the cynic's insights are very sophisticated, bringing in theories of groupthink and misperception. But what results is an advanced, or "sophisticated," cynicism. Senior managers are likely to find this even less satisfying. They do not find it useful, and may find it annoying. An obsession with failure may end a briefing with an invitation never to come back. The executive loses confidence in warning, but does not have anything to replace it with, other than gut instinct. A failure complex with respect to warning has had another unfortunate consequence. Uzi Arad points out how most analysts of surprise attack have focused on assessment and analysis, and paid relatively little attention to the collection and distribution of information. But collection and distribution are where the money is spent. So the biggest controllable item, technology budgets for collection and distribution, is left out of the picture. I would emphasize a related point: the intelligence failure literature almost never mentions management – the value added coming from good implementation and execution. Two warning systems can have identical technologies and people. But they can perform radically differently, as they did at Pearl Harbor and seven months later at Midway. The reason for the difference was management. Management is the key to performance of every other institution in our society, from business to health care, so it would be highly surprising if it were not important in warning and intelligence as well.
Psychological approaches

Psychological approaches to warning focus on how mental models shape judgment, at the individual and small group (social psychology) levels of analysis. Insights from this field have had major practical impacts on how warning systems are built, and on intelligence management generally.
The basic argument is that because misperception and reliance on unexamined assumptions are so prevalent, the answer is to get more diverse inputs about perceptions and assumptions into the warning process. This is called the pluralistic approach. It may be operationalized in a number of different ways. Instead of having one person evaluate the warning, have many people do it. Variations of this include red teaming, multiple advocacy, and Team A and Team B approaches. These are discussed in more detail in Arad's chapter. Yet another way to increase pluralism is to have separate agencies check on each other. Competition forces each to reanalyze the assumptions that drive their estimates. During the Cold War, for example, the Central Intelligence Agency (CIA) and the Defense Intelligence Agency (DIA) would estimate the Soviet threat each year in Congressional hearings. The DIA would always go first and would paint an enlarged and sometimes alarmist portrait of Soviet military power. This was followed by a more nuanced estimate from the CIA. It gently explained why the DIA outlook was overstated, and how more benign assumptions could be put on the same facts. This arrangement worked well. Congress got a good understanding of what was happening. The interesting thing is that in 1977 the CIA came up and calmly endorsed the DIA estimate. Congress was shocked. It convinced them that a turning point had taken place. And it kick-started increases in the defense budget. Recent advances in some fields should increase the utility of insights from the psychological approach. Behavioral economics is especially interesting here. It examines human judgment of economic issues in terms of such themes as over-confidence, regret over very big losses ("deep regret"), trust, and control. An important finding is that most people do not follow standard economic theory for making decisions, or interpreting information. They do not maximize expected gain or minimize expected loss. A great deal of research shows that judgments are made using some other criterion.29 The desire to avoid really big losses is a common one, even at the expense of not getting big gains. There are many examples of behavioral psychology in international security. Warning of surprise attack is one. Many people believe
29 F. Sortino and S. Satchell (eds.), Managing downside risks in financial markets: theory, practice, and implementation (Oxford: Butterworth-Heinemann, 2001).

that US and Russian nuclear weapons are in a hair-trigger posture, set to launch at the first radar blip coming back from a flock of Canada geese that is wrongly interpreted as an attack. But this is not how real nuclear warning systems operate. Behavioral factors have greatly dampened this possibility. Nuclear warning systems have not been built as two opposing systems with atomic missiles wired to them. A deep concern over catastrophic loss, by both sides, is an overarching theme. This is operationalized with checks and balances, and many other controls, built into the system. These "deep regret" anchors are wired in. This would come as no revelation to behavioral economists, for it is just the sort of assessment-shaping factor they find common in a wide range of behavior, whether in finance or elsewhere. Lewis Dunn, in his chapter on nuclear proliferation, raises a fascinating question that I have not seen asked anywhere else: do the new nuclear states (North Korea, Pakistan, etc.) use a deep-regret concept or a different one? Dunn has some profound concerns about this. But the important point is that behavioral psychology provides a vocabulary and distinctions to raise the level of conversation about some very important warning systems.
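A stylized Bayesian calculation makes the "blip" point above concrete. The numbers are our own illustrative assumptions, not drawn from any real system: suppose an attack on any given day has prior probability $10^{-5}$, a real attack almost always produces a radar return ($P = 0.99$), and benign causes such as geese produce one with probability 0.01. Then

\[
P(\text{attack} \mid \text{blip})
= \frac{0.99 \times 10^{-5}}{0.99 \times 10^{-5} + 0.01 \times (1 - 10^{-5})}
\approx 10^{-3}.
\]

Even a highly reliable sensor leaves the chance of a real attack at about one in a thousand, which is why a single blip rationally triggers further collection and cross-checking rather than a launch decision.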
Connecting the dots

The 9/11 Commission Report's finding that many US intelligence agencies were not talking to each other was summarized in many news headlines as the problem of "connecting the dots." Here the "dots" were pieces of information about the impending attack. The argument of the report was that no one was putting these together into a common operational picture. The 9/11 Commission had another conclusion that it never stated outright. It was implicit in the report, and it relates to connecting the dots, but in a different way than piecing together data about what was taking place in the outside world. Surprise is a function of complexity, not only of uncertainty. Here the complexity is not just in the problem – finding terrorists or WMD in Iraq – but in the organization doing the looking. This is a different perspective on surprise. The 9/11 Commission discovered that major parts of US intelligence had virtually no connection with other parts of the system. Senior people did not even understand how their own systems worked. It took nearly two years for the Commission to convince the aerospace defense command of what actually happened on 9/11, and that what they originally thought had occurred did not. The reason this complexity argument is important is that studies of intelligence failure before 9/11 dealt with organizations whose size, complexity and technology bore little resemblance to those in our time. In 1941 the US had two small intelligence agencies, in the army and navy. In 2001 it had sixteen. This does not begin to count the dozens of specialized intelligence units contained in this macro group. Everyone knew that the scale and complexity of intelligence had grown. But they did not appreciate what difference it made. The events of 9/11 led to exhaustive studies (The 9/11 Commission Report and all the others) that for the first time tore into the details and interconnections of these vast sprawling techno-structures, which were far more complex than at Pearl Harbor or during the Cold War. The connect-the-dots approach has also had practical impacts. It led to the 2003 creation of the Department of Homeland Security, and to the 2005 creation of the Director of National Intelligence. The former is charged with organizing the many units of homeland defense (Coast Guard, Border Patrol, Citizenship and Immigration, etc.), and the latter with making sure the intelligence community is connected with itself. Both new organizations are a move to centralization as a way to connect the dots in the belief that this will lead to more coordinated actions. Connecting the dots has also meant large investments in IT systems, which can do a much better job of seeing connections that a human would easily miss. Data mining, neural networks and Bayesian networks show recognition that if internal complexity is not managed, it is likely to increase the chances of surprise. No one can say how these organizational and IT changes will work. They depend on implementation and execution – good management – but this is something that is at least partly under our control. What can be said, however, is that the changes are hardly the mindless moving of boxes and lines around an organization chart that many critics believe. Nor do they simply add to the intelligence layer cake that came before.
The events of 9/11 demonstrated that the loosely coupled intelligence community created back in the 1960s was incapable of reorganizing itself to meet the challenges of a changed strategic environment. Some degree of increased centralization was needed to rationalize and tighten intelligence, to prevent its different parts from drifting off into ever more specialized units.
The problems of warning are difficult, very difficult. The standard by which approaches to it should be judged cannot be one of predictive accuracy, e.g., which approach best predicts the fall of the House of Saud or Iran's breakout from the Non-Proliferation Treaty. We will never know which method is best for answering these questions. But nor can the standard be one that says that nothing works. A reasonable performance standard between these two extremes is a central argument of this chapter, and this project. Approaches that generate insights, and that offer a vocabulary and distinctions which raise the level of analysis and discussion, and which give some managerial handles for implementation, are to be welcomed.
The importance of fit

A starting point for building a warning system is to deconstruct the problem into two separate parts: the strategic environment and the capacities of the organization, as shown in Figure 2.1. The strategic environment is the outside world: threats, dangers and opportunities. The question Figure 2.1 poses about the environment is this: is the environment changing in ways that might surprise my organization? The figure can be used to elicit the dangers that you most care about. Environmental changes might lead to "new" kinds of surprise that were not as likely when the original strategy and warning system were established. "Capacities" deals with the inside of the organization, its ability to collect, process and distribute warning. For a real organization there would be many rows for these capacities: IT, processes, organization and people are all important components of warning capacity. These can improve or decline in performance. The introduction of overhead satellite reconnaissance, as an example, dramatically improved the capacities of US intelligence. The key idea in Figure 2.1 is "fit." That is, capacities may not necessarily be good or bad in themselves, but rather good or bad for detecting certain kinds of dangers. A company that relies on estimates of future demand (warning) for its product might have a very good system in place for this. But if the environment becomes more volatile, then this system may not work nearly as well. In the security field, the US's multi-billion-dollar warning system for nuclear attack was not nearly as effective in warning of the 9/11 terrorist attacks.
Figure 2.1: A framework for warning. [A matrix relating the organization's warning capacities (increasing or decreasing) to the strategic environment (stable or unstable).]
Call this the contingency theory of warning: there is no one best way to build a warning system; it depends on the dangers. The warning system should fit the problem of interest, and that is the key design test. There is another way of describing the case for "fit" between capacities and the environment. In the 1970s, Yale economist Richard Nelson wrote an excellent book that applies to warning systems, and to a lot more. Called The moon and the ghetto, it answered the question often posed in the turbulent 1960s:30 "if we can put a man on the moon, why can't we clean up the ghetto?" The answer, Nelson argued, was that the underlying problem "environments" of putting a man on the moon and cleaning up the ghetto were fundamentally different from each other. Capacities that worked in one environment are unlikely to work in the other. Going to the moon involved engineering skills. Cleaning up the ghetto entailed complex socio-bureaucratic and political skills that were very poorly understood. Sending NASA engineers armed with computers into the ghetto was an example of taking capacities that worked in one environment and asking them to work in a radically different one. It made no sense. Warning systems should be built with the same idea in mind. For example, Coby van der Linde argues that oil supply is becoming more uncertain because political forces are rising in importance relative to market forces. Lewis Dunn fears that triggering events could quickly increase the danger of widespread proliferation. Jessica Stern and Jonathan Wiener argue that the dangers of terrorism are growing for several reasons, and that in some circumstances a "precautionary" strike makes sense. Each of these chapters sees basic environmental
30 R. R. Nelson, The moon and the ghetto (New York: Norton, 1977).

change, and each suggests a link to organizational capacities for dealing with them. Figure 2.1 can be put to use in scenarios to explore the implications of different dangers. Van der Linde's chapter points out that a widely accepted energy scenario of the early 1990s of globalization and multilateralism contained important risk management implications built into it. A different scenario, call it weak globalization, has different risks. It therefore requires a different mix of capacities for dealing with it. On the capacities side of the framework there has been a veritable explosion in warning technology. Not all IT expenditures are focused on warning, of course. But the harnessing of computers for information collection offers an opportunity to increase organizational performance. It provides new tools that were not available to previous generations. This does not mean that warning is, ipso facto, getting better. It only means that it could get better because of the new tools, most especially if good management links them to the dangers in the strategic environment. Figure 2.1 can be used as a starting point for discussion among senior managers and their staffs about what it is they want warning of. The emphasis should be on the inter-relationship of changes in the environment with the skills needed inside the organization to deal with these problems. An example may help to clarify how this conversation might proceed. A feature of the twenty-first century environment is the multiplicity of dangers facing the US. There is no longer one principal enemy, and one really large threat of nuclear attack. Terrorism, energy issues, state failure, and proliferation are higher on the agenda than they were. But the people skills inside the intelligence community have not kept up with these changes. What is needed, among other things, are more diverse people working in intelligence. This means people who have broader business and academic backgrounds than those recruited during the late Cold War. It is unlikely that individuals with no business experience, working in a secure isolated facility, will anticipate the potential dangers, or even the changes, in the strategic environment. Increased diversity has several implications. The US intelligence community operates in a new world with a bureaucratic Cold War security clearance system. The Cold War security system was built to protect critical secrets like the A-bomb and satellite intelligence systems. But it is hardly clear that this kind of information is as critical as it was during the Cold War. Security could be changed; not reduced, but changed to match the new strategic environment better. It has been changed in US business, where a similar problem has been faced. "Provisioning technologies" are IT-based systems used by many companies to hire and clear individuals, fit them with the tools that they need for the job, and get them to work as quickly as possible. With a more diverse US workforce, these systems have proven very useful in improving efficiency. A more diverse workforce, and IT-based provisioning technologies to get them on the job faster, may appear to be a mundane way to improve warning. But it is exactly this type of management issue that has been the cause of many earlier intelligence failures. Figure 2.1 offers a framework for starting this conversation.
Another use of Figure 2.1 is to compare the different departments of an organization. The US intelligence community, a global bank and a multinational corporation are complex organizations with many departments. The capacities for collecting, assessing and distributing warning information are likely to vary from one department to the next. It is important to know how variable this is. For example, the 2002 collapse of the Arthur Andersen accounting firm arose from the poor performance of a single field unit, the Houston office, and the inability or unwillingness of headquarters to apply the risk controls that existed for scores of other units to Houston. As intelligence and business become more networked, the importance of the variability of performance across departments increases. A failure in one unit can cascade through the larger enterprise. Often it is not the average performance but the variance in performance that is most important. If Arthur Andersen had been able to guard against really extreme behavior in its Houston office, the company might still be in business. A warning system may miss a number of small calls, but prevent a really big disaster. This concept underlies the Value at Risk (VaR) metric used in finance. The idea is to look not at what is most likely, but at the chance that a really big event will wipe out a significant part of a portfolio. Like any single metric, VaR can be misunderstood and abused. But it underscores the usefulness of looking at warning from a variance, rather than a most likely, perspective.
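The variance-versus-average distinction can be made concrete with a minimal historical-simulation VaR calculation in Python. The return series and parameters below are invented for illustration and carry no empirical weight:

```python
# Historical-simulation Value at Risk: sort past losses and read off the
# loss threshold exceeded on only 1% of days. Returns here are simulated.
import random

random.seed(7)
returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]  # ordinary days
returns += [-0.08, -0.12]  # two rare "really big" loss days

def historical_var(returns, confidence=0.99):
    """Loss exceeded on only (1 - confidence) of the observed days."""
    losses = sorted(-r for r in returns)   # losses, smallest first
    index = int(confidence * len(losses))
    return losses[min(index, len(losses) - 1)]

average = sum(returns) / len(returns)
print(f"average daily return: {average:+.4%}")                 # the 'average' view
print(f"99% one-day VaR:      {historical_var(returns):.2%}")  # the tail view
```

The average suggests a placid, slightly profitable business; the VaR figure shows that one day in a hundred loses a couple of percent or more, and the two disaster days lurk beyond even that threshold. It is the same shift of attention, from the typical day to the extreme one, that the chapter recommends for warning.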
The variance of a warning system is too often overlooked. For example, US warning in the Cold War was conservative and risk averse. This made sense in the first nuclear age. There were a number of warning failures, of course. The Chinese attack on Korea (1950), Soviet deployment of missiles in Cuba (1962), Soviet invasion of Czechoslovakia (1968), the Tet Offensive (1968), and Saddam Hussein's invasion of Kuwait (1990) are all examples. But US warning got one big thing right. It was quite adept at detecting anything that might lead up to a Soviet nuclear attack. On this one big thing, the Soviets could not easily make a transcendental throw of the dice, to use the words of former Secretary of Defense Harold Brown. An ancient fragment of Greek poetry captures this quite well: "The fox knows many things, but the hedgehog knows one big thing." The US knew one big thing, that nuclear war was a disaster. The systems and the culture of the intelligence community were built on this recognition. There were many surprises, and many things were missed. But the US never botched the one big thing; nor could the Soviets capitalize on the several failings. The capacity–environment framework also focuses attention on another rapidly growing problem. As large organizations get more complex there is a tendency toward high levels of specialization. The number of departments increases, and so does the number of specialists. The legal department behaves like lawyers, the technology people like "techies," the marketing people like marketing people. A result is increased fragmentation at the level of the whole enterprise. A fascinating feature of many of the case studies of corporate disasters (Enron, WorldCom, Equitable Life Assurance, Arthur Andersen, and others) was the way people at the top were completely unaware of the state of affairs in their own companies. For many years a standard explanation of organizational failure in general, and of warning in particular, was "groupthink." Groupthink is the tendency not to consider all alternatives, as a result of the desire to preserve unanimity at the expense of quality decisions. It has been used as an explanation of intelligence failures from the Bay of Pigs (1961) to 9/11. But increased complexity (number of departments, size, specialization) may produce the opposite of groupthink. In the Enron meltdown it was not that the upper management team thought alike. Rather, they shared no common operational picture of their company, or of their strategic environment. The department heads of finance, public relations and trading, and the CEO, had no common map. Each did their own thing. And each had a warning system. As complexity increased and time shrank during the crisis, warning performance utterly failed. The two major case studies of Enron have diametrically opposite titles. The first book was The smartest guys in the room: the amazing rise and scandalous fall of Enron; the second was called Conspiracy of fools: a true story.31 Which summary of Enron was correct? The people at Enron were smart, at least individually. But their warning and risk management capacities were incoherent at the corporate level. They did not produce a common operational picture of the dangers facing the company. The public affairs office did not know the company was deep in debt. The CFO did not see how a falling stock price would trigger covenants requiring that debt to be repaid immediately. The CEO did not have any of this data at his fingertips.
As a result, Enron appeared to be hapless in the business press. Executives had little concept of correlations among departments, or of how a bad story in the media would accelerate the collapse. There simply was no system to assess, let alone manage, any of this. If there is a word for the opposite of groupthink – over-specialization, fragmentation – Enron had it in spades.
Risk management

Warning by itself is not enough. It is only one piece of a larger system for dealing with uncertainty. Putting warning on a risk management foundation means recognizing this. The warning system should "fit" the larger risk management system, as well as the strategic environment. Failure to understand this could lead to a dangerous over-reliance on warning. In a casual way this is understood. Global banks do not bet their future on getting warning of a currency devaluation. Oil companies do not invest all of their capital in a single country just because their political risk department forecasts a stable environment. And the Department of Defense (DoD) does not build its military on forecasts of where wars will occur. All three of these organizations benefit from good warning. But they do not bet the farm on it.
31 B. McLean and P. Elkind, The smartest guys in the room: the amazing rise and scandalous fall of Enron (New York: Portfolio, 2004); K. Eichenwald, Conspiracy of fools: a true story (New York: Broadway Books, 2005).
A more technical way of saying this is that the marginal contribution of warning to the overall performance of managing uncertainty can only be assessed within an enlarged risk management framework. Looked at this way, warning is a business, and the question is the value it adds to the whole risk management enterprise. Only within a larger framework can the allocation of dollars for technology, new departments and management attention be rationalized. But, as emphasized by Arad, the overall risk management of many organizations is scattershot. It is highly variable across departments. And the departments are not integrated into a composite whole (e.g., Enron). Often there is not even a vocabulary or set of distinctions to facilitate discussion and analysis of the organization's approach to dealing with uncertainty. This section develops a framework, in addition to a vocabulary and some basic distinctions, for doing this. People in the warning business do not have total control over resources or their organizations. The intelligence warning service cannot tell the secretary of defense or the president what to do. But people in the warning business need to understand the larger picture of what they are doing. They need to speak a language that gives senior executives a good set of alternatives and a managerial handle on the actions that need to be taken. Senior executives, in turn, need to understand that their investments in warning are part of a larger system of risk management. Otherwise they may look at warning in isolation, and expect it to perform in ways that are better handled by other approaches. The framework developed here comes from studies of how organizations deal with uncertainty and from modern management theory.32 The heart of the framework (Figure 2.2) is that there are six, and only six, fundamental ways to manage uncertainty: isolating, smoothing, warning, agility, alliances and environment shaping. Different emphases among these six make up an overall risk management approach. The height of the lines represents the degree of emphasis on the particular approach. Figure 2.2 gives an arbitrary example. It describes an organization that puts great emphasis on warning and shaping the
32 J. D. Thompson, Organizations in action (New York: McGraw Hill, 1967); and W. J. McEwen, "Organizational goals and environment: goal-setting as an interaction process," American Sociological Review, 23 (1958), 23–31. The addition of the co-optation strategy is by the current author, and was not part of Thompson's original framework.
environment, and little on alliances and smoothing. A discussion of each of the six approaches will clarify the framework.

Figure 2.2: Risk management framework – example. [A profile showing the degree of emphasis, from low to high, placed on each of the six approaches: isolating, smoothing, warning, agility, alliances and environment shaping.]
Isolating critical assets from uncertainty

The isolating approach insulates critical assets from external shocks. There are many ways to do this. Targets can be hardened to withstand attack. Or, critical assets can be grouped in ways that ensure they will not all be vulnerable at the same time, or in the same way. Portfolio diversification of investments preserves capital by insulating it from "big" shocks. The idea is to group uncorrelated assets together. Some may go down in value, but others will go up, preserving overall value. This buffers the portfolio from the shocks of the market place. Another form of isolating is through a strong balance sheet and a AAA credit rating. These are prized because they insulate a company from shocks. If a bad event happens, companies can ride it out, or if they have a good credit rating they can easily borrow to get through the storm. In the security field there are many important examples of isolating. Roberta Wohlstetter's analysis of Pearl Harbor was used by the US government not as a case study of how to get better warning, but quite the opposite.33 Most people think that her study had two major points: (1) the information needed to see that an attack was coming was inside the government, and (2) it was hard to see this information because the signals of attack could not be separated from the background noise until after the attack occurred. After the attack, signals were clear. But then it was too late. But this was not the functional conclusion of her book, which had more to do with risk management than it did with warning. The US needed to isolate critical assets so that they would survive regardless of the performance of the warning system. At Pearl Harbor the aircraft carriers were at sea, dispersed, and therefore isolated from a warning failure because the Japanese could not find them. But writing in 1962, Wohlstetter was not worrying about aircraft carriers. She was using Pearl Harbor as a metaphor for surprise nuclear attack, and everyone knew it. Her argument was that US nuclear forces had to be built to survive, isolated from warning failures. This insight had an enormous impact on US national security and on international order. The nuclear forces were built to survive regardless of the success or failure of the warning system. Missiles were put in concrete silos; dispersed to hard-to-find places; and even put underwater on submarines. This isolating approach was one of the primary reasons that the arsenals of both sides grew to numbers in the tens of thousands; one of the best ways to insure against a warning failure was to have so many weapons that some were bound to survive any possible surprise attack. The problem with isolating critical assets is that it can be prohibitively expensive. Isolating the US infrastructure and society from terrorist attack would be economically impractical. In theory, every shopping mall, airport and office could be hardened to withstand bombs and other dangers. But the result would be too costly to implement.

33 R. Wohlstetter, Pearl Harbor: warning and decision (Stanford University Press, 1962).
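The diversification logic in the isolating discussion above can be stated in one line. This is the standard portfolio result for uncorrelated assets, not anything specific to Wohlstetter or this chapter: for $n$ uncorrelated assets, each with return variance $\sigma^2$, the equally weighted portfolio has variance

\[
\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)
= \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i)
= \frac{\sigma^2}{n}.
\]

The portfolio as a whole is insulated from any single shock even though each asset remains as risky as before – the same logic as dispersing carriers or missiles so that no one blow can find them all.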
Smoothing

An alternative approach to dealing with uncertainty is to smooth it out so it can be managed in smaller chunks. US war planning uses this approach. As Paul Davis points out, Washington has based its war plans over the years on preparing to fight one, two or two and a half wars. The standing forces were built to handle these contingencies, above which different strategies were used. These might be mobilization, nuclear escalation or diplomacy.
A corporate example is to group uncorrelated business divisions inside a company to smooth earnings. Many large diversified companies own unrelated business divisions for the purpose of earnings management, that is, smoothing earnings out over time. There is some accounting freedom in assigning costs and profits, and companies can legally shift these around their divisions. This smooths out earnings to fit the environment of Wall Street expectations. Smoothing of wars is very common. The World War I Schlieffen Plan tried to defeat France early on, and only then to fight Russia. Franklin Roosevelt opted for a "Europe first" strategy in World War II as a way to bound the risks of large-scale fighting on two fronts at the same time. And the debate over the wars against Iraq and Afghanistan in late 2001 vividly illustrates smoothing. Recall that at the time the debate was whether to go to war against both countries at the same time, or to attack one and then the other. Washington opted for the smoothing approach because it seemed to lower risks by not having to mount such a large force structure as would be required to defeat Afghanistan and Iraq together. Preventing an enemy from smoothing is the flip side of this approach. The 1973 war against Israel entailed many political calculations by Damascus and Cairo. But coordinating their simultaneous attack made it much more difficult for Israel to respond by first attacking one state, and then the other. Egypt and Syria knew that if Israel could smooth them, their chances of success would be lower.
Warning

Viewed in terms of risk management, warning is an effort to forecast the environment so that tailored responses can be used. If the warning system is good, "optimal" responses can be applied, ones that do not waste any resources. If Israel had had good warning in 1973, Tel Aviv could have pre-empted Egypt as it did in 1967 and then turned to fight Syria. When warning is unlikely to be good, the marginal investment should be put elsewhere. Cemex, the giant Mexican cement producer, has built its global success on this insight. Cement is sold to builders, a notoriously fickle and volatile customer. Forecasts of demand constantly change. Most cement companies have tried to discipline their customers by charging more for last-minute orders. That way they can try to get more accurate forecasts of demand. Cemex did not do this. It gave up on predicting demand. In its place it invested in a more agile logistic system which can shift supplies around in a day. Its inventory systems and trucks are interconnected by computers and GPS beacons. The company could not get good warning of demand, so it invested in agility instead.
Agility

Agility involves rapidly reconfiguring the organization to cope with the threat, as Cemex did. But it also can include more mundane actions. Re-routing communications after an attack and mobilizing more workers are examples. Rationing is a historic form of agility. Gasoline was rationed in the US in the 1970s after the oil embargo to cut down on demand. For military uncertainty, agility includes moving toward a more modular force structure, as argued by Davis. Modular forces can be rapidly reconfigured for different missions. Intelligence agility is becoming much more important. The US is moving toward using small satellites that can be quickly launched by cheap rockets that do not require years of development through the tortuous defense acquisition system. The current generation of intelligence satellites takes many years to develop, and needs large launch vehicles that can only be supported and fired from a few specialized bases in California and Florida. For monitoring a slowly changing target, the Soviet Union, this was tolerable. But the risk environment has changed, and agility has become more important. There are limits to reconfiguration in a given period of time. In the nuclear age isolation from warning was chosen for the nuclear forces because people thought that agility was not possible after receiving a first strike. That is, unlike the attack on Pearl Harbor, after which the US reconfigured its economy for war, in the Cold War the destruction was seen to be so vast as to preclude this approach. Sometimes even small shocks knock out big systems. The electrical blackout of 2003 in the Northeastern US arose from tree branches that should have been trimmed, but were not. A tree fell over in a rainstorm onto some electric lines and the resulting disruption cascaded through the entire Northeastern power grid. Admiral Harold Gehman, the director of the Columbia shuttle disaster study and a key advisor to our project, made an important point here. The electrical utility in question had cut costs to the bone, and given little thought to agility, or risk management of any kind beyond the narrowest financial measures.
Alliances

Alliances spread risks to several actors, and bring more resources to bear in limiting the consequences of a disaster. Alliances are a traditional foreign policy tool to increase the risks to anyone who might consider attacking one of an alliance's members. "An attack on one is an attack on all" was used by NATO for five decades to limit Soviet risk taking in Europe. Alliances can also bring countries into a closer relationship, which increases their incentive for cooperation. An important feature of EU energy policy has been to build alliances with Middle Eastern oil producers (see van der Linde's chapter). Even if this brings the EU into tension with the US, it is one of the basic means it uses to manage oil supply risk.
Environment shaping

Managing the environment to make it less dangerous, or less unstable, focuses on the two elements in most definitions of risk: likelihood and consequences. In the Cold War the US tried to shape Soviet behavior, and in many cases it succeeded. The introduction of the Hot Line and confidence-building actions reduced the chances that conflict might arise out of some inadvertent sequence of actions. It was a form of reassurance, which dampened the competitive relationship in a key area. Transferring the locus of competition is another way to shape the environment. US–Soviet competition was much less dangerous (to these two parties at least) in Southeast Asia or Central America than when it was conducted in the heart of a Europe bristling with tens of thousands of nuclear weapons. Not all opponents are open to being managed, obviously. Ideological movements are particularly difficult to co-opt. But it should be recalled that the Soviet Union, revolutionary France (1789–1815) and revolutionary Mexico (1910–1920) all eventually lost ideological fervor and became easier to manage. The Mexican case is revealing as a way of managing risk by limiting one's involvement with it. President
Woodrow Wilson intervened twice in Mexico, in 1914 and 1916, to shape the Mexican revolution's outcome. But in both cases the interventions were extremely limited in size, geography and time. Wilson tried to manage the outcomes in Mexico, but in a way that limited US liability.
Back to warning

Seeing that warning is one part of risk management raises important issues that might be missed if warning is looked at in a narrower way. One of the most important is its relationship to overall strategy. Should risk drive strategy? Or should strategy drive risk? Many of the greatest risks arise from a strategy change. But so do many of the greatest rewards. On the other hand, some risks are so great that avoiding them determines strategy. In the Cold War, risk avoidance drove the US strategy of containment. Avoiding nuclear war was an overriding objective that shaped Washington's strategy. Consider the relationship of warning to strategy. Paul Davis argues that in the face of deep uncertainty the DoD should emphasize agility, by building modular forces and using real options in the development of new technological systems. The idea is to substitute agility for warning, in the expectation that warning will not be available. There is another way of looking at this issue, however: agility may complement warning. Better warning could enable more opportunistic strategies that exploit information advantage by marrying it with agile US forces or with other US actions. Figure 2.3 shows two different risk management profiles. The heavy dark line represents a conservative strategy. Agility (modularity, use of real options) is emphasized and warning de-emphasized. The dashed line stands for a proactive approach with warning and agility both emphasized. The US has a tremendous comparative advantage in information technology and its related disciplines, arising from having the world's leading companies in these fields. Likewise, the US military is extremely agile, at least when it wants to be. A strategy in which warning exploits both of these advantages may make a great deal of sense.34 Again, there cannot be hard-and-fast rules. But what makes little sense is to fail to see the inter-relatedness of the six elements of risk
34 I am indebted to Ken Minihan, former Director of the National Security Agency, for many helpful discussions on these points.
management displayed in Figures 2.2 and 2.3. What Figure 2.3 shows is that, depending on overall strategy, the six elements may be either complements or substitutes for each other.35 For certain problems, agility can substitute for warning. But in others it complements warning. This is a useful insight because it offers a language for building a warning system that fits in with overall risk management and strategy. It can help to make coherent what is too often incoherent: the building of systems from technology, processes, organizational structures and people that fit together in an overall strategy. And it can provide an understandable language and framework for discussions with senior leaders about what they want and do not want.

Figure 2.3: Two risk management profiles for US defense. [Two emphasis profiles – proactive and conservative – across the six approaches: isolating, smoothing, warning, agility, alliances and environment shaping.]

In many other disciplines agility and warning are becoming complementary rather than substitutes for each other. Epidemiological warning systems are increasingly linked to quick actions to stem the spread of diseases. The SARS virus of 2003 killed fewer than 800 people worldwide. This was because the warning triggers immediately dispatched teams around the world to quarantine, treat and contain the virus. Hedge fund traders are using their warning systems – currency and stock fluctuations on computer screens – to make quick moves in the markets.
35 Technically inclined readers should know that a calculus of complements and substitutes has been developed in mathematical economics. See D. M. Topkis, Supermodularity and complementarity (Princeton University Press, 1998).
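For readers who want the formal statement behind footnote 35, the standard textbook definition (not a construction of this chapter) runs as follows. Let $f(w, a)$ be the payoff from investing $w$ in warning and $a$ in agility. The two are complements if $f$ has increasing differences:

\[
f(w', a') - f(w, a') \;\ge\; f(w', a) - f(w, a)
\qquad \text{whenever } w' \ge w \text{ and } a' \ge a,
\]

that is, more agility raises the marginal payoff to warning (for smooth $f$, this is $\partial^2 f/\partial w\,\partial a \ge 0$). If the inequality is reversed, the two are substitutes: spending on one lowers the marginal value of the other.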
None of this, of course, is to argue that US national security should move to a proactive warning system because epidemiologists and hedge fund traders have. But neither is it to argue that the US should not. Rather it is to emphasize that strategy should drive risk, rather than the other way around. As strategy changes, so does risk. But too often this relationship is not considered. In the Cold War, when avoiding nuclear destruction was paramount, there was a clear understanding that certain strategies, like rolling back the iron curtain in Europe or invading North Vietnam, were very risky because they might escalate to consequences that were unacceptable to Washington. It is very interesting that the science of manipulating risk for strategic gain (escalation theory) has fallen very much out of favor. Countries still take bold actions. But since the end of the Cold War the use of escalation as a framework has almost died off. Over the last ten years there have been hundreds of books written about war and deterrence. Nearly all aspects of these subjects have been covered. But there is almost nothing written about escalation. Escalation dynamics are at bottom assessments about risk and strategy. There is a great need to rethink what these look like in the twenty-first century.
The warning value chain

Another analytical tool for building a warning system is the warning value chain. The value chain (Figure 2.4) depicts warning as a set of value-creating activities, such as information collection, analysis/assessment and distribution. Each activity in the chain can potentially add to the benefit that consumers get from it. Likewise, each activity can add to costs. The value chain breaks down warning as if it were a business with constituent parts. It is a way to get away from the problem of having any one division going its own way, without regard for what it contributes to the overall value of the whole enterprise. The value chain can deconstruct a very complicated problem into smaller pieces for individual analysis. Otherwise discussions of warning and warning failure tend to have a circular character to them that sometimes borders on the scholastic.36
36 See R. K. Betts, “Surprise, scholasticism, and strategy: a review of Ariel Levite’s ‘Intelligence and strategic surprises,’” International Studies Quarterly,33 40 Paul Bracken
Collection Analysis Distribution
Figure 2.4: A warning value chain.
A major conclusion of Arad’s chapter on intelligence management is that much of the literature on intelligence failures focuses dis- proportionately on the middle box: analysis. Further, he argues, most of the theory and framework of warning has tended toward this part of the value chain, ignoring the importance of collection systems. The warning chain offers some additional insights into warning also underscored in Arad’s paper. It is not only the pieces of the system that matter. It is also how well the whole chain is managed. Managing this horizontal organization is of critical importance, and tends to be the unwanted stepchild of the intelligence literature. A vivid example of the importance of this horizontal management of the warning process should be one of the premier cases of intelli- gence failure and rapid reform. Usually only the first part of the story – the failure – is emphasized. US warning performed disastrously at Pearl Harbor in December 1941. Yet only seven months later at Midway, in June 1942, it performed magnificently. The underlying technologies of radar, direction finding, and code breaking did not change in seven months. No new reconnaissance airplanes were added to the air fleets. Nor did a flood of better-trained people suddenly appear, for training time took longer than this. What had improved was the horizontal management of the system in Figure 2.4. A new emphasis on collection and a tighter lateral integration of the pieces of the value chain were instilled in the months after Pearl Harbor. In 1941, the connections between the boxes in Figure 2.4 were informal and sloppy. Before Pearl Harbor, President Franklin D. Roosevelt created a dozen special intelligence units which reported to him personally, and which had only the loosest coupling with each other, or with the military forces.37 At the tactical level, the
37 See J. E. Persico, Roosevelt’s secret war: FDR and World War II espionage (New York: Random House, 2001).
Through principled intervention, George Marshall streamlined the top of the high command after Pearl Harbor. He cut down on the fragmentation of operations and put the military and intelligence on a war footing. Roosevelt went along with this. But the most important changes bearing on warning were in the Pacific. In seven months the warning value chain was completely overhauled. The collection scan rate was sharply increased through more frequent launches of reconnaissance aircraft from Pearl Harbor and from the carriers. Collection had a wider geographic scope as well, looking for the enemy in many different areas. Analysis of the collected data was vastly improved. It was accelerated and put into reporting formats useful to the senior leadership of the navy in the Pacific and to the combatant commanders. The intelligence report sitting in a desk in-box became a thing of the past. Conceptually, in Figure 2.4 the output of the distribution system fed into a fourth box to the right. It went to the navy’s high command, and to the combat officers fighting the war. There is a very important distinction here, one that many discussions overlook. The overhauled warning chain in the seven months leading up to the Battle of Midway was integrated with operations.38 It was not a case (only) of integrating intelligence with itself, but of integrating it with military operations. This is what provided the disciplining effect that dramatically improved the horizontal management of warning. Many small actions made a big difference. Cryptanalysts started working with traffic analysts; the call signs of Japanese ships were painstakingly built up into shared files; and intelligence analysts worked more closely with the force commanders.39 The overhaul was a bottom-up, decentralized overhaul of the system. People changed their behavior, undoubtedly because of the gravity of the war. But they did not require a two-year national commission to tell them what to do. The navy’s senior leadership did its part to direct the changes, but the thousands of micro improvements by people in the system were the real drivers behind the tightening of the value chain.
38 The importance of this has been emphasized to me by Rich Haver, a long-time US intelligence specialist. He points out the same pattern in the Battle of the Atlantic. 39 A glimpse into this is given in C. Boyd, “American naval intelligence of Japanese submarine operations early in the Pacific War,” The Journal of Military History, 53 (April 1989), 168–89.
Conclusions
There are two major conclusions of this paper. First, the way to build a warning system is to use a vocabulary that leads to a productive conversation among the people who are going to use it. Second, it is crucial to recognize that warning is one piece of a larger risk management system. Beyond this, there are several useful management-oriented analytical tools, including the capacities–strategic environment framework, warning value chains and the way different risk management approaches are substitutes for or complements to each other. Strategy can and should be the driver behind warning. Most people understand this, but there has been relatively little research done to develop ways to implement the insight in a practical way. One consequence has been a tendency toward increased departmentalization and specialization to improve warning. But absent anyone assigned the integrative job of bringing organizational and strategic coherence back to risk management, performance can decline very quickly. Groupthink may be a problem in some instances, but its opposite – fragmentation without any common operational picture of the risks that should be worried about – has come into its own as the complexity of our organizations has increased. While there can never be a guarantee against surprise, a more sober and managerial approach to building warning can lead to major improvements in the performance of these vital systems.

3 Intelligence management as risk management: the case of surprise attack
UZI ARAD
Surprise attack and the challenge of intelligence early warning are familiar topics with which both academic and professional intelligence circles have been grappling extensively and intensively. The reason is clear: a military attack on a state is a painful experience that often causes significant military and other damage to the target, sometimes to the point of a crushing defeat. If an attack is carried out without the target’s prior anticipation, it becomes a surprise attack. Its damage is more severe than if the target had been at a higher level of preparedness. The surprise factor significantly increases the aggressing party’s chances of military success, because it prevents the target from utilizing the full potential of its military and other capabilities, which in other circumstances might have been able to contain the attack and cope with it effectively. Aside from the military–technical dimension, surprise attacks usually produce a panic and/or a paralyzing shock that embraces all the systems of leadership and command, and may even spread to the entire nation. This element may be no less significant than the military advantage provided by the surprise.40 Surprise attacks, such as Pearl Harbor in 1941, the Yom Kippur War in 1973 or the terror attack of September 11, 2001, were
40 In the nuclear era, a further category of surprise attack has been added whose occurrence may well be catastrophically destructive – namely, a comprehensive nuclear first strike. Early warning of a nuclear surprise attack is, then, a different subject – related of course to the general issue of surprise attack and how to deal with it, but distinct because of the unique aspects of the delivery of nuclear weapons. This resides on a different plane than the threat of a low- or medium-intensity surprise attack, whose execution is not immediate. Therefore, we will not linger here on this variant of surprise attack and the means, theory of early warning, command and control systems, and issues of deterrence that are all aspects of coping with it.
perceived as traumatic events on the national level and became an ongoing focus of public and scholarly attention. Hence, in the modern era, states have set up intelligence mechanisms aimed at preventing surprise attacks. The objective is not to prevent the attack itself (the means for this usually exist in the political or military domain), but to neutralize the element of surprise in the planned attack. The need to forestall the fatal combination of attack and surprise has made the task of early warning the cardinal responsibility of intelligence agencies. It is noteworthy that despite all the attention given to the problems entailed in early warning of a surprise attack, there has been no explicit, systematic treatment of the subject by the discipline of risk management. This is somewhat surprising, given that practitioners of intelligence and intelligence management are trained in probability thinking and in the need to use various tools in assessing situations with regard to early warning. The modes of thinking about risk management and related concepts are especially well suited to the surprise-attack problem. It is not only by chance that risk analysis has not been established as a viable tool against surprise attacks. Several inherent limitations obstruct the use of existing risk-management tools in the context of intelligence; these difficulties emanate from the unique characteristics of the intelligence product. Despite these difficulties and limitations, intelligence organizations must seriously consider the use of tools and analytical methods designed for the handling of uncertainty that are provided by risk management. Close examination of the various elements in intelligence work discloses that intelligence organizations already tacitly implement fundamentals of risk assessment and management. In addition to probabilistic measurements, evaluation of risks and the use of scenarios, there is wide use of explicit risk-control and risk-management tools, such as backup systems, and risk reduction via diversification and redundancy. Yet all these elements have not coalesced into a comprehensive risk-management doctrine for intelligence. The position of intelligence agencies today resembles that of other fields a decade ago; fields that now rely extensively on risk management. The insurance and banking industries, which had been using an intuitive approach to risk, have developed over the last decade a highly elaborate risk-management culture as a key decision support tool in their day-to-day behavior as well as in their strategic management.41 Both insurance and banking operate several parallel risk-management systems to handle various aspects of their risk exposure, to hedge themselves from environmental risks emanating from their investments and credit portfolio, and to control internal risks stemming from organizational activities and operations. Like intelligence, the banking and insurance industries play a critical role in national stability and therefore attract public attention. A series of high-profile catastrophes in these industries led to massive public intervention that initiated a search for long-term stabilizing mechanisms, mainly as a tool for the prevention of surprise. Risk management was found to be a solution to the problems that caused the catastrophes in those fields.
Comprehensive regulation has been developed and implemented for the insurance and banking industries, regulation that has set an international standard of risk-averse culture.42 The current trend of reform toward consolidation and centralization of intelligence communities aims to create huge intelligence organizations that require the introduction of new tools and methods for their management. The experience gained and the lessons learned from other fields that trod the same well-worn path underline the benefits expected from the implementation of risk-management tools. Since intelligence already employs various elements of risk management, all that is needed is to move one step forward to the formulation of a comprehensive risk doctrine for intelligence.
Intelligence and risk management: basic assumptions and definitions
By definition, intelligence is a national risk-management mechanism built to cope with the risk of a violent attack. The very existence of intelligence organizations reflects the presumption that there is an adversary that might consider the option of attack. In intelligence and early warning, the uncertainty regarding attack is not binary – that is, not merely whether an attack will occur or not. The uncertainty refers to questions
41 It also should be mentioned that this development has been accompanied by a significant decrease in the price of IT networks that provide the extensive data needed to support advanced risk management. 42 Mainly the Sarbanes-Oxley (SOX) and Basel II regulation for banking, and the Solvency regulation for the insurance industry.
of timing, places and modes of attack, and the various combinations thereof. These combinations are the risks and uncertainties facing early-warning systems. In this wide and highly complex space of uncertainty, risk management is a tool for systematic, comprehensive, and pre-emptive analysis of such a wide range of possible attacks. In its most advanced phase, risk management is a powerful instrument that provides a comparative overview of the entire system and of the processes and interactions within it. Recent intelligence failures have shown that this wide-angle perspective is of the utmost necessity. There are several obstacles to the use of risk management in intelligence work. In general, there are two types of risk: environmental risk, and an internal one that results from flawed internal operations, called operational risk. The main obstacle to implementation of risk management in the realm of security is the nature of security risks, since environmental risks are defined as threats. Contrary to environmental risks in other fields, threats are a special type of risk since they derive from malicious intent. This characteristic creates the first obstacle to using risk management for defense issues, since it lowers the relevance of statistics and probabilistic distributions. It is unnecessary to point out that the adversary does not commit himself to statistics. For these two types of risk, different types of risk-management tools have been developed. There are tools that focus on possible losses stemming from the interaction between the organization and its environment, and there are tools for the management of potential losses derived from internal flaws in the operation of the organizations, called Operational Risk Management (ORM). Many organizations operate separate units for the management of each type of risk. This division between two different families of tools represents another obstacle to using existing risk-management tools in intelligence. Most organizations utilize risk management by identifying and controlling events that may cause losses to their core business or core activities. The core business of intelligence is managing the risk of attack; this is an inherent tautology that erodes the effectiveness of risk management in this context. In intelligence, it is very difficult to separate these two aspects, since most tasks include operational aspects while concurrently handling environmental risks. Deception is an example of an environmental threat that turns into an operational risk if it is not identified correctly. In this case the adversary does its utmost to ensure that surprise is achieved by neutralizing and sabotaging the tools designated to warn of the impending attack. The aggressing party, for whom surprise may be a necessary condition for the attack, is able to invest considerable effort in subverting the target’s early-warning capabilities. Sometimes this involves actions aimed at damaging and disrupting specific links in the early-warning system.
The target’s first line of defense – its early-warning capability – is also the element that is vulnerable to disruption, even before the military attack is launched.43 This example can be analyzed both as internal operational risk and as external environmental risk. Early warning is a product of a sequential process, a characteristic that makes Operational Risk Management the more applicable approach for intelligence. ORM emphasizes organizational processes, procedures and mechanisms. Banking, insurance and intelligence have an enormous responsibility to support national stability. In all these fields, it was found that lack of transparency, of control and of risk-averse planning led to tremendous losses. Since there are still no studies that deal with the issue of risk management in the context of intelligence, analyzing intelligence problems through the prism of ORM can also be supported by existing studies about intelligence and intelligence failures. A review of this extensive material enables a detailed illustration of the intelligence process, also called the intelligence cycle,44 to map risks along that cycle and to underline measures developed to handle them.
43 In certain cases, very lengthy exercises in deception are involved, some of which last several years. For example, in the case of Egypt’s project of deceiving Israel prior to the Yom Kippur War, some claim that President Sadat had already initiated misleading steps in 1971, two years before the attack itself, when he publicly declared his intention to attack Israel. He then began carrying out cycles of offensive, followed by defensive, deployments of his army along the front facing Israel, so that the actual deployment on the eve of the October 1973 attack seemed part of the routine, and the early-warning information was perceived as a false alarm, a kind of “crying wolf”. In general, surprise attacks can be regarded as disastrous situations that develop slowly. Human experience shows that people tend to ignore or delay a response to disasters that develop slowly, and have greater vigilance regarding those that develop rapidly. See E. Kam, Surprise attack (Tel Aviv: Ma’archot, 1990), p. 49 (Hebrew). 44 The term “intelligence cycle” refers to the different parts of intelligence work as interrelated stages of a production process. Three main stages are commonly identified: collection, assessment–analysis, and dissemination. However, sometimes a subdivision with different terms is used to describe the process. Examples of a different description of the intelligence cycle may be found in W. E. Odom, Fixing intelligence (New Haven: Yale University Press, 2003), pp. 12–13; M. Herman, Intelligence power in peace and war (Cambridge University Press, 1996), pp. 283–85.
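To illustrate the sequential-process point made above, here is a small Python sketch: a genuine signal must survive every stage of the intelligence cycle, so the end-to-end probability of warning is the product of the stage probabilities. The stages follow the text; all the probabilities are invented for the example.

# Early warning as a sequential process. Each value is the probability that a
# genuine signal survives that stage (hypothetical figures).
cycle = {
    "collection": 0.70,     # the signal is collected at all
    "validation": 0.90,     # a genuine signal is not filtered out as noise
    "analysis": 0.80,       # analysts interpret the signal correctly
    "dissemination": 0.95,  # the warning reaches decision makers in time
}

p_warning = 1.0
for stage, p_survive in cycle.items():
    p_warning *= p_survive
    print(f"after {stage}: cumulative probability of warning = {p_warning:.2f}")

# Even individually strong stages compound into a fragile whole
# (0.70 * 0.90 * 0.80 * 0.95 is roughly 0.48), which is why risk must be
# mapped along the entire cycle rather than at any one stage only.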
Academic research has yet to adopt a comprehensive theoretical approach to surprise attacks as a general risk. The literature on the surprise-attack problem tends to deal with the issue from other standpoints: historical treatments relate to a single incident as a test case from which general principles are derived, while other studies address the phenomenon inclusively and analyze it by means of political science tools. The studies have a clear tendency to point to a central, dominant factor as responsible for failures of early warning. This search for a key factor, on which the early-warning capacity supposedly depends, has led to a less thorough treatment of the complexity of intelligence work in general. Indeed, the research on intelligence failures has not progressed much over the past forty years. The focus on analysis and assessment has fostered a neglect of the other stages of the intelligence cycle, not only from an analytical–theoretical perspective but also from a practical standpoint. Collection is the main victim of this dynamic, since there is little point in investing effort and resources if inherent flaws at the assessment and analysis stage subvert the collection endeavors. Neglect of the other parts of intelligence work actually reflects a failure to adopt a comprehensive view of the process, from collection coverage to patterns of consumption. The preoccupation with a certain segment of the process, i.e., analysis, has prevented the development of the theoretical and empirical base needed to formulate a comprehensive theory of early warning. Concomitantly, there has been a failure to treat the subject as a management problem that can be analyzed within the disciplinary framework of risk management. This does not mean that there have been no studies scrutinizing the entire intelligence process or analyzing certain segments of the process in terms of analysis and management of risk. However, none of this was done explicitly, and we still have no theory of early warning that is based on risk management. Analyzing the surprise-attack phenomenon as a management problem of coping with risk requires a much higher-resolution analysis of the intelligence system than has been done in the surprise-attack literature. Therefore, what is needed is a
breakdown of the system into its component parts, so that each part’s points of strength and weakness can be considered separately and then reviewed comprehensively, while mapping the links between these points of strength and weakness throughout the entire process. This type of analysis also reflects the past penetration of risk management into many fields, such as engineering, health care and the environment. In these cases, the use of risk analysis developed as a transition from dealing with discrete risks to sophisticated systems of aggregate risk analysis. This transition was motivated first and foremost by financial considerations. It appears that the standardization of a system’s entire range of risks, articulated for the purposes of comparative assessment, as well as of the costs entailed in managing them, led to a higher efficiency in dealing with these risks.45 This paper was written at a time of an unprecedented plethora of reports and studies published in different parts of the world on today’s intelligence issues. This material is, of course, an outcome of the surprise attack of 9/11. It is also related to the events that came in its wake: the Iraq war can be viewed as, among other things, a consequence of the shock of the terror attack, in which intelligence failures were involved. These malfunctions prompted the establishment of several investigative commissions that, over the past two years, have yielded a good many instructive reports, penetrating the recesses of intelligence activity while considering the issue of early warning in broad terms. These reports, most of which have appeared in the United States, but also in other countries having considerable interest and capabilities in intelligence such as Britain, Israel, Australia and India, have significantly enriched the thinking and modus operandi of intelligence in the twenty-first century. Thus, our discussion turns to a survey, mapping and analysis of the means and instruments recommended by this up-to-date literature regarding the different stages of the intelligence cycle, while weighing these capacities in terms of risk analysis and management.
Collection: the first phase of the cycle
When surprise attacks occur, it is rarely claimed that the weak link was the collection link. On the contrary, much of the academic thought on
45 C. Marshall, Operational risks in financial institutions (New York: John Wiley & Sons, 2001), pp. 34–35.
the subject centers on the counterintuitive thesis, according to which surprise attacks have occurred even in situations where the collecting bodies functioned properly and early-warning information was seemingly abundant. That, for example, is what Roberta Wohlstetter argued in her pioneering work on Pearl Harbor,46 as did subsequent studies pointing to alleged successful collection in the Yom Kippur War47 and other familiar examples. There exists, however, almost no available study of a surprise event that was foiled, or, more precisely, of an early warning that was conveyed properly. A classified study conducted by this author reveals that in many surprise events, including those where it is claimed that there was no lack of early-warning information, the picture of collection coverage was often inadequate and sometimes even exceedingly primitive. In the case of Pearl Harbor, for example, the gaps in coverage regarding Japanese intentions, capacities and military activity were glaring, thus fundamentally undermining Wohlstetter’s position. In the Yom Kippur War as well, the picture of collection coverage was problematic, either because of errors in the use of collection capabilities or because the situation was not identified as an emergency. More attention to the change in routine would have increased preparedness and fostered the use of additional collection instruments that were not exploited, or a greater frequency of sampling means such as photographing sorties. Maximum utilization of the sources might have tilted the balance. In addition, there were problems in the areas of effectiveness of collection and cleanness of the information. With regard to other thoroughly studied surprise attacks, the main flaw appears to have existed from the start at the collection stage. For example, Barton Whaley attributes the surprise of Operation Barbarossa to the German capacity for deception.48 The invasion of Normandy is yet another example that achieved surprise due to deception, among other factors. The same holds true even for the Yom Kippur War, which also involved an Egyptian deception effort. In any case, it is abundantly clear that the only way to crack deception is by intelligence-collection penetration
46 Wohlstetter, Pearl Harbor, as cited in note 33. 47 U. Bar-Joseph, The watchman fell asleep (Tel Aviv: Zmora-Bitan, 2001), p. 407 (Hebrew). 48 B. Whaley, Codeword Barbarossa (Cambridge, MA: MIT Press, 1973).
of the system perpetrating the deception. This explains the acute dependence on collection capability for the existence of significant early-warning capabilities. In analyzing surprise attacks, a dire situation for the target is one where the surprise is part of a deception enterprise. As mentioned, studies I conducted that compared early-warning successes with early-warning failures clearly revealed that in many more cases than are believed to exist, the seeds of calamity for the early-warning failure could already be located at the collection stage. The investigative reports on the 9/11 events49 and the report of the Senate committee that dealt with intelligence on Iraq50 state explicitly that the source of failure was intelligence collection. To be sure, other difficulties and failures have been noted, but in the cases of the 9/11 attack and the intelligence prior to the 2003 war in Iraq (while not a surprise attack, it is widely considered an intelligence failure), it was not claimed that the failure stemmed only from the assessment process and that intelligence agencies had accumulated an abundance of early-warning information. Rather, the glaring lacunae were at the level of information – its quality, its definiteness, its cleanness and its quantity. The excessive concern with the advanced stages of the production cycle of intelligence, at the expense of exploring the problems of collection, has contributed over time to the emergence of acute intelligence-collection shortfalls, as revealed in 2001 and 2003. Some say the challenges of collection are daunting by nature, and no doubt some aspects of collection are extremely complicated. The Human Intelligence field (HUMINT) is a well-known example of the inherent problems and difficulties. The penetration of levels that yield intelligence access to states’ strategic intentions entails the penetration of circles to which only small numbers of secret-holding participants
49 US Select Committee on Intelligence, “Joint inquiry into intelligence community activities before and after the terrorist attacks of September 11, 2001,” Report of the US Senate Select Committee on Intelligence, Washington, DC (December, 2002), www.gpoaccess.gov/serialset/creports/pdf/fullreport_errata.pdf. 50 The Select Committee on Intelligence, “Report on the US intelligence community’s pre-war intelligence assessment on Iraq,” (2004), www.globalsecurity.org/intell/library/congress/2004_rpt/iraq-wmd-intell_toc.htm.
belong.51 In cases where a surprise attack is being planned, this number is even smaller, since planning an attack of this kind requires maximal degrees of compartmentalization and concealment.52 In addition, HUMINT suffers from all the fluctuations and shifts that characterize human interaction – in both the quality of the source and the quality of the operator. Along with this dependency come additional variables, such as the condition of the theater, culture and language. These aspects explain why it is difficult to rely on HUMINT sources as dominant means of early warning for attacks. The building of a system doctrine for collection and early warning that is based on HUMINT sources is an ambitious goal, whose definitive achievement remains uncertain. Hence, it was natural for intelligence systems to seek to limit the uncertainty entailed in attaining information via HUMINT means by supplementing them with other, primarily technological, intelligence-collection tools. Unquestionably, in the modern concept of collection, the wise use of all-source intelligence – HUMINT, SIGINT, VISINT/IMINT, OSINT and MASINT53 – is in practice an enterprise of risk management. Every collection tool has limitations, and only judicious use involving a combination of tools will enable a certain sphere of collection to constitute a solution to a blind spot or disadvantage in a different sphere of collection. It is not only a matter of blind spots; it is a multidimensional process that also takes into account political, financial, and technological aspects, and their attached risks. In terms of comprehensive early-warning capability, the source of the warning does not make much difference. Specialization, which has often led to an organizational separation according to types of sources, was created and is perpetuated because of the need for
51 An extensive discussion about intelligence assessment of intentions vs. assessment of capacities appears in Kam, Surprise attack, p. 49, as cited in note 43. Kam notes that in many cases of surprise attack, the assessment of capacities prevailed over the assessment of intentions, so that in cases where capacities were assessed as low, the concern about intentions declined. 52 This observation refers to information regarding the exact timing, place, course of action and goals of the attack. As mentioned earlier, deception measures are taken since it is impossible to conceal actions related to the planned attack. 53 SIGINT is Signals Intelligence, received from radio-magnetic or electro-optical transmissions; VISINT/IMINT is intelligence based on data gathered by optical or electro-optical means; OSINT is intelligence gathered from open sources; and MASINT is intelligence received from chemical and optical tests made on samples from observed subjects.
professionalization. This separation fostered the conjoint use of a number of agencies with collection capabilities, and sometimes also with functions of assessment and analysis. In collection terms, organizational separation reflects a valid principle of specialization and professionalization that fosters a comprehensive collection network, so that one collection capability provides what another collection capability is unable to cover. It is not, however, true that the HUMINT system is good for information collection about intentions, whereas the SIGINT tools and the others pertain to capabilities or actions. The proper management of collection, as with risk management, requires an encompassing, integrative vision that incorporates the strengths of certain means to compensate for the flaws of others. From this standpoint, HUMINT sources are critical, but time and again, SIGINT methods enable penetration and access to the level of intentions no less than HUMINT sources with direct natural access. For example, the capacities demonstrated by British intelligence in World War II in the field of cracking codes enabled the reading of the German battle orders on the eve of the Battle of El Alamein – an example of SIGINT penetration that reached the level of intentions and the orders that stem from them. Therefore, it could be expected that this practical recognition of the critical nature of full and systematic collection coverage, and the approach to risk management that identifies as a risk every major lacuna in the coverage picture, would lead to a greater concentration of effort on attainment and exhaustion of all the potential sources. Even if it means redundancy, it would not do to leave so many “bald patches” in knowledge to the point where a significant, aggregate risk emerges. Nevertheless, given the absence of a systematic doctrine on the collection enterprise or a sense of the critical need for such a formulation, the objective difficulties entailed in the collection endeavor and the heavy emphasis on assessment and analysis all contributed to the emergence of the considerable gaps apparent in recently examined intelligence events. Such gaps caused a substantial loss of early-warning capability.
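A hypothetical Python sketch of the coverage logic just described – treating every major lacuna in the coverage picture as a risk – might look as follows. The discipline acronyms are from the text, but the early-warning questions and coverage sets are invented for the example.

# Each discipline covers some early-warning questions; whatever no discipline
# covers is a "bald patch" to be flagged as an aggregate risk.
coverage = {
    "HUMINT": {"leadership intentions", "attack timing"},
    "SIGINT": {"leadership intentions", "orders of battle", "unit movements"},
    "IMINT": {"unit movements", "force deployments"},
    "OSINT": {"mobilization notices", "political signaling"},
}
required = {"leadership intentions", "attack timing", "unit movements",
            "force deployments", "logistics buildup", "political signaling"}

covered = set().union(*coverage.values())
bald_patches = required - covered
print("bald patches (aggregate risk):", sorted(bald_patches))

# Redundancy is a hedge, not waste: a question covered by two or more
# disciplines survives the failure or deception of any single source.
hedged = {q for q in required if sum(q in s for s in coverage.values()) >= 2}
print("hedged by redundancy:", sorted(hedged))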
The common explanation is that the relative convenience and security afforded by technological means of collection, compared to the volatility and difficulty of using and exhausting HUMINT sources, created a strong structural bias among Western intelligence communities, whose strengths are on the technological side, in favor of SIGINT sources at the expense of HUMINT ones. If this indeed occurred – and, to repeat, the recent investigative reports emphasize it in their findings – then it is clear what sort of correction is needed, as the recommendation sections of these reports indeed indicate. HUMINT systems are operated under the assumption that every HUMINT source should be viewed, prima facie, as problematic. In HUMINT, each source has a profile of problems characterizing it, which includes the value of the intelligence material yielded, the liaison activities, frequency of contact, the incentives for operating the source, and so on. At the same time, HUMINT sources have always supplied intelligence material with early-warning value for which there is no substitute. At times, HUMINT early-warning systems operate a source considered the “crown jewel” – an agent with direct access to relevant information about an approaching attack. These sources are given special attention and are operated with what are likely to be the best available methods and means of communication, to exploit their early-warning potential fully and continuously. However, the more common situation is that states do not possess in their collection arsenal even a single source of this kind. This, of course, reflects the problematic nature of recruiting and operating HUMINT early-warning sources. It also underlines the need to give top priority to developing such sources.54 There are also cases in which a source that was considered a “crown jewel” turned out, at the moment of truth, to be counterfeit. Reality
54 The American investigative reports and commissions (the 9/11 report, the report of the Senate committee that investigated intelligence preparedness regarding discovery of WMD in Iraq, the Silberman-Robb Report, www.wmd.gov), and the report of the British Butler Commission (www.butlerreview.org.uk/report/index.asp) deal extensively with the problem of developing HUMINT sources. Apart from the operational difficulty in infiltrating or developing agents at the highest levels of interest (in this case of Saddam Hussein’s regime and Al Qaeda cells), the above reports also refer to the operational/professional difficulty of an agent’s activity in this environment, and, moreover, the dangers involved in handling him or her. The reports note the Western intelligence services’ avoidance of risking their people by sending them on missions to dangerous states and regions. The American report on the WMD issue in Iraq calls for reconsidering this policy and re-examining whether in this case intelligence needs outweigh the risk entailed by the direct use of agents in such dangerous locales. On all the aspects concerned with improving HUMINT, see also: S. Chamliss, “Reforming intelligence,” The National Interest, No. 74 (Spring, 2005).
has shown that in practice there have also been cases where early-warning sources became double agents or were operated from the start as deceptive sources. Presumably, the side that plots a surprise attack will not gamble with the possibility of its aims being exposed. Sometimes it will act to disrupt the collection capability that it confronts, including the HUMINT capability. The ultimate disruption in this context is the doubling of an agent or the planting of a deceptive one. The problem of deception as an adjunct strategy for executing a surprise attack is especially challenging, since deception is aimed at damaging the collection capacity of the intended target. When the misleading information is fed directly into the collection channels that the intended target believes are at its disposal for receiving an early warning, not only is the early warning not received, but the target swallows the misleading or deceptive information, thus bringing down upon itself an early-warning failure. The understanding that such a special risk reveals a particularly vulnerable link in the early-warning system should justify a policy of reducing risks by constructing a filter with great powers of validation regarding the reliability and believability of the material collected, particularly HUMINT material. Yet, in practice, the validation function, and its failure, is given a relatively modest place in the analytical literature and in the programs aimed at correcting and improving intelligence functions. It appears that this recognition was more deeply ingrained in the British system than, for example, in the early-warning systems of the United States and Israel. But one should recall that institutionalized validation mechanisms are also disaster-prone, especially in cases where collection agencies are eager to continue operating sources even if there is doubt about their quality and also, often, their trustworthiness. Therefore, it is preferable to separate the validation team from the operational one that actually runs the agent.
Some of the authors of surprise-attack studies choose to isolate the bits of information that seemingly warned of the impending attack from the sum total of information that was at intelligence agencies’ disposal prior to the surprise. This is a distortion, since those same bits of information came along with a large amount of false or deceptive data, which was disseminated without sufficient filtering or sifting. In these situations, the total intelligence picture usually underplays the early-warning data. To use Wohlstetter’s classic dichotomy, which distinguished between signals and noise: insofar as noise or flawed sources exist, a much larger number of quality sources is needed so that their total weight will surpass the misleading weight of the false, deceptive, or tendentious sources. That is why focused identification of the inferior sources as an acute risk is so essential; it depends upon upgrading and developing the validation functions. While this function presumably exists in all intelligence organizations, a comparative observation shows that such activities have not been sufficiently upgraded, and not enough intellectual and methodological creativity has been devoted to them in order to make them an effective filter. The assessment stage is generally regarded as the main one responsible for intelligence failures, and considerable resources and thought have been devoted to improving it. Similar intellectual, management and planning efforts are needed for the upgrading of collection and of the accompanying validation process.
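Wohlstetter's signal/noise point can be given a rough Bayesian reading. The Python sketch below is an interpretive illustration, not a method from the text: each source's report shifts the log-odds that an attack is coming, and all the likelihood ratios and the prior are invented.

import math

# Deceptive sources push the odds the wrong way, so their weight must be
# outvoted by genuine ones.
def posterior_log_odds(prior_prob, likelihood_ratios):
    log_odds = math.log(prior_prob / (1.0 - prior_prob))
    return log_odds + sum(math.log(lr) for lr in likelihood_ratios)

prior = 0.05                # assessed base rate of attack in the period
genuine = [3.0, 3.0, 3.0]   # quality sources: reports 3x likelier given attack
deceptive = [0.25, 0.25]    # planted sources engineered to point away

print(f"clean picture:  log-odds = {posterior_log_odds(prior, genuine):+.2f}")
print(f"with deception: log-odds = "
      f"{posterior_log_odds(prior, genuine + deceptive):+.2f}")

# A deceptive source at likelihood ratio 0.25 cancels more than one genuine
# source at 3.0 (|log 0.25| > log 3), which is why culling inferior sources --
# validation -- matters as much as adding new collection.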
The indicator-analysis method
An approach developed by intelligence agencies mainly for early-warning purposes is the indicator-analysis method. It grew against a background of recurrent cases in which early-warning data on intentions was completely lacking, or in which early-warning information was insufficient, quantitatively and qualitatively, to reach the critical threshold for activating an alarm. The indicator-analysis method is an important tool for streamlining the early-warning system; it is usable in all the collection disciplines and enables a priori definition of the collection tools that are most suitable for monitoring each indicator. The basic assumption behind the indicator-analysis approach is that no attack is possible, not even one planned as a surprise or boosted by deception, that does not emit signals, at least in terms of deviation from routine. Therefore, intelligence organizations have developed early-warning systems based on the indicator-analysis method. In her book Anticipating Surprise, Cynthia M. Grabo presents a survey of the logic underlying this method.55 If classic intelligence instruments for clarifying intentions and tools for locating and identifying actions and movements for executing an attack are used conjointly on the basis of
55 C. M. Grabo, Anticipating surprise: analysis for strategic early warning (Lanham: University Press of America, 2004), pp. 45–47.
indicator analysis, the chances of surprise are considerably reduced. This involves, then, risk management via a strategy of hedging, in the framework of two complementary or reinforcing systems.56 That is, replacing a single, central system with the parallel use of two different and separate, but complementary, systems should significantly lower the probability of surprise. The indicator-analysis approach assumes that an early warning will be received about the opponent’s intentions, and the means for monitoring its activity will then raise the necessary flags. As a set of definitions and terms that are prepared and used systematically, this method is the fullest manifestation of the principles of risk management in the intelligence world. Likewise, the system transcends organizations with their different organizational cultures. For example, the Central Intelligence Agency (CIA), the Defense Intelligence Agency (DIA) and the Bureau of Intelligence and Research (INR) can all work according to the same list of indicators. Indeed, the system is aimed at constituting a cross-community standard for the early-warning endeavor. It is interesting, of course, that the theory of early warning according to indicator analysis requires appropriate collection and analytical preparedness. Using the method as a central pillar of early warning requires the systematic ordering of lists of indicators, and of appropriate collection tools for locating and monitoring them. The method also requires creating an index that integrates all the indicators identified in the data at any given time to determine the alert level. This demand is met by defining matrices that link the sets of indicators, both the more and the less important, to set the level of early warning. There are situations in which a limited group of indicators may prompt an immediate early warning. There are others that require a lengthy series of indicators to produce an early warning at the same level of urgency and criticality. The endeavor of linking lists of indicators and creating matrices, that is, the correlations between various indicators and the significance
56 It is also important to emphasize that the main task of intelligence limits risk management mainly to hedging measures. In other arenas, risk management is performed through hedging measures as well as contingency actions. Hedging measures are planned and taken a priori; contingency actions focus on containment and limitation of a risk’s consequences. J. A. Dewar, Assumption-based planning: a tool for reducing avoidable surprises (Cambridge University Press, 2002), pp. 124–25.
of these correlations, is a demanding task that also includes a dimension of probability assessment – namely, determining the frequency of an indicator or group of indicators that are important in terms of early warning. The main problem, which is also the major pitfall of the method, is that in many situations there are indicators that constitute a necessary condition for developing an attack, but that also appear with considerable frequency in routine situations. Under such circumstances, these indicators may lose their early-warning status. If so, the problem of salience is the main drawback of the system: what is the qualitative and quantitative composition that constitutes a critical threshold indicating an imminent attack, and only an imminent attack? Viewing this problem as a type of secondary risk – that is, the probability problem related to the salience of an indicator – provides a management tool for the early-warning system. In many cases, the early-warning system is managed according to probability findings that arise from the use of the indicator-analysis approach. Constellations that raise suspicion about a possibility of attack are refuted or corroborated by directing additional resources to a more in-depth investigation of the situation, whether by enlisting additional collection platforms, sometimes at the expense of other sectors, or by increasing the rate of sampling to the point of continuous coverage. The use of indicator analysis was developed historically as a tool for managing collection based on technological means of intelligence, which, in turn, enable continuous operation and hence yield huge quantities of information. This method enables defining an order of priorities that sets the collection tasks. Moreover, indicator analysis serves as a primary criterion for classification and routing of information before it is transferred for additional processing and analysis, thus helping cope with the influx of information without losing any early-warning implications. It is important to note that indicator analysis is used in the HUMINT field as well. This, however, involves sources that are not at the highest decision-making level but at lower levels, to which access is easier; such sources are capable of providing information that can be indicative of an imminent attack. An operational problem of the indicator-analysis method lies in the fact that surprise attacks are rare. This problem is common to different security systems, such as protection systems, in which the low frequency of the event may cause a loss of preparedness. In the case of the indicator-analysis method, the lists of indicators must be constantly updated and analyzed, even in regard to actions and scenarios that have never occurred. Of course, surprise attacks can occur because of a lack of differentiation in the indicator-analysis matrices according to the different threats and opponents.
Indicator analysis regarding a terror attack, for example, will naturally be different from that regarding the threat of a conventional surprise attack.57 The maintenance of early-warning systems according to indicator analysis requires, then, flexibility and adjustment to changing circumstances, and this affects the necessary collection deployment. This set of considerations may yield especially complex lists or matrices of indicators that sometimes greatly augment the difficulty of the early-warning endeavor. Intelligence systems that assigned the early-warning function top priority (for example, those of the United States and Israel, which have been subject to surprise attacks) are familiar with the problematic managerial nature of the parallel use of the classic early-warning system alongside the indicator-analysis approach. And it is not simple to maintain the two capacities over time.
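As a toy illustration of the indicator-matrix logic described in this section – invented indicator names, weights and thresholds, not any service's actual scheme – consider the following Python sketch.

# Weighted indicators are integrated into an index, and the index is mapped
# to an alert level. All names and figures are hypothetical.
INDICATORS = {
    "leave cancelled": 2.0,           # also fires in routine exercises (low salience)
    "forward fuel stockpiling": 3.0,
    "radio silence imposed": 4.0,
    "field hospitals deployed": 5.0,  # rare outside genuine attack preparation
}
ALERT_LEVELS = [(12.0, "imminent-attack warning"),
                (7.0, "heightened alert"),
                (0.0, "routine watch")]

def alert_level(observed):
    score = sum(INDICATORS.get(name, 0.0) for name in observed)
    for threshold, level in ALERT_LEVELS:
        if score >= threshold:
            return score, level

# Low-salience indicators must accumulate before they move the alert level,
# while a constellation including high-salience indicators crosses the
# warning threshold on its own -- the salience problem noted in the text.
print(alert_level({"leave cancelled", "forward fuel stockpiling"}))
print(alert_level({"forward fuel stockpiling", "radio silence imposed",
                   "field hospitals deployed"}))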
Research and analysis
Intelligence organizations usually adopt two approaches to the prevention of an early-warning failure at the analysis phase. The first is the use of advanced research methods that involve analyzing the types of problems with which intelligence must cope. The second is the use of tools and instruments designed to overcome the risk of missing an early warning. The first approach utilizes all the instruments of knowledge and general research methods of relevant fields, such as international relations, economics and regional expertise; all contribute to the quality of the analysis. Intelligence agencies seek to upgrade their work and adopt the most advanced technologies and tools to achieve “best practice.” Overall, the intelligence assessment bodies in the advanced countries have not lagged much behind universities in the use of innovative
57 It is also claimed that signals or indicators reflecting a terror attack will be weaker than those of a military attack. However, the recent Israeli experience has shown that under circumstances of a constant terror alert, intelligence agencies developed a highly efficient system of indicator analysis.
models and theories. In any case, there has never been an early-warning failure in the history of intelligence that was caused by a theoretical disciplinary lag in the assessment units. On the contrary, there are known cases of grave intelligence analysis errors and failures that occurred in Western services where the analysts met every academic or professional standard. Thus, special attention has been given to failures that appeared unique to intelligence analysis, including in the early-warning context. As noted, the work specifically examining surprise attacks and intelligence failures maintains that in almost every case, the analysts had abundant information with early-warning characteristics before the surprise. Because it is impossible always to blame the analysts for sloppiness, it was necessary to seek other explanations for failures. Most of the explanations pointed to human psychological or organizational phenomena at the stage of intelligence analysis, characterized as pathologies embedded in human nature and in social-organizational dynamics that are hard to escape. These pathologies have been analyzed extensively and can be divided into three main groups of problems:
– the problem of misperception of the material, which stems from the difficulty of understanding the objective reality, or the reality as it is perceived by the opponent;
– problems stemming from the prevalence of pre-existing mindsets among the analysts that do not allow an objective professional interpretation of the reality that emerges from the intelligence material;
– group pressures, groupthink, or social-political considerations that bias professional assessment and analysis.
There is an ongoing effort to develop and upgrade tools for dealing with these pathologies. Regarding the socio-psychological reality in processes of intelligence research and analysis, it is clear that the involvement of many people in the process creates numerous problems. There may be debates over assessment, cases where a minority position emerges against a majority one, or varied opinions and disagreements. In various cases of early-warning failures, it was found that an early-warning minority opinion was blocked or silenced by the majority opinion or by senior figures in the system. In hindsight, it turned out that if there had been greater awareness of the minority opinion or other opinions, the surprise might not have occurred.
The relation between majority and minority positions was one of the first pathologies to be identified in the intelligence analysis process, lending validity to the argument for creating a mechanism that would at least ensure that a minority opinion that leans to the warning side is given appropriate attention. This led to the practice of having a built-in mechanism to produce an opinion that opposes the prevailing or majority assessment, known as the Devil’s Advocate method. A designated unit is authorized to attack analytically the main conclusion and the main assumptions of every assessment, so that a diametrical opposite is generated for every analytical assertion. After determining the diametrically opposite value, the analysts search for evidence and explanations that would support the contrary conclusion. Historically speaking, this was one of the first methods to be applied to solving biases of intelligence assessment, and it was also implemented in Israeli military intelligence as one of the lessons of the Yom Kippur War. However, there have been cases in the Israeli experience where a Devil’s Advocate mechanism created divergence to the point of absurdity. In these instances, a perverse assessment emerged that was regarded as artificial, and did not resemble the real disagreements among analysts that are natural and desirable in terms of intellectual pluralism. At any rate, there are no known Israeli cases in which a Devil’s Advocate assessment was adopted. There are several other tools that have been developed to overcome the socio-psychological pathologies among analysts. This set of analytical tools is called Alternative Analysis (AA). Richards Heuer has summarized a list of them:58
Group A/Group B: Two groups of experts are asked to come up with separate analyses of a certain issue based on the same material. Another version of this technique is to ask a second group of experts to give an opinion on the analysis that a previous group prepared. This technique is a narrow version of the pluralist approach, since here too a redundancy is created by establishing two independent centers of thought. Points on which the two groups reached different conclusions require additional investigation or discussion, thus developing better understanding of and familiarity with the issue being examined.
58 R. J. Heuer, “The limits of intelligence analysis,” Orbis, 49, No. 1 (2005).