Affirmative Advice

While this affirmative may seem a little silly at first, I think it could become one of the biggest on this topic. Here I discuss its strengths and give an overview of important concepts for the sake of literacy in the literature.

Strengths- any affirmative that can strongly access both hegemony and global warming (or a major war impact and an environment impact) is a flexible affirmative that lets the team capture almost every impact presented in the round. Another strength is the evidence that floating SMRs are already being deployed by Russia, which makes the impacts of most negative disads about proliferation, terrorism, or environmental threats inevitable and thus non-unique. There is a strong federal government key warrant in the licensing arguments, which helps answer the Free Market CP. There is also a strong US key warrant through the hegemony advantage. Last is the diverse array of add-ons that can be turned into advantages later in the year, for example water wars, grid advantages, and proliferation advantages.

Now I’m going to discuss some key terms of art that are important-

According to the International Atomic Energy Agency, "small" reactors are defined to have power outputs up to 300 MWe and "medium" reactors have outputs between 300 and 700 MWe.

Under International Atomic Energy Agency (IAEA) definitions, a large conventional nuclear reactor typically exceeds an output of 700 MW. In contrast, small nuclear reactors are defined as those producing less than 300 MW (IAEA, 2007).
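For quick reference when reading cards that quote different outputs, here is a minimal sketch, written for this file rather than taken from the IAEA, that classifies a reactor by its electrical output using the cut-offs quoted above. The function name and example values are illustrative assumptions.

```python
def classify_reactor(output_mwe: float) -> str:
    """Classify a reactor by electrical output (MWe) using the IAEA size
    categories quoted above: small up to 300 MWe, medium 300-700 MWe,
    large above 700 MWe. Illustrative helper only, not from the sources."""
    if output_mwe <= 300:
        return "small"
    elif output_mwe <= 700:
        return "medium"
    else:
        return "large"

# Examples using figures that appear later in this file plus one
# hypothetical large commercial unit:
print(classify_reactor(45))    # MH-1A Sturgis barge reactor -> small
print(classify_reactor(110))   # proposed PBMR -> small
print(classify_reactor(1000))  # hypothetical large commercial unit -> large
```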

Purchase-Power Agreement (power purchase agreement, or PPA)- the government purchases the energy output of an SMR that was built and funded by private markets. Private companies foot the bill, and the government purchases the energy the reactor produces (which costs less than fossil fuels, so it is a win-win). In essence, it is an agreement to purchase power.
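As a rough illustration of why this is described as a win-win, the sketch below compares what the government would pay under a PPA against buying the same energy from a fossil generator. The prices, the capacity factor, and the 45 MW plant size (borrowed from the Sturgis figure cited later in this file) are placeholder assumptions of mine, not figures from the evidence.

```python
def annual_cost(price_per_mwh: float, capacity_mw: float,
                capacity_factor: float) -> float:
    """Yearly cost of buying a plant's full output at a fixed price.
    All inputs are illustrative placeholders, not sourced figures."""
    hours_per_year = 8760
    mwh = capacity_mw * capacity_factor * hours_per_year
    return price_per_mwh * mwh

# Hypothetical 45 MW floating SMR sold under a PPA vs. the same energy
# bought from a fossil generator at a higher assumed price.
ppa_cost = annual_cost(price_per_mwh=80.0, capacity_mw=45, capacity_factor=0.9)
fossil_cost = annual_cost(price_per_mwh=110.0, capacity_mw=45, capacity_factor=0.9)
print(f"PPA:     ${ppa_cost:,.0f} per year")
print(f"Fossil:  ${fossil_cost:,.0f} per year")
print(f"Savings: ${fossil_cost - ppa_cost:,.0f} per year")
```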

Modular- they are factory built and can be shipped by train or boat.

1AC Hegemony Advantage

DOE development of floating nuclear power critical to military readiness- prevents energy shocks, cost overruns, and supply chain restrictions

Pfeffer and Macon ‘01 (Robert A. Pfeffer is a physical scientist at the Army Nuclear and Chemical Agency in Springfield, Virginia, working on nuclear weapons effects. He is a graduate of Trinity University and has a master's degree in physics from The Johns Hopkins University. Previous Government experience includes Chief of the Electromagnetic Laboratory at Harry Diamond Laboratories (HDL) in Adelphi, Maryland, and Chief of the HDL Woodbridge Research Facility in Virginia; William A. Macon, Jr., is a project manager at the Nuclear Regulatory Commission. He was formerly the acting Army Reactor Program Manager at the Army Nuclear and Chemical Agency. He is a graduate of the U.S. Military Academy and has a master's degree in nuclear engineering from Rensselaer Polytechnic Institute. His military assignments included Assistant Brigade S4 in the 1st Armored Division, “Nuclear Power: An Option for the Army's Future”, http://www.almc.army.mil/alog/issues/SepOct01/MS684.htm, September/October 2001)

The Army Transformation initiative of Chief of Staff General Eric K. Shinseki represents a significant change in how the Army will be structured and conduct operations. Post-Cold War threats have forced Army leaders to think "outside the box" and develop the next-generation Objective Force , a lighter and more mobile fighting army that relies heavily on tech nology and joint- force support. More changes can be anticipated. As we consider what the Army might look like beyond the Objective Force of 2010, nuclear power could play a major role in another significant change: the shift of military energy use away from carbon -based resources. Nuclear reactor tech nology could be used to generate the ultimate fuel s for both vehicles and people : environmentally neutral hydrogen for equipment fuel and potable water for human consumption. Evolving Energy Sources Over the centuries, energy sources have been moving away from carbon and toward pure hydrogen. Wood (which has about 10 carbon atoms for every hydrogen atom) remained the primary source of energy until the 1800s, when it was replaced with coal (which has 1 or 2 carbon atoms for every hydrogen atom). In less than 100 years, oil (with two hydrogen atoms for every carbon atom) began to replace coal. Within this first decade of the new millennium, natural gas (with four hydrogen atoms for every carbon atom) could very well challenge oil's dominance. In each case, the natural progression has been from solid, carbon-dominated, dirty fuels to more efficient, cleaner-burning hydrogen fuels. Work already is underway to make natural gas fuel cells the next breakthrough in portable power. However, fuel cells are not the final step in the evolution of energy sources, because even natural gas has a finite supply. Fuel cells are merely another step toward the ultimate energy source, seawater, and the ultimate fuel derived from it, pure hydrogen (H2). Environmental Realities There are three geopolitical energy facts that increasingly are affecting the long-term plans of most industrialized nations— Worldwide coal reserves are decreasing. At the present rate of consumption, geological evidence indicates that worldwide low- sulfur coal reserves could be depleted in 20 to 40 years . This rate of depletion could accelerate significantly as China, India, and other Third World countries industrialize and use more coal. Most major oil reserves have been discovered and are controlled by just a few OPEC [Organization of Petroleum-Exporting Countries] nations. Some of these reserves are now at risk ; Bahrain, for example, estimates that its oil reserves will be depleted in 10 to 13 years at the current rate of use. The burning of carbon-based fuels continues to add significant pollutants to the atmosphere. These and other socioeconomic pressures are forcing nations to compete for finite energy sources for both fixed-facility and vehicle use. For the United States, the demand for large amounts of cheap fuel to generate electricity for industry and fluid fuel to run vehicles is putting considerable pressure on energy experts to look for ways to exploit alternate energy sources. The energy crisis in California could be the harbinger of things to come. The threat to affordable commercial power could accelerate development of alternative fuels. It is here that private industry may realize that the military's experience with small nuclear power plants could offer a n affordable path to converting seawater into fuel. 
Military Realities Today, the military faces several post-Cold War realities. First, the threat has changed. Second, regional conflicts are more probable than all-out war. Third, the United States will participate in joint and coalition operations that could take our forces anywhere in the world for undetermined periods of time. Finally, the U.S. military must operate with a smaller budget and force structure. These realities already are forcing substantial changes on the Army. So, as we consider future Army energy sources, we foresee a more mobile Army that must deploy rapidly and sustain itself indefinitely anywhere in the world as part of a coalition force. In addition, this future Army will have to depend on other nations to provide at least some critical logistics support. An example of such a cooperative effort was Operation Desert Storm, where coalition forces (including the United States) relied on some countries to supply potable water and other countries to provide fuel. This arrangement allowed U.S. cargo ships to concentrate on delivering weapon systems and ammunition. But consider the following scenario. The U.S. military is called on to suppress armed conflict in a far-off region. The coalition forces consist of the United States and several Third World countries in the region that have a vested interest in the outcome of the conflict. Our other allies are either unwilling or unable to support the regional action, either financially or militarily. The military effort will be a challenge to support over time, especially with such basic supplies as fuel and water. How can the United States sustain its forces? One way to minimize the logistics challenge is for the Army to produce fuel and potable water in, or close to, the theater. Small nuclear power plants could convert seawater into hydrogen fuel and potable water where needed, with less impact on the environment than caused by the current production, transportation, and use of carbon-based fuels. Seawater: The Ultimate Energy Source Industrial nations are seeing severe energy crises occur more frequently worldwide, and, as world population increases and continues to demand a higher standard of living, carbon-based fuels will be depleted even more rapidly. Alternative energy sources must be developed. Ideally, these sources should be readily available worldwide with minimum processing and be nonpolluting. Current options include wind, solar, hydroelectric, and nuclear energy, but by themselves they cannot satisfy the energy demands of both large, industrial facilities and small, mobile equipment. While each alternative energy source is useful, none provides the complete range of options currently offered by oil. It is here that thinking "outside the box" is needed. As difficult as the problem seems, there is one energy source that is essentially infinite, is readily available worldwide, and produces no carbon byproducts. The source of that energy is seawater, and the method by which seawater is converted to a more direct fuel for use by commercial and military equipment is simple. The same conversion process generates potable water. Seawater Conversion Process Temperatures greater than 1,000 degrees Celsius, as found in the cores of nuclear reactors, combined with a thermochemical water-splitting process, is probably the most efficient means of breaking down water into its component parts: molecular hydrogen and oxygen. 
The minerals and salts in seawater would have to be removed by a desalination process before the water-splitting process and then burned or returned to the sea. Sodium iodide (NaI) and other compounds are being investigated as possible catalysts for high-temperature chemical reactions with water to release the hydrogen, which then can be contained and used as fuel. When burned, hydrogen combines with oxygen and produces only water and energy; no atmospheric pollutants are created using this cycle. Burning coal or oil to generate electricity for production of hydrogen by electrolysis would be wasteful and counterproductive. Nuclear power plants, on the other hand, can provide safe, efficient, and clean power for converting large quantities of seawater into usable hydrogen fuel. For the military, a small nuclear power plant could fit on a barge and be deployed to a remote theater, where it could produce both hydrogen fuel and potable water for use by U.S. and coalition forces in time of conflict. In peacetime, these same portable plants could be deployed for humanitarian or disaster relief operations to generate electricity and to produce hydrogen fuel and potable water as necessary. Such dual usage (hydrogen fuel for equipment and potable water for human consumption) could help peacekeepers maintain a fragile peace. These dual roles make nuclear-generated products equally attractive to both industry and the military, and that could foster joint programs to develop modern nuclear power sources for use in the 21st century. So What's Next? The Army must plan for the time when carbon-based fuels are no longer the fuel of choice for military vehicles. In just a few years, oil and natural gas prices have increased by 30 to 50 percent, and, for the first time in years, the United States last year authorized the release of some of its oil reserves for commercial use. As the supply of oil decreases, its value as a resource for the plastics industry also will increase. The decreasing supply and increasing cost of carbon-based fuels eventually will make the hydrogen fuel and nuclear power combination a more attractive alternative. One proposed initiative would be for the Army to enter into a joint program with private industry to develop new engines that would use hydrogen fuel. In fact, private industry already is developing prototype automobiles with fuel cells that run on liquefied or compressed hydrogen or methane fuel. BMW has unveiled their hydrogen-powered 750hL sedan at the world's first robotically operated public hydrogen fueling station, located at the Munich, Germany, airport. This prototype vehicle does not have fuel cells; instead, it has a bivalent 5.4-liter, 12-cylinder engine and a 140-liter hydrogen tank and is capable of speeds up to 140 miles per hour and a range of up to 217.5 miles. Another proposed initiative would exploit previous Army experience in developing and using small, portable nuclear power plants for the future production of hydrogen and creation of a hydrogen fuel infrastructure. Based on recent advances in small nuclear power plant technology, it would be prudent to consider developing a prototype plant for possible military applications. The MH-1A Sturgis floating nuclear power plant, a 45-MW pressurized water reactor, was the last nuclear power plant built and operated by the Army. The MH-1A Sturgis floating nuclear power plant, a 45-MW pressurized water reactor, was the last nuclear power plant built and operated by the Army. 
The Army Nuclear Power Program The military considered the possibility of using nuclear power plants to generate alternate fuels almost 50 years ago and actively supported nuclear energy as a means of reducing logistics requirements for coal, oil, and gasoline. However, political, technical, and military considerations forced the closure of the program before a prototype could be built. The Army Corps of Engineers ran a Nuclear Power Program from 1952 until 1979, primarily to supply electric power in remote areas. Stationary nuclear reactors built at Fort Belvoir, Virginia, and Fort Greeley, Alaska, were operated successfully from the late 1950s to the early 1970s. Portable nuclear reactors also were operated at Sundance, Wyoming; Camp Century, Greenland; and McMurdo Sound in Antarctica. These small nuclear power plants provided electricity for remote military facilities and could be operated efficiently for long periods without refueling. The Army also considered using nuclear power plants overseas to provide uninterrupted power and defense support in the event that U.S. installations were cut off from their normal logistics supply lines. In November 1963, an Army study submitted to the Department of Defense (DOD) proposed employing a military compact reactor (MCR) as the power source for a nuclear-powered energy depot, which was being considered as a means of producing synthetic fuels in a combat zone for use in military vehicles. MCR studies, which had begun in 1955, grew out of the Transportation Corps' interest in using nuclear energy to power heavy, overland cargo haulers in remote areas. These studies investigated various reactor and vehicle concepts, including a small liquid-metal-cooled reactor, but ultimately the concept proved impractical. The energy depot, however, was an attempt to solve the logistics problem of supplying fuel to military vehicles on the battlefield. While nuclear power could not supply energy directly to individual vehicles, the MCR could provide power to manufacture, under field conditions, a synthetic fuel as a substitute for conventional carbon-based fuels. The nuclear power plant would be combined with a fuel production system to turn readily available elements such as hydrogen or nitrogen into fuel, which then could be used as a substitute for gasoline or diesel fuel in cars, trucks, and other vehicles. Of the fuels that could be produced from air and water, hydrogen and ammonia offer the best possibilities as substitutes for petroleum. By electrolysis or high- temperature heat, water can be broken down into hydrogen and oxygen and the hydrogen then used in engines or fuel cells. Alternatively, nitrogen can be produced through the liquefaction and fractional distillation of air and then combined with hydrogen to form ammonia as a fuel for internal-combustion engines. Consideration also was given to using nuclear reactors to generate electricity to charge batteries for electric-powered vehicles—a development contingent on the development of suitable battery technology. By 1966, the practicality of the energy depot remained in doubt because of questions about the cost-effectiveness of its current and projected technology. The Corps of Engineers concluded that, although feasible, the energy depot would require equipment that probably would not be available during the next decade. As a result, further development of the MCR and the energy depot was suspended until they became economically attractive and technologically possible. 
Other efforts to develop a nuclear power plant small enough for full mobility had been ongoing since 1956, including a gas-cooled reactor combined with a closed- cycle gas-turbine generator that would be transportable on semitrailers, railroad flatcars, or barges. The Atomic Energy Commission (AEC) supported these developments because they would contribute to the technology of both military and small commercial power plants. The AEC ultimately concluded that the probability of achieving the objectives of the Army Nuclear Power Program in a timely manner and at a reasonable cost was not high enough to justify continued funding of its portion of projects to develop small, stationary, and mobile reactors. Cutbacks in military funding for long-range research and development because of the Vietnam War led the AEC to phase out its support of the program in 1966. The costs of developing and producing compact nuclear power plants were simply so high that they could be justified only if the reactor had a unique capability and filled a clearly defined objective backed by DOD. After that, the Army's participation in nuclear power plant research and development efforts steadily declined and eventually stopped altogether. Nuclear Technology Today The idea of using nuclear power to produce synthetic fuels, originally proposed in 1963, remains feasible today and is gaining significant attention because of recent advances in fuel cell technology, hydrogen liquefaction, and storage. At the same time, nuclear power has become a significant part of the energy supply in more than 20 countries—providing energy security, reducing air pollution, and cutting greenhouse gas emissions. The performance of the world's nuclear power plants has improved steadily and is at an all-time high. Assuming that nuclear power experiences further technological development and increased public acceptance as a safe and efficient energy source, its use will continue to grow. Nuclear power possibly could provide district heating, industrial process heating, desalination of seawater, and marine transportation. Demand for cost-effective chemical fuels such as hydrogen and methanol is expected to grow rapidly. Fuel cell technology, which produces electricity from low-temperature oxidation of hydrogen and yields water as a byproduct, is receiving increasing attention. Cheap and abundant hydrogen eventually will replace carbon-based fuels in the transportation sector and eliminate oil's grip on our society. But hydrogen must be produced, since terrestrial supplies are extremely limited. Using nuclear power to produce hydrogen offers the potential for a limitless chemical fuel supply with near-zero greenhouse gas emissions. As the commercial transportation sector increasingly moves toward hydrogen fuel cells and other advanced engine concepts to replace the gasoline internal combustion engine, DOD eventually will adopt this technology for its tactical vehicles. The demand for desalination of seawater also is likely to grow as inadequate freshwater supplies become an urgent global concern. Potable water in the 21st century will be what oil was in the 20th century—a limited natural resource subject to intense international competition. In many areas of the world, rain is not always dependable and ground water supplies are limited, exhausted, or contaminated. Such areas are likely to experience conflict among water-needy peoples, possibly prompting the deployment of U.S. 
ground forces for humanitarian relief, peacekeeping, or armed intervention. A mobile desalination plant using waste heat from a nuclear reactor could help prevent conflicts or provide emergency supplies of freshwater to indigenous populations, and to U.S. deployed forces if necessary. Promising Technology for Tomorrow Compact reactor concepts based on high-temperature, gas-cooled reactors are attracting attention worldwide and could someday fulfill the role once envisioned for the energy depot. One proposed design is the pebble bed modular reactor (PBMR) being developed by Eskom in South Africa. Westinghouse, BNFL Instruments Ltd., and Exelon Corporation currently are supporting this project to develop commercial applications. A similar design is the remote site-modular helium reactor (RS-MHR) being developed by General Atomics. If proven feasible, this tech nology could be used to replace retiring power plants, expand the Navy's nuclear fleet, and provide mobile electric power for military or disaster relief operations. Ideally, modular nuclear power plants could be operated by a small staff of technicians and monitored by a central home office through a satellite uplink. The technology of both the PBMR and the RS-MHR features small, modular, helium-cooled reactors powered by ceramic-coated fuel particles that are inherently safe and cannot melt under any scenario. This results in simpler plant design and lower capital costs than existing light water reactors. The PBMR, coupled with a direct-cycle gas turbine generator, would have a thermal efficiency of about 42 to 45 percent and would produce about 110 megawatts of electricity (MWe). The smaller RS-MHR would produce about 10 to 25 MWe, which is sufficient for powering remote communities and military bases. Multiple modules can be installed on existing sites and refueling can be performed on line, since the fuel pebbles recycle through the reactor continuously until they are expended. Both designs also feature coolant exit temperatures high enough to support the thermochemical water- splitting cycles needed to produce hydrogen. For military applications, RS-MHR equipment could be transported inland by truck or railroad, or single modules could be built on barges and deployed as needed to coastal regions . The Army's nuclear reactor on the barge Sturgis , which provided electric power to the Panama Canal from 1968 to 1976, demonstrated the feasibility of this concept . In fact, the military previously used several power barges (oil-fired, 30-MWe power plants) during World War II and in Korea and Okinawa as emergency sources of electric power. Research teams around the world also are examining other reactor concepts based on liquid-metal-cooled reactor systems with conventional sodium or lead-alloy coolants and advanced water-cooled systems. The Department of Energy (DOE) is supporting research and development of innovative concepts that are based on ultra-long-life reactors with cartridge cores. These reactors would not require refueling, and they could be deployed in the field, removed at the end of their service life, and replaced by a new system. The proposed international reactor innovative and secure (IRIS) design, funded by DOE's Nuclear Energy Research Initiative, would have a straight burn core lasting 8 years and may be available by 2010. 
Based on increasing costs of fossil fuels, a growing consensus that greenhouse gas emissions must be reduced , and a growing demand for energy, there is little doubt that we will continue to see significant advances in nuclear energy research and development. Nuclear power is expected to grow in the 21st century, with potential benefits applicable to the military. Small, modular nuclear power reactors in mobile or portable configurations, coupled with hydrogen production and desalination systems, could be used to produce fuel and potable water for combat forces deployed in remote areas and reduce our logistics requirements. Assuming the inevitability of hydrogen fuel replacing fossil fuels, a clearly defined objective that was missing in 1966 now exists. The partnership between DOD and the former AEC to develop Army nuclear reactors contributed to the technology of both military and small commercial power plants. This historical relationship should be renewed based on recent technological advances and projected logistics requirements. DOD logistics planners should reconsider military applications of nuclear power and support ongoing DOE research and development initiatives to develop advanced reactors such as RS-MHR, IRIS, and others. For the Army to fight and win on tomorrow's distant battlefields , nuclear power will have to play a significant role. Would this necessarily lead to a rebirth of the old Army Nuclear Power Program, with soldiers trained as reactor operators and reactor facilities managed by the Corps of Engineers? Probably not. A more likely scenario would be a small fleet of nuclear power barges or other portable power plant configurations developed by DOE, operated and maintained by Government technicians or civilian contractors, and deployed as necessary to support the Federal Emergency Management Agency, the Department of State, and DOD. Construction, licensing, refueling, and decommissioning issues would be managed best under DOE stewardship or Nuclear Regulatory Commission oversight. As an end user of these future nuclear reactors, however, the Army should understand their proposed capabilities and limitations and provide planners with appropriate military requirements for their possible deployment to a combat zone.
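To help readers sanity-check the reactor figures in the card above, here is a rough back-of-envelope sketch of my own, not from Pfeffer and Macon, converting the quoted PBMR numbers into implied thermal power and a notional hydrogen output. The 39.4 kWh/kg figure is hydrogen's higher heating value; the 45 percent heat-to-hydrogen splitting efficiency is a placeholder assumption, not a sourced number.

```python
# Back-of-envelope arithmetic based on figures quoted in the card above.
# Assumptions (not from the source): hydrogen HHV ~39.4 kWh/kg, and a
# placeholder 45% heat-to-hydrogen efficiency for thermochemical splitting.

ELECTRIC_OUTPUT_MWE = 110      # PBMR electrical output quoted in the card
THERMAL_EFFICIENCY = 0.42      # low end of the 42-45% range quoted
H2_HHV_KWH_PER_KG = 39.4       # higher heating value of hydrogen
SPLITTING_EFFICIENCY = 0.45    # assumed heat-to-hydrogen efficiency

# Thermal power implied by the quoted electrical output and efficiency.
thermal_mw = ELECTRIC_OUTPUT_MWE / THERMAL_EFFICIENCY
print(f"Implied PBMR thermal power: ~{thermal_mw:.0f} MWt")

# If that heat were instead routed to water splitting, a notional
# hydrogen production rate (illustrative only):
h2_kg_per_hour = thermal_mw * 1000 * SPLITTING_EFFICIENCY / H2_HHV_KWH_PER_KG
print(f"Notional hydrogen output: ~{h2_kg_per_hour:.0f} kg/hour")
```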

Specifically, two internal links are key.

First is grid collapse

Loudermilk ‘11 (Micah J. Loudermilk is a Research Associate for the Energy & Environmental Security Policy program with the Institute for National Strategic Studies at National Defense University, “Small Nuclear Reactors: Enabling Energy Security for Warfighters”, March 27, 2011)

Last month, the Institute for National Strategic Studies at National Defense University released a report entitled Small Nuclear Reactors for Military Installations: Capabilities, Costs, and Technological Implications. Authored by Dr. Richard Andres of the National War College and Hanna Breetz from Harvard University, the paper analyzes the potential for the Department of Defense to incorporate small reactor technology on its domestic military bases and in forward operating locations. According to Andres and Breetz, the reactors have the ability to solve two critical vulnerabilities in the military's mission: the dependence of domestic bases on the civilian electrical grid and the challenge of supplying ample fuel to troops in the field. Though considerable obstacles would accompany such a move -- which the authors openly admit -- the benefits are significant enough to make the idea merit serious consideration. At its heart, a discussion about military uses of small nuclear reactors is really a conversation about securing the nation's warfighting capabilities. Although the point that energy security IS national security has become almost redundant -- quoted endlessly in government reports, think tank papers, and the like -- it is repeated for good reason. Especially on the domestic front, the need for energy security on military bases is often overlooked. There is no hostile territory in the United States, no need for fuel convoys to constantly supply bases with fuel, and no enemy combatants. However, while bases and energy supplies are not directly vulnerable, the civilian electrical grid on which they depend for 99% of their energy use is -- and that makes domestic installations highly insecure. The U.S. grid, though a technological marvel, is extremely old, brittle , and susceptible to a wide variety of problems that can result in power outages -- the 2003 blackout throughout the Northeast United States is a prime example of this. In the past, these issues were largely limited to accidents including natural disasters or malfunctions, however today, intentional threats such as cyber attacks represent a very real and growing threat to the grid. Advances in U.S. military technology have further increased the risk that a grid blackout poses to the nation's military assets. As pointed out by the Defense Science Board, critical missions including national strategic awareness and national command authorities depend on the national transmission grid. Additionally, capabilities vital to troops in the field -- including drones and satellite intelligence/reconnaissance -- are lodged at bases within the United States and their loss due to a blackout would impair the ability of troops to operate in forward operating areas. Recognition of these facts led the Defense Science Board to recommend "islanding" U.S. military installations to mitigate the electrical grid's vulnerabilities. Although DOD has undertaken a wide array of energy efficiency programs and sought to construct renewable energy facilities on bases, these endeavors will fall far short of the desired goals and still leave bases unable to function in the event of long-term outages . As the NDU report argues though, small nuclear reactors have the potential to alleviate domestic base grid vulnerabilities . 
With a capacity of anywhere between 25 and 300 megawatts, small reactors possess sufficient generation capabilities to power any military installation, and most likely some critical services in the areas surrounding bases, should a blackout occur. Moreover, making bases resilient to civilian power outages would reduce the incentive for an opponent to disrupt the grid in the event of a conflict as military capabilities would be unaffected. Military bases are also secure locations, reducing the associated fears that would surely arise from the distribution of reactors across the country.

Second is oil dependency

Andres and Breetz ‘11 (Richard B. Andres is professor of National Security Strategy at the National War College and a Senior Fellow and Energy and Environmental Security and Policy chair in the Center for Strategic Research, Institute for National Strategic Studies, at the National Defense University; Hanna L. Breetz is a doctoral candidate in the Department of Political Science at the Massachusetts Institute of Technology, “Small Nuclear Reactors for Military Installations: Capabilities, Costs, and Technological Implications”, February 16, 2011)

Operational Vulnerability. Operational energy use represents a second serious vulnerability for the U.S. military. In recent years, the military has become significantly more effective by making greater use of technology in the field. The price of this improvement has been a vast increase in energy use. Over the last 10 years, for instance, the Marine Corps has more than tripled its operational use of energy. Energy and water now make up 70 percent of the logistics burden for troops operating in forward locations in the wars in Afghanistan and Iraq. This burden represents a severe vulnerability and is costing lives. In 2006, troop losses from logistics convoys became so serious that Marine Corps Major General Rich- ard Zilmer sent the Pentagon a “Priority 1” request for renewable energy backup.11 This unprecedented request put fuel convoy issues on the national security agenda, triggering several high-level studies and leading to the establishment of the Power Surety Task Force, which fast-tracked energy innovations such as mobile power stations and super-insulating spray foam. Currently, the Marine Corps is considering a goal of producing all non- vehicle energy used at forward bases organically and substantially increasing the fuel efficiency of vehicles used in forward areas. Nevertheless, attempts to solve the current energy use problem with efficiency measures and renewable sources are unlikely to fully address this vulnerability. Wind, solar, and hydro generation along with tailored cuts of energy use in the field can reduce the number of convoys needed to supply troops, but these measures will quickly reach limits and have their own challenges, such as visibility, open exposure, and intermittency. Deploying vehicles with greater fuel efficiency will further reduce convoy vulnerability but will not solve the problem. A strong consensus has been building within planning circles that small reactors have the potential to significantly reduce liquid fuel use and, consequently, the need for convoys to supply power at forward locations. Just over 30 percent of operational fuel used in Afghanistan today goes to generating electricity. Small reactors could easily generate all electricity needed to run large forward operating bases. This innovation would, for in- stance, allow the Marine Corps to meet its goal of self- sufficient bases. Mobile reactors also have the potential to make the Corps significantly lighter and more mobile by reducing its logistics tail . Another way that small reactors could potentially be used in the field is to power hydrogen electrolysis units to generate hydrogen for vehicles.12 At forward locations, ground vehicles currently use around 22 percent imported fuel. Many ground transport vehicles can be converted to run on hydrogen, considerably reducing the need for fuel convoys. If the wars in Iraq and Afghanistan are indicative of future operations, and fuel convoys remain a target for enemy action, using small reactors at forward locations has the potential to save hundreds or thousands of U.S. lives.
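As a quick illustration of the logistics math implied by this card, the sketch below combines the two percentages Andres and Breetz cite (about 30 percent of operational fuel used for electricity and about 22 percent used by ground vehicles that could convert to hydrogen) into a rough upper bound on fuel displaced. The assumption that convoy volume scales linearly with fuel demand, and that every convertible vehicle converts, is mine, not the authors'.

```python
# Rough upper-bound estimate of liquid fuel displaced at forward bases if
# small reactors supplied base electricity and hydrogen for converted
# vehicles. Percentages come from the Andres and Breetz card above; the
# linear convoys-scale-with-fuel assumption is mine.

FUEL_FOR_ELECTRICITY = 0.30      # share of operational fuel used to make power
FUEL_FOR_GROUND_VEHICLES = 0.22  # share used by convertible ground vehicles
VEHICLE_CONVERSION_RATE = 1.0    # assume every convertible vehicle converts

displaced = FUEL_FOR_ELECTRICITY + FUEL_FOR_GROUND_VEHICLES * VEHICLE_CONVERSION_RATE
print(f"Upper bound on liquid fuel displaced: {displaced:.0%}")
print(f"Remaining fuel (and, roughly, convoy) demand: {1 - displaced:.0%}")
```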

Hegemony good- multiple scenarios for global war

Brooks, Ikenberry, and Wohlforth ’13 (Stephen Brooks is Associate Professor of Government at Dartmouth College; John Ikenberry is the Albert G. Milbank Professor of Politics and International Affairs at Princeton University in the Department of Politics and the Woodrow Wilson School of Public and International Affairs; William C. Wohlforth is the Daniel Webster Professor in the Department of Government at Dartmouth College, “Don’t Come Home America: The Case Against Retrenchment,” International Security, Vol. 37, No. 3 (Winter 2012/13), pp. 7–51)

A core premise of deep engagement is that it prevents the emergence of a far more dangerous global security environment. For one thing, as noted above, the United States’ overseas presence gives it the leverage to restrain partners from taking provocative action. Perhaps more important, its core alliance commitments also deter states with aspirations to regional hegemony from contemplating expansion and make its partners more secure, reducing their incentive to adopt solutions to their security problems that threaten others and thus stoke security dilemmas. The contention that engaged U.S. power dampens the baleful effects of anarchy is consistent with influential variants of realist theory. Indeed, arguably the scariest portrayal of the war-prone world that would emerge absent the “American Pacifier” is provided in the works of John Mearsheimer, who forecasts dangerous multipolar regions replete with security competition, arms races , nuclear proliferation and associated preventive war temptations, regional rivalries, and even runs at regional hegemony and full-scale great power war . 72 How do retrenchment advocates, the bulk of whom are realists, discount this benefit? Their arguments are complicated, but two capture most of the variation: (1) U.S. security guarantees are not necessary to prevent dangerous rivalries and conflict in Eurasia; or (2) prevention of rivalry and conflict in Eurasia is not a U.S. interest. Each response is connected to a different theory or set of theories, which makes sense given that the whole debate hinges on a complex future counterfactual (what would happen to Eurasia’s security setting if the United States truly disengaged?). Although a certain answer is impossible, each of these responses is nonetheless a weaker argument for retrenchment than advocates acknowledge. The first response flows from defensive realism as well as other international relations theories that discount the conflict-generating potential of anarchy under contemporary conditions. 73 Defensive realists maintain that the high expected costs of territorial conquest, defense dominance, and an array of policies and practices that can be used credibly to signal benign intent, mean that Eurasia’s major states could manage regional multipolarity peacefully without the American pacifier. Retrenchment would be a bet on this scholarship, particularly in regions where the kinds of stabilizers that nonrealist theories point to—such as democratic governance or dense institutional linkages—are either absent or weakly present. There are three other major bodies of scholarship, however, that might give decisionmakers pause before making this bet. First is regional expertise. Needless to say, there is no consensus on the net security effects of U.S. withdrawal. Regarding each region, there are optimists and pessimists. Few experts expect a return of intense great power competition in a post-American Europe, but many doubt European governments will pay the political costs of increased EU defense cooperation and the budgetary costs of increasing military outlays. 74 The result might be a Europe that is incapable of securing itself from various threats that could be destabilizing within the region and beyond (e.g., a regional conflict akin to the 1990s Balkan wars), lacks capacity for global security missions in which U.S. leaders might want European participation, and is vulnerable to the influence of outside rising powers. 
What about the other parts of Eurasia where the United States has a substantial military presence? Regarding the Middle East, the balance begins to swing toward pessimists concerned that states currently backed by Washington— notably Israel, Egypt, and Saudi Arabia— might take actions upon U.S. retrenchment that would intensify security dilemmas. And concerning East Asia, pessimism regarding the region’s prospects without the American pacifier is pronounced. Arguably the principal concern expressed by area experts is that Japan and South Korea are likely to obtain a nuclear capacity and increase their military commitments, which could stoke a destabilizing reaction from China. It is notable that during the Cold War, both South Korea and Taiwan moved to obtain a nuclear weapons capacity and were only constrained from doing so by a still-engaged United States. 75 The second body of scholarship casting doubt on the bet on defensive realism’s sanguine portrayal is all of the research that undermines its conception of state preferences. Defensive realism’s optimism about what would happen if the United States retrenched is very much dependent on its particular—and highly restrictive—assumption about state preferences; once we relax this assumption, then much of its basis for optimism vanishes. Specifically, the prediction of post-American tranquility throughout Eurasia rests on the assumption that security is the only relevant state preference, with security defined narrowly in terms of protection from violent external attacks on the homeland. Under that assumption, the security problem is largely solved as soon as offense and defense are clearly distinguishable, and offense is extremely expensive relative to defense. Burgeoning research across the social and other sciences, however, undermines that core assumption: states have preferences not only for security but also for prestige, status, and other aims, and they engage in trade-offs among the various objectives. 76 In addition, they define security not just in terms of territorial protection but in view of many and varied milieu goals. It follows that even states that are relatively secure may nevertheless engage in highly competitive behavior. Empirical studies show that this is indeed sometimes the case. 77 In sum, a bet on a benign postretrenchment Eurasia is a bet that leaders of major countries will never allow these nonsecurity preferences to influence their strategic choices. To the degree that these bodies of scholarly knowledge have predictive leverage, U.S. retrenchment would result in a significant deterioration in the security environment in at least some of the world’s key regions. We have already mentioned the third, even more alarming body of scholarship. Offensive realism predicts that the withdrawal of the American pacifier will yield either a competitive regional multipolarity complete with associated insecurity, arms racing, crisis instability, nuclear proliferation, and the like, or bids for regional hegemony, which may be beyond the capacity of local great powers to contain (and which in any case would generate intensely competitive behavior, possibly including regional great power war). Hence it is unsurprising that retrenchment advocates are prone to focus on the second argument noted above: that avoiding wars and security dilemmas in the world’s core regions is not a U.S. national interest. Few doubt that the United States could survive the return of insecurity and conflict among Eurasian powers, but at what cost? 
Much of the work in this area has focused on the economic externalities of a renewed threat of insecurity and war, which we discuss below. Focusing on the pure security ramifications, there are two main reasons why decisionmakers may be rationally reluctant to run the retrenchment experiment. First, overall higher levels of conflict make the world a more dangerous place. Were Eurasia to return to higher levels of interstate military competition, one would see overall higher levels of military spending and innovation and a higher likelihood of competitive regional proxy wars and arming of client states—all of which would be concerning, in part because it would promote a faster diffusion of military power away from the United States. Greater regional insecurity could well feed proliferation cascades, as states such as Egypt, Japan, South Korea, Taiwan, and Saudi Arabia all might choose to create nuclear forces. 78 It is unlikely that proliferation decisions by any of these actors would be the end of the game: they would likely generate pressure locally for more proliferation. Following Kenneth Waltz, many retrenchment advocates are proliferation optimists, assuming that nuclear deterrence solves the security problem. 79 Usually carried out in dyadic terms, the debate over the stability of proliferation changes as the numbers go up. Proliferation optimism rests on assumptions of rationality and narrow security preferences. In social science, however, such assumptions are inevitably probabilistic. Optimists assume that most states are led by rational leaders, most will overcome organizational problems and resist the temptation to preempt before feared neighbors nuclearize, and most pursue only security and are risk averse. Confidence in such probabilistic assumptions declines if the world were to move from nine to twenty, thirty, or forty nuclear states. In addition, many of the other dangers noted by analysts who are concerned about the destabilizing effects of nuclear proliferation—including the risk of accidents and the prospects that some new nuclear powers will not have truly survivable forces—seem prone to go up as the number of nuclear powers grows. 80 Moreover, the risk of “unforeseen crisis dynamics” that could spin out of control is also higher as the number of nuclear powers increases. Finally, add to these concerns the enhanced danger of nuclear leakage, and a world with overall higher levels of security competition becomes yet more worrisome. The argument that maintaining Eurasian peace is not a U.S. interest faces a second problem. On widely accepted realist assumptions, acknowledging that U.S. engagement preserves peace dramatically narrows the difference between retrenchment and deep engagement. For many supporters of retrenchment, the optimal strategy for a power such as the United States, which has attained regional hegemony and is separated from other great powers by oceans, is offshore balancing: stay over the horizon and “pass the buck” to local powers to do the dangerous work of counterbalancing any local rising power. The United States should commit to onshore balancing only when local balancing is likely to fail and a great power appears to be a credible contender for regional hegemony, as in the cases of Germany, Japan, and the Soviet Union in the midtwentieth century. The problem is that China’s rise puts the possibility of its attaining regional hegemony on the table, at least in the medium to long term. 
As Mearsheimer notes, “The United States will have to play a key role in countering China, because its Asian neighbors are not strong enough to do it by themselves.” 81 Therefore, unless China’s rise stalls, “the United States is likely to act toward China similar to the way it behaved toward the Soviet Union during the Cold War.” 82 It follows that the United States should take no action that would compromise its capacity to move to onshore balancing in the future. It will need to maintain key alliance relationships in Asia as well as the formidably expensive military capacity to intervene there. The implication is to get out of Iraq and Afghanistan, reduce the presence in Europe, and pivot to Asia— just what the United States is doing. 83 In sum, the argument that U.S. security commitments are unnecessary for peace is countered by a lot of scholarship, including highly influential realist scholarship. In addition, the argument that Eurasian peace is unnecessary for U.S. security is weakened by the potential for a large number of nasty security consequences as well as the need to retain a latent onshore balancing capacity that dramatically reduces the savings retrenchment might bring. Moreover, switching between offshore and onshore balancing could well be difficult. Bringing together the thrust of many of the arguments discussed so far underlines the degree to which the case for retrenchment misses the underlying logic of the deep engagement strategy. By supplying reassurance, deterrence, and active management, the United States lowers security competition in the world’s key regions, thereby preventing the emergence of a hothouse atmosphere for growing new military capabilities. Alliance ties dissuade partners from ramping up and also provide leverage to prevent military transfers to potential rivals. On top of all this, the United States’ formidable military machine may deter entry by potential rivals. Current great power military expenditures as a percentage of GDP are at historical lows, and thus far other major powers have shied away from seeking to match top-end U.S. military capabilities. In addition, they have so far been careful to avoid attracting the “focused enmity” of the United States. 84 All of the world’s most modern militaries are U.S. allies (America’s alliance system of more than sixty countries now accounts for some 80 percent of global military spending), and the gap between the U.S. military capability and that of potential rivals is by many measures growing rather than shrinking. 85

Nations aren’t nice- international order not resilient

Kagan ‘12 (Robert, senior fellow in foreign policy at the Brookings Institution, “Why the World Needs America,” February 11th, http://online.wsj.com/article/SB10001424052970203646004577213262856669448.html)

With the outbreak of World War I, the age of settled peace and advancing liberalism—of European civilization approaching its pinnacle— collapsed into an age of hyper-nationalism, despotism and economic calamity. The once-promising spread of democracy and liberalism halted and then reversed course, leaving a handful of outnumbered and besieged democracies living nervously in the shadow of fascist and totalitarian neighbors. The collapse of the British and European orders in the 20th century did not produce a new dark age—though if Nazi Germany and imperial Japan had prevailed, it might have—but the horrific conflict that it produced was, in its own way, just as devastating. Would the end of the present American-dominated order have less dire consequences? A surprising number of American intellectuals, politicians and policy makers greet the prospect with equanimity. There is a general sense that the end of the era of American pre-eminence, if and when it comes, need not mean the end of the present international order, with its widespread freedom, unprecedented global prosperity (even amid the current economic crisis) and absence of war among the great powers. American power may diminish, the political scientist G. John Ikenberry argues, but "the underlying foundations of the liberal international order will survive and thrive." The commentator Fareed Zakaria believes that even as the balance shifts against the U.S., rising powers like China "will continue to live within the framework of the current international system." And there are elements across the political spectrum—Republicans who call for retrenchment, Democrats who put their faith in international law and institutions—who don't imagine that a "post-American world" would look very different from the American world. If all of this sounds too good to be true, it is . The present world order was largely shaped by American power and reflects American interests and preferences. If the balance of power shifts in the direction of other nations, the world order will change to suit their interests and preferences. Nor can we assume that all the great powers in a post-American world would agree on the benefits of preserving the present order, or have the capacity to preserve it, even if they wanted to. Take the issue of democracy. For several decades, the balance of power in the world has favored democratic governments. In a genuinely post-American world, the balance would shift toward the great-power autocracies. Both Beijing and Moscow already protect dictators like Syria's Bashar al-Assad. If they gain greater relative influence in the future, we will see fewer democratic transitions and more autocrats hanging on to power. The balance in a new, multipolar world might be more favorable to democracy if some of the rising democracies—Brazil, India, Turkey, South Africa—picked up the slack from a declining U.S. Yet not all of them have the desire or the capacity to do it. What about the economic order of free markets and free trade? People assume that China and other rising powers that have benefited so much from the present system would have a stake in preserving it. They wouldn't kill the goose that lays the golden eggs. Unfortunately, they might not be able to help themselves. The creation and survival of a liberal economic order has depended, historically, on great powers that are both willing and able to support open trade and free markets, often with naval power. 
If a declining America is unable to maintain its long-standing hegemony on the high seas, would other nations take on the burdens and the expense of sustaining navies to fill in the gaps? Even if they did, would this produce an open global commons —or rising tension? China and India are building bigger navies, but the result so far has been greater competition, not greater security. As Mohan Malik has noted in this newspaper, their "maritime rivalry could spill into the open in a decade or two," when India deploys an aircraft carrier in the Pacific Ocean and China deploys one in the Indian Ocean. The move from American-dominated oceans to collective policing by several great powers could be a recipe for competition and conflict rather than for a liberal economic order. And do the Chinese really value an open economic system? The Chinese economy soon may become the largest in the world, but it will be far from the richest. Its size is a product of the country's enormous population, but in per capita terms, China remains relatively poor. The U.S., Germany and Japan have a per capita GDP of over $40,000. China's is a little over $4,000, putting it at the same level as Angola, Algeria and Belize. Even if optimistic forecasts are correct, China's per capita GDP by 2030 would still only be half that of the U.S., putting it roughly where Slovenia and Greece are today. Although the Chinese have been beneficiaries of an open international economic order, they could end up undermining it simply because, as an autocratic society, their priority is to preserve the state's control of wealth and the power that it brings. They might kill the goose that lays the golden eggs because they can't figure out how to keep both it and themselves alive. Finally, what about the long peace that has held among the great powers for the better part of six decades? Would it survive in a post-American world? Most commentators who welcome this scenario imagine that American predominance would be replaced by some kind of multipolar harmony. But multipolar systems have historically been neither particularly stable nor particularly peaceful. Rough parity among powerful nations is a source of uncertainty that leads to miscalculation. Conflicts erupt as a result of fluctuations in the delicate power equation. War among the great powers was a common, if not constant, occurrence in the long periods of multipolarity from the 16th to the 18th centuries, culminating in the series of enormously destructive Europe-wide wars that followed the French Revolution and ended with Napoleon's defeat in 1815. The 19th century was notable for two stretches of great-power peace of roughly four decades each, punctuated by major conflicts. The Crimean War (1853-1856) was a mini-world war involving well over a million Russian, French, British and Turkish troops, as well as forces from nine other nations; it produced almost a half-million dead combatants and many more wounded. In the Franco-Prussian War (1870-1871), the two nations together fielded close to two million troops, of whom nearly a half-million were killed or wounded. The peace that followed these conflicts was characterized by increasing tension and competition, numerous war scares and massive increases in armaments on both land and sea. Its climax was World War I, the most destructive and deadly conflict that mankind had known up to that point. As the political scientist Robert W. 
Tucker has observed, "Such stability and moderation as the balance brought rested ultimately on the threat or use of force. War remained the essential means for maintaining the balance of power." There is little reason to believe that a return to multipolarity in the 21st century would bring greater peace and stability than it has in the past. The era of American predominance has shown that there is no better recipe for great-power peace than certainty about who holds the upper hand. President Bill Clinton left office believing that the key task for America was to "create the world we would like to live in when we are no longer the world's only superpower," to prepare for "a time when we would have to share the stage." It is an eminently sensible- sounding proposal. But can it be done? For particularly in matters of security, the rules and institutions of international order rarely survive the decline of the nations that erected them. They are like scaffolding around a building: They don't hold the building up; the building holds them up. Many foreign-policy experts see the present international order as the inevitable result of human progress, a combination of advancing science and technology, an increasingly global economy, strengthening international institutions, evolving "norms" of international behavior and the gradual but inevitable triumph of liberal democracy over other forms of government—forces of change that transcend the actions of men and nations. Americans certainly like to believe that our preferred order survives because it is right and just— not only for us but for everyone. We assume that the triumph of democracy is the triumph of a better idea, and the victory of market capitalism is the victory of a better system, and that both are irreversible. That is why Francis Fukuyama's thesis about "the end of history" was so attractive at the end of the Cold War and retains its appeal even now, after it has been discredited by events. The idea of inevitable evolution means that there is no requirement to impose a decent order. It will merely happen. But international order is not an evolution; it is an imposition. It is the domination of one vision over others—in America's case, the domination of free-market and democratic principles, together with an international system that supports them. The present order will last only as long as those who favor it and benefit from it retain the will and capacity to defend it. There was nothing inevitable about the world that was created after World War II. No divine providence or unfolding Hegelian dialectic required the triumph of democracy and capitalism, and there is no guarantee that their success will outlast the powerful nations that have fought for them. Democratic progress and liberal economics have been and can be reversed and undone. The ancient democracies of Greece and the republics of Rome and Venice all fell to more powerful forces or through their own failings. The evolving liberal economic order of Europe collapsed in the 1920s and 1930s. The better idea doesn't have to win just because it is a better idea. It requires great powers to champion it. If and when American power declines, the institutions and norms that American power has supported will decline, too. Or more likely, if history is a guide, they may collapse altogether as we make a transition to another kind of world order, or to disorder. We may discover then that the U.S. 
was essential to keeping the present world order together and that the alternative to American power was not peace and harmony but chaos and catastrophe—which is what the world looked like right before the American order came into being.

Aggressive military engagement is inevitable Dorfman ‘12 (Zach Dorfman, assistant editor of Ethics & International Affairs, the journal of the Carnegie Council, and co-editor of the Montreal Review, an online magazine of books, art, and culture, “What We Talk About When We Talk About Isolationism”, http://dissentmagazine.org/online.php?id=605, May 18, 2012)

The idea that global military dominance and political hegemony is in the U.S. national interest—and the world’s interest—is generally taken for granted domestically. Opposition to it is limited to the libertarian Right and anti- imperialist Left, both groups on the margins of mainstream political discourse. Today, American supremacy is assumed rather than argued for: in an age of tremendous political division, it is a bipartisan first principle of foreign policy, a presupposition. In this area at least, one wishes for a little less agreement. In Promise and Peril: America at the Dawn of a Global Age, Christopher McKnight Nichols provides an erudite account of a period before such a consensus existed, when ideas about America’s role on the world stage were fundamentally contested. As this year’s presidential election approaches, each side will portray the difference between the candidates’ positions on foreign policy as immense. Revisiting Promise and Peril shows us just how narrow the American worldview has become, and how our public discourse has become narrower still. Nichols focuses on the years between 1890 and 1940, during America’s initial ascent as a global power. He gives special attention to the formative debates surrounding the Spanish-American War, U.S. entry into the First World War, and potential U.S. membership in the League of Nations—debates that were constitutive of larger battles over the nature of American society and its fragile political institutions and freedoms. During this period, foreign and domestic policy were often linked as part of a cohesive political vision for the country. Nichols illustrates this through intellectual profiles of some of the period’s most influential figures, including senators Henry Cabot Lodge and William Borah, socialist leader Eugene Debs, philosopher and psychologist William James, journalist Randolph Bourne, and the peace activist Emily Balch. Each of them interpreted isolationism and internationalism in distinct ways, sometimes deploying the concepts more for rhetorical purposes than as cornerstones of a particular worldview. Today, isolationism is often portrayed as intellectually bankrupt, a redoubt for idealists, nationalists, xenophobes, and fools. Yet the term now used as a political epithet has deep roots in American political culture. Isolationist principles can be traced back to George Washington’s farewell address, during which he urged his countrymen to steer clear of “foreign entanglements” while actively seeking nonbinding commercial ties. (Whether economic commitments do in fact entail political commitments is another matter.) Thomas Jefferson echoed this sentiment when he urged for “commerce with all nations, [and] alliance with none.” Even the Monroe Doctrine, in which the United States declared itself the regional hegemon and demanded noninterference from European states in the Western hemisphere, was often viewed as a means of isolating the United States from Europe and its messy alliance system. In Nichols’s telling, however, modern isolationism was born from the debates surrounding the Spanish-American War and the U.S. annexation of the Philippines. Here isolationism began to take on a much more explicitly anti-imperialist bent. Progressive isolationists such as William James found U.S. policy in the Philippines—which it had “liberated” from Spanish rule just to fight a bloody counterinsurgency against Philippine nationalists—anathema to American democratic traditions and ideas about national self-determination. 
As Promise and Peril shows, however, “cosmopolitan isolationists” like James never called for “cultural, economic, or complete political separation from the rest of the world.” Rather, they wanted the United States to engage with other nations peacefully and without pretensions of domination. They saw the United States as a potential force for good in the world, but they also placed great value on neutrality and non-entanglement, and wanted America to focus on creating a more just domestic order. James’s anti- imperialism was directly related to his fear of the effects of “bigness.” He argued forcefully against all concentrations of power, especially those between business, political, and military interests. He knew that such vested interests would grow larger and more difficult to control if America became an overseas empire. Others, such as “isolationist imperialist” Henry Cabot Lodge, the powerful senator from Massachusetts, argued that fighting the Spanish-American War and annexing the Philippines were isolationist actions to their core. First, banishing the Spanish from the Caribbean comported with the Monroe Doctrine; second, adding colonies such as the Philippines would lead to greater economic growth without exposing the United States to the vicissitudes of outside trade. Prior to the Spanish-American War, many feared that the American economy’s rapid growth would lead to a surplus of domestic goods and cause an economic disaster. New markets needed to be opened, and the best way to do so was to dominate a given market—that is, a country—politically. Lodge’s defense of this “large policy” was public and, by today’s standards, quite bald. Other proponents of this policy included Teddy Roosevelt (who also believed that war was good for the national character) and a significant portion of the business class. For Lodge and Roosevelt, “isolationism” meant what is commonly referred to today as “unilateralism”: the ability for the United States to do what it wants, when it wants. Other “isolationists” espoused principles that we would today call internationalist. Randolph Bourne, a precocious journalist working for the New Republic, passionately opposed American entry into the First World War, much to the detriment of his writing career. He argued that hypernationalism would cause lasting damage to the American social fabric. He was especially repulsed by wartime campaigns to Americanize immigrants. Bourne instead envisioned a “transnational America”: a place that, because of its distinct cultural and political traditions and ethnic diversity, could become an example to the rest of the world. Its respect for plurality at home could influence other countries by example, but also by allowing it to mediate international disputes without becoming a party to them. Bourne wanted an America fully engaged with the world, but not embroiled in military conflicts or alliances. This was also the case for William Borah, the progressive Republican senator from Idaho. Borah was an agrarian populist and something of a Jeffersonian: he believed axiomatically in local democracy and rejected many forms of federal encroachment. He was opposed to extensive immigration, but not “anti-immigrant.” Borah thought that America was strengthened by its complex ethnic makeup and that an imbalance tilted toward one group or another would have deleterious effects. But it is his famously isolationist foreign policy views for which Borah is best known. 
As Nichols writes: He was consistent in an anti-imperialist stance against U.S. domination abroad; yet he was ambivalent in cases involving what he saw as involving obvious national interest….He also without fail argued that any open-ended military alliances were to be avoided at all costs, while arguing that to minimize war abroad as well as conflict at home should always be a top priority for American politicians. Borah thus cautiously supported entry into the First World War on national interest grounds, but also led a group of senators known as “the irreconcilables” in their successful effort to prevent U.S. entry into the League of Nations. His paramount concern was the collective security agreement in the organization’s charter: he would not assent to a treaty that stipulated that the United States would be obligated to intervene in wars between distant powers where the country had no serious interest at stake. Borah possessed an alternative vision for a more just and pacific international order. Less than a decade after he helped scuttle American accession to the League, he helped pass the Kellogg-Briand Pact (1928) in a nearly unanimous Senate vote. More than sixty states eventually became party to the pact, which outlawed war between its signatories and required them to settle their disputes through peaceful means. Today, realists sneer at the idealism of Kellogg-Briand, but the Senate was aware of the pact’s limitations and carved out clear exceptions for cases of national defense. Some supporters believed that, if nothing else, the law would help strengthen an emerging international norm against war. (Given what followed, this seems like a sad exercise in wish-fulfillment.) Unlike the League of Nations charter, the treaty faced almost no opposition from the isolationist bloc in the Senate, since it did not require the United States to enter into a collective security agreement or abrogate its sovereignty. This was a kind of internationalism Borah and his irreconcilables could proudly support. The United States today looks very different from the country in which Borah, let alone William James, lived, both domestically (where political and civil freedoms have been extended to women, African Americans, and gays and lesbians) and internationally (with its leading role in many global institutions). But different strains of isolationism persist. Newt Gingrich has argued for a policy of total “energy independence” (in other words, domestic drilling) while fulminating against President Obama for “bowing” to the Saudi king. While recently driving through an agricultural region of rural Colorado, I saw a giant roadside billboard calling for American withdrawal from the UN. Yet in the last decade, the Republican Party, with the partial exception of its Ron Paul/libertarian faction, has veered into such a belligerent unilateralism that its graybeards—one of whom, Senator Richard Lugar of Indiana, just lost a primary to a far-right challenger partly because of his reasonableness on foreign affairs—were barely able to ensure Senate ratification of a key nuclear arms reduction treaty with Russia. Many of these same people desire a unilateral war with Iran. And it isn’t just Republicans. Drone attacks have intensified in Yemen, Pakistan, and elsewhere under the Obama administration. Massive troop deployments continue unabated. We spend over $600 billion dollars a year on our military budget; the next largest is China’s, at “only” around $100 billion. 
Administrations come and go, but the national security state appears here to stay. Independently- SMRs solve competitiveness Fleischmann ’11 (Chuck, Representative from the 3rd District in Tennessee, “Small Modular Reactors Could Help With U.S. Energy Needs”, American Physical Society, Vol. 6, No. 2, http://www.aps.org/publications/capitolhillquarterly/201110/backpage.cfm, October 2011) The timely implementation of small reactors could position the United States on the cutting edge of nuclear technology. As the world moves forward in developing new forms of nuclear power, the United States should set a high standard in safety and regulatory process. Other nations have not been as rigorous in their nuclear oversight, with far-reaching implications. As we consider the disastrous events at the Fukushima Daiichi nuclear facility, it is imperative that power companies and regulatory agencies around the world adequately ensure reactor and plant safety to protect the public. Despite terrible tragedies like the natural disaster in Japan, nuclear power is still one of the safest and cleanest energy resources available. The plan to administer these small reactors would create technologically advanced U.S. jobs and improve our global competitiveness. Our country needs quality, high-paying jobs. Increasing our competitive edge in rapidly advancing industries will put the United States in a strategic position on the forefront of expanding global technologies in the nuclear arena. Nuclear war Khalilzad ‘11 (Zalmay Khalilzad, Counselor at the Center for Strategic and International Studies, served as the United States ambassador to Afghanistan, Iraq, and the United Nations during the presidency of George W. Bush, served as the director of policy planning at the Defense Department during the Presidency of George H.W. Bush, holds a Ph.D. from the University of Chicago, 2011, “The Economy and National Security,” National Review, February 8th, Available Online at http://www.nationalreview.com/articles/print/259024, Accessed 02-08-2011)

Today, economic and fiscal trends pose the most severe long-term threat to the United States’ position as global leader. While the United States suffers from fiscal imbalances and low economic growth, the economies of rival powers are developing rapidly. The continuation of these two trends could lead to a shift from American primacy toward a multi-polar global system, leading in turn to increased geopolitical rivalry and even war among the great powers. The current recession is the result of a deep financial crisis, not a mere fluctuation in the business cycle. Recovery is likely to be protracted. The crisis was preceded by the buildup over two decades of enormous amounts of debt throughout the U.S. economy — ultimately totaling almost 350 percent of GDP — and the development of credit- fueled asset bubbles, particularly in the housing sector. When the bubbles burst, huge amounts of wealth were destroyed, and unemployment rose to over 10 percent. The decline of tax revenues and massive countercyclical spending put the U.S. government on an unsustainable fiscal path. Publicly held national debt rose from 38 to over 60 percent of GDP in three years. Without faster economic growth and actions to reduce deficits, publicly held national debt is projected to reach dangerous proportions. If interest rates were to rise significantly, annual interest payments — which already are larger than the defense budget — would crowd out other spending or require substantial tax increases that would undercut economic growth. Even worse, if unanticipated events trigger what economists call a “sudden stop” in credit markets for U.S. debt, the United States would be unable to roll over its outstanding obligations, precipitating a sovereign-debt crisis that would almost certainly compel a radical retrenchment of the United States internationally. Such scenarios would reshape the international order. It was the economic devastation of Britain and France during World War II, as well as the rise of other powers, that led both countries to relinquish their empires. In the late 1960s, British leaders concluded that they lacked the economic capacity to maintain a presence “east of Suez.” Soviet economic weakness, which crystallized under Gorbachev, contributed to their decisions to withdraw from Afghanistan, abandon Communist regimes in Eastern Europe, and allow the Soviet Union to fragment. If the U.S. debt problem goes critical, the United States would be compelled to retrench , reducing its military spending and shedding international commitments. We face this domestic challenge while other major powers are experiencing rapid economic growth. Even though countries such as China, India, and Brazil have profound political, social, demographic, and economic problems, their economies are growing faster than ours, and this could alter the global distribution of power. These trends could in the long term produce a multi-polar world. If U.S. policymakers fail to act and other powers continue to grow, it is not a question of whether but when a new international order will emerge. The closing of the gap between the United States and its rivals could intensify geopolitical competition among major powers, increase incentives for local powers to play major powers against one another, and undercut our will to preclude or respond to international crises because of the higher risk of escalation. The stakes are high. In modern history, the longest period of peace among the great powers has been the era of U.S. leadership. 
By contrast, multi-polar systems have been unstable, with their competitive dynamics resulting in frequent crises and major wars among the great powers. Failures of multi-polar international systems produced both world wars. American retrenchment could have devastating consequences. Without an American security blanket, regional powers could rearm in an attempt to balance against emerging threats. Under this scenario, there would be a heightened possibility of arms races, miscalculation, or other crises spiraling into all-out conflict. Alternatively, in seeking to accommodate the stronger powers, weaker powers may shift their geopolitical posture away from the United States. Either way, hostile states would be emboldened to make aggressive moves in their regions. Warming Advantage Coal is increasing globally now- destroying the environment Lazenby 12/16 (Henry Lazenby, Contributing Editor: Americas – Online for Mining Weekly, Organisation for Economic Co-operation and Development, “Global coal demand slows, peak demand not yet in sight”, http://www.miningweekly.com/article/global-coal-demand-slows-peak-demand-not-yet-in-sight-2013-12-16, December 16, 2013)

TORONTO (miningweekly.com) – Tougher Chinese policies aimed at reducing the country’s dependency on coal would help restrain global coal demand growth over the next five years, the International Energy Agency (IEA) found in its yearly ‘Medium-Term Coal Market Report’ released in Paris on Monday. Despite the slightly slower pace of growth, coal would meet more of the increase in global primary energy than oil or gas – continuing a trend that has been in place for more than a decade. “Like it or not, coal is here to stay for a long time to come. Coal is abundant and geopolitically secure, and coal-fired plants are easily integrated into existing power systems. With advantages like these, it is easy to see why coal demand continues to grow. “But it is equally important to emphasise that coal in its current form is simply unsustainable,” IEA executive director Maria van der Hoeven said at the launch of the report. The IEA found that coal was the fastest growing fossil fuel in absolute and relative terms in 2012. About 29% of global primary energy consumption was derived from coal and coal strengthened its position as the second-largest primary energy source, behind oil. Global coal consumption grew by 2.3%, from 7.53-billion tonnes in 2011, to an estimated 7.7-billion tonnes in 2012. Despite coal demand increasing by 170-million tonnes, demand growth was the third lowest on record over the last decade. The report found that China was the growth engine of global coal demand. In 2012, China posted the second-lowest demand growth (4.7%) since 2001. Nevertheless, coal consumption increased by 165-million tonnes, to an estimated 3.68-billion tonnes. Measured in energy units, China alone accounted for more than 50% of global coal demand in 2012. Total 2012 Chinese coal consumption was roughly equal to total coal demand of the US since 2009, Japan since 1993 and Germany since 1990. Put differently, China consumed over four times more thermal coal and almost ten times more metallurgical coal (met coal) than the world’s two largest consumers, the US (thermal coal) and Russia (met coal). In 2012, coal demand in the US decreased by 98-million tonnes – the second-strongest decline ever in the country. Due to the mild winter, low gas prices and plant retirements, coal-fired generation decreased by 235 TWh in 2012, while coal demand plummeted to an estimated 822-million tonnes. Coal consumption increased in Organisation for Economic Co-operation and Development (OECD) Europe (+17-million tonnes) and OECD Asia Oceania countries (+12-million tonnes) in 2012. Demand was the highest ever in OECD Asia Oceania (467-million tonnes) and the highest since 2008 in OECD Europe (810-million tonnes). In 2012, global coal supply reached an estimated 7.83-billion tonnes. Compared with 2011, supply increased by 223-million tonnes, an amount greater than the yearly consumption of Japan. More supply came mainly from China (+130-million) and Indonesia (+82-million), whereas production declined strongly (-71-million) in the US. Coal demand would grow at an average rate of 2.3% a year through 2018, compared with the 2012 report’s forecast of 2.6% for the five years through to 2017 and the actual growth rate of 3.4% a year between 2007 and 2012. Chinese policies were already affecting the global coal market. 
While China would account for nearly 60% of new global demand over the next five years, government efforts to encourage energy efficiency and diversify electricity generation would dent that growth, slowing the global increase in demand. NO PEAK DEMAND YET Despite its moderated demand forecast, the report did not expect peak coal in China within the next five years, and the nation’s consumption and production would remain comparable to that of the rest of the world combined. Further, the report noted that China had approved a number of coal conversion projects to produce liquid fuels and synthetic natural gas – developments that could significantly reduce the country’s demand for other fossil fuels. “During the next five years, coal gasification will contribute more to China’s gas supply than shale gas. While there are many uncertainties about this technology, the potential scale of projects in China involving coal to produce synthetic natural gas and synthetic liquids is enormous. “If this were to become reality, it would mark not just an important development in coal markets but would also imply revisions to gas and oil forecasts,” IEA director of energy markets and security Keisuke Sadamori said. For the rest of Asia, coal demand was expected to stay buoyant over the next five years. India and countries in Southeast Asia were increasing consumption, and India would rival China as the top importer in the next five years, the report says.
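If you want to sanity-check the coal figures in the Lazenby evidence before extending them in round, the arithmetic is straightforward. This is a minimal sketch using only the tonnage and percentage numbers quoted in the card itself; the six-year compounding horizon at the end is my assumption, since the card does not state the base year of its forecast.

```python
# Consistency check of the coal-demand figures quoted in the Lazenby card.
# All inputs are taken from the card itself (billions of tonnes, per cent).

demand_2011 = 7.53   # billion tonnes, 2011 global coal consumption (per the card)
demand_2012 = 7.70   # billion tonnes, 2012 estimate (per the card)

increase_mt = (demand_2012 - demand_2011) * 1000        # in million tonnes
growth_rate = (demand_2012 / demand_2011 - 1) * 100     # in per cent

print(f"Increase: ~{increase_mt:.0f} million tonnes")   # card cites ~170-million tonnes
print(f"Growth rate: ~{growth_rate:.1f}%")              # card cites 2.3%

# The card forecasts 2.3% average annual growth "through 2018"; compounding for
# six years from 2012 is an assumption, since the base year is not stated.
projected_2018 = demand_2012 * (1 + 0.023) ** 6
print(f"Implied 2018 demand: ~{projected_2018:.2f} billion tonnes")
```

Running it confirms the card is internally consistent: the 0.17-billion-tonne increase matches the cited 170-million tonnes, and the implied growth rate rounds to 2.3%.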

Warming causes extinction- tipping point Dyer ‘12 (Gwynne Dyer, London-based independent journalist, PhD from King's College London, citing UC Berkeley scientists, "Tick, tock to mass extinction date," The Press, 6-19-12, l/n, accessed 8-15-12)

Meanwhile, a team of respected scientists warn that life on Earth may be on the way to an irreversible "tipping point". Sure. Heard that one before, too. Last month one of the world's two leading scientific journals, Nature, published a paper, "Approaching a state shift in Earth's biosphere," pointing out that more than 40 per cent of the Earth's land is already used for human needs. With the human population set to grow by a further two billion by 2050, that figure could soon exceed 50 per cent. "It really will be a new world, biologically, at that point," said the paper's lead author, Professor Anthony Barnosky of the University of California, Berkeley. But Barnosky doesn't go into the details of what kind of new world it might be. Scientists hardly ever do in public, for fear of being seen as panic-mongers. Besides, it's a relatively new hypothesis, but it's a pretty convincing one, and it should be more widely understood. Here's how bad it could get. The scientific consensus is that we are still on track for 3 degrees C of warming by 2100, but that's just warming caused by human greenhouse-gas emissions. The problem is that +3 degrees is well past the point where the major feedbacks kick in: natural phenomena triggered by our warming, like melting permafrost and the loss of Arctic sea-ice cover, that will add to the heating and that we cannot turn off. The trigger is actually around 2C (3.5 degrees F) higher average global temperature. After that we lose control of the process: ending our own carbon-dioxide emissions would no longer be enough to stop the warming. We may end up trapped on an escalator heading up to +6C (+10.5F), with no way of getting off. And +6C gives you the mass extinction. There have been five mass extinctions in the past 500 million years, when 50 per cent or more of the species then existing on the Earth vanished, but until recently the only people taking any interest in this were paleontologists, not climate scientists. They did wonder what had caused the extinctions, but the best answer they could come up with was "climate change". It wasn't a very good answer. Why would a warmer or colder planet kill off all those species? The warming was caused by massive volcanic eruptions dumping huge quantities of carbon dioxide in the atmosphere for tens of thousands of years. But it was very gradual and the animals and plants had plenty of time to migrate to climatic zones that still suited them. (That's exactly what happened more recently in the Ice Age, as the glaciers repeatedly covered whole continents and then retreated again.) There had to be a more convincing kill mechanism than that. The paleontologists found one when they discovered that a giant asteroid struck the planet 65 million years ago, just at the time when the dinosaurs died out in the most recent of the great extinctions. So they went looking for evidence of huge asteroid strikes at the time of the other extinction events. They found none. What they discovered was that there was indeed major warming at the time of all the other extinctions - and that the warming had radically changed the oceans. The currents that carry oxygen-rich cold water down to the depths shifted so that they were bringing down oxygen-poor warm water instead, and gradually the depths of the oceans became anoxic: the deep waters no longer had any oxygen. When that happens, the sulfur bacteria that normally live in the silt (because oxygen is poison to them) come out of hiding and begin to multiply. 
Eventually they rise all the way to the surface over the whole ocean, killing all the oxygen-breathing life. The ocean also starts emitting enormous amounts of lethal hydrogen sulfide gas that destroy the ozone layer and directly poison land-dwelling species. This has happened many times in the Earth's history.

Emissions cause extinction- ocean acidification Payet et al. ’10 (Janot Mendler de Suarez, Biliana Cicin-Sain, Kateryna Wowk, Rolph Payet, Ove Hoegh-Guldberg, Global Forum on Oceans, Coasts, and Islands, Global Oceans Conference 2010, May 3-7, 2010, Ensuring Survival: Oceans, Climate and Security. Janot Mendler de Suarez is a founding member of the Pardee Center Task Force, Games for a New Climate, serves on the Council of Advisors for the Collaborative Institute on Oceans Climate and Security at the University of Massachusetts-Boston, and chairs the Global Oceans Forum Working Group on Oceans and Climate. Mendler de Suarez was instrumental in the design, testing and development of the GEF International Waters Learning Exchange and Resource Network, or GEF-IW:LEARN. Ove Hoegh-Guldberg (born 26 September 1959, in Sydney, Australia) is the inaugural Director of the Global Change Institute at the University of Queensland, and the holder of a Queensland Smart State Premier fellowship (2008–2013). He is best known for his work on climate change and coral reefs. His PhD topic focused upon the physiology of corals and their zooxanthellae under thermal stress. Hoegh-Guldberg is a professor at the University of Queensland. He is a leading coral biologist whose study focuses on the impact of global warming and climate change on coral reefs, e.g. coral bleaching. As of 5 October 2009, he had published 236 journal articles, 18 book chapters and been cited 3,373 times. Dr. Biliana Cicin-Sain (PhD in political science, UCLA, postdoctoral training, Harvard University) is Director of the Gerard J. Mangone Center for Marine Policy and Professor of Marine Policy at the University of Delaware’s College of Earth, Ocean, and Environment. Rolph Payet FRGS is an international policy expert, researcher and speaker on environment, climate and island issues, and was the first President & Vice-Chancellor of the University of Seychelles. He was educated at the University of East Anglia (BSc), University of Surrey (MBA), University of Ulster (MSc), Imperial College London, and the John F. Kennedy School of Government at Harvard University. He received his PhD from Linnaeus University in Environmental Science, where he undertook multidisciplinary research in sustainable tourism)

The global oceans play a vital role in sustaining life on Earth by generating half of the world’s oxygen, as the largest active carbon sink absorbing a significant portion of anthropogenic carbon dioxide (CO2), regulating climate and temperature, and providing economic resources and environmental services to billions of people around the globe. The oceans of our planet serve as an intricate and generous life-support system for the entire biosphere. Ocean circulation, in constant interaction with the earth’s atmosphere, regulates global climate and temperature – and through multiple feedback loops related to ocean warming, is also a principal driver of climate variability and long-term climate change. Climate change is already affecting the ability of coastal and marine ecosystems to provide food security, sustainable livelihoods, protection from natural hazards, cultural identity, and recreation to coastal populations, especially among the most vulnerable communities in tropical areas. There is now global recognition of the importance of forests and terrestrial ecosystems in addressing climate change. An emerging understanding, through ecosystem-based management, of the complex and intimate relationship between climate change and the oceans offers new hope for mitigating the negative impacts of global warming, and for building ecosystem and community resilience to the climate-related hazards that cannot be averted. Ecosystem-based ocean and coastal management also generates co-benefits ranging from food security and health to livelihoods and new technologies that contribute to progress in equitable and environmentally sustainable development towards a low-carbon future. Recent observations indicate that impacts of our changing global climate on oceans and coasts – especially in the Arctic–now far exceed the findings of the 2007 report of the Intergovernmental Panel on Climate Change (IPCC). Moreover, we know that increasingly ocean acidification (a consequence of rising atmospheric CO2) is impacting on coral reefs, marine invertebrates and as a consequence changing the structure and nature of ocean ecosystems . The oceans offer an important key to averting some of the potentially far-reaching, devastating and long-lasting humanitarian and environmental consequences of climate change. With good governance and ecosystem-based management, the world’s oceans and coastal regions can play a vital role in transitioning to a low-carbon economy through improved food security, sustainable livelihoods, as well as natural protection from threats to human health, hazards and extreme weather events. Out of all the biological carbon captured in the world, over half is captured by marine living organisms, and hence the term “blue carbon.” In a 2009 report produced by three United Nations agencies, leading scientists found that carbon emissions equal to half the annual emissions of the global transport sector are being captured and stored by marine ecosystems such as mangroves, salt marshes and seagrass meadows. A combination of reducing deforestation on land, allied to restoring the coverage and health of these coastal ecosystems could deliver up to 25 percent of the emissions reductions needed to avoid ‘dangerous’ climate change. But the report warns that instead of maintaining and enhancing these natural carbon sinks, humanity is damaging and degrading them at an accelerating rate. 
It estimates that up to seven percent of these ‘blue carbon sinks’ are being lost annually or seven times the rate of loss of 50 years ago (UNEP 2009). “Oceans” and “coasts” must be integrated into the UNFCCC negotiating text in order to appropriately address both the critical role of oceans in the global climate system, and the potential for adaptive management of coastal and marine ecosystems to make significant contributions to both mitigation and adaptation. Ecosystem- based approaches generate multiple co-benefits, from absorbing greenhouse gas emissions to building resilience to the significant and differential impacts that coastal and island communities are facing due to global climate change. While the international community must redouble its efforts to adopt major emissions reduction commitments, at the same time, there is a need to focus on the scientifically supported facts about natural solutions through ecosystem-based approaches that contribute to climate adaptation and mitigation, to human health and well-being, and to food security. This policy brief provides an overview of the latest facts and concerns on the synergy between oceans and climate, highlights climate change impacts on ocean ecosystems and coastal and island communities, and presents key recommendations for a comprehensive framework to better integrate vital ocean and coastal concerns and contributions into climate change policy and action. 1. The Oceans Have a Vital Role in Combating Climate Change The oceans are the blue lungs of the planet – breathing in CO2 and exhaling oxygen. The oceans have also absorbed over 80 percent of the heat added to the climate system (IPCC 2007), and act as the largest active carbon sink on earth. Ocean absorption of CO2 reduces the rate at which it accumulates in the atmosphere, and thus slows the rate of global warming (Denman 2007). Over the last 250 years, oceans have been responsible for absorbing nearly half of the increased CO2 emissions produced by burning fossil fuels (Laffoley 2010) as well as a significant portion of increased greenhouse gas emissions due to landuse change (Sabine et al. 2004). A combination of cyclical processes enables the ocean to absorb more carbon than it emits. Three of the ocean’s key functions drive this absorption: first is the “solubility pump,” whereby CO2 dissolves in sea water in direct proportion to its concentration in the atmosphere – the more CO2 in the atmosphere, the more will dissolve in the ocean; second is water temperature – CO2 dissolves more easily in colder water so greater absorption occurs in polar regions; third is mixing of CO2 to deeper levels by ocean currents. Convergence of carbon-enriched currents at the poles feed into the so called ocean ‘conveyor belt,’ a global current which cycles carbon into ocean depths with a very slow (about 1500 years) turnover back to the surface. The ‘biological pump’ begins with carbon captured through photosynthesis in surface water micro-organisms, which make up 80-90 percent of the biomass in the ocean. These tiny plants and animals feed carbon into the food chain, where it is passed along to larger invertebrates, fish, and mammals. When sea plants and animals die and part of their organic matter sinks to the ocean floor, it is transformed into dissolved forms of carbon. The seabed is the largest reservoir of sequestered carbon on the planet. 
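The "solubility pump" the card describes is just Henry's law: dissolved CO2 scales roughly in direct proportion to its atmospheric partial pressure. Below is a minimal illustrative sketch of that proportionality; the Henry's-law constant and the 280/400 ppm concentrations are assumed textbook values, not figures from the card, and real seawater carbonate chemistry is more involved than this.

```python
# Illustrative Henry's-law sketch of the "solubility pump" described above.
# K_H is an assumed textbook value for CO2 in water at 25 C (~3.4e-2 mol/(L*atm));
# colder polar water dissolves more CO2, which is why the card notes greater
# absorption in polar regions. Real seawater chemistry adds carbonate buffering.

K_H = 3.4e-2  # mol / (L * atm), assumed value

def dissolved_co2(ppm):
    """Equilibrium dissolved CO2 (mol/L) for an atmospheric mixing ratio in ppm."""
    partial_pressure_atm = ppm * 1e-6   # ppm of a 1 atm atmosphere
    return K_H * partial_pressure_atm

pre_industrial = dissolved_co2(280)     # ~pre-industrial CO2, assumed round number
recent = dissolved_co2(400)             # ~recent CO2, assumed round number

print(f"Pre-industrial: {pre_industrial:.2e} mol/L")
print(f"Recent:         {recent:.2e} mol/L")
print(f"Ratio: {recent / pre_industrial:.2f} (dissolved CO2 rises in direct proportion)")
```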
However the efficiency of the ocean’s ability to capture carbon relies on the structure and ‘health’ of the upper layer marine ecosystem (Williams 2009). Increasing oceanic concentrations of CO2 influence the physiology, development and survival of marine organisms, and the basic functioning and critical life support services that ocean ecosystems provide will be different under future acidified ocean conditions (UNEP 2010). Increased atmospheric CO2 has already increased the acidity of the ocean by 30 percent, making the ocean more acidic than it has been in the last 650,000 years, and affecting marine life, such as corals, microscopic plants and animals. Increased ocean acidity is likely to not only affect the ‘biological pump’ and ocean food webs, but is also likely to influence the global carbon cycle leading to an increase in global warming (Williams 2009). Ocean Acidification: Facts, Impacts and Action Ocean acidification is happening now – at a rate and to a level not experienced by marine organisms for about 20 million years (Turley et al. 2006; Blackford and Gilbert 2007, Pelejero et al. 2010). Mass extinctions have been linked to previous ocean acidification events and such events require tens of thousands of years for the ocean to recover. Levels of CO2 produced by humans have decreased the pH (i.e. increased the acidity) of the surface ocean to 0.1 units lower than pre-industrial levels, and are predicted to further decrease surface ocean pH by roughly 0.4 units by 2100 (IPCC 2001). Decreases in calcification and biological function due to ocean acidification are capable of reducing the fitness of commercially valuable sea life by directly damaging their shells or by compromising early development and survival (Kurihara et al. 2007, Kurihara et al. 2009, Gazeau et al. 2007). Many ecosystems such as coral reefs are now well outside the conditions under which they have operated for millions of years (Hoegh-Guldberg et al. 2007, Pelejero et al. 2010). Even if atmospheric CO2 is stabilized at 450 parts per million (ppm), it is estimated that only about eight percent of existing tropical and subtropical coral reefs will be surrounded by waters favorable to shell construction. At 550 ppm, coral reefs may dissolve globally (IAP 2009). Climate change is adversely impacting marine and coastal ecosystems and biodiversity. Further, acidification of the oceans can impact food security both directly and indirectly through impacts on marine ecosystems and food webs, and also threatens the ocean’s ability to continue providing important ecosystem services to billions of people around the world (Worm et al. 2006). The bottom line is that no effective means of reversing ocean acidification currently exists at a scale sufficient to protect marine biodiversity and food webs. There are no short-term solutions to ocean acidification. Substantial perturbations to ocean ecosystems can only be avoided by urgent and rapid reductions in global greenhouse gas emissions and the recognition and integration of this critical issue into the global climate change debate (UNEP 2010). It’s anthropogenic Powell ‘13 (Jim Powell, science author; he has been a college and museum president and was a member of the National Science Board for 12 years, appointed first by President Reagan and then by President George H. W. Bush, “Consensus: 99.84% of Peer-Reviewed Articles Support the Idea of Global Warming,” http://thecontributor.com/why-climate-deniers-have-no-scientific-credibility-one-pie-chart, February 25, 2013)
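A quick note on the numbers in the acidification evidence above: because pH is a base-10 logarithmic scale, the "0.1 unit" and "roughly 30 percent" figures are two statements of nearly the same fact. A minimal worked check follows; only the 0.1 and 0.4 pH-unit figures come from the card, the conversion itself is standard chemistry.

```python
# pH is -log10 of hydrogen-ion concentration, so a drop of d pH units
# multiplies [H+] ("acidity") by 10**d.

def acidity_increase(delta_ph):
    """Fractional increase in [H+] for a given drop in pH."""
    return 10 ** delta_ph - 1

# 0.1 pH-unit drop since pre-industrial times (figure from the card)
print(f"0.1 unit drop: ~{acidity_increase(0.1) * 100:.0f}% more acidic")   # ~26%
# 0.4 pH-unit drop projected by 2100 (figure from the card)
print(f"0.4 unit drop: ~{acidity_increase(0.4) * 100:.0f}% more acidic")   # ~151%, i.e. roughly 2.5x
```

A 0.1-unit drop works out to about a 26 percent rise in hydrogen-ion concentration; the widely cited figure of roughly 30 percent corresponds to a slightly larger drop of about 0.11 units, so the card's numbers hang together.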

Polls show that many members of the public believe scientists substantially disagree about human-caused global warming. The gold standard of science is the peer-reviewed literature. If there is disagreement among scientists, based not on opinion but on hard evidence, it will be found in the peer-reviewed literature. I searched the Web of Science for peer-reviewed scientific articles published between January 1, 1991 and November 9, 2012 that have the keyword phrases "global warming" or "global climate change." The search produced 13,950 articles. See my methodology. I read whatever combination of titles, abstracts, and entire articles necessary to identify articles that "reject" human-caused global warming. To be classified as rejecting, an article had to clearly and explicitly state that the theory of global warming is false or, as happened in a few cases, that some other process better explains the observed warming. Articles that merely claimed to have found some discrepancy, some minor flaw, some reason for doubt, I did not classify as rejecting global warming. Articles about methods, paleoclimatology, mitigation, adaptation, and effects at least implicitly accept human-caused global warming and were usually obvious from the title alone. John Cook and Dana Nuccitelli also reviewed and assigned some of these articles; Cook provided invaluable technical expertise. This work follows that of Oreskes (Science, 2005) who searched for articles published between 1993 and 2003 with the keyword phrase “global climate change.” She found 928, read the abstracts of each and classified them. None rejected human-caused global warming. Using her criteria and time-span, I get the same result. Deniers attacked Oreskes and her findings, but they have held up. Some articles on global warming may use other keywords, for example, “climate change” without the "global" prefix. But there is no reason to think that the proportion rejecting global warming would be any higher. By my definition, out of 13,950 peer-reviewed articles published on global warming since 1991, only 23, or 0.16 percent, clearly reject global warming or endorse a cause other than CO2 emissions for observed warming. The list of articles that reject global warming is here. The 23 articles have been cited a total of 112 times over the nearly 21-year period, for an average of close to 5 citations each. That compares to an average of about 19 citations for articles answering to "global warming," for example. Four of the rejecting articles have never been cited; four have citations in the double-digits. The most-cited has 17. Of one thing we can be certain: had any of these articles presented the magic bullet that falsifies human-caused global warming, that article would be on its way to becoming one of the most-cited in the history of science. The articles have a total of 33,690 individual authors. The top 10 countries represented, in order, are USA, England, China, Germany, Japan, Canada, Australia, France, Spain, and Netherlands. (The chart shows results through November 9, 2012.) Global warming deniers often claim that bias prevents them from publishing in peer-reviewed journals. But 23 articles in 18 different journals, collectively making several different arguments against global warming, expose that claim as false . Articles rejecting global warming can be published, but those that have been have earned little support or notice, even from other deniers. 
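The headline ratios in the Powell evidence are simple divisions, and it can help to have them ready as an analytic. This minimal sketch reproduces them from the counts quoted in the card; no outside data is assumed.

```python
# Reproducing the headline ratios in the Powell card from its own counts.

total_articles = 13950       # peer-reviewed articles found, 1991 through 11/2012 (per the card)
rejecting_articles = 23      # articles classified as rejecting human-caused warming
rejecting_citations = 112    # total citations of those 23 articles

reject_share = rejecting_articles / total_articles * 100
avg_citations = rejecting_citations / rejecting_articles

print(f"Share rejecting: {reject_share:.2f}%")                   # card reports 0.16%
print(f"Citations per rejecting article: {avg_citations:.1f}")   # card reports "close to 5"
```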
A few deniers have become well known from newspaper interviews, Congressional hearings, conferences of climate change critics, books, lectures, websites and the like. Their names are conspicuously rare among the authors of the rejecting articles. Like those authors, the prominent deniers must have no evidence that falsifies global warming. Anyone can repeat this search and post their findings. Another reviewer would likely have slightly different standards than mine and get a different number of rejecting articles. But no one will be able to reach a different conclusion, for only one conclusion is possible: Within science, global warming denial has virtually no influence. Its influence is instead on a misguided media, politicians all-too-willing to deny science for their own gain, and a gullible public. Scientists do not disagree about human-caused global warming. It is the ruling paradigm of climate science, in the same way that plate tectonics is the ruling paradigm of geology. We know that continents move. We know that the earth is warming and that human emissions of greenhouse gases are the primary cause. These are known facts about which virtually all publishing scientists agree. Floating SMRs solve- DOE engagement key- collapse of nuclear industry means warming inevitable Licata 4/27 (John Licata, Founder & Chief Energy Strategist of Blue Phoenix Inc, Motley Fool, “Can Small Modular Nuclear Reactors Find Their Sea Legs?”, http://www.fool.com/investing/general/2014/04/27/can-small-modular-nuclear-reactors-find-their-sea.aspx, April 27, 2014) Nuclear power plants do bring jobs to rural areas, and in some cases they actually boost local housing prices since these plants create jobs. However, whether or not you believe nuclear power does or does not emit harmful radiation, many people would likely opt to not live right next door to a nuclear power plant facility if they had the choice. Today, they may not even need to consider such a move thanks to a floating plant concept coming out of MIT, which largely builds on the success of the U.S. Army Corps of Engineers' MH-1A floating nuclear reactor, installed on the Sturgis, a vessel that provided power to military and civilians around the Panama Canal. The Sturgis was decommissioned, but only because there was ample power generation on land. So the viability of a floating nuclear plant does make a lot of sense. Presently the only floating nuclear plant is being constructed in Russia (expected to be in service in two years). However, that plant is slated to be moored on a barge in a harbor. That differs from MIT's idea to put a 200 MWe reactor on a floating platform roughly six miles out to sea. The problem with the floating reactor idea or land-based SMR version is most investors are hard-pressed to fork over money needed for a nuclear build-out that could cost billions of dollars and take over a decade to complete. That very problem is today plaguing the land-based mPower SMR program of The Babcock & Wilcox Co. (NYSE: BWC). Also, although the reactors would have a constant cooling source in the ocean water, I'd like to see studies that show that sea life is not disrupted. Then there is always the issue with security and power lines to the mainland which needs to be addressed. At a time when reducing global warming is becoming a hotly debated topic by the IPCC, these SMRs (land or sea based) can help reduce our carbon footprint if legislation would allow them to proceed. 
Instead, the government is taking perfectly good cathedral-sized nuclear power plants offline , something they will likely come to regret in coming years from an economic and environmental perspective. Just ask the Germans . SMRs can produce dependable baseload power that is more affordable for isolated communities, and they can be used in remote areas by energy and metals production companies while traditional reactors cannot. So the notion of plopping SMRs several miles offshore so they can withstand tsunami swells is really interesting. If the concept can actually gain momentum that would help Babcock, Westinghouse, and NuScale Power. I would also speculate that tech nology currently being used in the oil and gas drilling sector , possibly even from the robotics industry, could be integrated into offshore light water nuclear designs for mooring, maintenance, and routine operational purposes. In today's modern world, we have a much greater dependence on consumer electronics, we are swapping our dependence of foreign oil with a growing reliance for domestic natural gas, and we face increasing pressures to combat climate change here at home as well as meet our own 2020 carbon goals. With that said, we need to think longer term and create domestic clean energy industries that can foster new jobs, help keep the power on even when blackouts occur and produce much less carbon at both the private and public sector levels. Therefore to me, advancing the SMR industry on land or by sea is a nice way to fight our archaic energy paradigm and move our energy supply into a modern era. Yet without the government's complete commitment to support nuclear power via legislation and a much needed expedited certification process, the idea of a floating SMR plant will be another example of wasted energy innovation that could simply get buried at sea. And United States creates a massive export market for SMR’s – latent nuclear capability ensures speed- significant reduction of emissions Rosner, Goldberg, and Hezir et. al. ‘11 (Robert Rosner, Robert Rosner is an astrophysicist and founding director of the Energy Policy Institute at Chicago. He was the director of Argonne National Laboratory from 2005 to 2009, and Stephen Goldberg, Energy Policy Institute at Chicago, The Harris School of Public Policy Studies, Joseph S. Hezir, Principal, EOP Foundation, Inc., Many people have made generous and valuable contributions to this study. Professor Geoff Rothwell, Stanford University, provided the study team with the core and supplemental analyses and very timely and pragmatic advice. Dr. J’Tia Taylor, Argonne National Laboratory, supported Dr. Rothwell in these analyses. Deserving special mention is Allen Sanderson of the Economics Department at the University of Chicago, who provided insightful comments and suggested improvements to the study. Constructive suggestions have been received from Dr. Pete Lyons, DOE Assistant Secretary of Nuclear Energy; Dr. Pete Miller, former DOE Assistant Secretary of Nuclear Energy; John Kelly, DOE Deputy Assistant Secretary for Nuclear Reactor Technologies; Matt Crozat, DOE Special Assistant to the Assistant Secretary for Nuclear Energy; Vic Reis, DOE Senior Advisor to the Under Secretary for Science; and Craig Welling, DOE Deputy Office Director, Advanced Reactor Concepts Office, as well as Tim Beville and the staff of DOE’s Advanced Reactor Concepts Office. 
The study team also would like to acknowledge the comments and useful suggestions the study team received during the peer review process from the nuclear industry, the utility sector, and the financial sector. Reviewers included the following: Rich Singer, VP Fuels, Emissions, and Transportation, MidAmerican Energy Co.; Jeff Kaman, Energy Manager, John Deere; Dorothy R. Davidson, VP Strategic Programs, AREVA; T. J. Kim, Director—Regulatory Affairs & Licensing, Generation mPower, Babcock & Wilcox; Amir Shahkarami, Senior Vice President, Generation, Exelon Corp.; Michael G. Anness, Small Modular Reactor Product Manager, Research & Technology, Westinghouse Electric Co.; Matthew H. Kelley and Clark Mykoff, Decision Analysis, Research & Technology, Westinghouse Electric Co.; George A. Davis, Manager, New Plant Government Programs, Westinghouse Electric Co.; Christofer Mowry, President, Babcock & Wilcox Nuclear Energy, Inc.; Ellen Lapson, Managing Director, Fitch Ratings; Stephen A. Byrne, Executive Vice President, Generation & Transmission Chief Operating Officer, South Carolina Electric & Gas Company; Paul Longsworth, Vice President, New Ventures, Fluor; Ted Feigenbaum, Project Director, Bechtel Corp.; Kennette Benedict, Executive Director, Bulletin of the Atomic Scientist; Bruce Landrey, CMO, NuScale; Dick Sandvik, NuScale; and Andrea Sterdis, Senior Manager of Strategic Nuclear Expansion, Tennessee Valley Authority. The authors especially would like to acknowledge the discerning comments from Marilyn Kray, Vice-President at Exelon, throughout the course of the study, “Small Modular Reactors – Key to Future Nuclear Power”, http://epic.uchicago.edu/sites/epic.uchicago.edu/files/uploads/SMRWhite_Paper_Dec.14.2011co py.pdf, November 2011)

As stated earlier, SMRs have the potential to achieve significant greenhouse gas emission reductions . They could provide alternative base load power generation to facilitate the retirement of older, smaller, and less efficient coal generation plants that would, otherwise, not be good candidates for retrofitting carbon capture and storage technology. They could be deployed in regions of the U.S. and the world that have less potential for other forms of carbon-free electricity, such as solar or wind energy. There may be technical or market constraints, such as projected electricity demand growth and transmission capacity, which would support SMR deployment but not GW-scale LWRs. From the on-shore manufacturing perspective, a key point is that the manufacturing base needed for SMRs can be developed domestically. Thus, while the large commercial LWR industry is seeking to transplant portions of its supply chain from current foreign sources to the U.S., the SMR industry offers the potential to establish a large domestic manufacturing base building upon already existing U.S. manufacturing infrastructure and capability , including the Naval shipbuilding and underutilized domestic nuclear component and equipment plants. The study team learned that a number of sustainable domestic jobs could be created – that is, the full panoply of design, manufacturing, supplier, and construction activities – if the U.S. can establish itself as a credible and substantial designer and manufacturer of SMRs. While many SMR technologies are being studied around the world, a strong U.S. commercialization program can enable U.S. industry to be first to market SMRs , thereby serving as a fulcrum for export growth as well as a lever in influencing international decisions on deploying both nuclear reactor and nuclear fuel cycle tech nology. A viable U.S.- centric SMR industry would enable the U.S. to recapture technological leadership in commercial nuclear technology, which has been lost to suppliers in France, Japan, Korea, Russia, and, now rapidly emerging, China. Small reactors are key to jumpstarting a global industry –NRC sends a global signal Shellenberger ’12 (Michael, president of the breakthrough institute, Jessica Lovering, policy analyst at the breakthough institute, Ted Nordhaus, chairman of the breakthrough institute. September 7, 2012. [“Out of the Nuclear Closet,” http://www.foreignpolicy.com/articles/2012/09/07/out_of_the_nuclear_closet?page=0,0)

To move the needle on nuclear energy to the point that it might actually be capable of displacing fossil fuels, we'll need new nuclear technologies that are cheaper and smaller. Today, there are a range of nascent, smaller nuclear power plant designs, some of them modifications of the current light-water reactor technologies used on submarines, and others, like thorium fuel and fast breeder reactors, which are based on entirely different nuclear fission technologies. Smaller, modular reactors can be built much faster and cheaper than traditional large-scale nuclear power plants. Next-generation nuclear reactors are designed to be incapable of melting down, produce drastically less radioactive waste, make it very difficult or impossible to produce weapons-grade material, use less water, and require less maintenance. Most of these designs still face substantial technical hurdles before they will be ready for commercial demonstration. That means a great deal of research and innovation will be necessary to make these next generation plants viable and capable of displacing coal and gas. The United States could be a leader on developing these technologies, but unfortunately U.S. nuclear policy remains mostly stuck in the past. Rather than creating new solutions, efforts to restart the U.S. nuclear industry have mostly focused on encouraging utilities to build the next generation of large, light-water reactors with loan guarantees and various other subsidies and regulatory fixes. With a few exceptions, this is largely true elsewhere around the world as well. Nuclear has enjoyed bipartisan support in Congress for more than 60 years, but the enthusiasm is running out. The Obama administration deserves credit for authorizing funding for two small modular reactors, which will be built at the Savannah River site in South Carolina. But a much more sweeping reform of U.S. nuclear energy policy is required. At present, the Nuclear Regulatory Commission has little institutional knowledge of anything other than light-water reactors and virtually no capability to review or regulate alternative designs. This affects nuclear innovation in other countries as well, since the NRC remains, despite its many critics, the global gold standard for thorough regulation of nuclear energy. Most other countries follow the NRC's lead when it comes to establishing new technical and operational standards for the design, construction, and operation of nuclear plants. What's needed now is a new national commitment to the development, testing, demonstration, and early-stage commercialization of a broad range of new nuclear technologies -- from much smaller light-water reactors to next generation ones -- in search of a few designs that can be mass produced and deployed at a significantly lower cost than current designs. This will require both greater public support for nuclear innovation and an entirely different regulatory framework to review and approve new commercial designs. In the meantime, developing countries will continue to build traditional, large nuclear power plants. But time is of the essence. With the lion's share of future carbon emissions coming from those emerging economic powerhouses, the need to develop smaller and cheaper designs that can scale faster is all the more important. A true nuclear renaissance can't happen overnight. And it won't happen so long as large and expensive light-water reactors remain our only option. 
But in the end, there is no credible path to mitigating climate change without a massive global expansion of nuclear energy. If you care about climate change, nothing is more important than developing the nuclear technologies we will need to get that job done.
Other countries model our technology- global demonstration
Traub '12 (James, fellow of the Centre on International Cooperation. He writes Terms of Engagement for Foreign Policy, "Transforming the future lies in our hands," http://gulfnews.com/opinions/columnists/transforming-the-future-lies-in-our-hands-1.1118704, December 14, 2012)

Despite President Barack Obama's vow, in his first post-reelection press conference, to take decisive action on climate change, the global climate talks in Doha dragged to a close with the US, as usual, a target of activists' wrath. The Obama administration has shown no interest in submitting to a binding treaty on carbon emissions and refuses to increase funding to help developing countries reduce their own emissions, even as the US continues to behave as a global scofflaw on climate change. Actually, that is not true — the last part, anyway. According to the International Energy Agency, US emissions have dropped 7.7 per cent since 2006 — "the largest reduction of all countries or regions". Yes, you read that correctly. The US, which has refused to sign the Kyoto Accords establishing binding targets for emissions, has reduced its carbon footprint faster than the greener-than-thou European countries. The reasons for this have something to do with climate change itself (warm winters mean less heating oil), something to do with market forces (the shift from coal to natural gas in power plants), and something to do with policy at the state and regional levels. And in the coming years, as both new gas-mileage standards and new power-plant regulations championed by the Obama administration kick in, policy will drive the numbers further downwards. US emissions are expected to fall 23 per cent between 2002 and 2020. Apparently, Obama's record on climate change is not quite as calamitous as reputation would have it. The West has largely succeeded in bending downwards the curve of carbon emissions. However, the developing world has not. Last year, China's emissions rose 9.3 per cent; India's, 8.7 per cent. China is now the world's No 1 source of carbon emissions, followed by the US, the European Union (EU) and India. The emerging powers have every reason to want to emulate the energy-intensive economic success of the West — even those, like China, who have taken steps to increase energy efficiency, are not prepared to do anything to harm economic growth. The real failure of US policy has been, first, that it is still much too timid; and second, that it has not acted in such a way as to persuade developing nations to take the truly difficult decisions which would put the world on a sustainable path. There is a useful analogy with the nuclear nonproliferation regime. In an earlier generation, the nuclear stockpiles of the US and the Soviet Union posed the greatest threat to global security. Now, the threat comes from the proliferation of weapons to weak or rogue states or to non-state actors. However, the only way that Washington can persuade other governments to join in a tough nonproliferation regime is by taking the lead in reducing its own nuclear stockpile — which the Obama administration has sought to do, albeit with very imperfect success. In other words, where power is more widely distributed, US action matters less in itself, but carries great weight as a demonstration model — or anti-demonstration model. Logic would thus dictate that the US bind itself in a global compact to reduce emissions, as through the Nuclear Nonproliferation Treaty (NPT) it has bound itself to reduce nuclear weapons. However, the Senate would never ratify such a treaty. And even if it did, would China and India similarly bind themselves?
Here the nuclear analogy begins to break down because the NPT mostly requires that states submit to inspections of their nuclear facilities, while a climate change treaty poses what looks very much like a threat to states’ economic growth. Fossil fuels are even closer to home than nukes. Is it any wonder that only EU countries and a few others have signed the Kyoto Accords? A global version of Kyoto is supposed to be readied by 2015, but a growing number of climate change activists — still very much a minority — accept that this may not happen and need not happen. So what can Obama do? It is possible that much tougher action on emissions will help persuade China, India and others that energy efficiency need not hinder economic growth. As Michael Levi, a climate expert at the Council on Foreign Relations points out, the US gets little credit abroad for reducing emissions largely — thanks to “serendipitous” events. Levi argues, as do virtually all policy thinkers and advocates, that the US must increase the cost of fossil fuels, whether through a “carbon tax” or cap-and-trade system, so that both energy efficiency and alternative fuels become more attractive and also to free-up money to be invested in new technologies. This is what Obama’s disappointed supporters thought he would do in the first term and urge him to do now. Obama is probably not going to do that. In his post-election news conference, he insisted that he would find “bipartisan” solutions to climate change and congressional Republicans are only slightly more likely to accept a sweeping change in carbon pricing than they are to ratify a climate-change treaty. The president also said that any reform would have to create jobs and growth, which sounds very much like a signal that he will avoid new taxes or penalties (even though advocates of such plans insist that they would spur economic growth). All these prudent political calculations are fine when you can afford to fail. But we cannot afford to fail. Global temperatures have already increased 0.7 degrees Celsius. Disaster really strikes at a 2 degree Celsius increase, which leads to large-scale drought, wildfires, decreased food production and coastal flooding. However, the current global trajectory of coal, oil and gas consumption means that, according to Fatih Birol, the International Energy Agency’s chief economist, “ the door to a 2 degree Celsius trajectory is about to close.” That is how dire things are. What, then, can Obama do that is equal to the problem? He can invest. Once the fiscal cliff negotiations are behind him, and after he has held his planned conversation with “scientists, engineers and elected officials,” he can tell the American people that they have a once-in-a-lifetime opportunity to transform the future — for themselves and for people everywhere. He can propose — as he hoped to do as part of the stimulus package of 2009 — that the US build a “smart grid” to radically improve the efficiency of electricity distribution. He can argue for large-scale investments in research and development of new sources of energy and energy-efficient construction technologies and lots of other whiz-bang things. This, too, was part of the stimulus spending; it must become bigger and permanent. The reason Obama should do this is, first, because the American people will (or could) rally behind a visionary programme in a way that they never will get behind the dour mechanics of carbon pricing. 
Second, because the way to get to a carbon tax is to use it as a financing mechanism for such a plan. Third, because oil and gas are in America's bloodstream; as Steven Cohen, executive director of the Earth Institute, puts it: "The only thing that's going to drive fossil fuels off the market is cheaper renewable energy." Fourth, the US cannot afford to miss out on the gigantic market for green technology. Finally, there's leverage. China and India may not do something sensible but painful, like adopting carbon pricing, because the US does so, but they will adopt new technologies if the US can prove that they work without harming economic growth. Developing countries have already made major investments in reducing air pollution, halting deforestation and practising sustainable agriculture. They are just too modest. It is here, above all, that the US can serve as a demonstration model — the world's most egregious carbon consumer showing the way to a low-carbon future. Global-warming denial is finally on the way out. Three-quarters of Americans now say they believe in global warming and more than half believe that humans are causing it and want to see a US president take action. President Obama does not have to do the impossible. He must, however, do the possible.
Plan
Plan: The United States federal government should initiate power-purchase agreements for floating Small Modular Reactors.

Plan: The Department of Energy should initiate power-purchase agreements for floating Small Modular Reactors for non-military purposes.
Solvency

Floating nuclear is safe- no risk of accidents- use of pre-existing technology from offshore drilling and nuclear submarines means fast development
Chandler '14 (David L. Chandler, MIT News Office. The paper was co-authored by NSE students Angelo Briccetti, Jake Jurewicz, and Vincent Kindfuller; Michael Corradini of the University of Wisconsin; and Daniel Fadel, Ganesh Srinivasan, Ryan Hannink, and Alan Crowle of Chicago Bridge and Iron, based in Canton, Mass. The MIT Energy Initiative (MITei) is MIT's hub for research, education, campus energy management and outreach programs that cover all areas of energy supply and demand, security, and environmental impact. "Floating Nuclear Plants Could Ride Out Tsunamis", http://theenergycollective.com/energyatmit/369266/floating-nuclear-plants-could-ride-out-tsunamis, April 17, 2014)

When an earthquake and tsunami struck the Fukushima Daiichi nuclear plant complex in 2011, neither the quake nor the inundation caused the ensuing contamination. Rather, it was the aftereffects — specifically, the lack of cooling for the reactor cores, due to a shutdown of all power at the station — that caused most of the harm. A new design for nuclear plants built on floating platforms, modeled after those used for offshore oil drilling, could help avoid such consequences in the future. Such floating plants would be designed to be automatically cooled by the surrounding seawater in a worst-case scenario, which would indefinitely prevent any melting of fuel rods, or escape of radioactive material. [Figure: Floating Nuclear Plants. Cutaway view of the proposed plant shows that the reactor vessel itself is located deep underwater, with its containment vessel surrounded by a compartment flooded with seawater, allowing for passive cooling even in the event of an accident. Illustration courtesy of Jake Jurewicz/MIT-NSE.] The concept is being presented this week at the Small Modular Reactors Symposium, hosted by the American Society of Mechanical Engineers, by MIT professors Jacopo Buongiorno, Michael Golay, and Neil Todreas, along with others from MIT, the University of Wisconsin, and Chicago Bridge and Iron, a major nuclear plant and offshore platform construction company. Such plants, Buongiorno explains, could be built in a shipyard, then towed to their destinations five to seven miles offshore, where they would be moored to the seafloor and connected to land by an underwater electric transmission line. The concept takes advantage of two mature technologies: light-water nuclear reactors and offshore oil and gas drilling platforms. Using established designs minimizes technological risks, says Buongiorno, an associate professor of nuclear science and engineering (NSE) at MIT. Although the concept of a floating nuclear plant is not unique — Russia is in the process of building one now, on a barge moored at the shore — none have been located far enough offshore to be able to ride out a tsunami, Buongiorno says. For this new design, he says, "the biggest selling point is the enhanced safety." A floating platform several miles offshore, moored in about 100 meters of water, would be unaffected by the motions of a tsunami; earthquakes would have no direct effect at all. Meanwhile, the biggest issue that faces most nuclear plants under emergency conditions — overheating and potential meltdown, as happened at Fukushima, Chernobyl, and Three Mile Island — would be virtually impossible at sea, Buongiorno says: "It's very close to the ocean, which is essentially an infinite heat sink, so it's possible to do cooling passively, with no intervention. The reactor containment itself is essentially underwater." Buongiorno lists several other advantages. For one thing, it is increasingly difficult and expensive to find suitable sites for new nuclear plants: They usually need to be next to an ocean, lake, or river to provide cooling water, but shorefront properties are highly desirable. By contrast, sites offshore, but out of sight of land, could be located adjacent to the population centers they would serve. "The ocean is inexpensive real estate," Buongiorno says. In addition, at the end of a plant's lifetime, "decommissioning" could be accomplished by simply towing it away to a central facility, as is done now for the Navy's carrier and submarine reactors. That would rapidly restore the site to pristine conditions.
This design could also help to address practical construction issues that have tended to make new nuclear plants uneconomical : Shipyard construction allows for better standardization, and the all-steel design eliminates the use of concrete , which Buongiorno says is often responsible for construction delays and cost overruns . There are no particular limits to the size of such plants, he says: They could be anywhere from small, 50-megawatt plants to 1,000-megawatt plants matching today’s largest facilities. “ It’s a flexible concept ,” Buongiorno says. Most operations would be similar to those of onshore plants, and the plant would be designed to meet all regulatory security requirements for terrestrial plants. “ Project work has confirmed the feasibility of achieving this goal , including satisfaction of the extra concern of protection against underwater attack ,” says Todreas, the KEPCO Professor of Nuclear Science and Engineering and Mechanical Engineering. Buongiorno sees a market for such plants in Asia , which has a combination of high tsunami risks and a rapidly growing need for new power sources. “It would make a lot of sense for Japan,” he says, as well as places such as Indonesia, Chile, and Africa. This is a “very attractive and promising proposal,” says Toru Obara, a professor at the Research Laboratory for Nuclear Reactors at the Tokyo Institute of Technology who was not involved in this research. “I think this is technically very feasible. ... Of course, further study is needed to realize the concept, but the authors have the answers to each question and the answers are realistic.”

DOE cost sharing solves cost overruns- licensing and technology barriers have already been overcome- action now is key to have certified SMR designs ready when decisions to replace retiring nuclear capacity must be made starting in 2019
Fertel '14 (Marv Fertel, NEI President and CEO, Nuclear Energy Institute, "Why DOE Should Back SMR Development", http://neinuclearnotes.blogspot.com/2014/04/why-doe-should-back-smr-development.html, April 8, 2014)

Nuclear energy is an essential source of base-load electricity and 64 percent of the United States' greenhouse gas-free electricity production. Without it, the United States cannot meet either its energy requirements or the goals established in the President's Climate Action Plan. In the decades to come, we predict that the country's nuclear fleet will evolve to include not only large, advanced light water reactors like those operating today and under construction in Georgia, Tennessee, and South Carolina, but also a complementary set of smaller, modular reactors. Those reactors are under development today by companies like Babcock & Wilcox (B&W), NuScale and others that have spent hundreds of millions of dollars to develop next-generation reactor concepts. Those companies have innovative designs and are prepared to absorb the lion's share of design and development costs, but the federal government should also play a significant role given the enormous promise of small modular reactor technology for commercial and other purposes. Most important, partnerships between government and the private sector will enable the full promise of this technology to be available in time to ensure U.S. leadership in energy, the environment, and the global nuclear market. The Department of Energy's Small Modular Reactor (SMR) program is built on the successful Nuclear Power 2010 program that supported design certification of the Westinghouse AP-1000 and General Electric ESBWR designs. Today, Southern Co. and South Carolina Electric & Gas are building four AP-1000s for which they submitted license applications to the Nuclear Regulatory Commission in 2008. Ten years ago, in the early years of the Nuclear Power 2010 program, it was clear that there would be a market for the AP-1000 and ESBWR in the United States and overseas, but it would have been impossible to predict which companies would build the first ones, or where they would be built, and it was even more difficult to predict the robust international market for that technology. The SMR program is off to a promising start. To date, B&W's Generation mPower joint venture has invested $400 million in developing its mPower design; NuScale approximately $200 million in its design. Those companies have made those investments knowing they will not see revenue for approximately 10 years. That is laudable for a private company, but, in order to prepare SMRs for early deployment in the United States and to ensure U.S. leadership worldwide, investment by the federal government as a cost-sharing partner is both necessary and prudent. Some have expressed concern about the potential market and customers for SMR technology given Babcock & Wilcox's recent announcement that it will reduce its level of investment in the mPower technology, and thus the pace of development. This decision reflects B&W's revised market assessment, particularly the slower-than-expected growth in electricity demand in the United States following the recession. But that demand will eventually occur, and the American people are best-served – in terms of cost and reliability of service – when the electric power industry maintains a diverse portfolio of electricity generating technologies. The industry will need new, low-carbon electricity options like SMRs because America's electric generating technology options are becoming more challenging.
For example: While coal-fired generation is a significant part of our base-load generation, coal-fired generation faces increasing environmental restrictions, including the likelihood of controls on carbon and uncertainty over the commercial feasibility of carbon capture and sequestration. The U.S. has about 300,000 MW of coal-fired capacity, and the consensus is that about one-fifth of that will shut down by 2020 because of environmental requirements. In addition, development of coal-fired projects has stalled: Less than 1,000 megawatts of new coal-fired capacity is under construction. Natural gas- fired generation is a growing and important component of our generation portfolio and will continue to do so given our abundant natural gas resources. However, prudence requires that we do not become overly dependent on any given energy source particularly in order to maintain long-term stable pricing as natural gas demand grows in the industrial sector and for LNG exports. Renewables will play an increasingly large role but, as intermittent sources, cannot displace the need for large-scale , 24/7 power options. Given this challenging environment, the electric industry needs as many electric generating options as possible, particularly zero-carbon options. Even at less-than-one-percent annual growth in electricity demand, the Energy Information Administration forecasts a need for 28 percent more power by 2040. That’s the equivalent of 300 one- thousand-megawatt power plants. America’s 100 nuclear plants will begin to reach 60 years of operation toward the end of the next decade. In the five years between 2029 and 2034, over 29,000 megawatts of nuclear generating capacity will reach 60 years. Unless those licenses are extended for a second 20-year period, that capacity must be replaced. If the United States hopes to contain carbon emissions from the electric sector, it must be replaced with new nuclear capacity. The runway to replace that capacity is approximately 10 years long, so decisions to replace that capacity with either large, advanced light-water reactors or SMRs must be taken starting in 2019 and 2020 – approximately the time that the first SMR designs should be certified by the N uclear R egulatory C ommission.
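The closing arithmetic in this card (28 percent more power by 2040 equals roughly 300 one-thousand-megawatt plants) can be sanity-checked with a quick back-of-envelope calculation. The sketch below is an editorial illustration only; the ~1,070 GW baseline capacity and the 2014-2040 window are assumptions supplied for the check, not figures drawn from the evidence.

```python
# Back-of-envelope check of the demand arithmetic at the end of the Fertel card.
# The baseline capacity and time window are assumptions, not from the evidence.

BASELINE_CAPACITY_GW = 1070   # assumed U.S. installed generating capacity, circa 2014
ANNUAL_GROWTH = 0.01          # "less-than-one-percent annual growth" upper bound
YEARS = 2040 - 2014           # forecast horizon implied by the card

growth_factor = (1 + ANNUAL_GROWTH) ** YEARS      # compound growth over 26 years
added_share = growth_factor - 1                   # ~0.30, in line with the card's 28%
added_capacity_gw = BASELINE_CAPACITY_GW * 0.28   # using the card's own 28% figure

print(f"~1%/yr for {YEARS} years -> about {added_share:.0%} more power needed")
print(f"28% of {BASELINE_CAPACITY_GW} GW = {added_capacity_gw:.0f} GW, "
      f"i.e. roughly {round(added_capacity_gw)} one-thousand-megawatt plants")
```

Run as-is, the script reproduces the order of magnitude the card relies on, which is the point of the analytic.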

Power purchase agreements are key Rosner, Goldberg, and Hezir et. al. ‘11 (Robert Rosner, Robert Rosner is an astrophysicist and founding director of the Energy Policy Institute at Chicago. He was the director of Argonne National Laboratory from 2005 to 2009, and Stephen Goldberg, Energy Policy Institute at Chicago, The Harris School of Public Policy Studies, Joseph S. Hezir, Principal, EOP Foundation, Inc., Many people have made generous and valuable contributions to this study. Professor Geoff Rothwell, Stanford University, provided the study team with the core and supplemental analyses and very timely and pragmatic advice. Dr. J’Tia Taylor, Argonne National Laboratory, supported Dr. Rothwell in these analyses. Deserving special mention is Allen Sanderson of the Economics Department at the University of Chicago, who provided insightful comments and suggested improvements to the study. Constructive suggestions have been received from Dr. Pete Lyons, DOE Assistant Secretary of Nuclear Energy; Dr. Pete Miller, former DOE Assistant Secretary of Nuclear Energy; John Kelly, DOE Deputy Assistant Secretary for Nuclear Reactor Technologies; Matt Crozat, DOE Special Assistant to the Assistant Secretary for Nuclear Energy; Vic Reis, DOE Senior Advisor to the Under Secretary for Science; and Craig Welling, DOE Deputy Office Director, Advanced Reactor Concepts Office, as well as Tim Beville and the staff of DOE’s Advanced Reactor Concepts Office. The study team also would like to acknowledge the comments and useful suggestions the study team received during the peer review process from the nuclear industry, the utility sector, and the financial sector. Reviewers included the following: Rich Singer, VP Fuels, Emissions, and Transportation, MidAmerican Energy Co.; Jeff Kaman, Energy Manager, John Deere; Dorothy R. Davidson, VP Strategic Programs, AREVA; T. J. Kim, Director—Regulatory Affairs & Licensing, Generation mPower, Babcock & Wilcox; Amir Shahkarami, Senior Vice President, Generation, Exelon Corp.; Michael G. Anness, Small Modular Reactor Product Manager, Research & Technology, Westinghouse Electric Co.; Matthew H. Kelley and Clark Mykoff, Decision Analysis, Research & Technology, Westinghouse Electric Co.; George A. Davis, Manager, New Plant Government Programs, Westinghouse Electric Co.; Christofer Mowry, President, Babcock & Wilcox Nuclear Energy, Inc.; Ellen Lapson, Managing Director, Fitch Ratings; Stephen A. Byrne, Executive Vice President, Generation & Transmission Chief Operating Officer, South Carolina Electric & Gas Company; Paul Longsworth, Vice President, New Ventures, Fluor; Ted Feigenbaum, Project Director, Bechtel Corp.; Kennette Benedict, Executive Director, Bulletin of the Atomic Scientist; Bruce Landrey, CMO, NuScale; Dick Sandvik, NuScale; and Andrea Sterdis, Senior Manager of Strategic Nuclear Expansion, Tennessee Valley Authority. The authors especially would like to acknowledge the discerning comments from Marilyn Kray, Vice-President at Exelon, throughout the course of the study, “Small Modular Reactors – Key to Future Nuclear Power”, http://epic.uchicago.edu/sites/epic.uchicago.edu/files/uploads/SMRWhite_Paper_Dec.14.2011co py.pdf, November 2011)

6.2 GOVERNMENT SPONSORSHIP OF MARKET TRANSFORMATION INCENTIVES Similar to other important energy technologies, such as energy storage and renewables, “market pull” activities coupled with the traditional “technology push” activities would significantly increase the likelihood of timely and successful commercialization. Market transformation incentives serve two important objectives. They facilitate demand for the off-take of SMR plants, thus reducing market risk and helping to attract private investment without high risk premiums. In addition, if such market transformation opportunities could be targeted to higher price electricity markets or higher value electricity applications, they would significantly reduce the cost of any companion production incentives. There are three special market opportunities that may provide the additional market pull needed to successfully commercialize SMRs : the federal government, international applications, and the need for replacement of existing coal generation plants . 6.2.1 Purchase Power Agreements with Federal Agency Facilities Federal facilities could be the initial customer for the output of the LEAD or FOAK SMR plants. The federal government is the largest single consumer of electricity in the U.S., but its use of electricity is widely dispersed geographically and highly fragmented institutionally (i.e., many suppliers and customers). Current federal electricity procurement policies do not encourage aggregation of demand, nor do they allow for agencies to enter into long-term contracts that are “bankable” by suppliers. President Obama has sought to place federal agencies in the vanguard of efforts to adopt clean energy technologies and reduce greenhouse gas emissions. Executive Order 13514, issued on October 5, 2009, calls for reductions in greenhouse gases by all federal agencies, with DOE establishing a target of a 28% reduction by 2020, including greenhouse gases associated with purchased electricity. SMRs provide one potential option to meet the President’s Executive Order. One or more federal agency facilities that can be cost effectively connected to an SMR plant could agree to contract to purchase the bulk of the power output from a privately developed and financed LEAD plant. 46 A LEAD plant, even without the benefits of learning, could offer electricity to federal facilities at prices competitive with the unsubsidized significant cost of other clean energy technologies. Table 4 shows that the LCOE estimates for the LEAD and FOAK-1plants are in the range of the unsubsidized national LCOE estimates for other clean electricity generation technologies (based on the current state of maturity of the other technologies). All of these technologies should experience additional learning improvements over time. However, as presented earlier in the learning model analysis, the study team anticipates significantly greater learning improvements in SMR technology that would improve the competitive position of SMRs over time. Additional competitive market opportunities can be identified on a region-specific, technology-specific basis. For example, the Southeast U.S. has limited wind resources. While the region has abundant biomass resources, the estimated unsubsidized cost of biomass electricity is in the range of $90-130 per MWh (9-13¢/kWh), making LEAD and FOAK plants very competitive (prior to consideration of subsidies). 47 Competitive pricing is an important, but not the sole, element to successful SMR deployment. 
A bankable contractual arrangement also is required, and this provides an important opportunity for federal facilities to enter into the necessary purchase power arrangements. However, to provide a "bankable" arrangement to enable the SMR project sponsor to obtain private sector financing, the federal agency purchase agreement may need to provide a guaranteed payment for aggregate output, regardless of actual generation output.48 Another challenge is to establish a mechanism to aggregate demand among federal electricity consumers if no single federal facility customer has a large enough demand for the output of an SMR module. The study team believes that high-level federal leadership, such as that exemplified in E.O. 13514, can surmount these challenges and provide critical initial markets for SMR plants.
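The card's price comparison turns on a simple unit conversion between $/MWh and cents/kWh (the 9-13 cents/kWh figure is just $90-130/MWh restated). A minimal sketch of that conversion follows; the sample SMR LCOE value is a hypothetical placeholder, since the excerpt references Table 4 of the study without reproducing its numbers.

```python
# Illustrative unit conversion for the LCOE comparison in the Rosner evidence.
# The $90-130/MWh biomass range is from the card; the sample SMR figure is hypothetical.

def dollars_per_mwh_to_cents_per_kwh(dollars_per_mwh: float) -> float:
    """Convert $/MWh to cents/kWh (1 MWh = 1,000 kWh; $1 = 100 cents)."""
    return dollars_per_mwh * 100.0 / 1000.0

biomass_range_mwh = (90, 130)   # $/MWh, Southeast biomass estimate from the card
print([dollars_per_mwh_to_cents_per_kwh(x) for x in biomass_range_mwh])  # -> [9.0, 13.0]

hypothetical_smr_lcoe = 100     # $/MWh, placeholder for a LEAD/FOAK plant
print(f"A plant at ${hypothetical_smr_lcoe}/MWh "
      f"({dollars_per_mwh_to_cents_per_kwh(hypothetical_smr_lcoe):.0f} cents/kWh) "
      f"beats the top of the biomass range: {hypothetical_smr_lcoe < biomass_range_mwh[1]}")
```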

The plan jump-starts the nuclear industry- government demonstrations are key to investor confidence
Madia '12 (Chairman of the Board of Overseers and Vice President for the SLAC National Accelerator Laboratory at Stanford and former Laboratory Director at the Oak Ridge National Laboratory and the Pacific Northwest National Laboratory) (William Madia, Stanford Energy Journal, Dr. Madia serves as Chairman of the Board of Overseers and Vice President for the SLAC National Accelerator Laboratory at Stanford University. Previously, he was the Laboratory Director at the Oak Ridge National Laboratory from 2000-2004 and the Pacific Northwest National Laboratory from 1994-1999, "SMALL MODULAR REACTORS: A POTENTIAL GAME-CHANGING TECHNOLOGY", http://energyclub.stanford.edu/index.php/Journal/Small_Modular_Reactors_by_William_Madia, Spring 2012)

There is a new type of nuclear power plant (NPP) under development that has the potential to be a game changer in the power generation market: the small modular reactor (SMR). Examples of these reactors that are in the 50-225 megawatt electric (MW) range can be found in the designs being developed and advanced by Generation mPower (http://generationmpower.com/), NuScale (http://nuscale.com/), the South Korean SMART reactor (http://smart.kaeri.re.kr/) and Westinghouse (http://www.westinghousenuclear.com/smr/index.htm/). Some SMR concepts are up to 20 times smaller than traditional nuclear plants Today’s reactor designers are looking at concepts that are 5 to 20 times smaller than more traditional gigawatt-scale (GW) plants. The reasons are straightforward; the question is, “Are their assumptions correct?” The first assumption is enhanced safety. GW-scale NPPs require sophisticated designs and cooling systems in case of a total loss of station power, as happened at Fukushima due to the earthquake and tsunami. These ensure the power plant will be able to cool down rapidly enough, so that the nuclear fuel does not melt and release dangerous radioactive fission products and hydrogen gas. SMRs are sized and designed to be able to cool down without any external power or human actions for quite some time without causing damage to the nuclear fuel. The second assumption is economics. GW-scale NPPs cost $6 billion to $10 billion to build. Very few utilities can afford to put this much debt on their balance sheets. SMRs offer the possibility of installing 50-225 MW of power per module at a total cost that is manageable for most utilities. Furthermore, modular configurations allow the utilities to deploy a more tailored power generation capacity, and that capacity can be expanded incrementally. In principle, early modules could be brought on line and begin producing revenues , which could then be used to fund the addition of more modules, if power needs arise. The third assumption is based on market need and fit. Utilities are retiring old fossil fuel plants. Many of them are in the few hundred MW range and are located near load centers and where transmission capacity currently exists. SMRs might be able to compete in the fossil re-power markets where operators don’t need a GW of power to serve their needs. This kind of “plug and play” modality for NPPs is not feasible with many of the current large-scale designs, thus giving carbon-free nuclear power an entry into many of the smaller markets, currently not served by these technologies. There are numerous reasons why SMRs might be viable today. Throughout the history of NPP development, plants grew in size based on classic “economies of scale” considerations. Bigger was cheaper when viewed on a cost per installed kilowatt basis. The drivers that caused the industry to build bigger and bigger NPPs are being offset today by various considerations that make this new breed of SMRs viable. Factory manufacturing is one of these considerations. Most SMRs are small enough to allow them to be factory built and shipped by rail or barge to the power plant sites. Numerous industry “rules of thumb” for factory manufacturing show dramatic savings as compared to “on-site” outdoor building methods. Significant schedule advantages are also available because weather delay considerations are reduced. 
Of course, from a total cost perspective, some of these savings will be offset by the capital costs associated with building multiple modules to get the same total power output. Based on analyses I have seen, overnight costs in the range of $5000 to $8000 per installed kilowatt are achievable. If these analyses are correct, it means that the economies of scale arguments that drove current designs to GW scales could be countered by the simplicity and factory-build possibilities of SMRs. No one has yet obtained a design certification from the Nuclear Regulatory Commission (NRC) for an SMR, so we must consider licensing to be one of the largest unknowns facing these new designs. Nevertheless, since the most developed of the SMRs are mostly based on proven and licensed components and are configured at power levels that are passively safe, we should not expect many new significant licensing issues to be raised for this class of reactor. Still, the NRC will need to address issues uniquely associated with SMRs, such as the number of reactor modules any one reactor operator can safely operate and the size of the emergency planning zone for SMRs. To determine if SMRs hold the potential for changing the game in carbon-free power generation, it is imperative that we test the design, engineering, licensing, and economic assumptions with some sort of public-private development and demonstration program. Instead of having government simply invest in research and development to "buy down" the risks associated with SMRs, I propose a more novel approach. Since the federal government is a major power consumer, it should commit to being the "first mover" of SMRs. This means purchasing the first few hundred MWs of SMR generation capacity and dedicating it to federal use. The advantages of this approach are straightforward. The government would both reduce licensing and economic risks to the point where utilities might invest in subsequent units, thus jumpstarting the SMR industry. It would then also be the recipient of additional carbon-free energy generation capacity. This seems like a very sensible role for government to play without getting into the heavy politics of nuclear waste, corporate welfare, or carbon taxes. If we want to deploy power generation technologies that can realize near-term impact on carbon emissions safely, reliably, economically, at scale, and at total costs that are manageable on the balance sheets of most utilities, we must consider SMRs as a key component of our national energy strategy.
2AC Off Case
Topicality
2AC T- "Non Military"
We meet- the floating SMRs would be used by the DOD for civilian and commercial purposes

We meet- the reactors would be owned by the DOE- any assistance to the DOD is therefore an effect, not a mandate, of the plan- the negative conflates solvency and topicality- any affirmative that won a spillover claim would be untopical, which kills logical affirmatives

Non-military is an adjective- their interpretation leads to a slippery slope that makes everything potentially untopical- non-military just means how it's used, not by whom
Malykhina 3/10 (Elena Malykhina began her career at The Wall Street Journal, and her writing has appeared in various news media outlets, including Scientific American, Newsday, and the Associated Press. For several years, she was the online editor at Brandweek and later Adweek. "Drones In Action: 5 Non-Military Uses", http://www.informationweek.com/government/mobile-and-wireless/drones-in-action-5-non-military-uses/d/d-id/1114175?image_number=3, March 10, 2014)

At the moment, almost all commercial drones are banned by the FAA. But that should change in 2015, when the agency expects to release its guidelines for safely operating drones. In the meantime, government agencies, a number of universities, and a handful of private companies are putting robotic aircraft to good use -- and in some cases challenging the FAA's authority. A judge agreed March 6 the FAA had overreached fining businessman Raphael Pirker, who used a model aircraft to take aerial videos for an advertisement. The judge said the FAA lacked authority to apply regulations for aircraft to model aircraft. That may open the skies to a lot more privately controlled drones.

Default to reasonability- good is good enough- we aren't stealing negative ground- specifying DOE solves- the negative still gets the DOD CP
2AC T- Development
We meet- the plan increases development of the oceans through investment in floating nuclear power
Counter-interpretation- development includes building things
Merriam-Webster No Date (http://www.merriam-webster.com/dictionary/development)

Full Definition of DEVELOPMENT 1 : the act, process, or result of developing 2 : the state of being developed 3 : a developed tract of land; especially : one with houses built on it

Includes economic development Longman No Date (Online Dictionary, http://www.ldoceonline.com/Geography- topic/development)

de‧vel‧op‧ment
1 growth [uncountable] the process of gradually becoming bigger, better, stronger, or more advanced: child development | the development of Greek thought | professional/personal development | opportunities for professional development
2 economic activity [uncountable] the process of increasing business, trade, and industrial activity: economic/industrial/business etc development

Our interpretation is best- theirs kills the best affirmative ground- better debates outweigh more limited ones- the topic says development of the ocean, not the ocean itself, which means their interpretation is contrived
Default to reasonability- good is good enough- competing interpretations incentivize a race to the bottom

Counterplan
2AC CP- Free Market
Perm do both
Perm do the counterplan
The 1AC was an impact turn to the counterplan
1- Upfront cost disad- floating nuclear is too expensive for private investors alone- government assistance is key- that's Fertel
2- Licensing disad- government action is key to get the NRC on board- that's Madia and Chandler
Licensing crushes the domestic industry
NTH '10 (Nuclear Townhall, "Despite Small Reactor Optimism, Industry Leaders Wonder If They Can Run The NRC Gantlet," http://www.nucleartownhall.com/blog/despite-small-reactor-optimism-industry-leaders-wonder-if-they-can-run-the-nrc-gantlet/, October 21, 2010)

Despite the excitement, there was a lingering sense that the nuclear industry is stagnating in this country and that all the action is shifting abroad. “There’s more excitement in emerging markets right now,” said Ali, of Advanced Reactor Concepts. “Nuclear is sexy right now in India and China. That isn’t happening here.” “ How are we going to compete with China if we don’t innovate in this country,” asked Dr. Robert Schleicher, project manager of General Atomics’ EM2, a 240-MW reactor that runs on spent fuel. “The tradition of the U.S. is innovation. New reactors are important to this.” To some surprise, venture capitalists on the Wednesday panel seemed very enthusiastic about nuclear. “Most of our investments have been in biotech and nuclear seems much less risky to me than biotech,” said Richard Kreger, senior managing director at the Source Capital Group. “Only one out of 100 drug properties ever make it through the FDA approval process. To me a nuclear reactor with a license is a much better risk.” Yet it was the licensing issue that hung like a cloud over the three-day proceedings. “The NRC is the gold standard,” said Ali, of Advanced Reactor Concepts, in a comment often repeated throughout the week. But the question remained whether the U.S. would get stuck on a decade-long quest for gold while the rest of the world moves ahead with silver and bronze . “The FDA is a `Yes, if’ organization,” said Nordan, of Venrock. “They try to help you through the process. The NRC is a `No, because’ agency. You get the feeling they’re not concerned whether you make it or not. The words `generating electricity’ do not appear in the NRC’s mission statement.” Representatives from several SMR companies said they are already looking abroad as a way of risk-managing the NRC licensing process. “”We’re exploring licensing in Britain,” said Deal-Blackwell, of Hyperion. “We may be dealing with Canada before the U.S.,” said Paul Farrell, president of Radix Power and Energy, an outgrowth of Brookhaven National Laboratory. “There’s a big need for isolated power up there.” Ifran Ali, of Advanced Reactor Concepts, complained that his company’s sodium-cooled SMR had been virtually eliminated from the competition because the NRC can only deal with l ight w ater r eactor s. “The regulatory process is making decisions," he said. "Already we’ve been moved to the back of the line without having the chance to demonstrate our technology. These decisions should be made by the market, not the bureaucracy.” Perhaps the most dramatic confrontation of the conference came when William A. Macon, Jr., of the Department of Energy, tired of hearing criticisms of the government, pronounced, “Nobody is going to bypass the NRC.”

3- Learning improvements- government facilitate faster innovation than free market alone- that’s Rosner No bubble – energy is distinct CSPO/CATF ‘9 (A Joint Project of CSPO AND CATF, INNOVATION AND POLICY CORE GROUP Jane “Xan” Alexander Independent Consultant D. Drew Bond Vice President, Public Policy, Battelle David Danielson Partner, General Catalyst Partners David Garman Principal, Decker Garman Sullivan and Associates, LLC Brent Goldfarb Assistant Professor of Management and Entrepreneurship, University of Maryland David Goldston Bipartisan Policy Center; Visiting Lecturer on Environmental Science and Public Policy, Harvard University David Hart Associate Professor, School of Public Policy, George Mason University Michael Holland Program Examiner, Office of Management and Budget Suedeen Kelly Commissioner, Federal Energy Regulatory Commission Jeffrey Marqusee Director, Environmental Security Technology Certification Program, and Technical Director, Strategic Environmental Research and Development Program, Department of Defense Tony Meggs MIT Energy Initiative Bruce Phillips Director, The NorthBridge Group Steven Usselman Associate Professor, School of History, Technology, and Society, Georgia Tech Shalini Vajjhala Fellow, Resources for the Future Richard Van Atta Core Research Staff Member for Emerging Technologies and Security, Science and Technology Policy Institute PV TECHNICAL EXPERTS Dan Shugar President, SunPower Corporation, Systems Tom Starrs Independent Consultant Trung Van Nguyen Director, Energy for Sustainability Program, National Science Foundation Ken Zweibel Director, Institute for Analysis of Solar Energy, George Washington University PCC TECHNICAL EXPERTS Howard Herzog Principal Research Engineer, MIT Laboratory for Energy and the Environment Pat Holub Technical and Marketing Manager, Gas Treating, Huntsman Corporation Gary Rochelle Carol and Henry Groppe Professor in Chemical Engineering, University of Texas at Austin Edward Rubin Alumni Professor of Environmental Engineering and Science; Professor of Engineering & Public Policy and Mechanical Engineering, Carnegie Mellon University AIR CAPTURE TECHNICAL EXPERTS Roger Aines Senior Scientist in the Chemistry, Materials, Earth and Life Sciences Directorate, Lawrence Livermore National Laboratory David Keith Director, Energy and Environmental Systems Group Institute for Sustainable Energy, Environment and Economy, University of Calgary Klaus Lackner Ewing-Worzel Professor of Geophysics, Department of Earth and Environmental Engineering, Columbia University Jerry Meldon Associate Professor of Chemical and Biological Engineering, Tufts University Roger Pielke, Jr. Professor, Environmental Studies Program, and Fellow of the Cooperative Institute for Research in Environmental Sciences, University of Colorado OTHER PARTICIPANTS Keven Brough CCS Program Director, ClimateWorks Foundation Mike Fowler Technical Coordinator Coal Transition Project, Clean Air Task Force Nate Gorence Policy Analyst, National Commission on Energy Policy Melanie Kenderdine Associate Director for Strategic Planning, MIT Energy Initiative Michael Schnitzer Director, The NorthBridge Group Kurt Waltzer Carbon Storage Development Coordinator, Coal Transition Project, Clean Air Task Force Jim Wolf Doris Duke Charitable Foundation, “Innovation Policy for Climate Change”, http://www.cspo.org/projects/eisbu/report.pdf, September 2009)

The complexities of innovation systems and their management must be grasped and mastered to develop effective energy-climate policies. In past episodes of fast paced innovation, government policies have been crucial catalysts. What can be learned from casessuch asinformation technology? The IT revolution stemmed from a pair of truly radical technologies: electronic computers running software programs stored in memory, and the solid -state components , transistors and integrated circuits(ICs) that became a primary source of seemingly limitless performance increases. These spawned countless further innovations that transformed the products of many industries and the internal processes of businesses worldwide. 4.1 Why Energy Is Not Like IT The needed revolution in energy -related technologies will necessarily proceed differently. There are two fundamental reasons. First, the laws of nature impose ceilings—impenetrable ceilings—on all energy conversion processes, whereas performance gains in IT face no similar limits. Second, digital systems were fundamentally new in the 1950s. To a minor extent they replaced existing “technologies”— paper-andpencil mathematics, punched card business information systems. To far greater extent, they made possible wholly new end-products, indeed created markets for them. By contrast, energy is a commodity, new “products” will consist simply of new ways of converting energy from one form to another, and costs are more likely to rise than decline, at least in the near term, as a result of innovations that reduce GHG emissions. IT performance gains have often been portrayed in terms of Moore’s Law, the well-known observation that IC density (e.g., the number of transistors per chip, now in the hundreds of millions) doubles every two years or so. Since per-chip costs have not changed much over time, increases in IC density translate directly into more performance per dollar. And while IC density will ultimately be limited by quantum effects, the ceiling remains well ahead, even though performance has already improved by eight or nine orders of magnitude. For conversion of energy from one form to another, by contrast, whether sunlight into electricity or chemical energy stored in coal into heat (e.g., in the boiler of a power plant) and then into electricity (in a turbo- generator), fundamental physical laws dictate that some energy will be lost: efficiency cannot reach 100 percent. Solar energy may be abundant, but only a fraction of the energy conveyed by sunlight can be turned into electrical power and only in the earliest years of PV technology was improvement by a single order-ofmagnitude possible (as efficiency passed 10 percent). Even though both PV cells and IC chips are built on knowledge foundations rooted in semiconductor physics, PV systems operate under fundamentally different constraints. Today the best commercial PV cells exhibit efficiencies in the range of 15-20 percent (considerably higher figures have been achieved in the laboratory). After more than a century of innovation, the best steam power plants reach about 40 percent, somewhat higher in combined cycle plants (in which gas turbines coupled with steam turbines produce greater output). Limited possibilities for performance gains translate into modest prospects for cost reductions, and in some cases innovations to reduce, control, or ameliorate GHGs imply reductions in performance on familiar measures, such as electricity costs, as already recounted for power plants fitted for carbon capture. 
While energy is a commodity, and PV systems compete with other energy conversion technologies more-or-less directly (generators for off-grid power, wind turbines and solar thermal for grid-connected applications), successive waves of IT products have performed new tasks, many of which would earlier have been all but inconceivable. Semiconductor firms designed early IC chips in response to government demand for very challenging and very costly defense and space applications, such as intercontinental missiles and the Apollo guidance and control system. Within a few years, they were selling inexpensive chips for consumer products. Sales to companies making transistor radios paved the way for sales to TV manufacturers at a time when color (commercialized in the late 1940s and slow to find a market) was replacing black-and-white (color TV sales in the United States doubled during the 1970s). IC chips led to microprocessors and microprocessors led to PCs, mobile telephones, MP3 players, and contributed to the Internet. Innovations in microelectronics made possible innovations in many other industries. That is why, although the semiconductor and PV industries began at about the same time, sales of microelectronics devices grew much faster, by 2007 reaching $250 billion worldwide compared with PV revenues of $17 billion. Like other technological revolutions, the revolution in IT reflected research conducted in earlier years, at first exploiting foundations laid before World War II when quantum mechanics was applied to solid-state physics and chemistry. Much relatively basic work now finds its way quite rapidly into applications: like many others, the semiconductor industry lives off what might be termed just-in-time (JIT) research. JIT research, conducted internally, by suppliers, by consortia of firms such as Sematech, and in universities, has helped firms sustain the Moore’s Law pace. Generally similar processes have characterized developments in PV technology, but natural limits on efficiency gains , and the commodity-like nature of electricity, keep this technology from following a path similar to IT . 4.2 Energy-Climate Innovation Policy Choices Plain ly, the needed revolution in energy-climate technologies will be very different from that in IT , even though the sources will be similar: technological innovation taking place within private firms, with assists from government and, for GHG reduction, either a strong regulatory prod or the government as direct purchaser. The technology and innovation policies on which the U.S. government can call come in many varieties and work in many ways (Table 2). Competition both in R&D and in procurement, for example, were highly effective in IT but have not been very significant in energyclimate technologies, which have been monopolized by the Department of Energy (DOE).

*Read Market Involvement Block*
2AC CP- DOD
Perm do both
Perm do the counterplan
PICs are a voting issue- they kill affirmative ground and destroy the 1AC, making it impossible to be affirmative- reject the counterplan for fairness

Counterplan doesn't solve:
A) Licensing- government legislation is key to get the NRC on board- that's Fertel
B) Power purchase agreements are key- demonstrations alone are insufficient- high up-front capital costs necessitate government funding- that's Rosner and Chandler
And the DOD is politicized
Davenport '12 (Coral Davenport is the energy and environment correspondent for National Journal. Prior to joining National Journal in 2010, Davenport covered energy and environment for Politico, and before that, for Congressional Quarterly, "Pentagon's Clean-Energy Initiatives Could Help Troops—and President Obama", http://www.nationaljournal.com/pentagon-s-clean-energy-initiatives-could-help-troops-and-president-obama-20120411?mrefid=site_search, April 11, 2012)

While Pentagon officials haven't tried to politicize their renewable-energy portfolio, the White House and Democratic candidates aren't shying away from publicizing Defense Department moves that highlight their broader energy agenda. On Tuesday evening, White House and Pentagon officials held a telephone press briefing on Wednesday's clean-energy announcements in an evident effort to raise their profile. In a non-election year, it's doubtful that announcements about how military bases will generate electricity would merit a White House background call—or whether a slate of such programs would even be rolled out together publicly. In Michigan, Democratic Sens. Carl Levin and Debbie Stabenow are scheduled to be present at the opening of the advanced combat-vehicle lab. Stabenow could face a tough reelection fight this fall. As it happens, the conservative super-PAC American Crossroads is also rolling out on Wednesday a $1.7 million television ad campaign attacking Obama's energy policies in six 2012 battleground states: Colorado, Florida, Iowa, Nevada, Ohio, and Virginia.
Turn and no solvency- DOD involvement in energy trades off with hard power focus and causes failure
O'Keefe '12 (William O'Keefe, CEO, George C. Marshall Institute, "DOD's 'Clean Energy' Is a Trojan Horse", May 22, 2012)

The purpose of the military is to defend the United States and our interests by deterring aggression and applying military force when needed . It is not to shape industrial policy. As we’ve learned from history, energy is essential for military success, independent of whether it is so called “clean energy” or traditional energy , which continues to get cleaner with time . There are three reasons for the Department of Defense (DOD) to be interested in biofuels—to reduce costs, improve efficiency, and reduce vulnerability. These are legitimate goals and should be pursued through a well thought out and rational Research-and-Development (R&D) program. But it’s not appropriate to use military needs to push a clean energy agenda that has failed in the civilian sector . Packaging the issue as a national security rationale is a Trojan Horse that hides another attempt to promote a specific energy industrial policy. Over the past four decades such initiatives have demonstrated a record of failure and waste . As part of the military’s push for green initiatives, both the Navy and Air Force have set goals to obtain up to 50 percent of their fuel needs from alternative sources. The underlying rationale is to reduce US dependence on foreign oil. But the Rand Corporation, the preeminent military think tank in the nation , recently conducted a study, Alternative Fuels for Military Applications; it concludes , "The use of alternative fuels offers the armed services no direct military benefit." It also concludes that biofuels made from plant waste or animal fats could supply no more than 25,000 barrels daily. That’s a drop in the bucket considering the military is the nation’s largest fuel consumer. Additionally, there is no evidence that commercial technology will likely to b e available in the near future to produce large quantities of biofuels at lower costs than conventional fuels. The flipside of that argument is that the cost of conventional fuels is uncertain because of dependence on imports from unstable sources. While that is true, it misses the point. For example, our reliance on imports from the Persian Gulf is declining and could be less if we expanded our own domestic production. Until alternatives that are cost competitive can be developed, DOD should look at alternative ways to reduce price volatility, just as large commercial users do. The second reason for pursuing alternative fuels is related to the first. Greater efficiency reduces costs by reducing the amount of fuel used. The military has been pursuing this goal for some time, as has the private sector. DOD total energy consumption declined by more than 60% between 1985 and 2006, according to Science 2.0. Improvements will continue because of continued investments in new technologies , especially in the private sector, which has market-driven incentives to reduce the cost of fuel consumption. Finally , there is the argument that somehow replacing conventional fuels with bio-fuels will reduce supply chain vulnerability and save lives. Rand also addressed this issue from both the perspective on naval and ground based forces. It concluded that there is no evidence that a floating bio -fuels plant “ would be less expensive than using either Navy oilers or commercial tankers to deliver finished fuel products .” It also dismissed the concept of small scale production units that would be co-located with tactical units. 
It concluded, “any concepts that require delivery of a carbon containing feedstock appear to place a logistical and operational burden on forward-based tactical units that would be well beyond that associated with the delivery of finished fuels.” Future military needs are met by a robust R&D program carried out by the services and the Defense Advanced Research Projects Agency (DARPA). Letting that agency and the services invest in future technologies to meet their specific service needs and maintain our military strength without political meddling is in the nation’s best interest. Advances in military technology that have civilian applications eventually enter the marketplace. Take, for example, DARPA’s research into improved military communication that eventually developed into internet technology that revolutionized how we communicate and obtain and use information. If DOD pursues research focused on lower costs, greater efficiency, and more secure fuel supplies, the civilian economy will eventually benefit. At a time when the military is faced with substantial budget cuts, allocating scarce resources to pursue so called “clean energy” objectives is worse than wasteful. It borders on a dereliction of duty. 1AR- DOD $ Kills Military

Turn- funding for green military trades-off with actual hard power capabilities- kills solvency Hodge ’12 (Hope Hodge, Hope Hodge reports on national security and defense issues for Human Events. “The Green Monster”, http://www.humanevents.com/article.php?id=51594, May 19, 2012)

Their agenda: spend millions on expensive alternative biofuels. Invest even more in undeveloped “green” technology. Prepare for the melting of the polar ice caps brought on by climate change. Some aggressive and well-funded environmentalist group? Nope. It’s the U.S. military. A few days ago, Defense Secretary Leon Panetta added fuel to the fire of an emerging controversy—just now capturing the attention of some members of Congress—by sharing his plans for the future of the military with a group of rapt environmentalists at an Environmental Defense Fund gala in his honor in Washington, D.C. “Our mission at the Department is to secure this nation against threats to our homeland and to our people,” he said. “In the 21st century, the reality is that there are environmental threats which constitute threats to our national security. For example, the area of climate change has a dramatic impact on national security: rising sea levels, to severe droughts, to the melting of the polar caps, to more frequent and devastating natural disasters all raise demand for humanitarian assistance and disaster relief.” Despite pending defense cuts that have had a dismayed Panetta pounding lecterns across the country, the Defense Secretary said DoD would be committing $2 billion in the next fiscal year alone to energy-efficient equipment and efficiency programs, and research and development for green technology. Not so fast, Secretary Panetta. Sen. James Inhofe (R-Okla.), a staunchly pro-military member of the Senate Armed Services Committee, takes the opposite view. He argues that’s money that could be used to manufacture or update a new fleet of aircraft. He now has defense leaders squarely in his crosshairs, determined to hold them to account for espousing debunked philosophies on climate change and promoting costly green initiatives while procurement needs go unmet. Following Panetta’s speech, Inhofe fired out a statement promising to provide congressional oversight and build awareness about the Defense Department’s “radical agenda.” Inhofe deconstructs Panetta Inhofe sat down with Human Events in his office last week and countered one by one each of Panetta’s climate change claims, reading from a ring-bound folder of research drawn from academic journals: there has been no statistically significant acceleration in sea level rise over the past century. The oft-cited severity of the 2011 drought, which covered 25 percent of the country, was nothing compared to one in 1984, which affected 80 percent of the land mass. Hurricanes, a common natural disaster, have been on the decline since the U.S. started keeping records of them in the 19th century. Everything Panetta said, Inhofe concluded, was a talking point cribbed from Al Gore’s 2006 global warming opus “An Inconvenient Truth,” and each, he said, has been refuted. Inhofe had a head start on the research. The minority leader of the Senate Environment and Public Works Committee, he is also the author of The Greatest Hoax, a refutation of climate change theory published earlier this year. The senator doesn’t expect Panetta to be as well-versed on climate change as he is, saying Panetta’s role is to lead the troops, not create environmental policy. Nor does Inhofe attribute all the far-left language and green initiatives to the defense secretary, who Inhofe said knows better than to spearhead such programs. “I’ve always liked Panetta; I served with him in the House and he’s always been one who has been very straightforward, very honest,” Inhofe said. 
“However, he has a commander in chief named Obama, so he has to say what Obama tells him to say.” Panetta has publicly and strongly defended the climate change and green energy talking points to critics, however, such as when he responded in March to criticism from Rep. Mike Conaway (R-Texas) at a House Armed Services Committee that Conaway’s premise for disagreement was “absolutely wrong” and that embracing the green agenda would make for a better military. The “Green Fleet” ready to launch While having America’s fighting forces plan for hypothetical climate change might be regarded as silly, DoD’s aggressive pursuit of biofuels as an alternative to traditional fossil fuels is a more immediate and potentially more damaging proposition . At the same gala featuring Panetta, Navy Secretary Ray Mabus told guests about plans to launch “Great Green Fleet” featuring ships and aircraft operating on a blend of traditional and biofuels. At up to $26 per gallon, biofuels can cost more than six times as much as traditional fuel sources at $3 or $4 per gallon, putting them out of the price range of many private industry maritime consumers with similar needs. Last December, in what was the largest government biofuel purchase in history, the Defense Logistics Agency procured 450,000 gallons of an advanced variety of the alternative fuel, made from both non-food waste and algae, for the relative bargain price of $12 million. Other test fuels have used the oil of the camelina mustard seed. According to a plan first made public by Mabus in 2009, the Navy expects to launch the fleet this summer for its exercises on the Pacific Rim—powered by the $12 million biofuels purchase—and to deploy it by 2016. Mabus listed his reasons for promoting the infant biofuel technology for his audience: the U.S. was too dependent on volatile areas of the world for fossil fuels, and unexpected fuel price fluctuation, as during the Libya conflict, could and did cost the DoD billions of dollars. Troops were endangered transporting traditional fuel to the battlefield. And like American steel in the 1880s, biofuel was a new technology waiting for an investor to come and purchase it at above-market prices, so eventually it could reduce its costs and become competitive. “That’s what we can do with energy,” Mabus said. “We can break the market.” The environmentalists applauded. Military inappropriate for green testing While keeping troops safe and lowering long-run costs are valuable goals for the Defense Department, biofuels won’t accomplish either , said Dr. David Kreutzer, a Research Fellow in Energy Economics and Climate Change for the Heritage Foundation. In the first case, he said, convoys would still have to transport fuel , whether “green” or petroleum, over ground to reach deployed forward operating bases. And since biofuels have a lower energy density, transport convoys would actually have to be larger to carry the supply , creating a broader target for the enemy. Second, Kreutzer said, if the technology behind alternative fuel sources was truly propitious , endorsement by the military should not be necessary to ensure its survival. “The fact that you have to get the Department of Defense to fund this to me is a sign that (biofuels are) not all that promising,” he said. Moreover, Kreutzer said, there were plenty of cheaper alternatives closer at hand. “We could drill a couple of wells in the Gulf of Mexico and get way more than we could for their biofuel initiatives,” he said. Kenneth P. 
Green, an energy and environment expert with the American Enterprise Institute, said the idea of energy security and independence was equally suspect . “The price shock issue is real,” he said. “ But trying to decouple from the world energy economy isn’t going to fix that.” Biofuels , subject to the laws of supply and demand , would increase in cost during a fuel price spike —and if kept off the world market, the cost of keeping them off would be high. “It’s more a matter of energies-phobia,” Green said. “The idea of survival as sort of independence in everything is the sort of reflexive mindset. We don’t think about this with regard to smartphones, knapsacks... with almost everything, we understand that it’s better with world trade.” And, Green said, the military had no business choosing the winners in fuel technology, especially with untapped options such as shale gas close at hand. “You don’t economize on keeping your soldiers alive, but where possible, don’t they have an obligation to conserve costs with the public’s dollar?” Green said. “Find the cheapest fuel, not the most politically correct fuel.” Biofuels could hurt combat readiness A study released in late March by the Bipartisan Policy Center on energy innovation within the Department of Defense found that while the military had some success in piloting new efficient technologies that would keep troops safer, its size and capacity meant it was ill-equipped to become a pioneer for green energy. “DoD’s ability to house supply and demand under one roof, and to produce lasting improvements in complex systems over time, driven in part by large, sustained procurement programs, is nearly unique—and unlikely to be widely reproduced in the energy and climate context,” a summary read. “There are significant constraints upon what DoD is likely to do directly in this area; the department is unlikely to become an all-purpose engine of energy innovation.” The study concluded the military would do best if pragmatism, not politics, drives energy and environmental decisions. “We believe that DoD’s scope in this area will be significantly constrained to issues and opportunities... that will also reliably assist DoD’s ability to fulfill its core mission,” one of the study’s authors, Samuel Thernstrom of the Clean Air Task Force, told Human Events. “Where those activities do not fall squarely within DoD’s core mission, it seems less likely that those efforts will be successful.” Sen. Inhofe’s game plan On Capitol Hill, Inhofe said he was the loudest voice protesting wasteful defense energy policies, but he said there were others who agreed, including Democrats who worried that the issue would affect their re-election races. While Inhofe’s options in terms of direct political action are limited, he said, because the Republicans lack a majority in the Senate, he plans to maintain a watchdog role to keep public attention on the issue. Later this month, he will deliver an extended address on the Senate floor denouncing the military’s far-left energy policies. And Inhofe looks forward to seeing how this year’s presidential election may provide a way to walk back the liberal Defense energy policies of the last term. Panetta is a great Secretary of Defense, Inhofe said; he would just be a better one serving under someone else. 
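The cost figures in the Hodge evidence above lend themselves to a quick arithmetic check. The sketch below is illustrative only and uses just the numbers quoted in the card (the $12 million Defense Logistics Agency purchase of 450,000 gallons and the $3–$4 per gallon price of traditional fuel); nothing in it is independent data.

```python
# Illustrative check of the biofuel cost claims in the Hodge '12 card above.
# All inputs are the card's own figures; this is not independent data.

purchase_cost_usd = 12_000_000    # DLA biofuel purchase cited in the card
purchase_gallons = 450_000        # gallons procured in that purchase
conventional_low, conventional_high = 3.0, 4.0   # $/gal for traditional fuel, per the card

implied_biofuel_price = purchase_cost_usd / purchase_gallons   # ~$26.7 per gallon
print(f"Implied biofuel price: ${implied_biofuel_price:.2f}/gal")
print(f"Cost multiple vs. conventional fuel: "
      f"{implied_biofuel_price / conventional_high:.1f}x to "
      f"{implied_biofuel_price / conventional_low:.1f}x")
```

At roughly $26–27 per gallon, the implied price is consistent with the card's "more than six times" claim (about 6.7x against $4 fuel and nearly 9x against $3 fuel).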
1AR- PTX Link Military clean tech development is controversial Zaffos 4/2/12 (Joshua, Scientific American, “US Military Forges Ahead with Plans to Combat Climate Change”) http://www.scientificamerican.com/article.cfm?id=us-military-forges-ahead-with-plans-to-combat-climate-change

Connecting the military's fossil-fuel and overall energy use with risks to our national security hasn't been easy in this political environment, especially with the presidential election looming. Congressional Republicans have repeatedly questioned and criticized the Armed Forces' new-energy strategies, portraying initiatives as political favors to clean-energy businesses. Disad 2AC DA- Market Involvement Government intervention case- that’s 1AC Chandler and Fertel- high up front cost and fear of licensing require government legislation Plan generates competition amongst SMR industry – through competitive bidding process by a power purchase agreement Cory, Canavan, and Koenig, No Date (Karlynn Cory, Brendan Canavan, and Ronald Koenig of NREL, National Renewable Energy Laboratory, a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, “Power Purchase Agreement Checklist for State and Local Governments”, No Date)

This fact sheet provides information and guidance on the solar photovoltaic (PV) power purchase agreement (PPA), which is a financing mechanism that state and local government entities can use to acquire clean, renewable energy. We address the financial, logistical, and legal questions relevant to implementing a PPA, but we do not examine the technical details—those can be discussed later with the developer/contractor. This fact sheet is written to support decision makers in U.S. state and local governments who are aware of solar PPAs and may have a cursory knowledge of their structure but they still require further information before committing to a particular project. Overview of PPA Financing The PPA financing model is a “third-party” ownership model, which requires a separate, taxable entity (“system owner”) to procure, install, and operate the solar PV system on a consumer’s premises (i.e., the government agency). The government agency enters into a long-term contract (typically referred to as the PPA) to purchase 100% of the electricity generated by the system from the system owner. Figure 1 illustrates the financial and power flows among the consumer, system owner, and the utility. Renewable energy certificates (RECs), interconnection, and net metering are discussed later. Basic terms for three example PPAs are included at the end of this fact sheet. The system owner is often a third-party investor (“tax investor”) who provides investment capital to the project in return for tax benefits. The tax investor is usually a limited liability corporation (LLC) backed by one or more financial institutions. In addition to receiving revenues from electricity sales, they can also benefit from federal tax incentives. These tax incentives can account for approximately 50% of the project’s financial return (Bolinger 2009, Rahus 2008). Without the PPA structure, the government agency could not benefit from these federal incentives due to its tax-exempt status.1 The developer and the system owner often are distinct and separate legal entities. In this case, the developer structures the deal and is simply paid for its services. However, the developer will make the ownership structure transparent to the government agency and will be the only contact throughout the process. For this reason, this fact sheet will refer to “system owner” and developer as one in the same. While there are other mechanisms to finance solar PV systems, this publication focuses solely on PPA financing because of its important advantages:2 1. No/low up-front cost. 2. Ability for tax-exempt entity to enjoy lower electricity prices thanks to savings passed on from federal tax incentives. 3. A predictable cost of electricity over 15–25 years. 4. No need to deal with complex system design and permitting process. 5. No operating and maintenance responsibilities. High-Level Project Plan for Solar PV with PPA Financing Implementing power purchase agreements involves many facets of an organization: decision maker, energy manager, facilities manager, contracting officer, attorney, budget official, real estate manager, environmental and safety experts, and potentially others (Shah 2009). While it is understood that some employees may hold several of these roles, it is important that all skill sets are engaged early in the process. Execution of a PPA requires the following project coordination efforts, although some may be concurrent:3 Step 1. 
Identify Potential Locations Identify approximate area available for PV installation including any potential shading. The areas may be either on rooftops or on the ground. A general guideline for solar installations is 5–10 watts (W) per square foot of usable rooftop or other space.4 In the planning stages, it is useful to create a CD that contains site plans and to use Google Earth software to capture photos of the proposed sites (Pechman 2008). In addition, it is helpful to identify current electricity costs. Estimating System Size (this page) discusses the online tools used to evaluate system performance for U.S. buildings. Step 2. Issue a Request for Proposal ( RFP ) to Competitively Select a Developer If the aggregated sites are 500 kW or more in electricity demand, then the request for proposal ( RFP) process will likely be the best way to proceed. If the aggregate demand is significantly less, then it may not receive sufficient response rates from developers or it may receive responses with expensive electricity pricing. For smaller sites, government entities should either 1) seek to aggregate multiple sites into a single RFP or 2) contact developers directly to receive bids without a formal RFP process (if legally permissible within the jurisdiction). Links to sample RFP documents (and other useful docu- ments) can be found at the end of this fact sheet. The materi- als generated in Step 1 should be included in the RFP along with any language or requirements for the contract. In addition, the logistical information that bidders may require to create their proposals (described later) should be included. It is also worthwhile to create a process for site visits. 3 Adapted from a report by GreenTech Media (Guice 2008) and from conver- sations with Bob Westby, NREL technology manager for the Federal Energy Management Program (FEMP). 4 This range represents both lower efficiency thin-film and higher efficiency crystalline solar installations. The location of the array (rooftop or ground) can also affect the power density. Source: http://www.solarbuzz.com/Consumer/ FastFacts.htm Renewable industry associations can help identify Web sites that accept RFPs. Each bidder will respond with an initial proposal including a term sheet specifying estimated output, pricing terms, ownership of environmental attributes (i.e., RECs) and any perceived engineering issues. Step 3. Contract Development After a winning bid is selected, the contracts must be negoti- ated—this is a time-sensitive process. In addition to the PPA between the government agency and the system owner, there will be a lease or easement specifying terms for access to the property (both for construction and maintenance). REC sales may be included in the PPA or as an annex to it (see Page 6 for details on RECs). Insurance and potential municipal law issues that may be pertinent to contract development are on Page 8. Step 4. Permitting and Rebate Processing The system owner (developer) will usually be responsible for filing permits and rebates in a timely manner. However, the government agency should note filing deadlines for state-level incentives because there may be limited windows or auction processes. The Database of State Incentives for Renewables and Efficiency (http://www.dsireusa.org/) is a useful resource to help understand the process for your state. Step 5. 
Project Design, Procurement, Construction, and Commissioning The developer will complete a detailed design based on the term sheet and more precise measurements; it will then procure, install, and commission the solar PV equipment. The commissioning step certifies interconnection with the utility and permits system startup. Once again, this needs to be done within the timing determined by the state incentives. Failure to meet the deadlines may result in forfeiture of benefits, which will likely change the electricity price to the government agency in the contract. The PPA should firmly establish realistic developer responsibilities along with a process for determining monetary damages for failure to perform. This solves their turn- an RFP would stimulate competition INSIDE the SMR industry Wheaton ‘8 (Glenn Wheaton, Glenn B. Wheaton, Sergeant First Class, US Army (ret.), is the co-founder and president of the non-profit Hawaii Remote Viewer's Guild dedicated to the research and training of remote viewing; and a director of the International Remote Viewing Association (IRVA), “Request for Proposal (RFP)”, http://www.epiqtech.com/request-for-proposal-rfp.htm, November 20, 2008)
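To make the Step 1 sizing guideline and the Step 2 procurement threshold from the Cory et al. fact sheet above concrete, here is a minimal sketch. The 5–10 W per square foot density and the 500 kW demand threshold for issuing a formal RFP are the card's figures; the site names, roof areas, and demand numbers are hypothetical.

```python
# Minimal sketch of Steps 1-2 of the PPA checklist in the Cory et al. evidence above.
# The 5-10 W/sq ft guideline and 500 kW RFP threshold come from the card;
# the portfolio below is entirely hypothetical.

def estimate_pv_capacity_kw(usable_sq_ft, watts_per_sq_ft=7.5):
    """Rough PV sizing: the card's guideline is 5-10 W per square foot of usable space."""
    return usable_sq_ft * watts_per_sq_ft / 1000.0  # convert W to kW

RFP_DEMAND_THRESHOLD_KW = 500  # aggregate demand at which the card recommends a formal RFP

sites = {
    "city_hall": {"roof_sq_ft": 40_000, "demand_kw": 350},
    "bus_depot": {"roof_sq_ft": 25_000, "demand_kw": 220},
}

total_capacity = sum(estimate_pv_capacity_kw(s["roof_sq_ft"]) for s in sites.values())
total_demand = sum(s["demand_kw"] for s in sites.values())

print(f"Estimated PV capacity across sites: {total_capacity:.0f} kW")
if total_demand >= RFP_DEMAND_THRESHOLD_KW:
    print("Aggregate demand supports a formal competitive RFP (Step 2).")
else:
    print("Consider aggregating more sites or contacting developers directly.")
```

The midpoint density (7.5 W/sq ft) is just a placeholder between the card's thin-film and crystalline bounds; a real Step 1 estimate would also account for shading and the rooftop-versus-ground siting the fact sheet mentions.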

A Request for Proposal (RFP), is the primary document that is sent to suppliers that invites them to submit a proposal to provide goods or services. Internally, an RFP can also be referred to as a sourcing project, a document, or an associated event (competitive bidding). Unlike a Request for Information (RFI) or a Request for Quotation (RFQ), an RFP is designed to get suppliers to provide a creative solution to a business problem or issue. RFPs should be used carefully since they can take a lot of time for both the organization and its suppliers. However, for more complex projects, an RFP may be the most effective way to source the goods or services required . When to Use an RFP Purchasing personnel should not use an RFP when they are only requesting information from suppliers, want merely pricing information, or only want to engage in a competitive bidding scenario. An RFP does make use of competitive bidding (this is an effective way to source), but an RFP should not be used if cost is the sole or main evaluation criteria. An RFP should be used when a project is sufficiently complex that it warrants a proposal from a supplier. RFPs are helpful when supplier creativity and innovative approaches to problems are needed. It is important to remember that the RFP process can take a significant amount of time to complete and could result in delays to the start of the project. Therefore, it only makes sense to use this when the benefits from obtaining supplier proposals are greater than the extra time it takes to prepare the RFP and to manage the RFP process. Benefits One of the main benefits that can arise if the RFP process is handled well is that the organization will have a good handle on the potential project risks for a complex project. The organization will also understand the prospective benefits that it can realize during the course of the project. Using an RFP also encourages suppliers to submit organized proposals that can be evaluated using a quantifiable methodology. In addition, an RFP lets suppliers know that the situation will be competitive. The competitive bidding scenario is often the best method available for obtaining the best pricing and, if done correctly, the best value. An RFP also gives purchasing personnel and project stakeholders the ability to visualize how the project will go and the approach that the suppliers will use to complete it. 2AC DA- Politics SMR’s are bipartisan Sullivan, Stenger, and Roma ’10 (Mary Anne Sullivan is a partner in Hogan Lovells' energy practice in Washington, D.C. Congress, Daniel F. Stenger is a partner in Hogan Lovells' energy practice in Washington, D.C., Amy C. Roma is a senior associate in Hogan Lovells' energy practice in Washington, D.C., “Are Small Reactors the Next Big Thing in Nuclear?”, www.pennenergy.com/index/power/display/3288852302/articles/electric-light-power/volume- 88/issue-6/sections/are-small-reactors-the-next-big-thing-in-nuclear.html, November 2010)

SMRs have enjoyed bipartisan support in Congress. The House Committee on Science and Technology and the Senate Energy and Natural Resources Committee have approved similar legislation designed to promote the development and deployment of SMRs along the lines the DOE has proposed. Promoting SMR development in legislation has its price. The Congressional Budget Office recently estimated that the Senate bill would cost $407 million over the next five years to support cost-sharing programs with private companies for the development of two standard SMR designs. Costs for the out-years were not included in the estimate, but the bill would require the DOE to obtain NRC design certifications for the reactors by 2018 and to secure combined construction and operating licenses by Jan. 1, 2021. If Congress can pass an energy bill, it seems likely the bill will support SMRs. Even in the absence of new authorizing legislation, however, appropriations bills that must be passed to keep the government running almost certainly will contain strong support for the DOE's research and development program for SMRs. Plan has unanimous support Press Action ’12 (3/12/12, “US Nuclear Industry Operates as if Fukushima Never Happened”, http://www.pressaction.com/news/weblog/full_article/nuclearsubsidies03122012/)

Both Democrats and Republicans have had a long love affair with commercial nuclear power, and the relationship is showing no signs of losing steam. Since the 1950s, members of both parties have enthusiastically lavished electric utility companies with expensive gifts, ranging from subsidies to protection from liability for disasters to loan guarantees, all underwritten by U.S. taxpayers. The political calculus is simple: nuclear power enjoys unanimous support in Washington. Try to name one member of the U.S. Senate or House of Representatives who favors shutting down the nation’s 104 commercial nuclear reactors. Federal agencies, from the Atomic Energy Commission to the Department of Energy to the Nuclear Regulatory Commission, have worked diligently through the years to promote nuclear power. At the state level, support for nuclear power also is extremely strong, although there are some politicians—albeit a tiny number—who have publicly called for the closure of certain nuclear plants. On the one-year anniversary of the start of the nuclear disaster at the Fukushima Dai-ichi nuclear power plant in Japan, one would assume a voice in official Washington would have emerged calling for an end to the nation’s experiment with nuclear power. In Germany, government officials made the decision to phase out nuclear power by 2022 in response to Fukushima. There’s no such sentiment among the ruling elite in the United States. Locating a member of Congress opposed to the continued operation of nuclear power plants is as hard as finding a lawmaker who favors breaking ties with Israel over its mistreatment of Palestinians for the last 60 years. In fact, it’s more than hard, it’s impossible. It’s very rare to find an issue where there is a noteworthy difference between Democrats and Republicans. When there are differences, they tend to be subtle, although party officials and the corporate media will attempt to sensationalize a slight difference to create an impression that the U.S. political system permits honest and real debate. PC not key Edwards 9 – Distinguished Professor of Political Science at Texas A&M University, holds the George and Julia Blucher Jordan Chair in Presidential Studies and has served as the Olin Professor of American Government at Oxford (George, “The Strategic President”, Printed by the Princeton University Press, pg. 149-150)

Even presidents who appeared to dominate Congress were actually facilitators rather than directors of change . They understood their own limitations and explicitly took advantage of opportunities in their environments. Working at the margins, they successfully guided legislation through Congress. When their resources diminished, they reverted to the stalemate that usually characterizes presidential-congressional relations. As legendary management expert Peter Drucker put it about Ronald Reagan, "His great strength was not charisma, as is commonly thought, but his awareness and acceptance of exactly what he could and what he could not do."134 These conclusions are consistent with systematic research by Jon Bond, Richard Fleisher, and B. Dan Wood. They have focused on determining whether the presidents to whom we attribute the greatest skills in dealing with Congress were more successful in obtaining legislative support for their policies than were other presidents. After carefully controlling for other influences on congressional voting, they found no evidence that those presidents who supposedly were the most proficient in persuading Congress were more successful than chief executives with less aptitude at influencing legislators.135 Scholars studying leadership within Congress have reached similar conclusions about the limits on personal leadership. Cooper and Brady found that institutional context is more important than personal leadership skills or traits in determining the influence of leaders and that there is no relationship between leadership style and effectiveness.136 Presidential legislative leadership operates in an environment largely beyond the president's control and must compete with other, more stable factors that affect voting in Congress in addition to party. These include ideology, personal views and commitments on specific policies, and the interests of constituencies . By the time a president tries to exercise influence on a vote, most members of Congress have made up their minds on the basis of these other factors. Thus, a president's legislative leadership is likely to be critical only for those members of Congress who remain open to conversion after other influences have had their impact. Although the size and composition of this group varies from issue to issue, it will almost always be a minority in each chamber. No link – plan doesn’t cost money DOE ‘11 (“Funding Federal Energy and Water Projects”, July, http://www.nrel.gov/docs/fy11osti/52085.pdf)

On-site renewable PPAs allow Federal agencies to fund on-site renewable energy projects with no upfront capital costs incurred. A developer installs a renewable energy system on agency property under an agreement that the agency will purchase the power generated by the system. The agency pays for the system through these power purchase payments over the life of the contract. After installation, the developer owns, operates, and maintains the system for the life of the contract. The PPA price is typically determined through a competitive procurement process. AT: Environmentalist Link Either alt causes outweigh or uniqueness overwhelms the link The Guardian ‘12 (Suzanne Goldenberg, US Environment Correspondent, “Obama launches fundraising campaign to win back environmental voters”, April 23, 2012, http://www.guardian.co.uk/environment/2012/apr/23/obama-launches-fundraising-environmental-voters#start-of-comments)

Barack Obama has launched a new green re-election site hoping to make up with environmental voters ahead of next November's vote. Environmentalists for Obama is aimed at organising green voters, who have had a complicated relationship with the Obama White House. Republicans have gone out of their way to cast Obama as a leader who put the environment ahead of the economy. Newt Gingrich even called him "President Algae". But environmental groups are disappointed with Obama for blocking higher ozone standards, opening the door to Arctic drilling, encouraging fracking for oil and natural gas, and advancing the controversial Keystone XL tar sands pipeline. Now Obama is trying to get them back on his side. As the site points out, Obama also raised gas mileage standards for cars and set tough new standards that will effectively ban new coal plants. His 2009 economic recovery plan also ploughed millions into clean energy industry. "None of this progress came easy," Obama said in a video timed for release on Earth Day on Sunday. "What we do over the next few months will decide whether we have the chance to make even more progress." The site aims to replicate Obama's success in organising and raising funds from greens in the 2008 election, even offering a smattering of green- tinged merchandise such as $10 bumper stickers that are "perfect for a hybrid or a bicycle". Obama is unlikely to get much competition for the green vote. Over the last four years, Republicans have moved sharply away from environmental causes, and many Tea Party activists have been vocal in expressing their disbelief in human-made climate change. Obama is nearly 40 points ahead of Mitt Romney, the Republican presidential candidate, among environmental voters. But Obama also wants to maintain an organising and fundraising edge. For green-minded voters, it will be difficult to rekindle the earlier enthusiasm of his 2008 campaign, when virtually every speech included a promise to help save a "planet in peril". Some campaigners have warned Obama could lose green voters because he failed to live up to that promise. Environmental groups were disappointed in Obama for failing to press strongly for a climate-change law. The bill that emerged from the House of Representatives in the summer of 2009 eventually died in the Senate. Last September, Obama overruled the environmental protection agency's efforts to limit ozone, sticking with standards set by George W Bush and regarded by scientists as weak and out-of-date. Obama won back some campaigners last January when he rejected the Keystone XL pipeline. But he moved last March to fast-track the southern portion of the pipeline. He has also frustrated campaigners with his response to the BP oil spill, reopening the Gulf of Mexico to offshore drilling and pushing for more oil and gas drilling in the Arctic. Environmentalists love the plan VEIA ’12 (Virginina Energy Independence Alliance, 9/11/12, “Why liberals and environmentalists are embracing Nuclear Energy”)

In what is becoming a trend, liberals are starting to peel off the anti-nuclear environmentalist bandwagon and acknowledge – and embrace – the importance and value of nuclear to meet growing world power demands. Last December, the Progressive Policy Institute published a memo in support of nuclear power, citing the “impeccable safety record of nuclear power reactors under normal operating conditions,” the absence of scientific consensus regarding low-dose radiation risks, and its lack of polluting emissions. They called upon liberals, normally “champions of reason and science,” to actually take the time for an honest and fact-based evaluation of nuclear energy, and cautioned against giving too much weight to the “feeling of risk,” as opposed to real risk: So far we have spoken of risk in terms of assessments based on logic, reasoning, and scientific deliberation. But this is not the way most people think about nuclear energy. Their perceptions are shaped by risk as a feeling – an instinctive and intuitive reaction dominated by worry, fear, dread, and anxiety. These feelings often reflect a conflation of nuclear power and nuclear weapons, and feelings of anxiety stoked by the Cold War arms race. This is exactly what Virginia is running up against in the Coles Hill Uranium Mine fight, as environmentalists lose control of their reason and let irrational fears and the feelings of risk cloud their judgment. Fortunately, the tide may be turning away from feelings, anxiety, and fear, and toward reason and science. On Friday, September 7th, a group of environmentalists from the Breakthrough Institute published an article with the subtitle, “Why it’s time for environmentalists to stop worrying and love the atom,” in which they defend nuclear power against the nay-saying of their less reasonable brethren. Environmental concerns spur plan supporters to lobby harder Shellenberger ‘12 (Michael, president of the breakthrough institute, Jessica Lovering, policy analyst at the breakthough institute, Ted Nordhaus, chairman of the breakthrough institute. September 7, 2012. [“Out of the Nuclear Closet,” http://www.foreignpolicy.com/articles/2012/09/07/out_of_the_nuclear_closet?page=0,0]

Arguably, the biggest impact of Fukushima on the nuclear debate, ironically, has been to force a growing number of pro-nuclear environmentalists out of the closet, including us. The reaction to the accident by anti- nuclear campaigners and many Western publics put a fine point on the gross misperception of risk that informs so much anti-nuclear fear. Nuclear remains the only proven technology capable of reliably generating zero-carbon energy at a scale that can have any impact on global warming. Climate change -- and, for that matter, the enormous present-day health risks associated with burning coal, oil, and gas -- simply dwarf any legitimate risk associated with the operation of nuclear power plants. About 100,000 people die every year due to exposure to air pollutants from the burning of coal. By contrast, about 4,000 people have died from nuclear energy -- ever -- almost entirely due to Chernobyl. But rather than simply lecturing our fellow environmentalists about their misplaced priorities, and how profoundly inadequate present-day renewables are as substitutes for fossil energy, we would do better to take seriously the real obstacles standing in the way of a serious nuclear renaissance. Many of these obstacles have nothing to do with the fear-mongering of the anti-nuclear movement or, for that matter, the regulatory hurdles imposed by the U.S. Nuclear Regulatory Commission and similar agencies around the world. As long as nuclear technology is characterized by enormous upfront capital costs, it is likely to remain just a hedge against overdependence on lower-cost coal and gas, not the wholesale replacement it needs to be to make a serious dent in climate change. Developing countries need large plants capable of bringing large amounts of new power to their fast-growing economies. But they also need power to be cheap. So long as coal remains the cheapest source of electricity in the developing world, it is likely to remaining. The most worrying threat to the future of nuclear isn't the political fallout from Fukushima -- it's economic reality. Even as new nuclear plants are built in the developing world, old plants are being retired in the developed world. For example, Germany's plan to phase out nuclear simply relies on allowing existing plants to be shut down when they reach the ends of their lifetime. Given the size and cost of new conventional plants today, those plants are unlikely to be replaced with new ones. As such, the combined political and economic constraints associated with current nuclear energy technologies mean that nuclear energy's share of global energy generation is unlikely to grow in the coming decades, as global energy demand is likely to increase faster than new plants can be deployed. To move the needle on nuclear energy to the point that it might actually be capable of displacing fossil fuels, we'll need new nuclear tech nologies that are cheaper and smaller. Today, there are a range of nascent, smaller nuclear power plant designs, some of them modifications of the current light-water reactor technologies used on submarines, and others, like thorium fuel and fast breeder reactors, which are based on entirely different nuclear fission technologies. Smaller, modular reactors can be built much faster and cheaper than traditional large-scale nuclear power plants. 
Next-generation nuclear reactors are designed to be incapable of melting down, produce drastically less radioactive waste, make it very difficult or impossible to produce weapons grade material, use less water, and require less maintenance. Most of these designs still face substantial technical hurdles before they will be ready for commercial demonstration. That means a great deal of research and innovation will be necessary to make these next generation plants viable and capable of displacing coal and gas. The United States could be a leader on developing these technologies, but unfortunately U.S. nuclear policy remains mostly stuck in the past. Rather than creating new solutions, efforts to restart the U.S. nuclear industry have mostly focused on encouraging utilities to build the next generation of large, light-water reactors with loan guarantees and various other subsidies and regulatory fixes. With a few exceptions, this is largely true elsewhere around the world as well. Nuclear has enjoyed bipartisan support in Congress for more than 60 years, but the enthusiasm is running out. The Obama administration deserves credit for authorizing funding for two small modular reactors, which will be built at the Savannah River site in South Carolina. But a much more sweeping reform of U.S. nuclear energy policy is required. At present, the Nuclear Regulatory Commission has little institutional knowledge of anything other than light-water reactors and virtually no capability to review or regulate alternative designs. This affects nuclear innovation in other countries as well, since the NRC remains, despite its many critics, the global gold standard for thorough regulation of nuclear energy. Most other countries follow the NRC's lead when it comes to establishing new technical and operational standards for the design, construction, and operation of nuclear plants. Environment doesn’t matter Schor ‘12 (Elana, Energy and Environment Daily reporter, “David vs. Goliath or even money? Greens weigh their election-year matchup” E&E News -- January 23 -- http://www.eenews.net/public/EEDaily/2012/01/23/1)

" Nobody's afraid of the environmental community on Capitol Hill," this advocate said, noting that the president's re-election campaign last week t ook charge of an ad response on energy jobs that greens are not yet positioned to amplify. " They don't have political money ready at hand, but it's not that they don't have money." 2AC DA- Clean Tech

Coal increasing globally- not renewables- that’s 1AC Lazenby Furthermore, renewable investment impossible Loki ’12 (Reynard, Justmeans staff writer for Sustainable Finance and Corporate Social Responsibility, “With Uncertain Financial Future, Cloudy Skies Ahead for American Cleantech,” 9-19, http://www.justmeans.com/With-Uncertain-Financial-Future-Cloudy-Skies-Ahead-for-American-Cleantech/56050.html)

Looking at 2011 VC investment figures, it seems like the cleantech industry in the United States is doing just fine. According to the Cleantech Group, a market intelligence advisory group based in San Francisco that has been tracking cleantech investments for the past decade, 2011 is the first year that saw more than $2 billion in cleantech venture investment in all four quarters. 4Q11 saw an impressive $2.21 billion in cleantech VC investments.[1] But if you take a closer—and wider—view , the bigger story isn't all that great. For one thing, the numbers for seed-stage deals were flat as investor focus turned to re-investing in firms already in their portfolios, firms that needed later-stage growth capital. In dollar terms, the news for early-stage startups across all industries is even worse. In 2011, VCs invested just $919 million in seed capital in 396 companies, a decrease of almost 50 percent from the previous year. In fact, seed-stage deals were the only stage of VC funding in 2011 to experience a decrease in average size. On the other side, late-stage VC investments in 2011 experienced a 37 percent increase. [2] TROUBLED TIMES FOR CLEANTECH VC That change in focus is part of a worrisome trend: According to Third Way, a Washington DC- based think tank, there were twice as many late-stage deals than early-stage deals in the cleantech sector in 2010—the first time that late-stage financing overtook early-stage development since 1999.[3] The trend has sounded alarm bells about the state of cleantech innovation in the United States: If VCs are targeting re-investments in portfolio companies, where does that leave innovative start-ups in dire need of financing? One concern is that without VC interest in committing seed money to new ideas, America's start-ups will look overseas for funding, leaving the nation in a cleantech innovation drag . For investors, the move away from start-up financing towards companies that are closer to turning a profit is understandable, particularly considering the nation's uncertain economic state. Untested ideas, though they may have merit, are left to the wayside. "Cleantech hasn't been a failure," noted Daniel Yates, CEO of Opower, a customer engagement platform for the utility industry. "It's VC investment in cleantech that has been troubled."[4] The Valley Of Death: only Uncle Sam can help build the bridge to clean energy The answer, according to some analysts, isn't to stimulate the VC industry, but to look to Uncle Sam. Indeed, over the past few years, the federal government's investment in cleantech has dwarfed that of venture capitalists. Between 2009 and 2014, Washington will have spent more than $150 billion in cleantech—more than three times the amount spent during the previous five-year period.[5] But, according to researchers from the Brookings Institution, the World Resources Institute and the Breakthrough Institute, in the excellent 2012 report Beyond Boom and Bust: Putting Clean Tech on a Path to Subsidy Independence, "To ensure a fully competitive energy market, the federal government must also do more to speed the demonstration and commercialization of new advanced energy technologies." 
The authors—Jesse Jenkins, Director of Energy and Climate Policy, Breakthrough Institute; Mark Muro, Senior Fellow, Metropolitan Policy Program, Brookings Institution; Ted Nordhaus and Michael Shellenberger, Cofounders, Breakthrough Institute; Letha Tawney, Senior Associate, World Resources Institute; and Alex Trembath, Policy Associate, Breakthrough Institute—note that "private sector financing is typically insufficient to move new energy innovations from early-stage laboratory research on to proof-of-concept prototype and then to full commercial scale."[6] They cite two financing gaps that they say " kill off too many promising new technologies before they have a chance to develop." One is known as the "technological valley of death," in which investors are hesitant to invest in early-stage R&D , hampering a start-up's ability to develop breakthrough concepts into marketable products. The other is the "commercialization valley of death," when young firms cannot find financing to take them from the pilot or demonstration phase of their product's tech development cycle to full commercial readiness.[7] "To avoid locking America's entrepreneurs and innovators out of energy markets, Congress should implement new policies to navigate the clean energy valleys of death," the authors recommend. "Without such policies, conventional fossil energy technologies are effectively insulated from new challengers, preventing a fully competitive US energy market."[8] THE WELL IS RUNNING DRY: FEDERAL CLEAN ENERGY INVESTMENT TO ENTER STEEP DECLINE The problem, however, is that federal cleantech funding, as described by The New York Times editorial board, " is about to drop off a cliff."[9] The reason for this is simple: The clean energy incentives and subsidies provided by President Obama's 2009 economic stimulus bill—amounting to $65 billion, including loan guarantees for wind and solar power—will largely be dismantled by 2014. To make matters worse, other longer-standing subsidies, like the mission-critical Production Tax Credit (PTC), are expiring.[10] For the cleantech industry , the numbers are hard to swallow. By 2014, annual federal cleantech spending is set to decline 75 percent to $11 billion (the high, in 2009, was $44.3 billion). In addition, 70 percent of all federal clean energy policies that were active in 2009 are set to expire at the end of 2014.[11] There's also the effect that the expiries will have on jobs. According to a Brookings Institute report, Obama's stimulus package was the cause of an 8.3-percent increase in jobs in the renewable energy sector, an impressive figure especially considering it happened at the height (or rather, depth) of the recession.[12] The thought of renewing such incentives is a bit pie-in-the- sky. Obama's stimulus bill was passed when Democrats controlled both houses of Congress. While the clean energy-friendly side of the aisle still controls the Senate, "the Republican wrecking crew in the House," as The New York Times notes, "remains generally hostile to programs that threaten the hegemony of the oil and gas interests."[13] DRILL, BABY, DRILL: THE GOP WILL KILL CLEAN ENERGY The House, for example, recently defeated an amendment proposed by Rep. Ed Markey (D-Mass.) to extend the wind energy PTC, mostly along party lines. 
Many analysts say the loss of the PTC is a significant blow to America's wind sector.[14][15] "There's such uncertainty in the market right now," said Laura Arnold, who sits on the board of directors of the Indiana Renewable Energy Association. "Uncertainty is not a positive stimulus for the growth of the industry…It's not completely over, but it's going to be on life support until we have another policy in its place to give the right inducement to the industry."[16] And if Mitt Romney wins the presidency, more dark days for the nation's cleantech sector are certain. The GOP hopeful's recently unveiled energy plan calls for opening up oil and gas development along the Atlantic Coast and—much to the chagrin of environmentalists and conservationists—the Arctic National Wildlife Refuge (ANWR), while ending much-needed subsidies for wind and solar.[17] Squo won’t solve warming – the plan is key Loudermilk ’11 (Micah K. Loudermilk, Contributor Micah J. Loudermilk is a Research Associate for the Energy & Environmental Security Policy program with the Institute for National Strategic Studies at National Defense University, contracted through ASE Inc, “Small Nuclear Reactors and US Energy Security: Concepts, Capabilities, and Costs”, http://www.ensec.org/index.php?option=com_content&view=article&id=314:small-nuclear-reactors-and-us-energy-security-concepts-capabilities-and-costs&catid=116:content0411&Itemid=375, May 31, 2011)
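The funding-cliff figures in the Loki evidence above follow from simple arithmetic. The sketch below is illustrative only and uses just the numbers quoted in that card: the $44.3 billion 2009 peak, the projected $11 billion for 2014, and 2011's $919 million of seed-stage capital spread over 396 companies.

```python
# Quick arithmetic check of the funding figures quoted in the Loki '12 card above.
peak_2009_bn = 44.3        # 2009 federal cleantech spending high ($ billions), per the card
projected_2014_bn = 11.0   # projected 2014 level ($ billions), per the card

decline = 1 - projected_2014_bn / peak_2009_bn
print(f"Projected decline in federal cleantech spending: {decline:.0%}")  # ~75%

seed_capital_usd = 919_000_000   # 2011 seed-stage VC investment, per the card
seed_deals = 396                 # companies funded in 2011, per the card
print(f"Average 2011 seed deal: ${seed_capital_usd / seed_deals / 1e6:.1f} million")
```

The roughly 75 percent decline and the $2.3 million average seed deal are consistent with the card's characterization of a shrinking pool of early-stage capital.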

Economies of Scale Reversed? Safety aside, one of the biggest issues associated with reactor construction is their enormous costs—often approaching up to $10 billion apiece. The outlay costs associated with building new reactors are so astronomical that few companies can afford the capital required to finance them. Additionally, during the construction of new reactors, a multi-year process, utilities face “single-shaft risk”—forced to tie up billions of dollars in a single plant with no return on investment until it is complete and operational. When this is coupled with the risks and difficulties classically associated with reactor construction, the resulting environment is not conducive to the sponsorship of new plants. Conventional wisdom says that SMRs cannot be cost-competitive with large reactors due to the substantial economies of scale loss transitioning down from gigawatt-sized reactors to ones producing between 25MW and 300 MW, but, a closer examination may result in a different picture. To begin with, one of the primary benefits of SMRs is their modularity. Whereas conventional reactors are all custom-designed projects and subsequently often face massive cost overruns, SMRs are factory-constructed—in half the time of a large reactor—making outlay costs largely fixed. Moreover, due to their scalability, SMRs at a multi-unit site can come online as installed, rather than needing to wait for completion of the entire project, bringing a faster return on invested capital and allowing for capacity additions as demand increases over time. Other indirect cost-saving measures further increase the fiscal viability of small nuclear reactors. Due to the immense power output of conventional reactors, they also require special high-power transmission lines. In contrast, small reactor output is low enough to use existing transmission lines without overloading them. This allows for small reactors to serve as “drop-in” replacements at existing old fossil fuel-based power plants, while utilizing the transmission lines, steam turbines, and other infrastructure already in place. In fact, the Tennessee Valley Authority (TVA) hopes to acquire two Babcock & Wilcox small reactors for use in this manner—perhaps precipitating a movement whereby numerous fossil fuel plants could be converted. Lastly, and often ignored, is the ability of small reactors to bring a secure energy supply to locations detached from the grid. Small communities across Canada, Alaska, and other places have expressed immense interest in this opportunity. Additionally, the incorporation of small reactors may be put to productive use in energy-intensive operations including the chemical and plastics industries, oil refineries, and shale gas extraction. Doing so, especially in the fossil fuels industry would free up the immense amounts of oil and gas currently burned in the extraction and refining process. All told, small reactors possess numerous direct and indirect cost benefits which may alter thinking on the monetary competitiveness of the technology. Nuclear vs. Alternatives: a realistic picture When discussing the energy security contributions offered by small nuclear reactors, it is not enough to simply compare them with existing nuclear technology, but also to examine how they measure up against other electricity generation alternatives—renewable energy technologies and fossil fuels. Coal, natural gas, and oil currently account for 45%, 23% and 1% respectively of US electricity generation sources. 
Hydroelectric power accounts for 7%, and other renewable power sources for 4%. These ratios are critical to remember because idealistic visions of providing for US energy security are not as useful as realistic ones balancing the role played by fossil fuels, nuclear power, and renewable energy sources. Limitations of renewables Renewable energy technologies have made great strides forward during the last decade. In an increasingly carbon emissions and greenhouse gas (GHG) aware global commons, the appeal of solar, wind, and other alternative energy sources is strong, and many countries are moving to increase their renewable electricity generation. However, despite massive expansion on this front, renewable sources struggle to keep pace with increasing demand, to say nothing of decreasing the amount of energy obtained from other sources. The continual problem with solar and wind power is that, lacking efficient energy storage mechanisms, it is difficult to contribute to baseload power demands. Due to the intermittent nature of their energy production, which often does not line up with peak demand usage, electricity grids can only handle a limited amount of renewable energy sources—a situation which Germany is now encountering. Simply put, nuclear power provides virtually carbon-free baseload power generation, and renewable options are unable to replicate this, especially not on the scale required by expanding global energy demands. Small nuclear reactors, however, like renewable sources, can provide enhanced, distributed, and localized power generation. As the US moves towards embracing smart grid technologies, power production at this level becomes a critical piece of the puzzle. Especially since renewable sources, due to sprawl, are of limited utility near crowded population centers, small reactors may in fact prove instrumental to enabling the smart grid to become a reality. Pursuing a carbon-free world Realistically speaking, a world without nuclear power is not a world full of increased renewable usage, but rather, of fossil fuels instead. The 2007 Japanese Kashiwazaki-Kariwa nuclear outage is an excellent example of this, as is Germany’s post-Fukushima decision to shutter its nuclear plants, which, despite immense development of renewable options, will result in a heavier reliance on coal-based power as its reactors are retired, leading to a 4% increase in annual carbon emissions. On the global level, without nuclear power, carbon dioxide emissions from electricity generation would rise nearly 20% from nine to eleven billion tons per year. When examined in conjunction with the fact that an estimated 300,000 people per year die as a result of energy-based pollutants, the appeal of nuclear power expansion grows further. As the world copes simultaneously with burgeoning power demand and the need for clean energy, nuclear power remains the one consistently viable option on the table. With this in mind, it becomes even more imperative to make nuclear energy as safe as possible, as quickly as possible—a capacity which SMRs can fill with their high degree of safety and security. Additionally, due to their modular nature, SMRs can be quickly constructed and deployed widely. While this is not to say that small reactors should supplant large ones, the US would benefit from diversification and expansion of the nation’s nuclear energy portfolio. 
Path forward: Department of Defense as first-mover Problematically, despite the immense energy security benefits that would accompany the wide-scale adoption of small modular reactors in the US, with a difficult regulatory environment, anti-nuclear lobbying groups, skeptical public opinion, and of course the recent Fukushima accident, the nuclear industry faces a tough road in the battle for new reactors. While President Obama and Energy Secretary Chu have demonstrated support for nuclear advancement on the SMR front, progress will prove difficult. However, a potential route exists by which small reactors may more easily become a reality: the US military. The US Navy has successfully managed, without accident, over 500 small reactors on-board its ships and submarines throughout 50 years of nuclear operations. At the same time, serious concern exists, highlighted by the Defense Science Board Task Force in 2008, that US military bases are tied to, and almost entirely dependent upon, the fragile civilian electrical grid for 99% of their electricity consumption. To protect military bases’ power supplies and the nation’s military assets housed on these domestic installations, the Board recommended a strategy of “islanding” the energy supplies for military installations, thus ensuring their security and availability in a crisis or conflict that disrupts the nation’s grid or energy supplies. DOD has sought to achieve this through decreased energy consumption and renewable technologies placed on bases, but these endeavors will not go nearly far enough in achieving the department’s objectives. However, by placing small reactors on domestic US military bases, DOD could solve its own energy security quandary—providing assured supplies of secure and constant energy both to bases and possibly the surrounding civilian areas as well. Concerns over reactor safety and security are alleviated by the security already present on installations and the military’s long history of successfully operating nuclear reactors without incident. Unlike reactors on-board ships, small reactors housed on domestic bases would undoubtedly be subject to Nuclear Regulatory Commission (NRC) regulation and certification; however, with strong military backing, adoption of the reactors may prove significantly easier than would otherwise be possible. Additionally, as the reactors become integrated on military facilities, general fears over the use and expansion of nuclear power will ease, creating inroads for widespread adoption of the technology at the private utility level. Finally, and perhaps most importantly, action by DOD as a “first mover” on small reactor technology will preserve America’s badly struggling and nearly extinct nuclear energy industry. The US possesses a wealth of knowledge and technological expertise on SMRs and has an opportunity to take a leading role in its adoption worldwide. With the domestic nuclear industry largely dormant for three decades, the US is at risk of losing its position as the global leader in the international nuclear energy market. If the current trend continues, the US will reach a point in the future where it is forced to import nuclear technologies from other countries—a point echoed by Secretary Chu in his push for nuclear power expansion. Action by the military to install reactors on domestic bases will guarantee the short-term survival of the US nuclear industry and will work to solidify long-term support for nuclear energy. 
Conclusions In the end, small modular reactors present a viable path forward both for the expansion of nuclear power in the US and for enhanced US energy security. Offering highly safe, secure, and proliferation-resistant designs, SMRs have the potential to bring carbon-free baseload distributed power across the United States. Small reactors measure up with, and even exceed, large nuclear reactors on questions of safety and possibly on the financial (cost) front as well. SMRs carry many of the benefits of both large-scale nuclear energy generation and renewable energy technologies. At the same time, they can reduce US dependence on fossil fuels for electricity production—moving the US ahead on carbon dioxide and GHG reduction goals and setting a global example. While hurdles within the domestic nuclear regulatory environment have proven nearly impossible to overcome since Three Mile Island, military adoption of small reactors on its bases would provide energy security for the nation’s military forces and may create the inroads necessary to advance the technology broadly and eventually lead to its wide-scale adoption. Clean tech unsustainable Pappagallo ‘12 (Linda, studying a Masters in International Affairs with a concentration in Energy and the Environment in New York [“Rare Earth Metals Limits Clean Technology’s Future,” August 5th, http://www.greenprophet.com/2012/08/rare-earth-metal-peak/])

As the world moves toward greater use of zero-carbon energy sources, the supply of certain key metals needed for such clean-energy technologies may dry up, inflating per unit costs and driving the renewable energy market out of business. We’ve talked about peak phosphorus for food; now consider that rare earth metals like neodymium, which is used in magnets to help drive wind energy turbines, and dysprosium, needed for electric car performance, are becoming less available on the planet. Until the 1980s, the most powerful magnets available were those made from an alloy containing samarium and cobalt. But mining and processing those metals presented challenges: samarium, one of 17 so-called “rare earth elements”, was costly to refine, and most cobalt came from mines in unstable regions of Africa. In 1982, researchers at General Motors developed a magnet based on neodymium, also a rare earth metal but more abundant than samarium, and at the time, it was cheaper. When combined with iron and boron, both readily available elements, it produced very strong magnets. Nowadays wind turbines, one of the fastest-growing sources of emissions-free electricity, rely on neodymium magnets. In the electric drive motor of a hybrid car, neodymium-based magnets are essential. Imagine that one kilogram of neodymium can deliver 80 horsepower, enough to move a 3,000-pound vehicle like the Toyota Prius. When the second rare earth metal dysprosium is added to the alloy, performance at high temperatures is preserved. Soaring Demand for Rare Earth Metals These two metals have exceptional magnetic properties that make them especially well-suited to use in highly efficient, lightweight motors and batteries. However, according to a new MIT study led by a team of researchers at MIT’s Materials Systems Laboratory and co-authored by three researchers from Ford Motor Company, the supply of both elements neodymium and dysprosium — currently imported almost exclusively from China — could face significant shortages in coming years. The study looked at ten so-called “rare earth metals,” a group of 17 elements that have similar properties and which have some uses in high-tech equipment, in many cases in technology related to low-carbon energy. Of those ten, two are likely to face serious supply challenges in the coming years. Neodymium and dysprosium are not the most widely used rare earth elements, but they are the ones expected to see the biggest “pinch” in supplies, due to projected rapid growth in demand for high-performance permanent magnets. The biggest challenge is likely to be for dysprosium: demand could increase by 2,600 percent over the next 25 years, while neodymium demand could increase by as much as 700%. A single large wind turbine (rated at about 3.5 megawatts) typically contains 600 kilograms of rare earth metals. A conventional car uses approximately a half kilogram of rare earth materials while an electric car uses nearly ten times as much. The picture starts to become clear: clean technology requires a lot of rare elements, and relying on clean technology is what the whole world is striving for – including the Middle East and North Africa. Rare earth metals will become the next political obsession. 1AR – REE Shortage
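Analytic on the numbers: the sketch below is a rough, hedged back-of-the-envelope restatement of the Pappagallo figures above. Every input is taken from the card itself; the outputs are simple arithmetic, not additional evidence (Python, purely illustrative).

```python
# Back-of-the-envelope check on the rare earth figures in the Pappagallo
# evidence above. All inputs come from the card; nothing here is new evidence.
turbine_rating_mw = 3.5       # "a single large wind turbine (rated at about 3.5 megawatts)"
ree_per_turbine_kg = 600      # "typically contains 600 kilograms of rare earth metals"
conventional_car_kg = 0.5     # "approximately a half kilogram"
electric_car_kg = 0.5 * 10    # "nearly ten times as much"

print(f"Rare earths per MW of wind capacity: ~{ree_per_turbine_kg / turbine_rating_mw:.0f} kg")

# A 2,600% increase means ending demand is 27x today's level; 700% means 8x.
print(f"Implied dysprosium demand: {1 + 2600 / 100:.0f}x current")
print(f"Implied neodymium demand: {1 + 700 / 100:.0f}x current")
print(f"Electric vs. conventional car rare earth use: {electric_car_kg / conventional_car_kg:.0f}x")
```

Nothing in the sketch changes the card's claims; it only restates them as multipliers a judge can follow quickly.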

Yes shortage Pell ‘11 (Elza Holmstedt, environmental finance writer for oilprice.com [“Rare Earth Shortages - A Ticking Timebomb for Renewables?” December 12th, http://oilprice.com/Alternative-Energy/Renewable-Energy/Rare-Earth-Shortages-A-Ticking-Timebomb-For-Renewables.html])

A global scarcity of rare earth metals over the next five years could be “a ticking timebomb” for renewables and clean-tech, according to consultancy PwC. Hybrid cars, rechargeable batteries and wind turbines are among the sectors which could be affected by a shortage of these metals, which include cobalt, lithium and platinum, says PwC’s report Minerals and metals scarcity in manufacturing: A ‘ticking timebomb’. Rare earth metals are a key element for producing gearless wind turbines using permanent magnet generators, said Daniel Guttmann, London-based director for renewables and clean-tech at PwC. Manufacturers favour gearless turbines increasingly as they are more reliable than geared turbines, which are heavier and have more moving parts. “This is a real headache for the industry and may negatively impact the cost-curve of offshore wind,” he said. Guttmann added that two ways that automotive manufacturers expect to meet tightening emission regulations are electric vehicles and reducing vehicle weight, and rare earth metals are required to construct batteries of the right cost, weight and size. “Scarce supply and the associated price implications could make it more difficult for [manufacturers] to keep pushing emissions down cost effectively,” he said.

That jacks the industry Snyder ‘12 (Jim, reporter for Bloomberg news [“Five Rare Earths Crucial For Clean Energy Seen In Short Supply,” Jan 5th, http://www.bloomberg.com/news/2012-01-05/five-rare-earths-crucial-for-clean-energy-seen-in-short-supply.html])

Limited supplies of five rare-earth minerals pose a threat to increasing use of clean-energy technologies such as wind turbines and solar panels, a U.S. Energy Department report found. The substances -- dysprosium, terbium, europium, neodymium and yttrium -- face potential shortages until 2015, according to the report, which reiterates concerns identified a year ago. The 2011 report studied 16 elements and related materials, including nickel and manganese, which are used to make batteries. The analysis of so-called critical elements began after rare-earth prices jumped following imposition of export restrictions in 2010 by China, the world’s major producer. “Diversifying the global supply chain is key,” David Sandalow, assistant secretary for policy and international affairs at the Energy Department, said today in Washington. “Developing substitutes is also key.” Demand for rare-earth materials has grown more rapidly than that for commodity metals such as steel, he said. Rare earths became a political and legislative issue after China moved to reduce export quotas in July 2010 by 40 percent. The country accounts for 95 percent of rare-earth production, according to the Energy Department. The Chinese government said late last month it was leaving the export limits unchanged, and more production from companies including Greenwood Village, Colorado-based Molycorp Inc. (MCP) may ease some supply concerns. Falling Prices While prices of rare earths fell in the second half of 2011, they remain volatile, leading some companies to search for ways to consider reducing reliance on the minerals, the Energy Department said. The department is also researching how to use rare earths more efficiently, including through recycling, and to increase production in the U.S. The department’s Advanced Research Projects Agency-Energy has given about $31.6 million to 14 research projects to study ways to reduce or eliminate use of rare-earth elements. In Congress, at least a dozen bills have been introduced supporting development of a domestic rare-earth industry, including through U.S. loan guarantees, according to the Energy Department report. None of the measures has passed. “The biggest challenge is a permitting system that has historically taken multiple years to go from exploration to production,” Daniel McGroarty, president of Lonoke, Arkansas-based U.S. Rare Earths Inc. (UREE), said in an interview. The company has claims in Colorado, Montana and Idaho, he said. Worldwide Demand The five minerals most at risk of supply disruptions are used to make wind turbines, solar panels, electric car batteries and energy-efficient lights, according to the report. A 2007 law requiring the phase-out of incandescent light bulbs may increase demand for terbium, europium and yttrium, used in compact fluorescent bulbs that comply with higher efficiency standards, according to the report. “While these materials are generally used in low volumes relative to other resources, the anticipated deployment of clean-energy technologies could substantially increase worldwide demand,” the report said. 2AC DA- Meltdown/ Accidents

No meltdown risk- oceans are infinite heat sinks- that’s 1AC Chandler Russia tech alt cause Wagstaff ’13 (Keith Wagstaff, Journalist @ The Week, “Why floating nuclear power plants might actually be a good idea”, http://news.yahoo.com/why-floating-nuclear-power-plants-might-actually-good-185900472.html, July 8, 2013)

Russia wants nuclear power-generating ships by 2016. It's not as crazy as it sounds At first, the idea of floating nuclear plants seems kind of dangerous, especially after an earthquake and tsunami knocked out the coastal Fukushima Daiichi power plant in Japan in 2011. Russia's biggest shipbuilder, however, plans to have one ready to operate by 2016. Is this a brilliant solution to the country's energy problems or a recipe for floating Chernobyls? Proponents of wind, solar, and other sources of clean energy may not be too happy. Not only is there the question of Russia's less-than-stellar record of nuclear waste disposal, there is also the fact that the floating power plants are being designed to power offshore oil-drilling platforms in the Arctic, according to RT. Still, the barges themselves don't seem to be any more dangerous than Russia's nuclear-powered ice-breaker ships, which use the same KLT-40 naval propulsion reactors. The reactor-equipped barges would hold 69 people, and would have to be towed to their locations. They would also be able to power 200,000 homes, and could be modified to desalinate 240,000 cubic meters of water per day.
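A quick analytic sketch of the capacity claims in the Wagstaff evidence above. The combined output figure is an outside assumption flagged in the comments (roughly 70 MWe for a two-reactor KLT-40-class barge); the 200,000-home and 240,000 cubic-meter figures are the card's own, so treat the result as illustrative only.

```python
# Illustrative check on the Wagstaff card's "200,000 homes" and desalination figures.
# ASSUMPTION (not in the card): a floating plant pairs two KLT-40-class reactors
# at roughly 35 MWe each, i.e. about 70 MWe combined.
assumed_output_mwe = 2 * 35
homes_powered = 200_000            # card figure
desalination_m3_per_day = 240_000  # card figure

avg_watts_per_home = assumed_output_mwe * 1_000_000 / homes_powered
water_per_home_m3 = desalination_m3_per_day / homes_powered

print(f"Average electrical supply per home: ~{avg_watts_per_home:.0f} W")      # ~350 W
print(f"Desalinated water per home served: ~{water_per_home_m3:.1f} m^3/day")  # ~1.2 m^3
```

Under that assumption, the card's figures imply a few hundred watts of average load per home, which is the right order of magnitude for the claim to hang together.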

Turn- the plan safeguards that nuclear expansion Loudermilk ’11 (Micah J. Loudermilk, Research Associate for the Energy & Environmental Security Policy program with the Institute for National Strategic Studies at National Defense University, contracted through ASE Inc, “Small Nuclear Reactors and US Energy Security: Concepts, Capabilities, and Costs”, http://www.ensec.org/index.php?option=com_content&view=article&id=314:small-nuclear-reactors-and-us-energy-security-concepts-capabilities-and-costs&catid=116:content0411&Itemid=375, May 31, 2011)

For years, proponents of nuclear power expansion both in the US and around the world have been proclaiming the onset of a global “nuclear renaissance.” Faced with the dual obstacles of growing worldwide energy demand and a stronger push for clean energy sources, the stage seemed set for a vibrant revival of the industry. Nuclear power’s 25 years of accident-free operation following the 1986 disaster at Chernobyl shed favorable light upon the industry, dulled anti-nuclear arguments, and brought noted environmentalists into the nuclear camp as they began to recognize the role nuclear power could play in promoting clean energy solutions. The March 2011 failure at Japan’s Fukushima Daiichi reactor following a 9.0 magnitude earthquake and subsequent tsunami reignited the debate over nuclear energy and erased much of the goodwill that the nuclear industry had accumulated. Now, at least in the US, where images of Three Mile Island had finally faded, nuclear energy again finds its future in doubt. However, the Fukushima incident notwithstanding, the fundamental calculus driving the renewed push for nuclear power has not changed: in a carbon-conscious world with burgeoning electricity demands, nuclear power represents the only option for substantial and reliable baseload power generation. In recent years, though the “renaissance” has yet to occur, thinking on the nuclear power development front has begun to shift away from traditional gigawatt-plus reactors and towards a new category of small modular reactors (SMRs). Boasting an unprecedented degree of reactor safety and multiple applications in the power-generation process, these reactors could revolutionize the nuclear power industry and contribute to US energy security while also reviving the flagging American nuclear industry. Though they have yet to be built and deployed, years of SMR research, including a two-decade experiment with the Experimental Breeder Reactor-II (EBR-II), a 20 MWe reactor at Argonne-West in Idaho, demonstrate the potential of such technology. Nuclear vs. Nuclear: why go small? As the EBR-II demonstrates, the concept of small reactors is not new, but has resurfaced recently. The United States Navy has successfully utilized small reactors to power many of its vessels for over fifty years, and the earliest power reactors placed on land in the US were mostly similar, though larger, iterations of the Navy’s reactors. Eventually, due to siting and licensing issues affecting economies of scale, reactor outputs were pushed ever higher to between 800 and 1200 MW and new reactors constructed today—such as the ones under construction at the Olkiluoto plant in Finland—approach as much as 1600 MW. In contrast, the International Atomic Energy Agency (IAEA) defines a small reactor as generating under 300 MW of power. On the surface, a move in this direction may appear to be a step backwards in development; however, amid concerns over issues including safety, proliferation risks, and cost, many in the industry are beginning to seriously examine the possible applications of widespread and distributed nuclear power from low-output reactors. 
Promoting safer nuclear power The debate over nuclear energy over the years has consistently revolved around the central question “Is nuclear power safe?” Certainly, the events at Fukushima illustrate that nuclear power can be unsafe; however, no energy source is without its own set of inherent risks on the safety front—as last year’s oil spill in the Gulf of Mexico or the long-term environmental consequences of fossil fuel use demonstrate—and nuclear power’s operating record remains significantly above that of other energy sources. Instead, accepting the role that nuclear energy plays in global electricity generation, especially in a clean-energy environment, a more pointed question to ask is “How can nuclear power be made safer?” Although large reactors possess a stellar safety record throughout their history of operation, SMRs are able to take safety several steps further, in large part due to their small size. Due to simpler designs as a result of advancing technology and a heavy reliance on passive safety features, many problems plaguing larger and earlier generations of reactors are completely averted. Simpler designs mean fewer moving parts, fewer potential points of failure or accident, and fewer systems for operators to monitor. Additionally, small reactor designs incorporate passive safety mechanisms which rely on the laws of nature—such as gravity and convection—as opposed to human-built systems requiring external power to safeguard the reactor in the event of an accident, making the reactor inherently safer. Furthermore, numerous small reactor concepts incorporate other elements—such as liquid sodium—as coolants instead of the pressurized water used in large reactors today. While sodium is a more efficient heat-transfer material, it is also able to cool the reactor core at normal atmospheric pressure, whereas water must be pressurized at 100-150 times normal to prevent it boiling away. As an additional passive safety feature, sodium’s boiling point is 575-750 degrees higher than the reactor’s operating temperature, providing an immense natural heat sink in the event that the reactor overheats. Even should an accident occur, without a pressurized reactor no radiation would be released into the surrounding environment. Even on the most basic level, small reactors provide a greater degree of security by merit of providing lower energy output and using less nuclear fuel. To make up for the loss in individual reactor generating capacity, small reactors are generally designed as scalable units, enabling the siting of multiple units in one location to rival the output capacity of a large nuclear plant. However, with each reactor housed independently and powering its own steam turbine, an accident affecting one reactor would be limited to that individual reactor. Combating proliferation with US leadership Reactor safety itself notwithstanding, many argue that the scattering of small reactors around the world would invariably lead to increased proliferation problems as nuclear technology and know-how disseminates around the world. Lost in the argument is the fact that this stance assumes that US decisions on advancing nuclear technology color the world as a whole. In reality, regardless of the US commitment to or abandonment of nuclear energy technology, many countries (notably China) are blazing ahead with research and construction, with 55 plants currently under construction around the world—though Fukushima may cause a temporary lull. 
Since Three Mile Island, the US share of the global nuclear energy trade has declined precipitously as talent and technology begin to concentrate in countries more committed to nuclear power. On the small reactor front, more than 20 countries are examining the technology and the IAEA estimates that 40-100 small reactors will be in operation by 2030. Without US leadership, new nations seeking to acquire nuclear technology turn to countries other than the US that may not share a deep commitment to reactor safety and nonproliferation objectives. Strong US leadership globally on nonproliferation requires a vibrant American nuclear industry. This will enable the US to set and enforce standards on nuclear agreements, spent fuel reprocessing, and developing reactor technologies. As to the small reactors themselves, the designs achieve a degree of proliferation-resistance unmatched by large reactors. Small enough to be fully buried underground in independent silos, the concrete surrounding the reactor vessels can be layered much thicker than the traditional domes that protect conventional reactors without collapsing. Coupled with these two levels of superior physical protection is the traditional security associated with reactors today. Most small reactors also are factory-sealed with a supply of fuel inside. Instead of refueling reactors onsite, SMRs are returned to the factory, intact, for removal of spent fuel and refueling. By closing off the fuel cycle, proliferation risks associated with the nuclear fuel running the reactors are mitigated and concerns over the widespread distribution of nuclear fuel allayed. Seriously, zero risk Rosner and Goldberg ‘11 (Robert Rosner, Stephen Goldberg, Energy Policy Institute at Chicago, The Harris School of Public Policy Studies, November 2011, SMALL MODULAR REACTORS – KEY TO FUTURE NUCLEAR POWER GENERATION IN THE U.S., https://epic.sites.uchicago.edu/sites/epic.uchicago.edu/files/uploads/EPICSMRWhitePaperFinalcopy.pdf)

While the focus in this paper is on the business case for SMRs, the safety case also is an important element of the case for SMRs. Although SMRs (the designs addressed in this paper) use the same fuel type and the same light water cooling as gigawatt (GW)-scale light water reactors (LWRs), there are significant enhancements in the reactor design that contribute to the upgraded safety case. Appendix A provides a brief overview of the various technology options for SMRs, including the light water SMR designs that are the focus of the present analysis. Light water SMR designs proposed to date incorporate passive safety features that utilize gravity-driven or natural convection systems – rather than engineered, pump-driven systems – to supply backup cooling in unusual circumstances. These passive systems should also minimize the need for prompt operator actions in any upset condition. The designs rely on natural circulation for both normal operations and accident conditions, requiring no primary system pumps. In addition, these SMR designs utilize integral designs, meaning all major primary components are located in a single, high-strength pressure vessel. That feature is expected to result in a much lower susceptibility to certain potential events, such as a loss of coolant accident, because there is no large external primary piping. In addition, light water SMRs would have a much lower level of decay heat than large plants and, therefore, would require less cooling after reactor shutdown. Specifically, in a post-Fukushima lessons-learned environment, the study team believes that the current SMR designs have three inherent advantages over the current class of large operating reactors, namely: 1. These designs mitigate and, potentially, eliminate the need for back-up or emergency electrical generators, relying exclusively on robust battery power to maintain minimal safety operations. 2. They improve seismic capability with the containment and reactor vessels in a pool of water underground; this dampens the effects of any earth movement and greatly enhances the ability of the system to withstand earthquakes. 3. They provide large and robust underground pool storage for the spent fuel, drastically reducing the potential of uncovering of these pools. These and other attributes of SMR designs present a strong safety case. Differences in the design of SMRs will lead to different approaches for how the Nuclear Regulatory Commission (NRC) requirements will be satisfied. Ongoing efforts by the SMR community, the larger nuclear community, and the NRC staff have identified licensing issues unique to SMR designs and are working collaboratively to develop alternative approaches for reconciling these issues within the established NRC regulatory process. These efforts are summarized in Appendix B; a detailed examination of these issues is beyond the scope of this paper. SMRs solve Schimmoller ‘11 (Brian, contributing editor to Power Engineering, “Go Small or Go Home,” Power Engineering 115.7 (Jul 2011): 12.)

Safety: the smaller size of SMRs equates to a smaller inventory of radionuclides in the core, reducing the source term. In other words, a smaller reactor vessel means smaller impacts if an accident should occur. Most SMR designs rely on passive cooling systems to provide decay heat removal, reducing dependence on pumps and electric power to support cooling. Also, because some designs envision below-grade construction, the vulnerability to certain external events (airline crash, other terrorist act) is theoretically reduced. If an accident like Fukushima did happen, proponents contend the underground construction means that any external impacts could be relatively contained. Navy disproves Loudermilk and Andres ’10 (Richard B. Andres is a Senior Fellow at the Institute for National Strategic Studies at National Defense University and a Professor of National Security Strategy at the National War College, Micah J. Loudermilk is a researcher at the Institute for National Strategic Studies at National Defense University, “Small Reactors and the Military's Role in Securing America's Nuclear Industry”, http://sitrep.globalsecurity.org/articles/100823646-small-reactors-and-the-militar.htm, April 23, 2010)

Faced with the dual-obstacles of growing worldwide energy demand and a renewed push for clean energy, the stage is set for a vibrant revival of the nuclear power industry in the United States. During his 2008 campaign, President Barack Obama committed to setting the country on the road to a clean, secure, and independent energy future - and nuclear power can play a vital role in that. With abundant energy resources available and near-zero emission levels, nuclear power offers a domestically-generated, clean, and long-term solution to America's energy dilemma. While countries around the world are building new reactors though, the U.S. nuclear power industry has remained dormant - and even borders on extinction - as no new plants have been approved for construction in the more than three decades following the Three Mile Island accident in 1979. Although Congress and the Executive Branch have passed laws and issued proclamations over the years, little actual progress has been made in the nuclear energy realm. A number of severe obstacles face any potential entrant into the reactor market - namely the Nuclear Regulatory Commission (NRC), which lacks the budget and manpower necessary to seriously address nuclear power expansion. Additionally, public skepticism over the safety of nuclear power plants has impeded serious attempts at new plant construction. However, despite the hurdles facing private industry, the U.S. military is in a position to take a leading role in the advancement of nuclear reactor technology through the integration of small reactors on its domestic bases. While the Obama Administration has pledged $8 billion in federal loan guarantees to the construction of two new reactors in Georgia and an additional $36 billion in new guarantees to the nuclear industry, this comes on top of $18.5 billion budgeted, but unspent, dollars. Despite this aid, it is still improbable that the U.S. will see any new large reactors now or in the foreseeable future as enormous cost, licensing, construction, and regulatory hurdles must be overcome. In recent years though, attention in the nuclear energy sphere has turned in a new direction: small-scale reactors. These next-generation reactors seek to revolutionize the nuclear power industry and carry a host of benefits that both separate them from their larger cousins and provide a legitimate opportunity to successfully reinvigorate the American nuclear industry. When compared to conventional reactors, small reactors have a number of advantages. First, the reactors are both small and often scalable - meaning that sites can be configured to house one to multiple units based on power needs. Although they only exist on paper and the military has yet to embrace a size or design, the companies investing in these technologies are examining a range of possibilities. Hyperion, for example, is working on a so-called "nuclear battery" - a 25 MWe sealed and transportable unit the size of a hot tub. Similarly, Babcock & Wilcox - the company which built many of the Navy's reactors - is seeking licensing for its mPower reactor, which is scalable and produces 125 MWe of power per unit. Other designs, such as Westinghouse's International Reactor Innovative and Secure (IRIS) model, have a generating capacity of up to 335 MWe. Second, large reactors come with enormous price tags - often approaching $10 billion in projected costs. The costs associated with building new reactors are so astronomical that few companies can afford the capital outlay to finance them. 
Additionally, the risks classically associated with the construction of nuclear reactors serve as an additional deterrent to interested utilities. As a result, companies must be willing to accept significant financial risks since ventures could potentially sink them or result in credit downgrades - as evidenced by the fact that 40 of 48 utilities issuing debt to nuclear projects suffered downgrades following the accident at Three Mile Island. All of this adds up to an environment that is not conducive to the sponsorship of new reactor plants. On the other hand, small reactors are able to mostly circumvent the cost hurdles facing large reactors. During the construction of large reactors, utilities face "single-shaft risk" - forced to invest and tie up billions of dollars in a single plant. However, small reactors present the opportunity for utilities to buy and add reactor capacity as needed or in a step-by-step process, as opposed to an all-or-nothing approach. Small reactors are also factory-constructed and shipped, not custom-designed projects, and can be built and installed in half the time - all of which are cost-saving measures. Additionally, despite concerns from critics over the proliferation and safety risks that a cadre of small reactors would potentially pose, the reality is considerably different. On the safety side, the new designs boast a number of features - including passive safety measures and simpler designs, thus reducing the number of systems to monitor and potential for system failure, enhancing the safety of the reactors. Small reactors can often be buried underground, are frequently fully contained and sealed (complete with a supply of fuel inside), can run longer between refueling cycles, and feature on-site waste storage - all of which serve to further insulate and secure the units. Finally, due to their small size, the reactors do not require the vast water resources needed by large reactors and in the event of an emergency, are far easier to isolate, shut off, and cool down if necessary. Notwithstanding all of these benefits, with a difficult regulation environment, anti-nuclear lobbying groups, and skeptical public opinion, the nuclear energy industry faces an uphill - and potentially unwinnable - battle in the quest for new reactors in the United States. Left to its own devices it is unlikely, at best, that private industry will succeed in bringing new reactors to the U.S. on its own. However, a route exists by which small reactors could potentially become a viable energy option: the U.S. military . Since 1948, the U.S. Navy has deployed over 500 reactors and possesses a perfect safety record in managing them. At the same time, grave concern exists over the fact that U.S. military bases are tied to and entirely dependent upon the civilian electric grid - from which they receive 99% of their power. Recently, attention has turned to the fact that the civilian grid, in addition to accidents, is vulnerable to cyber or terrorist attacks. In the event of a deliberate attack on the United States that knocks out all or part of the electric grid, the assets housed at the affected bases would be unavailable and U.S. global military operations potentially jeopardized. The presence of small-scale nuclear reactors on U.S. military bases would enable these facilities to effectively become "islands" - insulating them from the civilian grid and even potentially deterring attacks if the opponent knows that the military network would be unaffected. 
Unlike private industry, the military does not face the same regulatory and congressional hurdles to constructing reactors and would have an easier time in adopting them for use. By integrating small nuclear reactors as power sources for domestic U.S. military bases, three potential energy dilemmas are solved at the same time. First, by incorporating small reactors at its bases, the military addresses its own energy security quandary. The military has recently sought to "island" its bases in the U.S. - protecting them from grid outages, be they accidental or intentional. The Department of Defense has promoted this endeavor through lowering energy consumption on bases and searching for renewable power alternatives, but these measures alone will prove insufficient. Small reactors provide sufficient energy output to power military installations and in some cases surrounding civilian population centers. Secondly, as the reactors become integrated on military facilities, the stigma on the nuclear power industry will ease and inroads will be created for the adoption of small-scale reactors as a viable source of energy. Private industry and the public will see that nuclear reactors can indeed be utilized safely and effectively, resulting in a renewed push toward the expansion of nuclear power. Although many of the same hurdles will still be in place, a shift in public opinion and a stronger effort by utilities, coupled with the demonstrated success of small reactors on military bases, could prove the catalysts necessary for the federal government and the NRC to take more aggressive action. Finally, while new reactors are not likely in the near future, the military's actions will preserve, for a while longer, the badly ailing domestic nuclear energy industry. Nuclear power is here to stay around the globe, and the United States has an opportunity to take a leading role in supplying the world's nuclear energy and reactor technology. With the U.S. nuclear industry dormant for three decades, much of the attention, technology, and talent have concentrated overseas in countries with a strong interest in nuclear technology. Without the United States as a player in the nuclear energy market, it has little say over safety regulations of reactors or the potential risks of proliferation from the expansion of nuclear energy. If the current trend continues, the U.S. will reach a point where it is forced to import nuclear technology and reactors from other countries. Action by the military to install reactors on domestic bases will both guarantee the survival of the American nuclear industry in the short term, and work to solidify support for it in the long run. Ultimately, between small-scale nuclear reactors and the U.S. military, the capability exists to revitalize America's sleeping nuclear industry and promote energy security and clean energy production. The reactors offer the ability to power domestic military bases, small towns, and other remote locations detached from the energy grid. Furthermore, reactor sites can house multiple units, allowing for greater energy production - rivaling even large reactors. Small reactors offer numerous benefits to the United States and a path initiated by the military presents a realistic route by which their adoption can be achieved. No impact WNA ’11 [World Nuclear Association, “Safety of Nuclear Power Reactors”, (updated December 2011), http://www.world-nuclear.org/info/inf06.html]

From the outset, there has been a strong awareness of the potential hazard of both nuclear criticality and release of radioactive materials from generating electricity with nuclear power. As in other industries, the design and operation of nuclear power plants aims to minimise the likelihood of accidents, and avoid major human consequences when they occur. There have been three major reactor accidents in the history of civil nuclear power - Three Mile Island, Chernobyl and Fukushima. One was contained without harm to anyone, the next involved an intense fire without provision for containment, and the third severely tested the containment, allowing some release of radioactivity. These are the only major accidents to have occurred in over 14,500 cumulative reactor-years of commercial nuclear power operation in 32 countries. The risks from western nuclear power plants, in terms of the consequences of an accident or terrorist attack, are minimal compared with other commonly accepted risks. Nuclear power plants are very robust. 2AC DA- Waste Russia tech alt cause Wagstaff ’13 (Keith Wagstaff, Journalist @ The Week, “Why floating nuclear power plants might actually be a good idea”, http://news.yahoo.com/why-floating-nuclear-power-plants-might-actually-good-185900472.html, July 8, 2013)

Russia wants nuclear power-generating ships by 2016. It's not as crazy as it sounds At first, the idea of floating nuclear plants seems kind of dangerous, especially after an earthquake and tsunami knocked out the coastal Fukushima Daiichi power plant in Japan in 2011. Russia's biggest shipbuilder, however, plans to have one ready to operate by 2016. Is this a brilliant solution to the country's energy problems or a recipe for floating Chernobyls? Proponents of wind, solar, and other sources of clean energy may not be too happy. Not only is there the question of Russia's less-than-stellar record of nuclear waste disposal, there is also the fact that the floating power plants are being designed to power offshore oil-drilling platforms in the Arctic, according to RT. Still, the barges themselves don't seem to be any more dangerous than Russia's nuclear-powered ice-breaker ships, which use the same KLT-40 naval propulsion reactors. The reactor-equipped barges would hold 69 people, and would have to be towed to their locations. They would also be able to power 200,000 homes, and could be modified to desalinate 240,000 cubic meters of water per day.

Turn- SMRs solve waste disposal Szondy ‘12 (David, writes for Charged and iQ magazine, award-winning journalist [“Feature: Small modular nuclear reactors - the future of energy?” February 16th, http://www.gizmag.com/small-modular-nuclear-reactors/20860/])

SMRs can help with proliferation, nuclear waste and fuel supply issues because, while some modular reactors are based on conventional pressurized water reactors and burn enriched uranium, others use less conventional fuels. Some, for example, can generate power from what is now regarded as "waste", burning depleted uranium and plutonium left over from conventional reactors. Depleted uranium is basically U-238 from which the fissile U-235 has been consumed. It's also much more abundant in nature than U-235, which has the potential of providing the world with energy for thousands of years. Other reactor designs don't even use uranium. Instead, they use thorium. This fuel is also incredibly abundant, is easy to process for use as fuel and has the added bonus of being utterly useless for making weapons, so it can provide power even to areas where security concerns have been raised. 1AR- SMRs Solve Waste More evidence Spencer and Loris ‘11 (Jack Spencer is Research Fellow in Nuclear Energy in the Thomas A. Roe Institute for Economic Policy Studies, and Nicolas D. Loris is a Research Associate in the Roe Institute, at The Heritage Foundation, “A Big Future for Small Nuclear Reactors?”, http://www.heritage.org/research/reports/2011/02/a-big-future-for-small-nuclear-reactors, February 2, 2011)

The lack of a sustainable nuclear waste management solution is perhaps the greatest obstacle to a broad expansion of U.S. nuclear power. The federal government has failed to meet its obligations under the 1982 Nuclear Waste Policy Act, as amended, to begin collecting nuclear waste for disposal in Yucca Mountain. The Obama Administration’s attempts to shutter the existing program to put waste in Yucca Mountain without having a backup plan have worsened the situation. This outcome was predictable because the current program is based on the flawed premise that the federal government is the appropriate entity to manage nuclear waste. Under the current system, waste producers are able to largely ignore waste management because the federal government is responsible. The key to a sustainable waste management policy is to directly connect financial responsibility for waste management to waste production. This will increase demand for more waste-efficient reactor technologies and drive innovation on waste-management technologies, such as reprocessing. Because SMRs consume fuel and produce waste differently than LWRs, they could contribute greatly to an economically efficient and sustainable nuclear waste management strategy. SMRs solve James and Anniek Hansen ‘8 (James Hansen, climate scientist and director of the NASA Goddard Institute for Space Studies, and Anniek Hansen, http://www.pdfdownload.org/pdf2html/pdf2html.php?url=http%3A%2F%2Fwww.columbia.edu%2F~jeh1%2Fmailings%2F20081229_DearMichelleAndBarack.pdf&images=yes, December 29, 2008)

(3) Urgent R&D on 4th generation nuclear power with international cooperation. Energy efficiency, renewable energies, and a "smart grid" deserve first priority in our effort to reduce carbon emissions. With a rising carbon price, renewable energy can perhaps handle all of our needs. However, most experts believe that making such presumption probably would leave us in 25 years with still a large contingent of coal-fired power plants worldwide. Such a result would be disastrous for the planet, humanity, and nature. 4th generation nuclear power (4th GNP) and coal-fired power plants with carbon capture and sequestration (CCS) at present are the best candidates to provide large baseload nearly carbon-free power (in case renewable energies cannot do the entire job). Predictable criticism of 4th GNP (and CCS) is: "it cannot be ready before 2030." However, the time needed could be much abbreviated with a Presidential initiative and Congressional support. Moreover, improved (3rd generation) light water reactors are available for near-term needs. In our opinion, 4th GNP deserves your strong support, because it has the potential to help solve past problems with nuclear power: nuclear waste, the need to mine for nuclear fuel, and release of radioactive material. Potential proliferation of nuclear material will always demand vigilance, but that will be true in any case, and our safety is best secured if the United States is involved in the technologies and helps define standards. Existing nuclear reactors use less than 1% of the energy in uranium, leaving more than 99% in long-lived nuclear waste. 4th GNP can "burn" that waste, leaving a small volume of waste with a half-life of decades rather than thousands of years. Thus 4th GNP could help solve the nuclear waste problem, which must be dealt with in any case. Because of this, a portion of the $25B that has been collected from utilities to deal with nuclear waste justifiably could be used to develop 4th generation reactors. The principal issue with nuclear power, and other energy sources, is cost. Thus an R&D objective must be a modularized reactor design that is cost competitive with coal. Without such capability, it may be difficult to wean China and India from coal. But all developing countries have great incentives for clean energy and stable climate, and they will welcome technical cooperation aimed at rapid development of a reproducible safe nuclear reactor. 2AC DA- Terror/ Prolif Russia tech alt cause Wagstaff ’13 (Keith Wagstaff, Journalist @ The Week, “Why floating nuclear power plants might actually be a good idea”, http://news.yahoo.com/why-floating-nuclear-power-plants-might-actually-good-185900472.html, July 8, 2013)

Russia wants nuclear power-generating ships by 2016. It's not as crazy as it sounds At first, the idea of floating nuclear plants seems kind of dangerous, especially after an earthquake and tsunami knocked out the coastal Fukushima Daiichi power plant in Japan in 2011. Russia's biggest shipbuilder, however, plans to have one ready to operate by 2016. Is this a brilliant solution to the country's energy problems or a recipe for floating Chernobyls? Proponents of wind, solar, and other sources of clean energy may not be too happy. Not only is there the question of Russia's less-than-stellar record of nuclear waste disposal, there is also the fact that the floating power plants are being designed to power offshore oil-drilling platforms in the Arctic, according to RT. Still, the barges themselves don't seem to be any more dangerous than Russia's nuclear-powered ice-breaker ships, which use the same KLT-40 naval propulsion reactors. The reactor-equipped barges would hold 69 people, and would have to be towed to their locations. They would also be able to power 200,000 homes, and could be modified to desalinate 240,000 cubic meters of water per day.

Turn- the plan safeguards that nuclear expansion Loudermilk ’11 (Micah J. Loudermilk, Research Associate for the Energy & Environmental Security Policy program with the Institute for National Strategic Studies at National Defense University, contracted through ASE Inc, “Small Nuclear Reactors and US Energy Security: Concepts, Capabilities, and Costs”, http://www.ensec.org/index.php?option=com_content&view=article&id=314:small-nuclear-reactors-and-us-energy-security-concepts-capabilities-and-costs&catid=116:content0411&Itemid=375, May 31, 2011)

For years, proponents of nuclear power expansion both in the US and around the world have been proclaiming the onset of a global “nuclear renaissance.” Faced with the dual obstacles of growing worldwide energy demand and a stronger push for clean energy sources, the stage seemed set for a vibrant revival of the industry. Nuclear power’s 25 years of accident-free operation following the 1986 disaster at Chernobyl shed favorable light upon the industry, dulled anti-nuclear arguments, and brought noted environmentalists into the nuclear camp as they began to recognize the role nuclear power could play in promoting clean energy solutions. The March 2011 failure at Japan’s Fukushima Daiichi reactor following a 9.0 magnitude earthquake and subsequent tsunami reignited the debate over nuclear energy and erased much of the goodwill that the nuclear industry had accumulated. Now, at least in the US, where images of Three Mile Island had finally faded, nuclear energy again finds its future in doubt. However, the Fukushima incident notwithstanding, the fundamental calculus driving the renewed push for nuclear power has not changed: in a carbon-conscious world with burgeoning electricity demands, nuclear power represents the only option for substantial and reliable baseload power generation. In recent years, though the “renaissance” has yet to occur, thinking on the nuclear power development front has begun to shift away from traditional gigawatt-plus reactors and towards a new category of small modular reactors (SMRs). Boasting an unprecedented degree of reactor safety and multiple applications in the power-generation process, these reactors could revolutionize the nuclear power industry and contribute to US energy security while also reviving the flagging American nuclear industry. Though they have yet to be built and deployed, years of SMR research, including a two-decade experiment with the Experimental Breeder Reactor-II (EBR-II), a 20 MWe reactor at Argonne-West in Idaho, demonstrate the potential of such technology. Nuclear vs. Nuclear: why go small? As the EBR-II demonstrates, the concept of small reactors is not new, but has resurfaced recently. The United States Navy has successfully utilized small reactors to power many of its vessels for over fifty years, and the earliest power reactors placed on land in the US were mostly similar, though larger, iterations of the Navy’s reactors. Eventually, due to siting and licensing issues affecting economies of scale, reactor outputs were pushed ever higher to between 800 and 1200 MW and new reactors constructed today—such as the ones under construction at the Olkiluoto plant in Finland—approach as much as 1600 MW. In contrast, the International Atomic Energy Agency (IAEA) defines a small reactor as generating under 300 MW of power. On the surface, a move in this direction may appear to be a step backwards in development; however, amid concerns over issues including safety, proliferation risks, and cost, many in the industry are beginning to seriously examine the possible applications of widespread and distributed nuclear power from low-output reactors. 
Promoting safer nuclear power The debate over nuclear energy over the years has consistently revolved around the central question “Is nuclear power safe?” Certainly, the events at Fukushima illustrate that nuclear power can be unsafe; however, no energy source is without its own set of inherent risks on the safety front—as last year’s oil spill in the Gulf of Mexico or the long-term environmental consequences of fossil fuel use demonstrate—and nuclear power’s operating record remains significantly above that of other energy sources. Instead, accepting the role that nuclear energy plays in global electricity generation, especially in a clean-energy environment, a more pointed question to ask is “How can nuclear power be made safer?” Although large reactors possess a stellar safety record throughout their history of operation, SMRs are able to take safety several steps further, in large part due to their small size. Due to simpler designs as a result of advancing technology and a heavy reliance on passive safety features, many problems plaguing larger and earlier generations of reactors are completely averted. Simpler designs mean fewer moving parts, fewer potential points of failure or accident, and fewer systems for operators to monitor. Additionally, small reactor designs incorporate passive safety mechanisms which rely on the laws of nature—such as gravity and convection—as opposed to human-built systems requiring external power to safeguard the reactor in the event of an accident, making the reactor inherently safer. Furthermore, numerous small reactor concepts incorporate other elements—such as liquid sodium—as coolants instead of the pressurized water used in large reactors today. While sodium is a more efficient heat-transfer material, it is also able to cool the reactor core at normal atmospheric pressure, whereas water must be pressurized at 100-150 times normal to prevent it boiling away. As an additional passive safety feature, sodium’s boiling point is 575-750 degrees higher than the reactor’s operating temperature, providing an immense natural heat sink in the event that the reactor overheats. Even should an accident occur, without a pressurized reactor no radiation would be released into the surrounding environment. Even on the most basic level, small reactors provide a greater degree of security by merit of providing lower energy output and using less nuclear fuel. To make up for the loss in individual reactor generating capacity, small reactors are generally designed as scalable units, enabling the siting of multiple units in one location to rival the output capacity of a large nuclear plant. However, with each reactor housed independently and powering its own steam turbine, an accident affecting one reactor would be limited to that individual reactor. Combating proliferation with US leadership Reactor safety itself notwithstanding, many argue that the scattering of small reactors around the world would invariably lead to increased proliferation problems as nuclear technology and know-how disseminates around the world. Lost in the argument is the fact that this stance assumes that US decisions on advancing nuclear technology color the world as a whole. In reality, regardless of the US commitment to or abandonment of nuclear energy technology, many countries (notably China) are blazing ahead with research and construction, with 55 plants currently under construction around the world—though Fukushima may cause a temporary lull. 
Since Three Mile Island, the US share of the global nuclear energy trade has declined precipitously as talent and technology begin to concentrate in countries more committed to nuclear power. On the small reactor front, more than 20 countries are examining the technology and the IAEA estimates that 40-100 small reactors will be in operation by 2030. Without US leadership, new nations seeking to acquire nuclear technology turn to countries other than the US that may not share a deep commitment to reactor safety and nonproliferation objectives. Strong US leadership globally on nonproliferation requires a vibrant American nuclear industry. This will enable the US to set and enforce standards on nuclear agreements, spent fuel reprocessing, and developing reactor technologies. As to the small reactors themselves, the designs achieve a degree of proliferation-resistance unmatched by large reactors. Small enough to be fully buried underground in independent silos, the concrete surrounding the reactor vessels can be layered much thicker than the traditional domes that protect conventional reactors without collapsing. Coupled with these two levels of superior physical protection is the traditional security associated with reactors today. Most small reactors also are factory-sealed with a supply of fuel inside. Instead of refueling reactors onsite, SMRs are returned to the factory, intact, for removal of spent fuel and refueling. By closing off the fuel cycle, proliferation risks associated with the nuclear fuel running the reactors are mitigated and concerns over the widespread distribution of nuclear fuel allayed. 2AC DA- REM Alt cause – clean tech Pappagallo ‘12 (Linda, studying a Masters in International Affairs with a concentration in Energy and the Environment in New York [“Rare Earth Metals Limits Clean Technology’s Future”, http://www.greenprophet.com/2012/08/rare-earth-metal-peak/, August 5, 2012])

As the world moves toward greater use of zero- carbon energy sources, the supply of certain key metals needed for such clean-energy technologies may dry up, inflating per unit costs and driving the renewable energy market out of business. We’ve talked about peak phosphorus for food; now consider that rare earth metals like neodymium which are used in magnets to help drive wind energy turbines, and dysprosium needed for electric car performance are becoming less available on the planet. Until the 1980s, the most powerful magnets available were those made from an alloy containing samarium and cobalt. But mining and processing those metals presented challenges: samarium, one of 17 so-called “rare earth elements”, was costly to refine, and most cobalt came from mines in unstable regions of Africa. In 1982, researchers at General Motors developed a magnet based on neodymium, also a rare earth metal but more abundant than samarium, and at the time, it was cheaper. When combined with iron and boron, both readily available elements, it produced very strong magnets. Nowadays wind turbines, one of the fastest-growing sources of emissions-free electricity, rely on neodymium magnets. In the electric drive motor of a hybrid car neodymium-based magnets are essential. Imagine that one kilogram of neodymium can deliver 80 horsepower, enough to move a 3,000-pound vehicle like the Toyota Prius. When the second rare earth metal dysprosium is added to the alloy, performance at high temperatures is preserved. Soaring Demand for Rare Earth Metals These two metals have exceptional magnetic properties that make them especially well-suited to use in highly efficient, lightweight motors and batteries. However, according to a new MIT study led by a team of researchers at MIT’s Materials Systems Laboratory and co-authored by three researchers from Ford Motor Company, the supply of both elements neodymium and dysprosium — currently imported almost exclusively from China — could face significant shortages in coming years. The study looked at ten so- called “rare earth metals,” a group of 17 elements that have similar properties and which have some uses in high-tech equipment, in many cases in technology related to low-carbon energy. Of those ten, two are likely to face serious supply challenges in the coming years. Neodymium and dysprosium are not the most widely used rare earth elements, but they are the ones expected to see the biggest “pinch” in supplies, due to projected rapid growth in demand for high-performance permanent magnets. The biggest challenge is likely to be for dysprosium: Demand could increase by 2,600 percent over the next 25 years while Neodymium demand could increase by as much as 700%. A single large wind turbine (rated at about 3.5 megawatts) typically contains 600 kilograms of rare earth metals. A conventional car uses approximately a half kilogram of rare earth materials while an electric car uses nearly ten times as much. The picture starts to become clear, clean technology requires a lot of rare elements, and relying on clean technology is what the whole world is striving for – including the Middle East and North Africa. Rare earth metals will become the next political obsession. Their authors assume large reactors not SMRs Zyga 11 (Science Reporter for PhysOrg, quoting analysis by Abbott, Prof. of Electrical Engineering, 2011)
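(Analyst note on the Pappagallo demand figures above: the card quantifies growth as total percentage increases over 25 years. The short Python sketch below is my own illustration, not part of the evidence, converting those projections into the compound annual growth rates they imply.)

# Hypothetical sketch (not from the evidence): implied annual growth rates
# behind the 25-year demand projections cited above.

def implied_cagr(percent_increase, years):
    """Annual growth rate implied by a total percentage increase over a period."""
    growth_factor = 1 + percent_increase / 100
    return growth_factor ** (1 / years) - 1

for metal, pct in [("dysprosium", 2600), ("neodymium", 700)]:
    print(f"{metal}: +{pct}% over 25 years is roughly {implied_cagr(pct, 25):.1%} per year")
# dysprosium: +2600% over 25 years is roughly 14.1% per year
# neodymium: +700% over 25 years is roughly 8.7% per year

Read this way, even the steepest projection is on the order of a 14% annual demand increase, which is useful when weighing the link's timeframe against the Whitehouse self-correction evidence below.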

[5/11/11, Lisa, BA in rhetoric from University of Illinois at Urbana-Champaign, known science reporter for PhysOrg, Derek Abbott, Professor of Electrical and Electronic Engineering at the University of Adelaide in Australia, “Why nuclear power will never supply the world’s energy needs,” PhysOrg, http://phys.org/news/2011-05-nuclear-power-world-energy.html] Land and location: One nuclear reactor plant requires about 20.5 km2 (7.9 mi2) of land to accommodate the nuclear power station itself, its exclusion zone, its enrichment plant, ore processing, and supporting infrastructure. Secondly, nuclear reactors need to be located near a massive body of coolant water, but away from dense population zones and natural disaster zones. Simply finding 15,000 locations on Earth that fulfill these requirements is extremely challenging. Shortage is self-correcting Whitehouse ‘11 (Tom, Chairman of the London Environmental Investment Forum [“Critical Metals and Cleantech – Part 1,” http://blog.cleantechies.com/2011/10/11/critical-metals-and- cleantech-part-1/, October 11, 2011)

Is China and its rare earth supply restrictions actually doing cleantech a favor? On the one hand, limiting the supply of these metals, which are used in the manufacture of many clean technologies, clearly isn’t great for the growth of the low carbon economy. Swiss-based VC firm Mountain Cleantech says it’s a troubling area for a number of prominent clean energy technologies such as wind turbines, electric vehicles, fuel cells and energy efficient lighting and that, in the short term at least, cleantech could suffer from a supply risk. On the other, China’s supply limitations are driving efforts in other parts of the world to develop solutions to recover and recycle these metals from waste streams, rather than be at the mercy of virgin supply. This will not only reduce waste but will have much less environmental impact than mining operations. Rare earths have been getting most of the attention lately, but it’s worth noting that generally strong prices across the metals markets as a whole, and increasing efficiencies in recovery processes, mean many other metals also have compelling recovery or ‘urban’ mining financials. Old mobile phones are one example. According to Mountain Cleantech, in 1 billion mobile phones (in 2009, 1.4 billion were sold worldwide), you’ll find 15,000 tonnes of copper, 3,000 tonnes of aluminum, 3,000 tonnes of iron, 2,000 tonnes of nickel, 1,000 tonnes of tin, 500 tonnes of silver, 100 tonnes of gold and 20 tonnes of other metals like palladium, tantalum and indium. And when you look at the numbers from Umicore, which compared the amount of silver that’s extracted from one tonne of ore from a primary mine (5 grams), with that from a tonne of mobile phones (300 – 350 grams), you get an indication of just how valuable the market for recovery and recycling is from just one waste stream. But it’s the opportunities for recovering metals in the ‘critical’ category – tantalum and indium for example and, yes, rare earths – which are getting investors and innovators particularly excited. Because these metals are found in such tiny quantities, recovering them economically is not without its challenges though (an issue I’ll explore in more detail in the next parts of this blog). So, how does a metal make it onto the ‘critical’ list? Different analysts have different criteria. Oakdene Hollins is a UK consultancy focused on the low carbon sector which has produced several reports analysing the metals markets. It says that while there’s no precise definition, most studies don’t just base criticality on physical scarcity but also look at factors such as political risk, concentration of production and ‘importance’ of the materials. Taking these factors into account, Oakdene formed the following consensus on the current critical nature of certain metals: Highly critical: Beryllium, gallium, indium, magnesium, platinum group, rare earths, tin, tungsten Moderately critical: antimony, cobalt, germanium, manganese, nickel, niobium, rhenium, tantalum, tellurium, zinc Near critical: bismuth, chromium, fluorspar, lead, lithium, silicon / silica sand, silver, titanium, zirconium Not critical: aluminium, boron / borates, cadmium, copper, molybdenum, selenium, vanadium The majority of those in the ‘highly’ and ‘moderately’ critical categories are low volume specialty metals which are used for hi-tech applications such as smart phones, tablets, flat panel displays, semiconductors, photovoltaics, magnets, specialist alloys and catalysts. 
The notable exceptions are nickel, magnesium, tin and zinc, which have bulk uses in alloys, batteries and tooling. Oakdene then conducted a review of supply-demand forecasts for 2015 to 2020. The rare earths neodymium and dysprosium, as well as tellurium, indium, gallium, the platinum group, tantalum and graphite, were identified as those having increasing demand for specific applications and where there are also limitations on increasing supply. This projected supply deficit provides positive support for prices and big opportunities for recovery. But we should remember that it’s not just recovery measures that offer solutions to the critical metals crisis. Considerable effort is also going into finding alternatives to these materials, and success here could mean they’ll lose some of their main markets. This will affect the supply-demand imbalance and negatively impact prices. 2AC DA- IAEA Overstretch SMRs solve IAEA overstretch Scherer ‘10 (C. Scherer, Los Alamos Natl Laboratory, et al.: R. Bean, Idaho Natl Laboratory, M. Mullen, Los Alamos Natl Laboratory, and G. Pshakin, State Scientific Centre of the Russian Federation-Institute for Physics and Power Engineering, http://www.iaea.org/OurWork/SV/Safeguards/Symposium/2010/Documents/PapersRepository/164.pdf, 2010)

Abstract: Incorporating safeguards early in the design process can enhance the safeguardability of a nuclear facility by influencing and becoming part of the intrinsic design. This concept is transformational because historically safeguarding nuclear facilities was often considered after completion of the facility design or even construction of the facility. Safeguards concepts and applications were therefore retrofit to the design. By designing safeguards into the facility practical solutions from best practices and lessons learned can be implemented , thus improving the safeguardability of the facility and making safeguards more efficient and cost effective for both the plant operator and international inspectors. A methodology for integrating Safeguards-by-Design early into the facility design process is proposed. The architecture field uses the following design phases: Planning, Schematic, Design Development, and Construction Documents. During the Planning phase defining functions and listing requirements for the facility is essential; at this time safeguards requirements should be documented and become part of the facility functions and requirements list. The schematic phase is the beginning of early design drawings; the design addresses the functions and requirements needs, space utilization begins and the facility design begins taking on volume and shape. Early planning allows evaluation and incorporation of improved solutions from best practices and lessons learned. The safeguardability of the facility could become part of the intrinsic design. It is during the planning and into the schematic design phase s that the customer or facility operator has the most influence on the design. Considering International Atomic Energy Agency (IAEA) safeguards verification requirements at this design stage, allows for inclusion of concepts that maximize efficiency and minimize inspection impact. Design changes during the Design Development, and later design phases tend to be very costly; at later stages of the design process, design changes are retrofit into the existing space envelope and are generally much less efficient and economical. Planning for safeguards early in the design process therefore has benefit to both the facility operator and IAEA inspectors . With the emerging nuclear renaissance IAEA inspections will need to be more efficient and economical. Safeguards-by-Design offers a process to design in this efficiency for both the facility operator and the IAEA inspectors. Safeguards experts from the United States and the Russian Federation are cooperating to jointly develop and demonstrate this safeguards-by- design concept for advanced nuclear energy systems. Tradeoffs now and fails Findlay ’12 (Trevor Findlay, Senior Fellow at Centre for International Governance Innovation and Director of the Canadian Centre for Treaty Compliance. Professor at the Norman Paterson School of International Affairs, 2012, UNLEASHING THE NUCLEAR WATCHDOG: strengthening and reform of the iaea, http://www.cigionline.org/sites/default/files/IAEA_final_0.pdf)

In spite of this well-deserved reputation and its apparently starry prospects, the Agency remains relatively undernourished, its powers significantly hedged and its technical achievements often overshadowed by political controversy. This evidently prized body has, for instance, been largely unable to break free of the zero real growth (ZRG) budgeting imposed on all UN agencies from the mid-1980s onwards (ZRG means no growth beyond inflation). As a result, the Agency has not been provided with the latest technologies and adequate human resources. Moreover, despite considerable strengthening, its enhanced nuclear safeguards system is only partly mandatory. Notwithstanding the increasing influence of its recommended standards and guides, its safety and security powers remain entirely non-binding. Although the Agency’s long-term response to the Fukushima disaster remains to be seen, its role in nuclear safety and security continues to be hamstrung by states’ sensitivity about sovereignty and secrecy, and by its own lack of capacity. Many states have shown a surprising degree of ambiguity towards supporting the organization both politically and financially. The politicization of its governing bodies has increased alarmingly in recent years, crimping its potential. Most alarming of all, the Agency has failed, by its own means, to detect serious non-compliance by Iraq, Iran and Libya with their safeguards agreements and, by extension, with the NPT (although it was the first to detect North Korea’s non-compliance). Iran’s non- compliance had gone undetected for over two decades. Most recently, the Agency missed Syria’s attempt to construct a nuclear reactor with North Korean assistance. Despite significant improvements to the nuclear safeguards regime, there is substantial room for improvement, especially in detecting undeclared materials, facilities and activities.6 Case 2AC- Tech Now Tech now- economically feasible Wellock ’13 (Thomas Wellock, NRC Historian, “Floating Nuclear Power Plants: A Technical Solution to a Land-based Problem (Part I)”, http://public-blog.nrc- gateway.gov/2013/09/24/floating-nuclear-power-plants-a-technical-solution-to-a-land-based- problem-part-i-2/, September 24, 2013)

In July, Russia announced it planned to build the world’s first floating nuclear power plant to supply 70 megawatts of electricity to isolated communities. If successful, the plan would bring to fruition an idea hatched in the United States nearly a half-century ago. It’s not widely known, but in 1971, Offshore Power Systems (OPS), a joint venture by Westinghouse Corporation and Tenneco, proposed manufacturing identical 1,200 MW plants at a $200 million facility near Jacksonville, Fla. Placed on huge concrete barges, the plants would be towed to a string of breakwater-protected moorings off the East Coast. Using a generic manufacturing license and mass production techniques, Westinghouse President John Simpson predicted this approach could cut in half typical plant construction time and make floating reactors economical. While Simpson touted their economic advantages, utilities wanted floating power plants to overcome mounting opposition to land-based reactors . Site selection had ground to a near halt in the Northeast and the West Coast due to public opposition, seismic worries and environmental concerns. In July 1971, a federal court complicated siting further by forcing the NRC’s predecessor, the Atomic Energy Commission, to develop thorough Environmental Impact Statements for nuclear plant projects. In fact, West Coast utilities met defeat so often on proposed coastal power plant sites they turned inland in an ill-fated move to find acceptable arid locations. By heading out to sea, Northeast utilities hoped they could overcome their political problems. New Jersey’s Public Service Electric and Gas Corporation (PSEG) responded enthusiastically and selected the first site, the Atlantic Generating Station, about 10 miles north of Atlantic City at the mouth of Great Bay. A PSEG spokesman said floating reactors were “the only answer to the problem of siting nuclear power plants.” Other reactor vendors, including General Electric, also studied the possibility of floating reactors. A supportive regulatory response heartened OPS officials . The AEC’s Advisory Committee for Reactor Safeguards issued a fairly positive assessment of floating reactors in late 1972. “We think this is a very favorable letter,” a Westinghouse official said of the committee response, “and we don’t see any delay whatsoever.” Westinghouse moved forward with its grand plan and built its manufacturing facility near Jacksonville. The facility included a gigantic crane that was 38 stories high — the world’s tallest. It appeared to be smooth sailing ahead for floating plants with a RAND Corporation study that touted their superior ability to withstand earthquakes and other natural hazards. Spoiler alert: RAND selected for floating power plants one of the most ill-conceived yet prescient of acronyms, FLOPPS. 2AC- AT: Pfeffer Says Hydrogen The plan solves- Pfeffer says new nuclear tech comes with desal tech Reactors make hydrogen feasible and economical Science 2.0 ’12 (quoting Dr. Ibrahim Khamis of the International Atomic Energy Agency (IAEA), 3/26/12, One Day, You May Thank Nuclear Power For The Hydrogen Economy, www.science20.com/news_articles/one_day_you_may_thank_nuclear_power_hydrogen_econom y-88334

The hydrogen economy has been ready to start for decades and could begin commercial production of hydrogen in this decade but, says Dr. Ibrahim Khamis of the International Atomic Energy Agency (IAEA) in Vienna, Austria, it will take heat from existing nuclear plants to make hydrogen economical . Khamis said scientists and economists at IAEA and elsewhere are working intensively to determine how current nuclear power reactors — 435 are operational worldwide — and future nuclear power reactors could be enlisted in hydrogen production. Most hydrogen production at present comes from natural gas or coal and results in releases of the greenhouse gas carbon dioxide. On a much smaller scale, some production comes from a cleaner process called electrolysis, in which an electric current flowing through water splits the H2O molecules into hydrogen and oxygen. This process, termed electrolysis, is more efficient and less expensive if water is first heated to form steam, with the electric current passed through the steam. "There is rapidly growing interest around the world in hydrogen production using nuclear power plants as heat sources," Khamis said. "Hydrogen production using nuclear energy could reduce dependence on oil for fueling motor vehicles and the use of coal for generating electricity. In doing so, hydrogen could have a beneficial impact on global warming, since burning hydrogen releases only water vapor and no carbon dioxide, the main greenhouse gas. There is a dramatic reduction in pollution." Khamis said that nuclear power plants are ideal for hydrogen production because they already produce the heat for changing water into steam and the electricity for breaking the steam down into hydrogen and oxygen. Experts envision the current generation of nuclear power plants using a low-temperature electrolysis which can take advantage of low electricity prices during the plant's off- peak hours to produce hydrogen. Future plants, designed specifically for hydrogen production, would use a more efficient high-temperature electrolysis process or be coupled to thermochemical processes, which are currently under research and development. "Nuclear hydrogen from electrolysis of water or steam is a reality now, yet the economics need to be improved," said Khamis. He noted that some countries are considering construction of new nuclear plants coupled with high-temperature steam electrolysis (HTSE) stations that would allow them to generate hydrogen gas on a large scale in anticipation of growing economic opportunities. Tech is viable—just need hydrogen fuel Squatriglia ’11 (Chuck Squatriglia, Wired, 4/22/11, Discovery Could Make Fuel Cells Much Cheaper, www.wired.com/autopia/2011/04/discovery-makes-fuel-cells-orders-of-magnitude- cheaper/)

One of the biggest issues with hydrogen fuel cells, aside from the lack of fueling infrastructure, is the high cost of the technology. Fuel cells use a lot of platinum, which is frightfully expensive and one reason we’ll pay $50,000 or so for the hydrogen cars automakers say we’ll see in 2015. That might soon change. Researchers at Los Alamos National Laboratory have developed a platinum-free catalyst in the cathode of a hydrogen fuel cell that uses carbon, iron and cobalt. That could make the catalysts “two to three orders of magnitude cheaper,” the lab says, thereby significantly reducing the cost of fuel cells. Although the discovery means we could see hydrogen fuel cells in a wide variety of applications, it could have the biggest implications for automobiles. Despite the auto industry’s focus on hybrids, plug-in hybrids and battery-electric vehicles — driven in part by the Obama administration’s love of cars with cords — several automakers remain convinced hydrogen fuel cells are the best alternative to internal combustion. Hydrogen offers the benefits of battery-electric vehicles — namely zero tailpipe emissions — without the drawbacks of short range and long recharge times. Hydrogen fuel cell vehicles are electric vehicles; they use a fuel cell instead of a battery to provide juice. You can fill a car with hydrogen in minutes, it’ll go about 250 miles or so and the technology is easily adapted to everything from forklifts to automobiles to buses. Toyota, Mercedes-Benz and Honda are among the automakers promising to deliver hydrogen fuel cell vehicles in 2015. Toyota has said it has cut the cost of fuel cell vehicles more than 90 percent by using less platinum — which currently goes for around $1,800 an ounce — and other expensive materials. It plans to sell its first hydrogen vehicle for around $50,000, a figure Daimler has cited as a viable price for the Mercedes-Benz F-Cell (pictured above in Australia). Fifty grand is a lot of money, especially something like the F-Cell — which is based on the B-Class compact — or the Honda FCX Clarity. Zelenay and Wu in the lab. In a paper published Friday in Science, Los Alamos researchers Gang Wu, Christina Johnston and Piotr Zelenay, joined by Karren More of Oak Ridge National Laboratory, outline their platinum-free cathode catalyst. The catalysts use carbon, iron and cobalt. The researchers say the fuel cell provided high power with reasonable efficiency and promising durability. It provided currents comparable to conventional fuel cells, and showed favorable durability when cycled on and off — a condition that quickly damages inferior catalysts. The researchers say the carbon-iron-cobalt catalyst completed the conversion of hydrogen and oxygen into water, rather than producing large amounts of hydrogen peroxide. They claim the catalyst created minimal amounts of hydrogen peroxide — a substance that cuts power output and can damage the fuel cell — even when compared to the best platinum-based fuel cells. In fact, the fuel cell works so well the researchers have filed a patent for it. The researchers did not directly quantify the cost savings their cathode catalyst offers, which would be difficult because platinum surely would become more expensive if fuel cells became more prevalent. 
But the lab notes that iron and cobalt are cheap and abundant, and so the cost of fuel cell catalysts is “definitely two to three orders of magnitude cheaper.” “The encouraging point is that we have found a catalyst with a good durability and life cycle relative to platinum-based catalysts,” Zelenay said in a statement. “For all intents and purposes, this is a zero-cost catalyst in comparison to platinum, so it directly addresses one of the main barriers to hydrogen fuel cells.” New fuel cell tech makes that affordable—old evidence irrelevant Commodity Online ‘11 (“US researchers claim breakthrough in Hydrogen Fuel Cell tech”, www.commodityonline.com/news/us-researchers-claim-breakthrough-in-hydrogen-fuel-cell-tech-37501-3-37502.html, 2011)

U.S. researchers say they've made a breakthrough in the development of low-cost hydrogen fuel cells that one day could power electric cars. Researchers at Case Western Reserve University in Cleveland say catalysts made of carbon nanotubes dipped in a polymer solution can outperform traditional platinum catalysts in fuel cells at a fraction of the cost. The scientists say the new technology can remove one of the biggest roadblocks to widespread cell use: the cost of the catalysts. Platinum, which represents at least a quarter of the cost of fuel cells, currently sells for about $30,000 per pound, while the activated carbon nanotubes cost about $45 per pound, a Case release said Tuesday. "This is a breakthrough," Liming Dai, a professor of chemical engineering and the research team leader, said. 2AC- Oil $ Destroy Readiness Dependency on oil collapses the military Voth ‘12 (Jeffrey M. Voth is the president of Herren Associates, leading a team of consultants advising the federal government on issues of national security, energy and environment, health care and critical information technology infrastructure, George Washington University Homeland Security Policy Institute, “In Defense of Energy – A Call to Action”, http://securitydebrief.com/2012/04/11/in-defense-of-energy-a-call-to-action/, April 11, 2012)

Last month, the Pentagon released its widely anticipated roadmap to transform operational energy security. As published in a World Politics Review briefing, energy security has become a strategic as well as an operational imperative for U.S. national security. As tensions continue to escalate with Iran in the Strait of Hormuz, it has become clear that the U.S. military urgently requires new approaches and innovative technologies to improve fuel efficiency, increase endurance, enhance operational flexibility and support a forward presence for allied forces while reducing the vulnerability inherent in a long supply-line tether . Assured access to reliable and sustainable supplies of energy is central to the military’s ability to meet operational requirements globally, whether keeping the seas safe of pirates operating off the coast of Africa, providing humanitarian assistance in the wake of natural disasters in the Pacific or supporting counterterrorism missions in the Middle East. From both a strategic and an operational perspective, the call to action is clear. Rapid employment of energy-efficient technologies and smarter systems will be required to transform the military’s energy-security posture while meeting the increasing electric-power demands required for enhanced combat capability. As recently outlined by Chairman of the Joint Chiefs of Staff Gen. Martin Dempsey, “Without improving our energy security, we are not merely standing still as a military or as a nation, we are falling behind.” Independently- fuel cost wrecks the DOD’s budget - spills over Freed ‘12 (Josh Freed, Vice President for Clean Energy, Third Way, “Improving capability, protecting 'budget”, http://energy.nationaljournal.com/2012/05/powering-our-military-whats- th.php, May 21, 2012)

As Third Way explains in a digest being released this week by our National Security Program, the Pentagon’s efforts to reduce energy demand and find alternative energy sources could keep rising fuel costs from encroaching on the budgets of other important defense programs. And the payoff could be massive. The Air Force has already been able to implement behavioral and technology changes that will reduce its fuel costs by $500 million over the next five years. The Army has invested in better energy distribution systems at several bases in Afghanistan, which will save roughly $100 million each year. And, using less than 10% of its energy improvement funds, the Department has begun testing advanced biofuels for ships and planes. This relatively small investment could eventually provide the services with a cost-effective alternative to the increasingly expensive and volatile oil markets. These actions are critical to the Pentagon’s ability to focus on its defense priorities. As Secretary Panetta recently pointed out, he’s facing a $3 billion budget shortfall caused by “higher-than-expected fuel costs.” The Department’s energy costs could rise even further if action isn’t taken. DOD expects to spend $16 billion on fuel next year. The Energy Information Administration predicts the price of oil will rise 23% by 2016 , without a major disruption in oil supplies, like the natural disasters, wars, and political upheaval the oil producing states have seen during the last dozen years. Meanwhile, the Pentagon’s planned budget, which will remain flat for the foreseeable future, will require significant adjustment to the Department’s pay-any-price mindset, even if sequestration does not go into effect. Unless energy costs are curbed, they could begin to eat into other budget priorities for DOD. In addition, the Pentagon’s own Defense Science Board acknowledges that using energy more efficiently makes our forces more flexible and resilient in military operations, and can provide them with greater endurance during missions. Also, by reducing energy demand in the field, DOD can minimize the number of fuel convoys that must travel through active combat zones, reducing the chances of attack to avoiding casualties and destruction of material. At our domestic bases, DOD is employing energy conservation, on-site clean energy generation, and smart grid technology to prevent disruptions to vital activities in case the civilian grid is damaged by an attack or natural disaster. The bottom line is, developing methods and technologies to reduce our Armed Forces’ use of fossil fuels and increase the availability of alternative energy makes our military stronger. That’s why the Pentagon has decided to invest in these efforts. End of story. 2AC- AT: Peak Uranium No impact – SMRs don’t need uranium Szondy ‘12 (David, writes for charged and iQ magazine, award-winning journalist [“Feature: Small modular nuclear reactors - the future of energy?” February 16th, http://www.gizmag.com/small-modular-nuclear-reactors/20860/)

SMRs can help with proliferation, nuclear waste and fuel supply issues because, while some modular reactors are based on conventional pressurized water reactors and burn enhanced uranium, others use less conventional fuels. Some, for example, can generate power from what is now regarded as " waste ", burning depleted uranium and plutonium left over from conventional reactors. Depleted uranium is basically U-238 from which the fissible U-235 has been consumed. It's also much more abundant in nature than U-235, which has the potential of providing the world with energy for thousands of years. Other reactor designs don't even use uranium. Instead, they use thorium. This fuel is also incredibly abundant, is easy to process for use as fuel and has the added bonus of being utterly useless for making weapons, so it can provide power even to areas where security concerns have been raised. No peak uranium MIT ‘11 [“The Future of the Nuclear Fuel Cycle”, 2011, http://web.mit.edu/mitei/research/studies/documents/nuclear-fuel- cycle/The_Nuclear_Fuel_Cycle-all.pdf]

We developed a price elasticity model to estimate the future costs of uranium as a function of the cumulative mined uranium. The details of this model are in the appendix. The primary input is the model of uranium reserves as a function of ore grade [14] developed in the late 1970s by Deffeyes. The results of this model are shown in Figure 3.2. For uranium ores of practical interest, the supply increases about 2% for every 1% decrease in average grade mined down to an ore grade of ~1000 ppm. His work extended models previously applied to individual mined deposits (e.g., by Krige for gold) [15] to the worldwide ensemble of deposits of uranium. The region of interest in the figure is on the left-hand side, above about 100 ppm uranium, below which grade the energy expended to extract the uranium will approach a significant fraction of that recoverable by irradiation of fuel in LWRs. The resources of uranium increase significantly if one is willing to mine lower-grade resources. An important factor not accounted for here in prediction of uranium resources is the recovery of uranium as a co-product or by-product of other mining operations. The most important category here is phosphate deposits. A recent CEA assessment [8] projects 22 million MT from this source: by itself enough for 1000 one-GWe reactors for 100 years, subject to the caveat that co-production is fully pursued. Finally, several authors have noted that Deffeyes’ assessment was completed before the rich ore deposits in Canada, at grades in excess of 3% (30,000 ppm), were discovered. This could imply that the projected cost escalation based on his results would, in effect, be postponed for a period. Our model included three other features in addition to uranium supply versus ore grade elasticity: (1) Learning curve. In all industries there is a learning curve where production costs go down with cumulative experience by the industry. (2) Economies of scale. There are classical economies of scale associated with mining operations. (3) Probabilistic assessment. Extrapolation into an ill-defined future is not properly a deterministic undertaking—we cannot know the exact answer. Hence, following the lead in a similar effort in 1980 by Starr and Braun of EPRI, a probabilistic approach was adopted [16] in our models. The results of our model are shown in Figure 3.3, where the relative cost of uranium is shown versus the cumulative electricity produced by LWRs of the current type. The unit of electricity is gigawatt-years of electricity generation, assuming that 200 metric tons of uranium are required to produce a gigawatt-year of electricity—the amount of uranium used by a typical light water reactor. The horizontal axis shows three values of cumulative electricity production: G1 = 100 years at today’s rate of uranium consumption and nuclear electric generation rate; G5 = 100 years at 5 times today’s uranium consumption and nuclear electricity generation rate; G10 = 100 years at 10 times today’s uranium consumption and nuclear electricity generation rate. Three lines are shown based on the probabilistic assessment described in the appendix of Chapter 3. The top line is to be interpreted as an 85% probability that the cost relative to the baseline cost will be less than the value on the trace plotted as a function of the cumulative electricity production using today’s LWR once-through fuel cycle. The three lines meet at the far left where the baseline cost of uranium is taken as 100 $/kg, and the baseline total cumulative nuclear electricity production is (somewhat arbitrarily) taken as 10^4 GWe-yr using 2005 as the reference year. The other lines correspond to 50% and 15% probabilities. As one example at 10 GWe-yr cumulative production, there is an 85% probability that uranium will cost less than double 2005 costs (i.e., less than $200/kg), a 50% probability that it will cost less than 30% greater than 2005 costs, and a 15% probability that it will be 20% or lower in cost. As another example, if there were five times as many nuclear plants (G5) and they each operated for 100 years, we would expect (at 50% probability) uranium costs to increase by less than 40%. Because uranium is ~4% of the production cost of electricity, an increase to 6% of the production costs would not have a large impact on nuclear power economics. The two points plotted on Figure 3.3 correspond to 2007 Red Book values for identified (RBI) and identified-plus-undiscovered (RBU) resources at under 130 $/kg: 5.5 and 13.0 million metric tons. These benchmarks support the expectation that uranium production costs should be tolerable for the remainder of the 21st century – long enough to develop and smoothly transition to a more sustainable nuclear energy economy. No shortage–most qualified ev NEI ‘12 (Nuclear Energy Institute, “Myths & Facts About Nuclear Energy”, June, http://www.nei.org/resourcesandstats/documentlibrary/reliableandaffordableenergy/factsheet/myths--facts-about-nuclear-energy-january-2012/)
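(Analyst note on the MIT evidence above: the card's scenarios are easy to reproduce. The Python sketch below is my own illustration; the 200-tons-per-GWe-yr figure and the 2%-per-1% grade elasticity come from the card, while the 370 GWe fleet size is an assumption standing in for today's worldwide capacity.)

# Illustrative sketch only (assumptions flagged in comments): rough arithmetic behind
# the MIT card's G1 / G5 / G10 scenarios and its grade-elasticity claim.

TONNES_U_PER_GWE_YR = 200   # from the card: ~200 metric tons of uranium per GWe-yr in today's LWRs
CURRENT_FLEET_GWE = 370     # assumption: approximate worldwide nuclear capacity, GWe

for name, multiple in [("G1", 1), ("G5", 5), ("G10", 10)]:
    gwe_years = CURRENT_FLEET_GWE * multiple * 100        # 100 years at 1x / 5x / 10x today's rate
    tonnes_u = gwe_years * TONNES_U_PER_GWE_YR            # cumulative uranium required
    print(f"{name}: ~{gwe_years:,} GWe-yr -> ~{tonnes_u / 1e6:.1f} million t of uranium")

# Crude reading of "supply increases about 2% for every 1% decrease in average grade":
# supply scales roughly with grade^-2, so mining ore at half today's average grade
# would roughly quadruple the recoverable resource base.
half_grade_multiplier = (1 / 0.5) ** 2
print(f"Half the average ore grade -> ~{half_grade_multiplier:.0f}x the supply")

Even the G10 case works out to roughly 74 million tons of cumulative demand, which is why the card leans on lower-grade ores and by-product recovery (the 22 million MT from phosphates) rather than on identified reserves alone.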

Fact: Readily available uranium resources (5.5 million metric tons) will last at least 100 years at today’s consumption rate, according to the World Nuclear Association and the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. An estimated additional 10.5 million metric tons that remain untapped will expand the available supply to at least 200 years at today’s consumption rate. The agency also determined that further exploration and improvements in extraction technology are likely to at least double this estimate over time. These estimates do not take into account the effect that increased recycling of used nuclear fuel would have on global supplies. The Massachusetts Institute of Technology (MIT) recently confirmed that uranium supplies will not limit the expansion of nuclear energy in the U.S. in its 2010 study “The Future of the Nuclear Fuel Cycle.” Asia solves US and Asian demand GBI Research ’12 (Global Business Intelligence, “Uranium Mining Market in Asia-Pacific to 2020 - Availability of Large Uranium Reserves to Lay the Foundation for the Industry's Future Development”, http://www.marketresearch.com/GBI-Research-v3759/Uranium-Mining-Asia-Pacific-Availability-7039053/, June 25, 2012) Uranium mining is set to soar in the future due to rising demand for nuclear power, with Asia’s colossal reserves looking set to meet these needs, according to natural resources expert GBI Research. The new report* suggests that mounting demand for nuclear power generation will boost uranium prices, offering to make Asia-Pacific a tidy sum, but only if they can overcome environmental and professional hazards. According to the Australian Bureau of Resources and Energy Economics and Sciences (ABARES), the spot price of uranium, which hung around the $62 per pound mark in 2011, will reach around $81 a pound by 2016 due to the massive surge in demand. The availability of huge reserves, coupled with rising prices, will drive the Asia-Pacific uranium mining industry. Asia-Pacific is the largest uranium-producing region in the world, boasting an estimated production of 34,041 tons of uranium (tU) during 2011, over half the global production. Kazakhstan contributed just under two-thirds of this, followed by Australia with 25.2%, Uzbekistan with 7.2%, China with 2.6% and India with a 1.1% share. Asia’s uranium mining industry has been helped by the region’s abundant reserves, which, according to the World Nuclear Association (WNA), stood at 3,599,421 tU in 2010, accounting for about 57% of total global reserves. New planned projects and the expansion of existing uranium mines will drive up uranium mine production throughout Asia-Pacific, with Australia, India, Kazakhstan and China all expected to increase regional uranium mine production in the outlook period. 2AC- NRC Licensing Extend Chandler and Fertel- government action spurs NRC licensing and builds expertise- cross-applications of existing technology solve regulatory hurdles. And it’s not a question of if licensing happens but when- the tech needs to be able to fill in immediately to prevent a nuclear energy crunch- that’s Licata and Fertel. And the tech is already at the last stage- licensing inevitable Wald ’13 (Matthew L. Wald, New York Times, “Energy Department to Give $226 Million to Support Nuclear Reactor Design”, http://www.nytimes.com/2013/12/13/business/energy-environment/energy-dept-to-give-226-million-to-new-nuclear-reactor-design.html?_r=0, December 12, 2013)

WASHINGTON — The Energy Department will give a small company in Corvallis, Ore., up to $226 million to advance the design of tiny nuclear reactors that would be installed under water, making meltdown far less likely and opening the door to markets around the world where the reactors now on the market are too big for local power grids. The company, NuScale Power, has made substantial progress in developing “an invented-in-America, made-in-America product that will export U.S. safety standards around the world,” Peter B. Lyons, the assistant secretary for nuclear energy, said in an interview. For supplying electricity without global warming gases and for providing the United States with a new export product, the reactor had “immense global and national importance,” he said. The award is the second of two under a $452 million, multiyear program to assist in the development of “small modular reactors,” which would be built in American factories, potentially improving quality and cutting costs, and delivered by truck. The first award, in November 2012, went to Babcock & Wilcox, which formerly sold full-scale reactors. Its small model, called mPower, is a step ahead of NuScale’s because it has a preliminary agreement with a customer, the Tennessee Valley Authority. In seeking a high-tech, high-value export product, the Energy Department is hardly alone. On Tuesday, Sergey V. Kirienko, the director general of Rosatom, the Russian state atomic energy company, told a small group of reporters in Washington that a floating nuclear power plant, which could be moored near a load center, was “entering its finalization stage.” The units will be built in pairs near St. Petersburg, Russia he said, and the first pair are to be installed in Russia in 2016. His company is in preliminary discussions with potential customers around the world, he said. While the two designs chosen by the Energy Department are radically smaller in their size and method of construction, both mPower and NuScale use ordinary water to transfer the heat created in the reactor so that the water can be used to make steam for electricity and help control the flow of neutrons, the subatomic particles that sustain the chain reaction. (So does the Russian floating plant.) In that sense, the department’s choices were technologically conservative, because other designers are working on reactors that would use sodium, graphite and helium for those functions. One advantage of sticking to water, Mr. Lyons said, was that the Nuclear Regulatory Commission, the agency that will decide whether to license the reactors, is already familiar with that technology. NuScale’s plans to place its reactors in something resembling a thermos bottle installed at the bottom of a giant pool. If a failure threatens overheating, a vacuum space in the bottle would fill with water and excess heat would be drawn away passively, without pumps or valves, by the huge surface area of the bottle sitting in cool water. No licensing problems – our ev is from the NRC Magwood ’11 (William d. Magwood iv, nuclear regulatory commission, July 14, http://www.gpo.gov/fdsys/pkg/chrg-112shrg72251/html/chrg-112shrg72251.htm s. Hrg. 112-216, an examination of the safety and economics of light water small modular reactors hearing before a subcommittee of the committee on appropriations united states senate one hundred twelfth congress first session special hearing, July 14, 2011)

At the same time, one often hears the industry is concerned that the NRC might make decisions that will render these new systems uncompetitive. In my opinion, these concerns are not well grounded in an understanding of how the NRC develops regulatory requirements. Using security as a general example, the size of guard forces and the nature of security barriers protecting U.S. nuclear power plants is not determined in accordance with a set formula that might somehow be applied to SMRs. The security strategies of each individual plant are designed by licensees to defend their facilities against threats postulated by NRC. These strategies are tested on a periodic basis using force-on-force exercises, and when issues arise as a result of these exercises, licensees are obligated to make necessary adjustments. I believe this exact same process will work very well with SMRs. Whatever else they are, SMRs are power reactors. While the size of SMRs may eventually prove to have financial or implementation benefits, the fact that they are small has far less significance from a regulatory standpoint than I think many expect. That said, SMR vendors have proposed design components that, if fully realized, incorporate technologies and approaches that can have significant safety benefits and, therefore, must be considered as risk-informed regulatory decisions are made. 1AR- NRC Licensing NRC is back on track- no delay Conca ‘13 (James Conca, Energy Contributor in Forbes, “Nuclear Waste Confidence -- NRC Ruling No Big Deal”, http://www.forbes.com/sites/jamesconca/2012/08/11/nuclear-waste-confidence-nrc-ruling-no-big-deal/, August 11, 2012)

Dry cask storage behind a security fence. The safest, easiest method for putting spent fuel aside until used, burned as new fuel or eventually disposed of in a deep geologic repository. We are very confident it is safe for 100 years or more. There has been some fist-bumping this week in the anti-nuclear sector over the recent vacating of two NRC rules by the U.S. Court of Appeals for the District of Columbia Circuit in June; the waste-confidence decision and the storage rule. The judges felt that the agency had failed to conduct an environmental impact statement, or a finding of no significant environmental impact, before ruling that it is safe to store nuclear waste in wet pools and dry casks without a permanent solution in sight. But it was just that the initial NRC rule was too vague, not that this type of storage is unsafe (platts NRC Ruling). In response, the NRC this week voted unanimously to delay final approval of licenses for new nuclear plants, or renewing the licenses of existing facilities, until the agency responds with a more complete ruling and addresses the dilemma of long-term nuclear waste storage across the country. The 24 environmental groups that petitioned NRC to respond to the court are acting like they actually stopped all action on nuclear licensing (Marketwatch NRC Ruling). While no final decisions will be made in issuing licenses, the process for licensing new and existing plants will continue as before, the NRC sa id, which means the impact to the industry will be minimal . Also, reactors can operate even after their present license expires as long as it is the NRC that is dragging it out. And most reactors have already been relicensed in the last ten years . Only 18 out of 104 reactors are not and primarily because they have to operate beyond 20 years before they can apply. The four new GenIII plants being built at Vogtle (Georgia) and V.C. Summer (South Carolina) are also not affected at all since their licenses have already been issued. Since NRC needed to do this anyway and will get it done before any of the critical licensing deadlines pass, this is no big deal. The nuclear industry has long been resigned to a slow-moving regulatory system. The environmental groups also stated that this action exacerbated an already dying nuclear industry, plagued with runaway costs and competition with far less expensive energy alternatives. Huh? Re- licensing nuclear reactors is the absolute cheapest form of energy, about 2¢/kWhr for 20 years. They are obviously referring to new natural gas plants versus new nuclear GenIII plants which is not impacted by this ruling at all. New nuclear is actually cheaper than new gas in the long run, e.g., 20 years or more, even at present gas prices, but our society doesn’t like to plan for the long-term so it usually gets these things wrong. And why anyone thinks gas plants are environmentally preferable to nuclear is odd from a carbon-emissions standpoint. NRC’s decision also marks the first major action since Dr. Allison Macfarlane was sworn in as chair of the NRC. MacFarlane describes the agency as “…a fantastic place, I’m enjoying it very much”, bearing out the general hope that her tenure will be more congenial and productive than the former Chair Gregory Jaczko, who stepped down amid infighting and controversy. But the elephant in the room on this whole issue is opening a deep geologic nuclear waste repository. 
As easy and safe as dry cask storage is, even for 100 years, spent fuel was always envisioned to be permanently disposed of in such a repository within 20 to 40 years of leaving the reactor. The original NRC ruling was to address the fact that we are not moving forward on that issue. Of course, the choice of a final repository is not NRC’s to make. Congress has to approve any site chosen by DOE or the yet-to-be-formed quasi-government agency recommended by the President’s Blue Ribbon Commission, on which Chairwoman Macfarlane served. [Photo caption: Transportation of high-activity nuclear waste is easy using this 72B Cask, and we've been doing it for many years. This load of high-activity bomb waste is being shipped to the deep geologic repository near Carlsbad, NM. Source: DOE] And confidence is a funny thing. Just look at the Stock Market. The scientific community has been researching deep geologic disposal for 60 years. We are more than confident we can accomplish it, relatively easily and within budget. If we are allowed to do it. We have performed thousands of studies, hundreds of environmental impacts, and have even built one of these deep geologic repositories in the U.S. that has been operating for 13 years without a hitch (Helman – WIPP). The Yucca Mountain Project also conducted a huge number of environmental impact studies regarding the containment of radionuclides underground, protection of nearby communities, impacts to groundwater for 100,000 years, and a host of other studies that show we can do this if society wants it done. We know this problem very well and we know where and how to put this strange material away forever and ever. Scientific confidence is not the issue here. The lack of confidence has always been with the political side. The court faulted NRC for assuming a national repository would be built within the next 60 years, even though that’s the law. Funny that the court is slapping the NRC for something it can’t seem to enforce itself. But there is hope for the future. President Obama formed the Blue Ribbon Commission on America’s Nuclear Future to staunch the wound to the Nuclear Waste Policy Act left by the demise of the Yucca Mountain Project (Helman – BRC), and it was a brilliant dressing. The BRC drafted a number of recommendations addressing nuclear energy and waste issues, but three recommendations, in particular, set the stage for a new strategy to dispose of high-level nuclear waste and to manage spent nuclear fuel in the United States: 1) interim storage for spent nuclear fuel, 2) resumption of the site selection process for a second repository, and 3) a quasi-government entity to execute the program and take control of the Nuclear Waste Fund in order to do so. The first and third are already being acted upon by Congress, led by Senators Bingaman (D-NM), Murkowski (R-AK), Feinstein (D-CA), Landrieu (D-LA) and Alexander (R-TN), who are trying to put the Commission’s recommendations into legislative language. The latest attempt was just this last week, a proposed Nuclear Waste Administration Act, “To establish a new organization to manage nuclear waste, provide a consensual process for siting nuclear waste facilities, ensure adequate funding for managing nuclear waste, and for other purposes.” We will get this right as a Nation, and we will lead the way for the rest of the world. Just let us do it.
So was waste management Conca ’12 (James Conca, Energy Contributor in Forbes, “Congress Goes Nuclear”, http://www.forbes.com/sites/jamesconca/2012/04/28/congress-goes-nuclear/#, April 28, 2012)

So much for the notion that Congress can’t do anything right. The thoughtful and smart actions of Senators Murkowski and Landrieu, working with Senators Feinstein, Alexander and Bingaman, produced a bill out of the Senate Energy and Water Appropriations Subcommittee last Tuesday, approved Thursday by the full Committee, that took the first step to solving our nation’s nuclear waste problem. I’ve been waiting my entire career for this to happen. In fact, this first step is so significant that I’m having trouble catching my breath! If you remember, the Yucca Mountain Project, the nation’s first selected nuclear disposal site, was recently scrapped for being not workable and the President’s Blue Ribbon Commission on America’s Nuclear Future was appointed to find another path forward. After reviewing the last 60 years of frustrated science and policy, in February the BRC released a number of very good recommendations addressing nuclear in general, but three specific ones were critical to actually dealing with high-level nuclear waste and managing spent nuclear fuel for the next hundred years. They were: 1) executing interim storage for spent nuclear fuel, 2) resuming the site selection process for a second repository (Yucca being the first, the massive salts being the best), and 3) forming a quasi- government entity, or FedCorp, to execute the program and take control of the Nuclear Waste Fund in order to do so. The first recommendation separates fuel from real waste, allowing storage of still-usable spent nuclear fuel from reactor sites either to be used in future reactors or eventually disposed, without needing to retrieve it from deep in the earth as is presently the Law. The second recommendation allows us to choose the best geology for the permanent disposal of actual high-level waste that has no value since it is the waste from reprocessing old fuel. This real waste needs to be disposed of promptly, not just looked at for another few decades. It has cost billions to manage this waste in places that were always meant to be temporary. The third recommendation controls cost and administration, because, duh, we’re broke. Dry cask storage behind a security fence. The safest, easiest method for putting spent fuel aside until used, burned as new fuel or eventually disposed of in a deep geologic repository. Tuesday’s bill starts the ball rolling by implementing the first recommendation, authorizing “the Secretary of Energy to site, construct, and operate consolidated storage facilities to provide storage as needed for spent nuclear fuel and high- level radioactive waste.” – IN THE SENATE OF THE UNITED STATES—112th Cong., 2d Sess. The short version is this bill is consent-based, meaning the Feds can’t just pick a site and force it down a State’s throat, but have to wait for someone to bid for it and requires approval of the Governor, any affected Tribes, and the local representatives of that State. Plus, it authorizes the Nuclear Waste Fund to be used for what it always was intended. And DOE has only 120 days from passage to begin accepting proposals so it won’t languish for years. This bill breaks the nuclear waste logjam . It’s simple, it’s the right thing to do, it will save lots of money, it’s the best thing for the environment, and it’s a win-win , so how did the Senate do this? And so fast! Now it’s up to the House to maintain the do-nothing image of Congress, kill this bill, and let us get back to wasting billions of dollars looking at the problem for 30 more years. 
2AC- AT: Cost No cost problems Skutnik ‘11 (Steve, Assistant Professor of Nuclear Engineering at the University of Tennessee “Are Small Modular Reactors A Nuclear Economics Game-Changer?” June 28th, http://theenergycollective.com/skutnik/60188/excellent-op-ed-small-modular-reactors-and-then- some, June 28, 2011)

SMRs have the potential to change the economics of the game by several means. First, many proposed SMR designs are engineered to be mass-produced and pre-fabricated in factories, rather than built on-site. This could tremendously push down prices while also shortening construction times, thus ameliorating what is currently one of nuclear's biggest weaknesses at the moment. Meanwhile, the "small" in SMRs also may have potentially positive implications for both cost and safety: SMRs can be potentially built into the ground , using the surrounding earth as containment, due to their relatively small size. Given the lower total power and nuclear material within the reactor, it can be said to have a lower overall "radiological footprint," meaning simplified safety planning. Finally, the "right-size" power of SMR capacity may allow them to be sold in a greater number of markets - places both where a new full-sized reactor is too big for the needs of a community (for example, Fort Calhoun, north of Omaha, is the smallest reactor in the U.S. nuclear fleet, clocking in at only 500 MW; compare this to currently proposed new reactor designs, which begin in the neighborhood of 1000-1100 MW). Likewise, the smaller size means that for utilities only looking to incrementally expand capacity, small reactors may prove to be competitive with alternatives such as natural gas turbines. One point which I think nuclear advocates tend to allow themselves to be blindsided to at times is in the fact that above all else, it is economics which will ultimately determine the future of the nation's electricity portfolio. Factors like politics certainly come into play (particularly such issues as energy portfolio mandates, etc.), and likewise factors such as safety can never be understated. Nor should public acceptance ever be ignored, much as it has to the industry's peril in the past. However, those ultimately committing the funds to expand energy sources are the utilities, many of whom answer either directly to shareholders or to ratepayers. In this regard, they have an obligation in either sense to produce power as profitably or affordably as possible. Thus, the decision for utilities will always ultimately come down to economics, something that nuclear advocates cannot simply ignore. I don't necessarily doubt the assertions of fellow advocates such as Rod Adams, who assert that fossil fuels have a strong interest to defend in continuing to sell their products. (Although I will say that I also don't necessarily buy the idea that those who argue natural gas is currently more economical based on short-term factors are necessarily on the fossil fuel dole, either.) But the fact remains - for nuclear to succeed, it must be able to compete, head to head, dollar for dollar. Nuclear energy has tremendous advantages to offer, in that is clean, abundant, and easily the most energy-dense source we have available at our disposal. Yet at the end of the day, decisions over energy investments do not necessarily come down to these factors: they come down to economics, and often (regrettably) economic return over the short-term. This may be where SMRs ultimately change the game for nuclear, then - namely, by bringing the advantages of nuclear to bear in a more economically attractive package. 
AT: Makhijani
Makhijani isn't talking about floating modular reactors- the new design saves cost. Independently, he is wrong.
Barton '10 (Charles, former PhD candidate in History, MA in Philosophy, worked on the LFTR concept for about two-thirds of his ORNL career, recognized by nuclear bloggers, most of whom have technical training, and has been mentioned by the Wall Street Journal, "Arjun Makhijani and the Modular Small Reactor null-hypothesis," October 2, 2010, http://nucleargreen.blogspot.com/2010/10/arjun-makhijani-and-modular-small.html)

Arjun Makhijani (with Michele Boyd) has recently published a fact sheet on Small Modular Reactors which in effect advertises itself as the null-hypothesis to the case I and others have been making for some time on the advantages of small reactors. Small Modular Reactors: No Solution for the Cost, Safety, and Waste Problems of Nuclear Power, Makhijani's title proclaims. But what is the evidence that backs Makhijani's case up? As it turns out, Makhijani offers no empirical data to back up his assertion, so as an example of scientific reasoning, Makhijani's fact sheet rates an F.
He's wrong
Barton '10 (Charles Barton, Masters in Philosophy from Memphis University, "Arjun Makhijani and the Modular Small Reactor null-hypothesis," http://robertmayer.wordpress.com/2010/10/31/arjun-makhijani-and-the-modular-small-reactor-null-hypothesis/, October 2, 2010)

Finally, we should consider Makhijani's assertions about small reactor costs. First he claims: SMR proponents claim that small size will enable mass manufacture in a factory, enabling considerable savings relative to field construction and assembly that is typical of large reactors. In other words, modular reactors will be cheaper because they will be more like assembly line cars than handmade Lamborghinis. In the case of reactors, however, several offsetting factors will tend to neutralize this advantage and make the costs per kilowatt of small reactors higher than large reactors. Makhijani claims that, in contrast to cars or smart phones or similar widgets, the materials cost per kilowatt of a reactor goes up as the size goes down. This is because the surface area per kilowatt of capacity, which dominates materials cost, goes up as reactor size is decreased. Material costs do affect the cost of other industrially produced products, including cars, and manufacturers take several approaches to that problem, including careful redesign of components to eliminate part of the expensive material, or the substitution of low cost materials for high cost materials. Makhijani does not believe that this is possible, but for example it is possible to eliminate some of the cement and steel in the massive reactor containment dome by housing the reactor in an underground chamber. Thus high cost concrete and steel are replaced by low cost earth and rock. Reactors with compact cores require less manufacturing material and smaller housing facilities. Thus the choice of a compact core nuclear technology might offer considerable savings in materials costs, and the small reactor manufacturer may have several options to lower materials costs.¶ Makhijani claims that other costs might be inversely proportional to reactor size: Similarly, the cost per kilowatt of secondary containment, as well as independent systems for control, instrumentation, and emergency management, increases as size decreases. Yet as I have already noted, there are things that manufacturers can do about containment costs. Control rooms are not huge parts of overall reactor costs, and there are undoubtedly things which reactor manufacturers could do to lower control room building costs. For example, whole control room modules can be factory fabricated and moved to the reactor housing site, where they could be housed underground or in preexisting recycled structures. Similar solutions could be found for the emergency management housing issues. Finally, Makhijani tells us, Cost per kilowatt also increases if each reactor has dedicated and independent systems for control, instrumentation, and emergency management. Yet smaller reactors will require fewer sensors for reactor control and emergency management, and with the very large number of instruments required by mass produced, factory manufactured reactors, the cost of instrument manufacture and indeed whole instrument room manufacture will fall significantly. Small reactors require smaller, less costly control and emergency management systems, and the cost benefits of serial manufacturing will affect the costs of these systems as well. Finally, it should be noted that Makhijani fails to mention the clear cut cost lowering benefits of factory manufactured reactors. For example, labor costs are significantly lowered in several ways. Factory assembly offers superior labor organization, and thus the same tasks take less time in the factory. 
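[Analytic note: the Barton card continues below. As a rough illustration of the surface-area-per-kilowatt claim being contested here, the following sketch assumes, purely hypothetically, that vessel volume scales linearly with power and that the vessel is roughly spherical, so surface area per kilowatt scales as power to the -1/3. The sizes and the geometry are placeholders, not design data.]

```python
# Rough illustration of the contested surface-area claim. Hypothetical
# assumptions: vessel volume scales linearly with power and the vessel is
# roughly spherical, so surface area ~ volume**(2/3) and area per kW scales
# as power**(-1/3). Sizes are placeholders, not design data.

def relative_area_per_kw(power_mwe, reference_mwe=1100.0):
    """Vessel surface area per kW, relative to an 1100 MWe reference unit."""
    return (power_mwe / reference_mwe) ** (-1.0 / 3.0)

for mwe in (1100, 300, 125, 25):
    print(f"{mwe:>5} MWe: ~{relative_area_per_kw(mwe):.1f}x surface area per kW")
```

Under these assumptions a ~125 MWe unit needs roughly twice the vessel surface area per kilowatt of an 1100 MWe unit; Barton's reply is that underground siting, compact cores, and serial manufacture can offset that geometric penalty rather than being overwhelmed by it.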
Secondly, workers can live close to factory sites, and thus do not require high wages to induce them into the transient lifestyle of construction workers. Thirdly, in a factory in which several reactors are being constructed at any one time, individual workers will require fewer skills. The less skilled workers will command lower wages. Taken together, significant labor savings are possible through factory manufacture. Labor is by no means the only source of savings. A further source of savings would come from the serial manufacture of parts. It is well known that as the number of a part built increases, the cost of manufacturing that part falls. Thus serial production tends to lower unit costs. In addition, serial production introduces cost lowering learning. As knowledge of a manufacturing process rises, awareness of cost lowering possibilities also increases. This is called the learning curve. It is reasonable to anticipate learning curve based savings for serially produced small reactors. Thus cost savings will be available to the manufacturers of small factory built reactors. We lack the cost data that we need to judge the extent to which small factory manufactured reactors will lower nuclear costs. Arguments for the nuclear cost lowering benefits of economies of scale are not nearly as strong as Makhijani believes them to be, while the evidence of a cost lowering effect of serial reactor manufacture is stronger. Thus Makhijani has chosen to reject the stronger evidence while upholding the case for which the evidence appears to be so weak as to offer no support.¶ We can conclude, then, that Arjun Makhijani has not established reasonable grounds in support of his assertion that Small Modular Reactors offer no solution for the cost, safety, and waste problems of nuclear power. Thus to the extent that this assertion can be viewed as a null-hypothesis to the claim that Small Modular Reactors offer a valuable and attractive alternative to large conventional power plants, the hypothesis must still be viewed as unfalsified by the available evidence. Further evidence could still change this picture, but for the moment advocates of small reactors have plausible grounds for their case.
AT: Magwood
Magwood's a hack
Grim '12 (Ryan, Washington bureau chief for The Huffington Post, "Bill Magwood, NRC Democrat, Is 'Treacherous, Miserable Liar' And 'First-Class Rat,' Says Harry Reid," 7/30/2012, http://www.huffingtonpost.com/2012/07/30/bill-magwood-nrc-_n_1712181.html)

Harry Reid isn't known for hyperbole. The soft-spoken Senate majority leader tends to wield his power behind the scenes, and when he does speak at his weekly press briefing, reporters lean in and bend their ears to make out the words. But if Reid is lied to, all that changes. It may sound dissonant to the public to say that honesty is the mostly highly valued quality in Washington. But while members of Congress may lie to their constituents with regularity, lying to one another is considered an unforgiveable sin. In an interview with The Huffington Post, the Nevada Democrat savaged Bill Magwood, a member of the Nuclear Regulatory Commission, when asked if he thought the Democrat had a chance to become NRC chairman. "You know, when you're in this government, this business of politics, the only thing that you have is your word," said Reid, seated in his Capitol office. "I can be as partisan as I have to be, but I always try to be nice. I try never to say bad things about people. Bill Magwood is one of the" -- Reid paused, deciding which adjective to reach for, before picking them all -- " most unethical, prevaricating " -- he paused again, this time for 10 full seconds -- "incompetent people I've ever dealt with. The man sat in that chair -- right there -- and lied to me. I've never, ever in my life had anyone do that. Never." Magwood didn't respond to a request for comment left with his assistant. Reid is a vociferous opponent of storing nuclear waste in Nevada's Yucca Mountain. By backing Obama early in his campaign for president, he persuaded the candidate to promise to block the project. A former staffer of Reid's was named chairman, and Reid said he was assured by Pete Rouse, a senior White House official, that Magwood would also oppose Yucca. Instead, according to Reid and confirmed by sources familiar with the internal dynamics of the NRC, Magwood worked against the effort to shut down Yucca. "That man I will never , ever forget what a treacherous , miserable liar he is. I met with him because Pete Rouse asked me to meet with him. I said, 'Is he OK on Yucca Mountain?' Pete said, 'Yeah.' So I went through some detail with him as to how important this was to me. 'Senator, I know this industry like the back of my hand. You don't have to worry about me,' [Magwood said]. And the conversation was much deeper than that." Late in 2011, HuffPost reported that Magwood was working with Republicans and the nuclear industry to oust then-NRC Chairman Greg Jaczko, just as he had done to his boss Terry Lash at the Department of Energy in the 1990s. "What I eventually found was that he had been deceptive and disloyal," Lash said of his then- number two, when told what Reid said. "I'm surprised at the strength of it, but it's certainly consistent with what I've seen." Reid and Lash have company in their critique of Magwood. In the earlier story about Magwood and the industry, multiple people who've worked closely with him questioned his integrity, but none did so on the record like Reid and Lash: Concludes aff Magwood ’11 (William d. Magwood iv, nuclear regulatory commission, July 14, http://www.gpo.gov/fdsys/pkg/chrg-112shrg72251/html/chrg-112shrg72251.htm s. Hrg. 112-216, an examination of the safety and economics of light water small modular reactors hearing before a subcommittee of the committee on appropriations united states senate one hundred twelfth congress first session special hearing, July 14, 2011)

All small reactor technologies of the past failed to find a way to overcome the fact that the infrastructure required to safely operate a nuclear power reactor of any size is considerable. Tons of steel and concrete are needed to construct containment buildings. Control rod drives, steam generators, and other key systems are hugely expensive to design and build. A larger plant with greater electric generating capacity simply has an inherently superior opportunity to recover these large upfront costs over a reasonable period. [their card stops, Magwood continues]

So why is today different from yesterday? The greatest difference is the fact that the technology has evolved significantly over the years. Having learned lessons from the development of Generation III+ technologies and from the failure of previous small reactors, today's SMR vendors clearly believe they have solved the riddle of small reactor economics. They are presenting novel design approaches that could lead to significant improvements in nuclear safety. For example, design concepts that I have seen thus far further advance the use of passive safety systems, applying gravity, natural circulation, and very large inventories of cooling water to reduce reliance on human intervention during an emergency. SMR designs also apply novel technologies such as integral pressure vessels that contain all major system components and use fewer and smaller pipes and pumps, thereby reducing the potential for a serious loss-of-coolant accident. Very importantly, these new SMRs are much smaller than the systems designed in the 1990s; this choice was made to assure that they could be factory-built and shipped largely intact by rail for deployment. The ability to "manufacture" a reactor rather than "construct" it onsite could prove to be a major advantage in terms of cost, schedule reliability, and even quality control.
Worked with industry
POGO '9 (Project On Government Oversight, "POGO Opposes Nomination of William Magwood to NRC," http://www.pogo.org/pogo-files/letters/nuclear-security-safety/nss-npp-20090914.html, October 14, 2009)

Mr. Magwood's nomination violates the spirit of President Obama's "Ethics Commitment by Executive Branch Personnel Executive Order" (Ethics Executive Order). Since his retirement from government service in 2005, Mr. Magwood has been actively involved in efforts to advance nuclear industry business opportunities domestically and abroad. He founded Advanced Energy Strategies which provides "expert advice and analysis of U.S. and international energy policy activities; nuclear industry developments and prospects; and supporting business development efforts." Mr. Magwood has also been an investor in and President of Secure Energy North America Corporation, a company that is "working with industry and investors to develop novel approaches to finance new nuclear power stations in the United States." Prior to his government service, Mr. Magwood also managed nuclear policy programs at the Edison Electric Institute, an industry trade association.
2AC- AT: Natural Gas Glut
SMR key to help nuclear beat out natural gas
Lamonica '12 (Martin, Technology Review writer with 20 years of experience covering technology and business, "A Glut of Natural Gas Leaves Nuclear Power Stalled," 8/9/12, www.technologyreview.com/news/428737/a-glut-of-natural-gas-leaves-nuclear-power/)

The nuclear renaissance is in danger of petering out before it has even begun, but not for the reasons most people once thought. Forget safety concerns, or the problem of where to store nuclear waste—the issue is simply cheap, abundant natural gas. ¶ General Electric CEO Jeffrey Immelt caused a stir last month when he told the Financial Times that it's "hard to justify nuclear" in light of low natural gas prices. Since GE sells all manner of power generation equipment, including components for nuclear plants, Immelt's comments hold a lot of weight.¶ Cheap natural gas has become the fuel of choice with electric utilities, making building expensive new nuclear plants an increasingly tough sell . The United States is awash in natural gas largely thanks to horizontal drilling and hydraulic fracturing, or "fracking" technology, which allows drillers to extract gas from shale deposits once considered too difficult to reach. In 2008, gas prices were approaching $13 per million BTUs; prices have now dropped to around $3. ¶ When gas prices were climbing, there were about 30 nuclear plant projects in various stages of planning in the United States. Now the Nuclear Energy Institute estimates that, at most, five plants will be built by 2020, and those will only be built thanks to favorable financing terms and the ability to pay for construction from consumers' current utility bills. Two reactors now under construction in Georgia, for example, moved ahead with the aid of an $8.33 billion loan guarantee from the U.S. Department of Energy. ¶ What happens after those planned projects is hard to predict. "The question is whether we'll see any new nuclear," says Revis James, the director of generation research and development at the Electric Power Research Institute. "The prospects are not good." ¶ Outside the United States, it's a different story. Unconventional sources of natural gas also threaten the expansion of nuclear, although the potential impact is less clear-cut. Around the world, there are 70 plants now under construction, but shale gas also looms as a key factor in planning for the future. Prices for natural gas are already higher in Asia and Europe, and shale gas resources are not as fully developed as they are the United States.¶ Some countries are also blocking the development of new natural gas resources. France, for instance, which has a strong commitment to nuclear, has banned fracking in shale gas exploration because of concerns over the environmental impact.¶ Fast-growing China, meanwhile, needs all the energy sources available and is building nuclear power plants as fast as possible.¶ Even in United States, of course, super cheap natural gas will not last forever. With supply exceeding demand, some drillers are said to be losing money on natural gas, which could push prices back up. Prices will also be pushed upward by utilities, as they come to rely on more natural gas for power generation, says James.¶ Ali Azad, the chief business development officer at energy company Babcock & Wilcox, thinks the answer is making nuclear power smaller , cheaper, and faster. His is one of a handful of companies developing small modular reactors that can be built in three years, rather than 10 or more, for a fraction of the cost of gigawatt-size reactors. 
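[Analytic note: the Lamonica card continues below, where Azad cites a $6-$8 gas-price breakeven. As a minimal sketch under assumed plant parameters (the heat rate and non-fuel cost below are placeholders, not figures from the article), the following shows why the cost of gas-fired generation tracks the gas price almost linearly, which is what makes that breakeven matter.]

```python
# Illustrative only: approximate cost of combined-cycle gas generation as a
# function of the natural gas price. Heat rate and non-fuel cost are assumed
# placeholder values, not figures from the article.

HEAT_RATE_BTU_PER_KWH = 7000   # assumed combined-cycle heat rate
NON_FUEL_COST_PER_MWH = 25.0   # assumed capital + O&M contribution, $/MWh

def gas_generation_cost_per_mwh(gas_price_per_mmbtu):
    """Rough all-in $/MWh cost of gas-fired power at a given gas price ($/MMBtu)."""
    fuel_cost = HEAT_RATE_BTU_PER_KWH * gas_price_per_mmbtu / 1000.0  # $/MWh
    return fuel_cost + NON_FUEL_COST_PER_MWH

for price in (3, 6, 8, 13):
    print(f"Gas at ${price}/MMBtu -> ~${gas_generation_cost_per_mwh(price):.0f}/MWh")
# At ~$3 gas the assumed all-in cost is about $46/MWh; in the $6-$8 range it
# rises to roughly $67-$81/MWh, the window where the card says a factory-built
# SMR could compete.
```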
Although this technology is not yet commercially proven, the company has a customer in the Tennessee Valley Authority, which expects to have its first unit online in 2021 (see "A Preassembled Nuclear Reactor").¶ "When we arrive, we will have a levelized cost of energy on the grid, which competes favorably with a brand-new combined-cycle natural gas plant when gas prices are between $6 to $8," said Azad. He sees strong demand in power-hungry China and places such as Saudi Arabia, where power is needed for desalination.¶ Even if natural gas remains cheaper, utilities don't want to find themselves with an overreliance on gas, which has been volatile on price in the past, so nuclear power will still contribute to the energy mix. "[Utilities] still continue [with nuclear] but with a lower level of enthusiasm—it's a hedging strategy," says Hans-Holger Rogner from the Planning and Economics Studies section of the International Atomic Energy Agency. "They don't want to put all their eggs in one basket because of the new kid on the block called shale gas."
Oil Entanglement Add-on
Plan solves military oil entanglement
Buis '12 (Tom Buis, CEO, Growth Energy, co-written with Growth Energy Board Co-Chair Gen. Wesley K. Clark (Ret.), "American Families Need American Fuel," http://energy.nationaljournal.com/2012/05/powering-our-military-whats-th.php, May 23, 2012)

Our nation is dangerously dependent on foreign oil. We import some 9 million barrels per day, or over 3 billion barrels per year; the U.S. military itself comprises two percent of the nation’s total petroleum use, making it the world’s largest consumer of energy and oil imports. Of U.S. foreign oil imports, one out of five barrels comes from unfriendly nations and volatile areas, including at least 20 percent stemming from the Persian Gulf, including Bahrain, Iraq, Iran, Kuwait, Qatar, Saudi Arabia, and the United Arab Emirates. Further, our nation heavily relies on hot-beds of extremism, as Saudi Arabia, Venezuela, Nigeria are our third, fourth, and fifth, respectively, largest exporters of oil. How dangerous is this? Very! Not only does America’s huge appetite for oil entangle us into complicated relationships with nations marred by unstable political, economic, and security situations, it also gravely impacts our military, who risk their lives daily to protect foreign energy supply routes. Because of our addiction to oil , we have been in almost constant military conflict, lost more than 6,500 soldiers and created a whole new class of wounded warriors, thousands of whom will need long-term care funded by our government. One in eight soldiers killed or wounded in Iraq from 2003-2007 were protecting fuel convoys, with a total of 3,000 Army casualties alone. We maintain extra military forces at an annual cost of about $150 billion annually, just to assure access to foreign oil - because we know that if that stream of 9 million barrels per day is seriously interrupted, our economy will crash. That's what I call dangerously dependent. Even worse, according to a new Bloomberg Government analysis, Pentagon spending on fuel is dramatically increasing. This will force the military to dedicate even more funds toward energy costs, at the expense of other priorities, like training and paying soldiers. In fact, every $.25 increase in the cost of jet fuel makes a $1 billion difference in the Department of Defense’s bottom line – a debt that will be passed along to the American taxpayer. And if that's not enough to make you want to avoid foreign oil, then consider this: every dollar hike in the international, politically-rigged price of oil hands Iran about $3 million more per day, that their regime can use to sow mischief, fund terrorism, and develop missiles and nuclear weapons. Enough is enough! We have domestic alternatives that can protect American interests, and promote prosperity and security – including, more domestic oil production, using natural gas and biofuels, like ethanol, as fuel, converting coal to liquid fuel, and moving as rapidly as possible to vehicles powered by green energy. By introducing clean energy and fuel alternatives, this would rapidly reduce both the strain of securing foreign energy supply routes in unstable regions, as well as unnecessary economic and political entanglement with volatile regimes. It is imperative the U.S. military leverage its position as a leader and enact pertinent energy policies to best enhance American energy – and national – security. These escalate Collina ‘5 (Executive Director of 20-20 Vision, Tom Z. Collina, Executive Director of 20- 20Vision; testimony in front of Committee on Foreign Relations Subcommittee on Near Eastern and South Asian Affairs United States Senate “Oil Dependence and U.S. Foreign Policy: Real Dangers, Realistic Solutions”. October 19, 2005 http://www.globalsecurity.org/military/library/congress/2005_hr/051020-collina.pdf)

More conflicts in the Middle East America imports almost 60% of its oil today and, at this rate, we’ll import 70% by 2025. Where will that oil come from? Two-thirds of the world’s oil is in the Middle East, primarily in Saudi Arabia, Iran and Iraq. The United States has less than 3% of global oil. The Department of Energy predicts that North American oil imports from the Persian Gulf will double from 2001 to 2025.i Other oil suppliers, such as Venezuela, Russia, and West Africa, are also politically unstable and hold no significant long-term oil reserves compared to those in the Middle East. Bottom line: our economy and security are increasingly dependent on one of the most unstable regions on earth. Unless we change our ways , we will find ourselves even more at the mercy of Middle East oil and thus more likely to get involved in future conflicts. The greater our dependence on oil, the greater the pressure to protect and control that oil. The growing American dependence on imported oil is the primary driver of U.S. foreign and military policy today, particularly in the Middle East, and motivates an aggressive military policy now on display in Iraq. To help avoid similar wars in the future and to encourage a more cooperative, responsible, and multilateral foreign policy the U nited S tates must significantly reduce its oil use . Before the Iraq war started, Anthony H. Cordesman of the Center for Strategic and International Studies said: “Regardless of whether we say so publicly, we will go to war, because Saddam sits at the center of a region with more than 60 percent of all the world's oil reserves.” Unfortunately, he was right. In fact, the use of military power to protect the flow of oil has been a central tenet of U.S. foreign policy since 1945. That was the year that President Franklin D. Roosevelt promised King Abdul Aziz of Saudi Arabia that the United States would protect the kingdom in return for special access to Saudi oil—a promise that governs U.S. foreign policy today. This policy was formalized by President Jimmy Carter in 1980 when he announced that the secure flow of oil from the Persian Gulf was in “the vital interests of the United States of America” and that America would use “any means necessary, including military force” to protect those interests from outside forces. This doctrine was expanded by President Ronald Reagan in 1981 to cover internal threats, and was used by the first President Bush to justify the Gulf War of 1990-91, and provided a key, if unspoken rationale for the second President Bush’s invasion of Iraq in 2003.ii The Carter/Reagan Doctrine also led to the build up of U.S. forces in the Persian Gulf on a permanent basis and to the establishment of the Rapid Deployment Force and the U.S. Central Command (CENTCOM). The United States now spends over $50 Billion per year (in peacetime) to maintain our readiness to intervene in the Gulf.iii America has tried to address its oil vulnerability by using our military to protect supply routes and to prop up or install friendly regimes. But as Iraq shows the price is astronomical—$200 Billion and counting. Moreover, it doesn’t work—Iraq is now producing less oil than it did before the invasion. While the reasons behind the Bush administration’s decision to invade Iraq may be complex, can anyone doubt that we would not be there today if Iraq exported coffee instead of oil? It is time for a new approach. Americans are no longer willing to support U.S. misadventures in the Persian Gulf. 
Recent polls show that almost two-thirds of Americans think the Iraq war was not worth the price in terms of blood and treasure. Lt. Gen. William Odom, director of the National Security Agency during President Reagan's second term, recently said: "The invasion of Iraq will turn out to be the greatest strategic disaster in U.S. history." The nation is understandably split about what to do now in Iraq, but there appears to be widespread agreement that America should not make the same mistake again—and we can take a giant step toward that goal by reducing our dependence on oil.
Water Wars Add-on
SMRs solve inevitable water wars
Palley '11 (Reese Palley, The London School of Economics, The Answer: Why Only Inherently Safe, Mini Nuclear Power Plants Can Save Our World, p. 168-71, 2011)

The third world has long been rent in recent droughts, by the search for water. In subsistence economies, on marginal land, water is not a convenience but a matter of life and death. As a result small wars have been fought , rivers diverted, and wells poisoned in what could be a warning of what is to come as industrialized nations begin to face failing water supplies. Quite aside from the demand for potable water is the dependence of enormous swaths of industry and agriculture on oceans of water used for processing, enabling, and cleaning a thousand processes and products. It is interesting to note that fresh water used in both industry and agriculture is reduced to a nonrenewable resource as agriculture adds salt and industry adds a chemical brew unsuitable for consumption. More than one billion people in the world already lack access to clean water , and things are getting worse. Over the next two decades, the average supply of water per person will drop by a third, condemning millions of people to waterborne diseases and an avoidable premature death.81 So the stage is set for water access wars between the first and the third worlds, between neighbors downstream of supply, between big industry and big agriculture, between nations, between population centers, and ultimately between you and the people who live next door for an already inadequate world water supply that is not being renewed. As populations inevitably increase, conflicts will intensify.82 It is only by virtue of the historical accident of the availability of nuclear energy that humankind now has the ability to remove the salt and other pollutants to supply all our water needs. The problem is that desalination is an intensely local process. Some localities have available sufficient water from renewable sources to take care of their own needs, but not enough to share with their neighbors, and it is here that the scale of nuclear energy production must be defined locally . Large scale 1,000 MWe plants can be used to desalinate water as well as for generating electricity However we cannot build them fast enough to address the problem, and, if built they would face the extremely expensive problem of distributing the water they produce. Better, much better, would be to use small desalinization plants sited locally. Beyond desalination for human use is the need to green some of the increasing desertification of vast areas such as the Sahara. Placing twenty 100 MWe plants a hundred miles apart along the Saharan coast would green the coastal area from the Atlantic Ocean to the Red Sea, a task accomplished more cheaply and quickly than through the use of gigawatt plants.83 This could proceed on multiple tracks wherever deserts are available to be reclaimed. Leonard Orenstein, a researcher in the field of desert reclamation, speculates: If most of the Sahara and Australian outback were planted with fast-growing trees like eucalyptus, the forests could draw down about 8 billion tons of carbon a year—nearly as much as people emit from burning fossil fuels today. As the forests matured, they could continue taking up this much carbon for decades.84 The use of small , easily transported, easily sited, and walk away safe nuclear reactors dedicated to desalination is the only answer to the disproportionate distribution of water resources that have distorted human habitation patterns for millennia. Where there existed natural water, such as from rivers, great cities arose and civilizations flourished. 
Other localities lay barren through the ages. We now have the power, by means of SMRs profiled to local conditions, not only to attend to existing water shortages but also to smooth out disproportionate water distribution and create green habitation where historically it has never existed. The endless wars that have been fought, first over solid bullion gold and then over oily black gold, can now engulf us in the desperate reach for liquid blue gold. We need never fight these wars again, as we now have the nuclear power to fulfill the biblical ability to "strike any local rock and have water gush forth."
North Korean Prolif Add-on
SMRs solve North Korean prolif
Goodby and Heiskanen '12 (James, former arms control negotiator and a Hoover Institution Fellow, and Markku, Associate and Program Director of The Asia Institute at Kyung Hee University in Seoul, "The Seoul Nuclear Security Summit: New Thinking in Northeast Asia?" March 20, 2012, http://nautilus.org/napsnet/napsnet-policy-forum/the-seoul-nuclear-security-summit-new-thinking-in-northeast-asia/)

The nuclear crises in the Middle East and Northeast Asia and the stalled promise of a nuclear renaissance in civil nuclear power could all be solved by a more rational approach to the generation of electric power. Although it will take years before the current, outdated system is replaced, the Seoul meeting could provide a political impetus. The new system would rest on three legs: small modular reactors (“mini-reactors”), internationally managed nuclear fuel services, and increasing reliance on the distributed (local) generation of electricity. After the disaster in Fukushima, there has been an understandable retreat from plans for large-scale reactors, with their inevitable safety issues. A vivid example of this reaction is found in Germany, which has cancelled its plans to increase the generation of electricity from nuclear reactors even though they are cleaner and more dependable than most other sources currently available. Vulnerabilities and inefficiencies of long-distance transmission lines point to a paradigm for generation and distribution of electric power that is more local – connected to national grids, to be sure, but able to operate independently of them. This is an ideal situation for mini- reactors, which are safer and less prone to encourage the spread of nuclear weapons. International ly managed nuclear fuel services already exist and the security of supply can be assured by policies that foster more fuel service centers in Asia and elsewhere, including in the United States. These factors would enable suppliers of mini-reactors to expand their business to nations like North Korea and Iran under IAEA safeguards. The relevance of this energy paradigm to resolving the issues in North Korea and Iran is evident: both nations could develop civil nuclear programs with assured supplies of nuclear fuel from multiple internationally managed fuel service centers in Russia, China, and Western Europe while avoiding the ambiguity of national ly operated plutonium reprocessing and uranium enrichment. Reliance on distributed generation of electricity would be more efficient and less prone to blackouts. And the presence of a level playing field should be apparent from the fact that similar arrangements would be the 21st-century way of generating electricity from nuclear energy in the developed economies as well as in energy-starved economies such as India and China. Nuclear war Hayes & Hamel-Green ’10 [*Victoria University AND **Executive Director of the Nautilus Institute (Peter and Michael, “-“The Path Not Taken, the Way Still Open: Denuclearizing the Korean Peninsula and Northeast Asia”, 1/5, http://www.nautilus.org/fora/security/10001HayesHamalGreen.pdf]

The consequences of failing to address the proliferation threat posed by the North Korea developments, and related political and economic issues, are serious , not only for the Northeast Asian region but for the whole international community. At worst, there is the possibility of nuclear attack1, whether by intention, miscalc ulation, or merely accident , leading to the resumption of Korean War hostilities. On the Korean Peninsula itself, key population centres are well within short or medium range missiles. The whole of Japan is likely to come within North Korean missile range. Pyongyang has a population of over 2 million, Seoul (close to the North Korean border) 11 million, and Tokyo over 20 million. Even a limited nuclear exchange would result in a holocaust of unprecedented proportions. But the catastrophe within the region would not be the only outcome. New research indicates that even a limited nuclear war in the region would rearrange our global climate far more quickly than global warming. Westberg draws attention to new studies modelling the effects of even a limited nuclear exchange involving approximately 100 Hiroshima-sized 15 kt bombs2 (by comparison it should be noted that the United States currently deploys warheads in the range 100 to 477 kt, that is, individual warheads equivalent in yield to a range of 6 to 32 Hiroshimas).The studies indicate that the soot from the fires produced would lead to a decrease in global temperature by 1.25 degrees Celsius for a period of 6-8 years.3 In Westberg’s view: That is not global winter, but the nuclear darkness will cause a deeper drop in temperature than at any time during the last 1000 years. The temperature over the continents would decrease substantially more than the global average. A decrease in rainfall over the continents would also follow...The period of nuclear darkness will cause much greater decrease in grain production than 5% and it will continue for many years... hundreds of millions of people will die from hunger...To make matters even worse, such amounts of smoke injected into the stratosphere would cause a huge reduction in the Earth’s protective ozone.4 These, of course, are not the only consequences. Reactors might also be targeted, causing further mayhem and downwind radiation effects, superimposed on a smoking, radiating ruin left by nuclear next-use. Millions of refugees would flee the affected regions. The direct impacts , and the follow-on impacts on the global economy via ecological and food insecurity , could make the present global financial crisis pale by comparison. How the great powers, especially the nuclear weapons states respond to such a crisis, and in particular, whether nuclear weapons are used in response to nuclear first-use, could make or break the global non proliferation and disarmament regimes. There could be many unanticipated impacts on regional and global security relationships5, with subsequent nuclear breakout and geopolitical turbulence, including possible loss-of-control over fissile material or warheads in the chaos of nuclear war, and aftermath chain-reaction affects involving other potential proliferant states. 
The Korean nuclear proliferation issue is not just a regional threat but a global one that warrants priority consideration from the international community.
Lashout Add-on
Grid collapse causes nuclear lashout
Lawson '9 (Sean, Assistant Professor in the Department of Communication at the University of Utah, "Cross-Domain Response to Cyber Attacks and the Threat of Conflict Escalation," May 13, 2009, http://www.seanlawson.net/?p=477)

Introduction At a time when it seems impossible to avoid the seemingly growing hysteria over the threat of cyber war,[1] network security expert Marcus Ranum delivered a refreshing talk recently, “The Problem with Cyber War,” that took a critical look at a number of the assumptions underlying contemporary cybersecurity discourse in the United States. He addressed one issue in partiuclar that I would like to riff on here, the issue of conflict escalation–i.e. the possibility that offensive use of cyber attacks could escalate to the use of physical force. As I will show, his concerns are entirely legitimate as current U.S. military cyber doctrine assumes the possibility of what I call “cross-domain responses” to cyberattacks. Backing Your Adversary (Mentally) into a Corner Based on the premise that completely blinding a potential adversary is a good indicator to that adversary that an attack is iminent, Ranum has argued that “The best thing that you could possibly do if you want to start World War III is launch a cyber attack. [...] When people talk about cyber war like it’s a practical thing, what they’re really doing is messing with the OK button for starting World War III. We need to get them to sit the f-k down and shut the f-k up.” [2] He is making a point similar to one that I have made in the past: Taking away an adversary’s ability to make rational decisions could backfire. [3] For example, Gregory Witol cautions that “attacking the decision maker’s ability to perform rational calculations may cause more problems than it hopes to resolve… Removing the capacity for rational action may result in completely unforeseen consequences, including longer and bloodier battles than may otherwise have been.” [4] Cross-Domain Response So, from a theoretical standpoint, I think his concerns are well founded. But the current state of U.S. policy may be cause for even greater concern. It’s not just worrisome that a hypothetical blinding attack via cyberspace could send a signal of imminent attack and therefore trigger an irrational response from the adversary. What is also cause for concern is that current U.S. policy indicates that “kinetic attacks” (i.e. physical use of force) are seen as potentially legitimate responses to cyber attacks. Most worrisome is that current U.S. policy implies that a nuclear response is possible, something that policy makers have not denied in recent press reports. The reason, in part, is that the U.S. defense community has increasingly come to see cyberspace as a “domain of warfare” equivalent to air, land, sea, and space. The definition of cyberspace as its own domain of warfare helps in its own right to blur the online/offline, physical-space/cyberspace boundary. But thinking logically about the potential consequences of this framing leads to some disconcerting conclusions. If cyberspace is a domain of warfare, then it becomes possible to define “cyber attacks” (whatever those may be said to entail) as acts of war. But what happens if the U.S. is attacked in any of the other domains? It retaliates. But it usually does not respond only within the domain in which it was attacked. Rather, responses are typically “cross-domain responses”–i.e. a massive bombing on U.S. soil or vital U.S. interests abroad (e.g. think 9/11 or Pearl Harbor) might lead to air strikes against the attacker. Even more likely given a U.S. military “way of warfare” that emphasizes multidimensional, “joint” operations is a massive conventional (i.e. 
non-nuclear) response against the attacker in all domains (air, land, sea, space), simultaneously. The possibility of “kinetic action” in response to cyber attack, or as part of offensive U.S. cyber operations, is part of the current (2006) National Military Strategy for Cyberspace Operations [5]: Of course, the possibility that a cyber attack on the U.S. could lead to a U.S. nuclear reply constitutes possibly the ultimate in “cross-domain response.” And while this may seem far fetched, it has not been ruled out by U.S. defense policy makers and is, in fact, implied in current U.S. defense policy documents. From the National Military Strategy of the United States (2004): “The term WMD/E relates to a broad range of adversary capabilities that pose potentially devastating impacts. WMD/E includes chemical, biological, radiological, nuclear, and enhanced high explosive weapons as well as other, more asymmetrical ‘weapons’. They may rely more on disruptive impact than destructive kinetic effects. For example, cyber attacks on US commercial information systems or attacks against transportation networks may have a greater economic or psychological effect than a relatively small release of a lethal agent.” [6] The authors of a 2009 National Academies of Science report on cyberwarfare respond to this by saying, “Coupled with the declaratory policy on nuclear weapons described earlier, this statement implies that the United States will regard certain kinds of cyberattacks against the United States as being in the same category as nuclear, biological, and chemical weapons, and thus that a nuclear response to certain kinds of cyberattacks (namely, cyberattacks with devastating impacts) may be possible. It also sets a relevant scale–a cyberattack that has an impact larger than that associated with a relatively small release of a lethal agent is regarded with the same or greater seriousness.” [7] Asked by the New York Times to comment on this, U.S. defense officials would not deny that nuclear retaliation remains an option for response to a massive cyberattack: “Pentagon and military officials confirmed that the United States reserved the option to respond in any way it chooses to punish an adversary responsible for a catastrophic cyberattack. While the options could include the use of nuclear weapons, officials said, such an extreme counterattack was hardly the most likely response.” [8] The rationale for this policy: “Thus, the United States never declared that it would be bound to respond to a Soviet and Warsaw Pact conventional invasion with only American and NATO conventional forces. The fear of escalating to a nuclear conflict was viewed as a pillar of stability and is credited with helping deter the larger Soviet-led conventional force throughout the cold war. Introducing the possibility of a nuclear response to a catastrophic cyberattack would be expected to serve the same purpose.” [9] Non-unique, Dangerous, and In-credible? There are a couple of interesting things to note in response. First is the development of a new acronym, WMD/E (weapons of mass destruction or effect). Again, this acronym indicates a weakening of the requirement of physical impacts. In this new definition, mass effects that are not necessarily physical, nor necessarily destructive, but possibly only disruptive economically or even psychologically (think “shock and awe”) are seen as equivalent to WMD. 
This new emphasis on effects, disruption, and psychology reflects both contemporary, but also long-held beliefs within the U.S. defense community. It reflects current thinking in U.S. military theory, in which it is said that U.S. forces should be able to “mass fires” and “mass effects” without having to physically “mass forces.” There is a sliding scale in which the physical (often referred to as the “kinetic”) gradually retreats–i.e. massed forces are most physical; massed fire is less physical (for the U.S. anyway); and massed effects are the least physical, having as the ultimate goal Sun Tzu’s “pinnacle of excellence,” winning without fighting. But the emphasis on disruption and psychology in WMD/E has also been a key component of much of 20th century military thought in the West. Industrial theories of warfare in the early 20th century posited that industrial societies were increasingly interdependent and reliant upon mass production, transportation, and consumption of material goods. Both industrial societies and the material links that held them together, as well as industrial people and their own internal linkages (i.e. nerves), were seen as increasingly fragile and prone to disruption via attack with the latest industrial weapons: airplanes and tanks. Once interdependent and fragile industrial societies were hopelessly disrupted via attack by the very weapons they themselves created, the nerves of modern, industrial men and women would be shattered, leading to moral and mental defeat and a loss of will to fight. Current thinking about the possible dangers of cyber attack upon the U.S. are based on the same basic premises: technologically dependent and therefore fragile societies populated by masses of people sensitive to any disruption in expected standards of living are easy targets. Ultimately, however, a number of researchers have pointed out the pseudo-psychological, pseudo-sociological, and a-historical (not to mention non-unique) nature of these assumptions. [10] Others have pointed out that these assumptions did not turn out to be true during WWII strategic bombing campaigns, that modern, industrial societies and populations were far more resilient than military theorists had assumed. [11] Finally, even some military theorists have questioned the assumptions behind cyber war, especially when assumptions about our own technology dependence-induced societal fragility (dubious on their own) are applied to other societies, especially non-Western societies (even more dubious). [12] Finally, where deterrence is concerned, it is important to remember that a deterrent has to be credible to be effective. True, the U.S. retained nuclear weapons as a deterrent during the Cold War. But, from the 1950s through the 1980s, there was increasing doubt among U.S. planners regarding the credibility of U.S. nuclear deterrence via the threat of “massive retaliation.” As early as the 1950s it was becoming clear that the U.S. would be reluctant at best to actually follow through on its threat of massive retaliation. Unfortunately, most money during that period had gone into building up the nuclear arsenal; conventional weapons had been marginalized. Thus, the U.S. had built a force it was likely never to use. So, the 1960s, 1970s, and 1980s saw the development of concepts like “flexible response” and more emphasis on building up conventional forces. This was the big story of the 1980s and the “Reagan build-up” (not “Star Wars”). 
Realizing that, after a decade of distraction in Vietnam, it was back in a position vis-a-viz the Soviets in Europe in which it would have to rely on nuclear weapons to offset its own weakness in conventional forces, a position that could lead only to blackmail or holocaust, the U.S. moved to create stronger conventional forces. [13] Thus, the question where cyber war is concerned: If it was in-credible that the U.S. would actually follow through with massive retaliation after a Soviet attack on the U.S. or Western Europe, is it really credible to say that the U.S. would respond with nuclear weapons to a cyber attack, no matter how disruptive or destructive? Beyond credibility, deterrence makes many other assumptions that are problematic in the cyber war context. It assumes an adversary capable of being deterred. Can most of those who would perpetrate a cyber attack be deterred? Will al-Qa’ida be deterred? How about a band of nationalistic or even just thrill-seeker, bandwagon hackers for hire? Second, it assumes clear lines of command and control. Sure, some hacker groups might be funded and assisted to a great degree by states. But ultimately, even cyber war theorists will admit that it is doubtful that states have complete control over their armies of hacker mercenaries. How will deterrence play out in this kind of scenario? Iran Prolif Add-on SMRs solve Iran prolif Goodby and Heiskanen ’12 (James, former arms control negotiator and a Hoover Institution Fellow, Markku, Associate and Program Director of The Asia Institute at the Kyung Hee University in Seoul [“The Seoul Nuclear Security Summit: New Thinking in Northeast Asia?” March 20th, http://nautilus.org/napsnet/napsnet-policy-forum/the-seoul-nuclear-security-summit- new-thinking-in-northeast-asia/]

The nuclear crises in the Middle East and Northeast Asia and the stalled promise of a nuclear renaissance in civil nuclear power could all be solved by a more rational approach to the generation of electric power. Although it will take years before the current, outdated system is replaced, the Seoul meeting could provide a political impetus. The new system would rest on three legs: small modular reactors (“mini-reactors”), internationally managed nuclear fuel services, and increasing reliance on the distributed (local) generation of electricity. After the disaster in Fukushima, there has been an understandable retreat from plans for large-scale reactors, with their inevitable safety issues. A vivid example of this reaction is found in Germany, which has cancelled its plans to increase the generation of electricity from nuclear reactors even though they are cleaner and more dependable than most other sources currently available. Vulnerabilities and inefficiencies of long-distance transmission lines point to a paradigm for generation and distribution of electric power that is more local – connected to national grids, to be sure, but able to operate independently of them. This is an ideal situation for mini-reactors, which are safer and less prone to encourage the spread of nuclear weapons. International ly managed nuclear fuel services already exist and the security of supply can be assured by policies that foster more fuel service centers in Asia and elsewhere, including in the United States. These factors would enable suppliers of mini-reactors to expand their business to nations like North Korea and Iran under IAEA safeguards. The relevance of this energy paradigm to resolving the issues in North Korea and Iran is evident: both nations could develop civil nuclear programs with assured supplies of nuclear fuel from multiple internationally managed fuel service centers in Russia, China, and Western Europe while avoiding the ambiguity of national ly operated plutonium reprocessing and uranium enrichment. Reliance on distributed generation of electricity would be more efficient and less prone to blackouts. And the presence of a level playing field should be apparent from the fact that similar arrangements would be the 21st- century way of generating electricity from nuclear energy in the developed economies as well as in energy-starved economies such as India and China. Iran prolif causes israel-iran war and destabilizing regional prolif Montgomery ‘11 (Eric Edelman, distinguished fellow at the center for strategic and budgetary assessments, Andrew Krepinevich, President of the CSBA, evan montgomery, research fellow at the CSBA [“Why Obama Should Take Out Iran's Nuclear Program,” http://www.foreignaffairs.com/articles/136655/eric-s-edelman-andrew-f-krepinevich-jr-and- evan-braden-montgomer/why-obama-should-take-out-irans-nuclear-program?cid=nlc- this_week_on_foreignaffairs_co-111011-why_obama_should_take_out_iran-111011#)

Even so, the U.S. government might persist with its existing approach if it believes that the consequences of a nuclear-armed Iran are manageable through a combination of containment and deterrence. In fact, the Obama administration has downplayed the findings of the new IAEA report, suggesting that a change in U.S. policy is unlikely. Yet this view underestimates the challenges that the United States would confront once Iran acquired nuclear weapons. For example, the Obama administration should not discount the possibility of an Israeli-Iranian nuclear conflict. From the very start, the nuclear balance between these two antagonists would be unstable. Because of the significant disparity in the sizes of their respective arsenals (Iran would have a handful of warheads compared to Israel's estimated 100-200), both sides would have huge incentives to strike first in the event of a crisis. Israel would likely believe that it had only a short period during which it could launch a nuclear attack that would wipe out most, if not all, of Iran's weapons and much of its nuclear infrastructure without Tehran being able to retaliate. For its part, Iran might decide to use its arsenal before Israel could destroy it with a preemptive attack. The absence of early warning systems on both sides and the extremely short flight time for ballistic missiles heading from one country to the other would only heighten the danger. Decision-makers would be under tremendous pressure to act quickly. Beyond regional nuclear war, Tehran's acquisition of these weapons could be a catalyst for additional proliferation throughout the Middle East and beyond. Few observers have failed to note that the United States has treated nuclear-armed rogues, such as North Korea, very differently from non-nuclear ones, such as Iraq and Libya. If Iran became a nuclear power and the United States reacted with a policy of containment, nuclear weapons would only be more appealing as the ultimate deterrent to outside intervention. Meanwhile, Iran's rivals for regional dominance, such as Turkey, Egypt, and Saudi Arabia, might seek their own nuclear devices to counterbalance Tehran. The road to acquiring nuclear weapons is generally a long and difficult one, but these nations might have shortcuts. Riyadh, for example, could exploit its close ties to Islamabad -- which has a history of illicit proliferation and a rapidly expanding nuclear arsenal -- to become a nuclear power almost overnight.