Why Maximize Expected Choice-Worthiness? William MacAskill and Toby Ord, University of Oxford
Recommended publications
Effective Altruism. William MacAskill and Theron Pummer
Climate change is on course to cause millions of deaths and cost the world economy trillions of dollars. Nearly a billion people live in extreme poverty, millions of them dying each year of easily preventable diseases. Just a small fraction of the thousands of nuclear weapons on hair‐trigger alert could easily bring about global catastrophe. New technologies like synthetic biology and artificial intelligence bring unprecedented risks. Meanwhile, year after year billions and billions of factory‐farmed animals live and die in misery. Given the number of severe problems facing the world today, and the resources required to solve them, we may feel at a loss as to where to even begin. The good news is that we can improve things with the right use of money, time, talent, and effort. These resources can bring about a great deal of improvement, or very little, depending on how they are allocated. The effective altruism movement consists of a growing global community of people who use reason and evidence to assess how to do as much good as possible, and who take action on this basis. Launched in 2011, the movement now has thousands of members, as well as influence over billions of dollars. The movement has substantially increased awareness of the fact that some altruistic activities are much more cost‐effective than others, in the sense that they do much more good than others per unit of resource expended. According to the nonprofit organization GiveWell, it costs around $3,500 to prevent someone from dying of malaria by distributing bed nets.
Apocalypse Now? Initial Lessons from the Covid-19 Pandemic for the Governance of Existential and Global Catastrophic Risks
Journal of International Humanitarian Legal Studies 11 (2020) 295-310, brill.com/ihls, doi:10.1163/18781527-01102004. Hin-Yan Liu, Kristian Lauta and Matthijs Maas, Faculty of Law, University of Copenhagen, Copenhagen, Denmark. [email protected]; [email protected]; [email protected]. Abstract: This paper explores the ongoing Covid-19 pandemic through the framework of existential risks – a class of extreme risks that threaten the entire future of humanity. In doing so, we tease out three lessons: (1) possible reasons underlying the limits and shortfalls of international law, international institutions and other actors which Covid-19 has revealed, and what they reveal about the resilience or fragility of institutional frameworks in the face of existential risks; (2) using Covid-19 to test and refine our prior 'Boring Apocalypses' model for understanding the interplay of hazards, vulnerabilities and exposures in facilitating a particular disaster, or magnifying its effects; and (3) extrapolating some possible futures for existential risk scholarship and governance. Keywords: Covid-19 – pandemics – existential risks – global catastrophic risks – boring apocalypses. 1. Introduction: Our First 'Brush' with Existential Risk? All too suddenly, yesterday's 'impossibilities' have turned into today's 'conditions'. The impossible has already happened, and quickly. The impact of the Covid-19 pandemic, both directly and as manifested through the far-reaching global societal responses to it, signals a jarring departure from even the recent past, and suggests that our futures will be profoundly different in its aftermath.
Existential Risk and Existential Hope: Definitions
Future of Humanity Institute – Technical Report #2015-1. Owen Cotton-Barratt & Toby Ord. We look at the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: 'existential hope', the chance of something extremely good happening. An existential risk is a chance of a terrible event occurring, such as an asteroid striking the earth and wiping out intelligent life – we could call such events existential catastrophes. In order to understand what should be thought of as an existential risk, it is necessary to understand what should be thought of as an existential catastrophe. This is harder to pin down than it first seems. 1. The simple definition. One fairly crisp approach is to draw the line at extinction: Definition (i): An existential catastrophe is an event which causes the end of existence of our descendants. This has the virtue that it is a natural division and is easy to understand. And we certainly want to include all extinction events. But perhaps it doesn't cast a wide enough net. Example A: A totalitarian regime takes control of earth. It uses mass surveillance to prevent any rebellion, and there is no chance for escape. This regime persists for thousands of years, eventually collapsing when a supervolcano throws up enough ash that agriculture is prevented for decades, and no humans survive. In Example A, clearly the eruption was bad, but the worst of the damage was done earlier. After the totalitarian regime was locked in, it was only a matter of time until something or other finished things off.
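The excerpt names an expected-value-based definition without stating it. One way such a definition might be formalized is sketched below; this is our own notation under stated assumptions, not necessarily the authors' exact formulation:

```latex
% Sketch (assumed notation): let V be a random variable for the total
% value realized by humanity's long-term future. An event E counts as
% an existential catastrophe when it destroys all, or almost all, of
% the expected value of that future:
\[
  \mathbb{E}[V \mid E] \;\le\; \theta \cdot \mathbb{E}[V],
  \qquad \text{for some small threshold } 0 \le \theta \ll 1 .
\]
% The corresponding existential risk is then the probability P(E), and
% 'existential hope' is the mirror image: the chance of an event that
% greatly raises the expected value of the future.
```

On this reading, Example A's totalitarian lock-in already satisfies the condition, even though extinction comes later, which is exactly why the simple extinction-based definition casts too narrow a net.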
Whether and Where to Give (Forthcoming in Philosophy and Public Affairs)
Theron Pummer, University of St Andrews. 1. The Ethics of Giving and Effective Altruism. The ethics of giving has traditionally focused on whether, and how much, to give to charities helping people in extreme poverty. In more recent years, the discussion has increasingly focused on where to give, spurred by an appreciation of the substantial differences in cost-effectiveness between charities. According to a commonly cited example, $40,000 can be used either to help 1 blind person by training a seeing-eye dog in the United States or to help 2,000 blind people by providing surgeries reversing the effects of trachoma in Africa. Effective altruists recommend that we give large sums to charity, but by far their more central message is that we give effectively, i.e., to whatever charities would do the most good per dollar donated. In this paper, I'll assume that it's not wrong not to give bigger, but will explore to what extent it may well nonetheless be wrong not to give better. The main claim I'll argue for here is that in many cases it would be wrong of you to give a sum of money to charities that do less good than others you could have given to instead, even if
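The trachoma example above is simple division, but it is worth making explicit. A minimal sketch (the dollar figures come from the excerpt; the function name is ours):

```python
# Cost-effectiveness comparison from the excerpt: the same $40,000 can
# train one seeing-eye dog (helping 1 blind person) or fund roughly
# 2,000 trachoma-reversing surgeries.

def cost_per_person_helped(total_cost_usd: float, people_helped: int) -> float:
    """Dollars spent per person helped by an intervention."""
    return total_cost_usd / people_helped

guide_dog = cost_per_person_helped(40_000, 1)      # $40,000 per person
trachoma = cost_per_person_helped(40_000, 2_000)   # $20 per person

# The two uses of the same money differ in cost-effectiveness
# by a factor of 2,000.
print(guide_dog, trachoma, guide_dog / trachoma)  # 40000.0 20.0 2000.0
```

The point of the sketch is only that the ranking between charities is driven by a ratio, so even rough per-outcome cost estimates can reveal orders-of-magnitude differences.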
William MacAskill. Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work That Matters, and Make Smarter Choices About Giving Back
Philosophy in Review XXXIX (November 2019), no. 4. William MacAskill. Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work that Matters, and Make Smarter Choices About Giving Back. Avery 2016. 272 pp. $17.00 USD (Paperback ISBN 9781592409662). Will MacAskill's Doing Good Better provides an introduction to the Effective Altruism movement, and, in the process, it makes a strong case for its importance. The book is aimed at a general audience. It is fairly short and written for the most part in a light, conversational tone. Doing Good Better's only real rival as a treatment of Effective Altruism is Peter Singer's The Most Good You Can Do, though MacAskill's and Singer's books are better seen as companion pieces than rivals. Like The Most Good You Can Do, Doing Good Better offers the reader much of philosophical interest, and it delivers novel perspectives and even some counterintuitive but well-reasoned conclusions that will likely provoke both critics and defenders of Effective Altruism for some time to come. Before diving into Doing Good Better, we want to take a moment to characterize Effective Altruism. Crudely put, Effective Altruists are committed to three claims. First, they maintain that we have strong reason to help others. Second, they claim that these reasons are impartial in nature. And, third, they hold that we are required to act on these reasons in the most effective manner possible. Hence, according to Effective Altruists, those of us who are fortunate enough to have the leisure to write (or read) scholarly book reviews (1) should help those who are most in need, (2) should do so even if we lack any personal connection to them, and (3) should do so as efficiently as we can.
The Definition of Effective Altruism
William MacAskill. There are many problems in the world today. Over 750 million people live on less than $1.90 per day (at purchasing power parity). Around 6 million children die each year of easily preventable causes such as malaria, diarrhea, or pneumonia. Climate change is set to wreak environmental havoc and cost the economy trillions of dollars. A third of women worldwide have suffered from sexual or other physical violence in their lives. More than 3,000 nuclear warheads are in high-alert ready-to-launch status around the globe. Bacteria are becoming antibiotic-resistant. Partisanship is increasing, and democracy may be in decline. Given that the world has so many problems, and that these problems are so severe, surely we have a responsibility to do something about them. But what? There are countless problems that we could be addressing, and many different ways of addressing each of those problems. Moreover, our resources are scarce, so as individuals and even as a globe we can't solve all these problems at once. So we must make decisions about how to allocate the resources we have. But on what basis should we make such decisions? The effective altruism movement has pioneered one approach. Those in this movement try to figure out, of all the different uses of our resources, which uses will do the most good, impartially considered. This movement is gathering considerable steam. There are now thousands of people around the world who have chosen
Unprecedented Technological Risks
Policy Brief: Unprecedented Technological Risks. Future of Humanity Institute, University of Oxford, September 2014. Over the next few decades, the continued development of dual-use technologies will provide major benefits to society. They will also pose significant and unprecedented global risks, including risks of new weapons of mass destruction, arms races, or the accidental deaths of billions of people. Synthetic biology, if more widely accessible, would give terrorist groups the ability to synthesise pathogens more dangerous than smallpox; geoengineering technologies would give single countries the power to dramatically alter the earth's climate; distributed manufacturing could lead to nuclear proliferation on a much wider scale; and rapid advances in artificial intelligence could give a single country a decisive strategic advantage. These scenarios might seem extreme or outlandish. But they are widely recognised as significant risks by experts in the relevant fields. To safely navigate these risks, and harness the potentially great benefits of these new technologies, we must proactively provide research, assessment, monitoring, and guidance, on a global level. This report gives an overview of these risks and their importance, focusing on risks of extreme catastrophe, which we believe to be particularly neglected. The report explains why market and political circumstances have led to a deficit of regulation on these issues, and offers some policy proposals as starting points for how these risks could be addressed. Executive Summary: The development of nuclear weapons was, at the time, an unprecedented technological risk. The … than nuclear weapons, because they are more difficult to control.
Rationality Spring 2020, Tues & Thurs 1:30-2:45 Harvard University
General Education 1066: Rationality. Spring 2020, Tues & Thurs 1:30-2:45, Harvard University. Description: The nature, psychology, and applications of rationality. Rationality is, or ought to be, the basis of everything we think and do. Yet in an era with unprecedented scientific sophistication, we are buffeted by fake news, quack cures, conspiracy theories, and "post-truth" rhetoric. How should we reason about reason? Rationality has long been a foundational topic in the academy, including philosophy, psychology, AI, economics, mathematics, and government. Recently, discoveries on how people reason have earned three Nobel Prizes, and many applied fields are being revolutionized by rational, evidence-based, and effective approaches. Part I: The nature of rationality. Tools of reason, including logic, statistical decision theory, Bayesian inference, rational choice, game theory, critical thinking, and common fallacies. Part II: The cognitive science of rationality, including classic research by psychologists and behavioral economists. Is Homo sapiens a "rational animal"? Could our irrational heuristics and biases be evolutionary adaptations to a natural information environment? Could beliefs that are factually irrational be socially rational in a drive for individual status or tribal solidarity? Can people be cured of their irrationality? Part III: Rationality in the world. How can our opinions, policies, and practices be made more rational? Can rational analyses offer more effective means of improving the world? Examples will include journalism, climate change, sports, crime, government, medicine, political protest, social change, philanthropy, and other forms of effective altruism. These topics will be presented by guest lecturers, many of them well-known authors and public figures. For the capstone project, students will select a major national or global problem, justify the choice, and lay out the most rational means to mitigate or solve it.
The Moral Imperative toward Cost-Effectiveness in Global Health (Toby Ord, March 2013)
Center for Global Development essay. Toby Ord, March 2013. www.cgdev.org/content/publications/detail/1427016. Abstract: Getting good value for the money with scarce resources is a substantial moral issue for global health. This claim may be surprising to some, since conversations on the ethics of global health often focus on moral concerns about justice, fairness, and freedom. But outcomes and consequences are also of central moral importance in setting priorities. In this essay, Toby Ord explores the moral relevance of cost-effectiveness, a major tool for capturing the relationship between resources and outcomes, by illustrating what is lost in moral terms for global health when cost-effectiveness is ignored. For example, the least effective HIV/AIDS intervention produces less than 0.1 percent of the value of the most effective. In practical terms, this can mean hundreds, thousands, or millions of additional deaths due to a failure to prioritize. Ultimately, the author suggests creating an active process of reviewing and analyzing global health interventions, in order to deliver the bulk of global health funds to the very best. Toby Ord is the founder of Giving What We Can and a James Martin Fellow in the Programme on the Impacts of Future Technology, Oxford University. The Center for Global Development is an independent, nonprofit policy research organization that is dedicated to reducing global poverty and inequality and to making globalization work for the poor. CGD is grateful for contributions from the Bill and Melinda Gates Foundation in support of this work.
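Ord's less-than-0.1-percent figure implies that a fixed budget moved from the least to the most effective intervention yields over a thousand times the benefit. A hedged sketch of that implication (only the 0.1 percent ratio comes from the essay; the budget and per-dollar values are hypothetical placeholders):

```python
# Only the "< 0.1 percent" ratio is from the essay; the budget and the
# value-per-dollar of the best intervention are hypothetical placeholders.

def total_value(budget_usd: float, value_per_dollar: float) -> float:
    """Total health value produced by spending a budget at a given value per dollar."""
    return budget_usd * value_per_dollar

BEST_VALUE_PER_DOLLAR = 1.0    # hypothetical, in arbitrary health units per dollar
WORST_VALUE_PER_DOLLAR = 0.001 * BEST_VALUE_PER_DOLLAR  # 0.1% of the best

budget = 10_000_000  # hypothetical $10M allocation
best = total_value(budget, BEST_VALUE_PER_DOLLAR)
worst = total_value(budget, WORST_VALUE_PER_DOLLAR)

# Allocating the whole budget to the least effective intervention
# forfeits 99.9% of the achievable benefit.
print(best / worst)  # 1000.0
```

Because the conclusion depends only on the ratio, not on the placeholder budget, the same thousandfold gap holds at any scale of spending.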
Risks of Space Colonization
Marko Kovic, July 2020. Abstract: Space colonization is humankind's best bet for long-term survival. This makes the expected moral value of space colonization immense. However, colonizing space also creates risks: risks whose potential harm could easily overshadow all the benefits of humankind's long-term future. In this article, I present a preliminary overview of some major risks of space colonization: prioritization risks, aberration risks, and conflict risks. Each of these risk types contains risks that can create enormous disvalue; in some cases orders of magnitude more disvalue than all the potential positive value humankind could have. From a (weakly) negative, suffering-focused utilitarian view, we therefore have the obligation to mitigate space colonization-related risks and make space colonization as safe as possible. In order to do so, we need to start working on real-world space colonization governance. Given the near total lack of progress in the domain of space governance in recent decades, however, it is uncertain whether meaningful space colonization governance can be established in the near future, and before it is too late. 1. Introduction: The value of colonizing space. Space colonization, the establishment of permanent human habitats beyond Earth, has been the object of both popular speculation and scientific inquiry for decades. The idea of space colonization has an almost poetic quality: Space is the next great frontier, the next great leap for humankind, that we hope to eventually conquer through our force of will and our ingenuity. From a more prosaic point of view, space colonization is important because it represents a long-term survival strategy for humankind.
Existential Risk Diplomacy and Governance
GLOBAL PRIORITIES PROJECT 2017. Table of contents:
- Authors/acknowledgements
- Executive summary
- Section 1. An introduction to existential risks
  - 1.1 An overview of leading existential risks (Box: Examples of risks categorised according to scope and severity)
    - 1.1.1 Nuclear war
    - 1.1.2 Extreme climate change and geoengineering
    - 1.1.3 Engineered pandemics
    - 1.1.4 Artificial intelligence
    - 1.1.5 Global totalitarianism
    - 1.1.6 Natural processes
    - 1.1.7 Unknown unknowns
  - 1.2 The ethics of existential risk
  - 1.3 Why existential risks may be systematically underinvested in, and the role of the international community
    - 1.3.1 Why existential risks are likely to be underinvested in
    - 1.3.2 The role of the international community
- Section 2. Recommendations
  - 2.1 Develop governance of Solar Radiation Management research
    - 2.1.1 Current context
    - 2.1.2 Proposed intervention (Box: Types of interventions to reduce existential risk)
    - 2.1.3 Impact of the intervention
    - 2.1.4 Ease of making progress
  - 2.2 Establish scenario plans and exercises for severe engineered pandemics at the international level
    - 2.2.1 Current context
    - 2.2.2 Proposed intervention
    - 2.2.3 Impact of the intervention
    - 2.2.4 Ease of making progress (Box: World Bank pandemic emergency financing facility)
  - 2.3 Build international attention to and support for existential risk reduction
    - 2.3.1 Current context
    - 2.3.2 Proposed intervention
      - 2.3.2.1 Statements or declarations (Box: Existential risk negligence)
Are We Living at the Hinge of History?
William MacAskill. Global Priorities Institute, September 2020. GPI Working Paper No. 12-2020. 0. Introduction. In the final pages of On What Matters, Volume II (2011), Derek Parfit made the following comments: We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy. These comments hark back to a statement he made twenty-seven years earlier in Reasons and Persons (1984): the part of our moral theory... that covers how we affect future generations... is the most important part of our moral theory, since the next few centuries will be the most important in human history. He also subsequently made the same claim in even stronger terms during a talk sponsored by Giving What We Can at the Oxford Union in June 2015: I think that we are living now at the most critical part of human history. The twentieth century I think was the best and worst of all centuries so far, but it now seems fairly likely that there are no intelligent beings anywhere else in the observable universe. Now, if that's true, we may be living in the most critical part of the history of the universe… [The reason] why this may be the critical period in the history of the universe is if we are the only rational intelligent beings, it's only we who might provide the origin of what would then become a galaxy-wide civilisation, which lasted for billions of years, and in which life was much better than it is for most human beings.