William MacAskill

Recommended publications
-
Effective Altruism William MacAskill and Theron Pummer
Effective Altruism William MacAskill and Theron Pummer Climate change is on course to cause millions of deaths and cost the world economy trillions of dollars. Nearly a billion people live in extreme poverty, millions of them dying each year of easily preventable diseases. Just a small fraction of the thousands of nuclear weapons on hair‐trigger alert could easily bring about global catastrophe. New technologies like synthetic biology and artificial intelligence bring unprecedented risks. Meanwhile, year after year billions and billions of factory‐farmed animals live and die in misery. Given the number of severe problems facing the world today, and the resources required to solve them, we may feel at a loss as to where to even begin. The good news is that we can improve things with the right use of money, time, talent, and effort. These resources can bring about a great deal of improvement, or very little, depending on how they are allocated. The effective altruism movement consists of a growing global community of people who use reason and evidence to assess how to do as much good as possible, and who take action on this basis. Launched in 2011, the movement now has thousands of members, as well as influence over billions of dollars. The movement has substantially increased awareness of the fact that some altruistic activities are much more cost‐effective than others, in the sense that they do much more good than others per unit of resource expended. According to the nonprofit organization GiveWell, it costs around $3,500 to prevent someone from dying of malaria by distributing bed nets.
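As a rough illustration of the cost-effectiveness comparison this excerpt describes, the minimal sketch below divides a fixed budget by cost per life saved. Only the $3,500 bed net figure comes from the text; the comparison intervention, its cost, and the budget are hypothetical placeholders, not GiveWell estimates.

    # Illustrative cost-effectiveness comparison; all values except the
    # $3,500 bed net figure are made up for the example.
    interventions = {
        "bednet_distribution": 3_500,    # cost per life saved, quoted from the excerpt
        "hypothetical_program": 50_000,  # hypothetical comparison value
    }
    budget = 35_000  # dollars to allocate

    for name, cost_per_life_saved in interventions.items():
        lives_saved = budget / cost_per_life_saved
        print(f"{name}: about {lives_saved:.1f} lives saved for ${budget:,}")
    # The same budget goes roughly 14 times further under the first option,
    # which is the sense of "more cost-effective per unit of resource" above.

-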
By William MacAskill
Published on June 20, 2016 Brother, can you spare an RCT? ‘Doing Good Better’ by William MacAskill By Terence Wood If you’ve ever thought carefully about international development you will be tormented by shoulds. Should the Australian government really give aid rather than focus on domestic poverty? Should I donate more money personally? And if so, what sort of NGO should I give to? The good news is that William MacAskill is here to help. MacAskill is an associate professor in philosophy at the University of Oxford, and in Doing Good Better he wants to teach you to be an Effective Altruist. Effective Altruism is an attempt to take a form of consequentialism (a philosophical viewpoint in which an action is deemed right or wrong on the basis of its consequences) and plant it squarely amidst the decisions of our daily lives. MacAskill’s target audience isn’t limited to people involved in international development, but almost everything he says is relevant. Effective Altruists contend we should devote as much time and as many resources as we reasonably can to help those in greater need. They also want us to avoid actions that cause, or will cause, suffering. Taken together, this means promoting vegetarianism, (probably) taking action on climate change, and–of most interest to readers of this blog–giving a lot of aid. That’s the altruism. As for effectiveness, MacAskill argues that when we give we need to focus on addressing the most acute needs, while carefully choosing what works best. Link: https://devpolicy.org/brother-can-spare-util-good-better-william-macaskill-20160620/ -
GPI's Research Agenda
A RESEARCH AGENDA FOR THE GLOBAL PRIORITIES INSTITUTE
Hilary Greaves, William MacAskill, Rossa O’Keeffe-O’Donovan and Philip Trammell
February 2019 (minor changes July 2019)
We acknowledge Pablo Stafforini, Aron Vallinder, James Aung, the Global Priorities Institute Advisory Board, and numerous colleagues at the Future of Humanity Institute, the Centre for Effective Altruism, and elsewhere for their invaluable assistance in composing this agenda.
Table of Contents
Introduction
GPI’s vision and mission
GPI’s research agenda
1. The longtermism paradigm
1.1 Articulation and evaluation of longtermism
1.2 Sign of the value of the continued existence of humanity
1.3 Mitigating catastrophic risk
1.4 Other ways of leveraging the size of the future
1.5 Intergenerational governance
1.6 Economic indices for longtermists
1.7 Moral uncertainty for longtermists
1.8 Longtermist status of interventions that score highly on short-term metrics
2. General issues in global prioritisation
2.1 Decision-theoretic issues
2.2 Epistemological issues
2.3 Discounting
2.4 Diversification and hedging
2.5 Distributions of cost-effectiveness
2.6 Modelling altruism
2.7 Altruistic coordination
2.8 Individual vs institutional actors
Bibliography
Appendix A. Research areas for future engagement
A.1 Animal welfare
A.2 The scope of welfare maximisation
Appendix B. Closely related areas of existing academic research
B.1 Methodology of cost-benefit analysis and cost-effectiveness analysis
B.2 Multidimensional economic indices
B.3 Infinite ethics and intergenerational equity
B.4 Epistemology of disagreement
B.5 Demandingness
B.6 Forecasting
B.7 Population ethics
B.8 Risk aversion and ambiguity aversion
B.9 Moral uncertainty
B.10 Value of information
B.11 Harnessing and combining evidence
B.12 The psychology of altruistic decision-making
Appendix C. -
William MacAskill. Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work That Matters, and Make Smarter Choices About Giving Back
Philosophy in Review XXXIX (November 2019), no. 4 William MacAskill. Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work that Matters, and Make Smarter Choices About Giving Back. Avery 2016. 272 pp. $17.00 USD (Paperback ISBN 9781592409662). Will MacAskill’s Doing Good Better provides an introduction to the Effective Altruism movement, and, in the process, it makes a strong case for its importance. The book is aimed at a general audience. It is fairly short and written for the most part in a light, conversational tone. Doing Good Better’s only real rival as a treatment of Effective Altruism is Peter Singer’s The Most Good You Can Do, though MacAskill’s and Singer’s books are better seen as companion pieces than rivals. Like The Most Good You Can Do, Doing Good Better offers the reader much of philosophical interest, and it delivers novel perspectives and even some counterintuitive but well-reasoned conclusions that will likely provoke both critics and defenders of Effective Altruism for some time to come. Before diving into Doing Good Better we want to take a moment to characterize Effective Altruism. Crudely put, Effective Altruists are committed to three claims. First, they maintain that we have strong reason to help others. Second, they claim that these reasons are impartial in nature. And, third, they hold that we are required to act on these reasons in the most effective manner possible. Hence, according to Effective Altruists, those of us who are fortunate enough to have the leisure to write (or read) scholarly book reviews (1) should help those who are most in need, (2) should do so even if we lack any personal connection to them, and (3) should do so as efficiently as we can. -
Why Maximize Expected Choice-Worthiness? WILLIAM MACASKILL and TOBY ORD, University of Oxford
NOÛS 00:00 (2018) 1–27 doi: 10.1111/nous.12264 Why Maximize Expected Choice-Worthiness? WILLIAM MACASKILL AND TOBY ORD University of Oxford This paper argues in favor of a particular account of decision-making under normative uncertainty: that, when it is possible to do so, one should maximize expected choice-worthiness. Though this position has been often suggested in the literature and is often taken to be the ‘default’ view, it has so far received little in the way of positive argument in its favor. After dealing with some preliminaries and giving the basic motivation for taking normative uncertainty into account in our decision-making, we consider and provide new arguments against two rival accounts that have been offered—the accounts that we call ‘My Favorite Theory’ and ‘My Favorite Option’. We then give a novel argument for comparativism—the view that, under normative uncertainty, one should take into account both probabilities of different theories and magnitudes of choice-worthiness. Finally, we further argue in favor of maximizing expected choice-worthiness and consider and respond to five objections. Introduction Normative uncertainty is a fact of life. Suppose that Michael has £20 to spend. With that money, he could eat out at a nice restaurant. Alternatively, he could eat at home and pay for four long-lasting insecticide-treated bed nets that would protect eight children against malaria. Let’s suppose that Michael knows all the morally relevant empirical facts about what that £20 could do. Even so, it might be that he still doesn’t know whether he’s obligated to donate that money or whether it’s permissible for him to pay for the meal out, because he just doesn’t know how strong his moral obligations to distant strangers are.
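The decision rule the paper defends, maximize expected choice-worthiness (MEC), weights each theory's assessment of an option by the agent's credence in that theory and picks the option with the highest weighted sum. A minimal sketch for Michael's choice follows; the credences and choice-worthiness values are invented for illustration and do not appear in the paper.

    # Maximize expected choice-worthiness (MEC) with hypothetical numbers.
    credences = {"demanding_view": 0.6, "common_sense_view": 0.4}  # must sum to 1

    # Choice-worthiness of each option under each moral theory (made-up scale).
    choice_worthiness = {
        "donate_for_bed_nets": {"demanding_view": 10, "common_sense_view": 2},
        "meal_out":            {"demanding_view": -5, "common_sense_view": 3},
    }

    def expected_cw(option):
        return sum(credences[t] * choice_worthiness[option][t] for t in credences)

    for option in choice_worthiness:
        print(option, expected_cw(option))   # 6.8 vs -1.8 with these numbers
    print("MEC recommends:", max(choice_worthiness, key=expected_cw))

As the comparativism argument in the abstract suggests, both the probabilities of the theories and the magnitudes of choice-worthiness matter to the result.

-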
The Definition of Effective Altruism
1 The Definition of Effective Altruism William MacAskill There are many problems in the world today. Over 750 million people live on less than $1.90 per day (at purchasing power parity).1 Around 6 million children die each year of easily preventable causes such as malaria, diarrhea, or pneumonia.2 Climate change is set to wreak environmental havoc and cost the economy trillions of dollars.3 A third of women worldwide have suffered from sexual or other physical violence in their lives.4 More than 3,000 nuclear warheads are in high-alert ready-to-launch status around the globe.5 Bacteria are becoming antibiotic-resistant.6 Partisanship is increasing, and democracy may be in decline.7 Given that the world has so many problems, and that these problems are so severe, surely we have a responsibility to do something about them. But what? There are countless problems that we could be addressing, and many different ways of addressing each of those problems. Moreover, our resources are scarce, so as individuals and even as a globe we can’t solve all these problems at once. So we must make decisions about how to allocate the resources we have. But on what basis should we make such decisions? The effective altruism movement has pioneered one approach. Those in this movement try to figure out, of all the different uses of our resources, which uses will do the most good, impartially considered. This movement is gathering considerable steam. There are now thousands of people around the world who have chosen -
Population Axiology
Population axiology Hilary Greaves This is the pre-peer reviewed version of this article. The final version is forthcoming in Philosophy Compass; please cite the published version. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. Abstract Population axiology is the study of the conditions under which one state of affairs is better than another, when the states of affairs in question may differ over the numbers and the identities of the persons who ever live. Extant theories include totalism, averagism, variable value theories, critical level theories, and “person-affecting” theories. Each of these theories is open to objections that are at least prima facie serious. A series of impossibility theorems shows that this is no coincidence: it can be proved, for various sets of prima facie intuitively compelling desiderata, that no axiology can simultaneously satisfy all the desiderata on the list. One’s choice of population axiology appears to be a choice of which intuition one is least unwilling to give up. 1 Population ethics and population axiology: The basic questions In many decision situations, at least in expectation, an agent's decision has no effect on the numbers and identities of persons born. For those situations, fixed-population ethics is adequate. But in many other decision situations, this condition does not hold. Should one have an additional child? How should life-saving resources be prioritised between the young (who might go on to have children) and the old (who are past reproductive age)? How much should one do to prevent climate change from reducing the number of persons the Earth is able to sustain in the future? Should one fund condom distribution in the developing world? In all these cases, one's actions can affect both who is born and how many people are (ever) born.
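The simplest of the axiologies named in the abstract differ only in how they aggregate individual welfare levels, which a small worked comparison makes concrete. The welfare numbers and critical level below are hypothetical and are not drawn from the article.

    # Toy comparison of three population axiologies (hypothetical welfare values).
    population_a = [8, 8, 8]        # few people, high welfare
    population_b = [3] * 10         # many people, low but positive welfare
    critical_level = 2              # threshold used by the critical level view

    def total(pop):
        return sum(pop)

    def average(pop):
        return sum(pop) / len(pop)

    def critical_level_total(pop, c=critical_level):
        return sum(w - c for w in pop)

    for name, pop in (("A", population_a), ("B", population_b)):
        print(name, total(pop), average(pop), critical_level_total(pop))
    # Totalism ranks B above A (30 > 24), while averagism (3 < 8) and this
    # critical level view (10 < 18) rank A above B: the theories disagree about
    # the very same pair of populations, which is what the impossibility
    # theorems mentioned in the abstract trade on.

-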
Unprecedented Technological Risks
Policy Brief: Unprecedented Technological Risks Future of Humanity Institute UNIVERSITY OF OXFORD Unprecedented Technological Risks Over the next few decades, the continued development of dual-use technologies will provide major benefits to society. They will also pose significant and unprecedented global risks, including risks of new weapons of mass destruction, arms races, or the accidental deaths of billions of people. Synthetic biology, if more widely accessible, would give terrorist groups the ability to synthesise pathogens more dangerous than smallpox; geoengineering technologies would give single countries the power to dramatically alter the earth’s climate; distributed manufacturing could lead to nuclear proliferation on a much wider scale; and rapid advances in artificial intelligence could give a single country a decisive strategic advantage. These scenarios might seem extreme or outlandish. But they are widely recognised as significant risks by experts in the relevant fields. To safely navigate these risks, and harness the potentially great benefits of these new technologies, we must proactively provide research, assessment, monitoring, and guidance, on a global level. This report gives an overview of these risks and their importance, focusing on risks of extreme catastrophe, which we believe to be particularly neglected. The report explains why market and political circumstances have led to a deficit of regulation on these issues, and offers some policy proposals as starting points for how these risks could be addressed. September 2014 Executive Summary The development of nuclear weapons was, at the time, an unprecedented technological risk. The [...] than nuclear weapons, because they are more difficult to control. -
Rationality Spring 2020, Tues & Thurs 1:30-2:45 Harvard University
General Education 1066: Rationality Spring 2020, Tues & Thurs 1:30-2:45 Harvard University Description: The nature, psychology, and applications of rationality. Rationality is, or ought to be, the basis of everything we think and do. Yet in an era with unprecedented scientific sophistication, we are buffeted by fake news, quack cures, conspiracy theories, and “post-truth” rhetoric. How should we reason about reason? Rationality has long been a foundational topic in the academy, including philosophy, psychology, AI, economics, mathematics, and government. Recently, discoveries on how people reason have earned three Nobel Prizes, and many applied fields are being revolutionized by rational, evidence-based, and effective approaches. Part I: The nature of rationality. Tools of reason, including logic, statistical decision theory, Bayesian inference, rational choice, game theory, critical thinking, and common fallacies. Part II: The cognitive science of rationality, including classic research by psychologists and behavioral economists. Is Homo sapiens a “rational animal”? Could our irrational heuristics and biases be evolutionary adaptations to a natural information environment? Could beliefs that are factually irrational be socially rational in a drive for individual status or tribal solidarity? Can people be cured of their irrationality? Part III: Rationality in the world. How can our opinions, policies, and practices be made more rational? Can rational analyses offer more effective means of improving the world? Examples will include journalism, climate change, sports, crime, government, medicine, political protest, social change, philanthropy, and other forms of effective altruism. These topics will be presented by guest lecturers, many of them well-known authors and public figures. For the capstone project, students will select a major national or global problem, justify the choice, and lay out the most rational means to mitigate or solve it. -
Sharing the World with Digital Minds
Sharing the World with Digital Minds (2020). Draft. Version 1.8 Carl Shulman† & Nick Bostrom† [in Clarke, S. & Savulescu, J. (eds.): Rethinking Moral Status (Oxford University Press, forthcoming)] Abstract The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence. Here we focus on one set of issues, which arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence. 1. Introduction Human biological nature imposes many practical limits on what can be done to promote somebody’s welfare. We can only live so long, feel so much joy, have so many children, and benefit so much from additional support and resources. Meanwhile, we require, in order to flourish, that a complex set of physical, psychological, and social conditions be met. -
A R E S E a R C H Agenda
Cooperation, Conflict, and Transformative Artificial Intelligence — A RESEARCH AGENDA
Jesse Clifton — LONGTERMRISK.ORG
March 2020 (first draft: December 2019)
Contents
1 Introduction
1.1 Cooperation failure: models and examples
1.2 Outline of the agenda
2 AI strategy and governance
2.1 Polarity and transition scenarios
2.2 Commitment and transparency
2.3 AI misalignment scenarios
2.4 Other directions
2.5 Potential downsides of research on cooperation failures
3 Credibility
3.1 Commitment capabilities
3.2 Open-source game theory
4 Peaceful bargaining mechanisms
4.1 Rational crisis bargaining
4.2 Surrogate goals
5 Contemporary AI architectures
5.1 Learning to solve social dilemmas
5.2 Multi-agent training
5.3 Decision theory
6 Humans in the loop
6.1 Behavioral game theory
6.2 AI delegates
7 Foundations of rational agency
7.1 Bounded decision theory
7.2 Acausal reasoning
8 Acknowledgements
1 Introduction
Transformative artificial intelligence (TAI) may be a key factor in the long-run trajectory of civilization. A growing interdisciplinary community has begun to study how the development of TAI can be made safe and beneficial to sentient life (Bostrom, 2014; Russell et al., 2015; OpenAI, 2018; Ortega and Maini, 2018; Dafoe, 2018). We present a research agenda for advancing a critical component of this effort: preventing catastrophic failures of cooperation among TAI systems. By cooperation failures we refer to a broad class of potentially-catastrophic inefficiencies in interactions among TAI-enabled actors.
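The "cooperation failures" this agenda targets generalize familiar game-theoretic inefficiencies. The sketch below is the textbook one-shot prisoner's dilemma with hypothetical payoffs, included only to make the term concrete; it is not a model taken from the agenda itself.

    # One-shot prisoner's dilemma: a canonical cooperation failure.
    # Payoffs are (row player, column player); the numbers are hypothetical.
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 4),
        ("defect",    "cooperate"): (4, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def best_response(opponent_action):
        # Row player's payoff-maximizing action against a fixed opponent move.
        return max(("cooperate", "defect"),
                   key=lambda a: payoffs[(a, opponent_action)][0])

    assert best_response("cooperate") == "defect"
    assert best_response("defect") == "defect"
    # Defection is a best response to either move, so self-interested agents
    # end up at (1, 1) even though mutual cooperation at (3, 3) is better for
    # both: an inefficiency of the kind the agenda aims to prevent at scale.

-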
Beneficial AI 2017
Beneficial AI 2017 Participants & Attendees Anthony Aguirre is a Professor of Physics at the University of California, Santa Cruz. He has worked on a wide variety of topics in theoretical cosmology and fundamental physics, including inflation, black holes, quantum theory, and information theory. He also has a strong interest in science outreach, and has appeared in numerous science documentaries. He is a co-founder of the Future of Life Institute, the Foundational Questions Institute, and Metaculus (http://www.metaculus.com/). Sam Altman is president of Y Combinator and was the cofounder of Loopt, a location-based social networking app. He also co-founded OpenAI with Elon Musk. Sam has invested in over 1,000 companies. Dario Amodei is the co-author of the recent paper Concrete Problems in AI Safety, which outlines a pragmatic and empirical approach to making AI systems safe. Dario is currently a research scientist at OpenAI, and prior to that worked at Google and Baidu. Dario also helped to lead the project that developed Deep Speech 2, which was named one of 10 “Breakthrough Technologies of 2016” by MIT Technology Review. Dario holds a PhD in physics from Princeton University, where he was awarded the Hertz Foundation doctoral thesis prize. Amara Angelica is Research Director for Ray Kurzweil, responsible for books, charts, and special projects. Amara’s background is in aerospace engineering, in electronic warfare, electronic intelligence, human factors, and computer systems analysis areas. A co-founder and initial Academic Model/Curriculum Lead for Singularity University, she was formerly on the board of directors of the National Space Society, is a member of the Space Development Steering Committee, and is a professional member of the Institute of Electrical and Electronics Engineers (IEEE).