Superintelligence as a Cause or Cure for Risks of Astronomical Suffering


Kaj Sotala and Lukas Gloor
Foundational Research Institute, Berlin, Germany
E-mail: [email protected], [email protected]
foundational-research.org

Informatica 41 (2017) 389–400

Keywords: existential risk, suffering risk, superintelligence, AI alignment problem, suffering-focused ethics, population ethics

Received: August 31, 2017

Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of comparable severity and probability to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to existential risk but can also help prevent it, superintelligent AI can be both the cause of suffering risks and a way to prevent them from being realized. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may also be a class of safeguards for AI that helps specifically against s-risks.

Povzetek (in Slovenian): The paper analyses the advantages and dangers of superintelligence.

1 Introduction

Work discussing the possible consequences of creating superintelligent AI (Yudkowsky 2008, Bostrom 2014, Sotala & Yampolskiy 2015) has discussed superintelligence as a possible existential risk: a risk "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential" (Bostrom 2002, 2013). The previous work has mostly [1] considered the worst-case outcome to be the possibility of human extinction by an AI that is indifferent to humanity's survival and values. However, it is often thought that for an individual, there exist "fates worse than death"; analogously, for civilizations there may exist fates worse than extinction, such as survival in conditions in which most people will experience enormous suffering for most of their lives.

Even if such extreme outcomes were avoided, the known universe may eventually be populated by vast numbers of minds: published estimates include the possibility of 10^25 minds supported by a single star (Bostrom 2003a), with humanity having the potential to eventually colonize tens of millions of galaxies (Armstrong & Sandberg 2013). While this could enable an enormous number of meaningful lives to be lived, if even a small fraction of these lives were to exist in hellish circumstances, the amount of suffering would be vastly greater than that produced by all the atrocities, abuses, and natural causes in Earth's history so far.
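To give a rough sense of the magnitudes involved, consider an illustrative back-of-the-envelope calculation. The one-in-a-trillion fraction and the estimate that roughly 1.1 x 10^11 humans have ever lived are our own illustrative assumptions, not figures from the paper. If even a fraction of 10^-12 of 10^25 minds around a single star existed in hellish conditions, that would amount to

\[
10^{25} \times 10^{-12} = 10^{13}
\]

suffering minds, roughly a hundred times the approximately \(1.1 \times 10^{11}\) humans estimated to have ever lived, and this counts only a single star.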
We term the possibility of such outcomes a suffering risk:

Suffering risk (s-risk): One where an adverse outcome would bring about severe suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

In order for potential risks - including s-risks - to merit work on them, three conditions must be met. First, the outcome of the risk must be sufficiently severe to merit attention. Second, the risk must have some reasonable probability of being realized. Third, there must be some way for risk-avoidance work to reduce either the probability or severity of an adverse outcome.

In this paper, we will argue that suffering risks meet all three criteria, and that s-risk avoidance work is thus of comparable magnitude in importance to work on risks from extinction. Section 2 seeks to establish the severity of s-risks. There, we will argue that there are classes of suffering-related adverse outcomes that many value systems would consider to be equally or even more severe than extinction. Additionally, we will define a class of less severe suffering outcomes which many value systems would consider important to avoid, albeit not as important as avoiding extinction. Section 3 looks at suffering risks from the view of several different value systems, and discusses how much they would prioritize avoiding different suffering outcomes. Next, we will argue that there is a reasonable probability for a number of different suffering risks to be realized. Our discussion is organized according to the relationship that superintelligent AIs have to suffering risks: section 4 covers risks that may be prevented by a superintelligence, and section 5 covers risks that may be realized by one [2]. Section 6 discusses how it might be possible to work on suffering risks.

2 Suffering risks as risks of extreme severity

As already noted, the main focus in discussions of risks from superintelligent AI has been either literal extinction, with the AI killing humans as a side-effect of pursuing some other goal (Yudkowsky 2008), or value extinction. In value extinction, some form of humanity may survive, but the future is controlled by an AI operating according to values which all current-day humans would consider worthless (Yudkowsky 2011). In either scenario, it is thought that the resulting future would have no value.

In this section, we will argue that besides futures that have no value, according to many different value systems it is possible to have futures with negative value. These would count as the worst category of existential risks. In addition, there are adverse outcomes of a lesser severity, which depending on one's value system may not necessarily count as worse than extinction. Regardless, making these outcomes less likely is a high priority and a common interest of many different value systems.

Bostrom (2002) frames his definition of extinction risks with a discussion which characterizes a single person's death as being a risk of terminal intensity and personal scope, with existential risks being risks of terminal intensity and global scope - one person's death versus the death of all humans. However, it is commonly thought that there are "fates worse than death": at one extreme, being tortured for an extended time (with no chance of rescue) and then killed.

As less extreme examples, various negative health conditions are often considered worse than death (Rubin, Buehler & Halpern 2016; Sayah et al. 2015; Ditto et al. 1996): for example, among hospitalized patients with severe illness, a majority of respondents considered bowel and bladder incontinence, relying on a feeding tube to live, and being unable to get up from bed to be conditions that were worse than death (Rubin, Buehler & Halpern 2016). While these are prospective evaluations rather than reports of what people have actually experienced, several countries have laws allowing for voluntary euthanasia, which people with various adverse conditions have chosen rather than go on living. This may be considered an empirical confirmation of some states of life being worse than death, at least as judged by the people who choose to die.

The notion of fates worse than death suggests the existence of a "hellish" severity that is one step worse than "terminal", and which might affect civilizations as well as individuals. Bostrom (2013) seems to acknowledge this by including "hellish" as a possible severity in the corresponding chart, but does not place any concrete outcomes under the hellish severity, implying that risks of extinction are still the worst outcomes. Yet there seem to be plausible paths to civilization-wide hell outcomes as well (Figure 1), which we will discuss in sections 4 and 5.

            Endurable                      Terminal            Hellish
Global      Thinning of the ozone layer    Extinction risks    Global hellscape
Personal    Car is stolen                  Death               Extended torture followed by death

Figure 1: The worst suffering risks are ones that affect everyone and subject people to hellish conditions.

In order to qualify as equally bad or worse than extinction, suffering risks do not necessarily need to affect every single member of humanity.
For example, consider a simplified ethical calculus where someone may have a predominantly happy life (+1), never exist (0), or have a predominantly unhappy life (-1). As long as the people having predominantly unhappy lives outnumber the people having predominantly happy lives, under this calculus such an outcome would be considered worse than nobody existing in the first place. We will call this scenario a net suffering outcome [3].

This outcome might be considered justifiable if we assumed that, given enough time, the people living happy lives will eventually outnumber the people living unhappy lives. Most value systems would then still consider a net suffering outcome worth avoiding, but they might consider it an acceptable cost for an even larger number of future happy lives.

On the other hand, it is also possible that the world could become locked into conditions in which the balance would remain negative even when considering all the lives that will ever live: things would never get better. We will call this a pan-generational net suffering outcome.

In addition to net and pan-generational net suffering outcomes, we will consider a third category. In these outcomes, serious suffering may be limited to only a fraction of the population, but the overall population at some given time [4] is still large enough that even this small fraction accounts for many times more suffering than has existed in Earth's history so far.
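The simplified calculus above can be written out explicitly. The following is a minimal formalization of the +1/0/-1 scheme described in the text; the symbols V, N_happy and N_unhappy are our own notation, not the authors':

\[
V = (+1)\,N_{\text{happy}} + 0\,N_{\text{never}} + (-1)\,N_{\text{unhappy}} = N_{\text{happy}} - N_{\text{unhappy}},
\]

so the total value V is negative, and the outcome counts as a net suffering outcome (worse under this calculus than an empty world with V = 0), exactly when N_unhappy exceeds N_happy, no matter how large the absolute numbers are.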
Footnotes:

[1] Bostrom (2014) is mainly focused on the risk of extinction, but does also devote some discussion to alternative negative outcomes such as "mindcrime". We discuss mindcrime in section 5.

[2] Superintelligent AIs being in a special position where they might either enable or prevent suffering risks is similar to the way in which they are in a special position to make risks of extinction both more or less likely (Yudkowsky 2008).

[3] "Net" should be considered equivalent to Bostrom's "global", but we have chosen a different name to avoid giving the impression that the outcome would necessarily be limited to only one planet.

[4] One could also consider the category of pan-generational astronomical suffering outcomes, but restricting ourselves to just three categories is sufficient.