Informatica 41 (2017) 389–400

Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

Kaj Sotala and Lukas Gloor
Foundational Research Institute, Berlin, Germany
E-mail: [email protected], [email protected]
foundational-research.org

Keywords: existential risk, suffering risk, superintelligence, AI alignment problem, suffering-focused ethics, population ethics

Received: August 31, 2017

Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is in the common interest of many different value systems. Furthermore, we argue that just as superintelligent AI can both contribute to existential risk and help prevent it, superintelligent AI can be both a cause of suffering risks and a way to prevent them from being realized. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may additionally be a class of safeguards for AI that helps specifically against s-risks.

Povzetek: The paper analyzes the benefits and dangers of superintelligence.

1 Introduction

Work discussing the possible consequences of creating superintelligent AI (Yudkowsky 2008, Bostrom 2014, Sotala & Yampolskiy 2015) has considered superintelligence as a possible existential risk: a risk "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential" (Bostrom 2002). However, extinction is not the only adverse outcome that a superintelligence could bring about: it could also bring about severe suffering on an astronomical scale. We term the possibility of such outcomes a suffering risk:

Suffering risk (s-risk): One where an adverse outcome would bring about severe suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.