Singularity Summit 2011 Workshop Report
16 pages, PDF, 1020 KB

Recommended publications
Life Expansion: Toward an Artistic, Design-Based Theory of the Transhuman / Posthuman
Natasha Vita-More. A thesis submitted to the University of Plymouth in partial fulfillment for the degree of Doctor of Philosophy. School of Art & Media, Faculty of Arts, April 2012.
PEARL (University of Plymouth Research Theses, Main Collection): http://hdl.handle.net/10026.1/1182

All content in PEARL is protected by copyright law. Author manuscripts are made available in accordance with publisher policies. Please cite only the published version using the details provided on the item record or document. In the absence of an open licence (e.g. Creative Commons), permissions for further reuse of content should be sought from the publisher or author.

COPYRIGHT STATEMENT: This copy of the thesis has been supplied on condition that anyone who consults it is understood to recognize that its copyright rests with its author, and that no quotation from the thesis and no information derived from it may be published without the author’s prior consent.

Abstract: The thesis’ study of life expansion proposes a framework for artistic, design-based approaches concerned with prolonging human life and sustaining personal identity. To delineate the topic: life expansion means increasing the length of time a person is alive and diversifying the matter in which a person exists.
Infinite Ethics
INFINITE ETHICS
Nick Bostrom, Faculty of Philosophy, Oxford University
[Published in Analysis and Metaphysics, Vol. 10 (2011): pp. 9–59]
[This is the final version. Earlier versions: 2003, 2005, 2008, 2009]
www.nickbostrom.com

ABSTRACT: Aggregative consequentialism and several other popular moral theories are threatened with paralysis: when coupled with some plausible assumptions, they seem to imply that it is always ethically indifferent what you do. Modern cosmology teaches that the world might well contain an infinite number of happy and sad people and other candidate value-bearing locations. Aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. You can affect only a finite amount of good or bad. In standard cardinal arithmetic, an infinite quantity is unchanged by the addition or subtraction of any finite quantity. So it appears you cannot change the value of the world. Modifications of aggregationism aimed at resolving the paralysis are only partially effective and cause severe side effects, including problems of “fanaticism”, “distortion”, and erosion of the intuitions that originally motivated the theory. Is the infinitarian challenge fatal?

1. The challenge
1.1. The threat of infinitarian paralysis
When we gaze at the starry sky at night and try to think of humanity from a “cosmic point of view”, we feel small. Human history, with all its earnest strivings, triumphs, and tragedies, can remind us of a colony of ants, laboring frantically to rearrange the needles of their little ephemeral stack. We brush such late-night rumination aside in our daily life and analytic philosophy.
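The absorption rule behind the paralysis argument can be illustrated with IEEE-754 floating-point infinities, which behave the same way for finite addends (a sketch for intuition only; this is not the cardinal arithmetic Bostrom's paper actually formalizes, and the numbers are invented):

```python
# Absorption: an infinite total is unchanged by any finite addition or
# subtraction, which is what generates the "infinitarian paralysis".
world_value = float("inf")   # total positive value of an infinite world
finite_good = 1e18           # any finite amount of good one agent could do

assert world_value + finite_good == world_value
assert world_value - finite_good == world_value
# On naive aggregation the world's total value is identical either way,
# so every available action appears ethically indifferent.
```

The same absorption holds for subtraction of harms, which is why simple modifications of the aggregation rule do not dissolve the problem.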
Apocalypse Now? Initial Lessons from the Covid-19 Pandemic for the Governance of Existential and Global Catastrophic Risks
journal of international humanitarian legal studies 11 (2020) 295–310
brill.com/ihls

Apocalypse Now? Initial Lessons from the Covid-19 Pandemic for the Governance of Existential and Global Catastrophic Risks
Hin-Yan Liu, Kristian Lauta and Matthijs Maas
Faculty of Law, University of Copenhagen, Copenhagen, Denmark
[email protected]; [email protected]; [email protected]

Abstract: This paper explores the ongoing Covid-19 pandemic through the framework of existential risks – a class of extreme risks that threaten the entire future of humanity. In doing so, we tease out three lessons: (1) possible reasons underlying the limits and shortfalls of international law, international institutions and other actors which Covid-19 has revealed, and what they reveal about the resilience or fragility of institutional frameworks in the face of existential risks; (2) using Covid-19 to test and refine our prior ‘Boring Apocalypses’ model for understanding the interplay of hazards, vulnerabilities and exposures in facilitating a particular disaster, or magnifying its effects; and (3) to extrapolate some possible futures for existential risk scholarship and governance.

Keywords: Covid-19 – pandemics – existential risks – global catastrophic risks – boring apocalypses

1 Introduction: Our First ‘Brush’ with Existential Risk?
All too suddenly, yesterday’s ‘impossibilities’ have turned into today’s ‘conditions’. The impossible has already happened, and quickly. The impact of the Covid-19 pandemic, both directly and as manifested through the far-reaching global societal responses to it, signals a jarring departure from even the recent past, and suggests that our futures will be profoundly different in its aftermath.
POD OF JAKE #67 – JULIA GALEF (AI-generated transcript)

Jake 00:59
Thank you so much, Julia, for taking the time and joining me on the show today. I really appreciate it, and I've been looking forward to this conversation for some time. I've actually listened to your podcast a bunch in the past – most recently, yeah, your episode with Vitalik. It must be funny to be on the other side of the microphone. But thank you for coming on. I think the best place to get started would be, for those who don't know you, to tell your story a little bit. And then we'll spend a lot of time talking about your book, which recently came out.

Julia Galef 01:29
Sounds good. Thanks, Jake. Yes, it's good to be on the show. My story: basically, for the last 10 years, I've just been devoted to this question of how do we improve human reasoning and decision making? I'm really interested in this question both from a theoretical level – how do we think about what ideal reasoning would look like, almost philosophically – and also from a practical level: what are the kinds of mistakes that the human brain tends to make, and in what kinds of situations? Getting really into a lot of the cognitive science literature, experimenting on real people, trying out techniques and seeing how well they work, and, you know, talking to people about their experiences noticing their own biases and things like that.
Wise Leadership and AI
Chapter 3 | Behind the Scenes of the Machines: What’s Ahead For Artificial General Intelligence?
By Dr. Peter Verhezen, with the Amrop Editorial Board

Putting the G in AI | 8 points

True, generalized intelligence will be achieved when computers can do or learn anything that a human can. At the highest level, this will mean that computers aren’t just able to process the ‘what,’ but understand the ‘why’ behind data – context, and cause-and-effect relationships – perhaps even someday achieving consciousness. All of this will demand ethical and emotional intelligence.

1. We underestimate ourselves. The human brain is amazingly general compared to any digital device yet developed. It processes bottom-up and top-down information, whereas AI (still) only works bottom-up, based on what it ‘sees’, working on specific, narrowly defined tasks. So, unlike humans, AI is not yet situationally aware, nuanced, or multi-dimensional.

2. When can we expect AGI? Great minds do not think alike. Some eminent thinkers (and tech entrepreneurs) see true AGI as only a decade or two away. Others see it as science fiction – AI will more likely serve to amplify human intelligence, just as mechanical machines have amplified physical strength.

3. AGI means moving from homo sapiens to homo deus. Reaching AGI has been described by the futurist Ray Kurzweil as the ‘singularity’. At this point, humans should progress to the ‘trans-human’ stage: cyber-humans (electronically enhanced) or neuro-augmented (bio-genetically enhanced).

4. The real risk with AGI is not malice, but unguided brilliance. A super-intelligent machine will be fantastically good at meeting its goals.
The Reasoner, Volume 15, Number 2, March–April 2021
thereasoner.org, ISSN 1757-0522

Contents: Guest Editorial (9), Features (9), News (14), What’s Hot in … (14), Events (15), Courses and Programmes (15), Jobs and Studentships (16)

Guest Editorial
Salutations, reasoners. I’m delighted to be guest editor for this issue, featuring an interview with the man, the myth, the legend, Kenny Easwaran. Kenny is a newly-minted Professor of Philosophy at Texas A&M University. […] list. (Those were “Decision Theory without Representation Theorems” in 2014, published in Philosophers’ Imprint, and “Dr. Truthlove or: How I Learned to Stop Worrying and Love Bayesian Probabilities” in 2015, published in Noûs.) Anyone interested in the social epistemology of math should know Kenny’s work too: his “Probabilistic Proofs and Transferability” (Philosophia Mathematica, 2009) and “Rebutting and Undercutting in Mathematics” (Philosophical Perspectives, 2015) are classics and personal favorites.

Kenny and I first crossed paths at a conference in Paris while I was a grad student. He joined my dissertation committee a little after that. Fortunately, Kenny is an easy person to keep in touch with: just go to any random philosophy conference and there’s a 63% chance he’s there. As you’ll see from our interview, he’s a terribly interesting person and a credit to the profession. I’m happy to offer you his opinions on fractal music, Zoom conferences, being a good referee, teaching in math and philosophy, the rationalist community and its relationship to academia, decision-theoretic pluralism, and the city of Manhattan.
Between Ape and Artilect: Conversations with Pioneers of Artificial General Intelligence and Other Transformative Technologies
Interviews Conducted and Edited by Ben Goertzel

This work is offered under the following license terms: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC-BY-NC-ND-3.0). See http://creativecommons.org/licenses/by-nc-nd/3.0/ for details.
Copyright © 2013 Ben Goertzel. All rights reserved.

“Man is a rope stretched between the animal and the Superman – a rope over an abyss.”
-- Friedrich Nietzsche, Thus Spake Zarathustra

Table of Contents
Introduction … 7
Itamar Arel: AGI via Deep Learning … 11
Pei Wang: What Do You Mean by “AI”? … 23
Joscha Bach: Understanding the Mind … 39
Hugo DeGaris: Will There be Cyborgs? … 51
DeGaris Interviews Goertzel: Seeking the Sputnik of AGI … 61
Linas Vepstas: AGI, Open Source and Our Economic Future … 89
Joel Pitt: The Benefits of Open Source for AGI … 101
Randal Koene: Substrate-Independent Minds … 107
João Pedro de Magalhães: Ending Aging …
Letters to the Editor

Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents — systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality — colloquially, the ability to make good decisions, plans, or inferences.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008–09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a sig- […]

[…] is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

[The signatories include] computer scientists, innovators, entrepreneurs, statisticians, journalists, engineers, authors, professors, teachers, students, CEOs, economists, developers, philosophers, artists, futurists, physicists, filmmakers, health-care professionals, research analysts, and members of many other fields. The earliest signatories follow, reproduced in order and as they signed. For the complete list, see tinyurl.com/ailetter. – ed.

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and coauthor of the standard textbook Artificial Intelligence: a Modern Approach
Tom Dietterich, Oregon State, President of […]
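The letter’s gloss of intelligence as “the ability to make good decisions” is standardly formalized as expected-utility maximization. A minimal sketch of that idea (the actions, probabilities, and utilities below are invented purely for illustration, not taken from the letter):

```python
# Expected-utility maximization: choose the action whose
# probability-weighted payoff is highest.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(options):
    """options: dict mapping action name -> list of (probability, utility)."""
    return max(options, key=lambda name: expected_utility(options[name]))

# Hypothetical decision: a likely gain with a small chance of large harm,
# versus a certain modest gain.
options = {
    "deploy_now": [(0.9, 10.0), (0.1, -80.0)],  # EU = 9.0 - 8.0 = 1.0
    "test_first": [(1.0, 4.0)],                 # EU = 4.0
}
print(best_action(options))  # prints "test_first"
```

The example also hints at why the letter pairs capability with societal benefit: a small probability of a large harm can dominate the expected value of an otherwise attractive action.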
AIDA Hearing on AI and Competitiveness of 23 March 2021
SPECIAL COMMITTEE ON ARTIFICIAL INTELLIGENCE IN A DIGITAL AGE (AIDA)
HEARING ON AI AND COMPETITIVENESS

Panel I: AI Governance
- Kristi Talving, Deputy Secretary General for Business Environment, Ministry of Economic Affairs and Communications, Estonia
- Khalil Rouhana, Deputy Director General, DG-CONNECT (CNECT), European Commission
- Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning; Member of the Executive Committee, World Economic Forum
- Dr. Sebastian Wieczorek, Vice President – Artificial Intelligence Technology, SAP SE; external expert (until October 2020) in the study commission on AI in the Bundestag

Panel II: The Perspective of Business and Industry
- Prof. Volker Markl, Chair of Research Group at TU Berlin, Database Systems and Information Management; Director of the Intelligent Analytics for Massive Data Research Group at DFKI; Director of the Berlin Big Data Center; and Secretary of the VLDB Endowment
- Moojan Asghari, Cofounder/Initiator of Women in AI; Founder/CEO of Thousand Eyes On Me
- Marina Geymonat, Expert for AI Strategy, Ministry for Economic Development, Italy; Head, Artificial Intelligence Platform, TIM, Telecom Italia Group
- Jaan Tallinn, founding engineer of Skype and Kazaa, as well as a cofounder of the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute

BRUSSELS, TUESDAY 23 MARCH 2021
IN THE CHAIR: DRAGOŞ TUDORACHE, Chair of the Special Committee on Artificial Intelligence in a Digital Age
(The meeting opened at 9.06)

Opening remarks

Chair. – Good morning dear colleagues. I hope you are all connected and you can hear and see us in the room. Welcome to this new hearing of our committee.
Ssoar-2020-Selle-Der Effektive Altruismus Als Neue.Pdf
www.ssoar.info
Effective Altruism as a New Force in the German Donation Market: An Analysis of Donor Motivation and of the Effect of NGO Performance Characteristics on Donor Behavior, with Recommendations for Traditional NGOs
Selle, Julia
Published version / working paper

Suggested citation: Selle, J. (2020). Der effektive Altruismus als neue Größe auf dem deutschen Spendenmarkt: Analyse von Spendermotivation und Leistungsmerkmalen von Nichtregierungsorganisationen (NRO) auf das Spenderverhalten; eine Handlungsempfehlung für klassische NRO (Opuscula, 137). Berlin: Maecenata Institut für Philanthropie und Zivilgesellschaft. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-67950-4

Terms of use: This document is made available under a CC BY-NC-ND licence (Attribution-NonCommercial-NoDerivatives). For more information see: https://creativecommons.org/licenses/by-nc-nd/3.0/

Maecenata Institut, Opusculum No. 137, June 2020. The author, Julia Selle, studied at the Universit…
Comments to Michael Jackson’s Keynote on Determining Energy Futures using Artificial Intelligence
Prof. Sirkka Heinonen, Finland Futures Research Centre (FFRC), University of Turku
ENERGIZING FUTURES, 13–14 June 2018, Tampere, Finland

AI and Energy
- Concrete tools: how Shaping Tomorrow answers the question “How can AI help?”
- The goal of foresight is crystal clear: making better decisions today

The Challenge
The world will need to cut energy-related carbon dioxide emissions by 60 percent by 2050, even as the population grows by more than two billion people.

Bold Solution on the Horizon: The Renewable Energy Transition
Companies as pioneers on energy themes (demand, supply, consumption):
- Google will reach 100% renewable energy for its global operations this year
- GE is using AI to boost different forms of energy production, and uses tech-driven data reports to anticipate performance and maintenance needs around the world
But the role of governments, cities and citizens, as well as NGOs, media, and other new actors, should also be emphasised.

AI + Energy + Peer-to-Peer Society
- The transformation of the energy system aligns with the principles of a Peer-to-Peer Society.
- Peer-to-peer practices are based on the active participation and self-organisation of citizens. Citizens share knowledge and skills, co-create, and form new peer groups.
- Citizens will also use their capabilities to develop energy-related products and services.

Rethinking Concepts
Buildings as power stations: a global (economic) opportunity to construct buildings that generate, store and release solar energy.
Artificial Intelligence As a Positive and Negative Factor in Global Risk
The Singularity Institute
Artificial Intelligence as a Positive and Negative Factor in Global Risk
Eliezer Yudkowsky, The Singularity Institute

Yudkowsky, Eliezer. 2008. Artificial intelligence as a positive and negative factor in global risk. In Global catastrophic risks, ed. Nick Bostrom and Milan M. Cirkovic, 308–345. New York: Oxford University Press. This version contains minor changes.

1. Introduction
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: “A curious aspect of the theory of evolution is that everybody thinks he understands it” (Monod 1975). My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about Artificial Intelligence than they actually do.

In my other chapter for Global Catastrophic Risks, “Cognitive Biases Potentially Affecting Judgment of Global Risks” (Yudkowsky 2008), I opened by remarking that few people would deliberately choose to destroy the world; a scenario in which the Earth is destroyed by mistake is therefore very worrisome.