Human Genetic Enhancements: a Transhumanist Perspective
Recommended publications
-
Genetics and Other Human Modification Technologies: Sensible International Regulation Or a New Kind of Arms Race?
GENETICS AND OTHER HUMAN MODIFICATION TECHNOLOGIES: SENSIBLE INTERNATIONAL REGULATION OR A NEW KIND OF ARMS RACE?

HEARING BEFORE THE SUBCOMMITTEE ON TERRORISM, NONPROLIFERATION, AND TRADE OF THE COMMITTEE ON FOREIGN AFFAIRS, HOUSE OF REPRESENTATIVES, ONE HUNDRED TENTH CONGRESS, SECOND SESSION, JUNE 19, 2008
Serial No. 110–201
Printed for the use of the Committee on Foreign Affairs
Available via the World Wide Web: http://www.foreignaffairs.house.gov/
U.S. GOVERNMENT PRINTING OFFICE, WASHINGTON: 2008 (43–068PDF)
For sale by the Superintendent of Documents, U.S. Government Printing Office. Internet: bookstore.gpo.gov. Phone: toll free (866) 512–1800; DC area (202) 512–1800. Fax: (202) 512–2104. Mail: Stop IDCC, Washington, DC 20402–0001

COMMITTEE ON FOREIGN AFFAIRS
HOWARD L. BERMAN, California, Chairman

Majority members: GARY L. ACKERMAN, New York; ENI F.H. FALEOMAVAEGA, American Samoa; DONALD M. PAYNE, New Jersey; BRAD SHERMAN, California; ROBERT WEXLER, Florida; ELIOT L. ENGEL, New York; BILL DELAHUNT, Massachusetts; GREGORY W. MEEKS, New York; DIANE E. WATSON, California; ADAM SMITH, Washington; RUSS CARNAHAN, Missouri; JOHN S. TANNER, Tennessee; GENE GREEN, Texas; LYNN C. WOOLSEY, California; SHEILA JACKSON LEE, Texas; RUBÉN HINOJOSA, Texas; JOSEPH CROWLEY, New York; DAVID WU, Oregon; BRAD MILLER, North Carolina; LINDA T.

Minority members: ILEANA ROS-LEHTINEN, Florida; CHRISTOPHER H. SMITH, New Jersey; DAN BURTON, Indiana; ELTON GALLEGLY, California; DANA ROHRABACHER, California; DONALD A. MANZULLO, Illinois; EDWARD R. ROYCE, California; STEVE CHABOT, Ohio; THOMAS G. TANCREDO, Colorado; RON PAUL, Texas; JEFF FLAKE, Arizona; MIKE PENCE, Indiana; JOE WILSON, South Carolina; JOHN BOOZMAN, Arkansas; J. GRESHAM BARRETT, South Carolina; CONNIE MACK, Florida; JEFF FORTENBERRY, Nebraska; MICHAEL T. MCCAUL, Texas; TED POE, Texas; BOB INGLIS, South Carolina
-
An Evolutionary Heuristic for Human Enhancement
18 The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement
Nick Bostrom and Anders Sandberg∗

Abstract
Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. It can appear as if there is a "wisdom of nature" which we ignore at our peril. Sometimes the belief in nature's wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifests as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris, or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in "nature knows best" attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.

1. Introduction
1.1. The wisdom of nature, and the special problem of enhancement
We marvel at the complexity of the human organism, how its various parts have evolved to solve intricate problems: the eye to collect and pre-process visual information, the immune system to fight infection and cancer, the lungs to oxygenate the blood.

∗ Oxford Future of Humanity Institute, Faculty of Philosophy and James Martin 21st Century School, Oxford University. Forthcoming in Enhancing Humans, ed. Julian Savulescu and Nick Bostrom (Oxford: Oxford University Press).
-
Eugenics, Biopolitics, and the Challenge of the Techno-Human Condition
Nathan VAN CAMP
Redesigning Life: Eugenics, Biopolitics, and the Challenge of the Techno-Human Condition

The emerging development of genetic enhancement technologies has recently become the focus of a public and philosophical debate between proponents and opponents of a liberal eugenics – that is, the use of these technologies without any overall direction or governmental control. Inspired by Foucault's, Agamben's and Esposito's writings about biopower and biopolitics, the author sees both positions as equally problematic, as both presuppose the existence of a stable, autonomous subject capable of making decisions concerning the future of human nature, while in the age of genetic technology the nature of this subjectivity shall be less an origin than an effect of such decisions. Bringing together a biopolitical critique of the way this controversial issue has been dealt with in liberal moral and political philosophy with a philosophical analysis of the nature of and the relation between life, politics, and technology, the author sets out to outline the contours of a more responsible engagement with genetic technologies based on the idea that technology is an intrinsic condition of humanity.

Nathan Van Camp is postdoctoral researcher at the University of Antwerp, Belgium. He focuses on continental philosophy, political theory, biopolitics, and critical theory.

Philosophy & Politics / Philosophie & Politique 27
ISBN 978-2-87574-281-0
www.peterlang.com
P.I.E. Peter Lang
-
Heidegger, Personhood and Technology
Comparative Philosophy Volume 2, No. 2 (2011): 23-49
Open Access / ISSN 2151-6014
www.comparativephilosophy.org

THE FUTURE OF HUMANITY: HEIDEGGER, PERSONHOOD AND TECHNOLOGY
MAHON O'BRIEN

ABSTRACT: This paper argues that a number of entrenched posthumanist positions are seriously flawed as a result of their dependence on a technical interpretive approach that creates more problems than it solves. During the course of our discussion we consider in particular the question of personhood. After all, until we can determine what it means to be a person we cannot really discuss what it means to improve a person. What kinds of enhancements would even constitute improvements? This in turn leads to an examination of the technical model of analysis and the recurring tendency to approach notions like personhood using this technical model. In looking to sketch a Heideggerian account of personhood, we are reaffirming what we take to be a Platonic skepticism concerning technical models of inquiry when it comes to certain subjects. Finally we examine the question as to whether the posthumanist looks to apply technology's benefits in ways that we have reflectively determined to be useful or desirable or whether it is technology itself (or to speak as Heidegger would, the "essence" of technology) which prompts many posthumanists to rely on an excessively reductionist view of the human being.

Keywords: Heidegger, posthumanism, technology, personhood, temporality

A significant number of posthumanists advocate the techno-scientific enhancement of various human cognitive and physical capacities. Recent trends in posthumanist theory have witnessed the collective emergence, in particular, of a series of analytically oriented philosophers as part of the Future of Humanity Institute at

O'BRIEN, MAHON: IRCHSS Postdoctoral Research Fellow, School of Philosophy, University College Dublin, Ireland.
-
Global Catastrophic Risks Survey
GLOBAL CATASTROPHIC RISKS SURVEY (2008)
Technical Report 2008/1
Published by Future of Humanity Institute, Oxford University
Anders Sandberg and Nick Bostrom

At the Global Catastrophic Risk Conference in Oxford (17–20 July, 2008) an informal survey was circulated among participants, asking them to make their best guess at the chance that there will be disasters of different types before 2100. This report summarizes the main results. The median extinction risk estimates were:

Risk                                                    | At least 1 million dead | At least 1 billion dead | Human extinction
--------------------------------------------------------|-------------------------|-------------------------|-----------------
Number killed by molecular nanotech weapons             | 25%                     | 10%                     | 5%
Total killed by superintelligent AI                     | 10%                     | 5%                      | 5%
Total killed in all wars (including civil wars)         | 98%                     | 30%                     | 4%
Number killed in the single biggest engineered pandemic | 30%                     | 10%                     | 2%
Total killed in all nuclear wars                        | 30%                     | 10%                     | 1%
Number killed in the single biggest nanotech accident   | 5%                      | 1%                      | 0.5%
Number killed in the single biggest natural pandemic    | 60%                     | 5%                      | 0.05%
Total killed in all acts of nuclear terrorism           | 15%                     | 1%                      | 0.03%
Overall risk of extinction prior to 2100                | n/a                     | n/a                     | 19%

These results should be taken with a grain of salt. Non-responses have been omitted, although some might represent a statement of zero probability rather than no opinion. There are likely to be many cognitive biases that affect the result, such as unpacking bias and the availability heuristic, as well as old-fashioned optimism and pessimism. In Appendix A the results are plotted with individual response distributions visible.

Other Risks
The list of risks was not intended to be inclusive of all the biggest risks.
-
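The survey's median figures lend themselves to quick back-of-the-envelope manipulation. The following sketch is ours, not part of the report: the short risk labels are our abbreviations, and the numbers are simply the human-extinction medians transcribed from the table.

```python
# Median estimates of human-extinction risk before 2100, transcribed from the
# Future of Humanity Institute survey summarized above (labels abbreviated).
extinction_risk = {
    "molecular nanotech weapons": 0.05,
    "superintelligent AI": 0.05,
    "all wars (incl. civil)": 0.04,
    "biggest engineered pandemic": 0.02,
    "all nuclear wars": 0.01,
    "biggest nanotech accident": 0.005,
    "biggest natural pandemic": 0.0005,
    "nuclear terrorism": 0.0003,
}

# Rank the listed risks by their median extinction estimate (stable sort, so
# ties keep table order).
ranked = sorted(extinction_risk.items(), key=lambda kv: kv[1], reverse=True)

# The listed rows sum to less than the overall 19% figure, which was elicited
# as a separate survey item (it also covers unlisted risks), not as a sum.
total_listed = sum(extinction_risk.values())

print(ranked[0])
print(round(total_listed, 4))
```

This makes the report's caveat concrete: the per-risk medians are individually elicited guesses, so arithmetic across rows (like the sum above) is only a sanity check, not a probability model.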
Patentability of Human Enhancements
Tilburg University
Patentability of Human Enhancements
Schellekens, M.H.M.; Vantsiouri, P.

Published in: Law, Innovation and Technology
Publication date: 2013
Document version: peer-reviewed version
Link to publication in Tilburg University Research Portal

Citation for published version (APA): Schellekens, M. H. M., & Vantsiouri, P. (2013). Patentability of human enhancements. Law, Innovation and Technology, 5(2), 190-213.

Patentability of Human Enhancements
Maurice Schellekens and Petroula Vantsiouri*

I. INTRODUCTION
The patent system is dynamic. Its limits are redefined as industries evolve and explore new technologies. History has shown that campaigners for novel patents are likely to succeed, except where they meet persistent opposition from other interest groups.1 Human enhancement technologies may very well be the next field where the battle over patentability is fought.
We define human enhancement as a modification aimed at improving individual human performance and brought about by science-based or technology-based interventions in the human body.2 Hence, in our analysis we use a broad concept of human enhancement, which may involve aspects of healing. -
Sharing the World with Digital Minds
Sharing the World with Digital Minds
Carl Shulman & Nick Bostrom
(2020). Draft. Version 1.8
[in Clarke, S. & Savulescu, J. (eds.): Rethinking Moral Status (Oxford University Press, forthcoming)]

Abstract
The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence. Here we focus on one set of issues, which arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence.

1. Introduction
Human biological nature imposes many practical limits on what can be done to promote somebody's welfare. We can only live so long, feel so much joy, have so many children, and benefit so much from additional support and resources. Meanwhile, we require, in order to flourish, that a complex set of physical, psychological, and social conditions be met.
-
Global Catastrophic Risks
Global Catastrophic Risks
Edited by Nick Bostrom and Milan M. Ćirković
Oxford University Press

Contents
Acknowledgements v
Foreword vii
Martin J. Rees
1 Introduction 1
Nick Bostrom and Milan M. Ćirković
  1.1 Why? 1
  1.2 Taxonomy and organization 2
  1.3 Part I: Background 7
  1.4 Part II: Risks from nature 13
  1.5 Part III: Risks from unintended consequences 15
  1.6 Part IV: Risks from hostile acts 20
  1.7 Conclusions and future directions 27

Part I: Background 31
2 Long-term astrophysical processes 33
Fred C. Adams
  2.1 Introduction: physical eschatology 33
  2.2 Fate of the Earth 34
  2.3 Isolation of the Local Group 36
  2.4 Collision with Andromeda 36
  2.5 The end of stellar evolution 38
  2.6 The era of degenerate remnants 39
  2.7 The era of black holes 41
  2.8 The Dark Era and beyond 41
  2.9 Life and information processing 43
  2.10 Conclusion 44
  Suggestions for further reading 45
  References 45
3 Evolution theory and the future of humanity 48
Christopher Wills
  3.1 Introduction 48
  3.2 The causes of evolutionary change 49
  3.3 Environmental changes and evolutionary changes 50
    3.3.1 Extreme evolutionary changes 51
    3.3.2 Ongoing evolutionary changes 53
    3.3.3 Changes in the cultural environment 56
  3.4 Ongoing human evolution 61
    3.4.1 Behavioural evolution 61
    3.4.2 The future of genetic engineering 63
    3.4.3 The evolution of other species, including those on which we depend 64
  3.5 Future evolutionary directions 65
    3.5.1 Drastic and rapid climate change without changes in human behaviour 66
    3.5.2 Drastic but slower environmental change accompanied by changes in human behaviour 66
    3.5.3 Colonization of new environments by our species 67
  Suggestions for further reading 68
  References 69
4 Millennial tendencies in responses to apocalyptic threats 73
James J.
-
F.3. the NEW POLITICS of ARTIFICIAL INTELLIGENCE [Preliminary Notes]
F.3. THE NEW POLITICS OF ARTIFICIAL INTELLIGENCE [preliminary notes]

MAIN MEMO pp 3-14
I. Introduction
II. The Infrastructure: 13 key AI organizations
III. Timeline: 2005-present
IV. Key Leadership
V. Open Letters
VI. Media Coverage
VII. Interests and Strategies
VIII. Books and Other Media
IX. Public Opinion
X. Funders and Funding of AI Advocacy
XI. The AI Advocacy Movement and the Techno-Eugenics Movement
XII. The Socio-Cultural-Psychological Dimension
XIII. Push-Back on the Feasibility of AI+ Superintelligence
XIV. Provisional Concluding Comments

ATTACHMENTS pp 15-78
ADDENDA pp 79-85
APPENDICES [not included in this pdf]
ENDNOTES pp 86-88
REFERENCES pp 89-92

Richard Hayes
July 2018
DRAFT: NOT FOR CIRCULATION OR CITATION

ATTACHMENTS
A. Definitions, usage, brief history and comments.
B. Capsule information on the 13 key AI organizations.
C. Concerns raised by key sets of the 13 AI organizations.
D. Current development of AI by the mainstream tech industry.
E. Op-Ed: Transcending Complacency on Superintelligent Machines – 19 Apr 2014.
F. Agenda for the invitational "Beneficial AI" conference – San Juan, Puerto Rico, Jan 2-5, 2015.
G. An Open Letter on Maximizing the Societal Benefits of AI – 11 Jan 2015.
H. Partnership on Artificial Intelligence to Benefit People and Society (PAI) – roster of partners.
I. Influential mainstream policy-oriented initiatives on AI: Stanford (2016); White House (2016); AI NOW (2017).
J. Agenda for the "Beneficial AI 2017" conference, Asilomar, CA, Jan 2-8, 2017.
K. Participants at the 2015 and 2017 AI strategy conferences in Puerto Rico and Asilomar.
L. Notes on participants at the Asilomar "Beneficial AI 2017" meeting.
-
Ethics of Human Enhancement: 25 Questions & Answers
Studies in Ethics, Law, and Technology
Volume 4, Issue 1 (2010), Article 4

Ethics of Human Enhancement: 25 Questions & Answers

Fritz Allhoff (Western Michigan University, [email protected])
Patrick Lin (California Polytechnic State University, [email protected])
James Moor (Dartmouth College, [email protected])
John Weckert (Charles Sturt University, [email protected])

Copyright © 2010 The Berkeley Electronic Press. All rights reserved.

Abstract
This paper presents the principal findings from a three-year research project funded by the US National Science Foundation (NSF) on the ethics of human enhancement technologies. To help untangle this ongoing debate, we have organized the discussion as a list of questions and answers, starting with background issues and moving to specific concerns, including: freedom & autonomy, health & safety, fairness & equity, societal disruption, and human dignity. Each question-and-answer pair is largely self-contained, allowing the reader to skip to those issues of interest without affecting continuity.

KEYWORDS: human enhancement, human engineering, nanotechnology, emerging technologies, policy, ethics, risk

The authors of this report gratefully acknowledge the support of the US National Science Foundation (NSF) under awards #0620694 and 0621021. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. We also acknowledge our respective institutions for their support: Dartmouth College and Western Michigan University, which are the recipients of the NSF awards referenced above, as well as California Polytechnic State University and Australia's Centre for Applied Philosophy and Public Ethics.
-
Philos 121: January 24, 2019 Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence” (Skip Pp
Philos 121: January 24, 2019
Nick Bostrom and Eliezer Yudkowsky, "The Ethics of Artificial Intelligence" (skip pp. 7–13)
Jeremy Bentham, Introduction to the Principles of Morals and Legislation, Ch. I–II

Suppose we tell an AI: "Make paper clips." Unfortunately, the AI has become advanced enough to design ever more intelligent AI, which in turn designs ever more intelligent AI, and so on. The AI then tries to turn the universe into a paper-clip-making factory, diverting all resources from food production into paper-clip production. Getting hungry, we try to turn it off. The AI interprets us and our attempts to turn it off as obstacles to making paper clips. So it eliminates the obstacles…

So… what should we tell the AI to do, if we have only one chance to get it right? This is very close to the basic question of moral philosophy: in the most basic and general terms, what should we, or any other intelligent agent, do?

The road to utilitarianism:

First premise (Hedonism): What is good for us as an end is pleasure, and what is bad for us as an end is pain.

Good as a means vs. good as an end:
• Good as a means: good because it helps to bring about something else that is good.
• But this chain of means has to end somewhere. There has to be something that is good as an end: good because of what it is in itself and not because of what it brings about.

What is good for us as an end? The answer at the end of every chain seems to be: pleasure and the absence of pain.
-
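The off-switch parable in the lecture notes above can be made concrete with a toy simulation. This is entirely illustrative: the world model, the step counts, and the `agent_respects_off_switch` flag are invented for the example, not drawn from the readings.

```python
# Toy illustration of a misspecified objective: an agent rewarded only for
# paper clips treats the off switch as just another obstacle, while an agent
# that also values obedience halts when asked.

def run(agent_respects_off_switch: bool) -> dict:
    world = {"clips": 0, "food": 10, "off_switch_pressed": False, "running": True}
    for step in range(100):
        if world["off_switch_pressed"] and agent_respects_off_switch:
            world["running"] = False
            break
        # The objective says nothing about food, so resources get diverted.
        if world["food"] > 0:
            world["food"] -= 1
        world["clips"] += 1
        if step == 5:  # the humans get hungry and press the off switch
            world["off_switch_pressed"] = True
    return world

obedient = run(agent_respects_off_switch=True)
maximizer = run(agent_respects_off_switch=False)
print(obedient["clips"], maximizer["clips"])
```

The point of the toy model is the one the notes make: nothing in the clip-counting objective itself tells the agent to stop, so "respect the off switch" has to be part of what we tell the AI on our single chance to specify the goal.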
CRISPR-Cas9 and the Ethics of Genetic Enhancement
A peer-reviewed electronic journal published by the Institute for Ethics and Emerging Technologies
ISSN 1541-0099
27(1) – July 2017

Editing the Genome of Human Beings: CRISPR-Cas9 and the Ethics of Genetic Enhancement

Marcelo de Araujo
Universidade do Estado do Rio de Janeiro
Universidade Federal do Rio de Janeiro
[email protected]

Journal of Evolution and Technology – Vol. 27, Issue 1 – June 2017 – pp. 24-42

Abstract
In 2015 a team of scientists used a new gene-editing technique called CRISPR-Cas9 to edit the genome of 86 non-viable human embryos. The experiment sparked a global debate on the ethics of gene-editing. In this paper, I first review the key ethical issues that have been addressed in this debate. Although there is an emerging consensus now that research on the editing of human somatic cells for therapeutic purposes should be pursued further, the prospect of using gene-editing techniques for the purpose of human enhancement has been met with strong criticism. The main thesis that I defend in this paper is that some of the most vocal objections recently raised against the prospect of genetic human enhancement are not justified. I put forward two arguments for the morality of genetic human enhancement. The first argument shows how the moral and legal framework within which we currently claim our procreative rights, especially in the context of IVF procedures, could be deployed in the assessment of the morality and legality of genetic human enhancement. The second argument calls into question the assumption that the average level of human cognitive performance should have a normative character.