Space Races: Settling the Universe Fast


Anders Sandberg, Future of Humanity Institute, Oxford Martin School, University of Oxford

January 24, 2018

Abstract

This report examines the issue of the resource demands and constraints for very fast large-scale settlement by technologically mature civilizations. I derive various bounds due to the energy and matter requirements for relativistic probes, and compare with bounds due to the need to avoid collisions with interstellar dust. When two groups of the same species race for the universe, the group with the biggest resources completely preempts the other group under conditions of transparency. Expanding nearby at a lower speed in order to gain resources to expand far at a high speed is effective. If alien competitors are expected, this sets a distance scale affecting the desired probe distribution.

This report examines the issue of the resource demands and constraints for very fast large-scale settlement. If a technologically mature species wishes to settle as much of the universe as possible as fast as possible, what should they do? If they seek to preempt other species in colonizing the universe, what is the optimal strategy? What if the competitor is another fraction of their own civilization?

1 Motivation and earlier work

In an earlier paper [2] we outlined how a civilization could use a small fraction of a solar system to send replicating probes to all reachable galaxies, and then repeat the process to colonize all stars inside them. The model was not optimized for speed but rather intended as an existence proof that it is physically feasible to colonize on vast scales, even beyond the discussions of galactic colonization found in the SETI literature. A key question is whether it is rational to use up much of the available resources for this expansion, or if even very aggressively expanding species would leave significant resources behind for consumption.
Hanson has argued that a species expanding outwards would have economic/evolutionary pressures to use more and more of its resources for settlement, "burning the cosmic commons" [1]. In this model the settlement distances are small, so there are numerous generations and hence evolutionary pressures could exert much influence. In contrast, our model of long-range colonization [2] uses limited resources and has few generations but is not optimized for speed: internal competitive pressures might hence lead to different strategies. The related model in [5] describes civilizations converting matter under their control to waste heat, examining the cosmological effects of this. There the energy costs of the probe expansion are not explicitly modelled: the conversion is just assumed to be due to consumption.

1.1 Assumptions

We will assume civilizations at technological maturity can perform and automate all activities we observe in nature. In particular, they are able to freely convert mass into energy¹ in order to launch relativistic objects able to self-replicate given local resources at target systems. The report also assumes a lightspeed limit on velocities: were FTL possible, the very concept of "as early as possible" would become ill-defined because of the time-travel effects. This is similar to assumptions in [2, 5]. The discussion of dust and radiation issues (3.1) assumes some physical limits due to the fragility of molecular matter; if more resilient structures are possible those constraints disappear. The universe is assumed to be dark-matter and dark-energy dominated as in the ΛCDM model, implying accelerating expansion and eventual causal separation of gravitationally bound clusters in 10^11 years or so.

The civilization has enough astronomical data and prediction power to reliably aim for distant targets with only minor need for dynamical course corrections.

We will assume claimed resources are inviolable.
One reason is that species able to convert matter to energy can perform a credible scorched-earth tactic: rather than let an invader have the resources, they can be dissipated, leaving the invader with a net loss due to the resources expended to claim them. Unless the invader has goals other than resources, this makes it irrational to attempt to invade.

2 Energy constraints

Assume the species has available resource mass R and can launch probes of mass m. If they use a fraction f of R to make probes and the rest to power them (through perfect energy conversion) they will get N = fR/m probes.

The energy it takes to accelerate a probe to a given γ-factor is E_probe = (γ − 1)mc². The energy to accelerate all probes is (γ − 1)mc²·fR/m = (γ − 1)fRc². The available energy to launch is E = Rc²(1 − f), so equating the two energies produces

    γ = 1 + (1 − f)/f = 1/f.    (1)

The speed as a function of gamma is v(γ) = c√(1 − 1/γ²). The speed as a function of f is hence

    v(f) = c√(1 − 1/(1 + (1 − f)/f)²) = c√(1 − f²).    (2)

Neatly, the speed (as a fraction of c and function of f) is a quarter circle.

¹ Whether dark matter or only baryonic matter is usable as mass-energy to the civilization does not play a role in the argument, but may be important for post-settlement activity.

2.1 Simple constraints

There is a minimum probe mass, required in order to slow down the payload² at the destination. Using the relativistic rocket equation we get the requirement

    m ≥ m_payload · exp(tanh⁻¹(v/c)).    (3)

The multiplicative factor is 3.7 for γ = 2, 5.8 for γ = 3 and 19.9 for γ = 10. Note that this assumes a perfect mass-energy photon rocket (I_sp/c = 1).
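The relations between the probe mass fraction f, the affordable γ-factor, and the launch speed can be checked numerically. A minimal sketch (function names are mine, not from the report):

```python
import math

def gamma_of_f(f):
    # Energy balance: (gamma - 1) * f * R * c^2 = (1 - f) * R * c^2
    return 1.0 + (1.0 - f) / f

def v_of_f(f):
    # Launch speed (as a fraction of c) affordable at probe mass fraction f
    g = gamma_of_f(f)
    return math.sqrt(1.0 - 1.0 / g**2)

# Eq. (1): gamma = 1/f; Eq. (2): v(f)/c = sqrt(1 - f^2), a quarter circle
for f in (0.1, 0.2694, 0.5, 0.9):
    assert abs(gamma_of_f(f) - 1.0 / f) < 1e-12
    assert abs(v_of_f(f) - math.sqrt(1.0 - f * f)) < 1e-12
```

Spending almost everything on launch energy (small f) pushes the speed toward c, while f close to 1 leaves the many probes nearly at rest.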
For less effective rockets such as fusion rockets (I_sp/c = 0.119) or fission rockets (I_sp/c = 0.04) the mass requirements are far higher, m ≥ m_payload · exp((c/I_sp) tanh⁻¹(v/c)).

There is a trivial bound N ≤ R/m_payload, corresponding to the smallest, slowest and most numerous possible probes (which would require external slowing, for example through a magsail or Hubble expansion). Another trivial constraint is that f is bounded by N = 1, that is, f ≥ m/R. Hence the highest possible speed is

    v_{N=1} = c√(1 − (m/R)²),    (4)

when all resources are spent on accelerating a single probe.

2.2 Expanding across the universe

Given an energy budget, how should probes be allocated to distance to reach everything as fast as possible? Here "as fast as possible" means minimizing the maximum arrival time.

Most points in a sphere are closer to the surface than the center: the average distance is a = (3/4)r. In co-moving coordinates the amount of matter within distance L grows as L³. Hence most resources are remote. This is limited by access: only material within d_h ≈ 5 Gpc can eventually be reached due to the expansion of the universe, and only gravitationally bound matter such as some superclusters (typically smaller than 20 Mpc [4]) will remain cohesive in future expansion.

Given the above, reaching the surface of the sphere needs to be as fast as possible. If interior points are reached faster, that energy could be reallocated to launching for the more distant points; hence the optimal (in this sense) expansion reaches every destination at the same time.

In the simple non-expanding universe case the necessary velocity of an individual probe aimed at distance r is v(r) = βcr/d_h, where β is the maximal fraction of lightspeed possible due to other constraints. This corresponds to γ(r) = 1/√(1 − k²r²), where k = β/d_h.
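The braking-mass factors of Eq. (3) and the penalty for lower exhaust velocities are easy to reproduce, using exp(tanh⁻¹(v/c)) = √((1 + v/c)/(1 − v/c)). A sketch (the helper name is mine):

```python
import math

def mass_ratio(gamma, isp_over_c=1.0):
    # Minimum m / m_payload needed to brake a payload arriving at Lorentz
    # factor gamma: relativistic rocket equation exp((c/Isp) * atanh(v/c))
    v_over_c = math.sqrt(1.0 - 1.0 / gamma**2)
    return math.exp(math.atanh(v_over_c) / isp_over_c)

# Photon rocket (Isp/c = 1): the multiplicative factors quoted in the text
for g, expected in [(2, 3.7), (3, 5.8), (10, 19.9)]:
    assert round(mass_ratio(g), 1) == expected

# Fusion (Isp/c = 0.119) and fission (Isp/c = 0.04) are far more demanding
assert mass_ratio(2, 0.119) > 1e4
assert mass_ratio(2, 0.04) > 1e14
```

The exponential dependence on c/I_sp is why anything short of a near-perfect photon rocket makes relativistic braking prohibitively massive.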
The total kinetic energy for sending probes to the entire sphere is

    E_total = ∫₀^{d_h} 4πr²ρ_d (γ(r) − 1)mc² dr,

where ρ_d = 3fR/(4πm·d_h³) denotes the destination density of the N probes (assumed to be constant throughout the sphere). Expanding,

    E_total = 4πρ_d mc² ∫₀^{d_h} r² (1/√(1 − k²r²) − 1) dr
            = (fRc²/(2β³)) [3 sin⁻¹(β) + 3(β³ − β)/√(1 − β²) − 2β³].    (5)

The bracket represents the geometric part of the distribution. The d_h's cancel, since the expression only distributes the N probes; a settlement program trying to reach the entire volume would aim for N = fR/m ∝ d_h³.

The highest value of β compatible with having E = R(1 − f)c² Joules of energy to accelerate can be found by equating E_total with E and expressing it as a function of f. The result is an arc descending from β = 0, f = 1 to β = 1, f = 2/(3π − 2) ≈ 0.2694, independent of m (figure 1). Assuming f = 0.2694 gives γ = 1/f = 3.7. Building more or heavier probes (higher f) than this curve allows leaves less energy to power them than is needed for the intended β.

[Figure 1: Plot of f as a function of β under the requirement that E_total = E. Points under the curve represent feasible choices.]

² Potential m_payload mentioned have been 500 tons, 30 grams, or 8 · 10⁻⁷ grams [2].

If remote destinations are more important to reach early than nearby ones (or Hubble expansion is a major factor), then a nonlinear velocity distribution v(r) = βc(r/d_h)^α with α > 1 would ensure faster arrival at the frontier. The overall effect is to reduce the overall energy demand (roughly as 1/α), since the average velocity is lower. The extra energy can of course be allocated to increase β to 1, but beyond that it is likely rational to add γ to the interior probes.

2.3 Expansion of the universe

In an expanding universe an initial high velocity will be counteracted by the expansion: in co-moving coordinates the initial velocity v₀ declines as v₀/a(t), where a(t) is the cosmological scale factor.
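The closed form of Eq. (5) can be cross-checked against direct numerical integration of the kinetic-energy integral. A sketch in units where fRc² = 1 and d_h = 1 (function names are mine):

```python
import math

def e_closed(beta):
    # Eq. (5) in units of f*R*c^2:
    # (1/(2 beta^3)) * [3 asin(beta) + 3(beta^3 - beta)/sqrt(1 - beta^2) - 2 beta^3]
    bracket = (3.0 * math.asin(beta)
               + 3.0 * (beta**3 - beta) / math.sqrt(1.0 - beta**2)
               - 2.0 * beta**3)
    return bracket / (2.0 * beta**3)

def e_numeric(beta, n=20_000):
    # Midpoint rule for 3 * int_0^1 x^2 (1/sqrt(1 - beta^2 x^2) - 1) dx,
    # i.e. the integral over the sphere with r = x * d_h and
    # 4 pi rho_d m = 3 f R / d_h^3, so the d_h's cancel.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * x * (1.0 / math.sqrt(1.0 - (beta * x)**2) - 1.0)
    return 3.0 * total * h

for beta in (0.3, 0.6, 0.9):
    assert abs(e_closed(beta) - e_numeric(beta)) < 1e-6
```

The agreement confirms that the bracket is purely geometric: it depends only on β, not on d_h or the probe mass m.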