Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Artificial General Intelligence and Classical Neural Network
Artificial General Intelligence and Classical Neural Network
Pei Wang
Department of Computer and Information Sciences, Temple University
Room 1000X, Wachman Hall, 1805 N. Broad Street, Philadelphia, PA 19122
Web: http://www.cis.temple.edu/~pwang/ Email: [email protected]

Abstract— The research goal of Artificial General Intelligence (AGI) and the notion of Classical Neural Network (CNN) are specified. With respect to the requirements of AGI, the strength and weakness of CNN are discussed, in the aspects of knowledge representation, learning process, and overall objective of the system. To resolve the issues in CNN in a general and efficient way remains a challenge to future neural network research.

I. ARTIFICIAL GENERAL INTELLIGENCE
It is widely recognized that the general research goal of Artificial Intelligence (AI) is twofold:
• As a science, it attempts to provide an explanation of the mechanism in the human mind-brain complex that is usually called “intelligence” (or “cognition”, “thinking”, etc.).

• Capability. Since we often judge the level of intelligence of other people by evaluating their problem-solving capability, some people believe that the best way to achieve AI is to build systems that can solve hard practical problems. Examples: various expert systems.
• Function. Since the human mind has various cognitive functions, such as perceiving, learning, reasoning, acting, and so on, some people believe that the best way to achieve AI is to study each of these functions one by one, as certain input-output mapping. Example: various intelligent tools.
• Principle. Since the human mind seems to follow certain principles of information processing, some people believe that the best way to achieve AI is to let computer systems …
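Wang's "Classical Neural Network" refers, roughly, to the familiar multilayer feedforward network trained by error backpropagation on a fixed input-output mapping. The paper contains no code; as a concrete reference point for the kind of system whose knowledge representation and learning process it evaluates against AGI requirements, here is a minimal NumPy sketch (the XOR task, layer sizes, learning rate, and all variable names are illustrative assumptions, not taken from the paper).

```python
import numpy as np

# Minimal "classical" feedforward network: one hidden layer of sigmoid units,
# trained by error backpropagation on a fixed input-output mapping (XOR here).
# Purely illustrative; not code from Wang's paper.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: the network's "knowledge" is distributed over W1, b1, W2, b2.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error with respect to each weight.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]] after training
```

Such a network learns a single input-output function from examples, which corresponds to the "Function" approach listed above; the paper's question is how far components of this kind fall short of the general-purpose adaptivity that AGI demands.
-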
Deep Learning and the Global Workspace Theory
Opinion - Paper under review

Deep Learning and the Global Workspace Theory
Rufin VanRullen (1,2) and Ryota Kanai (3)
1 CerCo, CNRS UMR5549, Toulouse, France
2 ANITI, Université de Toulouse, France
3 Araya Inc, Tokyo, Japan
arXiv:2012.10390v2 [cs.AI] 20 Feb 2021

Abstract
Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace theory refers to a large-scale system integrating and distributing information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a unique, amodal global latent workspace (GLW). Potential functional advantages of GLW are reviewed, along with neuroscientific implications.

1 Cognitive neural architectures in brains and machines
Deep learning denotes a machine learning system using artificial neural networks with multiple "hidden" layers between the input and output layers. Although the underlying theory is more than 3 decades old [1, 2], it is only in the last decade that these systems have started to fully reveal their potential [3]. Many of the recent breakthroughs in AI (Artificial Intelligence) have been fueled by deep learning. Neuroscientists have been quick to point out the similarities (and differences) between the brain and these deep artificial neural networks [4-9]. The advent of deep learning has allowed the efficient computer implementation of perceptual and cognitive functions that had been so far inaccessible.
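The roadmap in the abstract, unsupervised neural translation between the latent spaces of independently trained networks to form a shared global latent workspace, can be sketched concretely. The snippet below is not the authors' implementation; the module names, latent dimensions, and the cycle-consistency objective are assumptions chosen to illustrate one plausible reading of the proposal, using PyTorch.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the authors' code): two frozen, pretrained encoders
# for different modalities are linked through a shared "global latent workspace"
# by learning translators into and out of that space, trained with
# reconstruction (cycle-consistency) losses on unpaired data.

D_VISION, D_TEXT, D_GLW = 512, 768, 256   # latent sizes (assumed)

class ToWorkspace(nn.Module):
    """Maps a modality-specific latent vector into the shared workspace."""
    def __init__(self, d_in, d_glw=D_GLW):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_glw), nn.GELU(),
                                 nn.Linear(d_glw, d_glw))
    def forward(self, z):
        return self.net(z)

class FromWorkspace(nn.Module):
    """Maps a workspace vector back into a modality-specific latent space."""
    def __init__(self, d_out, d_glw=D_GLW):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_glw, d_glw), nn.GELU(),
                                 nn.Linear(d_glw, d_out))
    def forward(self, w):
        return self.net(w)

enc_to_glw = {"vision": ToWorkspace(D_VISION), "text": ToWorkspace(D_TEXT)}
glw_to_enc = {"vision": FromWorkspace(D_VISION), "text": FromWorkspace(D_TEXT)}

def cycle_loss(z, modality):
    """Encode into the workspace and decode back: an unsupervised training signal."""
    w = enc_to_glw[modality](z)
    z_rec = glw_to_enc[modality](w)
    return nn.functional.mse_loss(z_rec, z)

# One example step, with random stand-ins for frozen-encoder outputs.
z_vis = torch.randn(32, D_VISION)   # e.g. output of a frozen vision encoder
z_txt = torch.randn(32, D_TEXT)     # e.g. output of a frozen language encoder
loss = cycle_loss(z_vis, "vision") + cycle_loss(z_txt, "text")
loss.backward()   # gradients reach only the translator modules
```

In a fuller version, matched cross-modal pairs (when available) could add a supervised translation loss, and a broadcast step from the workspace back to every module would implement the global availability that the Global Workspace theory emphasizes.
-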
Emerging Infectious Outbreak Inhibits Pain Empathy Mediated Prosocial Behaviour
***THIS IS A WORKING PAPER THAT HAS NOT BEEN PEER-REVIEWED***

Emerging infectious outbreak inhibits pain empathy mediated prosocial behaviour
Siqi Cao (1,2), Yanyan Qi (3), Qi Huang (4), Yuanchen Wang (5), Xinchen Han (6), Xun Liu (1,2)*, Haiyan Wu (1,2)*
1 CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
2 Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
3 Department of Psychology, School of Education, Zhengzhou University, Zhengzhou, China
4 College of Education and Science, Henan University, Kaifeng, China
5 Department of Brain and Cognitive Science, University of Rochester, Rochester, NY, United States
6 School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, Beijing, China
Corresponding authors: please address correspondence to Xun Liu ([email protected]) or Haiyan Wu ([email protected])
Disclaimer: This is preliminary scientific work that has not been peer reviewed and requires replication. We share it here to inform other scientists conducting research on this topic, but the reported findings should not be used as a basis for policy or practice.

Abstract
People differ in experienced anxiety, empathy, and prosocial willingness during the coronavirus outbreak. Although increased empathy has been associated with prosocial behaviour, little is known about how the pandemic changes people's empathy and prosocial willingness. We conducted a study with 1,190 participants before (N=520) and after (N=570) the COVID-19 outbreak, with measures of trait empathy, pain empathy and prosocial willingness. Here we show that prosocial willingness decreased significantly during the COVID-19 outbreak, in accordance with compassion fatigue theory. Trait empathy could affect prosocial willingness through empathy level.
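The final claim, that trait empathy affects prosocial willingness through empathy level, is a mediation hypothesis. The sketch below is not the authors' analysis code; the simulated data, variable names, and the use of ordinary least squares regressions are assumptions, shown only to make the structure of such a test concrete.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative mediation test (not the authors' code): does empathy level
# mediate the effect of trait empathy on prosocial willingness?
rng = np.random.default_rng(42)
n = 570  # e.g. the post-outbreak sample size reported in the abstract
trait = rng.normal(size=n)                        # trait empathy score
empathy = 0.6 * trait + rng.normal(size=n)        # state empathy level (simulated)
prosocial = 0.5 * empathy + 0.1 * trait + rng.normal(size=n)
df = pd.DataFrame({"trait": trait, "empathy": empathy, "prosocial": prosocial})

# Path c: total effect of trait empathy on prosocial willingness.
total = smf.ols("prosocial ~ trait", data=df).fit()
# Path a: trait empathy -> empathy level.
path_a = smf.ols("empathy ~ trait", data=df).fit()
# Paths b and c': empathy level and trait empathy -> prosocial willingness.
path_bc = smf.ols("prosocial ~ empathy + trait", data=df).fit()

indirect = path_a.params["trait"] * path_bc.params["empathy"]
print(f"total effect c   = {total.params['trait']:.3f}")
print(f"indirect a*b     = {indirect:.3f}")
print(f"direct effect c' = {path_bc.params['trait']:.3f}")
```

In practice the indirect effect (a*b) would normally be tested with bootstrapped confidence intervals rather than read off point estimates from a single simulated sample.
-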
Artificial Intelligence: Distinguishing Between Types & Definitions
19 Nev. L.J. 1015 (2019)

ARTIFICIAL INTELLIGENCE: DISTINGUISHING BETWEEN TYPES & DEFINITIONS
Rex Martinez*

“We should make every effort to understand the new technology. We should take into account the possibility that developing technology may have important societal implications that will become apparent only with time. We should not jump to the conclusion that new technology is fundamentally the same as some older thing with which we are familiar. And we should not hastily dismiss the judgment of legislators, who may be in a better position than we are to assess the implications of new technology.” –Supreme Court Justice Samuel Alito [1]

TABLE OF CONTENTS
INTRODUCTION
I. WHY THIS MATTERS
II. WHAT IS ARTIFICIAL INTELLIGENCE?
  A. The Development of Artificial Intelligence
  B. Computer Science Approaches to Artificial Intelligence
  C. Autonomy
  D. Strong AI & Weak AI
III. CURRENT STATE OF AI DEFINITIONS
  A. Black’s Law Dictionary
  B. Nevada
-
The Neural Pathways, Development and Functions of Empathy
EVIDENCE-BASED RESEARCH ON CHARITABLE GIVING (SPI FUNDED)
The Neural Pathways, Development and Functions of Empathy
Jean Decety, University of Chicago
Working Paper No. 125-SPI, April 2015
Available online at www.sciencedirect.com (ScienceDirect)

Empathy reflects an innate ability to perceive and be sensitive to the emotional states of others coupled with a motivation to care for their wellbeing. It has evolved in the context of parental care for offspring as well as within kinship. Current work demonstrates that empathy is underpinned by circuits connecting the brainstem, amygdala, basal ganglia, anterior cingulate cortex, insula and orbitofrontal cortex, which are conserved across many species. Empirical studies document that empathetic reactions emerge early in life, and that they are not automatic. Rather they are heavily influenced and modulated by interpersonal and contextual factors, which impact behavior and cognitions. However, the mechanisms supporting empathy are also flexible and amenable to behavioral interventions that can promote caring beyond kin and kith.

… and relative intensity without confusion between self and other; secondly, empathic concern, which corresponds to the motivation to care for another’s welfare; and thirdly, perspective taking (or cognitive empathy), the ability to consciously put oneself into the mind of another and understand what that person is thinking or feeling.

Proximate mechanisms of empathy
Each of these emotional, motivational, and cognitive facets of empathy relies on specific mechanisms, which reflect evolved abilities of humans and their ancestors to detect and respond to social signals necessary for surviving, reproducing, and maintaining well-being. While it is …
-
The Neural Components of Empathy: Predicting Daily Prosocial Behavior
doi:10.1093/scan/nss088 SCAN (2014) 9, 39–47

The neural components of empathy: Predicting daily prosocial behavior
Sylvia A. Morelli, Lian T. Rameson, and Matthew D. Lieberman
Department of Psychology, University of California, Los Angeles, Los Angeles, CA 90095-1563

Previous neuroimaging studies on empathy have not clearly identified neural systems that support the three components of empathy: affective congruence, perspective-taking, and prosocial motivation. These limitations stem from a focus on a single emotion per study, minimal variation in amount of social context provided, and lack of prosocial motivation assessment. In the current investigation, 32 participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing painful, anxious, and happy events that varied in valence and amount of social context provided. They also completed a 14-day experience sampling survey that assessed real-world helping behaviors. The results demonstrate that empathy for positive and negative emotions selectively activates regions associated with positive and negative affect, respectively. In addition, the mirror system was more active during empathy for context-independent events (pain), whereas the mentalizing system was more active during empathy for context-dependent events (anxiety, happiness). Finally, the septal area, previously linked to prosocial motivation, was the only region that was commonly activated across empathy for pain, anxiety, and happiness. Septal activity during each of these empathic experiences was predictive of daily helping. These findings suggest that empathy has multiple input pathways, produces affect-congruent activations, and results in septally mediated prosocial motivation.

Keywords: empathy; prosocial behavior; septal area

INTRODUCTION
Human beings are intensely social creatures who have a need to belong …
… neuroimaging studies of empathy, 30 focused on empathy for pain (Fan et al., 2011).
-
Artificial Consciousness and the Consciousness-Attention Dissociation
Consciousness and Cognition 45 (2016) 210–225
Contents lists available at ScienceDirect
Consciousness and Cognition
journal homepage: www.elsevier.com/locate/concog

Review article
Artificial consciousness and the consciousness-attention dissociation
Harry Haroutioun Haladjian (a,*), Carlos Montemayor (b)
a Laboratoire Psychologie de la Perception, CNRS (UMR 8242), Université Paris Descartes, Centre Biomédical des Saints-Pères, 45 rue des Saints-Pères, 75006 Paris, France
b San Francisco State University, Philosophy Department, 1600 Holloway Avenue, San Francisco, CA 94132 USA

Article history: Received 6 July 2016; Accepted 12 August 2016
Keywords: Artificial intelligence; Artificial consciousness; Consciousness; Visual attention; Phenomenology; Emotions; Empathy

Abstract
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans.
-
Artificial Intelligence/Artificial Wisdom - A Drive for Improving Behavioral and Mental Health Care
Artificial Intelligence/Artificial Wisdom - A Drive for Improving Behavioral and Mental Health Care
(06 September to 10 September 2021)
Department of Computer Science & Information Technology, Central University of Jammu, J&K-181143

Preamble
Artificial Intelligence is the capability of a machine to imitate intelligent human behaviour. Machine learning is based on the idea that machines should be able to learn and adapt through experience. Machine learning is a subset of AI: all machine learning counts as AI, but not all AI counts as machine learning. Artificial intelligence (AI) technology holds both great promise to transform mental healthcare and potential pitfalls. AI is increasingly employed in healthcare fields such as oncology, radiology, and dermatology. However, the use of AI in mental healthcare and neurobiological research has been modest. Given the high morbidity and mortality in people with psychiatric disorders, coupled with a worsening shortage of mental healthcare providers, there is an urgent need for AI to help identify high-risk individuals and provide interventions to prevent and treat mental illnesses. According to the publication Spectrum News, a form of AI called "deep learning" is sometimes better able than human beings to spot relevant patterns. This five-day course provides an overview of AI approaches and current applications in mental healthcare, a review of recent original research on AI specific to mental health, and a discussion of how AI can supplement clinical practice while considering its current limitations, areas needing additional research, and ethical implications regarding AI technology. The proposed workshop is envisaged to provide an opportunity for learners to seek and share knowledge and teaching skills in cutting-edge areas from experienced and reputed faculty.
-
Anterior Insular Cortex and Emotional Awareness
REVIEW
Anterior Insular Cortex and Emotional Awareness
Xiaosi Gu (1,2*), Patrick R. Hof (3,4), Karl J. Friston (1), and Jin Fan (3,4,5,6)
1 Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom WC1N 3BG
2 Virginia Tech Carilion Research Institute, Roanoke, Virginia 24011
3 Fishberg Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, New York 10029
4 Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York 10029
5 Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York 10029
6 Department of Psychology, Queens College, The City University of New York, Flushing, New York 11367

ABSTRACT
This paper reviews the foundation for a role of the human anterior insular cortex (AIC) in emotional awareness, defined as the conscious experience of emotions. We first introduce the neuroanatomical features of AIC and existing findings on emotional awareness. Using empathy, the awareness and understanding of other people’s emotional states, as a test case, we then present evidence to demonstrate: 1) AIC and anterior cingulate cortex (ACC) are commonly coactivated as … dissociable from ACC, 3) AIC integrates stimulus-driven and top-down information, and 4) AIC is necessary for emotional awareness. We propose a model in which AIC serves two major functions: integrating bottom-up interoceptive signals with top-down predictions to generate a current awareness state and providing descending predictions to visceral systems that provide a point of reference for autonomic reflexes. We argue that AIC is critical and necessary for emotional awareness. J. Comp. …
-
Machine Guessing – I
Machine Guessing – I
David Miller
Department of Philosophy, University of Warwick, COVENTRY CV4 7AL, UK
e-mail: [email protected]
© copyright D. W. Miller 2011–2018

Abstract
According to Karl Popper, the evolution of science, logically, methodologically, and even psychologically, is an involved interplay of acute conjectures and blunt refutations. Like biological evolution, it is an endless round of blind variation and selective retention. But unlike biological evolution, it incorporates, at the stage of selection, the use of reason. Part I of this two-part paper begins by repudiating the common beliefs that Hume's problem of induction, which compellingly confutes the thesis that science is rational in the way that most people think that it is rational, can be solved by assuming that science is rational, or by assuming that Hume was irrational (that is, by ignoring his argument). The problem of induction can be solved only by a non-authoritarian theory of rationality. It is shown also that because hypotheses cannot be distilled directly from experience, all knowledge is eventually dependent on blind conjecture, and therefore itself conjectural. In particular, the use of rules of inference, or of good or bad rules for generating conjectures, is conjectural. Part II of the paper expounds a form of Popper's critical rationalism that locates the rationality of science entirely in the deductive processes by which conjectures are criticized and improved. But extreme forms of deductivism are rejected. The paper concludes with a sharp dismissal of the view that work in artificial intelligence, including the JSM method cultivated extensively by Victor Finn, does anything to upset critical rationalism.
-
How Empiricism Distorts AI and Robotics
HOW EMPIRICISM DISTORTS AI AND ROBOTICS
Dr Antoni Diller
School of Computer Science, University of Birmingham, Birmingham, B15 2TT, UK
email: [email protected]

Abstract
The main goal of AI and Robotics is that of creating a humanoid robot that can interact with human beings. Such an android would have to have the ability to acquire knowledge about the world it inhabits. Currently, the gaining of beliefs through testimony is hardly investigated in AI because researchers have uncritically accepted empiricism. Great emphasis is placed on perception as a source of knowledge. It is important to understand perception, but even more important is an understanding of testimony. A sketch of a theory of testimony is presented and an appeal for more research is made.

… Currently, Cog has no legs, but it does have robotic arms and a head with video cameras for eyes. It is one of the most impressive robots around as it can recognise various physical objects, distinguish living from non-living things and imitate what it sees people doing. Cynthia Breazeal worked with Cog when she was one of Brooks's graduate students. She said that after she became accustomed to Cog's strange appearance interacting with it felt just like playing with a baby and it was easy to imagine that Cog was alive. There are several major research projects in Japan. A common rationale given by scientists engaged in these is the need to provide for Japan's ageing population. They say their robots will eventually act as carers for those unable to look after themselves.
-
From Homo Sapiens to Robo Sapiens: The Evolution of Intelligence
Review
From Homo Sapiens to Robo Sapiens: The Evolution of Intelligence
Anat Ringel Raveh * and Boaz Tamir *
Faculty of Interdisciplinary Studies, S.T.S. Program, Bar-Ilan University, Ramat-Gan 5290002, Israel; [email protected] (A.R.R.); [email protected] (B.T.)
Received: 30 October 2018; Accepted: 18 December 2018; Published: 21 December 2018

Abstract: In this paper, we present a review of recent developments in artificial intelligence (AI) towards the possibility of an artificial intelligence equal to that of human intelligence. AI technology has always shown a stepwise increase in its capacity and complexity. The last step took place several years ago with the increased progress in deep neural network technology. Each such step goes hand in hand with our understanding of ourselves and our understanding of human cognition. Indeed, AI was always about the question of understanding human nature. AI percolates into our lives, changing our environment. We believe that the next few steps in AI technology, and in our understanding of human behavior, will bring about much more powerful machines that are flexible enough to resemble human behavior. In this context, there are two research fields: Artificial Social Intelligence (ASI) and Artificial General Intelligence (AGI). The authors also allude to one of the main challenges for AI, embodied cognition, and explain how it can be viewed as an opportunity for further progress in AI research.

Keywords: Artificial Intelligence (AI); artificial general intelligence (AGI); artificial social intelligence (ASI); social sciences; singularity; complexity; embodied cognition; value alignment

1. Introduction
1.1. From Intelligence to Super-Intelligence
In this paper we present a review of recent developments in AI towards the possibility of an artificial intelligence equal to that of human intelligence.