Glossary of Defined Terms


Ambiactive: Processes and systems which simultaneously stimulate and suppress rates and sensitivities.
Ambidextrous: Able to use both hands equally well, that is, equally capable of performing complementary activities.
Ambimodal: Processes and systems which produce multiple outcome states, including different types of agentic modality.
Ambiopic: Processes and systems which combine nearsighted myopia and farsighted hyperopia.
Augmented: Significantly assisted by technologies, and especially digital technologies.
Compositive: Methods or models which are composed and customized to suit different contexts and problems.
Digitalization: The transformation of goal-directed processes through the application of digital technologies.
Discontinuous: Processes and systems with different levels which have significant gaps between them.
Dyssynchronous: Processes and systems which cycle at different rates and are poorly synchronized.
Entrogenous: Mediators of in-betweenness, whereby systems develop and transform boundaries.
Hyperactive: Abnormally or extremely active.
Hyperheuristic: Simplified means of selecting sets of related models or metamodels.
Hyperopic: Farsighted, expansive processes, especially in sampling and search; the opposite of myopia.
Hyperparameter: Fundamental characteristics or attributes of metamodels, including their broad categories, relations, and mechanisms.
Hypersensitive: Abnormally or extremely sensitive.
Maximize: To partially rank order a set, and then choose a member which is no worse than any other member, given some evaluation criteria.
Metaheuristic: Simplified means of selecting sets of related heuristics or models.
Metamodel: The common features of a family or related set of models.
Optimize: To fully rank order a set, and then choose the member which ranks above all others.
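The Maximize and Optimize entries above describe two different selection procedures: choosing a maximal member under a partial ranking versus choosing the single top member under a full ranking. The following minimal Python sketch, which uses invented options and scores rather than anything from the book, illustrates the distinction.

```python
# Illustrative sketch (not from the book): maximizing over a partially ranked
# set versus optimizing over a fully ranked set, with made-up option scores.

options = {"A": (3, 1), "B": (2, 3), "C": (1, 3)}  # (criterion_1, criterion_2)

def dominates(x, y):
    # x outranks y in the partial order if it is at least as good on every
    # criterion and strictly better on at least one.
    return all(a >= b for a, b in zip(x, y)) and x != y

# Maximize: choose a member which is no worse than any other member,
# i.e. any option that no other option dominates (a maximal element).
maximal = [name for name, score in options.items()
           if not any(dominates(other, score) for other in options.values())]
print("Maximal options:", maximal)  # ['A', 'B'] -- neither dominates the other

# Optimize: impose a full ranking (here, the sum of both criteria)
# and choose the member which ranks above all others.
optimum = max(options, key=lambda name: sum(options[name]))
print("Optimum under the full ranking:", optimum)  # 'B'
```

A fully ranked set always yields a single optimum, whereas a partially ranked set may contain several maximal members, which is the gap the two glossary entries mark.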
Index

A
Adaptation: cultural, 160; by design, 241; personality and, 15–18, 78–81; rates of, 12; science of augmented agency and, 279–280; technological innovation and, 12–13
Adaptive: aspirations, 201, 205; expectations, 205; feedback, 231, 240; fitness, 13, 92, 204; learning, 236, 237
Affective computing, 11, 140, 144, 151, 161
Agency: modal dilemmas of, 84–85; technology and, 29; types of, 6. See also Artificial agency; Augmented agency; Human agency
Agentic functioning: activation mechanisms of, 47–51; combinatorics of, 5, 272–273; complexity of, 31 (see also Discontinuous processing); divergence of, 30; modalities of, 31 (see also Ambimodality); ranges of, 31 (see also Ambiopia); rates of, 31 (see also Dyssynchronous processing); sensitivities of, 31 (see also Ambiactivity); upregulation and downregulation of, 78
Agentic metamodels: digitalized generative, 45–48; modern adaptive, 44–45; premodern replicative, 40; problematics of, 57
Aggregation: alternative explanation of, 80; of choice, 81; collective agents and, 76; mechanisms of, 93–95
Ambiactivity: definition of, 175–176; evaluation of performance and, 209; learning and, 225–226, 230–232; potential benefits of, 233; self-regulation and, 175
Ambidexterity: self-generation and, 253, 260
Ambiguity, 289: empathy and, 157; entrogeneity and, 51–52; of learning, 214, 226, 233; positional, 145
Ambimodality: artificial compression and, 82; definition of, 85; functional, 277 (see also Bounded rationality); high ambimodality, 87–91; low ambimodality, 86–89; self-regulation and, 191–192
Ambiopia: definition of, 117 (see also Hyperopia); highly ambiopic metamodels, 119; moderately ambiopic metamodels, 103–104; non-ambiopic metamodels, 123–125; problem solving and, 115–118
Ambivalence, 289: learning and, 233–234
Annals School, 286
Anthropomorphism, 29, 70, 283
Aristotle (384-322 BCE), 55, 227, 285
Artificial agency: autonomy and, 10, 51; capabilities of, 45, 225, 230–232; compression of, 82; definition of, 6; different levels of, 9, 58, 63–65; empathic capabilities of, 126; metamodels of, 59; over-processing by, 112
Artificial empathy, 140. See also Affective computing
Artificial intelligence: biases and, 30; capabilities of, 23; compositive methods and, 24, 46; metamodels of, 58–61; mind-body problem and, 11, 68; risks of, 241
Artificial personality, 11, 48, 68, 140, 151, 232
Augmented agency: definition of, 6 (see also Human-machine); dilemmas of, 65–66, 253–254, 273–274; humanization of, 29; metamodels of, 45–48
Autonomous vehicles: collaborative supervision of, 9, 29, 46; evaluation of performance and, 214; learning and, 241; self-regulation and, 180, 181, 188
Autonomy: augmented learning and, 242; collectivity and, 265; Enlightenment and, 75, 91, 228; evaluation of performance and, 220; modernity and, 44, 58, 139; risks for, 84, 89, 160, 193, 263; self-generation and, 247; self-regulation and, 169

B
Bandura, Albert: digitalization of agency, 11, 173, 194; psychology of agency, 15, 169, 201, 228
Behavioral theories: adaptive aspirations and, 205; of decision making, 69, 128–129; of economics, 106, 147; empathicing and, 144; future of, 129, 279–280; of organizations, 81, 93
Behavior-performance: agentic metamodels and, 41–49; augmentation of, 21; evaluation of, 199–200; limited capabilities of, 19; personality and, 11
Bias: artificial intelligence and, 10; augmented risk of, 25, 27, 49, 64, 234; behavioral theories and, 19, 104; confirmation, 233; learning and, 226; machine learning and, 108, 141–142; persistence of, 6, 141, 158, 160; reinforcement of, 65, 142
Bloch, Marc (1886-1944), 286
Bounded rationality: collective agents and, 93; digitalization and, 21, 128–129. See also Satisficing
Braudel, Fernand (1902-1985), 286
Bruner, Jerome (1915-2016), 143, 161, 228

C
Capabilities: digitalization of, 14; human-artificial divergence, 31, 56, 65; limitations of, 2, 4, 19–20, 103
Chomsky, Noam, 16, 21, 228
Clinical medicine: augmentation of, 64; expert systems and, 21; self-generation and, 251; self-regulation and, 172
Cognitive-affective processing: agentic metamodels and, 41–49; augmentation of, 21; limited capabilities of, 19; personality and, 11
Cognitive empathicing: collective mind and, 147; definition of, 146 (see also Satisficing); heuristics and, 150; potential divergence of, 151–152, 158–159
Cognitive empathy: ambiopic, 141–142; digitalization of, 151–152; justice and, 148; limited capabilities of, 141–142; mentalization and, 144–146; metamodels of, 149–150; modernity and, 139–140
Collective agents: ambimodality of, 192; culture and, 131; digitalization and, 14; empathy and trust, 147; evaluation of performance and, 200, 204–205; historical models of, 40, 42, 178; learning and, 230–231; metamodels of, 86–89; origins of, 76–79; self-generation and, 247, 264; self-regulation and, 189
Collective choice: aggregation of, 81, 94; augmented risks for, 219; behavioral theories of, 129; microeconomics and, 106; science of augmented agency and, 285
Collective mind: augmented risks for, 143; cognitive empathicing and, 147; culture and, 12; empathy and, 145
Commitments: definition of, 13; future science and, 284–286; importance of, 70, 127, 284; referential, 42, 44
Compositive methods: definition of, 23–24; science of augmented agency and, 278, 287–289; self-generation and, 258
Consciousness: limits of, 23, 29, 106, 145; role of, 28–29, 43; science of augmented agency and, 282–284
Construal Level Theory, 130
Contextual: human sciences and, 15–18; performance evaluation criteria, 204; personality, 15; problem solving, 124
Contextual learning: science of augmented agency and, 276, 290; self-regulation and, 191
Culture: augmented humanity and, 4, 265; cognitive empathy and, 145, 150; collective agents and, 12, 18, 76, 131, 201; commitments and, 127, 284; indigenous peoples, 287; institutions and, 104; norms of, 79, 252; self-generation and, 247–248; self-regulation and, 182

D
Darwin, Charles (1809-1882), 43
Deep learning, 24, 218, 241
Descartes, René (1596-1650), 28, 139
Digital assistants, 5, 11, 140, 173
Digital divide, 12
Digitalization: economic development and, 252; generative agentic metamodel and, 45–48; historical period of, 5, 10–12; period of, 45–47; problematics of, 269, 279; risks of, 6
Discontinuous processing: evaluation of performance, 207; learning schemes, 232–234; self-regulatory schemes, 174. See also Ambiactivity
Dyssynchronous processing: learning rates, 232–234; performance evaluation rates, 208; self-regulatory rates, 174. See also Ambiactivity

E
Economics: behavioral theories of, 129; classical theories of, 105, 146; cognitive empathy and, 140; eudaimonia and, 69; evaluation of performance and, 203; flourishing and, 107; future science and, 285; of welfare, 81
Engels, Friedrich (1820-1895), 77
Entrogenous mediation: ambiactive learning and, 232–234; ambiactive self-regulation and, 182–183; ambimodality and, 95; ambiopic problem solving and, 115–118; definition of, 55; evaluation of performance and, 218; science of augmented agency and, 273, 288–289; three main types of, 50–51
Epistemology: behavior and, 20; commitments, 67, 160; contextual, 17; of science of augmented agency, 275
Ethics: cognitive empathy and, 94; commitments, 20, 67, 160; contextual, 17; future science and, 285; moral disengagement risk, 190–191; theories of justice, 148
Eudaimonia, 69: future science and, 285
Evaluation of performance: agentic metamodels and, 41–49; augmentation of, 21; collectivity and, 218–219; criteria of, 205; different rates of, 207; digitalization of, 201–202, 206–209; downregulation of, 211–213; metamodels of, 210–211; personality and, 218–219; sensitivity to variance and, 206–207; upregulation of, 213–216

F
False consciousness, 149

G
Gardner, Howard, 228