Approaches to Deploying a Safe Artificial Moral Agent
Pages: 16 · File type: PDF · Size: 1,020 KB
Recommended publications
-
Bridging Divides: New Pragmatic Philosophy and Composition Theory
Leake, Eric Wallace, "Bridging Divides: New Pragmatic Philosophy and Composition Theory" (2007). UNLV Retrospective Theses & Dissertations, 2124. http://dx.doi.org/10.25669/btlv-wtta. A thesis submitted in partial fulfillment of the requirements for the Master of Arts Degree in English, Department of English, College of Liberal Arts, Graduate College, University of Nevada, Las Vegas, May 2007. The author holds a Bachelor of Arts from the University of Nevada, Las Vegas (2002).
-
The Primary Axiom and the Life-Value Compass - John McMurtry
PHILOSOPHY AND WORLD PROBLEMS, Vol. I: The Primary Axiom and the Life-Value Compass. John McMurtry, Department of Philosophy, University of Guelph, Guelph N1G 2W1, Canada.

Keywords: action, animals, concepts, consciousness, dualism, feeling, fields of value, good and evil, God, identity, image thought, justice, language, life-ground, life-value onto-axiology, life-coherence principle, materialism, mathematics, meaning, measure, mind, phenomenology, primary axiom of value, rationality, reification, religion, truth, ultimate values, universals, value syntax, yoga.

Contents: 6.1. The Primary Axiom of Value; 6.2. The Fields of Life Value; 6.3. The Unity of the Fields of Life; 6.4. The Common Axiological Ground beneath Different Interpretations; 6.5. The Thought-Field of Value: From the Infinite within to Impartial Value Standard; 6.6. The Life-Value Compass: The Nature and Measure of Thought Value across Domains; 6.7. Ecology, Economy and the Good: Re-Grounding in Life Value at the Meta-Level; 6.8. Beyond the Ghost in the Machine and Life-Blind Measures of Better; 6.9. Thought With No Object: The Ground of Yoga; 6.10. Beyond the Polar Fallacies of Religion and Materialism; 6.11. Concept Thought: From the Boundless Within to the World of Universals; 6.12. Beneath Nominalism and Realism: Reconnecting Concepts to the Life-Ground; 6.13. Concepts Construct a Ruling Plane of Life in the Biosphere; 6.14. The Tragic Flaw of the Symbol-Ruled Species; 6.15. How We Know the Internal Value of Any Thought System; 6.16. The Core Disorder of Contemporary Thought: Life-Blind Rationality; 6.17. ...
-
Blame, What Is It Good For?
Gordon Briggs

Abstract: Blame is a vital social and cognitive mechanism that humans utilize in their interactions with other agents. In this paper, we discuss how blame-reasoning mechanisms are needed to enable future social robots to: (1) appropriately adapt behavior in the context of repeated and/or long-term interactions and relationships with other social agents; (2) avoid behaviors that are perceived to be rude due to inappropriate and unintentional connotations of blame; and (3) avoid behaviors that could damage long-term, working relationships with other social agents. We also discuss how current computational models of blame and other relevant competencies (e.g. natural language generation) are currently insufficient to address these concerns. Future work is necessary to increase the social reasoning capabilities of artificially intelligent agents to achieve these goals.

I. INTRODUCTION: ... [6] and reside in a much more ambiguous moral territory. Additionally, it is not enough to be able to correctly reason about moral scenarios to ensure ethical or otherwise desirable outcomes from human-robot interactions; the robot must have other social competencies that support this reasoning mechanism. Deceptive interaction partners or incorrect perceptions/predictions about morally-charged scenarios may lead even a perfect ethical-reasoner to make choices with unethical outcomes [7]. What these challenges stress is the need to look beyond the ability to simply generate the proper answer to a moral dilemma in theory, and to consider the social mechanisms needed in practice. One such social mechanism that others have posited as being necessary to the construction of an artificial moral agent is blame [8].
-
Aristotle and Kant on the Source of Value
Citation: Korsgaard, Christine. 1986. "Aristotle and Kant on the Source of Value." Ethics 96(3): 486-505. http://dx.doi.org/10.1086/292771. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:3164347.

Christine M. Korsgaard

THREE KINDS OF VALUE THEORY: In this paper I discuss what I will call a "rationalist" account of the goodness of ends. I begin by contrasting the rationalist account to two others, "subjectivism" and "objectivism." Subjectivism identifies good ends with or by reference to some psychological state. It includes the various forms of hedonism as well as theories according to which what is good is any object of interest or desire. Objectivism may be represented by the theory of G. E. Moore. According to Moore, to say that something is good as an end is to attribute a property, intrinsic goodness, to it. Intrinsic goodness is an objective, nonrelational property of the object, a value a thing has independently of anyone's desires, interests, or pleasures. The attraction of subjectivist views is that they acknowledge the connection of the good to human interests and desires.
-
What Is Philosophy?
Introduction, Chapter 1: The Task of Philosophy

Chapter objectives: "Reflection—thinking things over—[is] the beginning of philosophy." In this chapter we will address the following questions: What Does "Philosophy" Mean? Why Do We Need Philosophy? What Are the Traditional Branches of Philosophy? Is There a Basic Method of Philosophical Thinking? How May Philosophy Be Used? Is Philosophy of Education Useful? What Is Happening in Philosophy Today?

The Meanings of Philosophy: Each of us has a philosophy, even though we may not be aware of it. We all have some ideas concerning physical objects, our fellow persons, the meaning of life, death, God, right and wrong, beauty and ugliness, and the like. Of course, these ideas are acquired in a variety of ways, and they may be vague and confused. We are continuously engaged, especially during the early years of our lives, in acquiring views and attitudes from our family, from friends, and from various other individuals and groups. ... These two senses of philosophy—"having" and "doing"—cannot be treated entirely independent of each other, for if we did not have a philosophy in the formal, personal sense, then we could not do a philosophy in the critical, reflective sense. Having a philosophy, however, is not sufficient for doing philosophy. A genuine philosophical attitude is searching and critical; it is open-minded and tolerant—willing to look at all sides of an issue without prejudice. To philosophize is not merely to read and know philosophy; there are skills of argumentation to be mastered ...
-
Can Humanoid Robots Be Moral?
Ethics in Science and Environmental Politics, Vol. 18: 49–60, 2018. Published September 18, 2018. https://doi.org/10.3354/esep00186. Open Access. Sanjit Chakraborty, Department of Philosophy, Jadavpur University, Kolkata 700032, India.

ABSTRACT: The concept of morality underpins the moral responsibility that not only depends on the outward practices (or 'output', in the case of humanoid robots) of the agents but on the internal attitudes ('input') that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. 'Can humanoid robots be moral?', stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity interplay with linguistic human beings rather than with non-linguistic humanoid robots, as humanoid robots are docile automata that cannot be responsible for their actions. Eradicating human ethics to make way for humanoid robot ethics highlights the moral actions and adequacy that hinge on the myth of creative agency and self-dependency, which a humanoid robot can scarcely express.

KEY WORDS: Humanoid robots · Morality · Responsibility · Docility · Artificial intelligence · Conscience · Consciousness

INTRODUCTION: This paper begins with prolegomena on the claim of the interrelated nature of moral agency and ethical conscience. Who can be a 'moral agent'? The simple ... depend on the rationality and conscience of the particular agent. Moral values are subjective, as they differ according to the different intentions, motivations and choices of individuals. Subjective value is reliant on our method of valuing or judging of it.
-
On How to Build a Moral Machine
Paul Bello ([email protected]), Human & Bioengineered Systems Division, Code 341, Office of Naval Research, 875 N. Randolph St., Arlington, VA 22203, USA. Selmer Bringsjord ([email protected]), Depts. of Cognitive Science, Computer Science & the Lally School of Management, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.

Abstract: Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood; all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don't hold out hope for machines that can both always do the right thing (on some general ethic) and produce explanations for their behavior that would be understandable to a human confederate. Our tentative solution involves understanding the folk concepts associated with our moral intuitions regarding these matters, and how they might be dependent upon the nature of human cognitive architecture. It is in this spirit that we begin to explore the complexities inherent in human moral judgment via computational theories of the human cognitive architecture, rather than under the extreme constraints imposed by rational-actor models assumed throughout much of the literature on philosophical ethics. After discussing the various advantages and challenges of taking this particular perspective on the development of artificial moral agents, we computationally explore a case study of human intuitions about the self and causal responsibility.
-
Which Consequentialism? Machine Ethics and Moral Divergence
MIRI (Machine Intelligence Research Institute). Carl Shulman and Henrik Jonsson, MIRI Visiting Fellows; Nick Tarleton, Carnegie Mellon University, MIRI Visiting Fellow.

Abstract: Some researchers in the field of machine ethics have suggested consequentialist or utilitarian theories as organizing principles for Artificial Moral Agents (AMAs) (Wallach, Allen, and Smit 2008) that are 'full ethical agents' (Moor 2006), while acknowledging extensive variation among these theories as a serious challenge (Wallach, Allen, and Smit 2008). This paper develops that challenge, beginning with a partial taxonomy of consequentialisms proposed by philosophical ethics. We discuss numerous 'free variables' of consequentialism where intuitions conflict about optimal values, and then consider special problems of human-level AMAs designed to implement a particular ethical theory, by comparison to human proponents of the same explicit principles. In conclusion, we suggest that if machine ethics is to fully succeed, it must draw upon the developing field of moral psychology.

Citation: Shulman, Carl, Nick Tarleton, and Henrik Jonsson. 2009. "Which Consequentialism? Machine Ethics and Moral Divergence." In AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, October 1st-2nd, University of Tokyo, Japan, Proceedings, edited by Carson Reynolds and Alvaro Cassinelli, 23-25. http://ia-cap.org/ap-cap09/proceedings.pdf. This version contains minor changes.

1. Free Variables of Consequentialism: Suppose that the recommendations of a broadly utilitarian view depend on decisions about ten free binary variables, where we assign a probability of 80% to our favored option for each variable; in this case, if our probabilities are well-calibrated and our errors are not correlated across variables, then we will have only slightly more than a 10% chance of selecting the correct (in some meta-ethical framework) specification.
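(A quick check of the arithmetic in that example, under the authors' illustrative assumptions of ten independent binary variables and 80% confidence in each choice: the probability of getting every choice right is 0.8^10 ≈ 0.107, i.e. slightly more than a 10% chance, which is the figure quoted.)
-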
Philosophy of Artificial Intelligence
David Gray Grant

Note to the reader: this syllabus is for an introductory undergraduate lecture course with no prerequisites.

Course Description: In this course, we will explore the philosophical implications of artificial intelligence. Could we build machines with minds like ours out of computer chips? Could we cheat death by uploading our minds to a computer server? Are we living in a software-based simulation? Should a driverless car kill one pedestrian to save five? Will artificial intelligence be the end of work (and if so, what should we do about it)? This course is also a general introduction to philosophical problems and methods. We will discuss classic problems in several areas of philosophy, including philosophy of mind, metaphysics, epistemology, and ethics. In the process, you will learn how to engage with philosophical texts, analyze and construct arguments, investigate controversial questions in collaboration with others, and express your views with greater clarity and precision.

Contact Information: Instructor: David Gray Grant. Email: [email protected]. Office: 303 Emerson Hall. Office hours: Mondays 3:00-5:00 and by appointment.

Assignments and Grading: You must complete all required assignments in order to pass this course. Your grade will be determined as follows: 10% participation; 25% four short written assignments (300-600 words); 20% Paper 1 (1200-1500 words); 25% Paper 2 (1200-1500 words); 20% final exam. Assignments should be submitted in .docx or .pdf format (.docx is preferred), Times New Roman 12-point font, single-spaced. Submissions that exceed the word limits specified above (including footnotes) will not be accepted.
-
Trends in Value Theory Since 1881
Munich Personal RePEc Archive. Freeman, Alan (London Metropolitan University). "Trends in Value Theory since 1881," 7 December 2010. Online at https://mpra.ub.uni-muenchen.de/48646/. MPRA Paper No. 48646, posted 27 Jul 2013.

Abstract: This is a prepublication version of an article which originally appeared in the World Review of Political Economy. Please cite as Freeman, A. 2010. "Trends in Value Theory since 1881," World Review of Political Economy, Vol. 1, No. 4, Fall 2010, pp. 567-605. The article surveys the key ideas and currents of thinking about Marx's value theory since he died. It does so by studying their evolution, in their historical context, through the lens of the Temporal Single System Interpretation (TSSI) of Marx's ideas, an approach to Marx's theory of value which has secured significant attention in recent years. The article explains the TSSI and highlights the milestones which led to the evolution of its key concepts.

Key words: theory of value; Marxian economics; TSSI; New Solution; temporalism. JEL codes: B24, B3, B5, B50.
-
Just a Tool? John Dewey's Pragmatic Instrumentalism and Educational Technology
By Mike Bannen, © 2018. Submitted to the graduate degree program in Social & Cultural Studies in Education and the Graduate Faculty of the University of Kansas in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Committee: Suzanne Rice, Ph.D. (Chair); John L. Rury, Ph.D.; Argun Saatcioglu, Ph.D.; Heidi Hallman, Ph.D.; Joe E. O'Brien, Ed.D. Date defended: 31 January 2018.

Abstract: This dissertation examines how John Dewey's philosophy of knowledge might be used to consider the aims of contemporary educational technologies. While significant scholarship exists examining the historical and philosophical importance of Dewey's contributions to American progressive education, much less scholarship has focused on the relationship between Dewey's theory of knowledge and his thoughts regarding the purposes and aims of educational technologies. I argue that because many of Dewey's ideas were heavily influenced by the material and social changes of the industrial revolution, his theories about knowledge, technology, and education offer a unique perspective when considering the educational significance of digital technologies. This dissertation is guided by two central questions: (1) What is the relationship between Dewey's philosophy of knowledge, his philosophy of technology, and his philosophy of education? (2) How might Dewey's ideas about the relationship between knowledge, technology, and education help educators, students, and policy makers think about the aims and uses of digital technologies in contemporary educational contexts? I begin by examining Dewey's pragmatically instrumental account of knowledge.
-
Machine Learning Ethics in the Context of Justice Intuition
SHS Web of Conferences 69, 00150 (2019). https://doi.org/10.1051/shsconf/20196900150. CILDIAH-2019. Natalia Mamedova, Arkadiy Urintsov, Nina Komleva, Olga Staroverova and Boris Fedorov; Plekhanov Russian University of Economics, 36 Stremyanny Lane, 117997 Moscow, Russia.

Abstract: The article considers the ethics of machine learning in connection with such categories of social philosophy as justice, conviction, and value. The ethics of machine learning is presented as a special case of a mathematical model of a dilemma: whether the 'learning' algorithm corresponds to the intuition of justice or not. It has been established that the use of machine learning for decision making has a prospect only within the limits of the intuition-of-justice field, based on fair algorithms. It is proposed to determine the effectiveness of a decision by considering the ethical component under given ethical restrictions. A cyclical relationship has been established between the algorithmic subprocesses in machine learning and the stages of conducting mining-analysis projects using the CRISP methodology. The value of ethical constraints for each of the algorithmic processes has been determined. The provisions of the Theory of System Restriction are applied to find a way to measure the effect of ethical restrictions on the 'learning' algorithm.

1 Introduction: The intuition of justice is regarded as a sincere, emotionally saturated belief in the justice of some position. Accepting the freedom of thought is authentic intuition of justice of one individual in relation to himself and others. But for artificial intelligence (hereinafter, AI) freedom of thought is not supposed ...
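The abstract's proposal, evaluating decision effectiveness only within given ethical restrictions, can be illustrated with a minimal Python sketch. This is our illustration of the general pattern (a hard feasibility filter before utility maximization), not the authors' model; all names are hypothetical.

from typing import Callable, Iterable, List, Optional

def choose_decision(
    candidates: Iterable[str],
    effectiveness: Callable[[str], float],       # task utility of a decision
    restrictions: List[Callable[[str], bool]],   # ethical restrictions; True = permitted
) -> Optional[str]:
    # Keep only decisions permitted by every ethical restriction,
    # then pick the most effective of those (None if nothing is permitted).
    permitted = [d for d in candidates if all(r(d) for r in restrictions)]
    return max(permitted, key=effectiveness, default=None)

def no_protected_attribute(d: str) -> bool:
    # Hypothetical fairness restriction: the decision rule may not
    # rely on a protected attribute.
    return "gender" not in d

decisions = ["score_only", "score_plus_gender", "random"]
utility = {"score_only": 0.90, "score_plus_gender": 0.95, "random": 0.50}

print(choose_decision(decisions, utility.__getitem__, [no_protected_attribute]))
# -> score_only: the nominally more effective option is excluded
# because it violates the ethical restriction.

On this reading, the 'intuition of justice field' acts as a hard constraint on the set of admissible decisions rather than a term traded off inside the utility function.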