PHL204 Introduction to Ethics Course Guide
Course Code: PHL204
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- A Virtue Ethic for the Twenty-First Century Warrior: Teaching Natural Law Through the Declaration of Independence and the Gettysburg Address
Concordia Seminary, Saint Louis. Scholarly Resources from Concordia Seminary: Doctor of Ministry Major Applied Project, Concordia Seminary Scholarship, 2-16-2012. Author: Ryan Rupe, Concordia Seminary, St. Louis.
Recommended Citation: Rupe, Ryan, "A Virtue Ethic for the Twenty-First Century Warrior: Teaching Natural Law Through the Declaration of Independence and the Gettysburg Address" (2012). Doctor of Ministry Major Applied Project, 139. https://scholar.csl.edu/dmin/139
A Major Applied Project submitted to the Department of Doctor of Ministry Studies in candidacy for the degree of Doctor of Ministry, Concordia Seminary, Saint Louis, Missouri, 15 December 2011. Advisor: Dr. Arthur Bacon.
- Defending Options
Defending Options, by Shelly Kagan. Ethics, Vol. 104, No. 2 (Jan. 1994), pp. 333-351. Published by The University of Chicago Press. Stable URL: http://www.jstor.org/stable/2381581
Suppose some act would best promote the overall good, objectively speaking. Are we morally required to do it? Not necessarily, says ordinary, commonsense morality: after all, the act in question might violate someone's rights or run afoul of some other agent-centered constraint. Well, then, are we at least morally required to perform the act with the best results of those acts that are not otherwise forbidden? Here, too, ordinary morality says no: there is no such general moral requirement to promote the good (not even within the confines of moral constraints). Rather, ordinary morality claims that in a certain broad, but not unlimited, range of cases, agents have moral options: although they are morally permitted to perform the act with the best consequences overall, they are not morally required to do so; on the contrary, they are also morally permitted to perform instead acts that are less than optimal, such as pursuing their own interests.
Applying Virtue Ethics to Business: the Agent-Based Approach
Applying virtue ethics to business: The agent-based approach. By: John Dobson
"It can be argued that the presence of what are in a slightly old-fashioned terminology called virtues in fact plays a significant role in the operation of the economic system." - Kenneth Arrow
Introduction. There are two basic approaches to integrating ethics in business: the action-based approach, and the agent-based approach. The traditional approach is action-based in that it focusses on developing rules or guidelines to constrain management's actions. These rules or guidelines generally manifest themselves in corporate codes-of-conduct, or codes-of-ethics. By contrast, rather than the action-based focus on rules governing action, the agent-based approach concerns the fundamental character and motivations of the individual agent. Under the agent-based approach, moral behavior is not limited to adherence to a rule or guideline but rather involves the individual rationally pursuing moral excellence as a goal in and of itself. In essence, ethics becomes central to the rationality concept as an objective rather than a constraint: "something positively good, ... something to be sought after" (Ladd, 1991, p. 82). Agent-based approaches generally derive their philosophical foundation from virtue-ethics theory. This theory is attracting increasing interest from business ethicists. In essence, the 'virtue' in virtue-ethics is defined as some desirable character trait, such as courage, that lies between two extremes, such as rashness and cowardice. Thus the 'virtuous' agent is involved in a continual quest to find balance in decision-making. Such an agent does not apply any specific 'rules' in making decisions, but rather attempts to make decisions that are consistent with the pursuit of a particular kind of excellence that in turn entails exercising sound moral judgement guided by such 'virtues' as courage, wisdom, temperance, fairness, integrity, and consistency.
- Lawrence Kohlberg's Stages of Moral Development from Wikipedia
ECS 188 First Readings, Winter 2017. There are two readings for Wednesday. Both are edited versions of Wikipedia articles. The first reading is adapted from https://en.wikipedia.org/wiki/Lawrence_Kohlberg's_stages_of_moral_development, and the second reading is adapted from https://en.wikipedia.org/wiki/Ethics. You can find the references for the footnotes there. As you read the article about moral development, please think about how you answered the Heinz Dilemma in class, and in which stage your justification lay. I do not plan on discussing our answers to the Heinz Dilemma any further in class. As you read the ethics article, please think about which normative ethic appeals to you, and why. This will be one of the questions we will discuss on Wednesday. My goal for both of these readings is to help you realize what values you bring to your life, and our course in particular.
Lawrence Kohlberg's Stages of Moral Development (from Wikipedia). Lawrence Kohlberg's stages of moral development constitute an adaptation of a psychological theory originally conceived by the Swiss psychologist Jean Piaget. Kohlberg began work on this topic while a psychology graduate student at the University of Chicago[1] in 1958, and expanded upon the theory throughout his life. The theory holds that moral reasoning, the basis for ethical behavior, has six identifiable developmental stages, each more adequate at responding to moral dilemmas than its predecessor.[2] Kohlberg followed the development of moral judgment far beyond the ages studied earlier by Piaget,[3] who also claimed that logic and morality develop through constructive stages.[2] Expanding on Piaget's work, Kohlberg determined that the process of moral development was principally concerned with justice, and that it continued throughout the individual's lifetime,[4] a notion that spawned dialogue on the philosophical implications of such research.[5][6] The six stages of moral development are grouped into three levels: pre-conventional morality, conventional morality, and post-conventional morality.
- Classical Ethics in A/IS
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
We applied classical ethics methodologies to considerations of algorithmic design in autonomous and intelligent systems (A/IS) where machine learning may or may not reflect ethical outcomes that mimic human decision-making. To meet this goal, we drew from classical ethics theories and the disciplines of machine ethics, information ethics, and technology ethics. As direct control over tools becomes further removed, creators of autonomous systems must ask themselves how cultural and ethical presumptions bias artificially intelligent creations. Such introspection is more necessary than ever because the precise and deliberate design of algorithms in self-sustained digital systems will result in responses based on such design. By drawing from over two thousand years' worth of classical ethics traditions, we explore established ethics systems, including both philosophical traditions (utilitarianism, virtue ethics, and deontological ethics) and religious and culture-based ethical systems (Buddhism, Confucianism, African Ubuntu traditions, and Japanese Shinto) and their stance on human morality in the digital age.1 In doing so, we critique assumptions around concepts such as good and evil, right and wrong, virtue and vice, and we attempt to carry these inquiries into artificial systems' decision-making processes. Through reviewing the philosophical foundations that define autonomy and ontology, we address the potential for autonomous capacity of artificially intelligent systems, posing questions of morality in amoral systems and asking whether decisions made by amoral systems can have moral consequences. Ultimately, we address notions of responsibility and accountability for the decisions made by autonomous systems and other artificially intelligent technologies.
- 2014 PDF of Philosophy News
UNIVERSITY OF TORONTO DEPARTMENT OF PHILOSOPHY, FALL 2014. IAN HACKING WINS BALZAN PRIZE! SEE PAGE 3.
PRACTICAL ETHICS – SHELLY KAGAN ON SPECIESISM. By Ellen Roseman.
Shelly Kagan, the Clark Professor of Philosophy at Yale University, drew a capacity crowd to the Roseman Lecture in Practical Ethics last fall. He is an engaging, funny and whip-smart speaker. And he chose an intriguing topic, one that is not usually explored in mainstream philosophy courses. The title: What's wrong with speciesism? In a bestselling 1975 book, Animal Liberation, Australian philosopher Peter Singer claimed that most of us are "speciesists" in our attitude toward, and treatment of, animals. Singer, now a bioethics professor at Princeton University, created a splash when he called for an animal rights movement. Kagan read the book while in graduate school. He became a vegetarian. But in a second reading of the book in 2011, while preparing to give an animal ethics seminar to Yale students, Kagan found some of Singer's arguments less than persuasive. "People have rights. Animals don't," he tells me after the lecture. "There's a huge crowd of people working on this issue. I thought a lot of the arguments were weak." Kagan thinks the crucial concept is to define the meaning of a "person." Going back to British philosopher John Locke's work ...continued on Page 2
[Photo caption: Shelly Kagan with Ellen Roseman]
Speciesism is supposed to be a kind of morally unjustified prejudice, akin to racism or sexism, Kagan said in his notes for the lecture. "Although I found that charge compelling for years, I now find that I have my doubts," he explained. "It now seems to me that most people are not actually speciesists at all, but something rather different."
We wish to thank the generous donors to the Department of Philosophy, without whom Philosophy News would not be possible.
- Artificial Intelligence: From Ethics to Policy
Artificial intelligence: From ethics to policy. Study, Panel for the Future of Science and Technology, EPRS | European Parliamentary Research Service, Scientific Foresight Unit (STOA), PE 641.507, June 2020.
There is little doubt that artificial intelligence (AI) and machine learning (ML) will revolutionise public services. However, the power for positive change that AI provides simultaneously holds the potential for negative impacts on society. AI ethics work to uncover the variety of ethical issues resulting from the design, development, and deployment of AI. The question at the centre of all current work in AI ethics is: How can we move from AI ethics to specific policy and legislation for governing AI? Based on a framing of 'AI as a social experiment', this study arrives at policy options for public administrations and governmental organisations who are looking to deploy AI/ML solutions, as well as the private companies who are creating AI/ML solutions for use in the public arena. The reasons for targeting this application sector concern: the need for a high standard of transparency, respect for democratic values, and legitimacy. The policy options presented here chart a path towards accountability; procedures and decisions of an ethical nature are systematically logged prior to the deployment of an AI system. This logging is the first step in allowing ethics to play a crucial role in the implementation of AI for the public good.
Authors: This study has been written by Dr Aimee van Wynsberghe of Delft University of Technology and co-director of the Foundation for Responsible Robotics at the request of the Panel for the Future of Science and Technology (STOA) and managed by the Scientific Foresight Unit, within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament.
- Metaethics and the Autonomy of Morality
Philosophers' Imprint, volume 8, no. 6, July 2008. Metaethics and the Autonomy of Morality. Tristram McPherson, University of Minnesota Duluth.
1. Introduction. Since the publication of G. E. Moore's Principia Ethica it has become commonplace for philosophers to distinguish between questions in metaethics and those in normative ethics.1 A sympathetic characterization of the century of self-consciously metaethical research that followed would emphasize the extraordinary development both in our understanding of the central metaethical problems and in the sophistication of the theories elaborated to meet them. However, some are not so sympathetic. In this paper, I examine one source of distrust in metaethical research: its apparent tension with the notion that morality is autonomous.
To begin, I briefly sketch how I am thinking of metaethics, of the autonomy of morality, and of the tension that can appear to exist between them. One traditional conception of metaethics takes it to concern only the analysis of moral language.2 However, contemporary philosophers typically use the term more expansively.3 Here, I use the term to pick out elements common to these contemporary discussions. This common core encompasses moral ontology and moral psychology as well as moral semantics. By contrast, normative ethics (sometimes also called 'substantive ethics') concerns the structure and content of the correct moral evaluation of agents, states of affairs, and actions. Normative ethical theories typically offer accounts of moral value and moral reasons, of virtuous character traits, of rightness, and of the relationships between these.
1. The word 'metaethics' came into regular philosophical usage much later.
- Virtue Ethics in the Contemporary Social and Political Realm
Virtue Ethics in the Contemporary Social and Political Realm. Sean Cordell. Submitted for the degree of PhD, February 2010. Department of Philosophy, University of Sheffield.
Abstract: This thesis concerns the problem of applying the ideas developed in contemporary virtue ethics to political philosophy. The core of the problem, explained in the opening chapters, is that the assessment of right action offered by virtue ethics, in terms of what 'the virtuous person' characteristically does or would do, is focused on individual persons rather than political principles of government. Accordingly, interpretations of traditional Aristotelianism have struggled to accommodate the putative value of modern value pluralism and manifold conceptions of the 'good life', whilst liberal theories that employ virtue concepts fail to offer a political philosophy that is distinctly virtue ethical. Rather than trying to fit individualistic virtue ethics to political theory in these ways, subsequent chapters start from the viewpoint of individuals and look outward to their social and political environment, arguing that an adequately socio-political virtue ethics requires, and suits, an ethics of social roles. Various virtue ethical approaches to roles, however, fail in different ways to determine what it means to act virtuously in such a role. In response, it is argued that virtue ethics needs a normative account of what specific role-determining institutions should be like. The possibilities for the Aristotelian ergon (function or 'characteristic activity') serving as a normative criterion for a good institution of its kind are discussed and modified, leading to a positive account of institutional ergon that links the primary function of an institution with the specific and distinct human good or goods that it serves.
- Ethics and the Future -- Assigned Readings
Ethics and the Future -- Assigned Readings
Note: This is a list of the readings that I assigned (or recommended) to the students (a mix of undergraduates and graduate students) for my seminar, "Ethics and the Future," taught at Yale in Spring 2021. As the reading list was prepared only for the students in the course, I'm afraid I didn't include links for any papers that I was including in the course pack, though I believe all of these readings can be easily found online, with the exception, of course, of Ord's The Precipice, and Parfit's Reasons and Persons. I am making the reading list publicly available at the request of Pablo Stafforini. In choosing suitable readings I was aided tremendously by the list of readings selected by William MacAskill and Christian Tarsney for their own seminar taught at Oxford in 2019, and especially by the potential syllabus that Joshua Monrad wrote up for my benefit. (Since it was Joshua who rightly insisted to me that this would make an interesting and important topic for a seminar, I am doubly in his debt.) The readings are organized by topic. We spent a week on each. --Shelly Kagan
Background on Existential Risks:
1. Toby Ord, The Precipice, Chapters 3-6 and Appendices C and D (about 124 pages)
The Basic Case for Longtermism:
1. Perhaps start with this very brief overview: Todd, "Future Generations and Their Moral Significance" (about 7 pages), which can be found online at: https://80000hours.org/articles/future-generations/
2. Then look at the somewhat longer (but still breezy) exposition in Ord, The Precipice, Intro and Chapters 1-2, and Appendix E (65 pgs.)
3.
- Towards a Role Ethics Approach to Command Rejection
Towards A Role Ethics Approach to Command Rejection. Ruchen Wen*, Ryan Blake Jackson*, Tom Williams*, and Qin Zhu†. *Department of Computer Science, †Division of Humanities, Arts & Social Sciences, Colorado School of Mines, Golden, Colorado 80401. Email: frwen,rbjackso,twilliams,[email protected]
Abstract: It is crucial for robots not only to reason ethically, but also to accurately communicate their ethical intentions; a robot that erroneously communicates willingness to violate moral norms risks losing both trust and esteem, and may risk negatively impacting the moral ecosystem of its human teammates. Previous approaches to enabling moral competence in robots have primarily used norm-based language grounded in deontological ethical theories. In contrast, we present a communication strategy grounded in role-based ethical theories, such as Confucian ethics. We also present a human subjects experiment investigating the differences between the two approaches. Our preliminary results show that, while the role-based approach is equally effective at promoting trust and conveying the robot's ethical reasoning, it may actually be less effective than the norm-based approach at encouraging certain forms of mindfulness and self-reflection.
In this work, we will evaluate two different moral communication strategies for human-robot interactions: a norm-based approach grounded in deontological ethical theory, and a role-based approach grounded in role ethics. We will specifically examine these strategies in the context of the robot rejecting a command from a human. Recent research has highlighted the importance not only of rejecting commands [6], [7], but of the specific way in which command rejections are phrased [8]: robots have been shown to hold significant persuasive power over humans [9], [10], and accordingly, a robot that miscommunicates its willingness to adhere to human moral norms may risk inadvertently negatively impacting the moral ecosystem.
- Hedonism (for the International Encyclopedia of Ethics)
Hedonism (Word Count: 4,488)
Hedonism is among the oldest, simplest, and most widely discussed theories of value – theories that tell us what makes the world better or what makes a person's life go better. Hedonism, in a word, is the view that "pleasure is the good." In its most comprehensive form, hedonism about value holds that the only thing that ultimately ever makes the world, or a life, better is its containing more pleasure or less pain. The term 'hedonism' is also sometimes used to refer to doctrines about other topics. 'Universal hedonism' sometimes stands for the view that we ought to bring the greatest balance of pleasure over pain into the world that we can (see UTILITARIANISM), and 'psychological hedonism' the view that all human behavior is motivated ultimately by desires to obtain pleasure or avoid pain. Our topic here is hedonism about value.
1. What is hedonism about value?
a. What is hedonism a theory of?
An important distinction among kinds of value is the distinction between something's being good for some person (or other subject), and something's simply being a good thing (see GOOD AND GOOD FOR). The former kind of value – called 'welfare' or 'well-being' – makes our lives better, or makes things go better for us (see WELL-BEING), while the latter kind of value makes the world better. Typically, whenever a person receives some benefit, or has his life made better, this also makes the world better. But it is at least conceivable that the two come apart, as when an undeserving person receives some benefit, making things go better for him without perhaps making the world better.