Ethics Workshop: Computational Neuroscience and Technology REU


— Introductions and Purpose of Workshop —

 Introductions

 The Purpose of the Workshop

. The purpose of this workshop is three-fold:

-- Emphasize the existence, range, and variety of moral problems that arise for practitioners of science and engineering.

-- Introduce you to the systematic, conceptual treatment of moral problems known as philosophical ethics, or more simply, ethics.

-- Discuss ways of applying the results of these systematic treatments to address the moral problems that confront you as scientists and engineers.

. One could argue that we are obligated to act morally in all that we do, and so scientists and engineers should act morally in what they do on the job simply as a part of this.

. However, there are at least two reasons why special attention to the moral problems of science and engineering is justified:

-- The specific character of these problems depends on the technical details, and on the fact that, in this context, moral problems often arise in part out of the pursuit of the “greater good”.

-- The consequences of failure to address these problems could be dire, even to the point of extinction.

 So, with that, let’s have some fun!

— What is Ethics? —

 What Ethics Is

. Philosophical thinking about morality. The discipline involves (a) systematic reflection on moral concepts in an attempt to construct theories that explain and justify moral phenomena, and (b) application of these general theories to particular situations.

. “Philosophical thinking” here consists primarily in conceptual analysis. Philosophers attempt to determine the meanings and applications of our concepts. In assessing meaning, they concern themselves with what is possible, and so distinguish themselves from scientists and others who are primarily concerned with what is actual.

. Moral phenomena include many types of things—e.g., problems, judgments, rules, laws, etc.—but the focus is primarily on human action, where this includes inner actions (e.g., moral reasoning and judgment) and outer actions. (Question: can it apply to behaviors that are not actions?)

 What Ethics Isn’t

. Philosophical ethics combines an interest in getting morality right, i.e., in explaining moral phenomena, with an interest in determining how we should act. The latter follows from an application of the former to particular cases. So it is both descriptive and normative.

. An interest in moral phenomena does not make something ethics in the sense meant here. There are two different ways to diverge from this discipline:

-- Descriptive Only: We can study moral phenomena, e.g., moral institutions, judgments, reactions, etc., as they are realized in the world with a view to describing them properly. If we do this, we operate as scientists, viz., as anthropologists, sociologists, or psychologists. Philosophers need to be sensitive to the results of these studies, but their concerns are more abstract and general.

-- Normative Only: If we are only interested in moral phenomena because we are interested in shaping character—that is, in determining how to act—then we operate as moralists, counselors, or ministers. Philosophical ethics is not religion.

The Business of Philosophical Ethics

. Philosophers who work in ethics are interested in developing knowledge about moral phenomena. To this end, they operate no differently than those who pursue knowledge in other domains, viz., they construct theories that are designed to explain the phenomena in question.

-- Ethicists operate under the assumption that moral phenomena form a natural class, i.e., that these phenomena do not represent a random, unconnected collection. As such, they are related to one another by general rules (or laws) that underpin our judgments and actions. The business of philosophical ethics, then, is the business of identifying these rules and the elements they relate.

-- In other words, ethics is about structures. An ethical theory can be understood as a model of an ethical structure, where this is a systematically related set of fundamental moral elements.

. In constructing theories, then, the philosopher will identify and analyze (a) the fundamental moral elements, and (b) the rules that relate them. In doing this, they will operate as philosophers, looking to analyze these elements and rules in ways that account for all possibilities, i.e., in ways that are maximally abstract and general.

-- The moral elements are concepts, such as GOODNESS, RIGHTNESS, JUSTICE, VIRTUE, etc.

-- Analysis of these concepts will aim at a list of conditions whose satisfaction is both necessary and sufficient to count as an instance of them. For example, one might say that X is GOOD just in case (or “if and only if”) it generates more happiness than unhappiness. (A symbolic sketch of this biconditional follows this list.)

-- The rules that relate them will be the moral laws.
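
As an illustration only, and assuming the hedonic standard from the example above, the biconditional form of such an analysis can be written out symbolically; the functions H and U are not part of the workshop materials and are introduced just for this sketch:

\[ \forall x \, \bigl( \mathrm{GOOD}(x) \iff H(x) > U(x) \bigr) \]

where H(x) is the happiness x generates and U(x) the unhappiness it generates. A candidate analysis of this form is tested against counterexamples: any x that satisfies one side of the biconditional but not the other shows that the proposed conditions are not both necessary and sufficient.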

 Ethical Theories

. Thus, the construction of ethical theories will consist of two stages: (a) identification and analysis of fundamental moral concepts, i.e., notions used to classify fundamental aspects of moral phenomena, and (b) identification and analysis of the law-governed relationships that connect these elements.

. The result will be a theoretical model of moral phenomena. This model will explain the phenomena as we find them in the world, and so should prove invaluable for understanding what happens and determining what should happen.

. Stage One: Identification and analysis of moral concepts

-- GOOD: This applies to more than just actions; in fact, it can apply to almost any object, event, or situation. Something qualifies as GOOD if it meets or exceeds a positively valued, nonmoral standard. Examples of this standard include pleasure production, knowledge production, beauty production, being “Godlike”, etc. (Something that is good according to one of these standards can become known as a good, if it consistently and persistently meets the standard.)

-- RIGHT: In its ethical employment, this is ambiguous between its adjectival and its nominal uses. Thus, something can be right, as opposed to wrong, or it can be a right, as opposed to a privilege, say. The former use typically applies to actions and events involving actions; actions are said to be right if they satisfy a standard of rectitude, e.g., they maximize the good, or they conform with duty, or they agree with God’s will. The latter use is taken to specify a type of entitlement that moral agents have to certain goods, or to certain forms of treatment. We are familiar with this concept, e.g., consider our Declaration of Independence, which says, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

-- JUSTICE: This applies to actions, either of individuals or institutions, that balance (or keep balanced) the moral ledger sheet, so to speak. Thus, it is essentially an ethical concept that applies to society, or at least to groups. The Golden Rule is an example of a principle that expresses a form of justice. One could argue that social justice demands equal treatment of moral agents unless there is some overriding moral reason to justify differential treatment.

-- VIRTUE: A characteristic of agents that counts as an excellence, e.g., courage, honesty, fidelity, integrity, wisdom, etc. It is the developed form of positive character traits that mark good people.

. Stage Two: Identification and analysis of relationships between these concepts.

-- There are many ethical theories out there, and as you might conclude based on this fact, there is little consensus about any of them. For now, we will consider a few of the more important of these, measured in terms of (a) traditional influence, (b) number of adherents, or at least noisy adherents, and (c) resonance with intuition.

-- We will distinguish between two general types of theories, viz., ends-based and means-based. As we noted above, ethics is ultimately about action, and actions are a type of end toward which we as human beings aim. We use this fact to frame our examination of theories: are they primarily interested in the ends that result from our actions (i.e., ends-based), or are they primarily interested in the means that result in our actions (i.e., means-based)?

-- Ends-Based Theories

These theories place primary emphasis on the ends that result from an action, and in particular, on the value that arises because of them. Thus, they will define right in terms of good—that is, the moral quality of an act will depend on the amount of positive nonmoral good (typically of a certain type) that arises out of the act.

Example 1: Utilitarianism. This is a hedonist view that regards pleasure as the primary good. Thus, something will be good to the extent that it augments pleasure or diminishes pain. An act will be right—i.e., it will be the morally correct action—just in case it is the act that increases the balance of pleasure more than any alternative. (Note that “pleasure” can be given a sophisticated definition here; a rough symbolic sketch of this criterion follows Example 2.)

 Example 2: Beauty-based Consequentialism. Here, the good is defined in terms of the aesthetic notion of beauty, and the right is then defined in terms of it.
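
As a rough sketch only (the set A and the functions P and N are not from the workshop materials and are introduced here purely for illustration), the utilitarian criterion in Example 1 can be written as:

\[ \mathrm{RIGHT}(a) \iff \forall a' \in A \setminus \{a\} : \; P(a) - N(a) > P(a') - N(a') \]

where A is the set of acts available to the agent, P(a) the pleasure an act would produce, and N(a) the pain. The strict inequality mirrors the wording “more than any alternative”; many standard formulations instead require only that no alternative do better, which would replace > with ≥.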

-- Means-Based Theories

 These theories place primary emphasis on what happens within the agent prior to the actual performance of the action—that is, certain features of the act itself rather than the consequences of the act. For instance, they might focus on the deliberation that precedes the decision, and in particular, on the reasons that guided one’s choice. Or they might focus on motives, or intentions, or on the character traits that led to the action. These theories separate the right and the good, defining them independently. The good may influence deliberation, but the moral value of an act will depend on what the theorist takes to be right.

 Example 1: Kantian Deontology. The moral value of the act depends on whether the act was performed “out of respect for the moral law,” or more simply, out of duty. In Kant’s view, the Categorical Imperative should structure deliberation, determining for us what acts are morally correct and what are not. Whether an act has good consequences or ill is irrelevant to its moral standing.

 Example 2: Virtue Ethics. An act is right just in case it is the type of act that would be performed by a virtuous agent. A virtuous agent is one who has cultivated her positive character traits, in conformity with the Golden Mean and with an eye on how others who are virtuous act. The idea here is that we can cultivate excellences as a part of our behavioral profile.

 Example 3: Justice-based Ethics. An act is right if it conforms to principles like these: (a) each person has an equal right to the most extensive liberties compatible with similar liberties for all, and (b) any inequality can only be justified if the presence of the inequality benefits the least advantaged person (Rawls). These principles frame what acts are right, determining them independently of their consequences.

 Evaluating Ethical Theories

. It doesn’t take much to see that these theories are not compatible. In some cases, they will issue the same directive, but in many cases they will give conflicting moral advice. What to do? It is clear that the lack of consensus here is a problem—this is something that you might not have to worry too much about in science.

. How do we go about evaluating these theories and choosing the best one?

-- This presumes that there is such a thing as a “best ethical theory”. Perhaps this is like the “best musical taste” or something to that effect. Or perhaps we should be sensitive to situations, allowing for the possibility that one theory might be appropriate in one location and another in a different location.

-- There is an obvious, theoretical answer that is two-fold: (a) check the theory for internal consistency and for agreement with one’s considered, stable moral intuitions, and (b) see how it fares when embedded in the world—does it distinguish the intuitively moral from the intuitively nonmoral and immoral?

-- This is not what I want here, though. I am more interested in hearing from you. Which of these theories appeals to you? How would you defend it, if pressed? Are there other possibilities here, perhaps beyond those that have been specified?

— What has Ethics to do with Research, Science, and Engineering? —

 Ethics in Theory and Application

. As we noted, ethics is about both theory and practice. Given its interest in right conduct, it should come as no surprise that it stands or falls based on how well it applies to situations in the world. Remember that it has both a descriptive side and a normative side. The applicability of ethical theory is what makes it so valuable—it helps us to understand phenomena that we encounter every day. So ethics had better be applicable, but this is different from Applied Ethics, a sub-discipline of philosophical ethics.

. But what are we to make of applied ethics?

-- It would be a mistake to take applied ethics to be the mere application of ethical theory to nonethical fact. It isn’t as though ethical theory lends itself to immediate and obvious application in all circumstances and situations. In fact, the situation for the ethical theorist with respect to right conduct is rather like the situation might be for a biologist with respect to life: there are many varied cases, and no one theoretical size fits all. The details matter, and furthermore, the details have ethical implications. (For instance, there will be new concepts involved that figure into the moral content of judgments.)

-- Applied ethics is still philosophical ethics—it is still about theory construction and practical application. It is just closer to the ground, so to speak, than the abstract ethical theories we discussed earlier.

-- That said, it is still true that an applied ethical theory can be developed top-down, guided by reasoning and analysis framed in terms of one ethical theory or another.

-- Alternatively, it might be bottom-up, driven by a close inspection of the situation within a particular domain. How are decisions made within this domain? What type of institutions does one find there? Are there other potentially moral creatures involved (e.g., animals)? Etc. One might be inclined in the direction of deontology with respect to one’s own personal conduct but in the direction of utilitarianism with respect to one’s applied ethic in, say, medicine.

Varieties of Applied Ethics

. Bio-medical Ethics: Is it right to let someone die who has a terminal illness and is in great pain? At what point is it right for an HMO to deny medical treatment to someone who has pursued many different alternatives? Is it morally correct to become pregnant in the hope that the baby could donate an organ to a dying sibling?

. Business Ethics: Are corporations morally obligated to let stockholders know all potentially negative business information? Are businesses morally obligated to their stockholders over and above the communities in which they operate? Is it morally permissible to pay your Pakistani workers $.75 an hour when that is more than they would otherwise make, even though that wage keeps them impoverished? What are a business’s obligations to its workers?

. Legal Ethics: Is it ever morally acceptable to sink your own case if you think that your client is guilty? Have you a greater moral obligation to the “system” or to your community?

. Environmental Ethics: What are our obligations to other species? Have we moral obligations to future generations that constrain the way we use the environment? Is it permissible to compromise an area environmentally to keep the economy going strong?

 The Application of Ethics to Research, Science, and Engineering

. Can you identify a few instances of science or engineering-related ethical problems?

. How do scientific facts and theories figure into these problems?

. How might ethics help a practitioner of science or engineering?

. How might it hurt?

— Ethics in Project Proposal and Project Scheduling —

Richard Wells

 A Research Project Proposal is:

. A request to an agency for money

. A contractual agreement (if funded)

. You are promising to do something specific if they give you the money

 What is the nature of a proposal?

. Usually 3 to 5 years in duration

. Usually involves hundreds of thousands of dollars from the funding agency

. Extremely competitive: about 1 in 10 actually get funded

. It’s research. That means you don’t know for a fact what the outcome will be.

 What are the ethical issues?

. Proposals that don’t promise much don’t get funded

. Proposals that promise to do everything don’t get funded

. How much can/should you promise?

. How confident are YOU that you really can do what you’re promising to do?

 Example: AMFeR Project

. Utterly new type of material technology

. No one has ever succeeded in doing this yet

. Are there any ethical problems in this proposal?

Scheduling

. Major proposals usually have to commit to either a schedule or at least an estimate of when major milestones will be reached

. By definition, if it’s research nobody’s ever done it before

. How do you schedule innovation?

. Do YOU believe your own schedule?

. How much should you promise here?

 Example: AMFeR schedule

. The schedule lists major milestones

. Almost all of them represent things no one has ever successfully done before

. All schedules for research contain great uncertainty.

. If you’re too conservative, you won’t get funded

. When is optimism unethical?

 Closer to Home:

. On your first day here you found out what your research project was to be.

. How did you feel when confronted with what you were being expected to do in just 8 weeks?

. Were you sure you could do it?

 You decided to stay & to take the money

. How ethical do you think your decision was?

. What would you have decided to do if you had been totally certain you couldn’t do the job?

. How much uncertainty does there have to be before a decision to take the money would not be ethical?

. What might have been your options?

Discussion Questions

. Is it ever permissible to lie in a project proposal? For instance, is it morally permissible to include work in your proposal that you have already completed?

. Are there any moral constraints on the projections you make in a proposal, e.g., work timelines, deliverable list, etc.?

. Is it permissible to submit a proposal to more than one granting agency at a time?

. What are your moral obligations to other members of your research team?

. Etc.

— Ethics in Research and Publication —

Richard Wells

 What is “research”?

. Generally defined as “studious inquiry or examination aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws.”

 What is publication?

. It is the dissemination of your findings to the scientific community

. Scientific publications are subject to peer review

 What ethical problems could there possibly be with this?

. You’ve just done a comprehensive experiment with 100 test cases

. 99 of those test cases went just like your theory said they would

. 1 case did not, and you don’t know why. You think the sample might have been contaminated

. Do you report that 1 case or not?

 What ethical problems could there possibly be with this?

. You’re the acknowledged leading expert in your field and have been for 15 years.

. The theory you discovered 15 years ago has made you famous and respected, and brought enormous prestige to your university.

. You hear you’re being nominated for the Nobel Prize for your life’s work

. You’ve just gotten the data in from your latest experiment and guess what: Your famous theory is dead wrong.

. What now?

— Case Studies in Engineering and Neuroscience —

Richard Wells

 Presentation of Case Study in Engineering

. What are the moral problems associated with this story?

. One moral practice is the assignment of praise and blame. How might that go here, and how would you justify the assignment? (Think of justification in terms of the theories you learned about yesterday.)

. Are there any heroes in this story? Any villains?

 Discussion:

. Take a few minutes and think of a situation in neuroscience that raises an ethical problem.

. What is your situation? What is the ethical problem?

. How might one resolve it? How would you have approached this situation?

. In general, what are the moral problems that confront neuroscientists? Is there any way to identify these systematically?

. What are the prospects for a theoretically unified response to all such problems?

— Concluding Remarks —

 Overview

. Philosophical ethics is a discipline devoted to the explanation of moral phenomena. It has descriptive and normative sides to it, i.e., it is out to get the phenomena right and to determine how we should act. These are accomplished (or not?) by ethicists who use the tools of conceptual analysis to model ethical structures, which comprise ethical elements (e.g., concepts such as goodness and rightness) and relations among those elements (e.g., moral laws). The results are complex theories; these can be maximally abstract, as in the case of ends- and means-based theories of the types we discussed, or more concrete, as in the case of theories developed in applied contexts (e.g., biomedical or business contexts).

. In the contexts of science and engineering, there are many, varied ethical problems. It isn’t too much of a stretch to say that almost anything you do as a researcher can have moral implications, measured either in terms of principles that figure into the means or consequences that figure into the ends. Thus, decisions about what research to pursue, what to say in proposals, how to structure funded research, how to deal with data, how to disseminate results, etc. all have moral aspects that can become relevant.

 The Deliverables

. Develop the “moral problem detector” that enables you to spot moral problems when you encounter them.

. Deal with moral problems systematically, in a way that is guided by principle. Ethical theories help here—they supply the mechanisms one needs to distinguish relevant aspects of moral problems from irrelevant and then to weigh and evaluate those relevant aspects with a view to arriving at a resolution. You may come up with disparate resolutions, driven by different theories, in different cases, but the important thing is that they are systematic resolutions that you can justify.

 Parting Shots

. Do we use theory to explain moral facts of the matter, or do we use it to rationalize our gut reactions? Situational ethics is attractive in that it allows us to be sensitive to differences among moral situations, but it also opens up the possibility that we might just pick and choose among theories so as to make our non-philosophical, pre-theoretical visceral reactions look presentable.

. Are there moral facts of the matter independent of us, or are we just making this stuff up as we go along? If the former, then we should aim at consensus, and the descriptive aspect would be more prominent than the normative; if the latter, might there still be facts of the matter, but just social ones? In this case, perhaps the normative aspect is more prominent, given that it would appear to be what determines the relevance of ethics to human life.

. In science, one typically operates under the presumption that the world is a unified place—that is, that the laws of nature come together to form a coherent and interrelated whole. But in ethics, it would appear that unbending commitment to a single moral theory gets one into a world of hurt. Situations that call for moral evaluation differ—some would appear to demand a commitment to principle antecedent to action (e.g., “Thou shalt not murder”, or perhaps a restriction on conflicts of interest or on human cloning), while others would appear to demand that we attend to consequences (e.g., decisions about whether to breach the dams). If this is not misleading, then it suggests that moral phenomena do not admit of unified, uniform treatment. Is this a problem? Why would the world be this way here and not elsewhere? (Or perhaps our theories only go so far, and there really is a “Grand Unified Moral Theory” out there that accounts for why ends-based theories work so well in one location and means-based theories work so well in other locations.)

. “Just because we can do it, should we do it?” We can bomb the world back into the stone age, but presumably we shouldn’t do that. We can go on a killing spree, but presumably we shouldn’t do that. We can deepen our understanding about the brain by conducting invasive, painful, and perhaps permanently harmful experiments on human subjects, but presumably … what? We tend to think of this as bad, but why? And what about those human intellectual pursuits that do not involve human subjects but could generate knowledge that might result in very dangerous technology? Should we pursue that just because we can? (Note that a knowledge-based ends-based theorist might be hard-pressed to say “no” in these instances.)

. Money determines what research is done, especially in the funding-intensive science and engineering fields. This is just a fact—you have to have it to pay for the equipment, personnel, etc. But what factors should determine where the money goes, i.e., what factors should determine what research gets done?

. We control conflict of interest and scientific misconduct at the UI through institutional mechanisms designed to inform researchers of policy, enable them to avoid moral difficulties, and indicate when there might exist problems that need special attention. This is a means-based fix to moral problems, viz., problems of conflict of interest and scientific misconduct. But at least with respect to conflict of interest, it would appear as though our intuitions suggest an ends-based analysis. Admittedly, it would be nigh impossible for a university as complex as the UI to treat each case of research by itself, making determinations on a case-by-case basis, so this approach appears to be a practical necessity. But are we using the wrong tool here? Is it like the drunk who searches for his keys under the streetlight, not because he lost them there but because the light is better?

. Are there right answers here, or are these things matters of taste?
