
IN THIS ISSUE MACHINE LEARNING • MEDICINE • AUTONOMOUS VEHICLES

THE SCITECH LAWYER

VOLUME 14 ISSUE 1 | FALL 2017 | SECTION OF SCIENCE & TECHNOLOGY LAW | AMERICAN BAR ASSOCIATION

ALGORITHMS: IN CONTROL?

LISA R. LIFSHITZ, LOIS D. MERMELSTEIN, AND LARRY W. THORPE, ISSUE EDITORS

Published in The SciTech Lawyer, Volume 14, Number 1, Fall 2017. © 2017 American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

MESSAGE FROM THE CHAIR

Artificial Intelligence: Revolution or Evolution?
By Eileen Smith Ewing, Chair 2016–17

Many fear advances in artificial intelligence (AI). No less a mind than that of Stephen Hawking said, "The development of full artificial intelligence could spell the end of the human race. . . . It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Of course, other promising, revolutionary technological advances have raised similar, Armageddon-like fears: cloning and gene alteration, to name a few. As lawyers committed to the use of science for the betterment of humanity, we face a special challenge—it is we who must work alongside scientists to promote, monitor, and regulate the best uses of technological innovation—and to draft policies on potentially harmful uses. It's a significant job, but one the members of this Section (and indeed readers of and contributors to this magazine) are uniquely qualified to approach.

In their respective articles in this issue, Natasha Duarte and April Doss each take on broad ethical and policy issues affecting the development of AI across fields of knowledge. Duarte introduces possible ethical frameworks for the use of AI in analyzing big data and making automated decisions. Doss argues that our traditional, common-law approach to forming law and policy—namely, the gradual accumulation of judicial decisions—is simply not dynamic enough to meet the rapidity of change in areas like AI.

There are a number of excellent pieces in this issue on more granular AI topics as well. In medicine, Matt Henshon advises the use of AI in small ways, to enhance human decision making. Aubrey Haddach and Jeffrey Licitra demonstrate the high human cost of an AI-driven false positive in medical diagnostics. Privacy issues come to the fore in Kay Firth-Butterfield's article on data privacy and AI: the European Union, for one, seeks transparency in how automated decision-making systems may reach adverse decisions against consumers.

Professor Gary Marchant offers a hopeful note on how AI may affect the practice of law. He concludes we lawyers face an evolution, not a revolution, as many developing technologies will benefit our efforts but not replace the need for human oversight.

Speaking of change and evolution, this is my farewell column as Section Chair. In August, we had a very productive ABA Annual Meeting in New York City, during which we cosponsored a very successful ABA Showcase Program on Cybersecurity (the Section's Eric Hibbard was among the panelists). Our past Section Chair Heather Rafter offered a timely CLE panel on copyright issues affecting music and technology. Committee leaders Katherine Lewis (our incoming Secretary) and Barron Oda provided two fascinating arts-focused CLE programs—one on virtual and augmented reality; the other on technological advances in detecting art forgery.

At the conclusion of the Annual Meeting, it was my privilege to pass the gavel to our new Section Chair: David Z. Bodenheimer of Washington, D.C. Nationally ranked by Chambers USA in the area of government contracts, David's expertise in that field and in related areas, such as privacy, cybersecurity, and homeland security, will be a great boon to our Section. I commend him to you all.

EDITORIAL BOARD
Co-Editors-in-Chief: Lois D. Mermelstein (The Law Office of Lois D. Mermelstein, Austin, TX) and Lisa R. Lifshitz (Torkin Manes LLP, Toronto, ON)
Deputy Editor-in-Chief: Carol Henderson (Stetson University College of Law, Gulfport, FL)
Assistant Editors and Committee Liaisons: Michael A. Aisenberg, Beverly Allen, Harold L. Burstyn, Krista Carver, Eileen Smith Ewing, Jonathan Gannon, Peter J. Gillespie, Avery Goldstein, Stephen M. Goodman, Matthew Henshon, Jung Jin Lee, Peter McLaughlin, Sarah McMillan, Russell Moy, George Lynn Paul, Larry W. Thorpe, Lisa Marie Von Biela, and Charles Woodhouse

SECTION OF SCIENCE & TECHNOLOGY LAW OFFICERS
Chair, 2016–17: Eileen Smith Ewing (Boston, MA)
Chair, 2017–18: David Z. Bodenheimer (Crowell & Moring LLP, Washington, DC)
Chair-Elect: William B. Baker (Potomac Law Group PLLC, Washington, DC)
Vice Chair: Cynthia H. Cwik (Jones Day, San Diego, CA)
Secretary: Katherine Lewis (Meister Seelig & Fein LLP, New York, NY)
Budget Officer: Garth Jacobson (CT Corporation, Seattle, WA)
Section Delegates: Ellen J. Flannery (Covington & Burling LLP, Washington, DC) and Bonnie Fought (Hillsborough, CA)
Liaison to Officers: Julie Fleming (Fleming Strategic, Atlanta, GA)

AMERICAN BAR ASSOCIATION CONTACTS
Section Staff Director: Kelly Book
Art Director: Caryn Cross Hawk
Contract Editor: Melissa Vasich

The SciTech Lawyer (ISSN 1550-2090) is published quarterly as a service to its members by the Section of Science & Technology Law of the American Bar Association, 321 North Clark Street, Chicago, IL 60654-7598. It endeavors to provide information about current developments in law, science, medicine, and technology that is of professional interest to the members of the ABA Section of Science & Technology Law. Any member of the ABA may join the Section by paying its annual dues of $55. Subscriptions are available to nonmembers for $55 a year ($65 for foreign subscribers). Some back issues are available for $12 plus a $3.95 handling charge from the ABA Service Center, American Bar Association, 321 North Clark Street, Chicago, IL 60654-7598; 1-800-285-2221. Requests to reprint articles should be sent to ABA Copyrights & Contracts, www.americanbar.org/utility/reprint/Periodicals; all other correspondence and manuscripts should be sent to The SciTech Lawyer Contract Editor Melissa Vasich, [email protected]. For more information, visit www.americanbar.org/publications/scitech_lawyer_home.html. The material published in The SciTech Lawyer reflects the views of the authors and has not been approved by the Section of Science & Technology Law, the Editorial Board, the House of Delegates, or the Board of Governors of the ABA.

TABLE OF CONTENTS

2  MESSAGE FROM THE CHAIR
Artificial Intelligence: Revolution or Evolution?
By Eileen Smith Ewing, Chair 2016–17

4  A SIMPLE GUIDE TO MACHINE LEARNING
"Artificial intelligence" (AI) usually refers to machine learning. Machine learning uses algorithms to perform inductive reasoning, figuring out "the rules" given the factual inputs and the results. Applying those rules to new sets of factual inputs can deduce results in new cases. Lawyers are already using machine learning to help with legal research, evaluate pleadings, perform large-scale document review, and more.
By Warren E. Agin

10  ARTIFICIAL INTELLIGENCE IN HEALTH CARE: APPLICATIONS AND LEGAL ISSUES
Big data and machine learning are enabling innovators to enhance clinical care and advance medical research through the use of "black-box" algorithms that are too complex for their reasoning to be understood. Safety regulation, medical malpractice and product liability claims, intellectual property, and patient privacy will impact the way black-box medicine is developed and deployed.
By W. Nicholson Price II

14  AI AND MEDICINE: HOW FAST WILL ADAPTATION OCCUR?
Computers excel at working with "structured data," such as billing codes or lab test results, but human medical judgment and doctor's notes are much harder for a computer to analyze. In medicine, the cost of a false positive may be low, but the cost of a false negative can be catastrophic. Thus, applying AI to medicine requires small steps that can supplement and enhance—rather than replace—human decision making.
By Matthew Henshon

16  B-TECH CORNER: PRENATAL GENETIC TESTING: WHERE ALGORITHMS MAY FAIL
With noninvasive prenatal genetic testing, the cost of a false positive is not low for parents who relied on the results and unfortunately terminated the pregnancies or who otherwise planned for the arrival of a child with a trisomy condition. A better understanding of the technology that makes these tests possible should lead to better laws and patient outcomes.
By Aubrey Haddach and Jeffrey Licitra

20  ARTIFICIAL INTELLIGENCE AND THE FUTURE OF LEGAL PRACTICE
Despite the alarming headlines, AI will not replace most lawyers' jobs, at least in the short term. It will create new legal issues for lawyers, such as the liability issues of autonomous cars and the safety of medical robots, and will transform the way lawyers practice, with technology-assisted review, legal analytics, and practice management assistants, but it will be an evolution, not a revolution.
By Gary E. Marchant

24  WHEN LAW AND ETHICS COLLIDE WITH AUTONOMOUS VEHICLES
Thought experiments can be used to study ethical issues involving autonomous vehicle (AV) algorithms. How should a manufacturer program an AV to respond to an inevitable crash, where continuing on the path will injure a large group, steering away will injure a single individual or small group, and attempting to avoid the collision may injure all? The legal and moral solutions may not be the same.
By Stephen S. Wu

28  ARTIFICIAL INTELLIGENCE AND THE LAW: MORE QUESTIONS THAN ANSWERS?
Current U.S. legislation involving AI is principally concerned with data privacy and autonomous vehicles. The European General Data Protection Regulation (GDPR) will give citizens the right to demand an account of how an adverse decision was achieved. This will require transparency in AI systems, which will raise intellectual property and privacy issues that will have to be reconciled with legislation or in the courts.
By Kay Firth-Butterfield

32  BUILDING ETHICAL ALGORITHMS
Ethical review of automated decision-making systems is a necessary prerequisite to the large-scale deployment of these systems. Several established frameworks provide ethical principles to guide organizations' best practices around technology design and data use, and can be adapted to big data analytics, automated decision-making systems, and AI.
By Natasha Duarte

38  WHY CHANGES IN DATA SCIENCE ARE DRIVING A NEED FOR QUANTUM LAW AND POLICY, AND HOW WE GET THERE
We are living in a Newtonian age with respect to legal and policy issues for emerging technologies, content with traditional approaches and relying on the slow accretion of precedent. If we do not make the leap to quantum policy, embracing a duality where conflicting rights and ideals are balanced and encouraged to thrive at the same time, our entire ecosystem of jurisprudence and privacy rights will suffer.
By April F. Doss

43  IN MEMORIAM: CHARLES RAY "CHAS" MERRILL
The Section celebrates the life of Chas Merrill, a pioneer, intellect, and patient mentor who was a key leader in the Information Security Committee.
By Stephen S. Wu and Michael S. Baum

BY WARREN E. AGIN

A SIMPLE GUIDE TO MACHINE LEARNING

Lawyers know a lot about a wide range of subjects—the result of constantly dealing with a broad variety of factual situations. Nevertheless, most lawyers might not know much about machine learning and how it impacts lawyers in particular. This article provides a short and simple guide to machine learning geared to attorneys.

"Artificial intelligence" (AI) usually refers to machine learning in one form or another. It might appear as the stuff of science fiction, or perhaps academia, but in reality machine learning techniques are in wide use today. Such techniques recommend books for you on Amazon, help sort your mail, find information for you on Google, and allow Siri to answer your questions.

In the legal field, products built on machine learning are already starting to appear. Lexis and Westlaw now incorporate machine learning in their natural language search and other features. ROSS Intelligence is an AI research tool that finds relevant "phrases" from within cases and other sources in response to a plain language search. Through the use of natural language processing, you can ask ROSS questions in fully formed sentences. Kira Systems uses machine learning to quickly analyze large numbers of contracts.

These are just two of dozens of new, machine learning–based products. On the surface, these tools might seem similar to those currently available—but they actually do something fundamentally different, making them not only potentially far more efficient and powerful, but also disruptive. For example, machine learning is the "secret sauce" that enables ridesharing services like Uber to efficiently adjust pricing to maximize both the demand for rides and the availability of drivers, predict how long it will take a driver to pick you up, and calculate how long your ride will take. With machine learning, Uber and similar companies are rapidly displacing the traditional taxicab service. Understanding what machine learning is and what it can do is key to understanding its future effects on the legal industry.

What Is Machine Learning?
Humans are good at deductive reasoning. For example, if I told you that a bankruptcy claim for rent was limited to one year's rent, you would easily figure out the amount of the allowed claim. If the total rent claim was $100,000, but one year's rent was $70,000, you would apply the rule and deduce that the allowable claim is $70,000. No problem. You can determine the result easily, and you can also easily program a computer to consistently apply that rule to other situations.

Now reverse the process. Assume I told you that your client was owed $100,000 and that the annual rent was $70,000, and then told you that the allowable claim was $70,000. Could you figure out how I got that answer? You might guess that the rule is that the claim is limited to one year's rent, but could you be sure? Perhaps the rule was something entirely different. This is inductive reasoning, and it is much more difficult to do.

Machine learning techniques are computational methods for figuring out "the rules," or at least approximations of the rules, given the factual inputs and the results. Those rules can then be applied to new sets of factual inputs to deduce results in new cases.

Warren E. Agin ([email protected]) is a principal of Analytic Law LLC in Boston, which helps law firms and legal departments find quantitative solutions to legal problems. He also chairs the ABA Business Law Section's Legal Analytics Committee and teaches legal analytics as an adjunct professor at Boston College Law School. You can follow him on Twitter at @AnalyticLaw. Mr. Agin thanks Michael Bommarito of LexPredict and Thomas Hamilton of ROSS Intelligence for kindly reviewing and commenting on an earlier version of this article, published in the February 2017 issue of Business Law Today, but emphasizes that any errors are his, not theirs.
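Returning to the rent-cap example, the deduction/induction contrast can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the article's products: deduction applies the known rule to new facts, while induction works backward from example outcomes by testing invented candidate rules against them.

```python
# A sketch of the contrast drawn above, using the rent-cap example. Deduction
# applies a known rule to new facts; induction works backward from examples
# to the rule. The candidate rules and dollar figures are invented.
def allowed_claim(total_rent_claim, one_years_rent):
    # The known rule: a bankruptcy claim for rent is capped at one year's rent.
    return min(total_rent_claim, one_years_rent)

# Deduction: $100,000 claimed, $70,000 annual rent -> $70,000 allowed.
assert allowed_claim(100_000, 70_000) == 70_000

# Induction: given (facts, result) pairs, keep only the candidate rules
# consistent with every example.
examples = [
    ((100_000, 70_000), 70_000),
    ((50_000, 70_000), 50_000),
    ((90_000, 60_000), 60_000),
]
candidate_rules = {
    "cap at one year's rent": lambda total, annual: min(total, annual),
    "always one year's rent": lambda total, annual: annual,
    "allow the full claim":   lambda total, annual: total,
}
consistent = [name for name, rule in candidate_rules.items()
              if all(rule(*facts) == result for facts, result in examples)]
# Only the cap rule survives all three examples -- but with fewer examples,
# several rules could remain, which is why induction is the harder direction.
```

With only the first example, both the cap rule and "always one year's rent" would remain consistent; ruling out rivals takes more data, which mirrors why inductive reasoning is harder than deductive reasoning.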

For instance, consider number series games. For example:

2 4 6 8 10 ?

The next number is 12, right? Here, the inputs are the series of numbers 2 through 10, and from this we induce the rule for getting the next number—add 2 to the last number in the series. Here is another one:

1 1 2 3 5 ?

The next number is 8. This is a Fibonacci sequence, and the rule is that you add together the last two numbers in the series.

With these games, what you are doing in your head is looking at a series of inputs and answers, and using inductive reasoning to figure out the rule. You then apply that rule to get the next number. Broken down a little, the prior game looks like this:

Input      | Result
1 1        | 2
1 1 2      | 3
1 1 2 3    | 5
1 1 2 3 5  | ?

We look at the group of inputs and induce a rule that gives us the displayed results. Once we have derived a workable rule, we can apply it to the last row to get the result 8, but more importantly we can apply it to any group of numbers in the Fibonacci sequence. This is a simple (very simple) example of what machine learning does.

Let's take a more complex example. Assume we wanted to predict the amount of a debtor's counsel's fees in a Chapter 11 case. We could take a look at cases in the past and get information about each: for example, the number of creditors, the debtor's market capitalization, where the case was filed, and, of course, the eventual fee awarded to the debtor's counsel. We might compare these numbers and discover that if we graphed the fee awards against the debtor's market capitalization, it looks something like figure 1 (purely hypothetically).

[Figure 1: scatter plot of legal fee (y axis) against market cap (x axis)]

There seems to be a trend. The larger the market capitalization (the x axis), the higher the legal fee seems to be. In fact, the data points look sort of like a line. We can calculate the line that best fits the data points using a technique called linear regression (see fig. 2).

[Figure 2: the same scatter plot of legal fee against market cap, with a fitted regression line]

We can even see the equation that the line represents. You take the market capitalization for the debtor, multiply it by 4.92 percent, and add $116,314 (these two variables are the "weighting mechanisms," explained in detail below). This is called a "prediction model." The prediction model might not perfectly fit the data used to create it—after all, not all the data points fall exactly on the line—but it provides a useful approximation. That approximation will provide a pretty good estimate for legal fees in future cases (that's what the R² number on the graph tells us). For the record, the data here is imaginary, hand-tailored to demonstrate the methodology.

Naturally, real-world problems are more complex. Instead of a short series of numbers as inputs, a real-world problem might use dozens, perhaps thousands, of possible inputs that might be applied to an undiscovered rule to obtain a known answer. We also do not necessarily know which of the inputs the unknown rule uses!

To solve a more complex problem, we might begin by building a database with the relevant points of information about a large number of cases, in each instance collecting the data points that we think might affect the answer. To build our prediction model, we would select cases at random to use as a "training set," putting the remainder aside to use as a "test set." Then we would begin to analyze the various relationships among the data points in our training set using statistical methods. Statistical analytics can help us identify the factors that seem to correlate with the known results and the factors that clearly do not matter.
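The fitted line behind a prediction model of this kind can be computed in a few lines. The sketch below fits fee = a × market cap + b by ordinary least squares; the data points are invented (like the article's), so a and b will not reproduce the 4.92 percent and $116,314 figures.

```python
# A sketch of fitting a "prediction model" fee = a * market_cap + b by
# ordinary least squares. The data points are imaginary, as in the article.
def linear_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); the intercept then follows
    # from forcing the line through the point of means.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

market_caps = [1e6, 5e6, 10e6, 20e6, 50e6]                 # inputs (x axis)
fees = [160_000, 370_000, 600_000, 1_100_000, 2_600_000]   # results (y axis)

a, b = linear_fit(market_caps, fees)
estimated_fee = a * 30e6 + b   # estimate for a new debtor with a $30M market cap
```

With these invented points, the fitted slope is roughly 5 percent of market cap, so the model estimates a fee of about $1.6 million for a $30 million debtor.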

Advanced statistical methods might help us sort through the various relationships and find an equation that takes some of the inputs and provides an estimated result that is pretty close to the actual results. Assuming we find such an equation, we then try it out on the test set to see if it does a good job there as well—predicting results that are close to the real results. If our predictive model works on our test set, then we consider ourselves lucky. We can now predict a debtor's counsel's legal fees ahead of time; at least until changing circumstances—perhaps rules changes, a policy change at the U.S. Trustee's Office, or the effect our very own model has on which counsel get hired for cases—render our model inaccurate. If our model does not work on the test set data, then we consider it flawed and go back to the drawing board.

For real-world problems, this kind of analysis is difficult. The job of collecting the data, cleaning it, and analyzing it for relationships takes a lot of time. Given the large number of potential variables that affect real-world relationships, identifying those that matter is somewhat a process of trial and error. We might get lucky and generate results quickly, we might invest substantial resources without finding an answer at all, or the relationships might simply prove to be too complex for the methods I described to work adequately. Inductive reasoning is difficult to do manually. This brings us to machine learning. Machine learning can efficiently find relationships using inductive reasoning.

As an example of what machine learning can do, consider these images:

[Images of handwritten letters]

Assume we want to set up a computer system to identify these handwritten images and tell us what letter each image represents. Defining a rule set is too difficult for us to do by hand and come up with anything that is remotely usable, but we know there is a rule set. The letter A is clearly different from the letter P, and C is different from G, but how do you describe those differences in a way a computer can use to consistently determine which image represents which letter?

The answer is that you don't. Instead, you reduce each image to a set of data points, tell the computer what the image is of, and let the computer induce the rule set that reliably matches all the sets of data points to the correct answers. For the image recognition problem, you might begin by defining each letter as a 20 pixel by 20 pixel image, with each pixel having a different grayscale score. That gives you 400 data points, each with a different value depending on how dark that pixel is. Each of these sets of 400 data points is associated with the answer—the letter they represent. These sets become the training set, and another database of data points and answers is the test set. We then feed that training set into our machine learning algorithm—called a "learner"—and let it go to work.

What does the learner actually do? This is a little more difficult to explain, partially because there are a lot of different types of learners using a variety of methods. Computer scientists have developed a number of different kinds of techniques that allow a computer program to infer rule sets from defined sets of inputs and known answers. Some are conceptually easier to understand than others. In this article, I describe, in simple terms, how one of these techniques works. Machine learning programs will use a variation of one or more of these techniques. The most advanced systems include several techniques, using the one that fits the specific problem best or seems to generate the most accurate answers.

In general, think of a learner as including four components. First, you have the input information from the training set. This might be data from a structured, or highly defined, database, or unstructured data like you might find in a set of discovery documents. Second, you have the answers. With a structured database, a particular answer will be closely identified with the input information. With unstructured information, the answer might be a category, such as which letter an image represents or whether a particular email is spam; or the answer might be part of a relationship, such as text in a court decision that relates to a legal question asked by a researcher. Third, you have the learning algorithm itself—the software code that explores the relationships between the input information and the answers. Finally, you have weighting mechanisms—basically parts of the algorithm that help define the relationships between the input information and the answers, within the confines of the algorithm. Once you have these four components, the learner simply adjusts the weighting mechanisms in a controlled manner until it finds values for the weighting mechanisms that allow the algorithm to accurately match the input information with the known correct answers.

Let's see how this might work with my hypothetical system for estimating a debtor's counsel's fees. In the example (see fig. 2), the market capitalizations are the input information (X). The known legal fees for each case are the answers (Y). For purposes of illustration, let's assume the algorithm is Y = aX + b (a vast simplification, but I'm going to use it to demonstrate a point). The weighting mechanisms are the two variables a and b. Instead of manually calculating the values of a and b using linear regression, a machine learning program might instead try different values of a and b, each time checking to see how well the line fits the actual data points mathematically. If a change in a or b improves the fit of the line, the learner might continue to change a and b in the same direction, until the changes no longer improve the line's fit.

Of course, in my example it is easier just to calculate a and b using linear regression techniques.
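The trial-and-error adjustment of a and b described above can be sketched as a naive hill-climbing search. This is a hand-rolled stand-in for what real learners do (production systems typically use gradient methods); the data points and step sizes are invented for illustration.

```python
# A sketch of the trial-and-error fitting described above: nudge a and b, keep
# any change that improves the fit, and stop when no nudge helps. The data
# points and step sizes are invented.
def sum_squared_error(a, b, points):
    # "How well the line fits": total squared distance of the points from the line.
    return sum((a * x + b - y) ** 2 for x, y in points)

def fit_by_search(points, a=0.0, b=0.0, step_a=0.01, step_b=10_000.0):
    error = sum_squared_error(a, b, points)
    for _ in range(10_000):  # cap the number of adjustment rounds
        improved = False
        for da, db in ((step_a, 0.0), (-step_a, 0.0), (0.0, step_b), (0.0, -step_b)):
            trial = sum_squared_error(a + da, b + db, points)
            if trial < error:  # keep a change only if it improves the fit
                a, b, error = a + da, b + db, trial
                improved = True
        if not improved:       # no nudge helps: the search has converged
            break
    return a, b

points = [(1e6, 160_000), (5e6, 370_000), (10e6, 600_000),
          (20e6, 1_100_000), (50e6, 2_600_000)]
a, b = fit_by_search(points)   # lands near the least-squares values
```

On these points the search settles close to the values a direct least-squares calculation would give, which is the article's point: the iterative search and the closed-form calculation are two routes to the same line.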

I don't even need to have math skills to do it—the functionality is built right into Microsoft Excel and other common software products. Given a spreadsheet with the data, I can perform the calculation with a few mouse clicks. Machine learning programs, however, can figure out the relationships when there are millions of data points and billions of relationships—when modeling the systems is impossible to do by hand because of the complexity. Machine learning systems are limited only by the quality of the data and the power of the computers running them.

Now, let's look at an example of a machine learning system.

Neural Networks
The term "neural network" conveys the impression of something obscure and mysterious, but it is probably the easiest form of a machine learning system to explain to the uninitiated. This is because it is made up of layers of a relatively simple construct called a "perceptron."

[Diagram of a perceptron. Credit: https://blog.dbrgn.ch/2013/3/26/perceptrons-in-python/]

This perceptron contains four components, the first being one or more inputs represented by the circles on the left. The input is simply a number, perhaps between 0 and 1. It might represent part of our input information, or it might be the output from another perceptron.

Second, each input number is given a weight—a percentage by which the input is multiplied. For example, if the perceptron has four inputs of equal importance, each input is multiplied by 25 percent. Alternatively, one input might be multiplied by 70 percent while the other three are each multiplied by 10 percent, reflecting that one input is far more important than the others.

Third, these weighted input numbers are added to generate a weighted sum—a single number that reflects the weights given the various inputs.

Fourth, the weighted sum is fed into a step function. This is a function that outputs a single number based on the weighted sum. A simple step function might output a 0 if the weighted sum is between 0 and 0.5, and a 1 if the weighted sum is between 0.5 and 1. Usually a perceptron will use a logarithmic step function designed to generate a number between, say, 0 and 1 along a logarithmic scale so that most weighted values will generate a result at or near 0, or at or near 1, but some will generate a result in the middle.

Some systems will include a fifth element: a "bias." The bias is a variable that is added or subtracted from the weighted sum to bias the perceptron toward outputting a higher or lower result.

In summary, the perceptron is a simple mathematical construct that takes in a bunch of numbers and outputs a single number. By computing the weighted sum of the inputs, running that number through the step function, and adjusting the result using a final bias, the perceptron tells you whether the collection of inputs produces a result above or below a threshold level. This mechanism works much like a switch. The result of that switch might be fed to another perceptron, or it might relate to a particular "answer." For example, if your learner is doing handwriting recognition, you might have a perceptron that tells you the image is the letter A based on whether the output number is closer to a 1 than a 0.

In a neural network, the perceptrons typically are stacked in layers. The first layers receive the input information for the learner, and the last layer outputs the results.

[Diagram of a layered neural network. Credit: http://www.intechopen.com/books/cerebral-palsy-challenges-for-the-future/brain-computer-interfaces-for-cerebral-palsy]

In between are what are called "hidden layers" of perceptrons, each taking in one or more input numbers from a prior layer and outputting a single number to one or more perceptrons in the next layer. By stacking the layers of perceptrons, the "deep learner" acts a little bit like a computer circuit, one whose operations are programmed by the changes in the weights.

The computer scientist building the neural network determines its design—how many perceptrons the system uses, where the input data comes from, how the perceptrons connect, what step function gets used, and how the system interprets the output numbers. However, the learner itself decides what weights are given to each input as the numbers move through the network, and what biases are applied to each perceptron. As the weights and biases change, the outputs will change. The learner's goal is to keep adjusting the weights and biases used by the system until the system produces answers using the input information that most closely approximate the actual, known answers.

Returning to the handwriting recognition example, remember that we broke down each letter image into 400 pixels, each with a grayscale value. Each of those 400 data points would become an input number into our system and be fed into one or more of the perceptrons in the first input layer. Those outputs would pass through some hidden layers in the middle. Finally, we would have an output layer of 26 perceptrons, one for each letter. The output perceptron with the highest output value will tell us what letter the system thinks the image represents.

Then, we pick some initial values for the weights and biases, run all the samples in our training set through the system, and see what happens. Do the output answers match the real answers? Probably not even close the first time through. So, the system begins adjusting weights and biases, with small, incremental changes, testing against the training set and continuously looking for improvements in the results until it becomes as accurate as it is going to get. Then, the test set is fed into the system to see if the set of weights and biases we just determined produces accurate results.

see if the set of weights and biases we just determined produces accurate results. If it does, we now have an algorithm that we can use to interpret handwriting.

It might seem a little like magic, but even a relatively simple neural network, properly constructed, can be used to read handwriting with a high degree of accuracy. Neural networks are particularly good at sorting things into categories, especially when using a discrete set of input data points. What letter is it? Is it a picture of a face or something else? Is a proof of claim filed in a bankruptcy case objectionable or not?

Machine Learning in Action
These examples are basic, designed to provide some understanding of what are fairly abstract systems. Machine learners come in many flavors—some suitable for performing basic sorting mechanisms, and others capable of identifying and indexing complex relationships among information in unstructured databases. Some systems work using fairly simple programs and can run on a typical office computer, and others are highly complex and require supercomputers or large server farms to accomplish their tasks.

To understand the power of machine learning systems compared with non-learning analytic tools, let's revisit an earlier example: ROSS Intelligence. ROSS is built on the IBM Watson system, although it also includes its own machine learning systems to perform many of its tasks. Watson's search tools employ a number of machine learning algorithms working together to categorize semantic relationships in unstructured textual databases. In other words, if you give Watson a large database of textual material dealing with a particular subject, Watson begins by indexing the material, noting the vocabulary and which words tend to associate with other words. Even though Watson does not actually understand the text's meaning, it develops, through this analysis, the ability to mimic understanding by finding the patterns in the text.

For example, when you conduct a Boolean search in a traditional service for "definition /s 'adequate protection,'" the service searches its database for an exact match where the word "definition" occurs in the same sentence as the term "adequate protection." ROSS does something different. Using the Watson AI systems and its own algorithms, it looks within the search query for word groups it recognizes and then finds the results it has learned to associate with those word groups. If you search for "what is the definition of adequate protection," the system will associate the query "what is the definition" with similar queries, such as "what is the meaning of" or just "what is." It will also recognize the term "adequate protection" as a single concept instead of two separate words and, likely, given the context, understand it as a term found in bankruptcy materials. Finally, it will have associated a successful response as being one that gives you certain types of clauses including the term "adequate protection." It will not understand specifically that you are looking for a definition, but because others who used the system and made similar inquiries preferred responses providing definitions, you will get clauses containing similar language patterns and, voilà, you will get your definition.

You should not even have to use the term "adequate protection" to get an answer back discussing the concept when that is the appropriate answer to your question. So long as your question triggers the right associations, the system will, over time, learn to return the correct responses.

The key is that a machine learning system learns. In a way, we do the same thing ROSS does. The first time we research a topic, we might look at a lot of cases and go down a lot of dead ends. The next time, we are more efficient. After dealing with a concept several times, we no longer need to do the research. We remember what the key case is, and at most we check to see if there is anything new. We know how the cases link together, so the new materials are easy to find.

A machine learning–based research tool can do this on a much broader scale. It learns not just from our particular research efforts, but also from those of everyone who uses the system. As the system receives more use, it employs user feedback to assess how its model performs and to allow for periodic retraining. As a result, it will become extremely adept at providing immediate responses to the most common queries by users. It might also eventually be able to give you a confidence level in its answer, comparing the information it provides against the entire scope of reported decisions and its users' reactions to similar, prior responses, to let you know how reliable the results provided might be. Even though the system doesn't understand the material in the same manner as a human, its ability to track relationship building over a large scope of content and a large number of interactions allows it to behave as you might, if you had researched a particular point or issue thoroughly many times previously. This provides a research tool far more powerful than existing methodologies.

Legal tools based on machine learning have enormous application. Lawyers are already using learners to help with legal research, categorize document sets for discovery, evaluate pleadings and transactional documents for structural errors or ambiguity, perform large-scale document review in mergers and acquisitions, and identify contracts affected by systemic changes like Brexit. General Motors' legal department (and likely those of other large companies) is exploring using machine learning techniques to evaluate and predict litigation outcomes and even help choose which law firms to employ. Machine learning is not the solution for every question, but it can help answer a large number of questions that simply were not answerable in the past, and that is why the advent of machine learning in the legal profession will prove truly transformational.

Published in The SciTech Lawyer, Volume 14, Number 1, Fall 2017. © 2017 American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.
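The perceptron and its adjust-until-accurate training loop, described in the neural network discussion earlier, can be sketched in a few lines of code. This sketch is illustrative rather than anything from a real product: the function names and the toy data (teaching a single perceptron the logical AND of two inputs) are assumptions of this example, and the simple error-driven update shown is just one classic way a learner can nudge its weights and bias toward the known answers.

```python
# A perceptron: multiply each input by its weight, add the results to get a
# weighted sum, add the bias, and run the total through a step function.

def step(x):
    # A hard step function: outputs 1 at or above the threshold, else 0.
    return 1 if x >= 0 else 0

def predict(inputs, weights, bias):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return step(weighted_sum + bias)

# Toy training set with known answers: the logical AND of two inputs.
training_set = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0, 0]  # initial guesses; the learner adjusts these
bias = 0
rate = 1          # size of each incremental adjustment

# The learner compares its output with the known answer and makes small,
# incremental changes to the weights and bias until the answers match.
for _ in range(20):
    for inputs, target in training_set:
        error = target - predict(inputs, weights, bias)
        weights = [w + rate * error * i for w, i in zip(weights, inputs)]
        bias += rate * error

print([predict(i, weights, bias) for i, _ in training_set])  # -> [0, 0, 0, 1]
```

A real handwriting recognizer would stack many such units in layers (400 inputs, hidden layers, and an output layer of 26 perceptrons, one per letter) and would typically use a smoother step function, but the adjust-and-retest cycle is the same idea.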

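The contrast drawn earlier between a literal Boolean "/s" search and a system that maps many query phrasings onto one learned intent can also be made concrete. The sketch below is not ROSS's or Watson's actual code: the sample documents, the tiny table of learned query patterns, and the function names are stand-ins for associations a real system would infer from large-scale usage data.

```python
# Contrast: a literal Boolean "/s" (same-sentence) match versus a learned
# mapping from query phrasings to an underlying intent.

documents = [
    "Adequate protection is defined in section 361 of the Bankruptcy Code.",
    "The court denied the motion. Adequate protection was not at issue.",
]

def same_sentence(doc, term_a, term_b):
    # A traditional "/s" search: both terms must appear in one sentence.
    return any(term_a in s and term_b in s
               for s in doc.lower().split("."))

# The literal search for "definition" misses the sentence that actually
# defines the term, because that sentence says "is defined" instead.
hits = [d for d in documents if same_sentence(d, "definition", "adequate protection")]
print(len(hits))  # -> 0

# A learner instead maps many phrasings onto one intent it has seen users
# reward, so "what is", "meaning of", and "definition of" all behave alike.
learned_patterns = {
    "what is the definition of": "DEFINITION",
    "what is the meaning of": "DEFINITION",
    "what is": "DEFINITION",
}

def intent(query):
    q = query.lower()
    for pattern, label in learned_patterns.items():
        if q.startswith(pattern):
            return label
    return "UNKNOWN"

print(intent("What is the meaning of adequate protection?"))  # -> DEFINITION
```

Note that the more specific patterns are checked first; a production system would rank candidate intents by a learned score rather than by a fixed lookup, but the point is the same: the literal match fails where the learned association succeeds.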
Artificial Intelligence in Health Care: Applications and Legal Issues
By W. Nicholson Price II

Artificial intelligence (AI) is rapidly moving to change the healthcare system. Driven by the juxtaposition of big data and powerful machine learning techniques—terms I will explain momentarily—innovators have begun to develop tools to improve the process of clinical care, to advance medical research, and to improve efficiency. These tools rely on algorithms, programs created from healthcare data that can make predictions or recommendations. However, the algorithms themselves are often too complex for their reasoning to be understood or even stated explicitly. Such algorithms may be best described as "black-box."1 This article briefly describes the concept of AI in medicine, including several possible applications, then considers its legal implications in four areas of law: regulation, tort, intellectual property, and privacy.

W. Nicholson Price II, PhD (wnp@umich.edu) is an assistant professor of law at the University of Michigan Law School. His work focuses on innovation in the life sciences, with a significant emphasis on the use of big data and artificial intelligence in health care.

AI in Medicine
Medicine, like many other fields, is experiencing a confluence of two recent developments: the rise of big data, and the growth of sophisticated machine learning/AI techniques that can be used to find complex patterns in those data. Big data as a phenomenon is characterized by the "three Vs" of volume (large quantities of data), variety (heterogeneity in the data), and velocity (fast access to the data). In medicine, the data come from many sources: electronic health records, medical literature, clinical trials, insurance claims data, pharmacy records, and even information entered by patients into their smartphones or recorded on fitness trackers. Machine learning techniques, a subset of AI, use simple learning rules and iterative techniques to find and use patterns in these vast amounts of data. The resulting algorithms can make predictions and group sets—how long is a patient expected to live given his collection of symptoms, and does that picture of a patch of skin look like a benign or a cancerous lesion?—but typically, these techniques cannot explain why or how they reach the conclusion they do. Either they cannot explain it at all, or they can give explanations that are accurate but meaningless in terms of medical understanding.2 Because of this inherent opacity (which might or might not be augmented with deliberate secrecy about how the algorithms were developed and validated), I describe this field as "black-box medicine," though it has also been referred to as AI in medicine or "predictive analytics."3 To add to the complexity, when more data are available for the machine learning algorithms, those data can be incorporated to refine future predictions, as well as to change the algorithms themselves. The algorithms at the heart of black-box medicine, then, are not only opaque but also likely to change over time.

Black-box medicine has tremendous potential for use throughout the healthcare system, including in prognostics, diagnostics, image analysis, resource allocation, and treatment recommendations. Machine learning is most familiar in the context of image recognition, and an algorithm has already been developed that can identify skin cancer by analyzing images of skin lesions; the algorithm performs as well as board-certified dermatologists.4 A recent New England Journal of Medicine article suggests that such algorithms could soon enter widespread use in image analysis, aiding or displacing much of the work of anatomical pathologists or radiologists within the span of years.5 Another current algorithm can predict which trauma victims are likely to hemorrhage by constantly analyzing vital signs and can in turn call for intervention to forestall catastrophe; such prognostic algorithms could come into use in a similarly short time frame.6 A bit farther off, black-box algorithms could be used for diagnosis more generally, to recommend off-label uses for existing drugs, to allocate scarce resources to patients most likely to benefit from them, to detect fraud or problematic medical behavior, or to guide research into new diseases or conditions. In fact, black-box algorithms are already in use today in smartphone apps that aim to identify developmental disorders in infants based on facial features7 or autism in young children based on eye movement tracking.8 The potential for benefit from such black-box medicine is substantial, but it comes with its own challenges: scientific and medical, certainly, but also legal. How do we ensure that black-box medicine is safe and effective, how do we ensure its efficient development and deployment, and how do we protect patients and patient privacy throughout the process?

Regulation
The first question to ask is perhaps the most fundamental: How do we ensure that black-box algorithms are high quality—that is, that they do what they say, and that they do it well and safely? New and emerging medical technologies and devices are typically regulated for safety and efficacy by the Food and Drug Administration (FDA). Whether the FDA actually has statutory authority over free-standing algorithms used to make medical decisions (or to help make them) depends on the relatively complex question of what is a "medical device." The FDA's regulation

10 TheSciTechLawyer FALL 2017 Published in The SciTech Lawyer, Volume 14, Number 1, Fall 2017. © 2017 American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. of black-box medical algorithms may rigid, involving somewhat lighter pre- also conflict with its long-standing state- market scrutiny (focused on procedural ment that it does not regulate the practice safeguards like the quality of the data of medicine.9 Elsewhere, I argue that the used, the development techniques, and FDA has this authority, probably over the validation procedures) coupled with algorithms standing alone and almost cer- robust post-market oversight as these tainly in the context of linked technology algorithms enter into clinical care. The that may more readily be called a “medi- FDA has recently expressed interest in cal device,” but disputes may arise over this approach.12 Of course, this is eas- this point.10 Industry dynamics may also ier said than done; the parallel case of play a role here: Silicon Valley, the hub of post-market surveillance for drugs is much of the innovation in AI generally, notoriously troublesome to implement. traditionally has not worked closely with One attractive possibility would be for regulators like the FDA. the FDA to enable oversight help from Assuming that the FDA can and will other sophisticated healthcare entities by regulate AI in the healthcare system collaborating with them and, crucially, (and the agency has asserted this abil- enabling ways to get them important ity and intent),11 typically two tools help and useful information. 
Hospitals, insur- ensure safety and efficacy of new medi- ance companies, and physician specialty cal technology: scientific understanding associations all have an interest in ensur- and clinical trials. Unfortunately, these ing that black-box algorithms actually two tools do not work well in the context work to help patients (and, potentially, of black-box medicine. Understanding their bottom lines). Rival developers may does not work for obvious reasons— also have an interest, especially in find- we do not understand how a black-box ing problems with existing algorithms. algorithm makes decisions, because the In addition, these sophisticated entities machine learning techniques generally may have the capacity to perform eval- cannot tell us their reasoning, and even uations, especially as they are used in when they can, the results are often too clinical practice, and to generate perfor- complex to understand. Using clinical mance data. Nevertheless, performing trials for testing safety, efficacy, and valid- this type of collaborative governance role ity might work for some algorithms, but requires information, and many algo- will not work for many others. For algo- rithm developers are reluctant to share rithms that divide patients into groups that kind of information with any other and suggest a particular treatment, clin- entities. Potentially the FDA could serve ical trials could be used to test their as a centralized information-sharing role efficacy. But some algorithms will make to allow those other entities to play their highly personalized treatment predic- part in regulating black-box medicine. tions or recommendations, so that the However, exactly how this idea might use of clinical trials would be infeasible. become a reality is very much an unre- And even for algorithms that are ame- solved question. 
nable to trials, the benefits of black-box medicine—quick, cheap shortcuts to Tort otherwise inaccessible medical knowl- What do we do when black-box medi- edge—would be seriously delayed or cine goes awry? The law of tort interacts even curtailed due to the slow, ponder- with black-box medicine in a few different ous, expensive enterprise of clinical trials. contexts. First, if there are flaws built into For algorithms that change as they incor- the algorithms themselves, or if regulation porate more data, the challenges are even fails to ensure that algorithms are high more pronounced. In short, in black-box quality, then the developers of algorithms medicine, traditional methods of testing (or technologies that rely on them) new medical technologies and devices are might become liable under tort law. likely not to work at all in some instances, However, courts have been reluctant and to slow or stifle innovation in others. to extend or apply product liabil- So how should the FDA tackle this ity theories to software developers, challenge? The most fruitful path, I and even more reluctant in the argue, will likely be more flexible than context of healthcare software.13

Part of that reluctance has come from the fact that healthcare software to date has been characterized primarily as technology that helps healthcare providers make decisions by providing them with information or analysis, with the final decision always resting in the hands of the provider. Black-box medicine turns that notion on its head, or at least it can. Can and should healthcare providers be fully responsible for decisions suggested or made by black-box algorithms that they do not, or cannot, understand?

This raises a second set of questions. What must healthcare providers and healthcare institutions—doctors, nurses, hospitals, managed-care organizations, and the like—do to fulfill their duties of care to patients in a healthcare world with black-box algorithms? Must providers themselves evaluate the quality of black-box algorithms, based on procedural measures (validation undertaken, performance, etc.), before relying on those algorithms in the course of providing care? And should healthcare institutions perform similar evaluations before implementing black-box software? I argue elsewhere that they should, but currently the information necessary for that type of evaluation is largely unavailable—just as in the parallel regulatory context mentioned above.14 Similarly, if an algorithm suggests an intervention that seems mundane but unhelpful, useless and expensive, or dangerous, should the provider second-guess the recommendation? On the one hand, the answer seems an obvious "yes"—providers are trained to care for patients—but on the other hand, if providers only implement those decisions they would have reached on their own, they will leave on the table much of the benefit that black-box medicine promises to extract from otherwise inaccessible patterns in big data. This would not leave everything on the table—algorithms can still potentially perform the usual analyses more quickly and cheaply15—but excessive caution is not costless. Courts have not tackled these issues yet, but they will need to in the near future.

Intellectual Property
Intellectual property protection creates another set of challenges for the development of black-box medicine.16 When firms invest in developing black-box algorithms, how can they protect that investment? Developing black-box algorithms can involve considerable expense. Developers must generate, assemble, or acquire the tremendous data sets needed to train their algorithms; they must assemble the expertise and resources to actually develop those algorithms; and they must validate them to make sure they work. Normally, we might expect intellectual property to provide some measure of protection for the information goods created by such expenditures, so that firms are willing to invest the necessary funds for their development without fear that resulting inventions will be appropriated by others.17 However, intellectual property fits relatively poorly for black-box medicine.

Patents are a natural choice to protect technological innovation, but patents do not provide strong incentives for black-box medicine. A string of recent decisions by the U.S. Supreme Court interpreting section 101 of the Patent Act, which governs patentable subject matter, has made it very difficult to patent black-box algorithms.18 In Mayo Collaborative Services v. Prometheus Laboratories, Inc., the Supreme Court repeated its long-standing statement that laws of nature cannot be patented.19 However, the Court applied that rule to a diagnostic test that used the measurement of a metabolite level in a patient's blood to adjust the dosage of a drug, which many, including the Federal Circuit below, had thought to be a patentable application of such a law. The Supreme Court used very broad language to invalidate the patent: "[W]ell-understood, routine, conventional activity previously engaged in by scientists who work in the field . . . is normally not sufficient to transform an unpatentable law of nature into a patent-eligible application of such a law."20 Where underlying information about the biological world is the heart of the invention, merely using that information to guide medical treatment is unpatentable (as is the information itself). But this describes most black-box algorithms quite well and suggests that those algorithms are unlikely to be patentable subject matter. Further patent problems might arise under section 112, which requires a "written description" of the invention. Although this issue has not been tested in the courts, it is at least debatable how well one can describe an algorithm that is opaque, and how broad the resulting protection would be.21

Trade secrecy—or secrecy in general—seems an obvious solution but comes with its own problems. Trade secret law protects from appropriation information that is kept secret and gets commercial value from its secrecy. What better way than secrecy to protect an algorithm that is already opaque and cannot be understood? The data from which an algorithm is generated, the method by which the algorithm was developed, and the process of its validation can all be kept secret by firms looking to protect their investment in the algorithm's development. And indeed, firms that are developing black-box algorithms seem to be relying on just such secrecy. But while secrecy may be an effective intellectual property strategy, it runs headlong into the concerns raised above about safety, malpractice, and regulation. How willing will doctors, patients, and insurers be to accept medical algorithms where not only is the working of the algorithm a mystery, but also the way the algorithm was made and tested, along with the data underlying its development? And if third parties are indeed to be actively involved in ensuring algorithmic quality and validity, as I suggest above, how can they conduct such evaluations without the underlying information? The reliance of algorithm developers on trade secrecy echoes other past situations where information relevant to public health has been kept secret, and these experiences suggest that there may be similar fights over access to algorithmic information.22

However, if intellectual property incentives are unavailable to help protect investments in black-box medicine, will firms invest sufficiently? How can the government help drive this form of innovation while ensuring that it is safe and effective? These questions are and will remain pressing for the development of AI in health care.

Privacy
Finally, privacy concerns run through the development and deployment of black-box medicine.23 Privacy is important in at least two areas: gathering immense amounts of healthcare data to develop

algorithms, and sharing such data to oversee them. Algorithm developers need to assemble data from multiple sources to train machine learning algorithms. Those data—as well as data about how the algorithms perform in practice—may then be shared with other entities in the healthcare system for the purpose of evaluation and validation, as described above. In each case, patient-oriented data privacy is a concern, most notably as mandated under the Health Insurance Portability and Accountability Act's (HIPAA's) Privacy Rule. The Privacy Rule governs and restricts both disclosure and use of "protected health information" (that is, most individually identifiable health information) by "covered entities" (mostly, healthcare providers, health insurers, health information clearinghouses, and business associates of the same).24 HIPAA creates a relatively complex set of permitted and restricted uses of protected health information. Notably, de-identified information is not governed by the Privacy Rule (though it raises its own concerns about data aggregation and the possibility of re-identification), and neither is information collected by noncovered entities like Google, Apple, or other aggregators of big data.25 Navigating the HIPAA Privacy Rule—and otherwise managing and addressing the privacy concerns of those whose data is used throughout black-box medicine—creates yet another ongoing set of potential legal concerns.

Conclusion
Black-box medicine has tremendous potential to reshape health care, and it is moving rapidly to do so. Some healthcare black-box algorithms are already at work in consumer-directed smartphone apps, and others are likely to enter medical practice in the span of years. But the legal issues involved with the development and implementation of AI algorithms, which we do not and cannot understand, are substantial. As described here, regulation, legal causes of action such as medical malpractice and product liability, intellectual property, and patient privacy all have real implications for the way black-box medicine is developed and deployed. In turn, black-box medicine may change the way we approach some of these issues in the context of contemporary health care. Does entity-centered privacy regulation make sense in a world where giant data agglomerations are necessary and useful? Should intellectual property law find new ways to recognize the primacy of health data and the fast-moving nature of algorithms? Must the legal doctrine of the "learned intermediary" bow to the recognition that doctors cannot fully understand all the technologies they use, or the choices such technologies help them make, when they are not provided the needed information? Should the FDA change how it regulates new medical technology as AI software gains prominence? As black-box medicine develops and evolves, the need to consider these legal issues—and the need for scientifically literate lawyers who can understand them in context—will continue to grow.

Endnotes
1. W. Nicholson Price II, Black-Box Medicine, 28 Harv. J.L. & Tech. 419 (2015).
2. Jenna Burrell, How the Machine "Thinks": Understanding Opacity in Machine Learning Algorithms, 3 Big Data & Soc'y 1, 5 (2016).
3. I. Glenn Cohen et al., The Legal and Ethical Concerns That Arise from Using Complex Predictive Analytics in Health Care, 33 Health Aff. 1139 (2014).
4. Andre Esteva et al., Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks, 542 Nature 115 (2017).
5. Ziad Obermeyer & Ezekiel J. Emanuel, Predicting the Future—Big Data, Machine Learning, and Clinical Medicine, 375 New Eng. J. Med. 1216 (2016).
6. Nehemiah T. Liu et al., Development and Validation of a Machine Learning Algorithm and Hybrid System to Predict the Need for Life-Saving Interventions in Trauma Patients, 52 Med. & Biological Engineering & Computing 193 (2014).
7. Megan Molteni, Thanks to AI, Computers Can Now See Your Health Problems, Wired (Jan. 9, 2017), https://www.wired.com/2017/01/computers-can-tell-glance-youve-got-genetic-disorders/.
8. Autism, Right Eye, https://www.righteye.com/tests-therapies/autism (last visited Oct. 17, 2017).
9. Patricia J. Zettler, Toward Coherent Federal Oversight of Medicine, 52 San Diego L. Rev. 427 (2015).
10. W. Nicholson Price II, Regulating Black-Box Medicine, 91 Mich. L. Rev. (forthcoming 2017), https://papers.ssrn.com/abstract_id=2938391.
11. See U.S. Food & Drug Admin., Medical Device Accessories—Describing Accessories and Classification Pathway for New Accessory Types: Guidance for Industry and Food and Drug Administration Staff (Jan. 30, 2017).
12. Press Release, FDA, FDA Selects Participants for New Digital Health Software Precertification Pilot Program (Sept. 26, 2017), https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm577480.htm.
13. Randolph A. Miller & Sarah M. Miller, Legal and Regulatory Issues Related to the Use of Clinical Software in Health Care Delivery, in Clinical Decision Support: The Road Ahead 423, 426 (Robert A. Greenes ed., 2007).
14. W. Nicholson Price II, Medical Malpractice and Black-Box Medicine, in Big Data, Health Law, and Bioethics (I. Glenn Cohen et al. eds., forthcoming 2018).
15. Megan Molteni, If You Look at X-Rays or Moles for a Living, AI Is Coming for Your Job, Wired (Jan. 25, 2017), https://www.wired.com/2017/01/look-x-rays-moles-living-ai-coming-job/.
16. W. Nicholson Price II, Big Data, Patents, and the Future of Medicine, 37 Cardozo L. Rev. 1401 (2016).
17. Mark A. Lemley, Ex Ante versus Ex Post Justifications for Intellectual Property, 71 U. Chi. L. Rev. 129 (2004).
18. Rebecca S. Eisenberg, Diagnostics Need Not Apply, 21 B.U. J. Sci. & Tech. L. 256 (2015).
19. 132 S. Ct. 1289 (2012).
20. Id. at 1298.
21. W. Nicholson Price II, Describing Black-Box Medicine, 21 B.U. J. Sci. & Tech. L. 347 (2015).
22. David S. Levine, The People's Trade Secrets?, 18 Mich. Telecomm. & Tech. L. Rev. 61 (2011).
23. Roger Allan Ford & W. Nicholson Price II, Privacy and Accountability in Black-Box Medicine, 23 Mich. Telecomm. & Tech. L. Rev. 1 (2016).
24. 45 C.F.R. pts. 160, 164.
25. Nicolas Terry, Big Data and Regulatory Arbitrage in Health Care, in Big Data, Health Law, and Bioethics, supra note 14.

AI AND MEDICINE: HOW FAST WILL ADAPTATION OCCUR?

BY MATTHEW HENSHON

Artificial intelligence (AI) burst onto the popular scene in 2011, when IBM's Watson defeated two human champions (including all-time leader Ken Jennings) in a nationally televised two-part exhibition of Jeopardy!, the TV game show.1 An earlier IBM machine (Deep Blue) had defeated then world champion Garry Kasparov in chess in 1997, but the game of chess perhaps seemed a simpler task for machines: a defined board, and 16 pieces on each side.2 In contrast, the range of Jeopardy! clues (remember, as Alex Trebek reminds viewers regularly, to "phrase your response in the form of a question") is seemingly limitless, and the clues are often in the form of puns or slang, so a chess move like "rook to D1" seems simple in comparison.

To its credit, IBM has built an entire marketing campaign around Watson's victory on Jeopardy! But the methodology that enabled Watson to excel in the quiz show was perhaps unique: the machine first tries to identify a keyword in the clue, then compares that word against its (then) database of 15 terabytes of information. In Ken Jennings's words:

It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels "sure" enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.3

Applying Watson to fields like medicine has been a bit rougher. IBM signed a high-profile partnership with the University of Texas MD Anderson Cancer Center in Houston in 2012, declaring in a press release a "moon shot" to cure cancer.4 But five years later, with progress significantly slower than initially anticipated, MD Anderson and IBM have parted ways. The university, which paid IBM a total of $39 million on a contract originally negotiated for less than 10 percent of that amount, had nothing to show for its money except a cancer-screening tool that was still in the "pilot" stage.5

The problem for Watson and medicine may be related to its success in Jeopardy! In the game show, the correct answers are "known," so Watson sifts through data and tries to find the right one. And if the machine does not pick a winner, it can adjust its algorithm (so-called "machine learning"). But in medicine, it is perhaps harder to find the single correct answer. Computers excel at working with "structured data," such as billing codes or lab test results; but sometimes human medical judgment and doctors' notes are just as important in making a diagnosis, and those are much harder for a computer to analyze.6

But while IBM's Watson healthcare efforts appear (for the moment) to be retrenching,7 other players are aggressively entering the medical AI market. In 2014, Google acquired London-based DeepMind for $400 million.8 Among other research projects, DeepMind has developed a program that plays Go (the Asian board game that is more complicated than chess) and has begun to regularly beat the best players in the world, even when five Go champions combined their efforts to try to defeat the program!

Like Watson, Google's DeepMind is attempting to apply its technology to health care: last November, it announced a partnership with a London hospital system.9 But DeepMind's Streams app appears to be built around much more rudimentary AI, and its primary benefit at this point appears to be streamlining the process of notification for blood tests indicating acute kidney injuries (AKIs).10 While AKI is one of the leading causes of death in the National Health Service (NHS), the "special sauce" at this point appears to be simply routing abnormal blood test results to the appropriate doctor's mobile device.

The lesson learned may be that applying AI to real-world problems requires small steps that can supplement and enhance—rather than replace—human decision making. Streams is not attempting to replace doctors and specialists at this point—merely to get them key information faster. Another factor is that the move to full electronic health records began only about 10 years ago and is still in process; AI will get better as it has more data to evaluate. One company that is currently analyzing healthcare data is Modernizing Medicine, which uses a tablet and data provided by 3,700 doctors on over 14 million patient visits to recommend treatments or drugs based on symptoms, much like Netflix suggesting a new movie.11

We also may have to revise our view of what AI will do: the apparent early promise of Watson was in finding a single "cure for cancer." But a more promising side of AI may be in simply helping patients manage their own conditions. For instance, type 2 diabetes can often be managed—and in some cases reversed—by controlling the patient's diet and lifestyle. The problem is that such control requires extensive oversight, from a seemingly full-time doctor in the home. But with smartphones and home monitoring devices like Fitbit, the patient can provide such information in real time, to be integrated into a larger database. The doctor can then quickly assess the changing condition of the patient. Preliminary testing of an app-based system by one company (Virta Health) has shown that 87 percent of the type 2 diabetic patients in the study reduced their insulin dose or eliminated it outright.12

Medical care is not the only arena that AI hopes to move into: games like chess and Go are supposed to be a "test bed" (to use the industry term) for legal work, crime prevention, and business negotiations, among others. But as one IBM AI researcher said, "There are precious few zero-sum, perfect-information, two-player games that we compete in in the real world."13

The pace of adaptation in medicine relates to the nature of the AI test bed itself: namely, the gaming world. There is little real-world consequence (other than to Garry Kasparov himself) in a chess game. There are no lives at risk, and a mistake might lead to the early loss of a rook. But in medicine, and other real-world settings, mistakes have consequences. A missed AKI marker by DeepMind means the life of a real patient is potentially at risk. Thus, there is a natural tendency to be conservative with AI algorithms: the cost of a false positive (the equivalent of a false alarm) is low; the cost of a false negative can be catastrophic.

AI will continue to progress with each advance in semiconductors; note that the computing power in your smartphone exceeds that of Deep Blue 20 years ago. But getting to the next stage, where we rely on AI to make judgments on life-and-death decisions, may take longer than we currently anticipate. The incremental steps shown by Streams, Virta Health, Modernizing Medicine, and others may be more promising—and more successful in the short to medium term—than a "moon shot."

Matthew Henshon ([email protected]) is a partner at the Boston boutique of Henshon Klein LLP. He is chair of the Artificial Intelligence and Robotics Committee. Follow him on Twitter at @mhenshon.

Endnotes
1. Ken Jennings, My Puny Human Brain, Slate (Feb. 16, 2011), http://www.slate.com/articles/arts/culturebox/2011/02/my_puny_human_brain.html.
2. Kasparov vs. Deep Blue, NPR (Aug. 8, 2014), http://www.npr.org/2014/08/08/338850323/kasparov-vs-deep-blue.
3. Jennings, supra note 1.
4. Press Release, IBM, MD Anderson Taps IBM Watson to Power "Moon Shots" Mission Aimed at Ending Cancer, Starting with Leukemia (Oct. 18, 2013), https://www-03.ibm.com/press/us/en/pressrelease/42214.wss.
5. David H. Freedman, A Reality Check for IBM's AI Ambitions, MIT Tech. Rev. (June 27, 2017), https://www.technologyreview.com/s/607965/a-reality-check-for-ibms-ai-ambitions/.
6. Daniela Hernandez, Artificial Intelligence Is Now Telling Doctors How to Treat You, Wired (June 2, 2014), https://www.wired.com/2014/06/ai-healthcare/.
7. Indeed, even IBM's more recent press releases seem more modest: "[IBM's Watson will be] collaborating with more than a dozen leading cancer institutes to accelerate the ability of clinicians to identify and personalize treatment options for their patients." Press Release, IBM, Clinicians Tap Watson to Accelerate DNA Analysis and Inform Personalized Treatment Options for Patients (May 5, 2015), https://www-03.ibm.com/press/us/en/pressrelease/46748.wss. The institutes include Ann & Robert H. Lurie Children's Hospital of Chicago; BC Cancer Agency; City of Hope; Cleveland Clinic; Duke Cancer Institute; Fred & Pamela Buffett Cancer Center in Omaha, Nebraska; McDonnell Genome Institute at Washington University in St. Louis; New York Genome Center; Sanford Health; University of Kansas Cancer Center; University of North Carolina Lineberger Comprehensive Cancer Center; University of Southern California Center for Applied Molecular Medicine; University of Washington Medical Center; and Yale Cancer Center.
8. Oliver Roeder, The Bots Beat Us. Now What?, FiveThirtyEight (July 10, 2017), https://fivethirtyeight.com/features/the-bots-beat-us-now-what/.
9. Mustafa Suleyman, A Milestone for DeepMind Health and Streams, DeepMind (Feb. 27, 2017), https://deepmind.com/blog/milestone-deepmind-health-and-streams/.
10. Streams in NHS Hospitals, DeepMind, https://deepmind.com/applied/deepmind-health/working-nhs/how-were-helping-today/ (last visited Oct. 17, 2017).
11. Hernandez, supra note 6.
12. Kevin Maney, How Artificial Intelligence Will Cure America's Sick Health Care System, Newsweek (May 24, 2017), http://www.newsweek.com/2017/06/02/ai-cure-america-sick-health-care-system-614583.html.
13. Roeder, supra note 8.

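The asymmetry described above (a false alarm merely costs some clinician attention, while a missed AKI can cost a life) is the standard decision-theory argument for conservative alert thresholds, and a few lines of code make it concrete. This is an illustrative sketch only: the cost values and the `should_alert` helper are invented for the example, not part of DeepMind's actual Streams logic.

```python
# Toy illustration of why clinical alerting systems are tuned conservatively.
# An alert should fire whenever the expected cost of staying silent exceeds
# the expected cost of alerting. The cost figures below are invented.

COST_FALSE_POSITIVE = 1.0     # clinician reviews a spurious alert (cheap)
COST_FALSE_NEGATIVE = 1000.0  # a missed acute kidney injury (catastrophic)

def should_alert(p_injury: float) -> bool:
    """Alert when the expected cost of silence exceeds the cost of alerting.

    Silence costs p * C_FN (we miss a real injury with probability p);
    alerting costs (1 - p) * C_FP (we bother a clinician needlessly).
    """
    return p_injury * COST_FALSE_NEGATIVE > (1 - p_injury) * COST_FALSE_POSITIVE

assert should_alert(0.002)       # a 0.2% risk already triggers an alert
assert not should_alert(0.0005)  # well below break-even: stay silent
```

Because the break-even probability works out to C_FP / (C_FP + C_FN), about 0.001 here, driving the false-negative cost up pushes the alert threshold toward zero, which is exactly the conservative tuning the article describes.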
b-techcorner
By Aubrey Haddach and Jeffrey Licitra

PRENATAL GENETIC TESTING: WHERE ALGORITHMS MAY FAIL

On the forefront of the much anticipated arrival of genomic medicine, the field of noninvasive prenatal genetic testing (NIPT) has lived up to its expectations as a game changer. The NIPT market was estimated to be worth $500 million in 2013, with the potential to grow to $2.38 billion by 2022.1 NIPT not only has revolutionized access to prenatal testing for genetic disorders through a mere blood test, but it also is unique enough to defy classification by both intellectual property law and the existing Food and Drug Administration (FDA) regulatory framework.

Much attention has been given to the Federal Circuit's 2015 decision in Ariosa Diagnostics, Inc. v. Sequenom, Inc.,2 invalidating the patent that protected Sequenom's commercially offered MaterniT21® test as non-patent-eligible subject matter under the Supreme Court's Mayo v. Prometheus decision.3 Less discussed is the technology itself, a screening test for a genetic condition, which relies significantly on "next-gen" DNA sequencing and proprietary algorithms to translate massive amounts of data into clinical results. The NIPT technology is described below before turning to a discussion of the law.

Algorithms and NIPT
Advancements in computational genomics, which focuses on developing models to interpret the data generated by DNA sequencing, have allowed genetic testing to scale upward. The next-gen sequencing technology that powers these tests is capable of generating billions of DNA base pair reads from a single sample. This vast quantity of data must be reduced to an interpretable form.

Using this sequencing technology and the accompanying algorithms, genetic testing can be done by isolating fragments of fetal and maternal DNA present in a pregnant woman's blood and then amplifying those fragments. Through "amplification," scientists produce millions of copies of the relatively small amount of DNA found in the maternal blood serum to create a sample size large enough for the sequencing algorithms. Of the total DNA found in the maternal blood serum, approximately 3.4–6.2 percent is fetal, with the rest belonging to the mother.4 This DNA is often referred to as "cell free" because it is not found in the cell nuclei and originates from cells that die naturally, leaving DNA fragments in the expectant mother's bloodstream.

The most common prenatal genetic tests are for trisomy disorders, such as Down syndrome (trisomy 21), Edwards syndrome (trisomy 18), and Patau syndrome (trisomy 13). The latter two conditions can lead to early miscarriage and significant rates of clinical morbidity or mortality following birth.5

Typically, humans have 23 pairs of chromosomes, with each parent contributing a single chromosome to the pair. Trisomy occurs when an extra chromosome or fragment thereof is associated with the typical pair, resulting in three chromosomes where there should be only two. A trisomy on the 21st chromosome is referred to as "trisomy 21." Individual chromosomes themselves are made up of DNA "base pairs," molecules that occur in tandem on opposing strands of DNA and are abbreviated by the letters A (adenine), T (thymine), C (cytosine), and G (guanine), so as to form what is referred to as the "genetic code." DNA is "sequenced" to determine the number of base pairs present and whether those base pairs correlate to a specific gene, and thus a genetic condition.

Five companies (Sequenom, Illumina, LabCorp, Ariosa, and Natera) currently offer prenatal tests for trisomy conditions, each relying on different variations of the same sequencing methods. Though these methods all involve the same basic steps of isolating, amplifying, and sequencing DNA, each company applies those steps in different and proprietary ways. This matters because each test can offer an incorrect screening result due to errors inherent to individual testing methods and their underlying statistical assumptions.

Aubrey Haddach ([email protected]) is co-chair of the Section's Biotechnology Law Committee and a member of the Intellectual Property Department at Dinsmore & Shohl LLP. Jeff Licitra ([email protected]) is a recent graduate of the American University Washington College of Law and a member of the District of Columbia Bar.

Sequenom, Illumina, and LabCorp use a method called "massively parallel shotgun sequencing," while Ariosa and Natera use "targeted sequencing." Shotgun sequencing amplifies the entirety of the genetic material collected, regardless of chromosome number, and then sorts the sequenced genetic content according to previously known and expected results for the human genome. Targeted sequencing, as the name implies, amplifies only a region or set of genes on the desired chromosome.

Each company then uses its own proprietary algorithms of "quantitative read counting" to count the total number of base pairs sequenced for the chromosome region being examined. One sample alone undergoing shotgun sequencing may generate, on average, 10,800,000 base pair reads, as opposed to 32,000 for targeted sequencing.6 If the algorithm finds too many base pairs associated with a particular chromosome, the inference is that the extra DNA base pairs indicate a trisomy condition on that chromosome.

What distinguishes prenatal genetic screening from other types of genetic testing is its reliance on the tiny amount of fetal DNA fragments present in the maternal serum. That is to say, NIPT does not actually sequence the entire fetal genome because it only has fragments to work with. It is not so much reading the code as it is counting it. In this way, any amount of extra DNA present in the base pair count, even if that extra DNA is not associated with an extra chromosome, can lead to a false-positive result.

One such false-positive case involved a pregnant woman who had a copy number variation. A copy number variation occurs when there is more DNA in certain chromosome regions due to variation in the length of certain DNA strands. The nonstandard length of maternal DNA with a copy number variation, particularly when inherited in fetal DNA, and thus doubly represented in the sample, could yield extra DNA in the test result, and thus a false-positive reading.7 Interestingly, both Ariosa and Sequenom replied to this case separately, not so much to refute its findings but to explain that they each compensate for copy variance differently in their respective algorithms.8

The Validity of Screening Results
While copy variance is certainly not the only factor that can affect false-positive results for NIPTs, it is instructive because it shows how indispensable underlying statistical assumptions are to the test results. Maternal or confined placental mosaicism,9 vanishing twin pregnancies, and maternal malignancy may all yield false-positive results for trisomy because each of these conditions allows for extra DNA in the maternal blood serum.10

Various journalists have reported on the anguish experienced by parents who relied on false-positive results and unfortunately terminated the pregnancies as a result, or who otherwise awaited (or planned) for the arrival of a child with a trisomy condition.11 Yet peer-reviewed journals (and the companies themselves) continue to conclude that NIPTs are fundamentally sound in their ability to make true-positive predictions at a greater than 99 percent rate for trisomy 21.12

A review of results for trisomy 18 and 13 studies showed a lower true-positive rate (the rate at which tests are both positive and correct) of 97–99 percent and 92–95 percent, respectively.13 A meta-analysis of all studies estimated the true-positive rates for trisomy 18 and trisomy 13 to be approximately 95 percent when performed in combination with the trisomy 21 test, but possibly lower when performed alone.14 Moreover, the vast majority of studies have been done on women at a high risk for trisomy, increasing the likelihood of a correct positive test result.15

As the FDA considers NIPTs to be laboratory developed tests (LDTs), it does not regulate prenatal genetic testing beyond a policy of "enforcement discretion"—meaning the FDA chooses whether or not to oversee pre-market testing or impose post-market safety requirements.16 This is because the tests are developed and used entirely in one location, regardless of where the samples originate. LDTs were traditionally used by hospitals billing directly to Medicare. Their "home brew" labs have long been regulated for scientific accuracy and precision by the Clinical Laboratory Improvement Amendments (CLIA) statute.17

Notably, none of this oversight reaches the level of clinical validity, and, perhaps more importantly, adverse results are not required to be reported to the FDA. By contrast, clinical results for drugs and diagnostic tests otherwise falling under the FDA's purview are subject to extensive clinical data review for marketing approval and then continuous adverse event reporting after going to market. This allows an analysis of all adverse events occurring in a far more robust population than could ever be constructed in a clinical study.

This lack of regulatory oversight for NIPTs leaves patients and medical professionals in a position to interpret the results of tests that often influence a woman's decision to terminate pregnancy. Although NIPTs are screening tests, intended as an intermediate step before a more invasive diagnostic procedure, patients may not appreciate the nuance of a 10 mL blood sample yielding a statistical guess, even if an incredibly accurate one, at the presence of a genetic condition. Similarly, the nature of NIPTs as screening tests has left the courts to grapple with the importance of NIPT as a breakthrough technology and its place within the larger context of patent jurisprudence.

Ariosa, Mayo, and Section 101 Patent Eligibility
There are two central tenets of patent law that determine whether an invention such as NIPT is patent-eligible subject matter under 35 U.S.C. section 101. On one hand is the now infamous expression that Congress intended statutory subject matter to

"include anything under the sun that is made by man."18 On the other is the "natural law exception," which holds that naturally occurring phenomena are not patentable.19 The problem arises when a patent on a man-made invention—even a ground-breaking one such as NIPT—relies so heavily on a scientific discovery that a court finds the patent invalid for claiming only a natural law.

Sequenom obtained the first patent on NIPT in 2005, when it purchased from Isis Innovation Ltd., the commercial research arm of Oxford University, an exclusive license to a patent for "amplifying" and "detecting" cell-free fetal DNA (cffDNA).20 In 2011, Sequenom became the first company to offer prenatal genetic testing. Only months later, Sequenom entered into litigation in the Northern District of California with Ariosa, Natera, and Verinata over infringement of its patent. If the patent was valid, Sequenom would have exclusive control of the nascent NIPT market. If invalid, its competitor companies would be able to enter the market without risking infringement.

The task before the district court was to apply the Supreme Court's two-part Mayo test to determine whether the patent was invalid for lack of eligible subject matter: first, decide whether the method claimed is directed to a natural law or abstract idea; if yes, proceed to step two and determine if there is an "inventive concept" sufficient to overcome the patent's reliance on an abstract idea.21 In October 2013, the district court held the patent invalid as ineligible subject matter, concluding "the only inventive concept contained in the patent to be the discovery of cffDNA, which is not patentable."22

Sequenom appealed, only to have the Federal Circuit affirm the patent's invalidation and deny rehearing en banc.23 In June 2016, the Supreme Court declined Sequenom's petition for certiorari, putting an end to its five-year quest to maintain the patent in the face of noninfringement suits brought by its competitors. Along the way, members of the legal community and biotech industry filed numerous briefs imploring the Federal Circuit to avoid a ruling that would nullify patent protection for future genetic screening technology.

The Mayo decision itself was comparatively "low-tech" and involved a method for administering an intravenous drug to a patient, where the dosage of the drug was increased or decreased to obtain an efficacy level based on measuring metabolites in the bloodstream.24 Calculations and dosage adjustments are decisively abstract compared to the multitude of tangible steps involved in noninvasive testing, from the separation of cffDNA in the blood sample to the next-gen sequencing and algorithmic determination of a clinical result. Do these steps constitute a sufficient "inventive concept" to transform the abstract science into patentable subject matter? The Mayo framework may ultimately be inadequate if it confuses patent-ineligible abstract methods with the several, complex laboratory steps used in molecular diagnostics.

The Federal Circuit applied Mayo to conclude that (1) Sequenom was merely utilizing a natural law, i.e., the presence of fragmented fetal DNA in the maternal bloodstream; and (2) the patent lacked sufficient inventive concept because its claim involved the standard DNA sequencing steps of amplification and detection routinely practiced by scientists.25 However, the claim may have been written so broadly as to render these methods abstract regardless of the patentability of NIPT itself. Indeed, Judge Lourie acknowledged as much in his concurrence denying rehearing en banc, positing that the patent may have been invalid for lack of specificity regardless of Mayo, and that "the finer filter of § 112 might be better suited to treating these as questions of patentability."26

The judges, even in agreeing on the result, expressed misgivings about the breadth of the "natural laws" restriction imposed by Mayo. Judge Lourie acknowledged that the holding of Mayo had been correctly applied as binding Supreme Court precedent, but he found the rule "unsound" insofar as it "takes inventions of this nature out of the realm of patent-eligibility."27 Similarly, Judge Dyk, in his own concurrence, observed that "the major defect is not that the claims lack inventive concept but rather that they are overbroad."28 Judge Linn, concurring in the panel decision, explained he did so "bound by the sweeping language of the test" set out by the Supreme Court, noting that "the amplification and detection of cffDNA had never before been done."29 Judge Newman, in her dissent from the denial of rehearing, viewed Sequenom's patent to be patentable subject matter. In her opinion, the patent at issue involved the "discovery and development of a new diagnostic method" for cffDNA, itself a discovery, in contrast to a situation where "both the medicinal product and its metabolites were previously known, leaving sparse room for innovative advance."30 In other words, the subject matter of the process patent was neither an abstract process nor a natural law, and failing that, there was no need to look for an "inventive concept."

Much of the underlying next-gen sequencing technology is already patented or proprietary subject matter. If anything, it is the use of NIPT algorithms themselves to connect the next-gen sequencing technology to a meaningful screening result that may be the "inventive concept."

Even if the Ariosa decision stands, the patent at issue may not have been representative of the potential patentable subject matter in NIPT. Presently, litigation among these companies continues in federal

court and at the Patent Trial and Appeal Board to decide whether the patent is invalid on other bases.31

Moving Forward
Noninvasive prenatal genetic testing involves technology that at present resists classification under FDA regulations and confounds the intuitive notion that ground-breaking inventions deserve patent protection. This is in part because the technology itself is so much more complex than what the legal profession is accustomed to. NIPT is not just a traditional laboratory diagnostic test, but a screening exam for a genetic condition with clinical implications. Similarly, it is not just the application of a natural law, in the discovery of cffDNA, but a novel way of applying next-gen sequencing technology to that discovery.

The public has an interest both in seeing the biotech industry continue to innovate and in receiving the benefits of more robust testing and oversight. The two interests are not contradictory, and a better understanding of the technology that makes these tests possible should in turn lead to better laws and patient outcomes.

Endnotes
1. Press Release, Transparency Mkt. Research, Non-Invasive Prenatal Testing (NIPT) Market (BambniTest, Harmony, informaSeq, MaterniT21 PLUS, NIFTY, Panorama, PrenaTest, verifi, VisibiliT and Others)—Global Industry Analysis, Size, Volume, Share, Growth, Trends and Forecast 2014–2022 (May 21, 2015), http://www.transparencymarketresearch.com/noninvasive-prenatal-diagnostics-market.html.
2. 788 F.3d 1371 (Fed. Cir.), reh'g en banc denied, 809 F.3d 1282 (Fed. Cir. 2015), cert. denied, 136 S. Ct. 2511 (2016).
3. Mayo Collaborative Servs. v. Prometheus Labs., Inc., 132 S. Ct. 1289 (2012).
4. Y.M. Dennis Lo, Quantitative Analysis of Fetal DNA in Maternal Plasma and Serum: Implications for Noninvasive Prenatal Diagnosis, 62 Am. J. Hum. Genetics 768 (1998); see also Yuk Ming Dennis Lo, Non-Invasive Prenatal Diagnosis by Massively Parallel Sequencing of Maternal Plasma DNA, 2 Open Biology, June 2012 (reporting that fetal DNA concentrations have been found as high as 10 percent using next-gen, digital polymerase chain reaction (PCR)).
5. Errol R. Norwitz et al., Noninvasive Prenatal Testing: The Future Is Now, 6 Revs. in Obstetrics & Gynecology 48, 49 (2013).
6. Andrew B. Sparks et al., Selective Analysis of Cell-Free DNA in Maternal Blood for Evaluation of Fetal Trisomy, 32 Prenatal Diagnosis 3, 4 (2012).
7. Matthew W. Snyder et al., Copy-Number Variation and False Positive Prenatal Aneuploidy Screening Results, 372 New Eng. J. Med. 1639 (2015).
8. Correspondence: Copy-Number Variation and False Positive Prenatal Aneuploidy Screening Results, 373 New Eng. J. Med. 2583 (2015).
9. Mosaicism occurs when cells carry more than one genotype, allowing for more DNA than normal.
10. Zandra C. Deans et al., Recommended Practice for Laboratory Reporting of Non-Invasive Prenatal Testing of Trisomies 13, 18, and 21: A Consensus Opinion, 37 Prenatal Diagnosis 699 (2017).
11. See, e.g., Beth Daley, Oversold and Misunderstood: Prenatal Screening Tests Prompt Abortions, New Eng. Ctr. for Investigative Reporting (Dec. 13, 2014), https://eye.necir.org/2014/12/13/prenatal-testing/; Prenatal Tests Have High Failure Rate, Triggering Abortions, NBC News (Dec. 14, 2014), http://www.nbcnews.com/health/womens-health/prenatal-tests-have-high-failure-rate-triggering-abortions-n267301.
12. Megan Allyse et al., Non-Invasive Prenatal Testing: A Review of International Implementation and Challenges, 7 Int'l J. Women's Health 113, 114 (2015); Norwitz et al., supra note 5, at 59.
13. Allyse et al., supra note 12, at 114.
14. M.M. Gil et al., Analysis of Cell-Free DNA in Maternal Blood in Screening for Fetal Aneuploidies: Updated Meta-Analysis, 45 Ultrasound Obstetrics & Gynecology 249, 261–62 (2015).
15. Norwitz et al., supra note 5, at 50.
16. U.S. Food & Drug Admin., Draft Guidance for Industry, Food and Drug Administration Staff, and Clinical Laboratories: Framework for Regulatory Oversight of Laboratory Developed Tests (LDTs) 21 (Oct. 3, 2014).
17. 42 U.S.C. § 263a.
18. Diamond v. Chakrabarty, 447 U.S. 303, 309 (1980).
19. Funk Bros. Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 130 (1948).
20. U.S. Patent No. 6,258,540.
21. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2353 (2014) ("[A] court must first 'identif[y] the abstract idea represented in the claim,' and then determine 'whether the balance of the claim adds significantly more.'" (alteration in original)); Mayo Collaborative Servs. v. Prometheus Labs., Inc., 132 S. Ct. 1289, 1299 (2012) ("These other steps apparently added to the formula something that in terms of patent law's objectives had significance—they transformed the process into an inventive application of the formula.").
22. Ariosa Diagnostics, Inc. v. Sequenom, Inc., 19 F. Supp. 3d 938, 950 (N.D. Cal. 2013).
23. Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371 (Fed. Cir.), reh'g en banc denied, 809 F.3d 1282 (Fed. Cir. 2015), cert. denied, 136 S. Ct. 2511 (2016).
24. Mayo, 132 S. Ct. at 1294.
25. Ariosa, 788 F.3d at 1378 ("Thus, in this case, appending routine, conventional steps to a natural phenomenon, specified at a high level of generality, is not enough to supply an inventive concept.").
26. Ariosa, 809 F.3d at 1286 (Lourie, J., concurring).
27. Id. at 1287.
28. Id. at 1293 (Dyk, J., concurring).
29. Ariosa, 788 F.3d at 1380–81 (Linn, J., concurring).
30. Ariosa, 809 F.3d at 1293–94 (Newman, J., dissenting).
31. See, e.g., Verinata Health, Inc. v. Ariosa Diagnostics, Inc., No. 12-cv-05501-SI (N.D. Cal. Jan. 19, 2017).
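The "quantitative read counting" described in the article (count the sequencing reads attributable to a chromosome, and infer a trisomy when the count is improbably high) can be sketched in miniature. This is a toy illustration of the statistical idea, not any company's proprietary algorithm; the reference fraction, standard deviation, and z-score cutoff are invented values for the example.

```python
# Toy sketch of quantitative read counting for trisomy screening.
# Real NIPT pipelines are proprietary and far more elaborate; the
# reference values below are invented for illustration only.

def trisomy_zscore(reads_on_chr21: int, total_reads: int,
                   ref_fraction: float = 0.013, ref_sd: float = 0.0004) -> float:
    """Compare the observed chromosome-21 read fraction to a reference
    distribution built from known-unaffected samples (hypothetical values)."""
    observed = reads_on_chr21 / total_reads
    return (observed - ref_fraction) / ref_sd

def screen(reads_on_chr21: int, total_reads: int, cutoff: float = 3.0) -> str:
    """Flag samples whose chr21 read share is improbably high.

    Note the limitation discussed in the article: extra maternal DNA from a
    copy-number variation, mosaicism, or a vanishing twin also inflates the
    count and can trip the same cutoff, producing a false positive.
    """
    z = trisomy_zscore(reads_on_chr21, total_reads)
    return "high risk" if z > cutoff else "low risk"

print(screen(130_000, 10_000_000))  # 1.30% of reads on chr21: low risk
print(screen(145_000, 10_000_000))  # 1.45%: z well above cutoff, high risk
```

The key point the sketch preserves is that the call rests entirely on a count plus statistical assumptions about the reference distribution; nothing in it "reads" the fetal genome.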

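The caveat above, that most validation studies enrolled women already at high risk for trisomy, is a base-rate effect worth quantifying: the share of positive results that are true depends on how common the condition is in the screened population, not just on test accuracy. The sensitivity, specificity, and prevalence figures below are illustrative round numbers, not the reported performance of any particular test.

```python
# Why a ">99 percent" screening test can still mislead: positive predictive
# value depends on prevalence. All figures below are illustrative.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """P(condition | positive result), by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same hypothetical test, two populations:
high_risk = positive_predictive_value(0.99, 0.999, prevalence=1 / 100)
general   = positive_predictive_value(0.99, 0.999, prevalence=1 / 1500)

print(f"high-risk cohort:   {high_risk:.1%} of positives are true")  # ~91%
print(f"general population: {general:.1%} of positives are true")    # ~40%
```

With identical accuracy, roughly nine in ten positives are true in the high-prevalence cohort but well under half in the low-prevalence one, which is precisely the "statistical guess" nuance the article warns patients may miss.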
ARTIFICIAL INTELLIGENCE AND THE FUTURE OF LEGAL PRACTICE

BY GARY E. MARCHANT

The alarming headlines and predictions of artificial intelligence (AI) replacing lawyers have no doubt created discomfort for many attorneys already anxious about the future of their profession: "Rise of the Robolawyers." "Here Come the Robot Lawyers." "Why Hire a Lawyer? Machines Are Cheaper." "Armies of Expensive Lawyers, Replaced by Cheaper Software." "Law Firm Bosses Envision Watson-Type Computers Replacing Young Lawyers." "Why Lawyers and Other Industries Will Become Obsolete. You Should Stop Practicing Law Now and Find Another Profession." And so on.

Despite these dire headlines, AI will fortunately not replace most lawyers' jobs, at least in the short term. One in-depth study of the legal field estimated that AI would reduce lawyers' billing hours by only 13 percent over the next five years.1 Other estimates are a little less sanguine, but still not projecting a catastrophic impact on attorney employment. A database on the effect of automation on over 800 professions created by McKinsey & Company found that 23 percent of the average attorney's job could be replaced by robots.2 A study by Deloitte estimated that 100,000 legal jobs will be eliminated by automation in the United Kingdom by 2025.3 And last year JPMorgan used an AI computer program to replace 360,000 billable hours of attorney work, with one report of this development observing that "[t]he software reviews documents in seconds, is less error-prone and never asks for vacation."4

As with many new technologies, there is a cycle of hype at the outset that creates inflated expectations, even though the long-term implications of that technology may be profound and enormous. As Bill Gates perceptively noted in his book The Road Ahead, "[w]e always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten."5 Right now, AI in the practice of law is more of an opportunity than a threat, with early adopters providing more efficient and cost-effective legal services to an expanding portfolio of existing and potential clients.

The use of AI in law will thus be an evolution, not a revolution.6 But make no mistake, AI is already transforming virtually every business and activity that attorneys deal with, some more quickly and dramatically than others, and the legal profession will not be spared from this disruptive change. Incorporation of AI into a law firm's systems and operations is a gradual learning process, so early adopters will have a major advantage over firms that lag in adopting the technology. The lawyers, law firms, and businesses that do not get on the AI bandwagon will increasingly be left behind, and eventually displaced. As a recent ABA Journal cover story explained, "Artificial intelligence is changing the way lawyers think, the way they do business and the way they interact with clients. Artificial intelligence is more than legal technology. It is the next great hope that will revolutionize the legal profession."7

What Is Artificial Intelligence
At its simplest, AI is the development and use of computer programs that perform tasks that normally require human intelligence. At this time and for the foreseeable future, current AI capabilities only permit computers to approach, achieve, or exceed certain, but not all, human cognitive functions. While some researchers are working on developing computers that can match or eclipse the human mind, sometimes referred to as "general intelligence" or "superintelligence,"8 such an achievement is likely decades away. That is why important legal skills based on human judgment, inference, common sense, interpersonal skills, and experience will remain valuable for the lifetime of any lawyer practicing today.

While AI has many attributes for its many different applications, two are currently most important for legal applications. First, machine learning is the capability of computers to teach themselves and learn from experience. This means that the AI can do more than blindly adhere to what it has been programmed to do; it can learn from experience and data to constantly improve its capabilities. This is how Google's DeepMind system was able to defeat the world's best human Go players. Second, natural language processing is the capability of computers to understand the meaning of spoken or written human speech and to apply and integrate that understanding to perform human-like analysis.

AI is rapidly being applied to all major sectors of the economy and society, including medicine, finance, national defense, transportation, manufacturing, the media, arts and entertainment, and social relationships, to name just some. Many of these applications will create new legal issues for lawyers, such as the liability issues of autonomous cars, the legality of lethal autonomous weapons, financial bots that may run afoul of antitrust laws, and the safety of medical robots. But in addition to changing the subject matter that lawyers work on, AI will also transform the way lawyers practice their craft.

Gary E. Marchant, PhD (gary.[email protected]) is a Regents' Professor, Lincoln Professor of Emerging Technologies, Law & Ethics, and Faculty Director of the Center for Law, Science & Innovation at the Sandra Day O'Connor College of Law at Arizona State University. His teaching and research interests focus on the governance of emerging technologies. Prior to joining the faculty of ASU in 1999, Professor Marchant was a partner at Kirkland & Ellis in Washington, D.C.

AI Applications for Legal Practice
AI is rapidly infiltrating the practice of law. A recent survey of managing partners of U.S. law firms with 50 or more lawyers found that over 36 percent of law firms, and over 90 percent of large law firms (>1,000 attorneys), are either currently using or actively exploring use of AI systems in their legal practices.9 The following summary describes some of the major categories and examples of such applications.
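Machine learning as described above, a system that improves from labeled examples rather than fixed rules, can be illustrated with a tiny, dependency-free document scorer of the kind used in e-discovery review. Everything here (the toy documents, the labels, and the scoring rule) is an invented sketch for illustration, not any vendor's actual product.

```python
# Minimal sketch of learning from a human-labeled "seed set": a reviewer marks
# a few documents relevant (1) or not (0), the system learns which words signal
# relevance, and then scores the unreviewed corpus. Toy data and a toy scoring
# rule throughout; real systems use far richer models.
from collections import Counter

seed_set = [
    ("merger agreement term sheet and purchase price schedule", 1),
    ("due diligence checklist for the merger closing conditions", 1),
    ("office holiday party catering menu and rsvp list", 0),
    ("parking garage badge renewal reminder", 0),
]

# "Training": count how often each word appears in relevant vs. irrelevant docs.
relevant, irrelevant = Counter(), Counter()
for text, label in seed_set:
    (relevant if label else irrelevant).update(text.split())

def score(text):
    """Score = relevant-word hits minus irrelevant-word hits, per word."""
    words = text.split()
    return sum(relevant[w] - irrelevant[w] for w in words) / len(words)

# Rank the unreviewed corpus; in practice this would be millions of documents.
corpus = [
    "draft merger agreement with revised purchase price",
    "catering invoice for the holiday party",
]
for doc in sorted(corpus, key=score, reverse=True):
    print(f"{score(doc):+.2f}  {doc}")
```

In an iterative workflow, a reviewer would correct the top-ranked documents and feed those corrections back into the counts, so the system's notion of "relevant" improves with each round rather than staying fixed.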

Technology-Assisted Review
Technology-assisted review (TAR) was the first major application of AI in legal practice, using technology solutions to organize, analyze, and search very large and diverse data sets for e-discovery or record-intensive investigations. Going far beyond keyword and Boolean searches, studies show that TAR provides a fifty-fold increase in efficiency in document review compared with human review.10 For example, predictive coding is a TAR technique that can be used to train a computer to recognize relevant documents by starting with a "seed set" of documents and providing human feedback; the trained machine can then review large numbers of documents very quickly and accurately, going beyond individual words and focusing on the overall language and context of each document. Numerous vendors now offer TAR products.

Legal Analytics
Legal analytics use big data, algorithms, and AI to make predictions from or detect trends in large data sets. For example, Lex Machina, now owned by LexisNexis, uses legal analytics to predict trends and outcomes in intellectual property litigation, and is now expanding to other types of complex litigation. Wolters Kluwer leverages a massive database of law firm billing records to provide baselines, comparative analysis, and efficiency improvements for in-house counsel and outside law firms on staffing, billing, and timelines for various legal matters. Ravel Law, also recently purchased by LexisNexis, uses legal analytics of judicial opinions to predict how specific judges may decide cases, including providing recommendations on specific precedents and language that may appeal to a given judge. Law professor Daniel Katz and his colleagues have utilized legal analytics and machine learning to create a highly accurate predictive model for the outcome of Supreme Court decisions.11

Practice Management Assistants
Many technology companies and law firms are partnering to create programs that can assist with specific practice areas, including transactional and due diligence, bankruptcy, litigation research and preparation, real estate, and many others. Sometimes billed as the first robot lawyer, ROSS is an online research tool using natural language processing powered by IBM Watson that provides legal research and analysis for several different law firms today, and can reportedly read and process over a million legal pages per minute. It was first publicly adopted by the law firm BakerHostetler to assist with its bankruptcy practice, but is now being used by that firm and several others for other practice areas as well. A similar system is RAVN, developed in the United Kingdom and first publicly adopted by the law firm Berwin Leighton Paisner in London in 2015 to assist with due diligence in real estate deals by verifying property details against the official public records. According to the law firm attorney in charge of implementation: "once the program has been trained to identify and work with specific variables, it can complete two weeks' work in around two seconds, making [it] over 12 million times quicker than an associate doing the same task manually."12 Kira is another AI system that has already been adopted by several law firms to assist with automated contract analysis, data extraction, and due diligence in mergers and acquisitions.

Legal Bots
Bots are interactive online programs designed to interact with an audience to assist with a specific function or to provide customized answers to the recipient's specific situation. Many law firms are developing bots to assist current or prospective clients in dealing with a legal issue based on their own circumstances and facts. Other groups are developing pro bono legal bots to assist people who may not otherwise have access to the legal system. For example, a Stanford law graduate developed an online chat bot called DoNotPay that has helped over 160,000 people resolve parking tickets, and it is now being expanded to help refugees with their legal problems.

Legal Decision Making
AI is enabling judicial decision making in a number of ways. For example, the Wisconsin Supreme Court recently upheld the use of algorithms in criminal sentencing decisions.13 While such algorithms represent an early use of primitive AI (some may not consider such algorithms AI at all), they open the door to the use of more sophisticated AI systems in the sentencing process in the future. A number of online dispute resolution tools have been or are being developed to completely circumvent the judicial process. For example, the Modria online dispute resolution tool, developed from the eBay dispute resolution system, has been used to settle many thousands of disputes online using an AI system. The U.K. government is developing an Internet-based dispute resolution system that will be used to resolve minor (<£25,000) civil legal claims without any court involvement. Microsoft and the U.S. Legal Services Corporation have teamed up to provide machine learning legal portals that offer free legal advice on civil law matters to people who cannot afford to hire lawyers.

The Future of AI and the Law
These initial applications of AI to legal practice are just the early beginnings of what will be a radical technology-based disruption to the practice of law. AI "represents both the biggest opportunity and potentially the greatest threat to the legal profession since its formation."14 The transformative impacts of AI on legal practice will continue to accelerate going forward. AI will take over a steadily increasing share of law firm billable hours, be applied to an ever-expanding set of legal tasks, and require knowledge and abilities outside the existing skill set of most current practicing attorneys. Today AI represents an opportunity for a law firm or an attorney to be a leader in efficiency, cost-effectiveness, and productivity, but soon incorporation of AI into practice will be a matter of keeping up rather than being a leader.

AI in the practice of law raises many broader issues that can only be briefly listed here. How will AI change law

firm billing, where a smart AI system can conduct searches and analyses in a few seconds that formerly would have taken several weeks of an associate's billable time? If AI eliminates many of the more routine tasks in legal practice that are traditionally performed by young associates, how will this affect the hiring and advancement of young attorneys? How will legal training and law schools need to change to address the new realities of AI-driven legal practice? How will AI affect the competitive advantage of large law firms versus small and medium-sized firms? Will companies start obtaining legal services directly from legal technology vendors, skipping law firms altogether? Will AI systems be vulnerable to charges of unauthorized practice of law? Given that AI systems increasingly use their own self-learning rather than preprogrammed instructions to make decisions, how can we ensure the accuracy, legality, and fairness of AI decisions? Will lawyers be responsible for negligence for relying on AI systems that make mistakes? Will lawyers be liable for malpractice for not using AI that exceeds human capabilities in certain tasks? Will self-learning AI systems need to be deposed and take the stand as witnesses to explain their own independent decision making?

One thing is certain—there will be winners and losers among the lawyers who do and do not adopt AI. As one senior lawyer recently remarked, "Unless private practice lawyers start to engage with new technology, they are not going to be relevant even to their clients."15 The AI train is leaving the station—it is time to jump on board.

Endnotes
1. Dana Remus & Frank S. Levy, Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law 46 (Nov. 27, 2016) (unpublished manuscript), https://ssrn.com/abstract=2701092.
2. David Johnson, Find Out If a Robot Will Take Your Job, Time (Apr. 19, 2017), http://time.com/4742543/robots-jobs-machines-work/.
3. Deloitte Insight: Over 100,000 Legal Roles to Be Automated, Legal IT Insider (Mar. 16, 2016), https://www.legaltechnology.com/latest-news/deloitte-insight-100000-legal-roles-to-be-automated/.
4. Hugh Son, JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours, Bloomberg (Feb. 27, 2017), https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance.
5. Bill Gates, The Road Ahead (1995).
6. Joanna Goodman, Robots in Law: How Artificial Intelligence Is Transforming Legal Services 3 (2016).
7. Julie Sobowale, Beyond Imagination: How Artificial Intelligence Is Transforming the Legal Profession, A.B.A. J., Apr. 2016, at 46, 48.
8. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014).
9. Thomas S. Clay & Eric A. Seeger, Altman Weil Inc., 2017: Law Firms in Transition 84 (2017), http://www.altmanweil.com/LFiT2017/.
10. Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 11, 43 (2011).
11. Daniel Martin Katz et al., A General Approach for Predicting the Behavior of the Supreme Court of the United States, 12 PLoS ONE, 2017, https://doi.org/10.1371/journal.pone.0174698.
12. Goodman, supra note 6, at 31.
13. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
14. Goodman, supra note 6, at 129 (quoting Rohit Talwar).
15. LexisNexis, Lawyers and Robots? Conversations around the Future of the Legal Industry 3 (2017) (comment of David Halliwell of U.K. law firm ).


WHEN LAW AND ETHICS COLLIDE WITH AUTONOMOUS VEHICLES
BY STEPHEN S. WU

From time to time, busy practicing lawyers face ethical issues of the kind taught in professional responsibility law school classes and continuing legal education courses. However, they do not often discuss the kinds of general ethical issues that academics and professional moral philosophers take up. Recent developments in artificial intelligence and robotics, and autonomous driving in particular, have rekindled interest in ethics throughout the world, and especially in the United States.

Autonomous vehicles (AVs) have captured the imagination of writers in popular media. Living close to the garage where Waymo (the new Google affiliate) houses its AVs in Mountain View, California, I feel like I am living in the AV capital of the world, as I frequently see AVs navigating the streets around my home in Los Altos. Nearby, Tesla has deployed a driver assistance system in its cars and intends to deploy fully automated vehicles in two years. Companies are also working on freight truck automation, and their work eventually will result in fully automated trucks.

AV manufacturers will rely on sophisticated algorithms to control AVs. Software implementing such algorithms depends on inputs from sensors, such as light detection and ranging (LiDAR), radar, cameras, and GPS. The software analyzes the AV's location, position relative to the road, and upcoming obstacles. These algorithms then determine the best path to follow and cause the AV throttle, brake, and steering to follow the planned path. A group of moral philosophers has raised ethical questions about these algorithms. In particular, this group asks how AVs should behave when accidents are about to occur. What is the moral way to design AV algorithms? Should they try to preserve the maximum number of lives (assuming they are sophisticated enough to engage in such a calculation)? Or should they avoid doing harm to innocent pedestrians, bystanders, and passengers? Does the manufacturer owe any special ethical duties to the purchaser of the AV or the AV occupants, as opposed to occupants of other vehicles or those outside the AV? Many of the media stories raising these ethical issues rely on the work of Professor Patrick Lin of California Polytechnic State University.

Professor Lin likes to use "thought experiments" to explain ethical dilemmas. Thought experiments are "similar to everyday science experiments in which researchers create unusual conditions to isolate and test desired variables"1 and are similar to the hypotheticals law professors use to teach legal subjects. Thought experiments can be used to study ethical issues


involving AV algorithms. Indeed, the last administration's Department of Transportation policy on highly automated vehicles specifically mentions ethical issues in programming AVs: "Manufacturers and other entities, working cooperatively with regulators and other stakeholders (e.g., drivers, passengers and vulnerable road users), should address these situations to ensure that such ethical judgments and decisions are made consciously and intentionally."2

Of Trolleys and Autonomous Vehicles
Perhaps the most famous thought experiment is the so-called "trolley problem." As the name suggests, the trolley problem involves a runaway trolley. British philosopher Philippa Foot invented the "trolley problem" and first introduced it in 1967.3 American philosopher Judith Jarvis Thomson expanded on the trolley problem in a 1985 Yale Law Journal comment,4 which is the more common formulation of the thought experiment: a runaway trolley is heading down the track toward five workers and will soon run over them if no intervention occurs. A spur of track leads off to the right, but there is one worker on the track. A bystander is standing by a switch. If the bystander throws the switch, the trolley will turn onto the spur, saving the five workers, but killing the single worker on the spur.5

If the bystander does nothing, the bystander would not be killing anyone. The bystander would merely be "allowing" the five to die. Throwing the switch would involve killing just one person. Some philosophers, such as Jarvis Thomson, have the view that it is better to maximize the number of lives saved in situations like this. Others, such as Foot, disagree, saying that it is worse from an ethical standpoint to cause harm than it is to allow harm to happen, even if the consequences are worse. The trolley problem teases out the moral philosopher's dilemma: is it better to throw the switch and save more lives (five versus one), or is it better (for the bystander) to do nothing in order to avoid causing harm to anyone?

Professor Lin has applied the trolley problem to AVs by posing the following thought experiment:

[Y]ou are about to run over and kill five pedestrians. Your car's crash-avoidance system detects the possible accident and activates, forcibly taking control of the car from your hands. To avoid this disaster, it swerves in the only direction it can, let's say to the right. But on the right is a single pedestrian who is unfortunately killed.6

WHEN FACED WITH AN INEVITABLE CRASH, SHOULD AUTONOMOUS VEHICLES BE PROGRAMMED TO SAVE MORE LIVES OR AVOID CAUSING HARM?

News writers have (with or without crediting Professor Lin) repeated this and similar scenarios in numerous recent news articles.7 Philosophers continue to debate the question of whether it is better to save more lives or avoid doing harm. To the extent there is any consensus, a recent survey showed that philosophers favored throwing the switch in the trolley problem.8 Thus, if an AV manufacturer hires professional philosophers to advise it on how to design AV algorithms, they are likely to advise the manufacturer to program the AV to steer away from a large group at the cost of running over a single individual.

The Legal Trolley Problem Dilemma
As a practicing lawyer, I was curious. What would the legal consequences be if an AV manufacturer followed a philosopher's advice and tried to "do the right thing" in trolley problem situations? What would happen if the manufacturer programmed its AVs to steer away from a large group and toward a single individual or small group when they anticipate that a crash is inevitable? For the remainder of this article, I imagine a hypothetical manufacturer (Manufacturer) has implemented just such an algorithm. And I imagine that an accident occurs where the AV steers away from five people (the Five), but at the cost of striking and killing a single individual (the One). I assume that the One was an innocent bystander or pedestrian, rather than a jaywalker or someone engaging in wrongful conduct. I also assume that if the AV had attempted to avoid collision altogether, the AV may have made things worse—it may have killed all six people. I imagine that a representative of the One files a complaint against the Manufacturer.

Stephen S. Wu ([email protected]) is a shareholder in Silicon Valley Law Group in San Jose, California, and practices in the areas of information technology, security, privacy, and intellectual property compliance, litigation, transactions, and policies. He served as the 2010–2011 Chair of the ABA Section of Science & Technology Law. This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 1522240. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF. The author is grateful for the support of the NSF, and to Michael Wu for making useful suggestions to improve this article, as well as for his editorial assistance.

The most common causes of action in a suit claiming a defect in a product include strict products liability, negligence, breach of warranty, and statutory violations for unfair or deceptive trade practices. With each claim, counsel for the representative of the One would contend that the feature of swerving toward the One made the AV defective. But even worse, the Manufacturer's conduct appears intentional. Indeed, the Manufacturer made a deliberate decision to cause the AV to swerve toward the One (or someone similarly situated to the One). The representative may even assert a cause of action for battery, the essence of which is harmful contact intentionally done.9 On its face, the representative seems to have a strong case.

The Manufacturer would not have fared any better if it had programmed the AV to do nothing, allowing the AV to run over the Five. If the AV killed the Five, representatives of the Five could file suit against the Manufacturer, contending that the Manufacturer had a safer alternative design: it could have programmed the AV to run over the One. Thus, it appears the Manufacturer is in a no-win situation.

Possible Defenses
The Manufacturer might turn to traditional defenses recognized in the law to avoid the dilemma. For instance, it could assert a necessity defense, saying that running over the One was necessary to save lives. Under the necessity doctrine, "it has long [been] recognized that '[n]ecessity often justifies an action which would otherwise constitute a trespass, as where the act is prompted by the motive of preserving life or property and reasonably appears to the actor to be necessary for that purpose.'"10 The private necessity defense thus serves as a justification for a nongovernmental defendant's conduct where the defendant's act causes harm, but the defendant acted to prevent an even worse harm. However, necessity is likely to be unavailing as a defense for the Manufacturer. In its traditional form, the necessity defense justifies acts of trespass or damage to personal property, but not bodily injury.11 In our hypothetical case, the AV killed the One, and thus the defense does not apply.

Another possible defense is the defense of third persons. Similar to self-defense, the Manufacturer might try to argue that its use of force against the One is justified in order to defend the Five against harm. The Restatement (Second) of Torts provides that an actor can defend any third person from wrongful injury by the use of force.12 However, the Manufacturer's argument will fail because in our hypothetical case the One was not acting wrongfully. To the contrary, we have assumed that

the One was an innocent actor. There is no wrongful conduct for the Manufacturer to defend against, and thus the defense does not apply.

A third defense the Manufacturer could try to assert is the "sudden emergency" doctrine, also known as the "imminent peril" doctrine. "[I]f an actual or apparent emergency is found to exist the defendant is not to be held to the same quality of conduct after the onset of the emergency as under normal circumstances."13 Cases involving the sudden emergency doctrine in the car accident context involve split-second decisions of drivers in a difficult position. The facts of some of these cases sound like real-world trolley problems.14 The defense recognizes that an actor in such situations cannot be held to the same standard of care as when an actor is calm in normal circumstances. However, the problem for the Manufacturer is that the Manufacturer is considering how to program an AV in the ordinary course of the design process, far from any imminent accident. The sudden emergency doctrine applies only when, at the time of the actor's conduct causing the accident, the actor faced a sudden choice between two or more actions. Here, the Manufacturer's programming decision occurred long before the accident. The Manufacturer was not facing a sudden decision. To the contrary, we have assumed that the Manufacturer undertook a careful and deliberate analysis of how to design its AV algorithms and made a choice to program the AV to steer toward the One. No sudden emergency was occurring during the design process. Accordingly, the defense does not apply.

Resolving the Liability Dilemma
Because the traditional defenses offer no protection, the Manufacturer has no easy way out of the liability dilemma. As the law currently stands, I believe the only way for the Manufacturer to limit its legal liability in the trolley problem scenario is to program its AVs to attempt to avoid collision. It should neither steer toward the One nor allow the AV to run over the Five. Rather, it should try to maximize collision avoidance.

I recognize three problems with this approach. First, we have assumed that collision avoidance may make things worse and the AV may end up hurting or killing all six people. Nonetheless, it is more legally defensible and would, as a practical matter, sit better with a jury: the Manufacturer did all it could to save everyone's life. If the accident ended up killing all six, then at least the Manufacturer tried to save lives.

Second, my position is implicitly at odds with the trolley problem thought experiment. I am implicitly rejecting what appears to be a false choice between running over the Five or running over the One.

Finally, I recognize that my choice of collision avoidance as the "legal" solution is not the one philosophers would consider "moral." Law and morality sometimes diverge. Conduct we consider immoral may be legal, and some conduct considered to be morally permissible may be illegal. This is one more case in which law and morality may come to different conclusions. Given the liability dilemma, the only way to immunize the Manufacturer trying to "do the right thing" and allow it to program AVs to steer toward the One is to change the law through legislation or regulations.

Trolley problems are useful starting points for analyzing the ethical issues of programming AVs, if nothing else because they spark discussion among the media and their audience. Some people reject the real-world relevance of the trolley problem, but the principles gleaned from it will aid manufacturers in deciding how to program AVs. More generally, injecting discussions of ethics raises the awareness of ethical dimensions to AV design and manufacturers' decisions, and that is a good thing.

Endnotes
1. Patrick Lin, Why Ethics Matters for Autonomous Cars, in Autonomous Driving: Technical, Legal and Social Aspects 69, 75 (Markus Maurer et al. eds., 2016).
2. U.S. Dep't of Transp. & Nat'l Highway Traffic Safety Admin., Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety 26 (2016). Likewise, the Trump administration's latest guidance on AVs acknowledges that manufacturers should deliberate concerning ethical issues. U.S. Dep't of Transp. & Nat'l Highway Traffic Safety Admin., Automated Driving Systems 2.0: A Vision for Safety 1 n.1 (2017).
3. Philippa Foot, The Problem of Abortion and the Doctrine of the Double Effect, Oxford Rev., 1967, at 5, 8.
4. Judith Jarvis Thomson, The Trolley Problem, 94 Yale L.J. 1395, 1395 (1985).
5. See id. at 1397.
6. Lin, supra note 1, at 79.
7. See, e.g., Why Self-Driving Cars Must Be Programmed to Kill, MIT Tech. Rev. (Oct. 22, 2015), http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/.
8. David Bourget & David J. Chalmers, What Do Philosophers Believe? 16 (Nov. 30, 2013), https://philpapers.org/archive/BOUWDP.
9. See, e.g., 5 B.E. Witkin, Summary of California Law § 383, at 599 (10th ed. 2005).
10. People v. Ray, 981 P.2d 928, 935 (Cal. 1999).
11. See Restatement (Second) of Torts § 263 (Am. Law Inst. 1965) ("One is privileged to commit an act which would otherwise be a trespass to the chattel of another or a conversion of it, if it is or is reasonably believed to be reasonable and necessary to protect the person or property of the actor, the other or a third person from serious harm." (emphasis added)).
12. See id. § 76.
13. Bill Hollingsworth, Note, The Sudden Emergency Doctrine in Florida, 21 U. Fla. L. Rev. 667, 671 (1969).
14. See, e.g., Myhaver v. Knutson, 942 P.2d 445, 446 (Ariz. 1997) (en banc) (involving a driver who faced a choice of driving straight into a car driving the wrong way in his lane or crossing the yellow line into oncoming traffic).

Published in The SciTech Lawyer, Volume 14, Number 1, Fall 2017. © 2017 American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

ARTIFICIAL INTELLIGENCE AND THE LAW: MORE QUESTIONS THAN ANSWERS?
BY KAY FIRTH-BUTTERFIELD

Kay Firth-Butterfield, LLM, MA, FRSA ([email protected]) is a barrister in the United Kingdom and the Project Head of AI and ML at the World Economic Forum. She is an associate fellow of the Centre for the Future of Intelligence at the University of Cambridge; a senior fellow and distinguished scholar at the Robert S. Strauss Center for International Security and Law, University of Texas, Austin; cofounder of the Consortium on Law and Ethics of A.I. and Robotics, University of Texas, Austin; and vice-chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She has taught courses on the law and policy of emerging technologies and AI at the University of Texas Law School in Austin.

Artificial intelligence (AI) will be everywhere. It will ensure our world runs smoothly and our every need is met. In its not very intelligent form, it is here already: in our cars, smartphones, search engines, and translation and personal assistants; in our homes in the form of robot cleaners and lawnmowers; on the street helping with surveillance, traffic monitoring, and policing; and even in condoms and sex dolls—the list is extensive and growing. AI beats us at the game of Go, even creating moves that we humans have failed to notice in hundreds of years of play; and it wins at poker, a game that requires players to perfect the art of bluff.

Just around the corner are AI-enabled dolls that will become our children's real imaginary friends, as well as fully autonomous cars. It is worth noting that although John McCarthy first suggested the idea of autonomous cars as a scientific possibility in the 1960s, the technology has only made the enormous strides to make this possible in the last few years.

And yet Vint Cerf, the “father of the Internet,” describes AI not as intelligent but as an “artificial idiot.”1 The prize-winning Go-playing computer has no idea it is playing Go. However, AI is excellent at learning, and with enough data to train it and some often basic instruction, it can learn a new skill—for example, sorting through all the documents in a case and making decisions about discovery, or helping a doctor to diagnose cancer.

What Is Artificial Intelligence?
To understand, we have to realize that AI is not one technology but a range of techniques that give the appearance of intelligence. AI is applied math and statistics at their very best. Techniques such as reinforcement learning, neural nets, deep learning, and more are driving the AI revolution, but they are not—and seem nowhere near—artificial general intelligence (AGI). AGI will be achieved when a computer can perform all the same intellectual activities as a human.

For lawyers, this lack of a definition of AI is a problem. If we are unable to describe something, we cannot legislate or easily draw on the correct existing law when cases come to court in the absence of legislation. Indeed, it is certainly arguable that, as this product is continuously evolving and is unlike any product we have ever seen before, no current legal precedent could apply.

Recently, Senator Maria Cantwell (Wash.) proposed a bill that would require the U.S. Department of Commerce to form a committee with an AI focus. According to GeekWire, the draft also seeks to create a federal definition of AI as “systems that think and act like humans or that are capable of unsupervised learning,” and differentiates between AGI, a system that “exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive, emotional, and social behaviors,” and “'narrow artificial intelligence,' such as self-driving cars or image recognition.”2 Others
suggest that we should have “use case” definitions—for example, the way in which Nevada has defined AI for use in autonomous vehicles as “the use of computers and related equipment to enable a machine to duplicate or mimic the behavior of human beings.”3

Transparency
Currently, the legislation around this technology is principally concerned with data privacy and autonomous vehicles.4 In Europe, the “home of privacy,” the General Data Protection Regulation (GDPR) will come into force in 2018.5 Among other provisions, the GDPR gives a citizen of the European Union (EU) the right to demand an account of how a decision that affected them adversely was reached. Thus, if an algorithm was used in the denial of a loan to an EU citizen, that citizen can require the loan company to explain how it came to its decision.

This presents a problem for most of the systems currently known as AI because many develop their decisions within what is termed a “black box.” This is not the informative black-box flight recorder but rather an opaque one in which AI algorithms crunch data to achieve answers. In other words, the scientist feeds data into the computer and the computer uses many iterations of questions and answers to achieve the answer for which it was trained. Using the poker-playing computer as an example, it ran millions of games against itself using three computers powered by supercomputers to achieve its victory. With enough data, computers have the speed, discipline, and endurance to do complex tasks; however, the scientists who design them more often than not do not know how the computers achieve the answer. For the GDPR to work, developers of AI will have to make these systems transparent.

Take, for example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, developed to assist U.S. judges in sentencing. Under the GDPR, defendants who wish to challenge the fairness of their sentences would ask to see how the computer arrived at its decision. According to a ProPublica study, in the COMPAS model the computer is trained on historic criminal justice data, which may lead to corruption of the data and the subsequent decision—creating racially biased decisions because the historic data encompassed those biases.6 In State v. Loomis, the judge used the COMPAS tool to assist with sentencing.7 The Wisconsin Supreme Court rejected Loomis's appeal, saying he would have received the same sentence whether or not the AI was involved.8 However, the court seemed concerned about the use of COMPAS. Chief Justice Roberts was likewise concerned when, in response to a question regarding AI in the courts, he said that AI is already in courtrooms “and it's putting a significant strain on how the judiciary goes about doing things.”9 However, a few months later, the U.S. Supreme Court decided not to hear the writ of certiorari of the Loomis appeal.

The question of bias does not end in the way data is collected or cleaned or how it is used in AI—it also comes from the scientists themselves when they are training the AI algorithm. Some suggest that Alexa only comes with a female voice because it was programmed by predominantly white male geeks.

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems has recommended a new IEEE standard to deal with this problem of transparency.10 P7001 is in the working-group phase at the moment and argues that diverse stakeholders need transparency and that transparency is essential to the way we should design embodied AI—for example, autonomous cars and robots. It is argued that accident investigators, lawyers, and the general public need to know what the car or household robot was doing at the time of an accident in order to allocate blame and damages and, most importantly, to instill trust in the technology. However, some scientists consider the task of transparency too difficult to achieve. It is their view that, as human beings, we cannot hope to understand complex AI algorithms, and thus transparency is illusory because even if we can see what the system is doing we cannot understand it. Instead of human regulation they propose regulation by algorithm. Thus, a car would have standard algorithms to deal with its operation and a “guardian” algorithm to make sure it stays within its set parameters. By way of example, as data is continually collected by the standard algorithm about road users so that it can improve the safety and reliability of driving, the guardian AI would prevent the standard algorithms from learning to speed from their collected data about the habits of the human driver. The remaining unsolved question is: who will guard the guardians?

Undoubtedly, opaque and transparent systems raise intellectual property (IP), copyright, and patent issues, which will have to be reconciled with legislation or in the courts.

Privacy Concerns
Nor does the problem of data end here. For our current AIs to work they need massive data sets, which is why the Economist called data the new oil.11 If you have data you can create AI, and therefore everything we do is of value to someone; the collection and sale of our data is big business. With devices that listen and observe in our homes, a once private place has lost its privacy. Some of these devices listen, record, and store those recordings the whole time, while others, like Alexa, listen for key “wake up” words. In November 2015, a murder occurred at the home of James Bates. He was accused of the murder, and the prosecutor asked Amazon for any recordings created by Alexa at the time of the death. Amazon refused, saying, “Given the important First Amendment and privacy implications at stake, the warrant should be quashed unless the Court finds that the State has met its heightened burden for compelled production of such materials.”12 However, Bates's attorney subsequently obtained copies of recordings from Amazon and released them into evidence. Therefore, this important legal issue has yet to receive a decision.
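The two-layer “guardian” design described above, with standard algorithms operating the car and a guardian algorithm holding them within set parameters, can be sketched in a few lines. This is a toy illustration, not drawn from any real AV system; the function names and the speed-limit scenario are invented:

```python
# Toy sketch of the "guardian algorithm" idea: a learned driving policy
# proposes a speed, and a guardian clamps it to the legal limit so the
# policy cannot act on bad habits learned from human drivers.

def learned_policy(observed_traffic_speed: float) -> float:
    """Hypothetical learned controller that imitates surrounding traffic."""
    return observed_traffic_speed  # may propose an illegal speed

def guardian(proposed_speed: float, speed_limit: float) -> float:
    """Guardian layer: keeps the operational algorithm within set parameters."""
    return min(proposed_speed, speed_limit)

# Surrounding traffic is doing 80 in a 65 zone; the guardian overrides.
safe_speed = guardian(learned_policy(80.0), speed_limit=65.0)
print(safe_speed)  # 65.0
```

Who audits the guardian itself, the “who will guard the guardians?” question, is of course not answered by code like this.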

Additional concerns about privacy are applicable to children. Article 16 of the United Nations Convention on the Rights of the Child gives children a right to privacy.13 It is difficult to exercise that right, once you have sufficient mental capacity to do so, if your parents—by having devices that listen and record in your home from your birth—have given away your childhood privacy. Indeed, a child might soon have its own monitoring device—an AI-enabled doll to talk to and learn from. Parents or legislators need to be making choices as to what these dolls upload to the cloud or teach their children and, perhaps more importantly, as to their cybersecurity protocols. It is unclear if this will be an option offered by the manufacturers or whether most parents have sufficient knowledge to understand the problem. To prove this point, in the United Kingdom the Purple company recently included a requirement for users of its free Wi-Fi to clean toilets for 1,000 hours; of the thousands who logged on, only one person read the terms and conditions in which this was included.14 It is for this reason that the German government banned “Cayla,” an AI-enabled doll, earlier this year, and in their guidance for autonomous car makers said this about data collected in cars:

It is the vehicle keepers and vehicle users who decide whether their vehicle data that are generated are to be forwarded and used. The voluntary nature of such data disclosure presupposes the existence of serious alternatives and practicability. Action should be taken at an early stage to counter a normative force of the factual, such as that prevailing in the case of data access by the operators of search engines or social networks.15

Another important privacy question is the development of sex robots enabled with AI. These robots will also need to collect data, and that data has to be stored somewhere. The possibility of having one's most intimate secrets hacked must be high, but the real question has to be whether, as demand is principally for female sex robots, this is just another way of continuing sexual assault on women. As we move into the robot age, we may have the opportunity to end the “oldest profession” or perhaps simply enable it to metamorphose. This question becomes all the more pressing when thinking about an AI-enabled child sex doll, which is being produced for pedophiles in Japan. Its developer argues that it stops him, and others, from assaulting human children. However, the importation of this sort of object in the United Kingdom would probably be a crime; a defendant was recently convicted of trying to import a non-AI-enabled sex robot.16 The Foundation for Responsible Robotics recently published a neutral report in an effort to start these conversations.17

Regulating AI
Regulation is often said to stifle innovation, but regulation in this space seems necessary to protect the millions of customers who will buy and use AI-enabled devices. However, it seems unlikely that AI, other than autonomous vehicles, will find its way onto the federal legislative agenda anytime soon. The Kenan Institute for Ethics at Duke University has been considering the idea of “adaptive regulation,” which would involve passing a regulation geared toward a specific emerging technology so that developers and users could have some security to guide investment, and revisiting the regulation at an early stage to ensure it was working.

There are a number of efforts to create guidelines for the use of AI. Some initiatives come from industry—for example, IBM has published ethical use guidelines and helped to create the Partnership on AI.18 Additionally, nonprofits such as AI Global (groupings of geographically localized academics, industry, and government—e.g., AI Austin) and the Future of Life Institute have created guiding principles for the design, development, and use of AI.19 And, as mentioned, the IEEE has brought some 200 experts together to create standards applicable to work with AI and robotics. In the United Kingdom, there is a British Standard for Robots and Robotic Devices (BS 8611) that provides a guide to the ethical design and application of robots and robotic systems.

However, it seems that the bulk of the law regarding AI will come from judicial decision making, although there may be some regulators who already exist and who could find AI falling within their purview. In a hyper-connected world, we have already seen that cybersecurity is vital. As we extend our dependency on AI, cybersecurity will become ever more vital, as AI—better able to adapt to threats faster than humans—will also run the cybersecurity systems. Developers have been using game theory to help teach algorithms about strategic defense. In one scenario, two standard algorithms played a game of collecting “things” but could also attempt to kill one another; they resorted to trying to do so only when there was scarcity of “things.” However, when a cleverer algorithm was introduced, it immediately killed the weaker two.20 Regulatory standards can be built on existing ones, such as the U.S. National Institute of Standards and Technology (NIST) standards for cryptography. The Internet of Things (IoT) makes things much less safe, as all these devices are a potential access point and many are cheaper to make than to patch. Additionally, as companies produce AI-enabled devices and then go out of business, the burden of security and safety will become greater. For example, will car makers be required to maintain the AI software throughout the lifetime of the car and multiple owners?

As to governmental endeavors in the United States, the benefits and problems of AI were considered by the Obama administration in two reports from the Office of Science and Technology Policy, the second focusing on the growing concern that automating our brains might lead to mass unemployment.21 AI has yet to be taken up as a topic of debate by the Trump administration, but there is a bipartisan working group on AI (led by Congressmen John Delaney (Md.) and Pete Olson (Tex.)),22 and the Trump administration has voiced its support for the creation of autonomous vehicles. In the political arena, we may have
seen the impact of AI on the decisions of the American public in the presidential race by the focusing of “fake news” and by the minute targeting of voters, using AI to build unique profiles of voters from their public records and social media accounts. Individual security of data is often impinged because users believe they have their accounts locked to friends and family, but this is not so. A recent Cambridge University study shows that from 10 Facebook “likes” an AI can know you as well as a work colleague.23 Additionally, things become more complex as AI can seamlessly change video to insert words into the mouth of the speaker that are entirely different from what was actually said.24 Recent problems have shown that bad actors can also fool image detection AI—for example, persuading it that a kitten is a computer,25 or corrupting Microsoft's ill-fated Tay chatbot.26

Conclusion
Those who attempt to forecast the future have three chances: to be wrong, to be right, or to be partially right. Undoubtedly, the latter course is the best one to chart. When looking at the future of AI, the rights to data will likely become an increasingly important issue, as well as how the general population learns about AI and what it can do (so that they can safely rear their children and cast their votes). Currently, there is much hype about AI and a paucity of AI scientists outside the major corporations, which could lead to another “AI winter.” This is a time of great opportunity to actually shape the way in which humanity survives into the future—we should not waste that opportunity!

Endnotes
1. Frank Konkel, Father of the Internet: “AI Stands for Artificial Idiot,” Nextgov (May 9, 2017), http://www.nextgov.com/emerging-tech/2017/05/father-internet-shows-no-love-ai-connected-devices/137697/.
2. Tom Krazit, Washington's Sen. Cantwell Prepping Bill Calling for AI Committee, GeekWire (July 10, 2017), https://www.geekwire.com/2017/washingtons-sen-cantwell-reportedly-prepping-bill-calling-ai-committee/.
3. Nev. Rev. Stat. § 482A.020.
4. Ethics Commission Creates World's First Initial Guidelines for Autonomous Vehicles, Germany.info (June 21, 2017), http://www.germany.info/Vertretung/usa/en/__pr/P__Wash/2017/06/21-AutonomousVehicles.html.
5. Council Regulation 2016/679, 2016 O.J. (L 119) 1 (effective May 25, 2018) [hereinafter GDPR].
6. Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
7. Adam Liptak, Sent to Prison by a Software Program's Secret Algorithms, N.Y. Times, May 1, 2017.
8. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
9. Id.
10. See The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, IEEE Standards Ass'n, https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html (last visited Oct. 17, 2017).
11. The World's Most Valuable Resource Is No Longer Oil, but Data, Economist (May 6, 2017), https://www.economist.com/news/leaders/21721656-data-economy-demands-new-approach-antitrust-rules-worlds-most-valuable-resource.
12. Eliott C. McLaughlin, Suspect OKs Amazon to Hand Over Echo Recordings in Murder Case, CNN (Apr. 26, 2017), http://www.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html.
13. Convention on the Rights of the Child art. 16, Nov. 20, 1989, 1577 U.N.T.S. 3.
14. Rachel Thompson, 22,000 People Accidentally Signed Up to Clean Toilets Because People Don't Read Wi-Fi Terms, Mashable (July 13, 2017), http://mashable.com/2017/07/13/wifi-terms-conditions-toilets/#kireRB9KmiqJ.
15. Ethics Commission Creates World's First Initial Guidelines for Autonomous Vehicles, supra note 4 (guideline 15).
16. Man Who Tried to Import Childlike Sex Doll to UK Is Jailed, Guardian, June 23, 2017.
17. Noel Sharkey et al., Found. for Responsible Robotics, Our Sexual Future with Robots (2017).
18. Transparency and Trust in the Cognitive Era, IBM THINK Blog (Jan. 17, 2017), https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/.
19. See AI Austin, https://www.ai-austin.org/ (last visited Oct. 17, 2017); Asilomar AI Principles, Future of Life, https://futureoflife.org/ai-principles/ (last visited Oct. 17, 2017).
20. Joel Z. Leibo et al., Multi-Agent Reinforcement Learning in Sequential Social Dilemmas, in Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017) (S. Das et al. eds., 2017), https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf.
21. Kristin Lee, Artificial Intelligence, Automation, and the Economy, Obama White House (Dec. 20, 2016), https://obamawhitehouse.archives.gov/blog/2016/12/20/artificial-intelligence-automation-and-economy.
22. Press Release, Congressman John Delaney, Delaney Launches Bipartisan Artificial Intelligence (AI) Caucus for 115th Congress (May 24, 2017), https://delaney.house.gov/news/press-releases/delaney-launches-bipartisan-artificial-intelligence-ai-caucus-for-115th-congress.
23. Computers Using Digital Footprints Are Better Judges of Personality than Friends and Family, Univ. of Cambridge (Jan. 12, 2015), http://www.cam.ac.uk/research/news/computers-using-digital-footprints-are-better-judges-of-personality-than-friends-and-family.
24. Meg Miller, Watch This Video of Obama—It's the Future of Fake News, Co.Design (July 18, 2017), https://www.fastcodesign.com/90133566/watch-this-video-an-ai-created-of-obama-its-the-future-of-fake-news.
25. Anish Athalye, Robust Adversarial Examples, OpenAI (July 17, 2017), https://blog.openai.com/robust-adversarial-inputs/.
26. James Vincent, Twitter Taught Microsoft's AI Chatbot to Be a Racist Asshole in Less than a Day, Verge (Mar. 24, 2016), https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.

BUILDING ETHICAL ALGORITHMS
BY NATASHA DUARTE

This summer, the Supreme Court declined to hear a case about the constitutional rights of a man whose sentencing decision was determined in part by a computer.1 In State v. Loomis,2 the state used a tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to calculate the “recidivism risk” of the defendant—the likelihood that he would be rearrested for another crime within two years, based on people who shared his characteristics and background. COMPAS asks a series of questions about the defendant and runs the answers through a proprietary algorithm, which provides a recidivism “risk score.”3 A study by ProPublica found that when COMPAS predictions were wrong, they were more likely to incorrectly classify black defendants as high risk and white defendants as low risk.4

Whether they are used to target ads or to determine prison sentences, the algorithms behind automated decisions represent more than math. They also represent human choices, such as what data to use to make decisions, what variables to consider, and how much weight to assign to an algorithmic prediction. Each of these choices is value-laden and may lead to different outcomes. For example, crime prediction algorithms are often trained to “learn” patterns from historical arrest records (known as the “training data”). This prioritizes the historical patterns behind law enforcement's decisions about where to patrol and whom to arrest. These tools can also be calibrated to minimize false positives or false negatives, depending on whether a jurisdiction would rather err on the side of keeping “low-risk” people in jail or letting “high-risk” people go free. For better or worse, choices about how to design and use decision-making algorithms are shaping policy, culture, and societal norms.

Industry and government have a responsibility to avoid building and using harmful automated decision-making systems. Several major tech industry players have made public commitments to lead the way on creating ethical standards for machine learning and artificial intelligence.5 Even smaller companies are beginning to hire chief ethics officers or even assemble entire teams to focus on ethical issues regarding the use of big data, machine learning, and automated decision-making systems. Slowly but surely, institutions that build and use automated decision-making systems are recognizing ethical review as a necessary prerequisite to the large-scale deployment of these systems.

Ethical review of automated decision-making systems is a complex and nuanced process. Yet companies and policymakers do not need to start from scratch when it comes to developing an ethical framework to guide this review. Several frameworks exist that have been or can be adapted to the automated computing context. Ethical principles are only as good as the processes in place to implement and enforce them. Companies need to adopt—and constantly reevaluate—internal processes for ethical design and review. The first section of this article discusses existing ethical frameworks that can be adapted to automated decision-making systems, and the second section is devoted to implementation strategies.

Natasha Duarte ([email protected]) is a policy analyst at the Center for Democracy & Technology.

Ethical Frameworks
Several established frameworks provide ethical principles to guide organizations' best practices around technology design and data use. Although these frameworks were not specifically designed for machine learning or artificial intelligence,


they can be adapted to different technologies.

Belmont Report
The Belmont Report6 was commissioned in the 1970s, prompted by high-profile medical research scandals including the Tuskegee syphilis study7 and the Stanford prison experiment.8 The Belmont Report identified three key principles that continue to govern human subjects research: (1) respect for persons, (2) beneficence, and (3) justice. The first principle requires researchers to respect the basic dignity and autonomy of their subjects. Research subjects must be presented with relevant information in a comprehensible format and then voluntarily give their consent to participate. “Beneficence” embodies the well-known maxim of “do no harm.” It requires researchers to conduct risk-benefit assessments, maximize the possible benefits to research subjects and to the public, and minimize possible harms. Researchers must also assess the specific risks and benefits of including members of vulnerable populations (such as children or pregnant women) in a study. The “justice” principle demands that the benefits and burdens of research be fairly distributed. The report notes that fair distribution does not always mean equal distribution: “[D]istinctions based on experience, age, deprivation, competence, merit and position do sometimes constitute criteria justifying differential treatment for certain purposes.”9

While the Belmont Report was developed to address medical research on human subjects, its principles are just as salient for big data analysis and automation. For example, consider the concept of informed consent. Imagine a user creates a social media account and agrees to a privacy policy stating that the information she discloses may be used for “research.” Has the user consented to allow the company, or outside researchers, to use that information to predict the user's mental health status? Is separate consent required for this type of use, which arguably was not anticipated by the user when she created a profile? Researchers at traditional institutions must address consent issues when they seek institutional review board (IRB) approval to collect information from research subjects. However, big data analysis is often done without IRB approval for several reasons, including the ease of access to publicly available data sets (or data held by companies) and a lack of institutional clarity about whether big data research counts as human subjects research requiring IRB approval.10
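The consent questions above amount to a purpose-limitation test: a use of data is permitted only if it falls within what the user actually agreed to. A toy sketch of such a check (the user names, purpose labels, and helper function are all hypothetical, not drawn from any framework discussed here):

```python
# Toy purpose-limitation check inspired by the informed-consent discussion:
# a proposed use must match a purpose the user explicitly consented to.

consented_purposes = {"alice": {"service_operation", "research"}}

def use_permitted(user: str, purpose: str) -> bool:
    """True only if the user consented to this specific purpose."""
    return purpose in consented_purposes.get(user, set())

print(use_permitted("alice", "research"))                  # True
print(use_permitted("alice", "mental_health_prediction"))  # False
```

The hard part is not the lookup but the classification: whether predicting mental health status counts as the “research” the user agreed to is precisely the judgment the Belmont principles ask reviewers to make.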

Menlo Report
In 2012, the Menlo Report11 was commissioned in response to new questions about the ethics of information and communications technology research (ICTR). The Menlo Report identified three factors that make risk assessment challenging in ICTR: "the researcher-subject relationships, which tend to be disconnected, dispersed, and intermediated by technology; the proliferation of data sources and analytics, which can heighten risk incalculably; and the inherent overlap between research and operations."12 Each of these factors also applies to data-driven automated decision-making systems. Because the data that feeds automated systems is collected and aggregated digitally, data subjects often do not know they are data subjects, and the effects of automated systems can be widely dispersed, difficult to detect, and difficult to connect to one particular system.

The Menlo Report builds on the principles articulated in the Belmont Report but accounts for the additional challenges presented by information technology. For example, "In the ICTR context, the principle of Respect for Persons includes consideration of the computer systems and data that directly . . . impact persons who are typically not research subjects themselves."13 The report added a fourth principle calling for consideration of law and public interest. This principle asks researchers to engage in legal due diligence, transparency, and accountability.

ACM Software Engineering Code of Ethics and Professional Practice
While the Belmont and Menlo Reports apply specifically to research, the Association for Computing Machinery (ACM) published a code of ethics in 1992 that applies generally to the practice and profession of software engineering. The ACM Software Engineering Code of Ethics and Professional Practice (ACM Code)14 recognized engineering as a profession and acknowledged that "software engineers have significant opportunities to do good or cause harm" and to enable or influence others to do good or cause harm. Among other things, the ACM Code requires engineers to act in the public interest, even when serving their clients or employers. Under the ACM Code, engineers' responsibility to the public requires them to ensure that any software produced by engineers is safe, passes appropriate tests, and is ultimately in the public good; to disclose any potential danger to the user, the public, or the environment; to be fair and avoid deception concerning the software; and to consider issues of physical disabilities, allocation of resources, economic disadvantage, and other factors that can diminish access to the benefits of the software. Engineers must also report "significant issues of social concern" to their employers or clients. In turn, the ACM Code prohibits employers from punishing engineers for expressing ethical concerns.

Fair Information Practice Principles
The Fair Information Practice Principles (FIPPs)15 are internationally recognized as the foundational principles for responsible collection, use, and management of data, and they continue to serve as guiding principles in the era of big data.16 There are many iterations of the FIPPs, but they were first codified in 1980 by the Organisation for Economic Co-operation and Development (OECD) in its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.17 Compliance with the FIPPs means minimizing the amount of data collected and the length of retention; ensuring that data is collected for a specified purpose and used only in contexts consistent with that purpose (unless additional consent is requested and given); giving individuals control over and access to their personal information, including the right to consent to its collection and use and to correct or delete it; and securing data through the use of encryption, de-identification, and other methods.

Common Principles
Together, these existing frameworks can provide a roadmap to guide ethical review of big data analytics, automated decision-making systems, and artificial intelligence. Below are some examples of how these common principles can apply to automated decision-making systems. This is not a comprehensive list of considerations to guide ethical review, and many of these questions do not have clear right or wrong answers, but they provide insight into key concepts and approaches.

Individual Autonomy and Control
Does the automated system reflect individuals' privacy choices? For example, does it target ads by inferring sensitive characteristics—such as race, gender, or sexual orientation—that an individual has intentionally obscured or declined to disclose?

Beneficence and Risk Assessment
Are useful insights that companies glean from data passed on to data subjects in helpful ways? For example, information that is not obviously health related may be used to predict individuals' propensity for certain health conditions.18 Whether these predictions should be communicated to individuals, and how best to do so, is a complex question requiring rigorous evaluation of the risks and benefits of disclosure.

Justice or Fairness
Are any groups or populations—especially protected or vulnerable classes—over- or underrepresented in the training data? Does the automated system rely on existing data or make assumptions about certain groups in a way that perpetuates social biases? Does the system create disparate outcomes for different groups or populations? Is the risk of error evenly distributed across groups?

Transparency and Accountability
Are the claims a company makes about its automated decision-making systems truthful and easy for users to understand? Do explicit and implicit statements allow users (or parties who contract to use the system) to form accurate expectations about how the system functions, its accuracy, and its usefulness?

Data Governance and Privacy
Is the data used to train a system accurate, complete, and up to date? Was the collection of data limited to only information needed to perform a specific function or solve a specific problem? Was personal information effectively deleted when it was no longer necessary or relevant? Were adequate steps taken to ensure that training data were not linked or reasonably linkable to an individual?

Professional Judgment
Are engineers trained and encouraged to spot and raise ethical issues? Are there specific mechanisms through which engineers and other employees can report ethical issues? Are there incentives (or disincentives) for doing so?

Implementing Ethical Review of Automated Decision-Making Systems
Creating a set of ethical principles or guidelines is a good start, but a review process must accompany it. The technology sector is beginning to recognize that conducting ex ante ethical reviews of automated decision-making systems is imperative, though industry has been slow to develop and share processes for doing so. This may be because these systems are still seen as new and experimental. The uncertainty surrounding the technology is all the more reason to shore it up with sound ethical risk assessment procedures. Even with ethical review, automated decision-making systems (like any technology) will have unintended consequences, some of them harmful. Sound internal processes can put companies in a better position to detect, remedy, and learn from harmful outcomes and avoid replicating them.

The technology at issue may be new, but the need for businesses to adopt internal processes to promote social good is not. Over the past few decades, industry and civil society have engaged in the development of process-based frameworks for putting human rights and corporate social responsibility principles into practice. These frameworks are useful for informing how companies can implement ethical principles into the design of automated decision-making systems before they are deployed in the wild.

UN Guiding Principles on Business and Human Rights
In 2011, the United Nations (UN) Human Rights Council published its Guiding Principles on Business and Human Rights,19 based on the "Protect, Respect, and Remedy" framework developed by UN Special Representative John Ruggie. The guidance includes operational principles for businesses, including (1) making a publicly available policy commitment to respect human rights; (2) assessing actual and potential human rights impacts that the business may cause or contribute to; (3) assigning responsibility for addressing human rights impacts to the appropriate people within the business; (4) tracking the effectiveness of the business's response to human rights issues through qualitative and quantitative indicators, drawing on feedback from both internal and external sources, including affected stakeholders; (5) reporting publicly on how the business addresses human rights impacts, "particularly when concerns are raised by or on behalf of affected stakeholders"; and (6) providing remedies for adverse impacts.

Digital Rights Corporate Accountability Index
Since 2015, Ranking Digital Rights has evaluated companies' respect for freedom of expression and privacy using its Corporate Accountability Index.20 The index includes measures of internal implementation mechanisms such as (1) employee training, (2) whistleblower programs, (3) impact assessments, (4) stakeholder engagement, (5) grievance and remedy mechanisms, and (6) public disclosure of implementation processes.

GNI Implementation Guidelines
The Global Network Initiative (GNI) Implementation Guidelines for the Principles on Freedom of Expression and Privacy provide details for how technology companies can protect and advance free expression and privacy rights.21 The guidance includes (1) oversight by the board of directors of the company's human rights risk assessments, reporting, and response; (2) employee training; (3) impact assessments, including specific guidance building on the UN framework; (4) ensuring that business partners, suppliers, and distributors also comply with human rights principles; (5) management structures for integrating human rights compliance into business operations; (6) written procedures and documentation; (7) grievance and remedy mechanisms; (8) whistleblowing mechanisms; (9) multi-stakeholder collaboration; and (10) transparency.

Applying Human Rights Implementation Guidance to Automated Decision-Making Systems
The frameworks for implementing human rights principles share a set of common processes that can also support ethical design and use of automated decision-making systems. These common processes are (1) public commitments; (2) employee training; (3) risk assessment; (4) testing; (5) grievance and remedies mechanisms; (6) transparency measures, such as public reporting of ethical review processes or results; and (7) oversight. Here are examples of how these processes could be adapted to the automated decision-making context:

Public Commitments
Companies should make public commitments to uphold ethical principles in the design, training, review, testing, and use of the automated systems they build. These commitments should include a statement of the company's ethical principles.

Employee Training
Companies should develop rigorous ethical training for employees and consultants who build automated systems—particularly engineers and product developers—based on real-world scenarios. Training should be specialized according to the type of company, its mission, and the products it creates, and should train engineers to become adept at spotting ethical issues on their own.

Risk Assessment
Companies should develop comprehensive risk assessments designed to anticipate potential negative or disparate impacts of automated decision-making systems on all individuals and groups likely to be impacted. Potential risks include but are not limited to privacy and data security, discrimination, loss of important opportunities (particularly in the contexts of finance, housing, employment, and criminal justice), and loss of autonomy or choice.

It is important that any risk-benefit assessments be realistic about the potential benefits of an automated decision-making system. For example, claims that collecting more data will help solve complex problems should have a fact-based rationale and should not automatically override concerns about data minimization and privacy.

Companies should conduct individualized risk assessments for each vulnerable population that could be affected by the outcomes of an automated system. The assessments should take into account historical marginalization, disparities in access to resources and justice, and other factors that might lead to harmful disparate impacts.

Risk assessments should also anticipate potential uses and abuses of an automated system by third parties. For example, a company selling an automated decision-making system to a government entity must assess the risk that the government entity will use the system in ways that the company may not have intended or in ways that create disparate impacts or violate rights.

Testing
Automated decision-making systems should be tested for limitations, such as disparate impacts on minority groups. The possibilities and best practices for testing (e.g., whether comprehensive testing before deploying an algorithm in the wild is practicable) may vary across tools and contexts. In some cases, access to the algorithm and testing by outside experts may be the best way to ensure fairness.

Grievance and Remedies
Companies should have clear and transparent mechanisms in place to receive and respond to grievances from individuals who believe they have been harmed by an automated decision-making system. For example, if a social media user believes her content was wrongfully flagged and removed by an automated tool for violating the platform's terms of service, she should be able to report the issue to the company and receive an explanation of why the content violated the terms of service and an opportunity to appeal the decision.

Transparency
Companies should be open about their ethical review processes and mechanisms when practicable. Sharing these processes can help create industry guideposts, facilitate discovery and mitigation of adverse impacts, and foster trust between companies and the public.

Oversight
Several companies have recently taken the important step of hiring a chief ethics officer and—even better—assembling a team dedicated to ethical assessment and review. These teams should have full access to information about projects and the authority to recommend changes to or even halt projects that do not meet ethical standards.

Case Study: Airbnb's Inclusion Team
During the past year, the home-sharing company Airbnb has made a number of internal changes aimed at reducing discrimination on its platform. After struggling with accounts of racial discrimination against would-be guests (those seeking accommodations), the company reviewed its practices and policies to see what structural changes might help reduce racial bias on the platform.22 One of those changes was the creation of a permanent team of engineers, data scientists, researchers, and designers whose sole function is to advance inclusion and root out bias. A key function of this team is assessing what information in a would-be renter's profile—such as photo and name—might trigger a racially biased decision to reject the renter, and whether highlighting other information could mitigate that bias. The team is experimenting with reducing the prominence of profile photos and highlighting objective information like reservation details, reviews of would-be guests from previous hosts, and verifications. For guests who do not have reviews or verifications, the team is exploring how it can use design to improve messaging and social connections to build trust between hosts and would-be guests. The company has also overhauled its processes for receiving and responding to discrimination complaints, making it easier for users to flag and get help for potential instances of discrimination.

In this case, Airbnb is tackling individual human bias rather than algorithmic bias, but the lessons it has learned apply to automated decision-making systems as well: mitigating bias requires an internal acknowledgment of concerns, dedicated personnel, constant evaluation, and thoughtful internal mechanisms. Airbnb has acknowledged that there is no single solution to addressing bias and discrimination on its platform, and it is still experimenting with different approaches.

Conclusion
Automated decision-making systems are more than neutral lines of code. They are agents of policy, carrying out the values embedded in their design or data. They get their values from the choices made by the humans who create them—whether those value choices are conscious or not. A well-developed ethical review process is not a silver bullet for preventing unfair outcomes. But it is necessary if we hope to build systems that promote democracy, equality, and justice.

Endnotes
1. Loomis v. Wisconsin, 137 S. Ct. 2290 (2017).
2. 881 N.W.2d 749 (Wis. 2016).
3. See Petition for Writ of Certiorari, Loomis, 137 S. Ct. 2290 (No. 16-6387), http://www.scotusblog.com/wp-content/uploads/2017/02/16-6387-cert-petition.pdf; Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
4. Angwin et al., Machine Bias, supra note 3; Julia Angwin & Jeff Larson, Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say, ProPublica (Dec. 30, 2016), https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say.
5. John Markoff, How Tech Giants Are Devising Real Ethics for Artificial Intelligence, N.Y. Times, Sept. 1, 2016, https://www.nytimes.com/2016/09/02/technology/artificial-intelligence-ethics.html.

6. Nat'l Comm'n for the Protection of Human Subjects of Biomedical & Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (1979) [hereinafter Belmont Report], https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html.
7. The Tuskegee Timeline, Ctrs. for Disease Control & Prevention, https://www.cdc.gov/tuskegee/timeline.htm (last updated Aug. 30, 2017).
8. Stanford Prison Experiment, http://www.prisonexp.org/ (last visited Oct. 17, 2017).
9. Belmont Report, supra note 6.
10. See, e.g., Kalev Leetaru, Are Research Ethics Obsolete in the Era of Big Data?, Forbes (June 17, 2016), https://www.forbes.com/sites/kalevleetaru/2016/06/17/are-research-ethics-obsolete-in-the-era-of-big-data/.
11. David Dittrich & Erin Kenneally, U.S. Dep't of Homeland Sec., The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research (2012), https://www.caida.org/publications/papers/2012/menlo_report_actual_formatted/menlo_report_actual_formatted.pdf.
12. Id. at 5.
13. Id. at 7.
14. Software Engineering Code of Ethics and Professional Practice, Ass'n for Computing Machinery (1999), http://www.acm.org/about/se-code.
15. See, e.g., Org. of Econ. Co-operation & Dev., The OECD Privacy Framework 70–72 (2013) [hereinafter OECD Privacy Framework], http://www.oecd.org/sti/ieconomy/oecd_privacy_framework.pdf.
16. See Robert Gellman, Fair Information Practices: A Basic History (Apr. 10, 2017) (unpublished manuscript), https://bobgellman.com/rg-docs/rg-FIPshistory.pdf.
17. OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, Org. of Econ. Co-operation & Dev. (1980), http://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm. These guidelines were updated in 2013. OECD Privacy Framework, supra note 15.
18. Deepika Singhania, This Startup Uses AI to Predict Lifestyle Disease Risks, YourStory (Apr. 20, 2017), https://yourstory.com/2017/04/tissot-signature-innovators-club-march-winner/.
19. U.N. Human Rights Council, Guiding Principles on Business and Human Rights, U.N. Doc. HR/PUB/11/04 (2011), http://www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf.
20. 2017 Indicators, Ranking Digital Rts., https://rankingdigitalrights.org/2017-indicators/ (last updated Sept. 14, 2016).
21. Implementation Guidelines for the Principles of Freedom of Expression and Privacy, Global Network Initiative, http://globalnetworkinitiative.org/sites/default/files/Implementation-Guidelines-for-the-GNI-Principles_0.pdf (last visited Oct. 17, 2014).
22. Airbnb hired Laura Murphy to lead the review, and Murphy assembled a cross-department internal team as well as outside experts. Together, they reviewed aspects of Airbnb such as how hosts and guests interact, the company's written policies, enforcement of the policies and response to complaints of discrimination, and the (lack of) diversity on the Airbnb team. Laura W. Murphy, Airbnb's Work to Fight Discrimination and Build Inclusion (2016), http://blog.atairbnb.com/wp-content/uploads/2016/09/REPORT_Airbnbs-Work-to-Fight-Discrimination-and-Build-Inclusion.pdf.
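The "Testing" and "Justice or Fairness" questions discussed in this article (in particular, whether the risk of error is evenly distributed across groups) can be checked empirically once a system's predictions and actual outcomes are logged. The following sketch is purely illustrative: the group names, audit records, and the 0.8 disparity threshold (borrowed from the spirit of the EEOC "four-fifths" rule of thumb) are assumptions made for the example, not requirements of any framework discussed above.

```python
# Illustrative sketch: is the risk of error evenly distributed across groups?
# Group names, records, and the 0.8 tolerance are hypothetical examples.

def false_positive_rate(records):
    """Share of truly negative cases the system wrongly flagged positive."""
    negatives = [r for r in records if not r["actual"]]
    if not negatives:
        return 0.0
    return sum(r["predicted"] for r in negatives) / len(negatives)

def error_rate_disparity(records_by_group, tolerance=0.8):
    """Flag group pairs whose false-positive rates diverge beyond tolerance.

    A ratio below the tolerance between the lower and the higher rate is
    flagged for human review; it is a screen, not a legal conclusion.
    """
    rates = {g: false_positive_rate(rs) for g, rs in records_by_group.items()}
    flagged = []
    for g1, r1 in rates.items():
        for g2, r2 in rates.items():
            if g1 < g2 and max(r1, r2) > 0:
                if min(r1, r2) / max(r1, r2) < tolerance:
                    flagged.append((g1, g2))
    return rates, flagged

# Hypothetical audit log: predicted/actual outcomes per group.
audit = {
    "group_a": [{"predicted": True, "actual": False}] * 4
               + [{"predicted": False, "actual": False}] * 6,
    "group_b": [{"predicted": True, "actual": False}] * 1
               + [{"predicted": False, "actual": False}] * 9,
}
rates, flagged = error_rate_disparity(audit)
print(rates)    # {'group_a': 0.4, 'group_b': 0.1}
print(flagged)  # [('group_a', 'group_b')]
```

A screen like this supports, rather than replaces, the outside-expert testing the article recommends: the flagged pairs simply tell reviewers where to look first.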


WHY CHANGES IN DATA SCIENCE ARE DRIVING A NEED FOR QUANTUM LAW AND POLICY, AND HOW WE GET THERE
BY APRIL F. DOSS

April F. Doss ([email protected]) was formerly the head of intelligence law for the National Security Agency (NSA), and the chair of the cybersecurity and privacy practice at the law firm Saul Ewing. She currently serves as senior minority counsel for the Senate Select Committee on Intelligence (SSCI). The views expressed here are her own, and not those of the NSA, SSCI, or any other organization.

The growing complexity of issues at the intersection of technology, privacy, security, and law has brought us to a turning point similar to the revolution in scientific thinking that has taken place over the past 50 years. For centuries, scientists believed that the laws of physics described by Isaac Newton were adequate to explain the workings of the universe. By and large, for most purposes they still are. But at the edges of our understanding—in the subatomic scale and across the vastness of the universe—Newtonian physics broke down. Something in those laws didn't hold true. The evidence we were collecting made clear that those theories were inadequate to explain new scientific questions we were facing.

At the intersection of law and technology today we are facing a similar revolution. It is now possible to collect data that is so granular—subatomic particles of information, if you will—on such a massive, grand, and continuous scale—and analysis that matches the scale of the universe—that our traditional approaches to law and policy struggle to make sense of what these advances mean for privacy and technology, and leave real doubts about whether law and policy can keep up. In the face of these challenges, we need a new approach, something I think of as "quantum policy." Allow me to relate a few examples of these current challenges, and explain what quantum policy could mean.

Quantum physics asks us to believe two apparently contradictory things simultaneously: that light can be both a particle and a wave, and that it can be both at the same time; or that bits in a computer can register both one and zero at the same time. In the days of Newtonian physics, propositions like these would have felt like something out of Alice in Wonderland, an exercise in believing impossible things. Today, though, we know that Newtonian physics cannot explain subatomic behavior or the cosmos. And that realization led to the development of a new field of physics. It helps to remember that these new theories were driven by necessity: quantum physics came about because the laws of physics we had relied on before were no longer sufficient to explain the way the universe worked.

We are at a similar turning point when it comes to applying traditional law and policy to the questions raised by algorithms, big data, and cybersecurity and privacy law. Law and policy are straining under the pressures imposed by technological advances, and a Newtonian approach is no longer sufficient to answer the questions that are pressing to be clarified, or to keep pace with the widespread scope of rapid and hard-to-predict changes.

What do I mean by a Newtonian approach to law and policy? It is one in which we are content with gradual, incremental change, where we continue to rely on the slow accretion of precedent, where we are content with having critical legal issues decided by disparate cases that take years to wend their way through multiple jurisdictions before arriving at a critical mass of new law that is achieved through an organic evolution. It means continuing to require that there be a case in controversy, refusing to allow courts to offer advisory opinions. It means that once a precedent has been established, it is extremely hard to overturn—the inertia becomes almost insurmountable.

Technology development works differently. Beta versions, user acceptance testing, and minimum viable products are the watchwords of the day, and failing fast to support rapid improvement in iterations is key.

A simple example is one that my intellectual property (IP) colleagues often point to: the timeline for obtaining traditional patent protection for a new invention has made patents on software increasingly obsolete. If the window for a new software product or technique to be cutting edge shrinks to a mere 12 to 18 months, then it may no longer make sense to wait for a patent to issue before bringing the product to market. By the time patent protection is obtained, the product itself would be obsolete. Of course, this does not apply to all IP issues, and not all the time. But this is a real and genuine concern among technology developers and business owners who are struggling to fit their more agile business model into a framework of traditional, and often slow, legal processes.

It isn't possible, of course, to map a precise one-for-one comparison between the evolution of hard science and law and policy. [. . .] quantum physics as a conceptual analogy for the ways in which we might approach the challenge of modernizing law and policy.

According to quantum mechanics, light can be a wave and a particle at the same time, and both properties can be leveraged. Relying on duality, rather than resisting it, leads to quantum gains. Quantum mechanics allows us to zoom in, to understand the behavior of things at almost unfathomably micro levels, leading to advances in miniaturization that were previously unimaginable. Quantum mechanics also explains the behavior of the universe at macro levels. It fills in the questions raised by Einstein's theories of relativity, and allows us to zoom out and take stock of the heavens in ways that we could not have predicted only a short time ago.

Law and policy need to embrace some analogous approaches. One of these is the ability to embrace, rather than resist, duality: not just balancing rights or ideals that seem to conflict, but encouraging them to thrive at the same time, and understanding that sometimes outcomes are probabilistic.

Law and policy need the ability to zoom in, to tackle legal and policy questions at a level of detail that many practitioners resist. How often have you heard, or perhaps said, "I'm not technical" when talking about a complex question with a client? Lawyers should no longer allow themselves the intellectual sloppiness of saying that. It doesn't take a computer science degree to embrace the intellectual challenge of understanding what data objects are, or how analytics work, or to consider the territoriality and jurisdictional questions raised by actions and information that traverse the world's communications networks. We shirk our duties when we shy away from trying to understand, at a layman's level, the technologies that are shaping our world.

Law and policy also need to zoom out, to know when it's useful to describe black holes—to look at the macro effects of an analytic, for example—instead of focusing on the bits of data like grains of sand.

We need to recognize the limits of traditional approaches to law and policy, and to look for appropriate ways to bring to bear concepts like minimum viable product, rapid iteration, and failing fast and improving often, which have been key to technological advances in the private sector. The idea of failing at anything is anathema to lawyers: partly because so many of us are ego-driven (often to unhealthy degrees), and also because we value the stability that comes with predictability in the law, and we want to avoid actions that could bring about unintended societal injustice, whether to an individual or group.

However, the slow pace of legal and policy developments in key areas such as big data and data analytics, cybersecurity, and privacy means that in fact we are already failing to provide the certainty, guidance, and resolution of issues that justice demands. If we are honest, it shouldn't be hard to recognize the enduring wisdom in the words of William Gladstone or William Penn that "justice delayed is justice denied."1

A more creative, rapid, and innovative approach to these issues could allow the law to move forward more quickly—but only if it is done in a way that has appropriate safeguards built in to counterbalance the risks of uncertainty in outcomes, unintended consequences, and erroneous judgments.

Subatomic Particles and the Universe of Conclusions: Quantum Policy for Big Data Analytics
A number of years ago, when I was at the National Security Agency (NSA), I had the privilege of leading the group that was charged with developing a new approach to vetting the legal and policy ramifications of big data analytics. I would like to be able to say that a team of smart lawyers and intelligence oversight officers were able to come up with innovative solutions, but that wouldn't be accurate. Instead, what happened was that a cross-disciplinary team emerged: one that included software developers, intelligence analysts, people with expertise in data tagging and in platforms, and yes, people who were steeped in the specifics of the policy, legal, and oversight regimes.

We knew that we needed to create a framework that would be flexible enough to accommodate the wide range of analytics that would undoubtedly emerge over time. It needed to be agile enough to make decisions quickly—speed is often critical in intelligence operations, and we wanted to give people confidence that new ideas could be run through the vetting process quickly enough to meet mission needs. It needed to be adaptable over time, able to incorporate best practices as we learned them. It needed to include a definitive registry of decisions, allow periodic re-review when analytics or
legal or policy requirements changed, and be scalable so that the review process could keep pace with a demand signal that was likely to grow.

With all of these requirements in mind, we assumed that risk-based tiering made sense. We knew we would need a decision matrix, something that could be translated into codable yeses and nos. We would need an iterative approach: evaluating analytics to create the matrix, training people on the matrix once it existed, and adding new rules to the matrix every time a new analytic review prompted a new rule. This cycle of constant updates would allow our decision-making processes to continue to grow and mature. And all of this would only be possible if we created a standing framework of human review, documentation, and thresholds for decision making that could be repeatable, scalable, and timely, and could grow.

The traditional approach of having one of our in-house clients bring us a problem would barely have worked for vetting a single cloud analytic—the intelligence analysts could have articulated the intended mission outcome, but probably could not describe with precision the ways the data would interact with each other or provide a comprehensive overview of the legal and privacy protections built into the computing platform where the analytics would run. The technology team could tell us how the data would interact, but were not as well positioned to gauge what the impact would be on intelligence analysis if an analytic were tweaked in a particular way at our suggestion. Relatively few lawyers have the technical and operational expertise to fully evaluate the legal implications of a complex analytic involving data governed by many different sets of rules. And analytic decision making required a repeatable approach that could tackle not just the legal advice but other dimensions of the decision as well: policy approval, resource commitment, and integrating legal advice with technical review to ensure that an analytic—while it might be given a green light by the lawyers—wouldn't be run until someone else had made sure that it wouldn't end up crashing its platform.

In other words, Newtonian approaches to legal review were not enough to deal quickly or well with assessing how complex analytics would operate on small bits of data, or to address the outcomes they would drive when they were run at a large scale. With all of that in mind, the analytics vetting team crafted a novel, interdisciplinary way of reviewing analytics, codifying the results, creating a repeatable framework, and ensuring that the ecosystem for analytic vetting could continue to evolve and change as the people using the framework learned more. In creating this analytic vetting framework, this small, interdisciplinary team took a quantum leap forward into new ways of managing complexity, volume, scale, iteration, and a host of other issues that arose with the challenge of big data analytics. In other words, quantum policy had just taken hold.

Simultaneous Belief in Two Seemingly Contradictory Things
Anyone watching national security and privacy debates can see the ways in which black-and-white thinking obscures the complexity of the issues we face. On the national security side, zealous advocates with mental blind spots sometimes fail to acknowledge that many law enforcement and electronic surveillance programs raise genuine privacy impacts. While many people might believe that some privacy intrusions are worth the increased security benefit (I count myself as among that group, having testified publicly in favor of the renewal of Foreign Intelligence Surveillance Act (FISA) section 702[2]), it would be wrong to pretend that no privacy impacts exist; the better question is whether those privacy impacts are justified by the national security gain. Similarly, some privacy advocates disregard the real benefits of national security, going so far as to praise leaks of sensitive information and lionize the leakers. This could not be more short-sighted. When the privacy community celebrates theft of data from government systems and the unauthorized release of classified information, they undermine the credibility of the important privacy values that they are trying to speak for.

When it comes to private sector activities, many privacy advocates applaud companies that refuse to cooperate with the government for law enforcement or national security purposes. National and multinational companies use their opposition to judicial warrants as a marketing tool. While multinational companies face genuinely complex dilemmas over whether and how to comply with the vast array of laws across all of the jurisdictions where they do business, privacy advocates are mistaken to think of these companies as acting purely from altruism, when the same companies carry out data collection and analysis schemes that are more comprehensive, intrusive, and unfettered than anything that is typically lawful for the government to do in Western democracies.

Cybersecurity puts the paradox in stark relief. Most privacy advocates want to see a high level of security for systems holding personal data. But in order to achieve a high level of cybersecurity, it is often necessary to implement some degree of network and user activity monitoring—which has the effect of being privacy-intrusive. Today, many companies capture detailed system, network, and user log information and run complex analytics on it, perhaps combining that data with other behavioral indicators, in order to detect and assess insider threat. Yet doing so necessarily comes at a privacy cost.

Modern technology provides us with a nearly endless supply of seemingly contradictory positions, of situations in which the on-the-one-hand and on-the-other discussion seems to go on without end. We need approaches to law and policy that will help us harmonize those tensions, rather than simply thinking of one thing versus another; we need to be able to embrace them both.

What to Do When, under Existing Laws, the Behavior of the Universe Is Hard to Predict
Although cybersecurity is a challenge for everyone, what you see depends on where you sit, and cybersecurity legal risk looks different, and less definitive, from the private sector perspective than from the government perspective.

First, in the private sector, you can get sued. It doesn't matter whether you are a multinational corporation or a small

business that happens to hold personal information—like the Social Security numbers of your employees, or the credit card numbers of your customers. Most government entities do not share that concern. In the classic government case, there is a premium on confidentiality of information, and there is high concern over integrity and availability. While both government and private entities can face devastating impacts from the loss of secrets, the government can rarely be sued. Private entities have to think about all of the cybersecurity strategies that government entities use, and they also need to incorporate an additional set of risk mitigation tools in order to manage their cybersecurity risk—tools like insurance coverage, contractual language, limitations on liability, and representations and warranties.

Second, in cybersecurity lawsuits there is no clearly defined standard of care. Frequently, a company cannot be confident that the cybersecurity measures it has taken will be deemed to have been "enough" in the face of a breach. As a result, cybersecurity risk assessments now permeate every aspect of commercial life, from mergers and acquisitions, to cross-border data transfers of personnel and customer information, to complying with the patchwork of data breach notification laws in this country alone.[3]

Further complicating matters, companies that have been hacked often feel as though they are victimized twice: first, as the victim of a computer crime; and second, as the victim of a federal or state enforcement action or regulatory probe seeking, with perfect 20/20 vision, to determine whether their cybersecurity preparedness had been "reasonable," despite the lack of clearly defined standards. Private sector entities know it is unlikely that law enforcement will be able to reach attribution for a cyberattack—much less make indictments or arrests. And so they are reluctant to provide the government with information that could help identify cybersecurity threat trends. I often encouraged my clients to report cyberattacks to law enforcement. But there is a kernel of truth in those reservations my clients expressed: we treat cybercrime differently from other crimes. We expect the police to keep the city streets generally safe; we know that a bank that reports a theft to the police is unlikely to be held responsible for failing to stop the thief. At the same time, we know that the government cannot ensure cybersecurity for the private sector; we are not even sure that it should try. What we are left with is an odd tension in which hacking is a crime, but the victim shoulders part of the blame. (And if the victim is fined, those penalties are likely to be paid to the state, rather than as restitution to the second order of victims, such as customers or patients whose information has been breached.) We suspect that some kind of public-private partnership has to be central to solving the cybersecurity conundrum, but we are having an awfully hard time figuring out how to get there.

What Will the Future Bring?
The trends in technology and networking are driving us toward more widespread harvesting of information and increasingly complex ways of using it that range from consequential to comical. In just the past year in the private sector, here are a few examples of things we have seen:

• In California, an employee sued her company for requiring her to install an app on her phone that would track her location even when she was off work.
• Complex data analytics are being used to influence decisions by parole boards, and also to prioritize the text messages that come into a crisis hotline.
• A committee in the U.S. House of Representatives approved legislation that would allow employers to require employees to undergo genetic testing and grant employer access to those results along with other health information.[4]
• Litigation was brought against the manufacturer of an app-controlled sex toy, alleging invasion of privacy because the company did not tell its customers that it was harvesting data from the app about what settings were used, when, and how often.
• We know that there are widespread insecurities in the Internet of Things, along with widespread adoption of in-home devices from smart refrigerators to audio-powered personal in-home assistants. In Germany, children's toys with cameras and artificial intelligence (AI) were recalled because they were capable of spying.
• Biometrics are increasingly used for identification, and people are volunteering to have implanted microchips that can be used for everything from tracking the time they spend at work to opening security doors and presenting their mass transportation passes.

Private sector litigation over data breaches keeps growing: suits against companies by individual customers and patients; derivative lawsuits by shareholders against directors and officers for failing to ensure effective cybersecurity measures were in place; and regulatory or enforcement actions taken by governments. Insurance companies have started underwriting policies for property damage and personal injury that result from a cyberattack with impacts in the physical realm: when a car's steering system is hijacked, or a dialysis machine is shut down, or traffic lights are switched to show green in all directions, or an app that's supposed to control a home cooktop is hijacked by a malicious user to turn on a stove, in turn causing a fire that burns down a house.

Consumers still say they want privacy, and they also still value the benefits that come from commercial products that harvest their information for digital personal assistants and in-home security systems, more accurate driving directions, the fun of locating friends online, interactive children's toys, or other AI. Many consumers do not read privacy notices or understand them, and governments struggle to decide where the line is between appropriate consumer protection and economic-strangling paternalism.

Through it all, law and policy will struggle to keep up, because the pace of technology change is limited only by ingenuity and imagination, and to a much lesser extent by the laws of electrical

engineering and physics. In other words, recent years have shown us a world in which the challenges of big data, privacy, and information security grow more complicated, more multidimensional, and more interconnected. And those challenges are greatest when dealing with the extraordinarily detailed types of information now available about nearly everyone, and in dealing with the macro-level conclusions being drawn from analytics that are assessing that information.

How Quantum Policy Can Help
Just as quantum physics paved the way for technologies from which the entire world benefits, quantum policy can help support the effort to democratize data science, privacy, and related technology.

From its inception, the Internet has been a great democratizing force, making unprecedented volumes of information available to millions of people all over the world, often for free (or included in the price of a cell phone service plan). Free webmail services made global communication cheap, easy, and practically instant for billions of people around the globe who would never previously have considered sending international letters or telegrams. Voice over Internet Protocol (VoIP) connections made real-time conversations with voice and video available to billions of people who never could have afforded international long-distance phone calls. Web hosting, blogs, YouTube, you name it—nearly everything necessary for the exchange of ideas and commerce has been put into place by the innovation engine of the Internet, at a price that makes these tools accessible on a scale that could never have been imagined before. Data has never been more ubiquitous or available. At the same time, it is impossible to escape the potential negative impacts: the same AI that promises rapid innovation, the same genetic information that promises new medical treatments, and the same convenience of personalized traffic recommendations can also be used to bring about unprecedented invasions of privacy. And the same access to information that supports democracy can also be subverted to cement the iron rule of authoritarian regimes.

The profusion of data and ways to manipulate it, the paradox of privacy in an era of living online, the question over who can misuse information and how, or how to think about consumer choice in a time when we freely give away data but do not always understand how it is used—all of these dilemmas are forcing us to acknowledge the limits of our current legal and policy approaches to them. If we do not modernize our approaches to law and policy, they will continue to lag sorely behind the pace of technological change, with real-life consequences—some of them unintended—for individuals, organizations, and governments. Here are a few examples of what quantum policy could look like:

• We need to admit that we cannot put the tech genie back in the bottle or reseal Pandora's box; legal originalism has limited effectiveness in judicial review of these kinds of matters.
• We need to avoid the temptation to use scaremongering, or else we will miss the opportunity for clear and rational discussions about the ways technology can be privacy-protective.
• We need to educate the public without paternalism or condescension.
• We need to teach technology in law schools, teach privacy in technology schools, and increase the use of cross-disciplinary teams.
• We need to take rigorous approaches to data risk, being sure we understand where the biggest data really is, and assign accountability to both the government and the private sector accordingly.
• We need to pursue cost-effective data security, realigning penalties to harm.
• We need to consider duality: for example, if the encryption and going-dark debate falters because of black-and-white thinking, we should consider alternatives outside of that binary box. Perhaps encryption and back doors are not the only ways to achieve the goals of keeping data private and secure, while allowing the government to access it for a legitimate purpose.
• We need to focus privacy policy on the end goals and sensitivity of the information, rather than focusing on how it was acquired (commercial purchase, government warrant, etc.), and adopt policies that avoid unnecessarily placing the United States at a disadvantage against other nations, either in national security or economic terms.
• We should consider ways to implement test cases and incubator environments for legal and policy evolution.

When it comes to dealing with legal and policy issues for emerging technologies, we are still largely living in a Newtonian age. If we do not make the leap to quantum policy, our entire ecosystem of jurisprudence, litigation, legislation, intellectual property, and privacy rights will suffer as a result. Quantum physics did not evolve overnight, and neither will quantum policy. But by looking at examples of innovations that have worked, and by assembling cross-disciplinary teams to think in creative ways about the challenges that face us, we can move toward solutions that allow the law, and lawyers, to keep up.

Endnotes
1. The quote has been variously attributed to both.
2. Section 702 of the Foreign Intelligence Surveillance Act: Hearing Before the H. Comm. on the Judiciary, 115th Cong. 20 (2017) (statement of April F. Doss, Partner, Saul Ewing LLP).
3. At last count, 48 states had their own data breach laws. These laws have different definitions of an actionable breach; they impose different timelines for notifying victims; some require notifying state government agencies and others do not; and they have different requirements for the information to be included in consumer and regulator notifications.
4. Sharon Begley, House Republicans Would Let Employers Demand Workers' Genetic Test Results, PBS Newshour (Mar. 11, 2017), http://www.pbs.org/newshour/rundown/house-republicans-let-employers-demand-workers-genetic-test-results/.
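The vetting approach the article describes lends itself to a simple illustration: a decision matrix of rules translated into "codable yeses and nos," with outcomes assigned by risk tier and new rules added as each new analytic review teaches something. The sketch below is a toy model of that general technique only; the rule names, checks, and tiers are invented for illustration and are not the actual NSA framework.

```python
"""Toy sketch of a rule-based, risk-tiered analytic vetting matrix.

Hypothetical illustration of the general technique: each rule is a
codable yes/no check; failures map to a risk tier; the matrix grows
iteratively as new reviews prompt new rules.
"""

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Analytic:
    name: str
    data_sources: set[str]
    runs_at_scale: bool
    has_human_review: bool


@dataclass
class Rule:
    name: str
    check: Callable[[Analytic], bool]  # True means the analytic passes this rule
    tier_on_fail: str                  # "reject" or "escalate"


@dataclass
class VettingMatrix:
    rules: list[Rule] = field(default_factory=list)

    def add_rule(self, rule: Rule) -> None:
        """Iterative growth: each new analytic review can add a rule."""
        self.rules.append(rule)

    def vet(self, analytic: Analytic) -> tuple[str, list[str]]:
        """Run every rule; return (tiered decision, names of failed rules)."""
        failures = [r for r in self.rules if not r.check(analytic)]
        if not failures:
            return "approve", []
        if any(r.tier_on_fail == "reject" for r in failures):
            return "reject", [r.name for r in failures]
        return "escalate-to-human-review", [r.name for r in failures]


# Two hypothetical rules, encoded as yes/no checks.
matrix = VettingMatrix()
matrix.add_rule(Rule("no-restricted-sources",
                     lambda a: "restricted" not in a.data_sources,
                     "reject"))
matrix.add_rule(Rule("human-review-at-scale",
                     lambda a: a.has_human_review or not a.runs_at_scale,
                     "escalate"))

decision, failed = matrix.vet(Analytic("traffic-trends", {"open"}, True, True))
print(decision, failed)  # -> approve []
```

Vetting walks every rule and reports which checks failed, so the outcome is documented and repeatable; adding a rule after a novel review is a single `add_rule` call, which is the "cycle of constant updates" the article describes.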

IN MEMORIAM
CHARLES RAY "CHAS" MERRILL

BY STEPHEN S. WU AND MICHAEL S. BAUM

Charles Ray "Chas" Merrill, 76, passed away on April 25, 2017, in Denver, Colorado. In his practice, he served as a partner in McCarter & English's Newark, New Jersey, office for many years. After graduating from Harvard Law School in 1965, he began his career in the area of tax law, estate planning, and corporate law. He received a tax LLM from New York University in 1970. However, he later moved into information security and electronic commerce law. He served as an active member of the Section's Information Security Committee (ISC) in the 1990s and 2000s before he retired in 2006.

In his work for the Section, he was a pioneer, an intellect, and a patient mentor. He provided wise counsel to ISC leadership, and was a dear friend to many in the ISC. In the earliest days of the development of e-commerce—when it was still called electronic contracting—Chas quickly distinguished himself as a thought leader in the frenetic world of digital signature law and policy. Chas was a co-reporter and key leader in producing two of the Section's path-breaking publications on public key infrastructure: Digital Signature Guidelines and PKI Assessment Guidelines.

As the ISC leader for these two publications, Chas maintained a steady (and often paternal) hand in leading the many (sometimes vexing) drafting initiatives. He often exercised his unique personal charm and humor to help maintain a shared mission and organize diverse legal and nonlegal professionals into teams that produced sections of the Digital Signature Guidelines and PKI Assessment Guidelines. These two publications were the heart of the ISC's work plan for many years, and both publications had a worldwide influence on digital signature laws and industry practices for many years.

Chas also helped the ISC present programs at the RSA Conference, the world's leading information security conference. For instance, he acted as the judge in the very first mock trial that the Section cosponsored with the RSA Conference in 2007. Audience members raved about the mock trial program in their reviews.

Chas had a special ability to inspire and lead teams of people, an activity that sometimes resembled "herding cats." His leadership also included an over 20-year involvement as an adult Scout leader for the Boy Scouts of America. He received the Silver Beaver Award from his local council, the highest award a council can bestow on an adult leader.

Denley Chew, an attorney working at the Federal Reserve Bank of New York, wrote, "Chas was an early pioneer of this new emerging field of 'information security law' who really helped make the ISC into what it is today—intellectual, social, constructive—and for that I think we will always be grateful and aspirational."

Chas's work is for the ages. He was a true gentleman who will be sorely missed.

Stephen S. Wu ([email protected]) served as the 2010–2011 Chair of the ABA Section of Science & Technology Law. Michael S. Baum ([email protected]) served as the founding Chair of the Section's Information Security Committee and Electronic Commerce Division.

Section of Science & Technology Law
American Bar Association
321 North Clark Street
Chicago, Illinois 60654-7598

Nonprofit Organization
U.S. Postage PAID
American Bar Association
PC54580001401

CONNECT WITH SCITECH

#SCITECH

Each year, thousands of members count on SciTech to put the tools they need at their fingertips to navigate rapidly changing science and technology law issues:

THREE INFORMATION-PACKED PERIODICALS:
• The SciTech Lawyer, our glossy quarterly magazine with cutting-edge coverage
• Jurimetrics, our quarterly electronic law review published by the Section and the Center for Law, Science & Innovation of the Sandra Day O'Connor College of Law at Arizona State University
• SciTech e-Merging News, our quarterly electronic newsletter with timely practice perspectives and Section activities/opportunities

• Unlimited access to over 20 free hot-topic committees and list serves
• Free access to our podcast archive on emerging issues
• Free access to Section articles via the SciTech e-Archive and access to chapter archive
• Selected chapters from preeminent SciTech books and ABA Web Store (delivered electronically)
• Members-only discounts on Section books (save 10% or more) and CLE programs (save $100 off the public rate, $50 off the ABA rate)
• Two free webinars/teleconferences on hot topics (notices sent electronically)
• Exclusive career and business development resources
• Limited-time special offers

CALENDAR OF EVENTS

November 9, 2017
ASCENDING STARS OF ABA SCITECH: HOT TOPICS IN 60 MINUTES
Teleconference (Free to SciTech)

November 16, 2017
MEDICAL DEVICES AND DIGITAL HEALTH: IDENTIFYING OPPORTUNITIES AND MANAGING RISKS
New York, NY

November 17, 2017
DRIVING THE NEXT GENERATION: LEGAL UPDATE ON SELF-DRIVING CARS
CLE Webinar

February 1–2, 2018
SCITECH MEETINGS AND EVENTS AT ABA MIDYEAR MEETING
Vancouver, BC

Published in The SciTech Lawyer, Volume 14, Number 1, Fall 2017. © 2017 American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.