ADMINISTERING ARTIFICIAL INTELLIGENCE

ALICIA SOLOW-NIEDERMAN*

As AI increasingly features in everyday life, it is not surprising to hear calls to step up regulation of the technology. In particular, a turn to administrative law to grapple with the consequences of AI is understandable because the technology’s regulatory challenges appear facially similar to those in other technocratic domains, such as the pharmaceutical industry or environmental law. But AI is unique, even if it is not different in kind. AI’s distinctiveness comes from technical attributes—namely, speed, complexity, and unpredictability—that strain administrative law tactics, in conjunction with the institutional settings and incentives, or strategic context, that affect its development path. And this distinctiveness means both that traditional, sectoral approaches hit their limits, and that turns to a new agency like an “FDA for algorithms” or a “federal robotics commission” are of limited utility in constructing enduring governance solutions.

This Article assesses algorithmic governance strategies in light of the attributes and institutional factors that make AI unique. In addition to technical attributes and the contemporary imbalance of public and private resources and expertise, AI governance must contend with a fundamental conceptual challenge: algorithmic applications permit seemingly technical decisions to de facto regulate human behavior, with a greater potential for physical and social impact than ever before. This Article warns that the current trajectory of AI development, which is dominated by large private firms, augurs an era of private governance. To maintain the public voice, it suggests an approach rooted in governance of data—a fundamental AI input—rather than only contending with the consequences of algorithmic outputs. Without rethinking regulatory strategies to ensure that public values inform AI research, development, and deployment, we risk losing the democratic accountability that is at the heart of public law.

*. 2020–2022 Climenko Fellow and Lecturer on Law, Harvard Law School; 2017–2019 PULSE Fellow, UCLA School of Law; and 2019–2020 Law Clerk, U.S. District Court for the District of Columbia. Alicia Solow-Niederman drafted this work during her tenure as a PULSE Fellow, and the arguments advanced here are made in her personal capacity. This Article reflects the regulatory and statutory state of play as of early March 2020. Thank you to Jon Michaels, Ted Parson, and Richard Re for substantive engagement and tireless support; to Jennifer Chacon, Ignacio Cofone, Rebecca Crootof, Ingrid Eagly, Joanna Schwartz, Vivek Krishnamurthy, Guy Van den Broeck, Morgan Weiland, Josephine Wolff, Jonathan Zittrain, participants at We Robot 2019, and the UCLAI working group for invaluable comments and encouragement; to Urs Gasser for conversations that inspired this research project; and to the editors of the Southern California Law Review for their hard work in preparing this Article for publication. Thanks also to the Solow-Niederman family and especially to Nancy Solow for her patience and kindness, and to the Tower 26 team for helping me to maintain a sound mind in a sound body. Any errors are my own.

TABLE OF CONTENTS

INTRODUCTION
I. BEYOND FORMAL REGULATION
   A. FROM REGULATION TO COLLABORATION
   B. CODE, LAW, AND REGULATION
II. ADMINISTRATIVE PARADIGMS
   A. PRESCRIPTIVE REGULATION
      1. Pharmaceutical Clearance by FDA
      2. Against “Command-and-Control” for AI
         a. Speed
         b. Complexity
            i. Interpretability
            ii. Domain Expertise
         c. Unpredictability
            i. Uncertainty
            ii. Emergence
   B. COLLABORATION AND NEGOTIATION
      1. From Regulation to Governance in Environmental Law
      2. Governance Challenges for AI
III. IN SEARCH OF ACCOUNTABILITY
   A. CODE AS POLICY
   B. THE ALGORITHMIC GOVERNANCE DILEMMA
      1. The Public-Private Dilemma
      2. Recasting the Terms of the Dilemma: Protecting Values by Design
         a. Market-Driven Data Governance
         b. Norm-Driven Data Governance
CONCLUSION

INTRODUCTION

Popular media coverage about artificial intelligence (“AI”)1 often makes it sound as though the technology itself is an autonomous actor. It’s easy to understand the urge to anthropomorphize AI: sometimes, the results of algorithmic research are altogether different from what the data scientists who created the algorithm expected, in ways that suggest algorithmic autonomy.2 Consider, for instance, an algorithm that was told to sort data. Like Amelia Bedelia, the software took this directive a bit too literally and deleted all the data fed to it, apparently on the theory that data that did not exist could not be considered unsorted.3 But thinking about the algorithm as the actor here is dangerous because it elides the role of the humans, and the institutions within which they sit, who are in fact responsible in the first instance for the data selection and programmatic choices that create what we call AI today.4 In the case of the data-sorting algorithm, the algorithm was not thinking independently about how to playfully evade a directive; rather, it was following instructions given to it by data scientists, who had not realized that their algorithmic agent could comply with the letter but not the intent behind the directive in an unexpected way.
Such algorithmic creativity might, in the right setting, be exciting and generative insofar as it catalyzes new research approaches. As anyone who has grown frustrated with a toddler can attest, though, childlike evasion of a directive is not always funny. Take the researchers who programmed an

1. This Article defines artificial intelligence as a class of technologies that rely on some form of automated decisionmaking executed by a computer. AI, as used in this Article, includes both AI-driven robots and AI algorithms that lack a bodied form, whether they employ machine learning or another method. See Jack M. Balkin, The Path of Robotics Law, 6 CALIF. L. REV. CIR. 45, 45–46 (2015) (“I do not distinguish sharply between robots and artificial intelligence (AI) agents. As innovation proceeds, the distinction between these two kinds of technologies may be far less important to the law . . . .”); Andrew Tutt, An FDA for Algorithms, 69 ADMIN. L. REV. 83, 85 n.2 (2017) (noting a “terminological divide in legal scholarship” wherein “[s]ome of the most prominent authors in the field prefer to conceive of algorithmic regulation as the problem of regulating robots” and asserting that “algorithms are the appropriate unit of regulation”). The understanding used in this Article thus applies “regardless of whether [the particular technologies under consideration] are physical robots, such as care robots or driverless cars, or software systems, such as medical diagnosis systems, intelligent personal assistants, or algorithmic chat bots, in real, virtual, contextual, and mixed-reality environments.” INST. OF ELEC. & ELEC. ENG’RS, ETHICALLY ALIGNED DESIGN: A VISION FOR PRIORITIZING HUMAN WELL-BEING WITH AUTONOMOUS AND INTELLIGENT SYSTEMS 17 (1st ed. 2019), https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf [https://perma.cc/MC7P-KZJX].

2. See Joel Lehman et al., The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, ARXIV 7 (Nov. 21, 2019), https://arxiv.org/pdf/1803.03453.pdf [https://perma.cc/2U68-SCX9] (documenting research results that differed from what data scientists expected).

3. Id. at 8; see also Meet Amelia Bedelia, AMELIA BEDELIA, https://www.ameliabedeliabooks.com/meet-amelia-bedelia [https://perma.cc/FX8U-YQPN] (“She does exactly what you tell her to do – so be careful what you say.”).

4. This Article limits itself to present and near-term AI capabilities, in which humans supervise and
