This is a rough draft and pre-print version of an upcoming chapter for an OUP publication on AI governance and public management.

The Current State of AI Governance – An EU Perspective
Section 3: Developing an AI Governance Regulatory Ecosystem
Authors: Mark Dempsey, Keegan McBride, Joanna J. Bryson

Abstract

The rapid pace of technological advancement and innovation has put governance and regulatory mechanisms to the test. There is a clear need for new and innovative regulatory mechanisms that enable governments to successfully manage the integration of such technologies into our societies and to ensure that this integration occurs in a sustainable, beneficial, and just manner. Artificial Intelligence stands out as one of the most debated of these innovations. What exactly is it, how should it be built, how can it be used, and should it be regulated, and if so, how? Even as this debate continues, AI is becoming widely utilized within existing, evolving, and bespoke regulatory contexts. The present chapter explores what is arguably the most successful AI regulatory approach to date: that of the European Union. We explore core definitional concepts, shared understandings, values, and approaches currently in play. We argue that, due to the so-called ‘Brussels effect’, regulatory initiatives within the European Union have a much broader global impact and therefore warrant close inspection.

Introduction

The continual process of societal digitalization has led to a dynamic and almost amorphous regulatory environment. One of the key concepts currently at play within this dynamic is that of Artificial Intelligence (AI). A clearly defined regulatory environment for AI is necessary not only for developing shared understandings of the technology but, more urgently, for ultimately enabling governments to protect their own and their citizens’ interests.
In today’s digitalized world, transnational dependencies are increasingly common, and transnational regulatory and governance frameworks are therefore needed that take these dependencies into account. The way that the European Union (EU) has handled digital regulation generally, and AI in particular, is therefore of great interest. The EU is a trading bloc of independently functioning and historically warring nations that now ‘harmonize’ their legislation to create unified market policies, giving all member nations disproportionate power on the global stage as well as control within their own borders. AI has become a key aspect of the political dossier of the EU’s executive branch, the European Commission (EC). So much so that, in her first speech before the European Parliament, the new president of the EC, Ursula von der Leyen, committed to adopting “a coordinated European approach on the human and ethical implications of artificial intelligence” (Von der Leyen, March 2020, p. 13). With this in mind, this chapter charts the most salient attempts to date at providing global and transnational governance frameworks for AI, with an emphasis on existing EU regulatory proposals. Such proposals include the Digital Services Act (DSA) and an AI white paper, which forms the basis of a formal regulatory proposal due in April 2021. Additionally, this chapter discusses the ‘Brussels effect’ (Bradford, 2020), which refers to the increasing extent to which EU regulations extend to actors outside the jurisdiction of the EU.

Defining AI & Context

Definitions of, and attempts to define, AI abound. AI is all around us (Bryson & Wyatt, 1997) and, like other systems, it can often be made out to be more complicated than it needs to be. Unfortunately, such convolution is often deliberate, aiming to advance an agenda or bypass potential scrutiny.
In addition, AI remains a contested concept given its universality as a general-purpose technology (Bryson & Brundage, 2016). To date, it has therefore been difficult to reach a consensus definition of AI, which has negative implications for its regulation and governance. For contextual purposes, it is important to understand the interests and ambitions that drive certain definitions. For example, the Organisation for Economic Co-operation and Development (OECD) has a trade and economic progress mandate; this is reflected in the definition it published with its OECD Principles on AI [1]. To the OECD, an “Artificial Intelligence (AI) System is a machine-based system that can, for a given set of human defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” (OECD, 2021, p. 7). Such a definition can be contrasted with those that organizations such as the Council of Europe (CoE) and the EU Agency for Fundamental Rights (FRA) are beginning to coalesce around, such as that offered by the EU’s High-Level Expert Group on AI (AI HLEG). This definition, which is long and further complicates matters, is provided in its entirety below due to its increasingly common use:

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.
As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).” (AI HLEG, 2019, p. 2)

While the latter definition is in common use, it is lengthy and reflects conscious efforts by some members of the AI HLEG to craft a definition that may allow their products to avoid regulatory scrutiny. With this in mind, this chapter prefers a more succinct approach to defining AI: “AI is any artifact that extends our own capacities to perceive and act” (Bryson, 2019). Although it is an unusual definition, it might, as Bryson notes, “…also give us a firmer grip on the sorts of changes AI brings to our society, by allowing us to examine a longer history of technological interventions” (Bryson, 2019).

An overview of existing transnational governance efforts

The importance of transnational governance efforts is increasingly noted, and research on the topic has begun to proliferate rapidly, especially research focusing on the regulation of technological advances and the global coordination of such regulation (Erdélyi et al., 2018; Deeks, 2020; Crootof et al., 2021; Beaumier et al., 2020). Despite this growing interest, one common error is often made: many scholars fail to acknowledge, or simply omit, the fact that AI has already been subject to, and has in fact been the beneficiary of, decades of regulatory policy. Research and deployment of AI has, so far, been primarily up-regulated, with very significant government and other capital investment (Miguel and Casado, 2016; Technology Council Committee on Technology, 2016; Brundage and Bryson, 2016; cf.
Bryson 2019). In the context of the above, it is therefore important to note the following three points:

1. Any such AI regulatory policies should be, and basically always will be, developed and implemented in light of the importance of respecting the positive impacts of technology as well.

2. No one is talking about introducing regulation to AI; AI already exists within a regulatory framework (Brundage and Bryson, 2017; O’Reilly, 2017). What is being discussed is whether that framework needs optimizing.

3. Regulation has so far been almost entirely constructive, with governments providing vast resources to companies and universities developing AI. Even where regulation constrains, informed and well-designed constraint can lead to more sustainable and even faster growth.

It is possible to draw comparisons with the finance sector. Finance has always been regulated, but, as the global financial crisis (GFC) of 2007–2009 demonstrated, regulations must, and often should, be overhauled and optimized. We are at a similar crossroads with AI. As the technologist Benedict Evans argues: “Tech has gone from being just one of many industries, to being systemically important to society.” (Evans, 2020) If something is ‘systemically important to society’, it must be governed and regulated. Indeed, day-to-day life is becoming increasingly intertwined with AI, and the welfare of society and citizens may be influenced by decisions that are increasingly made by algorithms. Such changes have led to extensive research (e.g., Algorithm Watch, 2020; EU Fundamental Rights Agency, 2020) and a further drive for downwards regulation, not least where privacy, surveillance, bias, and discrimination are concerned. The lack of any formal regulatory structure to address AI concerns on a

[1] Agreed in May 2019 by 42 states and adopted by the G20 in 2019; see the OECD Principles on AI, available at https://www.oecd.org/going-digital/ai/principles/