Don’t explain. Optimize.

David Weinberger, senior researcher, Harvard Berkman Klein Center

Because machine-learning systems are not only going to make mistakes but are also going to reflect the human biases expressed in the data they are trained on, many – including the European Commission via its General Data Protection Regulation – want to insist that such systems be able to explain how they came up with their conclusions.

Explanations are a tool. There are other tools that can achieve the desired result without requiring us to inhibit the ability of machine-learning systems to help us achieve our social and personal goals. Instead of insisting on making ML systems explicable in every instance, governmental agencies representing the public interest should regulate what such systems are optimized for, measure their performance against pre-ML systems, and use social insurance to compensate those who are, inevitably, hurt.

Of course, optimizing an ML system for the quantifiable goals it was designed to achieve is not enough. These systems must also support quantifiable social constraints that hold across all ML systems. For example, a system of autonomous vehicles might be optimized for lowering the number of traffic fatalities from 40,000 per year in the U.S. If it wildly succeeds by reducing the number of victims to 5,000, but all of them are people of color, then the system has reached its optimization goal but has flagrantly violated a critical constraint.

Deciding what to optimize a system for will be a difficult political challenge, especially since virtually all systems will need a weighted and interrelated set of optimizations: the federal government might demand that AVs be optimized first to reduce fatalities, second to reduce environmental impact, third to reduce injuries, fourth to reduce property damage, fifth to shorten travel times, and so forth. This could be complicated by state and local optimizations, such as slowing traffic along a city's streets to encourage shopping in local stores. Negotiating all these optimizations will be difficult, not to mention the possibility of allowing users to set personal preferences within certain parameters: perhaps you will be allowed to designate that you want to take the scenic route, but not if it means your environmental impact will exceed a designated maximum. And, of course, all of these choices are subject to the critical social constraints. (A sketch of one way such a weighted, constrained evaluation might be encoded appears below.)

Once in place, a regulated system will produce reports to show how well it is achieving the goals expressed by its optimizations. No system will perform perfectly. For example, a medical diagnostic system will make errors, and a system of AVs is unlikely to eliminate all crashes. If those systems are too complex to be explicable – e.g., transient networks of AVs might use uninterpretable algorithms as well as inputs from traffic, weather, and social-behavior prediction systems that are themselves "black boxes" – there will be no way to explain why it was your Aunt Ida who died in that pile-up. But we still would not want to scrap a system that reduced annual fatalities by a significant percentage.

To address this inevitability, “no fault” social insurance should routinely be used to compensate victims and their families. Nothing will bring the victims back, but at least there would be fewer Aunt Idas dying in car crashes.
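To make that scheme concrete, here is a minimal, hypothetical Python sketch: hard social constraints are checked first and can veto a system outright, and performance is then scored as weighted improvement over the pre-ML baseline. The essay's ranked priorities ("first fatalities, second environment...") are approximated here with simple weights; every metric name, weight, and threshold below is invented for illustration and is not part of the essay.

```python
# Hypothetical sketch of regulated optimization with hard social constraints.
# All metrics, weights, and thresholds are invented for illustration only.

# Annual outcomes of the pre-ML system the AV network replaces.
BASELINE = {"fatalities": 40_000, "injuries": 2_500_000, "co2_tons": 1_500_000}

# Regulator-set priorities, approximated as weights (fatalities first).
WEIGHTS = {"fatalities": 0.5, "co2_tons": 0.25, "injuries": 0.25}


def violates_constraints(outcomes: dict) -> list[str]:
    """Hard constraints that hold across all systems, regardless of score."""
    violations = []
    # Example constraint: harms must not be concentrated on any one
    # demographic group (the "5,000 victims, all people of color" case).
    shares = outcomes.get("fatality_share_by_group", {})
    for group, share in shares.items():
        if share > 0.5:  # illustrative threshold only
            violations.append(f"fatalities disproportionately borne by {group}")
    return violations


def score(outcomes: dict) -> float:
    """Weighted improvement over the pre-ML baseline (higher is better)."""
    return sum(
        WEIGHTS[m] * (BASELINE[m] - outcomes[m]) / BASELINE[m] for m in WEIGHTS
    )


def evaluate(outcomes: dict) -> str:
    if v := violates_constraints(outcomes):
        return "REJECT: " + "; ".join(v)  # constraints trump optimization
    return f"score vs. pre-ML baseline: {score(outcomes):+.2%}"


# The essay's example: fatalities fall from 40,000 to 5,000, but the harm
# falls entirely on one group, so the system fails despite its raw score.
print(evaluate({
    "fatalities": 5_000, "injuries": 1_000_000, "co2_tons": 1_200_000,
    "fatality_share_by_group": {"people of color": 1.0},
}))
```

The ordering matters: the constraint check runs before any scoring, so no amount of optimization success can buy back a constraint violation, which is exactly the relationship the essay argues for.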
There are several reasons to move to this sort of governance:

1. It lets us benefit from AI systems that have advanced beyond the ability of humans to understand them.

2. It focuses the discussion at the system level rather than on individual incidents, letting us evaluate an AI system in comparison to the system it replaces, thus perhaps swerving around some of the moral panic AI seems to be occasioning.

3. It places the governance of these systems within our human, social framework, subordinating them to human needs, desires, and rights.

4. It treats the governance questions as societal questions to be settled through our existing processes for resolving policy issues.

Overall, by treating the governance of AI as a question of optimizations, we can focus the necessary argument about these systems on what truly matters: What is it that we want from a system, and what are we willing to give up to get it?

# # #
