Combating Bias in Artificial Intelligence and Machine Learning
SEPTEMBER 2020

CHARLYN HO, Counsel & Lead Author, +1.202.654.6341, [email protected]
MARC MARTIN, Partner, +1.202.654.6351, [email protected]
SARI RATICAN, Senior Counsel, +1.310.788.3287, [email protected]
DIVYA TANEJA, Associate, +1.332.223.3935, [email protected]
D. SEAN WEST, Counsel, +1.206.359.3598, [email protected]

PerkinsCoie.com/AI

Artificial intelligence (AI) promises to revolutionize healthcare through the use of machine learning (ML) techniques to predict patient outcomes and personalize patient care, but this use of AI carries the risk of "algorithmic bias" affecting those outcomes and that care, even unintentionally. AI seeks to enable computers to imitate intelligent human behavior, and ML, a subset of AI, involves systems that learn from data without relying on rules-based programming.

AI has the power to unleash insights from large data sets to diagnose and evaluate patients more accurately and to dramatically speed up the drug discovery process. Despite its promise, however, the use of AI is not without legal risks that can generate significant liability for the developer or user of ML algorithms. Among other risks, the development and use of ML algorithms could result in discrimination against individuals on the basis of a protected class, a potential violation of antidiscrimination and other laws. Such algorithmic bias occurs when an ML algorithm makes decisions that treat similarly situated individuals differently without justification for the difference, regardless of intent. Absent strong policies and procedures to prevent and mitigate bias throughout the life cycle of an ML algorithm, existing human biases can become embedded in the algorithm, with potentially serious consequences in the healthcare context, where life-and-death decisions are made algorithmically.

In this article, we discuss the legal concerns arising under U.S. law from potential manifestations of bias in ML algorithms used in healthcare. We briefly describe the legal landscape governing algorithmic bias in the United States and offer some emerging tools that build on existing recommended best practices, such as adversarial debiasing and the use of synthetic data, for detecting, avoiding, and mitigating algorithmic bias.

AI AND ML IN HEALTHCARE

ML techniques include supervised learning (a method of teaching ML algorithms to "learn" by example) as well as deep learning (a subset of ML that abstracts complex concepts through layers mimicking the neural networks of biological systems). Trained ML algorithms can be used to identify causes of disease by establishing relationships between a set of inputs, such as weight, height, and blood pressure, and an output, such as the likelihood of developing heart disease; a minimal sketch of this pattern follows.
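To make the supervised learning pattern concrete, here is a minimal sketch in Python assuming a scikit-learn-style workflow. All data, feature values, and coefficients below are synthetic and purely illustrative; they are not drawn from any study or clinical source.

```python
# A minimal sketch of supervised learning: a model "learns by example" to
# map inputs (weight, height, blood pressure) to an output (likelihood of
# heart disease). All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n = 1000

# Hypothetical features: weight (kg), height (cm), systolic blood pressure (mmHg).
X = np.column_stack([
    rng.normal(80, 15, n),    # weight
    rng.normal(170, 10, n),   # height
    rng.normal(125, 20, n),   # blood pressure
])

# Toy label: risk rises with weight and blood pressure (made up for illustration).
logits = 0.04 * (X[:, 0] - 80) + 0.05 * (X[:, 2] - 125) - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The trained model outputs a probability of heart disease for a new patient.
patient = [[95.0, 172.0, 150.0]]
print("Predicted risk:", model.predict_proba(patient)[0, 1])
```

The point is only that the model learns the input-output relationship from labeled examples rather than from hand-written rules; any bias in those examples is learned along with the signal.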
As an example of the power of training ML algorithms on large amounts of data, a group of scientists trained a highly accurate AI system on the electronic medical records of nearly 600,000 patients to extract clinically relevant information from huge data sets and associate common medical conditions with specific findings. The trained algorithm was then able to use a patient's symptoms, history, lab results, and other clinical data to automatically diagnose common childhood conditions, improving the accuracy of diagnoses compared to human-only methods.

Unfortunately, ML algorithms developed in the healthcare context are not immune from algorithmic bias. In one pre-pandemic study, an algorithm used by UnitedHealth to predict which patients would require extra medical care favored white patients over black patients, bumping white patients up the queue for special treatments ahead of sicker black patients who suffered from far more chronic illnesses. Race was not a factor in the algorithm's decision-making, but race correlated with other factors that affected the outcome. The lead researcher of this study, which prompted the New York Department of Financial Services (DFS) and Department of Health (DOH) to write a letter to UnitedHealth inquiring about the alleged bias, stated that the "algorithm's skew sprang from the way it used health costs as a proxy for a person's care requirements, making its predictions reflect economic inequality as much as health needs."

The DFS and DOH were particularly troubled by the algorithm's reliance on historical spending to evaluate future healthcare needs, stating that this dependence poses a significant risk of conflicts of interest as well as unconscious bias. The agencies cited studies documenting the barriers black patients face in obtaining healthcare. Using medical history in the algorithm, including healthcare expenditures, is therefore unlikely to reflect the true medical needs of black patients, because they historically have had less access to, and therefore less opportunity to receive and pay for, medical treatment.

At the time of writing, AI is being used in the fight against the COVID-19 pandemic, including to triage patients and to expedite the discovery of a vaccine. For example, researchers have developed an AI-powered tool that predicts with 70% to 80% accuracy which newly infected COVID-19 patients are likely to develop severe lung disease. The U.S. Centers for Disease Control and Prevention has leveraged Microsoft's AI-powered bot service to create its own COVID-19 assessment bot, which evaluates a user's symptoms and risk factors and suggests a next course of action, including whether to go to the hospital.

While these advances will benefit many patients, they do not resolve the concerns about algorithmic bias and its repercussions for certain groups. Using AI to triage COVID-19 patients based on symptoms and preexisting conditions can perpetuate well-researched and well-documented human prejudice against the pain and symptoms of people of color and women. Data on COVID-19 shows disparities based on race and socioeconomic status, including substantially higher mortality rates among certain racial groups. The risk of algorithmic bias in ML algorithms designed to combat COVID-19 is heightened because the data is not equally distributed across age groups, races, and other patient characteristics; without representative data, there is a higher risk of bias. However, as discussed in more detail below, if such AI tools were debiased using adversarial debiasing, synthetic data, or some other solution, they could triage patients more appropriately based on their symptoms so that patients receive the care they need. A simplified sketch of adversarial debiasing follows.
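The following is a simplified sketch of adversarial debiasing, assuming a PyTorch-style training loop; the architecture, loss weighting, and all data are invented for illustration and are not the method used in any study cited above. The idea: a second network (the adversary) tries to recover the protected attribute from the predictor's output, and the predictor is penalized whenever the adversary succeeds, pushing its predictions toward independence from that attribute.

```python
# A minimal sketch of adversarial debiasing. The predictor learns the
# clinical task while an adversary tries to recover a protected attribute
# (e.g., race) from the predictor's output; the predictor is rewarded for
# defeating the adversary. All tensors are synthetic placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 512, 8
X = torch.randn(n, d)                    # synthetic patient features
y = torch.randint(0, 2, (n, 1)).float()  # synthetic outcome label
a = torch.randint(0, 2, (n, 1)).float()  # synthetic protected attribute

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # weight of the debiasing penalty (illustrative choice)

for step in range(200):
    # 1) Train the adversary to guess the protected attribute from the
    #    predictor's (detached) output.
    y_hat = predictor(X).detach()
    opt_a.zero_grad()
    adv_loss = bce(adversary(y_hat), a)
    adv_loss.backward()
    opt_a.step()

    # 2) Train the predictor on the clinical task while penalizing it for
    #    giving the adversary a useful signal (hence the minus sign).
    opt_p.zero_grad()
    y_hat = predictor(X)
    loss = bce(y_hat, y) - lam * bce(adversary(y_hat), a)
    loss.backward()
    opt_p.step()
```

In practice the penalty weight trades off task accuracy against fairness, and the protected attribute is needed only during training, not at prediction time.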
OVERVIEW OF THE U.S. LEGAL LANDSCAPE RELATING TO ALGORITHMIC BIAS IN HEALTHCARE

While U.S. laws do not yet specifically address AI bias, existing federal and state laws may make algorithmic discrimination and bias unlawful, regardless of intent. For example, Section 1557 of the Affordable Care Act (ACA) prohibits any healthcare provider receiving federal funds from refusing to treat, or otherwise discriminating against, an individual based on protected classifications such as race, national origin, or sex. In addition to healthcare-specific laws, several states expressly prohibit discrimination in hospitals or clinics based on protected classifications.

Although different laws apply different standards for when unlawful discrimination exists, an antidiscrimination claim generally requires establishing either disparate treatment or disparate impact. Disparate treatment can be established by facially disparate treatment (such as explicit race classifications) or by intentional but facially neutral discrimination (such as zip code classifications intended to serve as a rough proxy for race). Disparate impact can be established by showing that a facially neutral policy or practice disproportionately affects a group sharing a protected trait, such as a religion.

We surmise that AI developers working on healthcare-related ML algorithms are more likely to design a facially neutral algorithm containing undetected biases that produce disparate impacts than an algorithm that intentionally treats people differently on an unlawful basis. In fact, a 2014 White House study on big data concluded that it is very rare for algorithmic bias to arise from intentional disparate treatment; instead, algorithmic bias is often caused by poor training data (data that is inaccurate, out of date, or not representative of the population).
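To make disparate impact concrete, the sketch below scores a facially neutral, spending-based rule in the spirit of the cost-as-proxy problem discussed earlier. The four-fifths (80%) ratio it reports comes from EEOC employment guidance and is used here only as an illustrative screening heuristic, not as the legal test a court would apply in a healthcare case; all data is synthetic.

```python
# A hedged sketch: does a facially neutral decision rule select members of
# two groups at very different rates? Synthetic data only.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000
group = rng.integers(0, 2, n)  # 0/1: hypothetical protected-class indicator

# Facially neutral rule: flag patients for extra care if prior spending is
# high. Spending correlates with group membership (as in the study above),
# so the rule can disadvantage one group without ever using it directly.
spending = rng.lognormal(mean=np.where(group == 0, 8.0, 7.5), sigma=1.0)
flagged = spending > np.quantile(spending, 0.80)

rate_0 = flagged[group == 0].mean()
rate_1 = flagged[group == 1].mean()
ratio = min(rate_0, rate_1) / max(rate_0, rate_1)

print(f"Selection rate, group 0: {rate_0:.3f}")
print(f"Selection rate, group 1: {rate_1:.3f}")
print(f"Impact ratio: {ratio:.2f} (below ~0.80 suggests possible disparate impact)")
```

A check like this is a diagnostic, not a legal conclusion; whether a given disparity is unlawful depends on the applicable statute and on any justification for the practice.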