AI MATTERS, VOLUME 5, ISSUE 3, SEPTEMBER 2019

Considerations for AI Fairness for People with Disabilities

Shari Trewin (IBM; [email protected])
Sara Basson (Google Inc.; [email protected])
Michael Muller (IBM Research; [email protected])
Stacy Branham (University of California, Irvine; [email protected])
Jutta Treviranus (OCAD University; [email protected])
Daniel Gruen (IBM Research; [email protected])
Daniel Hebert (IBM; [email protected])
Natalia Lyckowski (IBM; [email protected])
Erich Manser (IBM; [email protected])

DOI: 10.1145/3362077.3362086
Copyright © 2019 by the author(s).

Abstract

In society today, people experiencing disability can face discrimination. As artificial intelligence solutions take on increasingly important roles in decision-making and interaction, they have the potential to impact fair treatment of people with disabilities in society both positively and negatively. We describe some of the opportunities and risks across four emerging AI application areas: employment, education, public safety, and healthcare, identified in a workshop with participants experiencing a range of disabilities. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. We next discuss strategies for supporting fairness in the context of disability throughout the AI development lifecycle. AI systems should be reviewed for potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models, and in testing, to create a more inclusive and robust system. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and ultimately enhance the lives of people with disabilities.

Introduction

Systems that leverage Artificial Intelligence are becoming pervasive across industry sectors (Costello, 2019), as are concerns that these technologies can unintentionally exclude or lead to unfair outcomes for marginalized populations (Bird, Hutchinson, Kenthapadi, Kiciman, & Mitchell, 2019)(Cutler, Pribik, & Humphrey, 2019)(IEEE & Systems, 2019)(Kroll et al., 2016)(Lepri, Oliver, Letouzé, Pentland, & Vinck, 2018). Initiatives to improve AI fairness for people across racial (Hankerson et al., 2016), gender (Hamidi, Scheuerman, & Branham, 2018a), and other identities are emerging, but there has been relatively little work focusing on AI fairness for people with disabilities. There are numerous examples of AI that can empower people with disabilities, such as autonomous vehicles (Brewer & Kameswaran, 2018) and voice agents (Pradhan, Mehta, & Findlater, 2018) for people with mobility and vision impairments. However, AI solutions may also result in unfair outcomes, as when Idahoans with cognitive/learning disabilities had their healthcare benefits reduced based on biased AI (K.W. v. Armstrong, No. 14-35296 (9th Cir. 2015) :: Justia, 2015). These scenarios suggest that the prospects of AI for people with disabilities are promising yet fraught with challenges that require the sort of upfront attention to ethics in the development process advocated by scholars (Bird et al., 2019) and practitioners (Cutler et al., 2019).

The challenges of ensuring AI fairness in the context of disability emerge from multiple sources. From the very beginning of algorithmic development, in the problem scoping stage, bias can be introduced by lack of awareness of the experiences and use cases of people with disabilities. Since systems are predicated on data, in the data sourcing and data pre-processing stages it is critical to gather data that include people with disabilities and to ensure that these data are not completely subsumed by data from presumed “normative” populations. This leads to a potential conundrum. The data need to be gathered in order to be reflected in the models, but confidentiality and privacy, especially as regards disability status, might make collecting these data difficult (for developers) or dangerous (for subjects) (Faucett, Ringland, Cullen, & Hayes, 2017)(von Schrader, Malzer, & Bruyère, 2014). Another area to address during model training and testing is the potential for model bias. Owing to intended or unintended bias in the data, the model may inadvertently enforce or reinforce discriminatory patterns that work against people with disabilities (Janssen & Kuk, 2016). We advocate for increased awareness of these patterns, so we can avoid replication of past bias into future algorithmic decisions, as has been well-documented in banking (Bruckner, 2018)(Chander, 2017)(Hurley & Adebayo, 2016). Finally, once a trained model is incorporated in an application, it is then critical to test with diverse users, particularly those deemed as outliers. This paper provides a number of recommendations towards overcoming these challenges.
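As a minimal illustration of the data sourcing and testing concerns above, the sketch below audits a labeled evaluation set for how well people with disabilities are represented and how a model performs for them relative to other groups. The column names, group labels, and data are hypothetical assumptions on our part, and a real audit would need to handle voluntary, privacy-preserving disclosure of disability status rather than assume such a label is available.

```python
# Illustrative sketch only: audit an evaluation set for subgroup representation
# and per-group model error rates. Column names and data are hypothetical.
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str, label_col: str, pred_col: str) -> pd.DataFrame:
    """Summarize each group's share of the data and the model's error rate for it."""
    rows = []
    for group, g in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(g),                                    # how many examples we have
            "share_of_data": len(g) / len(df),              # representation check
            "error_rate": float((g[label_col] != g[pred_col]).mean()),  # performance check
        })
    return pd.DataFrame(rows)

# Made-up example: the smaller group is both under-represented and served worse.
df = pd.DataFrame({
    "disability_status": ["not disclosed"] * 95 + ["disclosed"] * 5,
    "label":             [1] * 100,
    "prediction":        [1] * 90 + [0] * 5 + [0] * 4 + [1],
})
print(audit_by_group(df, "disability_status", "label", "prediction"))
```

Even a check this simple can surface the pattern described above: a group that is barely present in the data and for which the model's error rate is far higher than for the presumed “normative” population.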
In the remainder of this article, we overview the nascent area of AI Fairness for People with Disabilities as a practical pursuit and an academic discipline. We provide a series of examples that demonstrate the potential for harm to people with disabilities across four emerging AI application areas: employment, education, public safety, and healthcare. Then, we identify strategies for developing AI algorithms that resist reifying systematic societal exclusions at each stage of AI development. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and – as should be our ideal – ultimately enhance the lives of people with disabilities.

Related Work

The 2019 Gartner CIO survey (Costello, 2019) of 3,000 enterprises across major industries reported that 37% have implemented some form of AI solution, an increase of 270% over the last four years. In parallel, there is increasing recognition that intelligent systems should be developed with attention to the ethical aspects of their behavior (Cutler et al., 2019)(IEEE & Systems, 2019), and that fairness should be considered upfront, rather than as an afterthought (Bird et al., 2019). IEEE's Global Initiative on Ethics of Autonomous and Intelligent Systems is developing a series of international standards for such processes (Koene, Smith, Egawa, Mandalh, & Hatada, 2018), including a process for addressing ethical concerns during design (P7000), and the P7003 Standard for Algorithmic Bias Considerations (Koene, Dowthwaite, & Seth, 2018). There is ongoing concern and discussion about accountability for potentially harmful decisions made by algorithms (Kroll et al., 2016)(Lepri et al., 2018), with some new academic initiatives – like one at Georgetown's Institute for Tech Law & Policy (Givens, 2019), and a workshop at the ASSETS 2019 conference (Trewin et al., 2019) – focusing specifically on AI and Fairness for People with Disabilities.

Any algorithmic decision process can be biased, and the FATE/ML community is actively developing approaches for detection and remediation of bias (Kanellopoulos, 2018)(Lohia et al., 2019).
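To make the notion of a bias audit concrete, the sketch below computes two group fairness metrics that are widely used in this literature (the disparate impact ratio and the statistical parity difference) for a hypothetical screening model, treating disability disclosure as the protected attribute. The data, group labels, and the "80% rule" heuristic are illustrative assumptions on our part, not results or methods drawn from the works cited above.

```python
# Illustrative sketch only: two common group fairness metrics for a binary
# decision where 1 is the favorable outcome (e.g., "advance the candidate").
# The decisions below are invented; thresholds such as the "80% rule" are
# rough heuristics, not a definition of fairness.
import numpy as np

def selection_rate(decisions: np.ndarray) -> float:
    """Fraction of a group receiving the favorable outcome."""
    return float(np.mean(decisions == 1))

def disparate_impact(protected: np.ndarray, reference: np.ndarray) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

def statistical_parity_difference(protected: np.ndarray, reference: np.ndarray) -> float:
    """Protected group's selection rate minus the reference group's."""
    return selection_rate(protected) - selection_rate(reference)

# Hypothetical screening decisions for applicants who did / did not disclose a disability.
disclosed     = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])  # 20% advanced
not_disclosed = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])  # 70% advanced

print(f"disparate impact ratio:        {disparate_impact(disclosed, not_disclosed):.2f}")        # ~0.29
print(f"statistical parity difference: {statistical_parity_difference(disclosed, not_disclosed):.2f}")  # -0.50
```

A disparity flagged by metrics like these is a prompt for human review of the data, the model, and the deployment context rather than a verdict in itself; as discussed next, purely algorithmic approaches cannot capture the full social context in which a decision is made.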
Williams, Brooks and Shmargad show how racial discrimination can arise in employment and education even without having social category information, and how the lack of category information makes such biases harder to detect (Williams, Brooks, & Shmargad, 2018). Although they argue for inclusion of social category information in algorithmic decision-making, they also acknowledge the potential harm that can be caused to an individual by revealing sensitive social data such as immigration status. Selbst et al. argue that purely algorithmic approaches are not sufficient, and the full social context of deployment must be considered if fair outcomes are to be achieved (Selbst, Boyd, Friedler, Venkatasubramanian, & Vertesi, 2019).

Some concerns about AI fairness in the context of individuals with disabilities or neurological or sensory differences are now being raised (Fruchterman & Mellea, 2018)(Guo, Kamar, Vaughan, Wallach, & Morris, 2019)(Lewis, 2019)(Treviranus, 2019)(Trewin, 2018a), but research in this area is sparse. Fruchterman and Mellea (Fruchterman & Mellea, 2018) outline the widespread use of AI tools in employment and recruiting, and highlight some potentially serious implications for people with disabilities, including the analysis of facial movements and voice in recruitment, personality tests that disproportionately screen out people with disabilities, and the use of variables

tentional. For example, qualified deaf candidates who speak through an interpreter may be screened out for a position requiring verbal communication skills, even though they could use accommodations to do the job effectively. Additional discriminatory practices are particularly damaging to this population, where employment levels are already low: in 2018, the employment rate for people with disabilities was 19.1%, while the employment rate for people without disabilities was 65.9% (Bureau of Labor Statistics, 2019). Employers are increasingly relying on technology in their hiring practices. One of their sell-