AI MATTERS, VOLUME 5, ISSUE 3 SEPTEMBER 2019

Considerations for AI Fairness for People with Disabilities

Shari Trewin (IBM; [email protected])
Sara Basson (Google Inc.; [email protected])
Michael Muller (IBM Research; [email protected])
Stacy Branham (University of California, Irvine; [email protected])
Jutta Treviranus (OCAD University; [email protected])
Daniel Gruen (IBM Research; [email protected])
Daniel Hebert (IBM; [email protected])
Natalia Lyckowski (IBM; [email protected])
Erich Manser (IBM; [email protected])

DOI: 10.1145/3362077.3362086

Abstract

In society today, people experiencing disability can face discrimination. As artificial intelligence solutions take on increasingly important roles in decision-making and interaction, they have the potential to impact fair treatment of people with disabilities in society both positively and negatively. We describe some of the opportunities and risks across four emerging AI application areas: employment, education, public safety, and healthcare, identified in a workshop with participants experiencing a range of disabilities. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. We next discuss strategies for supporting fairness in the context of disability throughout the AI development lifecycle. AI systems should be reviewed for potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models, and in testing, to create a more inclusive and robust system. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and ultimately enhance the lives of people with disabilities.

Copyright © 2019 by the author(s).

Introduction

Systems that leverage Artificial Intelligence are becoming pervasive across industry sectors (Costello, 2019), as are concerns that these technologies can unintentionally exclude or lead to unfair outcomes for marginalized populations (Bird, Hutchinson, Kenthapadi, Kiciman, & Mitchell, 2019)(Cutler, Pribik, & Humphrey, 2019)(IEEE & Systems, 2019)(Kroll et al., 2016)(Lepri, Oliver, Letouzé, Pentland, & Vinck, 2018). Initiatives to improve AI fairness for people across racial (Hankerson et al., 2016), gender (Hamidi, Scheuerman, & Branham, 2018a), and other identities are emerging, but there has been relatively little work focusing on AI fairness for people with disabilities. There are numerous examples of AI that can empower people with disabilities, such as autonomous vehicles (Brewer & Kameswaran, 2018) and voice agents (Pradhan, Mehta, & Findlater, 2018) for people with mobility and vision impairments. However, AI solutions may also result in unfair outcomes, as when Idahoans with cognitive/learning disabilities had their healthcare benefits reduced based on biased AI (K.W. v. Armstrong, No. 14-35296 (9th Cir. 2015) :: Justia, 2015). These scenarios suggest that the prospects of AI for people with disabilities are promising yet fraught with challenges that require the sort of upfront attention to ethics in the development process advocated by scholars (Bird et al., 2019) and practitioners (Cutler et al., 2019).

The challenges of ensuring AI fairness in the context of disability emerge from multiple sources.

From the very beginning of algorithmic development, in the problem scoping stage, bias can be introduced by lack of awareness of the experiences and use cases of people with disabilities. Since systems are predicated on data, in data sourcing and data pre-processing stages, it is critical to gather data that include people with disabilities and to ensure that these data are not completely subsumed by data from presumed "normative" populations. This leads to a potential conundrum. The data need to be gathered in order to be reflected in the models, but confidentiality and privacy, especially as regards disability status, might make collecting these data difficult (for developers) or dangerous (for subjects) (Faucett, Ringland, Cullen, & Hayes, 2017)(von Schrader, Malzer, & Bruyère, 2014). Another area to address during model training and testing is the potential for model bias. Owing to intended or unintended bias in the data, the model may inadvertently enforce or reinforce discriminatory patterns that work against people with disabilities (Janssen & Kuk, 2016). We advocate for increased awareness of these patterns, so we can avoid replication of past bias into future algorithmic decisions, as has been well-documented in banking (Bruckner, 2018)(Chander, 2017)(Hurley & Adebayo, 2016). Finally, once a trained model is incorporated in an application, it is then critical to test with diverse users, particularly those deemed as outliers. This paper provides a number of recommendations towards overcoming these challenges.

In the remainder of this article, we overview the nascent area of AI Fairness for People with Disabilities as a practical pursuit and an academic discipline. We provide a series of examples that demonstrate the potential for harm to people with disabilities across four emerging AI application areas: employment, education, public safety, and healthcare. Then, we identify strategies of developing AI algorithms that resist reifying systematic societal exclusions at each stage of AI development. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and – as should be our ideal – ultimately enhance the lives of people with disabilities.

Related Work

The 2019 Gartner CIO survey (Costello, 2019) of 3000 enterprises across major industries reported that 37% have implemented some form of AI solution, an increase of 270% over the last four years. In parallel, there is increasing recognition that intelligent systems should be developed with attention to the ethical aspects of their behavior (Cutler et al., 2019)(IEEE & Systems, 2019), and that fairness should be considered upfront, rather than as an afterthought (Bird et al., 2019). IEEE's Global Initiative on Ethics of Autonomous and Intelligent Systems is developing a series of international standards for such processes (Koene, Smith, Egawa, Mandalh, & Hatada, 2018), including a process for addressing ethical concerns during design (P7000), and the P7003 Standard for Algorithmic Bias Considerations (Koene, Dowthwaite, & Seth, 2018). There is ongoing concern and discussion about accountability for potentially harmful decisions made by algorithms (Kroll et al., 2016)(Lepri et al., 2018), with some new academic initiatives – like one at Georgetown's Institute for Tech Law & Policy (Givens, 2019), and a workshop at the ASSETS 2019 conference (Trewin et al., 2019) – focusing specifically on AI and Fairness for People with Disabilities.

Any algorithmic decision-process can be biased, and the FATE/ML community is actively developing approaches for detection and remediation of bias (Kanellopoulos, 2018)(Lohia et al., 2019). Williams, Brooks and Shmargad show how racial discrimination can arise in employment and education even without having social category information, and how the lack of category information makes such biases harder to detect (Williams, Brooks, & Shmargad, 2018). Although they argue for inclusion of social category information in algorithmic decision-making, they also acknowledge the potential harm that can be caused to an individual by revealing sensitive social data such as immigration status. Selbst et al. argue that purely algorithmic approaches are not sufficient, and the full social context of deployment must be considered if fair outcomes are to be achieved (Selbst, Boyd, Friedler, Venkatasubramanian, & Vertesi, 2019).

Some concerns about AI fairness in the context of individuals with disabilities or neurological or sensory differences are now being raised (Fruchterman & Mellea, 2018)(Guo, Kamar, Vaughan, Wallach, & Morris, 2019)(Lewis, 2019)(Treviranus, 2019)(Trewin, 2018a), but research in this area is sparse. Fruchterman and Mellea (Fruchterman & Mellea, 2018) outline the widespread use of AI tools in employment and recruiting, and highlight some potentially serious implications for people with disabilities, including the analysis of facial movements and voice in recruitment, personality tests that disproportionately screen out people with disabilities, and the use of variables that could be discriminatory, such as gaps in employment. "Advocates for people with disabilities should be looking at the proxies and the models used by AI vendors for these 'hidden' tools of discrimination" (Fruchterman & Mellea, 2018).

Motivating Examples

In October 2018, a group of 40 disability advocates, individuals with disabilities, AI and accessibility researchers and practitioners from industry and academia convened in a workshop (Trewin, 2018b) to identify and discuss the topic of fairness for people with disabilities in light of the increasing mainstream application of AI solutions in many industries (Costello, 2019). This section describes some of the opportunities and risks identified by the workshop participants in the areas of employment, education, public safety and healthcare.

Employment

People with disabilities are no strangers to discrimination in hiring practices. In one recent field study, disclosing a disability (spinal cord injury or Asperger's Syndrome) in a job application cover letter resulted in 26% fewer positive responses from employers, even though the disability was not likely to affect productivity for the position (Ameri et al., 2018). When it comes to inclusive hiring, it has been shown that men and those who lack experience with disability tend to have more negative affective reactions to working with individuals with disabilities (Popovich, Scherbaum, Scherbaum, & Polinko, 2003). Exclusion can be unintentional. For example, qualified deaf candidates who speak through an interpreter may be screened out for a position requiring verbal communication skills, even though they could use accommodations to do the job effectively. Additional discriminatory practices are particularly damaging to this population, where employment levels are already low: In 2018, the employment rate for people with disabilities was 19.1%, while the employment rate for people without disabilities was 65.9% (Bureau of Labor Statistics, 2019).

Employers are increasingly relying on technology in their hiring practices. One of their selling points is the potential to provide a fairer recruitment process, not influenced by an individual recruiter's bias or lack of knowledge. Machine learning models are being used for candidate screening and matching job-seekers with available positions. There are AI-driven recruitment solutions on the market today that analyze online profiles and resumes, the results of online tests, and video interview data, all of which raise potential concerns for disability discrimination (Fruchterman & Mellea, 2018). While the use of AI in HR and recruitment is an increasing trend (Faggella, 2019), there are already cautionary incidents of discrimination, as when Amazon's AI recruiting solution "learned" to devalue resumes of women (Dastin, 2018).

The workshop identified several risk scenarios:

• A deaf person may be the first person using sign language interpretation to apply to an organization. Candidate screening models that learn from the current workforce will perpetuate the status quo, and the biases of the past. They will likely exclude candidates with workplace differences, including those who use accommodations to perform their work.


• An applicant taking an online test using assistive technology may take longer to answer questions, especially if the test itself has not been well designed for accessibility. Models that use timing information may end up systematically excluding assistive technology users. Resumes and job applications may not contain explicit information about a person's disability, but other variables may be impacted, including gaps in employment, school attended, or time to perform an online task.

• An applicant with low facial affect could be screened out by a selection process that uses video analysis of eye gaze, voice characteristics, or facial movements, even though she is highly skilled. This type of screening poses a barrier for anyone whose appearance, voice or facial expression differs from the average. It could exclude autistic individuals or blind applicants who do not make eye contact, deaf people and others who do not communicate verbally, people with speech disorders or facial paralysis, or people who have survived a stroke, to name a few.

When the available data do not include many people with disabilities, and reflect existing biases, and the deployed systems rely on proxies that are impacted by disability, the risk of unfair treatment in employment is significant. We must seek approaches that do not perpetuate the biases of the past, or introduce new barriers by failing to recognize qualified candidates because they are different, or use accommodations to do their work.
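To make the proxy risk concrete, the following is a minimal sketch using synthetic data; the variable names, rates and thresholds are illustrative assumptions, not measurements from the workshop. It shows how a screening rule built on completion time can exclude assistive technology users even when disability status never appears as an explicit feature.

```python
# Minimal illustration (synthetic data): a screening rule that rewards fast
# completion times can systematically exclude assistive technology (AT) users,
# even though disability status never appears as an explicit feature.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
uses_at = rng.random(n) < 0.05            # 5% of applicants use assistive technology
skill = rng.normal(0.0, 1.0, n)           # true job-relevant skill, independent of AT use

# Time to finish an online test: driven by skill, but inflated for AT users
# when the test is not designed for accessibility.
minutes = 30 - 5 * skill + rng.normal(0, 3, n) + np.where(uses_at, 12, 0)

screened_in = minutes < 30                # naive rule: "fast applicants are strong applicants"

for group, mask in [("AT users", uses_at), ("non-AT users", ~uses_at)]:
    print(f"{group}: selection rate = {screened_in[mask].mean():.2f}, "
          f"mean skill = {skill[mask].mean():+.2f}")
# Both groups have the same skill distribution, yet AT users are selected far
# less often -- the timing feature acts as a proxy for disability.
```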

Education

In the United States, people with disabilities have historically been denied access to free public education (Dudley-Marling & Burns, 2014)(Obiakor, Harris, Mutua, Rotatori, & Algozzine, 2012). It was nearly 20 years after the passing of Brown v. Board of Education, which desegregated public schools along racial lines, that the Education for All Handicapped Children Act was passed (Dudley-Marling & Burns, 2014), mandating that all students are entitled to a "free and appropriate public education" in the "least restrictive environment." Prior to 1975, a mere one in five learners with disabilities had access to public school environments, often in segregated classrooms (Dudley-Marling & Burns, 2014). Despite great strides, some learners with disabilities still cannot access integrated public learning environments (Dudley-Marling & Burns, 2014), K-12 classroom technologies are often inaccessible (Shaheen & Lazar, 2018), postsecondary online learning materials are often inaccessible (Burgstahler, 2015) (Straumsheim, 2017), and e-learning platforms do not consider the needs of all learners (Cinquin, Guitton, & Sauzéon, 2019).

AI in the education market is being driven by the rapid transition from onsite classroom based education to online learning. Institutions can now expand their online learning initiatives to reach more students in a cost-effective manner. Industry analyst Global Market Insights (Bhutani & Wadhwani, 2019) predicts that the market will grow to a $6 billion industry by 2024. The new generation of online learning platforms is integrated with AI technologies, using them to personalize learning (and testing) for each student, among other applications. Providers of these systems range from traditional Learning Management System (LMS) vendors like Blackboard to, more recently, Massive Open Online Course (MOOC) providers like edX.

Personalized learning could provide enormous benefits for learners with disabilities, e.g. (Morris, Kirschbaum, & Picard, 2010). It could be as simple as augmenting existing content with additional illustrations and pictures for students who are classified as visual learners, or as complex as generation of personalized user interfaces (Gajos, Wobbrock, & Weld, 2007). For non-native language speakers, including deaf learners, the system could provide captions for video content so the student can read along with the lecture.

Any system that makes inferences about a student's knowledge and abilities based on their online interactions runs the risk of misinterpreting and underestimating students with disabilities. Students whose learning capabilities or styles are outside the presumed norm may not receive fair treatment. For example, if there is a rigid time constraint for completing a test or a quiz, a student with a cognitive disability or test anxiety, who processes information more slowly than other students, would be assessed as less capable than they are.

Unlike other areas, in an educational setting disability information may often be available, and the challenge is to provide differentiated education for a wide range of people, without introducing bias against disability groups.


Public Safety

People with disabilities are much more likely to be victims of violent crime than people without disabilities (Harrell, 2017). Threats come not only from other citizens, but also from law enforcement itself; for example, police officers can misinterpret people with disabilities as being uncooperative or even threatening, and deprive them of access to Miranda warnings (US Department of Justice Civil Rights Division, 2006). Law enforcement's implicit bias and discrimination towards people with disabilities, as well as the potential for technology to address these challenges, are both featured in the 2015 Final Report of the President's Task Force on 21st Century Policing (Policing, 2015).

The application of AI technology to identify threats to public safety and enforce the law is highly controversial (McCarthy, 2019). This includes technology for identifying people, recognizing individuals, and for interpreting behavior (for example, whether someone is acting in a suspicious manner). Aside from the threat to personal privacy, the potential for errors and biased performance is very real. While public discourse and academic attention has so far focused on racial and gender disparities, workshop participants identified serious concerns and also some opportunities for people with disabilities.

Autonomous vehicles must be able to identify people in the environment with great precision. They must reliably recognize individuals who use different kinds of wheelchair and mobility devices, or move in an unusual way. One workshop participant described a wheelchair-using friend who propels themselves backwards with their feet. This is an unusual method of getting around, but the ability to recognize and correctly identify such outlier individuals is a matter of life and death.

Another participant had recently observed a dishevelled man pacing restlessly in an airport lounge, muttering to himself, clearly in a state of high stress. His behavior could be interpreted by both humans and AI analysis as a potential threat. Instead he may be showing signs of an anxiety disorder, autism, or a strong fear of flying. Deaf signers' strong facial expressions can be misinterpreted (Shaffer & Rogan, 2018), leading to them being wrongly identified as being angry, and a potential security threat. Someone with an altered gait could be using a prosthesis, not hiding a weapon.

People with cognitive disabilities may be at especially high risk of being misidentified as a potential threat. Combining this with the need to respond quickly to genuine threats creates a dangerous situation and requires careful design of the AI system and its method of deployment.

There may also be opportunities for AI to improve public safety for people with disabilities. For example, AI-based interpretation could be trained to 'understand' a wide range of behaviors, including hand flapping, pacing and sign language, as normal. A recent survey and interview study of people who are blind (Branham et al., 2017) suggests that facial and image recognition technologies could better support personal safety for individuals with sensory disabilities. They may support locating police officers and identifying fraudulent actors claiming to be officials. They may allow a person who is blind or deaf to be made aware of a weapon being brandished or discharged. They may support access to facial cues for more cautious and effective interactions with a potential aggressor or a police officer. These technologies may even allow blind individuals to provide more persuasive evidence to catch their perpetrators.

When considering the ethics of proposed projects in this space, the potential risks for individuals with disabilities should also be evaluated and addressed in the overall design. For example, a system could highlight someone with an altered gait, and list possibilities such as someone hiding a weapon, or someone using a prosthesis or mobility device. In a situation where facial recognition is being used, a person's profile could help responders to avoid a misunderstanding, but again this comes at the cost of sacrificing privacy, and potentially doing harm to other marginalized groups (Hamidi et al., 2018a). An overall balance must be found between using AI as a tool for maintaining public safety while minimizing negative outcomes for vulnerable groups and outlier individuals.


Healthcare

Today there are large disparities in access to healthcare for people with disabilities (Iezzoni, 2011) (Krahn, Walker, & Correa-De-Araujo, 2015), especially those with developmental disabilities (Krahn, Hammond, & Turner, 2006). Patients who are non-verbal or patients with cognitive impairments are often under-served or incorrectly served (Barnett, McKee, Smith, & Pearson, 2011) (Krahn & Fox, 2014) (Krahn et al., 2015). Deaf patients are often misdiagnosed with having a mental illness or a disorder, because of lack of cultural awareness or a language barrier (Glickman, 2007) (Pollard, 1994). Another area that is underserved in the current model is patients with rare diseases or genetic disorders that do not fall within standard protocols (Wastfelt, Fadeel, & Henter, 2006). For older adults with deteriorating health, this may lead to unwanted institutionalization. Promising technological developments, many of which include AI, abound, but need to better incorporate target users in the development process (Haigh & Yanco, 2002).

AI applications in healthcare could help to overcome some of the barriers preventing people getting access to the care and preventative care they need. For example, a non-verbal person may have difficulty communicating a problem they are experiencing. With respect to pain management or prescription delivery, AI can remove the requirement that patients advocate on their own behalf. For complex cases where disabilities or communicative abilities may affect treatment and ability to adhere to a treatment plan, AI could be applied to recognize special needs situations and flag them for extra attention, and build a case for a suitable course of treatment. With respect to rare diseases or genetic disorders, disparate data points can be aggregated such that solution determination and delivery is not contingent on an individual practitioner's know-how.

Unfortunately, there are no standards or regulations to assess the safety and efficacy of these systems. If the datasets don't well represent the broader population, AI might work less well where data are scarce or difficult to collect. This can negatively impact people with rare medical conditions/disabilities. For example, if speech pauses are used to diagnose conditions like Alzheimer's disease, a person whose speech is already affected by a disability may be wrongly diagnosed, or their diagnosis may be missed because the system does not work for them. Pauses in speech can also occur because the person is a non-native speaker, and are not a marker of disease.

Just as we have seen in the domains of employment, education, and public safety, if healthcare applications are built for the extremes of currently excluded populations, the solution stands to improve fairness in access, instead of locking people out. Across all domains, AI applications pose both risks and opportunities for people with disabilities. The question remains: how and when can fairness for people with disabilities be implemented in the software development process towards minimizing risks and maximizing benefits? In the following section, we address this question for each stage of the AI development process.

Considerations for AI Practitioners

In this section, we recommend ways AI practitioners can be aware of, and work towards, fairness and inclusion for people with disabilities in their AI-based applications. The section is organized around the typical stages of AI model development: problem scoping, data sourcing, pre-processing, model selection and training, and incorporating AI in an application.

Problem Scoping

Some projects have greater potential to impact human lives than others. To identify areas where special attention may need to be paid to fairness, it can be helpful to apply the Bioss AI Protocol (Bioss, 2019), which recommends asking the following five questions about the work being done by AI:

1. Is the work Advisory, leaving space for human judgement and decision making?

2. Has the AI been granted any Authority over people?

3. Does the AI have Agency (the ability to act in a given environment)?

4. What skills and responsibilities are we at risk of Abdicating?


5. Are lines of Accountability clear, in what are still organizations run by human beings?

AI practitioners can also investigate whether this is an area where people with disabilities have historically experienced discrimination, such as employment, housing, education, and healthcare. If so, can the project improve on the past? Identify what specific outcomes there should be, so these can be checked as the project progresses. Develop a plan for tackling bias in source data to avoid perpetuating previous discriminatory treatment. This could include boosting representation of people with disabilities, adjusting for bias against specific disability groups, or flagging gaps in data coverage so the limits of the resulting model are explicit.

Approaches to developing ethical AI include actively seeking the ongoing involvement of a diverse set of stakeholders (Cutler et al., 2019), and a diversity of data to work with. To extend this approach to people with disabilities, it may be useful to define a set of 'outlier' individuals, and include them in the team, following an inclusive design method as discussed in the following section. These are people whose data may look very different to the average. What defines an outlier depends on the application. Many variables can be impacted by a disability, leading to a potential for bias even where no explicit disability information is available. For example, in speech recognition it could be a person with a stutter or a person with slow, slurred speech. In a healthcare application involving height, this could mean including a person of short stature. Outliers may also include people who belong with one group, but whose data looks more like that of another group. For example, a person who is slow to take a test may not be struggling with the material, but with typing, or accessing the test itself through their assistive technology. By defining outlier individuals up front, the design process can consider at each stage what their needs are, whether there are potential harms that need to be avoided, and how to achieve this.

Related to identifying outliers, developing a measurement plan is also valuable at this stage. If the plan includes expected outcomes for outliers and disability groups, this can impact what data (including people) are included, and what data and people are left out.

Finally, a word of warning. From a machine learning perspective, an obvious solution to handling a specialized sub-group not typical of the general population might be to develop a specialized model for that group. For example, a specialized speech recognition model tuned to the characteristics of people with slurred speech, or people who stutter. From an ethical perspective, solutions that handle outliers and disability groups by routing them to an alternative service require careful thinking. Individuals may not wish to self-identify as having a disability, and there may be legal protections against requiring self-declaration. Solutions that attempt to infer disability status, or infer a quality that serves as a proxy for disability status, also present an ethical minefield. It may be acceptable to detect stuttered speech in order to route a speech sample to a specialized speech recognition model with higher accuracy, but using the same detection system to evaluate a job applicant could be discriminatory, unfair and potentially illegal. Any system that explicitly detects ability-related attributes of an individual may need to make these inferences visible to the user, optional, and able to be challenged when they make wrong inferences. It is crucial to involve members of the affected disability group from the outset. This can prevent wasted time and effort on paths that lead to inappropriate and unfair outcomes, inflexible systems that cannot be effectively deployed at scale, or that will be likely to face legal challenges.
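As one way to operationalize this, the sketch below lays out a measurement plan as a simple data structure: hypothetical outlier personas, how their data may differ from the presumed norm, and the outcome a fair system should produce for them. The personas and expected outcomes are illustrative assumptions, not an exhaustive or authoritative list.

```python
# A sketch of a measurement plan drafted at problem scoping time: hypothetical
# "outlier" personas, the data characteristics they produce, and the outcome we
# expect the finished system to deliver for them. Entries are illustrative.
from dataclasses import dataclass

@dataclass
class OutlierCase:
    persona: str          # who this case represents
    data_signature: str   # how their data may differ from the presumed norm
    expected_outcome: str # what a fair system should do, checked at each stage

MEASUREMENT_PLAN = [
    OutlierCase("Deaf applicant using a sign language interpreter",
                "no speech audio; interview answers arrive via interpreter",
                "screened on job-relevant skills, not on voice features"),
    OutlierCase("Screen reader user taking an online test",
                "longer per-question times; keyboard-only navigation events",
                "timing is not used as a proxy for ability"),
    OutlierCase("Applicant with a stutter",
                "atypical pause and disfluency patterns in speech",
                "speech interpretation degrades gracefully or is bypassed"),
]

def review(plan):
    # At each development stage, walk the plan and record whether the expected
    # outcome still holds; unmet cases become issues to raise with stakeholders.
    for case in plan:
        print(f"- {case.persona}: expect -> {case.expected_outcome}")

review(MEASUREMENT_PLAN)
```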

Data Sourcing

When sourcing data to build a model, important considerations are:

1. Does the data include people with disabilities, especially those disabilities identified as being most impacted by this solution? For example, data about the employees of a company with poor diversity may not include anyone who is deaf, or blind. If important groups are missing, or if this information is not known, take steps to find or create such data to supplement the original data source.


2. Might the data embody bias against people with disabilities? Consider whether the data might capture existing societal biases against people with disabilities. For example, a dataset of housing applications with decisions might reflect a historical reluctance to choose someone with a disability. When this situation is identified, raise the issue.

3. Is disability information explicitly represented in the data? If so, practitioners can use bias detection tests to check for bias, and follow up with mitigation techniques to adjust for bias before training a model (Bellamy et al., 2018).

Sometimes data are constructed by combining several data sources. Depending on the requirements of those sources, some groups of people may not have records in all sources. Be attentive to whether disability groups might fall into this category and be dropped from the combined data set. For example, when combining photograph and fingerprint biometrics, consider what should happen for individuals who do not have fingers, and how they will be represented and handled.

In Europe, GDPR regulations (European Union, 2016) give individuals the right to know what data about them is being kept, and how it is used, and to request that their data be deleted. As organizations move to limit the information they store and the ways it can be used, AI systems may often not have explicit information about disability that can be used to apply established fairness tests and corrections. By being attentive to the potential for bias in the data, documenting the diversity of the data set, and raising issues early, practitioners can avoid building solutions that will perpetuate inequalities, and identify system requirements for accommodating groups that are not represented in the data.
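The checks above can be started with a few lines of analysis once a (possibly voluntarily disclosed) disability attribute exists in the data. The sketch below is a minimal illustration on a toy table with hypothetical column names; toolkits such as AI Fairness 360 (Bellamy et al., 2018) provide more complete bias detection tests and mitigation methods.

```python
# A sketch of the data sourcing checks described above, assuming a tabular
# dataset with a voluntarily disclosed (and possibly sparse) disability column.
# Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "disability": ["none", "deaf", "none", "low_vision", "none", "none", "deaf", "none"],
    "hired":      [1,       0,      1,      0,            0,      1,      0,      1],
})

# 1. Is each disability group represented at all, and in what numbers?
print(df["disability"].value_counts(dropna=False))

# 2. Does the historical outcome differ by group (possible embodied bias)?
rates = df.groupby("disability")["hired"].mean()
print(rates)

# 3. A simple disparate impact ratio against the largest group; values far
#    below 1.0 flag groups to investigate before training. Toolkits such as
#    AI Fairness 360 (Bellamy et al., 2018) package such tests along with
#    mitigation techniques to apply before training a model.
reference = rates["none"]
print((rates / reference).rename("disparate_impact_vs_none"))
```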

Data Pre-Processing

The process of cleaning and transforming data into a form suitable for machine learning has been estimated to take 80-90% of the effort of a typical data science project (Zhang, Zhang, & Yang, 2003), and the choices made at this stage can have implications for the inclusiveness of the solution.

• Data cleaning steps may remove outliers, presumed to be noise or measurement error, but actually representing non-typical individuals, reducing the diversity in the dataset.

• Feature selection may include or exclude features that convey disability status. Besides explicit disability information, other features could be impacted by disability status or the resulting societal disadvantage, providing a proxy for disability status. For example, a preference for large fonts could serve as a proxy for visual impairment, or use of video captions could be correlated with deafness. Household income, educational achievement, and many other variables can also be correlated with disability.

• Feature engineering involves deriving new features from the data, either through analysis or combination of existing features. For example, calculating a person's reading level or personality traits based on their writing, or calculating a ratio of days worked to days lived. In both of these examples, the derived feature will be impacted by certain disabilities.

Although accepted practice in many fields is to exclude sensitive features so as not to build a model that uses that feature, this is not necessarily the best approach for algorithmic solutions. The reality is that it can be extremely difficult to avoid including disability status in some way. When possible, including features that explicitly represent disability status allows for testing and mitigation of disability-related bias. Consulting outlier individuals and stakeholder groups identified in the problem scoping stage is valuable to provide a better understanding of the ways that disability can be reflected in the data, and the tradeoffs involved in using or excluding certain features and data values.
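One concrete pre-processing step this suggests is a proxy audit: before an explicit disability attribute is dropped, measure how strongly the remaining features are associated with it, so that likely proxies are documented rather than hidden. The sketch below uses a toy table with hypothetical feature names and a plain correlation; it illustrates the idea rather than providing a complete proxy analysis.

```python
# A sketch of a pre-processing check for proxy features: before dropping an
# explicit disability indicator, measure how strongly the remaining features
# are associated with it, so likely proxies are documented rather than hidden.
# Feature names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "visual_impairment":   [1, 0, 0, 1, 0, 0, 1, 0],
    "prefers_large_fonts": [1, 0, 0, 1, 0, 1, 1, 0],
    "uses_captions":       [0, 0, 1, 0, 0, 0, 1, 0],
    "household_income_k":  [38, 62, 55, 41, 70, 58, 35, 66],
})

target = "visual_impairment"
proxies = (
    df.corr()[target]        # correlation of every feature with the indicator
      .drop(target)
      .abs()
      .sort_values(ascending=False)
)
print(proxies)
# Features with high correlation (here, prefers_large_fonts) can stand in for
# the dropped attribute; either keep the explicit attribute for bias testing,
# or record these proxies as known limitations of the model.
```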

Preserving Privacy

People experiencing disabilities may have the most to gain from many smart systems, but are also particularly vulnerable to data abuse and misuse. The current privacy protections do not work for individuals who are outliers or different from the norm. The current response to this data abuse and misuse, by privacy efforts globally, is to de-identify the data. The notion is that if we remove our identity from the data, it can't be traced back to us and it can't be used against us. The assumption is that we will thereby retain our privacy while contributing our data to making smarter decisions about the design.

While people experiencing disabilities are particularly vulnerable to data abuse and misuse, they are often also the easiest to re-identify. If you are the only person in a neighborhood using a wheelchair, it will be easy to re-identify you. If you are the only person that receives delivery of colostomy bags in your community, it will be very easy to re-identify your purchasing data.

If de-identification is not a reliable approach to maintaining the privacy of individuals that are far from average, but data exclusion means that highly impactful decisions will be made without regard to their needs, what are potential approaches to addressing this dilemma? The primary focus has been on an ill-defined notion of privacy. When we unpack what this means to most people, it is self-determination, ownership of our own narrative, the right to know how our data is being used, and ethical treatment of our story.

To begin to address this dilemma, an International Standards Organization personal data preference standard has been proposed as an instrument for regulators to restore self-determination regarding personal data. The proposal is developed as a response to the all-or-nothing terms of service agreements which ask you to give away your private data rights in exchange for the privilege of using a service. These terms of service agreements are usually couched in legal fine print that most people could not decode even if they had the time to read them. This means that it has become a convention to simply click "I agree" without attending to the terms and the rights we have relinquished. The proposed standard will be part of an existing standard called AccessForAll or ISO/IEC 24751 (ISO/IEC, 2008). The structure of the parent standard enables matching of consumer needs and preferences with resource or service functionality. It provides a common language for describing what you need or prefer in machine-readable terms and a means for service providers or producers to describe the functions their products and services offer. This allows platforms to match diverse unmet consumer needs with the closest product or service offering. Layered on top of the standard are utilities that help consumers explore, discover and refine their understanding of their needs and preferences, for a given context and a given goal. The personal data preference part of this standard will let consumers declare what personal data they are willing to release to whom, for what purpose, what length of time and under what conditions. Services that wish to use the data would declare what data is essential for providing the service and what data is optional. This will enable a platform to support the negotiation of more reasonable terms of service. The data requirements declarations by the service provider would be transparent and auditable. The standard will be augmented with utilities that inform and guide consumers regarding the risks and implications of preference choices. Regulators in Canada and Europe plan to point to this standard when it is completed. This will hopefully wrest back some semblance of self-determination of the use of our data.

Another approach to self-determination and data that is being explored with the Platform Co-op Consortium (Platform Cooperativism Consortium, 2019) is the formation of data co-ops. In a data co-op, the data producers would both govern and share in the profit (knowledge and funds) arising from their own data. This approach is especially helpful in amassing data in previously ignored domains, such as rare illnesses, niche consumer needs or specialized hobbies. In smart cities, for example, there could be a multiplicity of data domains that could have associated data co-ops. Examples include wayfinding and traffic information, utility usage, waste management, recreation, consumer demands, to name just a few. This multiplicity of data co-ops would then collaborate to provide input into more general urban planning decisions.
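To make the negotiation idea more tangible, the sketch below shows one hypothetical shape such machine-readable declarations could take: the consumer states what they will release and under what conditions, the service states what it needs, and a platform checks the match. This is not the actual ISO/IEC 24751 schema, which is still in development; all field names here are invented for illustration only.

```python
# A hypothetical illustration of the idea behind the proposed personal data
# preference standard. The field names and structure are invented for this
# sketch and do not represent the standard under development.
consumer_preferences = {
    "location":         {"share_with": ["transit_planner"], "retention_days": 30},
    "mobility_aid_use": {"share_with": [], "retention_days": 0},   # never released
    "purchase_history": {"share_with": ["data_coop"], "retention_days": 365},
}

service_request = {
    "service": "transit_planner",
    "essential": ["location"],
    "optional": ["mobility_aid_use"],
}

def negotiate(prefs, request):
    """Return which requested items the consumer's declaration permits."""
    granted, refused = [], []
    for item in request["essential"] + request["optional"]:
        allowed = request["service"] in prefs.get(item, {}).get("share_with", [])
        (granted if allowed else refused).append(item)
    ok = all(item in granted for item in request["essential"])
    return {"service_usable": ok, "granted": granted, "refused": refused}

print(negotiate(consumer_preferences, service_request))
# The essential item is granted, the optional one is refused, so the service
# can run without learning about mobility aid use.
```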

Model Training and Testing

When developing a model, there are many bias testing methods available to researchers. However, when applying these techniques to fairness for people with disabilities, some limitations become evident. Firstly, group-based methods require large enough numbers of individuals in each group to allow for statistical comparison of outcomes, and secondly, they often rely on binary in-group/out-group comparisons, which can be difficult to apply to disability. This section will expand on each of these points and suggest ways to address these limitations in model testing.

When examining fairness for people with disabilities, there may be few examples in the training data. As defined by the United Nations Convention on the Rights of People with Disabilities (UN General Assembly, 2007), disability "results from the interaction between persons with impairments and attitudinal and environmental barriers that hinders their full and effective participation in society." As such, disability depends on context and comes in many forms, including physical barriers, sensory barriers, and communication barriers. One important consequence of experiencing a disability is that it can lead us to do things in a unique way, or to look or act differently. As a result, disabled people may be outliers in the data, or align with one of many very different sub-groups.

Today's methods for bias testing tend to split individuals into members of a protected group, and 'others'. However, disability status has many dimensions, varies in intensity and impact, and often changes over time. Furthermore, people are often reluctant to reveal a disability. Binarized approaches that combine many different people into a broad 'disabled' category will fail to detect patterns of bias that apply, differently and distinctively, against specific groups within the disability umbrella. For example, an inaccessible online testing Web site will not disadvantage wheelchair users, but those who rely on assistive technologies or keyboard-only control methods to access the Web may be unable to complete the tests. More sensitive analysis methods not based on binary classifications are needed to address these challenges. Other protected attributes like race and gender that have traditionally been examined with binary fairness metrics are also far more complex and nuanced in reality (Keyes, 2018) (Hamidi, Scheuerman, & Branham, 2018b), and new approaches suitable for examining disability-related bias would support a more sophisticated examination of these attributes too.

To test for fairness, an audit based on identified test cases and expected outcomes is valuable. These can be developed with the outlier individuals and key stakeholder groups identified at the outset. For models that interpret humans (speech, language, gesture, facial expression) where algorithmic fairness is important, the goal is to develop a method that works well for as many groups as possible (e.g. speech recognition for deaf speakers), and to document the limitations of the model. For models that allocate people to groups (e.g. job candidate, loan applicant), allocative fairness is important. In selecting a measure for allocative fairness, we argue that an individual fairness approach rather than a group fairness approach is preferable. While group fairness seeks to equalize a measure across groups, individual fairness aims for 'similar' individuals to receive similar outcomes. For example, in deciding whether to grant loan applications, even if the presence of a disability is statistically correlated with unemployment over the whole dataset, it would still be unfair to treat an employed person with a disability the same as the unemployed group, simply because of their disability. Individual fairness aligns better with the societal notion of fairness, and legal mandates against discrimination.
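The contrast between the two notions can be made concrete with a small sketch. The toy data, the decisions, and the similarity measure below are illustrative assumptions; in practice the similarity metric itself must be designed with stakeholders, since it encodes what counts as 'similar' individuals.

```python
# A sketch contrasting group fairness and individual fairness, using the loan
# example from the text. Data, decisions and the similarity rule are toy values.
import pandas as pd

applicants = pd.DataFrame({
    "employed":   [1, 1, 1, 0, 0, 1],
    "income_k":   [52, 50, 51, 12, 14, 53],
    "disability": [1, 0, 0, 1, 0, 0],
    "approved":   [0, 1, 1, 0, 0, 1],   # decisions produced by some model
})

# Group fairness: compare approval rates across the disability groups.
print(applicants.groupby("disability")["approved"].mean())

# Individual fairness: similar applicants (here, similar employment status and
# income) should receive similar outcomes, regardless of disability status.
def similar(a, b, income_tol=5):
    return a.employed == b.employed and abs(a.income_k - b.income_k) <= income_tol

for i, a in applicants.iterrows():
    for j, b in applicants.iterrows():
        if i < j and similar(a, b) and a.approved != b.approved:
            print(f"Applicants {i} and {j} look alike but got different outcomes "
                  f"(disability={a.disability} vs {b.disability})")
# The employed applicant with a disability (row 0) is flagged: they resemble
# the approved applicants, not the unemployed group they were lumped in with.
```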

Deployment in Real Applications

In this stage, the trained model is incorporated into an application, typically with an interface for people to use, or an API to connect to. Testing with diverse users, especially outliers, is essential. Understanding how different people will perceive and use an AI-based system is also important, for example, to see if certain people are more likely than others to ascribe trust to an AI-based system, or be more likely to feel insulted by an AI system's terse replies or lack of context.

As a matter of course, quality assurance should include testing by people with disabilities, covering as broad a set of disability groups as possible. This would include both testing the user interface of the system itself to ensure it is accessible, and the system's performance on diverse data inputs. The test process should deliberately include outlier individuals to test the limits of the system and the mechanisms for addressing system failure.
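The following sketch shows what such a test process can look like in code: evaluate the deployed system separately for each disability group represented in the audit set, and flag any group that falls below the outcome expected in the measurement plan. The group names, metric, and threshold are illustrative assumptions.

```python
# A sketch of a fairness audit over identified test cases: evaluate the system
# separately for each disability group in the test set and flag groups where
# performance falls below the level set out in the audit plan.
from collections import defaultdict

test_cases = [
    # (group, model_output_correct)
    ("hearing_speech", True), ("hearing_speech", True), ("hearing_speech", False),
    ("deaf_speech", False),   ("deaf_speech", False),   ("deaf_speech", True),
    ("stuttered_speech", True), ("stuttered_speech", False),
]

EXPECTED_MIN_ACCURACY = 0.60   # illustrative threshold from the audit plan

by_group = defaultdict(list)
for group, correct in test_cases:
    by_group[group].append(correct)

for group, results in by_group.items():
    accuracy = sum(results) / len(results)
    status = "OK" if accuracy >= EXPECTED_MIN_ACCURACY else "RAISE ISSUE"
    print(f"{group:18s} accuracy={accuracy:.2f}  [{status}]")
# Groups marked RAISE ISSUE either need more work, or the limitation must be
# documented and an alternative (non-AI) path provided before deployment.
```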

Because disability manifests in such diverse ways, there will be situations where the application is presented with an individual quite unlike those in the training data. For example, an automated telephone help system may have difficulty interpreting an individual with a speech impairment, especially if they are not speaking in their native language. Developers can avoid discrimination by providing an alternative to using AI, for example by supporting typed input in addition to speech. Users should have the ability to opt out of AI-based interpretation.

Disability may also impact the accuracy of inputs to an AI-based system. An example is a video-based personality analysis concluding that an autistic interviewee is untrustworthy because they did not make eye contact with the interviewer, and then feeding that into an applicant selection model. For people with disabilities, it is essential to have the opportunity to inspect and correct the data used to make decisions about them.

Equally important is the ability to query and challenge AI decisions, receiving some form of explanation of the factors that most impacted the decision. If these factors were affected by a person's disability, the decision may be discriminatory. Any AI-based system that makes decisions affecting people should include both an opportunity to dispute a decision, and a manual override for outlier individuals where the model is unreliable.

As many AI systems by their very nature learn and thus modify their behavior over time, ongoing mechanisms to monitor for fairness to people with disabilities should be incorporated. These can include ongoing auditing and reviews of performance, as well as periodic explicit testing to verify that changes in the system's operation aimed at improving performance do not introduce disparities in how decisions are made for specific sub-populations. This is crucial to ensure that as a system gets better for people overall it doesn't unfairly get worse for some.
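A minimal sketch of such a monitoring check follows, comparing a candidate model's per-group metrics against the previous release and holding the rollout if any tracked group regresses beyond a tolerance. The group names, scores, and tolerance are illustrative assumptions.

```python
# A sketch of the ongoing monitoring described above: when a retrained model
# improves overall, verify that no tracked sub-population got worse beyond a
# tolerance before releasing it. Numbers are illustrative.
previous  = {"overall": 0.86, "AT_users": 0.81, "deaf_speech": 0.74}
candidate = {"overall": 0.90, "AT_users": 0.82, "deaf_speech": 0.69}

TOLERANCE = 0.02  # maximum acceptable drop for any tracked group

def regression_report(old, new, tolerance=TOLERANCE):
    alerts = []
    for group in old:
        delta = new[group] - old[group]
        if delta < -tolerance:
            alerts.append(f"{group} dropped by {-delta:.2f}")
    return alerts

issues = regression_report(previous, candidate)
if issues:
    print("Hold the release and review:", "; ".join(issues))
else:
    print("No tracked group regressed; proceed with standard review.")
```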

Design Approaches

In several of the stages of AI development described above, and particularly in the problem scoping and testing/deployment phases, we have encouraged AI/ML engineers to seek the ongoing involvement of people with disabilities. For AI/ML practitioners, this may seem like a daunting task, with questions ranging from how to find diverse users, to how to ethically and respectfully engage them, and by what methods one can reliably incorporate their feedback to improve systems.

The field of Human-Computer Interaction has long contemplated these questions, and has developed a number of design philosophies, methodologies, and methods to guide the practice. Some of these pertain specifically to engaging people with disabilities, including Universal Design (Story, Mueller, & Mace, 1998), Ability-Based Design (Wobbrock, Kane, Gajos, Harada, & Froehlich, 2011), Design for User Empowerment (Ladner, 2015), and Design for Social Accessibility (Shinohara, Wobbrock, & Pratt, 2018). In this section, we endeavor to provide a brief overview of just three potential approaches that AI/ML developers might choose as they seek to integrate users into their process. Our purpose, then, is not to provide a comprehensive review of all or even a few methodologies, but rather to offer links into the literature for those who want to learn more about these approaches, or perhaps seek out a collaborator with such expertise.

Below, we overview three distinct approaches to human-centered design: Inclusive Design, Participatory Design, and Value-Sensitive Design. Each of these has developed from different intellectual traditions, and therefore varies in the degree to which they explicitly include people with disabilities in their theoretical frameworks. People with disabilities are often excluded from design processes, and designs rarely anticipate end-users' needs to appropriate and adapt designs (Derboven, Geerts, & De Grooff, 2016). We therefore will begin here by introducing some basic rationale for why it is important to include people with disabilities directly in software development efforts.

Firstly, the opportunity to fully participate in society is every person's right, and digital inclusion is fundamental to those opportunities today. All of us will very likely experience disability at some stage in our lives, and our technology must be robust enough to accommodate the diversity of the human experience, especially if it is used in critical decision-making areas. This cannot happen unless this diversity is considered from the outset, hence the disability rights movement's mantra of "nothing about us, without us."

Including people with disabilities may bring compelling new design ideas and ultimately expand the potential user base of the product. People with disabilities (including seniors) have been described as the original life hackers and personal innovators (Harley & Fitzpatrick, 2012) (Storni, 2010), as they often have to find creative workarounds and innovate new technologies in a world that is not built to their specifications. Second, people with disabilities can be seen as presenting valuable diverse cases which should be brought from the "edge" and into the "center" of our design thinking. In her foundational paper on Feminism in Human Computer Interaction, Bardzell advocated to study not only the conceptual "center" of a distribution of users, but also the edge cases (Bardzell, 2010). She argued that design often functions with a default "user" in the designers' minds, and that default user is often male, white, educated, and non-disabled. In Bardzell's analysis, accommodating the edge cases is a way both to broaden the audience (or market) for a design, and to strengthen the design against unanticipated changes in users, usage, or contexts of use. These ideas are echoed by other researchers (Krischkowsky et al., 2015) (Muller et al., 2016) (Tscheligi et al., 2014). One canonical example of how designs made with and for people with disabilities can actually improve the user experience of everyone (an example of "universal design") is the curb cut. Curb cuts – the ramps that allow people in wheelchairs to transition from public sidewalks to cross the street – also serve parents with prams, workers with heavy wheeled loads, and pedestrians on scooters. Both Downey and Jacobs (Downey, 2008) (Jacobs, 1999) have advocated for electronic curb cuts; one example of such a feature is the zooming capability of the browser, which supports easier reading for people with low vision or people who are far away from the screen.

In practice, the best way of centering marginalized perspectives will require that we include people with disabilities in our core design practices. Fortunately, there is a rich history of work on design that centers users at the margins. In particular, we believe the theoretical approaches of Inclusive Design (specifically as it evolved in Canada), Participatory Design and Value Based Design are particularly valuable when designing for – and with – people with disabilities.

Inclusive Design

The practice and theoretical framing of inclusive design, which emerged and evolved in Canada and with global partners since the emergence of the Web, takes advantage of the affordances or characteristics of digital systems (Pullin, Treviranus, Patel, & Higginbotham, 2017) (Treviranus, 2016). In contrast to related universal design theories, which emerged from architectural and industrial design fields, the Canadian inclusive design practice aims to use the mutability and connectivity of networked digital systems to achieve one-size-fits-one designs within an integrated system, thereby increasing the adaptability and longevity of the system as a whole (Lewis & Treviranus, 2013). Rather than specifying design criteria or accessibility checklists, the theory specifies a process or mindset, called the "three dimensions of inclusive design":

1. Recognize that everyone is unique, and strive for a design that is able to match this uniqueness in an integrated system. Support self-awareness of this uniqueness (use data to make people 'smarter' about themselves, not just machines smarter).

2. Create an inclusive co-design process. The most valuable co-designers are individuals that can't use, or have difficulty using, the current designs. Continuously ask whose perspective is missing from the decision making "table" and how they can help make the "table" more inclusive.

3. Recognize that all design operates within a complex adaptive system of systems. Be cognizant of the entangled impact and friction points. Strive for designs that are beneficial to this larger system of systems.

This practice of inclusive design critiques and counters reliance on probability and population based statistics, pointing out the risks of basing critical decisions solely on the majority or statistical average (Treviranus, 2014).

With respect to fairness in AI, people with disabilities are most disadvantaged by population-based AI decisions. The only common defining characteristic of disability is difference from the norm. If you are not like the norm, probability predictions based on population data are wrong. Even if there is full representation and all human bias is removed, statistically-based decisions and predictions will be biased against small minorities and outliers. Current automated decisions focus on the good of the majority, causing greater disparity between the majority and smaller minorities. Inclusive design practitioners are investigating learning models that do not give advantage to being like the majority, causing learning models to attend to the full diversity of requirements (Treviranus, 2019). This is hypothesized to also support better context shifting, adaptability and detection of weak signals.

Given that data is from the past, optimization using past data will not achieve the culture shift inclusive design practitioners hope to achieve. Hence inclusive design practitioners are combining existing data, data from alternative scenarios, and modelled or simulated data to assist in decision making. Storytelling, bottom-up personalized data, or small (n=1), thick (in context) data is also employed to overcome the bias toward numerical data (Clark, 2018) (Pullin et al., 2017).

Participatory Design

Participatory Design (PD) emphasizes the active role of people who will be affected by a technology, as co-designers of that technology. There are many methods and rationales for these approaches, which can be found in (Muller & Druin, 2012), among other references. Some PD methods were originally proposed as "equal opportunity" practices, e.g., (Kuhn & Muller, 1993), because the methods involved low-technology or "lo-fi" prototyping practices that did not require extensive computer knowledge to contribute to the design. However, the needs of people with disabilities were generally not considered during the early days of PD.

This omission has now been partially rectified. Börjesson and colleagues (Börjesson, Barendregt, Eriksson, & Torgersson, 2015) recently published an overview of theory and methods for work with developmentally diverse children. Katan et al. (2015) used interactive machine learning in participatory workshops with people with disabilities (Katan, Grierson, & Fiebrink, 2015).

Some of these methods amount to merely asking people about their needs, e.g., (Holbø, Bøthun, & Dahl, 2013) (Krishnaswamy, 2017). However, other approaches involve bringing people with disabilities (and sometimes their informal carers) into the design process as active co-designers, e.g., (Gomez Torres, Parmar, Aggarwal, Mansur, & Guthrie, 2019) (Hamidi et al., 2016) (Lee & Riek, 2018) (McGrenere et al., 2003) (Sitbon & Farhin, 2017) (Wilde & Marti, 2018) (Williams et al., 2018). In general, the methods that include direct participation by people with disabilities in design activities are more powerful, and tend to include deeper understandings than are possible through the less engaged survey methods.

We note that this is an active research area, with discussion of needs that are not yet met, e.g., (Holone & Herstad, 2013) (Oswal, 2014), and many opportunities to improve and innovate the participatory methods. For people who are new to PD, we suggest beginning with an orientation to the diversity of well-tested methods, e.g., (Muller & Druin, 2012), followed by a "deeper dive" into methods that have been used with particular populations and/or with particular challenges.

Value-Sensitive Design

Participatory design originated primarily from the workplace democracy movement in Scandinavia, e.g., (Bjerknes, Ehn, & Kyng, 1987), and then developed in many directions. One of the core assumptions of the workplace applications was a division of labor among workers and managers. In that context, PD methods were seen as ways to reduce power differences during the two-party design process, by facilitating the voice of the workers in relation to management. These assumptions have tended to carry through into work with people with disabilities, in which the two parties are reconceived as people with disabilities and designers, or people with disabilities and providers, with designers as mediators.


Value Sensitive Design (VSD) offers a broader perspective regarding the stakeholders in design (Friedman, Hendry, & Borning, 2017). For this paper, VSD crucially expands concepts of stakeholders into direct stakeholders (people who have contact with a design or a technology, including users, designers, providers) and indirect stakeholders (people who are affected by the design or technology, even if they do not have direct contact with it). For people with disabilities, there are often multiple stakeholders in complex relationships, e.g., (Zolyomi, Ross, Bhattacharya, Milne, & Munson, 2018).

While there are disagreements about the details of a value-centric approach, e.g., (Borning & Muller, 2012) (Le Dantec, Poole, & Wyche, 2009) (Muller & Liao, 2017), there is consensus that values matter, and that values can be formative for designs. There may be values differences among people with disabilities, carers, and medical professionals, e.g., (Draper et al., 2014) (Felzmann, Beyan, Ryan, & Beyan, 2016), and therefore an explicit and focused values inquiry may be helpful in "satisficing" these complex assumptions, views, and needs (Cheon & Su, 2016). While VSD has tended to have fewer well-defined methods, Friedman et al. recently published a survey of values-centric methods (Friedman et al., 2017).

Conclusion

We have outlined a number of situations in which AI solutions could be disadvantageous for people with disabilities if researchers and practitioners fail to take necessary steps. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. For example, people with disabilities may already face discrimination in hiring opportunities. With AI-driven hiring systems, models that recognize good candidates by matching to the existing workforce will perpetuate that status quo. In education, an AI system that draws inferences based on a student's online interactions might misinterpret speed for competency, if the student is using assistive technologies. In public safety, AI systems might misinterpret a person with a cognitive disability as a potential threat. In AI systems for healthcare, where speech characteristics can be used to diagnose cognitive impairments, a person with a speech impediment can be misdiagnosed.

To avoid such erroneous conclusions and potentially damaging outcomes, a number of steps are proposed. AI systems should be prioritized for fairness review and ongoing monitoring, based on their potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models; such "outlier" data (the edge cases) will create a more inclusive and robust system. From the perspective of people with disabilities, there can be privacy concerns with self-identification, but there can be a risk of exclusion from the data models if users opt not to participate or disclose. There are methods to increase participation while protecting user privacy, such as the personal data preferences standard. In deploying the AI application, it is critical to test the UI and system preferences with outlier individuals. Users should be able to pursue workarounds, and ultimately override the system where models may be unreliable.
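As one hedged illustration of what "fairness review and ongoing monitoring" might look like in practice, the sketch below disaggregates a model's error rates by whether a person has disclosed a disability. The data, the disclosure flag, and the thresholds are hypothetical, and collecting such a flag runs into exactly the privacy-versus-exclusion trade-off noted above; toolkits such as AI Fairness 360 (Bellamy et al., 2018) package richer versions of these disaggregated metrics.

```python
# A minimal sketch (not the article's method) of a disaggregated fairness check
# that a review or monitoring process might run periodically: compare selection
# rates and false-negative rates across groups. The outcome data and the
# "disclosed" flag below are hypothetical.
import numpy as np

def group_report(y_true, y_pred, group):
    """Per-group sample size, selection rate, and false-negative rate."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        t, p = y_true[mask], y_pred[mask]
        positives = int((t == 1).sum())
        report[g] = {
            "n": int(mask.sum()),
            "selection_rate": float((p == 1).mean()),
            "false_negative_rate": float(((p == 0) & (t == 1)).sum() / max(positives, 1)),
        }
    return report

# Example: labels and predictions from a hypothetical screening model.
y_true    = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred    = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 1])
disclosed = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # 1 = disclosed a disability

for g, stats in group_report(y_true, y_pred, disclosed).items():
    print("disclosed" if g == 1 else "not disclosed", stats)
# A large gap between groups is a signal to investigate, offer redress, and
# revisit the training data - not a verdict on any individual.
```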
AI has been shown to help improve the lives of people with disabilities in a number of different environments, whether it be navigating a city, re-ordering prescriptions at a local pharmacy through a telephone or text service, or facilitating public safety. Almost everyone in the greater community is directly connected with someone with a disability, whether it be a family member, a colleague, a friend, or a neighbor. While AI technology has significantly improved the lives of those in the disabled community, there are always ways in which we can continue to advocate for fairness and equality and challenge the status quo.

When creating AI that is designed to help the community, we must adopt a user-centered approach that treats disabled people as a focal point of the design. To borrow a concept from Eric Ries in The Lean Startup (Ries, 2011), we need to deploy minimum viable products (MVPs) that are not perfected, but rather improved upon by the users involved.
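Connecting the override recommendation above with this MVP framing, one possible pattern is a decision service that reports its confidence, routes low-confidence cases to a human reviewer, and records user overrides so the next iteration can learn from them. This is a sketch under stated assumptions, not a prescribed design; the class and field names are invented.

```python
# Hedged sketch of a "workaround and override" pattern: abstain to a human
# reviewer when the model is unsure, and log overrides for the next iteration.
# The class and field names are hypothetical, not an established API.
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class Decision:
    outcome: str            # "approved", "denied", or "needs_human_review"
    confidence: float
    overridden: bool = False

@dataclass
class ReviewableClassifier:
    model: Any                        # anything with predict(X) and predict_proba(X)
    confidence_floor: float = 0.8     # below this, a person decides instead
    override_log: List[dict] = field(default_factory=list)

    def decide(self, features) -> Decision:
        confidence = float(max(self.model.predict_proba([features])[0]))
        if confidence < self.confidence_floor:
            return Decision("needs_human_review", confidence)
        label = self.model.predict([features])[0]
        return Decision("approved" if label == 1 else "denied", confidence)

    def record_override(self, features, decision: Decision, corrected_outcome: str):
        """Users and reviewers can overrule the system; overrides feed future retraining."""
        decision.overridden = True
        self.override_log.append({"features": list(features),
                                  "was": decision.outcome,
                                  "now": corrected_outcome})
```

The particular threshold, and whether overrides take effect immediately or after review, are policy choices that the direct and indirect stakeholders discussed above should help set.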

In a sense, what is needed is an incremental and algorithmic approach that continues to challenge the status quo and strives to improve the standardization of fairness and equality. This calls for a multi-industry approach, with key players in different industries. This paper suggests challenging everyday practices that may prove to inhibit people with disabilities, and is a starting point to bring awareness of the need for equality. It is important to remember that promoting this goal is a process. Success will require a series of incremental steps of further learning, thought-provoking peer discussion, and changes at the local and municipal level. Only when these incremental changes are met will we drive sustainable outcomes for people with disabilities using AI systems.

Acknowledgments

The authors thank all participants of the IBM-MIT AI Week 2018 Workshop on AI Fairness for People with Disabilities for their contributions to the discussions leading to this article. Special thanks to Peter Fay, Phill Jenkins, Audrey Hasegawa and Morgan Foreman.

References

Ameri, M., Schur, L., Adya, M., Bentley, F. S., McKay, P., & Kruse, D. (2018). The Disability Employment Puzzle: A Field Experiment on Employer Hiring Behavior. ILR Review, 71(2), 329–364. doi: 10.1177/0019793917717474
Bardzell, S. (2010). Feminist HCI: Taking Stock and Outlining an Agenda for Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1301–1310). doi: 10.1145/1753326.1753521
Barnett, S., McKee, M., Smith, S. R., & Pearson, T. A. (2011). Deaf sign language users, health inequities, and public health: opportunity for social justice. Preventing Chronic Disease, 8(2), A45.
Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., ... Zhang, Y. (2018). AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. http://arxiv.org/abs/1810.01943
Bhutani, A., & Wadhwani, P. (2019). Artificial Intelligence (AI) in Education Market Size worth $6bn by 2024. Global Market Insights. https://www.gminsights.com/pressrelease/artificial-intelligence-ai-in-education-market
Bioss. (2019). The Bioss AI Protocol. Retrieved 2019-08-30, from http://www.bioss.com/ai/
Bird, S., Hutchinson, B., Kenthapadi, K., Kiciman, E., & Mitchell, M. (2019). Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. In Companion Proceedings of the 2019 World Wide Web Conference (WWW '19) (pp. 1297–1298). doi: 10.1145/3308560.3320086
Bjerknes, G., Ehn, P., & Kyng, M. (1987). Computers and Democracy: A Scandinavian Challenge. Avebury.
Börjesson, P., Barendregt, W., Eriksson, E., & Torgersson, O. (2015). Designing Technology for and with Developmentally Diverse Children: A Systematic Literature Review. In Proceedings of the 14th International Conference on Interaction Design and Children (pp. 79–88). doi: 10.1145/2771839.2771848
Borning, A., & Muller, M. (2012). Next Steps for Value Sensitive Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1125–1134). doi: 10.1145/2207676.2208560
Branham, S. M., Abdolrahmani, A., Easley, W., Scheuerman, M., Ronquillo, E., & Hurst, A. (2017). "Is Someone There? Do They Have a Gun". In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '17) (pp. 260–269). doi: 10.1145/3132525.3132534
Brewer, R. N., & Kameswaran, V. (2018). Understanding the Power of Control in Autonomous Vehicles for People with Vision Impairment. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18) (pp. 185–197). doi: 10.1145/3234695.3236347
Bruckner, M. A. (2018). The Promise and Perils of Algorithmic Lenders' Use of Big Data. Chicago-Kent Law Review, 93.
Bureau of Labor Statistics, U.S. Department of Labor. (2019). Persons with a Disability: Labor Force Characteristics, 2018. https://www.bls.gov/news.release/pdf/disabl.pdf
Burgstahler, S. (2015). Opening Doors or Slamming Them Shut? Online Learning Practices and Students with Disabilities. Social Inclusion, 3(6), 69. doi: 10.17645/si.v3i6.420
Chander, A. (2017). The Racist Algorithm? Michigan Law Review, 115(6), 1023.
Cheon, E., & Su, N. M. (2016). Integrating roboticist values into a Value Sensitive Design framework for humanoid robots. In 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). doi: 10.1109/HRI.2016.7451775
Cinquin, P.-A., Guitton, P., & Sauzéon, H. (2019). Online e-learning and cognitive disabilities: A systematic review. Computers & Education, 130, 152–167. doi: 10.1016/j.compedu.2018.12.004
Clark, C. B. (2018). Many more stories: Co-design and creative communities. OCAD University. http://openresearch.ocadu.ca/id/eprint/2079/
Costello, K. (2019). Gartner Survey Shows 37 Percent of Organizations Have Implemented AI in Some Form. Stamford, Conn.: Gartner. https://www.gartner.com/en/newsroom/press-releases/2019-01-21-gartner-survey-shows-37-percent-of-organizations-have
Cutler, A., Pribik, M., & Humphrey, L. (2019). Everyday Ethics for Artificial Intelligence (Tech. Rep.). IBM. https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved 2019-08-30, from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Derboven, J., Geerts, D., & De Grooff, D. (2016). The Tactics of Everyday Practice: A Semiotic Approach to Appropriation. Interaction Design and Architecture(s), 29(29), 99–120.
Downey, G. (2008). Closed captioning: Subtitling, stenography and the digital convergence of text with television. Johns Hopkins University Press.
Draper, H., Sorell, T., Bedaf, S., Syrdal, D. S., Gutierrez-Ruiz, C., Duclos, A., & Amirabdollahian, F. (2014). Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands. In International Conference on Social Robotics (LNCS 8755, pp. 135–145). Springer. doi: 10.1007/978-3-319-11973-1_14
Dudley-Marling, C., & Burns, M. B. (2014). Two Perspectives on Inclusion in the United States. Global Education Review, 1(1), 14–31. https://eric.ed.gov/?id=EJ1055208
European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
Faggella, D. (2019). Machine Learning in Human Resources: Applications and Trends.
Faucett, H. A., Ringland, K. E., Cullen, A. L. L., & Hayes, G. R. (2017). (In)Visibility in Disability and Assistive Technology. ACM Transactions on Accessible Computing, 10(4), 14:1–14:17. doi: 10.1145/3132040
Felzmann, H., Beyan, T., Ryan, M., & Beyan, O. (2016). Implementing an Ethical Approach to Big Data Analytics in Assistive Robotics for Elderly with Dementia. SIGCAS Computers and Society, 45(3), 280–286. doi: 10.1145/2874239.2874279
Friedman, B., Hendry, D. G., & Borning, A. (2017). A Survey of Value Sensitive Design Methods. Foundations and Trends in Human–Computer Interaction, 11(2), 63–125. doi: 10.1561/1100000015
Fruchterman, J., & Mellea, J. (2018). Expanding Employment Success for People with Disabilities (Tech. Rep.). Benetech. https://benetech.org/about/resources/expanding-employment-success-for-people-with-disabilities-2/
Gajos, K. Z., Wobbrock, J. O., & Weld, D. S. (2007). Automatically Generating User Interfaces Adapted to Users' Motor and Vision Capabilities. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (pp. 231–240). doi: 10.1145/1294211.1294253
Givens, A. R. (2019). Institute Announces New Project on Algorithmic Fairness for People with Disabilities. Institute for Technology Law and Policy, Georgetown Law.
Glickman, N. (2007). Do You Hear Voices? Problems in Assessment of Mental Status in Deaf Persons With Severe Language Deprivation (Vol. 12). Oxford University Press. doi: 10.2307/42658866
Gomez Torres, I., Parmar, G., Aggarwal, S., Mansur, N., & Guthrie, A. (2019). Affordable Smart Wheelchair. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. SRC07:1–SRC07:6). doi: 10.1145/3290607.3308463
Guo, A., Kamar, E., Vaughan, J. W., Wallach, H., & Morris, M. R. (2019). Toward Fairness in AI for People with Disabilities: A Research Roadmap. http://arxiv.org/abs/1907.02227
Haigh, K. Z., & Yanco, H. A. (2002). Automation as Caregiver: A Survey of Issues and Technologies (Tech. Rep.). AAAI.
Hamidi, F., Müller, C., Baljko, M., Schorch, M., Lewkowicz, M., & Stangl, A. (2016). Engaging with Users and Stakeholders: The Emotional and the Personal. In Proceedings of the 19th International Conference on Supporting Group Work (pp. 453–456). doi: 10.1145/2957276.2996292
Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018a). Gender Recognition or Gender Reductionism? In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18) (pp. 1–13). doi: 10.1145/3173574.3173582
Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018b). Gender Recognition or Gender Reductionism? The Social Implications of Embedded Gender Recognition Systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 8:1–8:13). doi: 10.1145/3173574.3173582
Hankerson, D., Marshall, A. R., Booker, J., El Mimouni, H., Walker, I., & Rode, J. A. (2016). Does Technology Have Race? In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16) (pp. 473–486). doi: 10.1145/2851581.2892578
Harley, D., & Fitzpatrick, G. (2012). Appropriation of social networking by older people: two case studies. In ECSCW 2011 Workshop on Fostering Social Interactions in the Ageing Society: Artefacts, Methodologies, Research Paradigms. Aarhus, Denmark.
Harrell, E. (2017). Crime Against Persons with Disabilities, 2009–2015: Statistical Tables (Tech. Rep.). Bureau of Justice Statistics. http://www.bjs.gov/index.cfm?ty=pbdetail&iid=5986
Holbø, K., Bøthun, S., & Dahl, Y. (2013). Safe Walking Technology for People with Dementia: What Do They Want? In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 21:1–21:8). doi: 10.1145/2513383.2513434
Holone, H., & Herstad, J. (2013). Three Tensions in Participatory Design for Inclusion. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2903–2906). doi: 10.1145/2470654.2481401
Hurley, M., & Adebayo, J. (2016). Credit Scoring in the Era of Big Data. Yale Journal of Law and Technology, 18.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (First ed.). IEEE. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html
Iezzoni, L. I. (2011). Eliminating Health and Health Care Disparities Among the Growing Population of People with Disabilities. Health Affairs, 30(10), 1947–1954. doi: 10.1377/hlthaff.2011.0613
ISO/IEC. (2008). Information technology: Individualized adaptability and accessibility in e-learning, education and training. Part 1: Framework and reference model. Author.
Jacobs, S. (1999). Section 255 of the Telecommunications Act of 1996: Fueling the Creation of New Electronic Curbcuts. The Center for an Accessible Society. http://www.accessiblesociety.org/topics/technology/eleccurbcut.htm
Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33(3), 371–377. doi: 10.1016/j.giq.2016.08.011
Kanellopoulos, Y. (2018). A Model for Evaluating Algorithmic Systems Accountability. http://arxiv.org/abs/1807.06083
Katan, S., Grierson, M., & Fiebrink, R. (2015). Using Interactive Machine Learning to Support Interface Development Through Workshops with Disabled People. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 251–254). doi: 10.1145/2702123.2702474
Keyes, O. (2018). The Misgendering Machines. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–22. doi: 10.1145/3274357
Koene, A., Dowthwaite, L., & Seth, S. (2018). IEEE P7003 standard for algorithmic bias considerations. In Proceedings of the International Workshop on Software Fairness (FairWare '18) (pp. 38–41). doi: 10.1145/3194770.3194773
Koene, A., Smith, A. L., Egawa, T., Mandalh, S., & Hatada, Y. (2018). IEEE P70xx, Establishing Standards for Ethical Technology. In Proceedings of KDD, ExCeL London, UK, August 2018 (KDD '18).
Krahn, G. L., & Fox, M. H. (2014). Health disparities of adults with intellectual disabilities: what do we know? What do we do? Journal of Applied Research in Intellectual Disabilities, 27(5), 431–446. doi: 10.1111/jar.12067
Krahn, G. L., Hammond, L., & Turner, A. (2006). A cascade of disparities: Health and health care access for people with intellectual disabilities. Mental Retardation and Developmental Disabilities Research Reviews, 12(1), 70–82. doi: 10.1002/mrdd.20098
Krahn, G. L., Walker, D. K., & Correa-De-Araujo, R. (2015). Persons with disabilities as an unrecognized health disparity population. American Journal of Public Health, 105(Suppl 2), S198–S206. doi: 10.2105/AJPH.2014.302182
Krischkowsky, A., Tscheligi, M., Neureiter, K., Muller, M., Polli, A. M., & Verdezoto, N. (2015). Experiences of Technology Appropriation: Unanticipated Users, Usage, Circumstances, and Design. In Proceedings of the 14th European Conference on Computer Supported Cooperative Work.
Krishnaswamy, K. (2017). Participatory Design: Repositioning, Transferring, and Personal Care Robots. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (pp. 351–352). doi: 10.1145/3029798.3034815
Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable Algorithms. University of Pennsylvania Law Review, 165.
Kuhn, S., & Muller, M. (1993). Participatory Design. Communications of the ACM, 36(6).
K.W. v. Armstrong, No. 14-35296 (9th Cir. 2015) :: Justia. (2015). https://law.justia.com/cases/federal/appellate-courts/ca9/14-35296/14-35296-2015-06-05.html
Ladner, R. (2015). Design for user empowerment. Interactions, XXII(2). http://interactions.acm.org/archive/view/march-april-2015/design-for-user-empowerment
Le Dantec, C. A., Poole, E. S., & Wyche, S. P. (2009). Values as Lived Experience: Evolving Value Sensitive Design in Support of Value Discovery. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1141–1150). doi: 10.1145/1518701.1518875
Lee, H. R., & Riek, L. D. (2018). Reframing Assistive Robots to Promote Successful Aging. ACM Transactions on Human-Robot Interaction, 7(1), 11:1–11:23. doi: 10.1145/3203303
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, Transparent, and Accountable Algorithmic Decision-making Processes. Philosophy & Technology, 31(4), 611–627. doi: 10.1007/s13347-017-0279-x
Lewis, C. (2019). Implications of Developments in Machine Learning for People with Cognitive Disabilities. SIGACCESS Newsletter, Issue 124. http://www.sigaccess.org/newsletter/2019-06/lewis.html
Lewis, C., & Treviranus, J. (2013). Public Policy and the Global Public Inclusive Infrastructure Project. Interactions, 20(5), 62–66. doi: 10.1145/2510123
Lohia, P. K., Natesan Ramamurthy, K., Bhide, M., Saha, D., Varshney, K. R., & Puri, R. (2019). Bias Mitigation Post-processing for Individual and Group Fairness. In ICASSP 2019, IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 2847–2851). IEEE. doi: 10.1109/ICASSP.2019.8682620
McCarthy, O. J. (2019). AI & Global Governance: Turning the Tide on Crime with Predictive Policing. United Nations University Centre for Policy Research. https://cpr.unu.edu/ai-global-governance-turning-the-tide-on-crime-with-predictive-policing.html
McGrenere, J., Davies, R., Findlater, L., Graf, P., Klawe, M., Moffatt, K., ... Yang, S. (2003). Insights from the Aphasia Project: Designing Technology for and with People Who Have Aphasia. In Proceedings of the 2003 Conference on Universal Usability (pp. 112–118). doi: 10.1145/957205.957225
Morris, R. R., Kirschbaum, C. R., & Picard, R. W. (2010). Broadening Accessibility Through Special Interests: A New Approach for Software Customization. In Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 171–178). doi: 10.1145/1878803.1878834
Muller, M., & Druin, A. (2012). Participatory design: The third space of HCI. In J. Jacko (Ed.), Human Computer Interaction Handbook (3rd ed., pp. 1125–1154). CRC Press.
Muller, M., & Liao, V. (2017). Using participatory design fictions to explore ethics and values for robots and agents.
Muller, M., Neureiter, K., Krischkowsky, A., Tscheligi, M., Al Zubaidi-Polli, A. M., & Tscheligi, M. (2016). Collaborative Appropriation: How Couples, Teams, Groups and Communities Adapt and Adopt Technologies. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion (CSCW '16 Companion) (pp. 473–480). doi: 10.1145/2818052.2855508
Obiakor, F. E., Harris, M., Mutua, K., Rotatori, A., & Algozzine, B. (2012). Making Inclusion Work in General Education Classrooms. Education and Treatment of Children, 35(3), 477–490. doi: 10.1353/etc.2012.0020
Oswal, S. K. (2014). Participatory Design: Barriers and Possibilities. Communication Design Quarterly Review, 2(3), 14–19. doi: 10.1145/2644448.2644452
Platform Cooperativism Consortium. (2019). What Is a Platform Co-op? Retrieved 2019-08-30, from https://platform.coop/
President's Task Force on 21st Century Policing. (2015). Final Report of the President's Task Force on 21st Century Policing. Washington, DC: Office of Community Oriented Policing Services. https://ric-zai-inc.com/Publications/cops-p311-pub.pdf
Pollard, R. (1994). Public mental health service and diagnostic trends regarding individuals who are deaf or hard of hearing. Rehabilitation Psychology, 39(3), 147–160.
Popovich, P. M., Scherbaum, C. A., Scherbaum, K. L., & Polinko, N. (2003). The Assessment of Attitudes Toward Individuals With Disabilities in the Workplace. The Journal of Psychology, 137(2), 163–177. doi: 10.1080/00223980309600606
Pradhan, A., Mehta, K., & Findlater, L. (2018). "Accessibility Came by Accident". In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18) (pp. 1–13). doi: 10.1145/3173574.3174033
Pullin, G., Treviranus, J., Patel, R., & Higginbotham, J. (2017). Designing interaction, voice, and inclusion in AAC research. Augmentative and Alternative Communication, 33(3), 139–148. doi: 10.1080/07434618.2017.1342690
Ries, E. (2011). The Lean Startup: How today's entrepreneurs use continuous innovation to create radically successful businesses.
Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19) (pp. 59–68). doi: 10.1145/3287560.3287598
Shaffer, I. R., & Rogan, I. (2018). Exploring the Performance of Facial Expression Recognition Technologies on Deaf Adults and Their Children. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18) (pp. 474–476). doi: 10.1145/3234695.3240986
Shaheen, N. L., & Lazar, J. (2018). K–12 Technology Accessibility: The Message from State Government. Journal of Special Education Technology, 33(2), 83–97.
Shinohara, K., Wobbrock, J. O., & Pratt, W. (2018). Incorporating Social Factors in Accessible Design. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18) (pp. 149–160). doi: 10.1145/3234695.3236346
Sitbon, L., & Farhin, S. (2017). Co-designing Interactive Applications with Adults with Intellectual Disability: A Case Study. In Proceedings of the 29th Australian Conference on Computer-Human Interaction (pp. 487–491). doi: 10.1145/3152771.3156163
Storni, C. (2010). Multiple Forms of Appropriation in Self-Monitoring Technology: Reflections on the Role of Evaluation in Future Self-Care. International Journal of Human–Computer Interaction, 26(5), 537–561. doi: 10.1080/10447311003720001
Story, M. F., Mueller, J. L., & Mace, R. L. (1998). The Universal Design File: Designing for People of All Ages and Abilities (Revised ed.). Center for Universal Design, NC State University. https://eric.ed.gov/?id=ED460554
Straumsheim, C. (2017). Berkeley Will Delete Online Content. Inside Higher Ed. https://www.insidehighered.com/news/2017/03/06/u-california-berkeley-delete-publicly-available-educational-content
Treviranus, J. (2014). The Value of the Statistically Insignificant. EDUCAUSE Review.
Treviranus, J. (2016). Life-long Learning on the Inclusive Web. In Proceedings of the 13th Web for All Conference (pp. 1:1–1:8). doi: 10.1145/2899475.2899476
Treviranus, J. (2019). The Value of Being Different. In Proceedings of the 16th Web for All 2019 Personalization: Personalizing the Web (pp. 1:1–1:7). doi: 10.1145/3315002.3332429
Trewin, S. (2018a). AI Fairness for People with Disabilities: Point of View. http://arxiv.org/abs/1811.10670
Trewin, S. (2018b). Will AI Methods Treat People with Disabilities Fairly? Medium, MIT-IBM Watson AI Lab. https://medium.com/@MITIBMLab/will-ai-methods-treat-people-with-disabilities-fairly-7626b38f9cb5
Trewin, S., Morris, M. R., Azenkot, S., Branham, S., Bleuel, N., Jenkins, P., ... Lasecki, W. (2019). AI Fairness for People with Disabilities. https://assets19.sigaccess.org/ai_fairness_workshop_program.html
Tscheligi, M., Krischkowsky, A., Neureiter, K., Inkpen, K., Muller, M., & Stevens, G. (2014). Potentials of the "Unexpected": Technology Appropriation Practices and Communication Needs. In Proceedings of the 18th International Conference on Supporting Group Work (pp. 313–316). doi: 10.1145/2660398.2660427
UN General Assembly. (2007). Convention on the Rights of Persons with Disabilities: resolution adopted by the General Assembly, 24 January 2007, A/RES/61/106. https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities/convention-on-the-rights-of-persons-with-disabilities-2.html
US Department of Justice Civil Rights Division. (2006). Commonly Asked Questions About the Americans with Disabilities Act and Law Enforcement. Retrieved 2019-08-30, from https://www.ada.gov/q&a_law.htm
von Schrader, S., Malzer, V., & Bruyère, S. (2014). Perspectives on Disability Disclosure: The Importance of Employer Practices and Workplace Climate. Employee Responsibilities and Rights Journal, 26(4), 237–255. doi: 10.1007/s10672-013-9227-9
Wastfelt, M., Fadeel, B., & Henter, J.-I. (2006). A journey of hope: lessons learned from studies on rare diseases and orphan drugs. Journal of Internal Medicine, 260(1), 1–10. doi: 10.1111/j.1365-2796.2006.01666.x
Wilde, D., & Marti, P. (2018). Exploring Aesthetic Enhancement of Wearable Technologies for Deaf Women. In Proceedings of the 2018 Designing Interactive Systems Conference (pp. 201–213). doi: 10.1145/3196709.3196777
Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018). How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications. Journal of Information Policy, 8, 78. doi: 10.5325/jinfopoli.8.2018.0078
Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S., & Froehlich, J. (2011). Ability-Based Design. ACM Transactions on Accessible Computing, 3(3), 1–27. doi: 10.1145/1952383.1952384
Zhang, S., Zhang, C., & Yang, Q. (2003). Data preparation for data mining. Applied Artificial Intelligence, 17(5–6), 375–381. doi: 10.1080/713827180
Zolyomi, A., Ross, A. S., Bhattacharya, A., Milne, L., & Munson, S. A. (2018). Values, Identity, and Social Translucence: Neurodiverse Student Teams in Higher Education. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 499:1–499:13). doi: 10.1145/3173574.3174073

Shari Trewin is a technical leader in IBM's Accessibility Leadership Team, an ACM Distinguished Scientist, and Chair of ACM SIGACCESS. She has a Ph.D. in Computer Science and Artificial Intelligence from the University of Edinburgh, with 17 patents and over 60 publications in technologies to remove barriers for people experiencing disabilities.

Sara Basson works at Google as "Accessibility Evangelist," with the goal of making the experience of Googlers more accessible and usable. She previously worked at IBM Research on Speech Technology, Accessibility, and Education Transformation. Sara holds a Ph.D. in Speech, Hearing, and Language Sciences from The Graduate Center of CUNY.

Michael Muller is an ACM Distinguished Scientist and a member of the SIGCHI Academy, with expertise in participatory design and participatory analysis. In the AI Interactions group of IBM Research AI, he focuses on the human aspects of data science, and on ethics and values in applications of AI to human issues.


Stacy Branham is an Assistant Professor of Informatics at the University of California, Irvine. Her research explores the role of technology in social integration for people with disabilities, with active lines of inquiry in the domains of personal mobility and parenting with disability.

Jutta Treviranus is the director and founder of the Inclusive Design Research Centre (IDRC), established in 1993. The IDRC leads many international research networks to proactively model more inclusive socio-technical practices, including in the area of data science (e.g., Project We Count). Dr. Treviranus has co-designed numerous policies related to digital inclusion.

Natalia Lyckowski is an Application Developer with IBM Global Financing. She is also a passionate volunteer educator on inclusion and diversity awareness. Most proudly, she is the mother of a young autistic man who has taught her how to think out of the box and remain ever flexible.

Erich Manser is an IBM Accessibility evangelist exploring new approaches to accessible technology, new applications of emerging technologies, and contributing to accessibility standards. Beyond the workplace, Erich is an active mentor, disability rights advocate, and fundraiser. He is also a nationally recognized blind athlete in marathon and triathlon racing.

Dan Gruen is a Cognitive Scientist in IBM Research's Center for Computational Health, focusing on AI explainability, clinical reasoning, and complex medical cases. Dan has a PhD in Cognitive Science from UCSD and holds over 50 US and international patents in User Experience, Visualization, Social Software, and related areas.

Daniel Hebert holds a computer science degree from Rensselaer Polytechnic Institute and has been with IBM since 2006 in QA, engineering, and development roles. He enjoys working on smart home and accessibility projects, is a community advocate for persons with disabilities, and is involved with the local independent living movement.
