
Envisioning Communities: A Participatory Approach Towards AI for Social Good

Elizabeth Bondi,*1 Lily Xu,*1 Diana Acosta-Navas,2 Jackson A. Killian1
{ebondi,lily_xu,jkillian}@g.harvard.edu, [email protected]
1John A. Paulson School of Engineering and Applied Sciences, 2Department of Philosophy, Harvard University
*Equal contribution. Order determined by coin flip.

ABSTRACT
Research in artificial intelligence (AI) for social good presupposes some definition of social good, but potential definitions have been seldom suggested and never agreed upon. The normative question of what AI for social good research should be "for" is not thoughtfully elaborated, or is frequently addressed with a utilitarian outlook that prioritizes the needs of the majority over those who have been historically marginalized, brushing aside realities of injustice and inequity. We argue that AI for social good ought to be assessed by the communities that the AI system will impact, using as a guide the capabilities approach, a framework to measure the ability of different policies to improve human welfare equity. Furthermore, we lay out how AI research has the potential to catalyze social progress by expanding and equalizing capabilities. We show how the capabilities approach aligns with a participatory approach for the design and implementation of AI for social good research in a framework we introduce called PACT, in which community members affected should be brought in as partners and their input prioritized throughout the project. We conclude by providing an incomplete set of guiding questions for carrying out such participatory AI research in a way that elicits and respects a community's own definition of social good.

CCS CONCEPTS
• Human-centered computing → Participatory design; • Computing methodologies → Artificial intelligence; • General and reference → Cross-computing tools and techniques.

KEYWORDS
artificial intelligence for social good; capabilities approach; participatory design

ACM Reference Format:
Elizabeth Bondi, Lily Xu, Diana Acosta-Navas, and Jackson A. Killian. 2021. Envisioning Communities: A Participatory Approach Towards AI for Social Good. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21), May 19–21, 2021, Virtual Event, USA. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3461702.3462612

AIES '21, May 19–21, 2021, Virtual Event, USA
© 2021 Association for Computing Machinery. ACM ISBN 978-1-4503-8473-5/21/05...$15.00
arXiv:2105.01774v2 [cs.CY] 18 Jun 2021

Figure 1: The framework we propose, Participatory Approach to enable Capabilities in communiTies (PACT), melds the capabilities approach (see Figure 3) with a participatory approach (see Figures 4 and 5) to center the needs of communities in AI research projects.

1 INTRODUCTION
Artificial intelligence (AI) for social good, hereafter AI4SG, has received growing attention across academia and industry. Countless research groups, workshops, initiatives, and industry efforts tout programs to advance computing and AI "for social good." Work in domains from healthcare to conservation has been brought into this category [43, 73, 83]. We, the authors, ourselves have endeavored towards AI4SG work as AI and philosophy researchers.

Despite the rapidly growing popularity of AI4SG, social good has a nebulous definition in the computing world and elsewhere [27], making it unclear at times what work ought to be considered social good. For example, can a COVID-19 contact tracing app be considered to fall within this lauded category, even with privacy risks? Recent work has begun to dive into this question of defining AI4SG [22, 23, 49]; we will explore these efforts in more depth in Section 2.

However, we point out that context is critical, so no single tractable set of rules can determine whether a project is "for social good." Instead, whether a project may bring about social good must be determined by those who live within the context of the system itself; that is, the community that it will affect. This point echoes recent calls for decolonial and power-shifting approaches to AI that focus on elevating traditionally marginalized populations [5, 36, 45, 54, 55, 84].

The community-centered, context-specific conception of social good that we propose raises its own questions, such as how to reconcile multiple viewpoints. We therefore address these concerns with an integrated framework, called the Participatory Approach to enable Capabilities in communiTies, or PACT, that allows researchers to assess "goodness" across different stakeholder groups and different projects. We illustrate PACT in Figure 1. As part of this framework, we first suggest ethical guidelines rooted in capability theory to guide such evaluations [59, 70]. We reject a view that favors courses of action solely for their aggregate net benefits, regardless of how they are distributed and what resulting injustices arise. Such an additive accounting may easily err toward favoring the values and interests of majorities, excluding traditionally underrepresented community members from the design process altogether.

Instead, we employ the capabilities approach, designed to measure human development by focusing on the substantive liberties that individuals and communities enjoy to lead the kind of lives they have reason to value [70]. A capability-focused approach to social good is aimed at increasing opportunities for people to achieve combinations of "functionings"—that is, combinations of things they may find valuable doing or being. While common assessment methods might overlook new or existing social inequalities if some measure of "utility" is increased enough in aggregate, our approach would only define an endeavor as contributing to social good if it takes concrete steps toward empowering all members of the affected community to each enjoy the substantive liberties to function in the ways they have reason to value.

We then propose to enact this conception of social good with a participatory approach that involves community members in "a process of investigating, understanding, reflecting upon, establishing, developing, and supporting mutual learning between multiple participants" [75], in order for the community itself to define what those substantive liberties and functionings should be. In other words, communities define social good in their context.

Our contributions are therefore (i) arguing that the capabilities approach is a worthy candidate for conceptualizing social good, especially in diverse-stakeholder settings (Section 3), (ii) highlighting the role that AI can play in expanding and equalizing capabilities (Section 4), (iii) explaining how a participatory approach is best suited to identify desired capabilities (Section 5), and (iv) presenting and discussing our proposed guiding principles of a participatory approach (Section 6). These contributions come together to form PACT.

2 GROWING CRITICISMS OF AI FOR SOCIAL GOOD
As a technical field whose interactions on social-facing problems are young but monumental in impact, the field of AI has yet to fully develop a moral compass. On the whole, the subfield of AI for social good is no exception. We highlight criticisms that have arisen against AI4SG research, which serve as a call to action to reform the field. Later, we argue that a participatory approach rooted in enabling capabilities will provide needed direction to the field by letting the affected communities—particularly those who are most vulnerable—be the guide.

Recent years have seen calls for AI and computational researchers to more closely engage with the ethical implications of their work. Green [26] implores researchers to view themselves and their work through a political lens, asking not just how the systems they build will impact society, but also how even the problems and methods they choose to explore (and not explore) serve to normalize the types of research that ought to be done. Latonero [44] offers a critical view of technosolutionism as it has recently manifested in AI4SG efforts emerging from industry, such as Intel's TrailGuard AI, which detects poachers in camera trap images that have the potential to individually identify a person. Latonero argues that while companies may have good intentions, they often lack the ability to gain the expertise and local context required to tackle complex social issues. In a related spirit, Blumenstock [7] urges researchers not to forget "the people behind the numbers" when developing data-driven solutions, especially in development contexts. De-Arteaga et al. [19] focus on a specific subset of AI4SG dubbed machine learning for development (ML4D) and similarly express the importance of considering local context to ensure that researcher and stakeholder goals are aligned.

Also manifesting recently are meta-critiques of AI4SG specifically, which contend that the subfield is vaguely defined, with troubling implications. Moore [56] focuses specifically on how the choice of the word "good" can serve to distract from potentially negative consequences of the development of certain technologies, retorting that AI4SG should be re-branded as AI for "not bad." Malliaraki [51] argues that AI4SG's imprecise definition hurts its ability to progress as a discipline, since the lack of clarity around what values are held or what progress is being made hinders the ability of the field to establish specific expertise. Green [27] points out that AI4SG is sufficiently vague to encompass projects aimed at police reform as well as predictive policing, and therefore simply lacks meaning. Green also argues that AI4SG's orientation toward "good" biases researchers toward incremental technological improvements to existing systems and away from larger reformative efforts that could be better.

Others have gone further, calling for reforms in the field to say that "good" AI should seek to shift power to the traditionally disadvantaged. Mohamed et al. [55] put forth a decolonial view of AI that suggests that AI systems should be built specifically to dismantle traditional forms of colonial oppression. They provide examples of how AI can perpetuate colonialism in the digital age, such as through algorithmic exploitation via Mechanical Turk–style "ghost work" [25] or through algorithmic dispossession, in which disadvantaged communities are designed for without being allowed a seat at the design table. They also offer three "tactics" for moving toward decolonial AI, namely: (1) a critical technical practice to analyze whether systems promote fairness, diversity, safety, and mechanisms of anticolonial resistance; (2) reciprocal engagements that engender co-design between affected communities and researchers; and (3) a change in attitude from benevolence to solidarity, which again necessitates active engagement with communities and grassroots organizations. In a similar spirit, Kalluri [36] calls for researchers to critically analyze the power structures of the systems they design for and consider pursuing projects that empower the people they are intended to help. For example, researchers may seek to empower those represented in the data that enables the system, rather than the decision-maker who is privileged with a wealth of data. This could be accomplished by designing systems for users to audit or demand recourse based on an AI system's decision [36].

In tandem with these criticisms, there have been corresponding efforts to define AI4SG. Floridi et al. [22] provide a report of an early initiative to develop guidelines in support of AI for good. Therein, they highlight risks and opportunities for such systems, outline core ethical principles to be considered, and offer several recommendations for how AI efforts can be given a "firm foundation" in support of social good. Floridi et al. [23] later expand on this work by proposing a three-part account, which includes a definition, a set of guiding principles, and a set of "essential factors for success." They define AI4SG as "the design, development, and deployment of AI systems in ways that (i) prevent, mitigate, or resolve problems adversely affecting human life and/or the well-being of the natural world, and/or (ii) enable socially preferable and/or environmentally sustainable developments." Note that this disjunctive definition captures a broad spectrum of projects with widely diverging outcomes, still leaving open the possible critiques discussed above. However, their principles and essential factors contribute to establishing rough guidelines for projects in the field, as others have done [49, 81] in lieu of seeking a definition.

3 THE CAPABILITIES APPROACH
Clearly, AI for social good is in need of a stricter set of ethical criteria and stronger guidance. To understand what steps take us in the direction of a more equitable, just, and fair society, we must find the right conceptual tools and frameworks. Utilitarian ethics, a widely referenced framework, adopts the aggregation of utility (broadly understood) as the sole standard to determine the moral value of an action [68, 80].

According to classic utilitarianism, the right action to take in any given context is whichever action maximizes utility for society. This brand of utilitarianism seems particularly attractive in science-oriented circles, due to its claim that a moral decision can be made by simply maximizing an objective function. For example, in the social choice literature, Bogomolnaia et al. [8] argue for the use of utilitarianism as "it is efficient, strategyproof and treats equally agents and outcomes." However, the apparent transparency of the utilitarian argument obscures its major shortcomings. First, as with the problem of class imbalance in machine learning, a maximizing strategy will bias its objective towards the majority class. Hence, effects on minority and marginalized groups may be easily overlooked. Second, which course of action can maximize utility for society overall cannot be determined by simple cost–benefit analyses. Any such procedure involves serious trade-offs, and a moral decision requires that we acknowledge tensions and navigate them responsibly.

To take a recent example, the development of digital contact tracing apps in the wake of the COVID-19 pandemic represented a massive potential benefit for public health, yet posed a serious threat to privacy rights and all the values that privacy protects—including freedom of thought and expression, personal autonomy, and democratic legitimacy. Which course of action will maximize utility? The answer is not just controversial. What verdict is reached will necessarily depend on what fundamental choices and liberties individuals value most, and which they are willing to forgo. Furthermore, for some groups the losses may be more significant than for others, and this fact requires due consideration.

A related, though distinct, standard might put aside the ambition of maximizing good and adopt some threshold condition instead. As long as it generates a net improvement over the status quo, someone might say, an action can be said to promote social and public good. This strategy raises three additional problems: first, utility aggregates are insensitive to allocation and distribution; second, they are not sensitive to individual liberties and fundamental rights; and third, to the extent that they consider stakeholders' preferences, they do not take into account the idea that individuals' preferences depend on their circumstances, and that in better circumstances individuals could have different preferences [70]. Why should this matter? Because attempts to increase utility may be too easily signed off as progress while inflicting serious harm and contributing to the entrenchment of existing inequality.

For this reason, we believe that respect for liberty, fairness of distribution, and sensitivity to interpersonal variations in utility functions should serve as moral constraints, to channel utility increments towards more valuable social outcomes. Operating within these constraints suggests shifting the focal point from utility to a different standard. In the spirit of enabling capabilities, we suggest that such a standard should be based on an understanding of the kinds of lives individuals have reason to value, of how the distribution of resources in combination with environmental factors may create (or eliminate) opportunities for them to actualize these lives, and on designing projects that create fertile conditions for these opportunities to be secured.

This reorientation of the moral assessment framework is in line with a view of AI as a tool to shift power to individuals and groups in situations of disadvantage, as advocated by Kalluri [36]. What this shift means, in operational terms, is not yet fully elucidated. We therefore hope to contribute to this endeavor by exploring conceptual tools that may guide developers in the process, providing intermediate landmarks. We consider "capabilities" to be suitable candidates for this task, as they constitute a step in the direction of empowering disadvantaged individuals to decide what, in their view, that shift should consist of.

The capabilities approach, which stands at the intersection of ethics and development economics, was originally proposed by Amartya Sen and Martha Nussbaum [60, 70]. The approach focuses on the notion that all human beings should have a set of substantive liberties that allow them to function in society in the ways they choose [60]. These substantive liberties include both freedom from external obstruction and the external conditions conducive to such functioning. The capability set includes all of the foundational abilities a human being ought to have to flourish in society, such as freedom of speech, thought, and association; bodily health and integrity; and control over one's political and material environment, among others. The capabilities approach has inspired the creation of the United Nations Human Development Index (Figure 2), marking a shift away from utilitarian evaluations such as gross domestic product to people-centered policies that prioritize substantive capabilities [82].

Figure 2: The United Nations Human Development Index (HDI) was inspired by the capabilities approach, which serves as the foundation of the participatory approach that we propose. The above map from Wikimedia Commons shows the HDI rankings of countries around the world, where darker color indicates a higher index.

A capability-oriented AI does not rest content with increasing aggregate utility and respecting legal rights. It asks itself what it is doing, and how it interacts with existing social realities, to enhance or reduce the opportunities of the most vulnerable members of society to pursue the lives they have reason to value. It asks whether the social, political, and economic environment that it contributes to create deprives individuals of those capabilities or fosters the conditions in which they may enjoy equal substantive liberties to flourish. In this framework, a measure of social good is how much a given project contributes to bolster the enjoyment of substantive liberties, especially by members of marginalized groups. This kind of measure is respectful of individual liberties, sensitive to questions of distribution, and responsive to interpersonal variations in utility.

Before discussing the relation between AI and capabilities, we would like to dispel a potential objection to our framework. The capabilities approach has been criticized for not paying sufficient attention to groups, institutions, and social structures, thus rendering itself unable to account for power imbalances and dynamics [30, 40]. This characteristic would make the approach unappealing for a field that seeks to address injustices in the distribution of power. However, the approach does indeed place substantial emphasis on conceptualizing the factors that affect (particularly those which may enhance) individuals' power. Its emphasis on substantial opportunities is itself a way of conceptualizing what individuals have the power to do or to be. These powers are determined in large part by the broader social environment, which includes group membership, inter-group relations, institutions, and social practices. Furthermore, the approach explicitly focuses on enhancing the power of the most vulnerable members of society. In fact, the creation and enhancement of capabilities constitute intermediate steps on the way towards shifting power, promoting the welfare and broadening the liberties of those who have been historically deprived. For this reason, it provides a fruitful framework to assess genuine moral progress and a promising tool for AI projects seeking to promote social good.

Light discussion of the capabilities approach has begun inside the AI community. Moore [56] references the capabilities approach to call for greater accountability and individual control of private data. Coeckelbergh [16] proposes taking a capabilities approach to health care, particularly when using AI assistance to replace human care, but focuses on Nussbaum's proposed set of capabilities rather than eliciting desired capabilities from affected patients. We significantly build on these claims by discussing the potential of AI to enhance capabilities and argue that the capabilities approach goes hand-in-hand with a participatory approach. Two papers specifically ground their work in the capabilities approach; we highlight these as exemplars for future work in AI4SG. Thinyane and Bhat [79] invoke the capabilities approach, specifically to empower the agency of marginalized groups, to motivate their development of a mobile app to identify victims of human trafficking in Thai fisheries. Kumar et al. [42] conduct an empirical study of women's health in India through the lens of Nussbaum's central human capabilities.

4 AI AND CAPABILITIES
The capabilities approach may serve AI researchers and developers to assess the potential impact of projects and interventions along two dimensions of social progress: capability expansion and distribution. This section argues that, when they interact productively with other social factors, AI projects can contribute to equalizing and enhancing capabilities—and that therein lies their potential to bring about social good. For instance, an AI-based visual question answering system can enhance the capabilities of the visually impaired to access relevant information about their environment [38].

When considering equalizing and enhancing capabilities, it is important to notice that whether an individual or group enjoys a set of capabilities is not solely a matter of having certain personal characteristics or being free from external obstruction. External conditions need also be conducive to enable individuals' choices among the set of alternatives that constitutes the capability set [60]. This is why these liberties are described as substantial, as opposed to formal or negative. Hence, capabilities, or substantive liberties, are composed of (1) individuals' personal characteristics (including skills, conditions, and traits), (2) external resources they have access to, and (3) the configuration of their wider material and social environment [34].

Prior work at the intersection of technology and capabilities has addressed the potential of technology to empower users by enhancing their capabilities and choice sets. Johnstone [34] proposes the capabilities approach as a key framework for computer ethics, suggesting that technological objects may serve as tools or external resources that enhance individuals' range of potential action and choice [17, 39]. This is an important role that AI-based technologies may play in enhancing capabilities: providing tools for geographic navigation, efficient communications, accurate health diagnoses, and climate forecasts. Technology may also foster the development of new skills, abilities, and traits that broaden an individual's choice sets. We diagram in Figure 3 the relationship between an AI4SG project and an individual's characteristics, resources, and environments, and thus the potential of AI to alter (both positively and negatively) capability sets.

Technological objects may also become part of social structures, forming networks of interdependencies with people, groups, and other artifacts [62, 85]. When focusing on AI-based technologies, it is crucial to also acknowledge their potential to affect the social and material environment, which may render it more or less conducive
to secure capability sets for individuals and communities. For example, AI-based predictive policing may increase the presence of law enforcement agents in specific areas, which may in turn affect the capability sets of community members residing in those areas. If this presence increases security from, say, armed robbery, community members may enjoy some more liberties than they previously did. And yet, if this acts to reinforce the disparate impact of law enforcement on members of vulnerable communities, along with other inequalities, then it may not only diminish the substantive liberties of those individuals impacted, but also alter the way in which such liberties are distributed across the population.

Figure 3: AI for social good research projects can expand and equalize capabilities by fostering individuals' internal characteristics, providing external resources, or altering the wider material and social environment, thus creating capability sets to improve social welfare and fight injustice.

Assessing this kind of trade-off ought to be a crucial step in evaluating the ability of a particular project to promote social good. This assessment must be aligned with the kinds of choices and opportunities community members have reasons to pursue [72]. A project should not be granted the moral standing of being "for social good" if it leads to localized utility increments at the expense of reducing the ability of members of the larger community to choose the lives they value. Most importantly, this assessment must be made by community members themselves, as the following section argues.

5 COMMUNITY PARTICIPATION
As Ismail and Kumar [32] contend, communities should ultimately be the ones to decide whether and how they would like to use AI. If the former condition is met and the community agrees that an AI solution may be relevant and useful, the latter requires the inclusive design of an AI system through a close and continued relationship between the AI researcher and those impacted.

This close partnership is particularly important as the gross effects of AI-based social interventions on communities' capabilities are unlikely to be exclusively positive. Tradeoffs may be forced on designers and stakeholders; some are likely to be intergroup, and some, intragroup. For this reason, it is only consistent with the proposed approach that designers consult stakeholders from all groups on their preferred ways to navigate such tradeoffs. In other words, if PACT is focused on creating capabilities, it must enable impacted individuals to have a say on what alternatives are opened (or closed) to them.

Moreover, AI projects that incorporate stakeholders' choices into their design process may contribute to create what Wolff and De-Shalit [86] refer to as "fertile functionings": that is, functionings that, when secured, are likely to secure other functionings. Fertile functionings include, though are not limited to, the ability to work and the ability to have social affiliations. These are the kinds of functionings that either enable other functionings (e.g., control over one's environment) or reduce the risk associated with them.

Projects in AI that create propitious environments and enable individuals to make decisions over the capability sets they value in turn give those individuals the capability to function in a way that leads to the creation of other capabilities. If this kind of participatory space is offered to those who are the most vulnerable, AI could plausibly act against disadvantage and contribute to shifting the distribution of power. In this way, the PACT framework is aligned with the principles of Design Justice in prioritizing the voices of affected communities and viewing positive change as a result of an accountable collaborative process [18].

The PACT framework is also committed to the notion that, when applying capabilities to AI, we must include all stakeholders, especially vulnerable members of society. But how do we accomplish this concretely, and how does this affect the different steps in an AI4SG research project? In the following section, we will first introduce suggested mechanisms of a participatory approach rooted in capabilities, and then discuss how these come into play in (1) determining which capability sets to pursue at the beginning of a project, and (2) evaluating the success of an AI4SG system in terms of its impact, particularly on groups' sets of substantive liberties.

The approach we propose is diagrammed in Figure 4, which embeds community participation into each stage. By beginning first with defining capability sets, we resist the temptation to immediately apply established tools to address ingrained social issues, which would only restrict the possibilities for change [48]. These participatory approaches constitute bottom-up approaches for embedding value into AI systems, by learning values from human interaction and engagement rather than having them decided through the centralized and non-representative lens of AI researchers [46].

Figure 4: Our proposed approach to AI4SG projects (determine desired capability sets → project design + development → evaluate success), where the key is that stakeholder participation is centered throughout. We do not explicitly comment on the design and development of the project other than calling for the inclusion of community participation and iterative evaluation of success, in terms of whether the project realized the desired capability sets.

6 GUIDING PRINCIPLES FOR A PARTICIPATORY APPROACH
To direct the participatory approach we propose, we provide the following set of guiding questions, outlined in Figure 5 and elaborated on below. These questions are specifically worded to avoid binary answers of yes or no. When vaguely stated requirements are put forth as list items to check off, there is a risk of offering a cheap seal of approval on projects that aren't sufficiently assessed in terms of their ethical and societal implications. Instead, we expect that in nearly all cases, none of these questions will be fully resolved. Thus, these questions are meant to serve as a constant navigation guide to help researchers regularly and critically reassess their intended work. We begin our discussion of how a participatory approach to AI4SG would look with our first guiding question:

How are impacted communities identified and how are they represented in this process? Who represents historically marginalized groups?

This question is one of the most important and potentially difficult. As such, we encourage readers to research their domain of interest and seek partners as an important first step, but offer some advice from our experience. We often consider seeking partners at non-profits, community-based organizations, and/or (non-)governmental organizations (NGOs), and have even had some experience with non-profits or NGOs finding us. Finding these groups as an AI researcher may be done with the help of an institution's experts on partnerships (e.g., university or company officials focused on forming external partnerships), prior collaborators or acquaintances, discussions with peers within an institution in different departments (e.g., public health or ecology researchers), or "cold emails." Young et al. [87] provide further guiding questions for finding these partners, particularly in their companion guide [50]. In any case, some vetting and research is an important step.

AI researchers may also co-organize workshops, such as a pair of "AI vs. tuberculosis" workshops held by some of the authors in Mumbai, India, which brought together domain experts, non-profits, state and local health officials, industry experts, and researchers, identified by Mumbai-based collaborators who specialize in building relationships across health-focused organizations in India. The varied backgrounds of the stakeholders at these workshops were conducive to quickly identifying areas of greatest need across the spectrum of tuberculosis care in India, many of which would not have been considered by AI researchers alone. Further, these group forums sparked conversations between stakeholders that often would not otherwise communicate, but that led to initial ideas for solutions. This highlights that such approaches are often most successful when AI researchers move out of their usual spheres and into the venues of the domains that their projects impact.

At the inception of a project, the designers and initial partners should together identify all relevant stakeholders that will be impacted by the proposed project, bringing them on as partners or members of a (possibly informal) advisory panel. Every effort should be made to include at least one member from each impacted group. If stakeholders were initially omitted and are later identified, they should then be added. Initial consultation with panel members and partners should be carried out in a way that facilitates open and candid discussion to ensure all voices are represented and accounted for during the project design phase [87]. We additionally propose that during the lifetime of the project, the panel and partners should, if possible, be available for ongoing consultation at mutually agreed upon checkpoints to ensure the project continues to align with stakeholder values. Similar practices have been suggested by Perrault et al. [66], who advocate for close, long-term collaborations with domain experts. Likewise, the Design Justice network pursues non-exploitative and community-led design processes, in which designers serve more as facilitators than experts. This framework is modeled on the idea of a horizontal relation in which affected communities are given the role of domain experts, given their lived experience and direct understanding of projects' impact [18].

How are plans laid out for maintaining and sustaining this work in the long term, and how would the partnership be ended?

We believe partnership norms should be established at the beginning of the partnership, including ensuring that the community representatives and experiential experts have the power to end the project and partnership, in addition to the designers and other stakeholders. As noted by Madaio et al. [49], this should also necessitate an internal discussion amongst the research team to decide what, if any, criteria would be grounds for such a cessation, e.g., if it becomes clear that the problem at hand requires methods in which the research team does not have sufficient expertise. Discussions with the broader partners should also lay out the expected time scale and potential benefits and risks for all involved.

Once the relationship is underway, it should be maintained as discussed, with regular communication. When a project reaches the point where a potential deployment is possible, we advocate for an incremental deployment, such as the example of Germany's Teststrecken (test tracks) for self-driving cars, which sets constraints for autonomy, then expands these constraints once certain criteria are met [23].

[Figure 5: Guiding Principles for a Participatory Approach to AI4SG]

[Figure 5 shows three panels of guiding questions under the title "Guiding Principles for a Participatory Approach to AI4SG": (1) How are impacted communities identified? Represented? Including marginalized groups? Long-term plans for maintaining work? Compensation for their input? Are they valued as partners? How to incorporate divergent viewpoints? How are concerns addressed? (2) Determining Capability Sets: What functionings do stakeholders want to achieve? What functionings are priority for the most vulnerable? Are any of the capability sets fertile? Do the values of the project match those of the community? (3) Evaluating AI4SG: How does the new AI4SG system affect all stakeholders' capabilities, particularly those selected at the start? Are any capabilities or functionings negatively affected? What should the role of an AI researcher be?]

Figure 5: Guiding principles for the participatory approach we propose. The key message is to center the community's desired capabilities throughout, and to ensure that the goals of the project are properly aligned to do so without negatively impacting other capabilities.

What kind of compensation are stakeholders receiving for their time and input? Does that compensation respect them as partners?

Stakeholders must be respected as experiential or domain experts and be considered as partners. They must be compensated in some way, whether monetarily or otherwise. We advocate for compensation to avoid establishing an approach made under the guise of participatory AI that ends up being extractive [63]. One notable drawback to a participatory approach is the potential for these community interactions to become exploitative if not done with intention, compensation, or long-term interactions [76]. Collaborators who have significantly influenced the design or implementation of a project ought to be recognized as coauthors or otherwise acknowledged in resulting publications, presentations, media coverage, etc.

How can we understand and incorporate viewpoints from many groups of stakeholders?

Surveys or voting may seem a natural choice. However, a simple voting mechanism is risky, as it may find that a majority of the community favors, for example, building a heavily polluting factory near a river, while the impacted community living at the proposed site would object that this factory would severely degrade their quality of life. These concerns from the marginalized group must be given due weight. This emphasis on the welfare of marginalized groups is based on the premise that the evaluation of human capabilities must consider individual capabilities [59]. In short, all individual capabilities are valuable as ends in their own right; they should never be considered means to someone else's rights or welfare. We must, therefore, guard against depriving individuals of basic entitlements as a means to enhancing overall welfare.

Hence, we endorse a deliberative approach, which aims to uncover "overlapping consensus" [67]. This approach is based on the expectation that, in allowing diverse worldviews, value systems, and preference sets to engage in conversation, with appropriate moderation and technical tools, social groups may find a core set of decisions that all participants can reasonably agree with. Deliberative approaches to democracy have been operationalized by programs such as vTaiwan, which uses digital technology to inform policy by building consensus through civic engagement and dialogue [31]. vTaiwan uses the consensus-oriented voting platform Pol.is, which seeks to identify a set of common values upon which to shape legislation, rather than arbitrating between polarized sides [78]. Similarly, OPPi serves as a platform for consensus building through bottom-up crowd-sourcing [47]. This tool is tailored for opinion sharing and seeking, oriented towards finding common ground among stakeholders.

An alternative to fully public deliberative processes is to pursue targeted efforts such as Diverse Voices panels [87], which focus on including traditionally marginalized groups in existing policy-making processes. Specifically, they advocate for informal elicitation sessions with partners, asking questions such as, "What do you do currently? What would support your work?" [37]. They also suggest considering whether tech should be used in the first place and highlight that a key challenge of participatory design is to determine a course of action if multiple participants disagree. We add that it remains a challenge to assemble these panels.

Simonsen and Robertson [75] provide multiple strategies as well, particularly with the goal of coming to a common language and fostering communication between groups of experts from different backgrounds. They suggest strategies to invite discussion, such as games, acting out design proposals, storytelling, group brainstorms of what a perfect utopian solution would look like, participatory prototyping, and probing. In the case of probing, for example, one strategy was to provide participants with a cultural probe kit, consisting of items such as a diary and a camera, in order to understand people's reactions to their environments [24].

In the fair machine learning literature, Martin Jr. et al. [52] propose a participatory method called community-based system dynamics (CBSD) to bring stakeholders in to help formulate a machine learning problem and mitigate bias. Their method is designed to understand causal relationships, specifically feedback loops, particularly those in high-stakes environments such as health care or criminal justice, that may disadvantage marginalized and vulnerable groups. This process is intended to bring in relevant stakeholders and recognize that their lived experience makes these participants more qualified to recognize the effects of these interventions. Using visual diagrams designed by impacted community members, the CBSD method can help identify levers that may help enable or inhibit functionings, particularly for those who are most vulnerable. Similarly, Simonsen and Robertson [75] suggest the use of mock-ups and prototypes to facilitate communication between experiential experts and developers.

Further, several fields in artificial intelligence have devoted a great deal of thought to learning and aggregating preferences: preference elicitation [12], which learns agent utility functions; computational social choice [9], which deals with truthful agents; and mechanism design [74], which deals with strategic agents. As an example, Kahng et al. [35] form what they call a "virtual democracy" for a food rescue program. They collect data about preferences on which food pantry should receive food, then use these preferences to create and aggregate the preferences of virtual voters. The goal is to make ethical decisions without needing to reach out to stakeholders each time a food distribution decision needs to be made. These fields have highlighted theoretical limitations in preference aggregation, such as Arrow's impossibility theorem [4], which reveals limitations in the ability to aggregate preferences with as few as three voters, but these results do not necessarily inhibit us from designing good systems in practice [53, 71]. Such areas provide rich directions for future research, particularly at the intersection of participatory methods and social science research.

What specific concerns are raised during the deliberative process, and how are these addressed?

Diverse Voices panels [87] with experiential experts (both from non-profits and from within the community) may also be used to identify potential issues by asking questions such as "What mistakes could be made by decision makers because of how this proposal is currently worded?" and, "What does the proposal not say that you wish it said?" Other strategies we discussed for understanding multiple viewpoints may also have a role to play here when tailored towards sharing concerns. However, we also stress the importance of ongoing partnerships with impacted community members beyond just an initial session. By ensuring that the community has a voice throughout the project's lifetime, their values are always kept front-and-center. As others have argued, it is not sufficient to interview impacted communities once simply to "check the participatory box" [76].

6.1 Determining Capability Sets

Now, equipped with some guiding questions and concrete examples of a participatory approach, we will apply these principles directly to selecting capability sets at the outset of an AI4SG project. To identify what capabilities to work towards, some scholars such as Nussbaum [59] have proposed well-defined sets of capabilities and argued that society ought to focus on enabling those capabilities. However, we endorse the view that the selection of an optimal set of capabilities should be based on community consensus [70]. We argue that an AI project can only be for social good if it is responsive to the values of the communities affected by the AI system.

What functionings do members of the various stakeholder groups wish they could achieve through the implementation of the project?

Other strategies discussed for understanding multiple viewpoints may also apply if tailored towards determining capabilities.

What functionings are the priority for those most vulnerable? Is there an overlap between their priorities and the goals of other stakeholders?

We need to pay attention to those who are most vulnerable. The capabilities approach may be leveraged to fight inequality by thinking in terms of capability equalizing. To identify capabilities that are not yet available to marginalized members of a community, we must listen to their concerns and ensure those concerns are prioritized, for example via strategies proposed in our discussions on finding those impacted by AI systems and including multiple viewpoints.

As an example of the consequences of failing to include those most vulnerable throughout the lifetime of an AI system, consider one of the initial steps: data collection. Women are often ignored in datasets and therefore their needs are underreported [20]. For example, crash-test dummies were designed to resemble the average male, and vehicles were evaluated to be safe based on these male dummies, leaving women 47% more likely to be seriously injured in a crash [65]. These imbalances are often also intersectional, as Buolamwini and Gebru [10] demonstrate by revealing stark racial and gender-based disparities in facial recognition.

Beyond the inclusion of all groups in datasets, data must be properly stratified to expose disparities. During the COVID-19 pandemic in December 2020, the Bureau of Labor Statistics reported a loss of 140,000 net jobs. The stratified data reveal that all losses were women's: women lost 156,000 jobs while men gained 16,000, and unemployment was most severe for Black, Latinx, and Asian women [21]. Without accounting for the capabilities of all people affected by such systems, it is difficult to claim that these technologies were for social good.

On the other hand, prioritizing the needs of the most marginalized groups may at times offer an accelerated path towards achieving collective goals. Project Drawdown identified and ranked 100 practical solutions for stopping climate change [29]. Number 6 on its list was the education of girls, recognizing that women with higher levels of education marry later and have fewer children, as well as manage agricultural plots with greater yield. Another solution advocates for securing land tenure of indigenous peoples, whose stewardship of the land fights deforestation, resource extraction, and monocropping.
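The tension raised above, between a simple majority vote and rules that give weight to lower-ranked preferences, can be made concrete with a small aggregation sketch. Everything below (the three options, the group sizes, the rankings) is invented purely for illustration; it is not data from any project discussed in this paper:

```python
from collections import defaultdict

def plurality(ballots):
    """Pick the option with the most first-choice votes."""
    tally = defaultdict(int)
    for ranking in ballots:
        tally[ranking[0]] += 1
    return max(tally, key=tally.get)

def borda(ballots):
    """Borda count: an option ranked p-th out of n earns n - 1 - p points."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for pos, option in enumerate(ranking):
            scores[option] += n - 1 - pos
    return max(scores, key=scores.get)

# A slim majority mildly prefers the factory; the impacted riverside
# community ranks it dead last.
ballots = [["factory", "park", "clinic"]] * 6 + \
          [["park", "clinic", "factory"]] * 5

print(plurality(ballots))  # factory -- first choices alone bury the objection
print(borda(ballots))      # park -- a rank-sensitive rule surfaces the compromise
```

Rank-sensitive rules are no panacea: Arrow-style results guarantee that any aggregation rule violates some desirable property, which is one reason to treat aggregation as a complement to, not a substitute for, direct deliberation with stakeholders.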

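The December 2020 jobs numbers above are a compact example of how an unstratified aggregate hides who bears a loss; the check is one line of arithmetic (figures copied from the cited fact sheet):

```python
# Net figure reported in aggregate vs. the stratified breakdown.
jobs_change = {"women": -156_000, "men": 16_000}

net = sum(jobs_change.values())
print(net)  # -140000, the headline net loss

# Groups with a net loss -- here, only women.
losers = [group for group, change in jobs_change.items() if change < 0]
print(losers)  # ['women']
```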
Are any of these capability sets fertile, in the sense of securing other capabilities?

To maximize the capabilities of various communities, we may wish to focus on capabilities that produce fertile functionings, as discussed in Section 5. Specifically, many functionings are necessary inputs to produce others; for example, achieving physical fitness from playing sports requires as input good health and nourishment [15]. Some of Nussbaum's 10 central capabilities, including bodily integrity (security against violence and freedom of mobility) and control over one's environment (right of political participation, to hold property, and dignified work), may be viewed as fertile [59]. Reading, for instance, may secure the capability to work, associate with others, and have control over one's environment. AI has the potential to help achieve many of these capabilities. For example, accessible crowdwork, done thoughtfully, offers the opportunity for people with disabilities to find flexible work without the need for transit [88].

How closely do the values of the project match those of the community as opposed to the designers? How does the focus of the project respond to their expressed needs and concerns? Does the project have the capacity to respond to those needs?

Consider the scenario where a community finds it acceptable to hunt elephants, while the designers are trying to prevent poaching. There could be agreement on high-level values, such as public health, but disagreement on whether to prioritize specific interventions to promote public health. There could even be a complete lack of interest in the proposed AI system by the community.

At an early stage of a project, AI researchers need to facilitate a consultation method to understand communities' values and choices. Note that this process could allow stark differences in priority between stakeholders to surface which prevent the project from starting, so as not to over-invest in a project that will later be terminated because of differences in values. We may consider several of the strategies we discussed previously in this case, such as deliberative democratic processes, Diverse Voices panels, or computational social choice methods. It may subsequently be necessary to end the project if these strategies do not work.

6.2 Evaluating AI for Social Good

Once AI researchers and practitioners have a system tailored to these capabilities, we believe that communities should be the ones to judge the success of this new system. This stage of the process may pose additional challenges, given the difficulty of measuring capabilities [34]. Though we do not here endorse any measurement methodology, various attempts to operationalize the capabilities approach give us confidence that such methodologies are feasible and may be implemented in the course of evaluating AI projects [3].

First and foremost, we maintain that the evaluation of success should be done throughout the lifecycle of the AI4SG project (and beyond) as discussed above, especially via community feedback. However, we wish to emphasize that we as AI researchers need to keep capabilities in mind as we evaluate the success of AI4SG projects to avoid "unintended consequences" and a short-sighted focus on improved performance on metrics such as accuracy, precision, or recall.

How does the new AI4SG system affect all stakeholders' capabilities, particularly those selected at the start?

This question is related to the literature on measuring capabilities [34], and is therefore difficult to answer. We will aim to provide a few examples, which may not apply well to every AI4SG system, nor will it be an exhaustive list of techniques that could be valid. First, based on the idea of AI as a diagnostic to measure social problems [1], we may be able to (partially) probe this question using data. Sweeney [77] shows through ad data that advertisements related to arrest records were more likely to be displayed when searching for "Black-sounding" names, which may affect a candidate's employment capability, for example. Obermeyer et al. [61] analyze predictions, model inputs and parameters, and true health outcome data to show that the capability of health is violated for Black communities, as they are included in certain health programs less frequently than white communities due to a biased algorithm.

There could additionally be feedback mechanisms when an intervention is deployed, whether via the AI system itself, or possibly by collaborating with local non-profits and NGOs. This may be especially useful in cases where these organizations are already engaged in tracking and improving key capabilities such as health outcomes, e.g., World Health Partners [14] or CARE International [33]. Again, these examples will likely not apply to all cases, which opens the door for further, interdisciplinary research. However, no matter what strategy is taken, it is imperative that we continue to center communities in all attempts to measure the effects on stakeholders' capabilities.

Are other valued capabilities or functionings negatively affected as a result of the project? Are stakeholders' values and priority rankings in line with such tradeoffs?

We have a responsibility to actively think about possible outcomes; it is neglect to dismiss negative possibilities as "unintended consequences" [64]. These participatory mechanisms should thus ensure that the perspective of the most vulnerable and most impacted stakeholders is given due consideration. We recognize that this can be especially challenging, as discussed in our first guiding question for identifying impacted communities. Therefore, we further suggest that the evaluation of an AI4SG project should employ consultation mechanisms that are open to all community members throughout the implementation process, such as the feedback mechanisms suggested previously.

What should my role be as an AI researcher? As a student?

We believe that AI researchers at all levels should participate in this work. This work involves all of the above points, including learning about the domain to understand who stakeholders are, discussing with the stakeholders, and evaluating performance. We also acknowledge that we AI researchers may not always be the best suited to lead participatory efforts, and so encourage interdisciplinary collaborations between computer science and other disciplines, such as the social sciences. A strong example would be the Center for Analytical Approaches to Social Innovation (CAASI) at the University of Pittsburgh, which brings together interdisciplinary teams from policy, computing, and social work [11]. However, AI researchers should not completely offload these important scenarios to social science or non-profit colleagues. It should be a team effort, which we believe will bring fruitful research in social science, computer science, and even more disciplines.

AI researchers and students can also advocate for systematic change from within, which we discuss more in depth in the conclusion. Although student researchers are limited by constraints such as research opportunities and funding, they may establish a set of moral aspirations for their work and set aside time for people-centered activities, such as mentoring and community-building [13].

7 CONCLUSION: THOUGHTS ON AI FOR SOCIAL GOOD AS A FIELD

In this paper, we lay out a community-centered approach to defining AI for social good research that focuses on elevating the capabilities of those members who are most marginalized. This focus on capabilities, we argue, is best enacted through a participatory approach that includes those affected throughout the design, development, and deployment process, and gives them ground to choose their desired capability sets as well as influence how they wish to see those capabilities realized.

We recognize that the participatory approach we lay out requires a significant investment of time, energy, and resources beyond what is typical in AI or even much existing AI4SG research. We highlight this discrepancy to urge a reformation within the AI research community to reconsider existing incentives to encourage researchers to pursue more socially impactful work.

Institutions have the power to catalyze change by (1) establishing requirements for community engagement in research related to public-facing AI systems; and (2) increasing incentives for researchers to meaningfully engage impacted communities while simultaneously producing more impactful research [6].

The incentive structure in AI research is often stacked against thoughtful deployment. Whereas a traditional experimental section may take as little as a week to prepare, a deployment in the field may take months or years, but is rarely afforded corresponding weight by reviewers and committee members. This extended timeline weighs most heavily on PhD students and untenured faculty, who are evaluated on shorter timescales. We should thus reward both incremental and long-term deployment, freeing researchers from the pressure to rush to deployment before an approach is validated and ready.

While engaging in collaborative work with communities can give rise to some technical directions of independent interest to the AI community [19], such a shift to encourage community-focused work will in part require reconsidering evaluation criteria used when reviewing papers at top AI conferences. Greater value must be placed on papers with positive social outcomes, including those with potential for impact, if the work has not yet been deployed. Such new criteria are necessary since long-term, successful AI4SG partnerships often also lead to non-technical contributions, as well as situated programs which do not necessarily focus on generalizability [66]. We encourage conferences to additionally consider rewarding socially beneficial work with analogies to Best Paper awards, such as the "New Horizons" award from the MD4SG 2020 Workshop, and institutions to recognize impactful work, such as the Social Impact Award at the Berkeley School of Information [58].

In the meantime, we suggest that researchers look to nontraditional or interdisciplinary venues for publishing their impactful community-focused work. These venues often gather researchers from a variety of disciplines outside computer science, opening the door for future collaborations. For example, researchers could consider COMPASS, IAAI, MD4SG/EAAMO [2], the LIMITS workshop [57], and special tracks at AAAI and IJCAI, among others. Venues such as the Computational Sustainability Doctoral Consortium and the CRCS Rising Stars workshop bring students together from multiple disciplines to build relationships with each other. Researchers could also consider domain workshops and conferences, such as those in ecology or public health.

In addition to the need for bringing stakeholders into the design process of AI research, we must ensure that all communities are welcomed as AI researchers as well. Such an effort could counteract existing disparities and inequities within the field. For example, as is the case in other academic disciplines, systemic anti-Blackness is ingrained in the AI community, with racial discrepancies in physical resources such as access to a secure environment in which to focus, social resources such as access to project collaborators or referrals for internships, and measures such as the GRE or teacher evaluations [28]. Further, as of 2018, only around 20% of tenure-track computer science faculty were women [69]. To combat these inequities, people across the academic ladder may actively work to change who they hire, with whom they collaborate, including collaborations with minority-serving institutions [41], and how much time they spend on service activities to improve diversity efforts [28].

The above reformations could contribute greatly to making the use of participatory approaches the norm in AI4SG research, rather than the exception. PACT, we argue, is a meaningful new way to answer the question: "what is AI for social good?" We, as AI researchers dedicated to the advancement of social good, must make a PACT with communities to find our way forward together.

ACKNOWLEDGMENTS

We would like to thank Jamelle Watson-Daniels and Milind Tambe for valuable discussions. Thank you to all those we have collaborated with throughout our AI4SG journey for their partnership, wisdom, guidance, and insights. This work was supported by the Center for Research on Computation and Society (CRCS) at the Harvard John A. Paulson School of Engineering and Applied Sciences.

REFERENCES

[1] Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson. 2020. Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 252–260.
[2] Rediet Abebe and Kira Goldner. 2018. Mechanism design for social good. AI Matters 4, 3 (2018), 27–34.
[3] Paul Anand, Graham Hunter, Ian Carter, Keith Dowding, Francesco Guala, and Martin Van Hees. 2009. The development of capability indicators. Journal of Human Development and Capabilities 10, 1 (2009), 125–152.
[4] Kenneth J. Arrow. 1950. A difficulty in the concept of social welfare. Journal of Political Economy 58, 4 (1950), 328–346.
[5] Abeba Birhane and Olivia Guest. 2020. Towards decolonising computational sciences. Women, Gender and Research (2020).
[6] Emily Black, Joshua Williams, Michael A. Madaio, and Priya L. Donti. 2020. A call for universities to develop requirements for community engagement in AI research. In The Fair and Responsible AI Workshop at the 2020 CHI Conference on Human Factors in Computing Systems.
[7] Joshua Blumenstock. 2018. Don't forget people in the use of big data for development. Nature 561 (2018), 170–172.
[8] Anna Bogomolnaia, Hervé Moulin, and Richard Stong. 2005. Collective choice under dichotomous preferences. Journal of Economic Theory 122, 2 (2005), 165–184.
[9] Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia. 2016. Handbook of Computational Social Choice. Cambridge University Press.
[10] Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. PMLR, 77–91.
[11] CAASI. 2021. Center for Analytical Approaches to Social Innovation. http://caasi.pitt.edu/
[12] Urszula Chajewska, Daphne Koller, and Ronald Parr. 2000. Making rational decisions using adaptive utility elicitation. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-00). 363–369.
[13] Alan Chan. 2020. Approaching ethical impacts in AI research: The role of a graduate student. In Resistance AI Workshop at NeurIPS.
[14] Annapurna Chavali. 2011. World Health Partners. Hyderabad, India: ACCESS Health International Centre for Emerging Markets Solutions, Indian School of Business (2011).
[15] David A. Clark. 2005. Sen's capability approach and the many spaces of human well-being. The Journal of Development Studies 41, 8 (2005), 1339–1368.
[16] Mark Coeckelbergh. 2010. Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice 13, 2 (2010), 181–190.
[17] Julie E. Cohen. 2012. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press.
[18] Sasha Costanza-Chock. 2020. Introduction: #TravelingWhileTrans, Design Justice, and Escape from the Matrix of Domination. In Design Justice (1st ed.). https://design-justice.pubpub.org/pub/ap8rgw5e
[19] Maria De-Arteaga, William Herlands, Daniel B. Neill, and Artur Dubrawski. 2018. Machine learning for the developing world. ACM Transactions on Management Information Systems (TMIS) 9, 2 (2018), 1–14.
[20] Catherine D'Ignazio and Lauren F. Klein. 2020. Data Feminism. MIT Press.
[21] Claire Ewing-Nelson. 2021. All of the jobs lost in December were women's jobs. National Women's Law Center Fact Sheet (2021).
[22] Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, et al. 2018. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines 28, 4 (2018), 689–707.
[23] Luciano Floridi, Josh Cowls, Thomas C. King, and Mariarosaria Taddeo. 2020. How to design AI for social good: Seven essential factors. Science and Engineering Ethics 26, 3 (2020), 1771–1796.
[24] Bill Gaver, Tony Dunne, and Elena Pacenti. 1999. Design: cultural probes. Interactions 6, 1 (1999), 21–29.
[25] Mary L. Gray and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books.
[26] Ben Green. 2018. Data science as political action: grounding data science in a politics of justice. Available at SSRN 3658431 (2018).
[27] Ben Green. 2019. "Good" isn't good enough. In AI for Social Good Workshop at NeurIPS.
[28] Devin Guillory. 2020. Combating anti-Blackness in the AI community. arXiv preprint arXiv:2006.16879 (2020).
[29] Paul Hawken. 2017. Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming. Penguin.
[37] Michael Katell et al. 2020. Toward situated interventions for algorithmic equity: lessons from the field. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 45–55.
[38] Jin-Hwa Kim, Soohyun Lim, Jaesun Park, and Hansu Cho. 2019. Korean localization of visual question answering for blind people. In AI for Social Good Workshop at NeurIPS.
[39] Dorothea Kleine. 2013. Technologies of Choice?: ICTs, Development, and the Capabilities Approach. MIT Press.
[40] Christine Koggel. 2003. Globalization and women's paid work: expanding freedom? Feminist Economics 9, 2–3 (2003), 163–184.
[41] Caitlin Kuhlman, Latifa Jackson, and Rumi Chunara. 2020. No computation without representation: Avoiding data and algorithm biases through diversity. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
[42] Neha Kumar, Naveena Karusala, Azra Ismail, and Anupriya Tuli. 2020. Taking the long, holistic, and intersectional view to women's wellbeing. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 4 (2020), 1–32.
[43] Roberta Kwok. 2019. AI empowers conservation biology. Nature 567, 7746 (2019), 133–135.
[44] Mark Latonero. 2019. Opinion: AI for good is often bad. https://www.wired.com/story/opinion-ai-for-good-is-often-bad/
[45] Jason Edward Lewis, Angie Abdilla, Noelani Arista, Kaipulaumakaniolono Baker, Scott Benesiinaabandan, Michelle Brown, Melanie Cheung, Meredith Coleman, Ashley Cordes, Joel Davison, et al. 2020. Indigenous Protocol and Artificial Intelligence Position Paper. Concordia Spectrum (2020).
[46] Q. Vera Liao and Michael Muller. 2019. Enabling value sensitive AI systems through participatory design fictions. arXiv preprint arXiv:1912.07381 (2019).
[47] Adrian Liew, Wong Seong Cheng, Santosh Kumar, Wayne Neo, Yinxue Tay, Kim Ng, and Jennifer Lewis. Accessed 2021. The pulse of the people. https://www.oppi.live/
[48] Audre Lorde. 1984. The master's tools will never dismantle the master's house. In Sister Outsider: Essays and Speeches. Ten Speed Press.
[49] Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[50] Lassana Magassa, Meg Young, and Batya Friedman. 2017. Diverse Voices: a method for underrepresented voices to strengthen tech policy documents. https://techpolicylab.uw.edu/wp-content/uploads/2017/10/TPL_Diverse_Voices_How-To_Guide_2017.pdf
[51] Eirini Malliaraki. 2019. What is this "AI for Social Good"? https://eirinimalliaraki.medium.com/what-is-this-ai-for-social-good-f37ad7ad7e91
[52] Donald Martin Jr., Vinod Prabhakaran, Jill Kuhlberg, Andrew Smart, and William S. Isaac. 2020. Participatory problem formulation for fairer machine learning through community based system dynamics. In Eighth International Conference on Learning Representations (ICLR).
[53] Eric Maskin and Amartya Sen. 2014. The Arrow Impossibility Theorem. Columbia University Press.
[54] Sabelo Mhlambi. 2020. From rationality to relationality: Ubuntu as an ethical and human rights framework for artificial intelligence governance. Carr Center Discussion Paper Series (2020).
[55] Shakir Mohamed, Marie-Therese Png, and William Isaac. 2020. Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology (2020), 1–26.
[56] Jared Moore. 2019. AI for not bad. Frontiers in Big Data 2 (2019), 32.
[57] Bonnie Nardi, Bill Tomlinson, Donald J. Patterson, Jay Chen, Daniel Pargman, Barath Raghavan, and Birgit Penzenstadler. 2018. Computing within limits. Commun. ACM 61, 10 (2018), 86–93.
[58] Berkeley ISchool News. 2021. New award supports social good. https://www.ischool.berkeley.edu/news/2021/new-award-supports-social-good
[59] Martha C. Nussbaum. 2000.
Introduction: Feminism and International Develop- [30] Marianne Hill. 2003. Development as empowerment. Feminist economics 9, 2-3 ment. In Women and human development: the capabilities approach. Cambridge (2003), 117–135. University Press. [31] Yu-Tang Hsiao, Shu-Yang Lin, Audrey Tang, Darshana Narayanan, and Claudina [60] Martha C Nussbaum. 2011. Creating capabilities. Harvard University Press. Sarahe. 2018. vTaiwan: An empirical study of open consultation process in [61] Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Taiwan. SocArXiv (2018). Dissecting racial bias in an algorithm used to manage the health of populations. [32] Azra Ismail and Neha Kumar. 2021. AI in Global Health: The View from the Front Science 366, 6464 (2019), 447–453. Lines. In CHI Conference on Human Factors in Computing Systems. [62] Ilse Oosterlaken. 2015. Technology and human development. Routledge. [33] Barney Jeffries. 2017. CARE International Annual Report FY17. CARE Interna- [63] Rachel Pain and Peter Francis. 2003. Reflections on participatory research. Area tional (2017). 35, 1 (2003), 46–54. [34] Justine Johnstone. 2007. Technology as empowerment: A capability approach to [64] Nassim Parvin and Anne Pollock. 2020. Unintended by Design: On the Political computer ethics. Ethics and Information Technology 9, 1 (2007), 73–87. Uses of “Unintended Consequences”. Engaging Science, Technology, and Society [35] Anson Kahng, Min Kyung Lee, Ritesh Noothigattu, Ariel Procaccia, and Christos- (2020). Alexandros Psomas. 2019. Statistical foundations of virtual democracy. In Inter- [65] Caroline Criado Perez. 2019. Invisible women: Exposing data bias in a world national Conference on Machine Learning. 3173–3182. designed for men. Random House. [36] Pratyusha Kalluri. 2020. Don’t ask if artificial intelligence is good or fair, ask [66] Andrew Perrault, Fei Fang, Arunesh Sinha, and Milind Tambe. 2020. AI for Social how it shifts power. 
Nature 583, 7815 (2020), 169–169. Impact: Learning and Planning in the Data-to-Deployment Pipeline. AI Magazine [37] Michael Katell, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, 41, 4 (2020). Aaron Tam, Corinne Bintz, Daniella Raz, and PM Krafft. 2020. Toward situated [67] John Rawls. 1971. A Theory of Justice. Harvard University Press. [68] Christopher Robichaud. 2005. With great power comes great responsibility: On the [80] Mark Timmons. 2013. Moral theory: An introduction. Rowman & Littlefield moral duties of the super-powerful and super-heroic. Open Court. Publishers. [69] Joseph Roy. 2019. Engineering by the numbers. In Engineering by the numbers. [81] Nenad Tomašev, Julien Cornebise, Frank Hutter, Shakir Mohamed, Angela Piccia- American Society for Engineering Education, 1–40. riello, Bec Connelly, Danielle CM Belgrave, Daphne Ezer, Fanny Cachat van der [70] Amartya Sen. 1999. Development as Freedom. Alfred A. Knopf. Haert, Frank Mugisha, et al. 2020. AI for social good: unlocking the opportunity [71] Amartya Sen. 1999. The possibility of social choice. American economic review for positive impact. Nature Communications 11, 1 (2020), 1–6. 89, 3 (1999), 349–378. [82] Mahbub Ul Haq. 2003. The birth of the human development index. Readings in [72] Amartya Sen. 2017. Collective choice and social welfare. Harvard University Press. human development (2003), 127–137. [73] Zheyuan Ryan Shi, Claire Wang, and Fei Fang. 2020. Artificial intelligence for [83] Fei Wang and Anita Preininger. 2019. AI in health: state of the art, challenges, social good: A survey. arXiv preprint arXiv:2001.01818 (2020). and future directions. Yearbook of medical informatics 28, 1 (2019), 16. [74] Yoav Shoham and Kevin Leyton-Brown. 2008. Multiagent systems: Algorithmic, [84] Meredith Whittaker, Meryl Alper, Cynthia L Bennett, Sara Hendren, Liz Kaziunas, game-theoretic, and logical foundations. Cambridge University Press. 
Mara Mills, Meredith Ringel Morris, Joy Rankin, Emily Rogers, Marcel Salas, et al. [75] Jesper Simonsen and Toni Robertson. 2012. Routledge international handbook of 2019. Disability, Bias, and AI. AI Now Institute (2019). participatory design. Routledge. [85] Langdon Winner. 1980. Do artifacts have politics? Daedalus (1980), 121–136. [76] Mona Sloane, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. 2020. Partic- [86] Jonathan Wolff and Avner De-Shalit. 2007. Disadvantage. Oxford university press ipation is not a Design Fix for Machine Learning. arXiv preprint arXiv:2007.02423 on demand. (2020). [87] Meg Young, Lassana Magassa, and Batya Friedman. 2019. Toward inclusive tech [77] Latanya Sweeney. 2013. Discrimination in online ad delivery. Commun. ACM 56, policy design: a method for underrepresented voices to strengthen tech policy 5 (2013), 44–54. documents. Ethics and Information Technology 21, 2 (2019), 89–103. [78] Audrey Tang. 2020. Inside Taiwan’s new digital democracy. The Economist [88] Kathryn Zyskowski, Meredith Ringel Morris, Jeffrey P Bigham, Mary L Gray, and (2020). Shaun K Kane. 2015. Accessible crowdwork? Understanding the value in and [79] Hannah Thinyane and Karthik S Bhat. 2019. Apprise: Supporting the critical- challenge of microtask employment for people with disabilities. In Proceedings agency of victims of human trafficking in Thailand. In Proceedings of the 2019 of the 18th ACM Conference on Computer Supported Cooperative Work & Social CHI Conference on Human Factors in Computing Systems. 1–14. Computing. 1682–1693.