

ARTIFICIAL INTELLIGENCE and GENDER EQUALITY

Key findings of UNESCO’s Global Dialogue


Prepared in August 2020 by the United Nations Educational, Scientific and Cultural Organization, 7, place de Fontenoy, 75352 Paris 07 SP, France

© UNESCO 2020

This report is available in Open Access under the Attribution-ShareAlike 3.0 IGO (CC-BY-SA 3.0 IGO) license (http://creativecommons.org/licenses/by-sa/3.0/igo/). The present license applies exclusively to the text content of this report and to images whose copyright belongs to UNESCO. By using the content of this report, the users accept to be bound by the terms of use of the UNESCO Open Access Repository (http://www.unesco.org/open-access/terms-use-ccbysa-en).

The designations employed and the presentation of material throughout this report do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.

The ideas and opinions expressed in this report are those of the authors; they are not necessarily those of UNESCO and do not commit the Organization.

This Report was prepared by the Division for Gender Equality, UNESCO

Graphic design: Anna Mortreux

Printed by UNESCO. The printer is certified Imprim’Vert®, the French printing industry’s environmental initiative.

CONTENTS

Foreword
Introduction
FRAMING THE LANDSCAPE OF GENDER EQUALITY AND AI
   AI and Social Good
   The Imperatives of Gender Equality
GENDER EQUALITY AND AI PRINCIPLES
   Gender Equality in Existing Principles
RECOMMENDATIONS FOR INTEGRATING GE INTO AI PRINCIPLES
   Process of Principle Development
   Content of Principles
GENDER TRANSFORMATIVE OPERATIONALIZATION OF AI PRINCIPLES
   Awareness, Education and Skills
   Private Sector/Industry
   Other Stakeholders
ACTION PLAN AND NEXT STEPS
   Awareness
   Framework
   Coalition Building
   Capacity Building, Technical Assistance and Funding
   Research, Monitoring and Learning
SUMMARY OF RECOMMENDATIONS
ANNEXES
   Annex 1: Explicit references to gender equality in existing AI ethics principles
   Annex 2: Selected initiatives that address gender equality in AI
References
Bibliography

Foreword

Advancing gender equality through education, the sciences, culture, information and communication lies at the heart of UNESCO’s mandate, with Gender Equality constituting one of the two Global Priorities of the Organization since 2008. This means that UNESCO applies a gender equality lens to all its initiatives, including its normative work.

The present report builds on UNESCO’s previous work on gender equality and AI and aims to continue the conversation on this topic with a select group of experts from key stakeholder groups. In March 2019, UNESCO published a groundbreaking report, I’d Blush if I Could: closing gender divides in digital skills through education, based on research funded by the German Federal Ministry for Economic Cooperation and Development. This report featured recommendations on actions to overcome global gender gaps in digital skills, with a special examination of the impact of gender biases coded into some of the most prevalent AI applications. The recommendations concerning AI’s gender biases are urgent in light of the explosive growth of digital voice assistants such as Amazon’s Alexa and Apple’s Siri. Almost all voice assistants are given female names and voices, and express a ‘personality’ that is engineered to be uniformly subservient. As the UNESCO report explains, these biases are rooted in stark gender imbalances in digital skills education and are exacerbated by the gender imbalances of the technical teams developing frontier technologies at companies with significant gender disparities in their C-suites and corporate boards.

The release of I’d Blush if I Could has helped spark a global conversation on the gendering of AI technology and the importance of education to develop the digital skills of women and girls. Over 600 media reports have been published on it by outlets across the world, including The New York Times, The Guardian, Le Monde, El País, Der Spiegel, and many others. The conversation was subsequently taken up by influential global fora, such as the Web Summit in Lisbon, Portugal, considered the largest tech event in the world. At the Summit’s 2019 edition, which gathered over 70,000 participants, I had the honor of discussing the gender biases and stereotypes displayed by digital voice assistants during a fireside chat with the journalist Esther Paniagua, launching the DeepTech stage of the Summit.

Building on this momentum, UNESCO planned a follow-up conference to coincide with International Women’s Day in March 2020. The conference, entitled ‘Gender Equality and the Ethics of Artificial Intelligence: What solutions from the private sector?’ and funded by the German Government and Google-Exponent, aimed to continue the conversation with experts from the technology industry as well as from academia and research institutes. Unfortunately, we had to cancel this global event due to the COVID-19 pandemic.

Since rescheduling the conference was not possible, we decided to reorient our work and launch a Global Dialogue on Gender Equality and Artificial Intelligence (the Dialogue) with leaders in AI, digital technology and gender equality from academia, civil society and the private sector. We structured the Dialogue around eight questions that participants could answer either in writing or through a virtual interview session that was recorded.

The present report shares the main findings from experts’ contributions to UNESCO’s Dialogue on Gender Equality and AI, as well as additional research and analysis conducted by an external consultant, Jennifer Breslin1. The report provides recommendations on how to address gender equality considerations in AI principles. It also offers guidance to the public and private sectors, as well as to civil society and other stakeholders, regarding how to operationalize gender equality and AI principles. For

1 Executive Director of Futuristas, an organization that advocates for a broader science, technology, engineering, arts, and mathematics (STEAM) ecosystem that is inclusive, just and responds to the needs of society.


example, it highlights the need to increase general awareness within society regarding the negative and positive implications of AI for girls, women and gender non-binary people. Regarding the education sector, it also insists on the need to develop curricula and pedagogy that better integrate cross-disciplinary social sciences, ethics and technology at the secondary and tertiary educational levels.

The timing of the Global Dialogue on Gender Equality and AI is also propitious to take the conversation forward in order to ensure that AI and AI codes of ethics are part of the solution, rather than part of the problem, in global efforts to achieve gender equality. In November 2019, at the 40th session of the General Conference, UNESCO Member States unanimously decided to mandate the Organization with developing a global normative instrument on the ethics of artificial intelligence, to be submitted to the 41st session of the General Conference for its approval in November 2021. While UNESCO is preparing its draft Recommendation on the Ethics of Artificial Intelligence, the findings and recommendations of the Dialogue will provide stakeholders with the opportunity to reflect on how best to integrate gender equality considerations into such global normative frameworks.

This report is the result of teamwork. First, I am grateful to the experts and leaders in the field of AI for taking the time to either talk to me via video or respond to our questions in writing. Without their input, we would not have been able to produce this document. These experts are (in alphabetical order):
– Rediet Abebe, Junior Fellow, Society of Fellows, Harvard University
– Getnet Aseffa, CEO, iCog Labs
– Maria Axente, Responsible AI and AI for Good Lead, PwC
– Samara Banno, Tech/AI Expert, Women Leading in AI
– Daniela Braga, Founder and CEO, DefinedCrowd
– Margaret Burnett, Distinguished Professor, School of Electrical Engineering and Computer Science, Oregon State University
– Christine Chow, Director, Global Tech Lead, Head of Asia and Emerging Markets, Federated Hermes
– Lori Foster, Research Partner and Advisor, pymetrics
– Allison Gardner, co-Founder of Women Leading in AI
– Sara Kassir, Senior Policy and Research Analyst, pymetrics
– Genevieve Macfarlane Smith, Associate Director, Center for Equity, Gender and Leadership, UC Berkeley Haas School of Business
– Saralyn Mark, Founder and President, iGIANT
– Mike McCawley, Chief Architect, IBM Watson in Support
– Frida Polli, CEO, pymetrics
– Londa Schiebinger, John L. Hinds Professor of History of Science, Stanford University
– Elizabeth Stocks, Tech/AI Expert, Women Leading in AI
– Kelly Trindel, Head of Policy & I/O Science, pymetrics

I am indebted to Jennifer Breslin for travelling to Manhattan in March, during the scary early days of the pandemic, to meet with me, and for accepting without hesitation to collaborate on this project. She put in an extraordinary effort under difficult and tight timelines to conduct stellar research and prepare a draft. Blandine Bénézit has been indispensable in this process since November 2019. Despite the disappointment of the cancellation of the global conference she helped organize for 7 March 2020 at UNESCO, she rallied behind the idea of re-orienting the work and provided support with this Dialogue by preparing the questions, transcribing recordings and editing, with Anne Candau, the final report. I would be remiss if I did not mention Bruno Zanobia and Mary Joy Brocard from my team for their logistical, communications and design support for the Conference and for the Dialogue. I also wish to express my sincere gratitude to the staff, consultants and interns of UNESCO’s Division for Gender Equality; the International Women’s Day Programme Working Group; and the AI Intersectoral Task Team for their input and feedback at different stages of the process.

This report is the final product of a 3-year partnership and the generous financial support of the German Government. Our collaboration, which started with a wonderful, trusting working relationship with Norman Schraepel in 2017, continued with Dr Michael Holländer, both from the German Agency for International Cooperation (GIZ). I am particularly grateful to Michael for his unwavering support, trust, solidarity and flexibility, without which I would not have been able to navigate this initiative through unexpected developments, including a pandemic.

Saniye Gülser Corat
Director for Gender Equality, UNESCO
Paris, 26 August 2020

Introduction

Simply put, artificial intelligence (AI) involves using computers to classify, analyze, and draw predictions from data sets, using a set of rules called algorithms. AI algorithms are trained using large datasets so that they can identify patterns, make predictions, recommend actions, and figure out what to do in unfamiliar situations, learning from new data and thus improving over time. The ability of an AI system to improve automatically through experience is known as Machine Learning (ML).

While AI thus mimics the human brain, currently it is only good, or better than the human brain, at a relatively narrow range of tasks. However, we interact with AI on a daily basis in our professional and personal lives, in areas such as job recruitment and being approved for a bank loan, in medical diagnoses, and much more.

The AI-generated patterns, predictions and recommended actions are reflections of the accuracy, universality and reliability of the data sets used, and the inherent assumptions and biases of the developers of the algorithms employed. AI is set to play an even more important role in every aspect of our daily lives in the future. It is important to look more closely at how AI is affecting, and will affect, gender equality, in particular women, who represent over half of the world’s population.

Research, including UNESCO’s 2019 report I’d Blush if I Could: closing gender divides in digital skills through education, unambiguously shows that gender biases are found in Artificial Intelligence (AI) data sets in general and training data sets in particular. Algorithms and devices have the potential of spreading and reinforcing harmful gender stereotypes. These gender biases risk further stigmatizing and marginalizing women on a global scale. Considering the increasing ubiquity of AI in our societies, such biases put women at risk of being left behind in all realms of economic, political and social life. They may even offset some of the considerable offline progress that countries have made towards gender equality in the recent past.

AI is also having a negative impact on women’s economic and labour market opportunities by leading to automation. Recent research by the IMF1 and the Institute for Women’s Policy Research2 found that women are at a significantly higher risk of displacement due to job automation than men. Indeed, the majority of workers holding jobs that face a high risk of automation, such as clerical, administrative, bookkeeping and cashier positions, are women. It is therefore crucial that women are not left behind in terms of retraining and reskilling strategies to mitigate the impact of automation on job losses.

On the other hand, while AI poses significant threats to gender equality, it is important to recognize that AI also has the potential to make positive changes in our societies by challenging oppressive gender norms. For example, while an AI-powered recruitment software was found to discriminate against women, AI-powered gender decoders help employers use gender-sensitive language to write job postings that are more inclusive in order to increase the diversity of their workforce. AI therefore has the potential of being part of the solution for advancing gender equality in our societies.

Building on the momentum of the report I’d Blush if I Could and subsequent conversations held in hundreds of media outlets and influential conferences, UNESCO invited leaders in AI, digital technology and gender equality from the private


sector, academia and civil society to join the conversation on how to overcome gender biases in AI and beyond, understanding that real progress on gender equality lies in ensuring that women are equitably represented where corporate, industry and policy decisions are made. UNESCO, as a laboratory of ideas and standard setter, has a significant role to play in helping to foster and shape the international debate on gender equality and AI.

The purpose of UNESCO’s Dialogue on Gender Equality and AI was to identify issues, challenges, and good practices to help:
– Overcome the built-in gender biases found in AI devices, data sets and algorithms;
– Improve the global representation of women in technical roles and in boardrooms in the technology sector; and
– Create robust and gender-inclusive AI principles, guidelines and codes of ethics within the industry.

This Summary Report sets forth proposed elements of a Framework on Gender Equality and AI for further consideration, discussion and elaboration amongst various stakeholders. It reflects experts’ inputs to the UNESCO Dialogue on Gender Equality and AI, as well as additional research and analysis. This is not a comprehensive exploration of the complexities of the AI ecosystem in all its manifestations and all its intersections with gender equality. Rather, it is a starting point for conversation and action, with a particular focus on the private sector.

It argues for the need to:
1. Establish a whole-society view and mapping of the broader goals we seek to achieve in terms of gender equality;
2. Generate an understanding of AI Ethics Principles and how to position gender equality within them;
3. Reflect on possible approaches for operationalizing AI and Gender Equality Principles; and
4. Identify and develop a funded multi-stakeholder action plan and coalition as a critical next step.

FRAMING THE LANDSCAPE OF GENDER EQUALITY AND AI

‘Algorithmic failures are ultimately human failures that reflect the priorities, values, and limitations of those who hold the power to shape technology. We must work to redistribute power in the design, development, deployment, and governance of AI if we hope to realize the potential of this powerful advancement and attend to its perils.’ Joy Buolamwini3

AI AND SOCIAL GOOD

While AI can be used to make our daily lives easier, for example by driving our car or helping us find the perfect match on a dating app, it can also be used to help solve some of the world’s biggest and most pressing challenges. To this end, technology companies, such as Google and Microsoft, have put in place ‘AI for Good’ or ‘AI for Social Good’ programmes that seek to use AI to solve humanitarian and environmental challenges. For example, AI could contribute to predicting natural disasters before they happen, protecting endangered species or tracking diseases as they spread in order to eliminate them sooner.4

Although the term ‘AI for good’ is increasingly used by technology companies and civil society organizations, there is much less discussion about what actually constitutes ‘social good’. While using AI for social good is indeed commendable, it must start with a conversation about what is ‘good’ and with an understanding of the linkages between AI and our societal goals.

What are our values and what are the transformative goals to which AI is being applied? Are we continuing what has been described as an AI race and ‘frontierism’5, or moving to a more thoughtful model that better serves humanity? Aspirations and considerations have been raised around public interest and social good as a driver of AI (versus being at present largely concentrated in, and driven by, the private sector). UNESCO, for example, is advocating for a humanistic approach to AI that would ensure that AI contributes to the realization of the Sustainable Development Goals (SDGs) and of human rights frameworks. Others are asking for the decolonization of AI6, arguing that AI has become a tool of digital colonialism, whereby ‘poorer countries are overwhelmed by readily available services and technology, and cannot develop their own industries and products that compete with Western [corporations]’.7 Others still are calling for a use of AI that protects data rights and sovereignty, expands collective as well as individual choices, dismantles patriarchy, the neo-liberal order and late-stage/extractive capitalism, and promotes human flourishing over relentless economic growth.8 Gender equality is necessary for the realization of any and all of the goals above, as well as being an objective in and of itself. It thus needs to be mainstreamed and considered at the highest level of outcomes and societal imperatives.

However, establishing this is not necessarily a straightforward process. What happens when there are competing or different views of human values and social good? Or when ethical imperatives clash with each other or with national or local laws? And while it might be theoretically possible to find consensus on


grave harms to be avoided through AI, are we prepared to designate areas of social use where harm carries greater repercussions and apply higher standards of caution and accountability, or not pursue it at all?9 One must also be attentive to the potential negative consequences of AI in the name of social good. For example, an AI designed to increase efficiency or effectiveness could cause people to lose their jobs and thus their livelihoods. Moreover, if an AI system is poorly trained or designed, or used incorrectly, flaws may arise. In medicine, for example, ‘bad AI’ can lead to misdiagnoses. In that case, false-positive results could cause distress to the patients, or lead to wrong and unnecessary treatments or surgeries. Even worse, false-negative results could mean patients go undiagnosed until a disease has reached its terminal stage.10 AI also risks reproducing sexist, racist or ableist social structures if those developing and deploying it do not critically interrogate current practices. The complexity and nuances do not allow for easy solutions.

Secondly, we must understand the AI and technology ecosystem. What are the components and prevailing practices around incentives and investments, governance and power, and broader issues around technology and society? Who has access? Whose priorities are reflected? Who benefits and who is harmed? Who takes decisions? This requires an examination of systemic structural issues like governance, policy, regulation, funding, societal norms, avenues for participation, and the like.11

THE IMPERATIVES OF GENDER EQUALITY

Likewise, there is a need to increase our understanding of the landscape and imperatives of gender equality and women’s empowerment in order to appreciate and address how it interfaces with AI.

At the level of global women’s rights, the Beijing Platform for Action, the UN Convention on the Elimination of All Forms of Discrimination against Women, and the work of the UN Commission on the Status of Women are essential reference points for establishing a more comprehensive understanding of the persistent and entrenched structural and micro-level challenges of gender equality around the world.12 The UN Sustainable Development Goals provide another guide to realizing gender equality as a goal in and of itself (SDG 5) and as a necessary lever for achieving all the other interlinked SDGs.13

These bodies of work reflect commitments to, and measurement of progress around, issues such as:
– Women’s political participation and representation in decision-making;
– The elimination of discriminatory practices in institutions, budgets, law and access to justice;
– Changing negative social norms and gender stereotypes and valuing different ‘ways of knowing’ and social practices;
– Eliminating gender-based violence;
– Women’s unpaid care and domestic work;
– Women’s economic empowerment and financial inclusion;
– Gender-responsive social protection and access to basic services, infrastructure and digital inclusion;
– Women’s roles in peace and security and in humanitarian contexts;
– Strengthening women’s roles in environmental sustainability and resilience;
– Access to health care and sexual and reproductive health and rights;
– Access to quality education and life-long learning; and
– Women’s participation in culture, media and STEM, including traditional knowledge.

Context and issues of intersectionality14 must also drive efforts for transformative gender equality. Women are a multifaceted and heterogeneous group and have different experiences based on realities or characteristics which include: women


living in rural and remote areas; indigenous women; racial, ethnic or religious minority women; women living with disabilities; women living with HIV/AIDS; women with diverse sexual orientations and gender identities; younger or older women; migrant, refugee or internally displaced women or women in humanitarian settings. The importance of taking intersectionality into account in AI principles was raised by a number of participants in the UNESCO Dialogue (Abebe, Schiebinger, Smith). For example, Abebe noted that ‘much of the discourse around gender equality considerations can be narrow and assume uniform experience of people belonging to one gender and, as a result, AI principles for women, for instance, can be designed with specific kinds of women in mind.’ In addition, participants pointed to the importance of understanding gender as non-binary to ensure that gender equality principles for AI are as inclusive as possible.

It cannot be expected that the AI industry (nor even gender practitioners themselves) will become expert in each and every area of intervention necessary to achieve transformative gender equality. However, there needs to be a robust level of awareness of the vastness, complexity, and persistency of gender inequality, so as not to ever essentialize it or approach it in an overly simplistic way. It is critical that there are commitments and mechanisms to bring in experts and affected groups. Otherwise, it may only be the most egregious, obvious, and familiar forms of gender inequality – as understood by a more select group or based on select assumptions – that are addressed.

In order for these gender equality goals, values and considerations to be manifested in the technology space, some argue that there needs to be a ‘critique and framework that offers not only the potential to analyse the damaging effects of AI, but also a proactive understanding on how to imagine, design and develop an emancipatory AI that undermines consumerist, misogynist, racist, gender binarial and heteropatriarchal societal norms’.15

Developing an understanding of the gap between needs and action, and between promise and harm, in AI and gender equality to date would also prove informative and help to cut through hype, platitudes, and reductive approaches. This should include an examination of the displacing effect that AI has on women, who may be disproportionately represented in sectors that are undergoing automation. It should also include a more nuanced assessment of data representativeness and the systems-level efforts required to fix this. If one looks at women’s representation in the online contents that comprise the data training our AI, it is clear that we have a long way to go.16

Finally, a discussion on gender and AI must include a more serious interrogation of why gaps between men’s and women’s contributions to the development and use of AI, and digital technology in general, are not adequately shrinking and why gaps such as the digital gender divide are actually growing.17


GENDER EQUALITY AND AI PRINCIPLES

GENDER EQUALITY IN EXISTING PRINCIPLES

Direct references to gender equality and women’s empowerment in existing AI and ethics principles are scarce. A scan of several major AI and ethics principles, as well as meta-level analyses combining data from these principles in addition to the examination of principles from other relevant sources, points to a few different ways in which gender equality is generally being treated.

SUMMARIES OF THE META-ANALYSES

Several meta-level analyses have been undertaken – by Harvard, Nature Machine Intelligence, the AI Ethics Lab, and UNESCO18 – of the AI ethics principles that have been developed over the past few years by the private sector, governments, inter-governmental organizations, civil society and academia. The following are their findings with respect to the categories of principles and those most commonly found across frameworks. While there are commonalities, there is also divergence in meaning and interpretation. For example, while fairness is often included as one of the key principles, its definition usually varies across frameworks.

UNESCO undertook a review of AI principles and frameworks in 2020 and suggested the following five foundational values, from which flow closely linked principles and then policy actions to implement them. The foundational values are Human dignity; Human Rights and Fundamental Freedoms; Leaving no one Behind; Living in harmony; Trustworthiness; and the Protection of the Environment.19

UNESCO also references Recommendation 3C regarding AI of the UN Secretary-General’s High Level Panel on Digital Cooperation, which ‘has preliminarily identified consensus on the following fifteen principles: accountability, accessibility, diversity, explainability, fairness and non-discrimination, human-centricity, human-control, inclusivity, privacy, reliability, responsibility, safety, security, transparency, and trustworthiness. COMEST20 has also identified the following generic principles, many of which overlap with the ones above: human rights, inclusiveness, flourishing, autonomy, explainability, transparency, awareness and literacy, responsibility, accountability, democracy, good governance, and sustainability’.21

A FEMINIST VIEWPOINT

Other AI and ethics principles have been drafted by organizations that speak more specifically to gender equality and women’s rights. These perspectives and frameworks have not been fully considered in the above analysis (although it is possible that these views were provided in an effort to inform the development of the frameworks considered here).

Some call for feminist approaches that challenge the neo-liberal order, data exploitation, colonialism and AI’s lack of transparency.22 These approaches also warn against the risk of thinking that because AI is based on abstract mathematical algorithms, it can therefore reveal pure, objective and universal truths. In order to prevent people from relying on and trusting AI too much, scholars recommend ‘maintaining the focus on the humans behind the algorithms’.23 According to Sareeta Amrute, Associate Professor of Anthropology at the University of Washington, ‘making humans accountable for the algorithms they design


[Summary tables of the Harvard, AI Ethics Lab and Nature Machine Intelligence meta-analyses of AI ethics principles]


shows that the biases the algorithms produce are far from inevitable’.24 These approaches also challenge AI designers’ tendency to put the onus on the individual. As stated by Amrute: ‘most often, designers of technical systems begin with a standard user [in mind] and, in doing so, set into motion patterns of discrimination that are hidden by the assumption of system neutrality’.25

Instead, feminist scholars call for a reworked political and economic approach, as well as a relational approach to ethics. This means that rather than ethical imperatives being developed in the abstract, in a top-down fashion, they ask for more applied and embodied ethics that take into account people’s lived experiences, and ‘asks whose knowledge counts and in what ways’.26 Feminist theory thus ‘moves the discussion of ethics from establishing decontextualised rules to developing practices to train sociotechnical systems—algorithms and their human makers—to begin with the material and embodied situations in which these systems are entangled, which include from the start, histories of race, gender and dehumanisation’.27 As such, relational ethics allows for adjustments responding to ‘how subjects and technologies are aligned and realigned, attached and reattached to one another [as] a method for practicing ethics that critically assesses a situation, imagines different ways of living, and builds the structures that make those lives possible.’28

FEMINIST INTERNET PRINCIPLES

Though the Feminist Internet Principles29 are not geared toward AI, many of them – in broad content or in spirit – are relevant when one replaces ‘Internet’ with ‘AI’ and undertakes extrapolation. These should be adapted and built on. They further inform, and in some cases already reflect, the principles in frameworks under consideration in this paper.

In a more condensed form, they include:
– Access: universal, open, affordable, unconditional, meaningful and equal access to the internet; to information in diversity of languages, abilities, interests and contexts; and to usage of technology, including the right to code, design, adapt and critically and sustainably use and reclaim it, as well as a platform to challenge sexism and discrimination in all spaces;
– Movements and Political Participation: Technology is a place where social norms are negotiated, performed, imposed – and currently shaped by patriarchy – and is a space for feminist resistance and movement building. Governance of technology requires challenging patriarchal processes, democratizing policy making, diffusing ownership of and power in global and local networks, and requires gender equality advocates/specialists at the decision-making table;
– Economy: Challenge the capitalist logic that drives technology towards further privatization, profit and corporate control, and create alternative forms of economic power that are grounded in principles of cooperation, solidarity, commons, environmental sustainability, and openness; as well as commitment to free and open source software, tools and platforms;
– Expression: Claim the power of technology to amplify women’s narratives and lived realities, and defend the right to sexual expression. Object to efforts of state and non-state actors to control, surveil, regulate and restrict this sexual expression, for example through the labeling of content that restricts expression of sexuality (e.g. as pornography or harmful content);
– Agency: Build an ethics and politics of consent into the culture, design and terms of technology, with women’s agency lying in their ability to make informed decisions on what aspects of their public or private lives to share. Support the right to privacy and full control over personal data and information online at all levels, with a rejection of practices by states and private companies to use data for profit and to manipulate behavior, as well as surveillance practices by any actor to control and restrict women’s bodies, speech and activism;
– Memory: Right to exercise and retain control over personal history and memory, including being able to access all personal data and information online, to exercise control over this data, including who has access to it and under what conditions, and the ability to delete it forever;

11 GENDER EQUALITY AND AI PRINCIPLES

• Anonymity: Defend the right to be anonymous and reject all claims to restrict anonymity;

• Children and Youth: Call for the inclusion of the voices and experiences of young people in the decisions made about safety and security online, and promote their safety, privacy, and access to information. Recognize children's right to healthy emotional and sexual development, which includes the right to privacy and access to positive information about sex, gender and sexuality at critical times in their lives;

• Online Violence: Call on all stakeholders to address harassment and technology-related violence.

GENDER DIMENSIONS

When asked whether there exist any AI normative instruments or principles that successfully address gender equality, UNESCO Dialogue participants argued that these were either non-existent (Braga) or that current practices were insufficient (Abebe). Further reflections include:

• Smith: 'I have not seen any [AI normative instruments or principles that successfully address gender equality] and think it's a big gap.'

• Abebe: 'There are insufficient efforts and guidelines to ensure equitable benefit from and mitigation of harm of the development, deployment, access, and use of AI technologies.'

• Kassir: 'Legal standards need to be formalized, so that terms like "ethical" and "de-biased" are formally defined.'

Participants acknowledged that the existing AI principles of fairness, transparency, accountability and explainability tended to include gender equality, although sometimes only implicitly. As encouraging examples, participants pointed to existing frameworks such as the EU's Ethics Guidelines for Trustworthy Artificial Intelligence, the UN Human Rights Framework, the Asilomar AI Principles, Google's AI Principles, those of the Partnership on AI, and those of the Australian Government.

In examining the discussions on the meta-analyses, as well as reviewing the principles of the Institute of Electrical and Electronics Engineers (IEEE), the European Union (EU), UNESCO, the Organization for Economic Co-operation and Development (OECD), Microsoft and IBM as a sampling, the following indicates where and how gender equality, sex discrimination, or women's empowerment tends to surface.

EXPLICIT

Direct reference to gender can be found in provisions on non-discrimination, diverse teams, data privacy, vulnerable groups, accessibility, achieving the SDGs, and human values/beneficence, among others. However, these references sometimes appear in background sections, or mention gender-related issues alongside a long list of other issues. Most frequently, gender-related issues are categorized under 'fairness', which varies across frameworks but includes bias and discrimination.30

The Harvard meta-study noted: 'the Fairness and Non-discrimination theme is the most highly represented theme in our dataset, with every document referencing at least one of its six principles: "non-discrimination and the prevention of bias," "representative and high-quality data," "fairness," "equality," "inclusiveness in impact," and "inclusiveness in design."'31

With regard to fairness, Tannenbaum et al. point out that 'to date, there is no unified definition of algorithmic fairness.'32 This echoes Collett and Dillon's finding that there are 'more than twenty different definitions of fairness circulating in academic work, some focusing on group fairness and others focusing on individual fairness.'33 Given the absence of a unified definition, Tannenbaum et al. argue that 'the best approach is to understand the nuances of each application domain, make transparent how algorithmic decision-making is deployed and appreciate how bias can arise.'34

A list of examples of explicit references to gender equality in selected normative texts is presented in Annex 1.

12 ARTIFICIAL INTELLIGENCE AND GENDER EQUALITY

IMPLICIT OR ALIGNED VALUES AND OBJECTIVES WITH GENDER EQUALITY

The second category is that of implicit reference, or generally aligned values and objectives between stated principles (or their implementation mechanisms) and reducing gender-based discrimination or realizing gender equality and women's empowerment. These include:

• Gender as Part of 'Groups' or under Bias/Discrimination/Equality-related Principles: Women and gender status are assumed to be included in the often-mentioned categories of marginalized groups, vulnerable groups, those suffering from discrimination and inequalities, and excluded groups, and in framings such as 'regardless of group (…) or certain attribute' type language, references to equality, fairness (as noted above), and inclusiveness. The Harvard study notes that 'there are essentially three different ways that equality is represented in the documents in [their] dataset: in terms of human rights, access to technology, and guarantees of [equality] through technology. In the human rights framing, the Toronto Declaration notes that AI will pose "new challenges to equality" and that "[s]tates have a duty to take proactive measures to eliminate discrimination." In the access to technology framing, documents emphasize that all people deserve access to the benefits of AI technology, and that systems should be designed to facilitate that broad access. (…) The Montreal Declaration asserts that AI systems "must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge" and "must produce social and economic benefits for all by reducing social inequalities and vulnerabilities." This framing makes clear the relationship between the "equality" principle and the principles of "non-discrimination and the prevention of bias" and "inclusiveness in impact."'35

• Values and Impact – Human Values, Well-Being, Beneficence, Impact: A range of principles speak further to inclusiveness, impact, social good, and human well-being, all of which implicitly – in theory – include women and the realization of (aspects of) gender equality.

– Inclusiveness in Impact: According to the Harvard report, '"Inclusiveness in impact" as a principle calls for a just distribution of AI's benefits, particularly to populations that have historically been excluded.'36 The study revealed remarkable consensus in the language that documents employ to reflect this principle, including concepts like 'shared benefits' and 'empowerment'.

DOCUMENT LANGUAGE

ASILOMAR AI PRINCIPLES Shared Benefit: AI technologies should benefit and empower as many people as possible.

MICROSOFT’S AI PRINCIPLES Inclusiveness - AI systems should empower everyone and engage people. If we are to ensure that AI technologies benefit and empower everyone, they must incorporate and address a broad range of human needs and experiences. Inclusive design practices will help system developers understand and address potential barriers in a product or environment that could unintentionally exclude people. This means that AI systems should be designed to understand the context, needs and expectations of the people who use them.

PARTNERSHIP ON AI TENETS We will seek to ensure that AI technologies benefit and empower as many people as possible.

SMART DUBAI AI PRINCIPLES We will share the benefits of AI throughout society: AI should improve society, and society should be consulted in a representative fashion to inform the development of AI.

T20 REPORT ON THE FUTURE OF WORK AND EDUCATION Benefits should be shared: AI should benefit as many people as possible. Access to AI technologies should be open to all countries. The wealth created by AI should benefit workers and society as a whole as well as the innovators.

UNI GLOBAL UNION’S AI Share the Benefits of AI Systems: AI technologies should benefit and empower PRINCIPLES as many people as possible. The economic prosperity created by AI should be distributed broadly and equally, to benefit all of humanity.

Source: Fjeld, J. et al. 2020.


– Benefits: ‘The European High Level Expert gender equality and women’s rights. How is Group guidelines add some detail around this reconciled? what “benefits” might be shared: “AI – ‘Access to technology’: includes access systems can contribute to well-being by to technology itself, education, training, seeking achievement of a fair, inclusive and workforce and economic dimensions, and peaceful society, by helping to increase ability to use (Harvard). citizen’s mental autonomy, with equal – ‘Leveraged to benefit society’ and human distribution of economic, social and political rights: to illustrate the prevalence of this opportunity.” There is a clear connection to principle the Harvard study states that the principles we have catalogued under ‘Twenty-three of the documents in our the Promotion of Human Values theme, dataset (64%) made a reference of this kind. especially the principle of “leveraged to We also noted when documents stated benefit society.”’37 explicitly that they had employed a human – Inclusiveness in Design: Design teams and rights framework, and of the thirty-six 41 societal decision-making should contribute documents (14%) did so.’ to the discussion on what we use AI for, f and in what contexts, ‘specifically, that ‘Justice is mainly expressed in terms of there should be “a genuinely diverse and fairness, and of prevention, monitoring or inclusive social forum for discussion, to mitigation of unwanted bias and discrimination, enable us to democratically determine the latter being significantly less referenced which forms of AI are appropriate for than the first two by the private sector. Whereas our society.” The Toronto Declaration some sources focus on justice as respect for emphasizes the importance of including diversity, inclusion and equality, others call for end users in decisions about the design a possibility to appeal or challenge decisions or and implementation of AI in order to the right to redress and remedy. 
Sources also “ensure that systems are created and used emphasize the importance of fair access to AI, in ways that respect rights – particularly data, and the benefits of AI. Issuers from the the rights of marginalized groups public sector place particular emphasis on AI’s who are vulnerable to discrimination.” impact on the labour market and the need to This interpretation is similar to the address democratic or societal issues. Sources Multistakeholder Collaboration principle focusing on the risk of biases within datasets in [the] Professional Responsibility underline the importance of acquiring and category, but it differs in that it emphasizes processing accurate, complete and diverse data, 42 bringing into conversation all of society especially training data.’ – specifically those most impacted by AI – f Beneficence:‘While promoting good and not just a range of professionals in, for (“beneficence” in ethical terms) is often example, industry, government, civil society mentioned, it is rarely defined, though notable organizations, and academia.’38 exceptions mention the augmentation of human f Human Values: According to the authors of the senses, the promotion of human well-being and Harvard study, ‘the Promotion of Human Values flourishing, peace and happiness, the creation category consists of three principles: “human of socio-economic opportunities, and economic 43 values and human flourishing,” “access to prosperity.’ Strategies include: ‘minimizing technology,” and “leveraged to benefit society.”’39, power concentration or, conversely, using power often with links to human rights. 
“for the benefit of human rights”, working more – ‘The principle of “human values and human closely with “affected” people, minimizing flourishing” is defined as the development conflicts of interests, proving beneficence and use of AI with reference to prevailing through customer demand and feedback, and social norms, core cultural beliefs, and developing new metrics and measurements for 44 humanity’s best interests.’40 It should be human well-being.’ noted however, that national level ‘prevailing f Solidarity: ‘Solidarity is mostly referenced in social norms’ may in fact be discriminatory, relation to the implications of AI for the labour contradict international frameworks on market. Sources call for a strong social safety


net. They underline the need for redistributing the benefits of AI in order not to threaten social cohesion, and for respecting potentially vulnerable persons and groups. Lastly, there is a warning of data collection and practices focused on individuals that may undermine solidarity in favour of "radical [individualism]".'45

THE LESS OBVIOUS BUT STILL IMPORTANT

While not every principle may have an explicit or immediately evident link to gender equality, this does not mean the link does not exist. Other principles address Accountability, Responsibility, Transparency, Human Control of Technology, Privacy, Security, Remedy, and the like; if one correctly takes the view that AI is in its entirety a socio-technical system – i.e. a system that includes software, hardware and data, but also the people who build and use the technology, and the laws and regulations that determine how and what technology can be developed and for what purpose – then issues of gender equality are inevitably present throughout.

For example, in her contribution to UNESCO's Dialogue, Margaret Burnett, Distinguished Professor at Oregon State University, emphasized that to reveal its gender biases, an AI system has to be:

• Transparent, meaning that it is able to communicate how it is reasoning;
• Accountable, which is only possible if it is transparent; and
• Explainable to all, not just the educated and affluent few.

According to Tannenbaum et al., AI transparency is not just about being able to understand how an AI is reasoning; it is also about being 'completely transparent where and for what purpose AI systems are used', and about '[characterizing] the behavior of the system with respect to sex and gender.'46

The accessibility of AI also came up as a core principle in participants' contributions to UNESCO's Dialogue. Braga, for example, stated that 'everyone should have the right to access AI, like access to healthcare and education.' She said: 'Insofar as it is a service you have to pay for, it's difficult. It has to be more democratized. And no one should be discriminated against (age, race, colour, knowledge…). There is a level of digital skills to access this technology and some economic wealth, but these barriers should be diminished.' In a similar vein, Getnet Aseffa from Ethiopia's iCog Labs lamented the very high digital gender divide in Ethiopia and noted the need for the education system to do more to address gender equality.

Women – as affected groups, as stakeholders, as rights holders, as experts on gender equality and women's empowerment, and as creators and policy makers – should therefore be included and present in these capacities to generate awareness, inform, implement, and monitor all of the principles.

MISSING ELEMENTS

Finally, it is worth undertaking a review with gender equality experts and affected groups on where there are gaps, and whether there are either broader or sub-principles, or specific language, that should be included. Alternatively, is it enough that any gaps will be filled through interpretation, contextualization, and the operationalization phase?

RECOMMENDATIONS FOR INTEGRATING GE INTO AI PRINCIPLES

In order to effectively integrate gender equality into AI principles, it will be important to take a rights-based and comprehensive gender equality approach. This means giving substance, meaning, nuance, and specificity to principles. It requires filling gaps and addressing challenges at a systemic level. It also means being more anticipatory, considering future scenarios, and being proactive. Without such integration, abstract concepts risk being ignored, and necessary interventions may be absent altogether. The following are further considerations and needs.

PROCESS OF PRINCIPLE DEVELOPMENT

Ensure gender equality experts' and women's participation in the initial process of principle formulation and in their ongoing interpretation, application, monitoring, reporting, and recalibration. This is important whether it is done at an intergovernmental level, within sectors, or at the institutional level.

CONTENT OF PRINCIPLES

Where Gender is Located: Gender Equality as a Stand-alone Principle

Lessons from past efforts on gender and technology are worth considering. When gender is made explicit, and when it is left implicit, should be deliberate and thoughtful. It also matters where the explicit references are made. For instance, is gender equality in a long list, in a preamble, or in the main body of principles and in their implementation? Vague, overarching references, and placement where there is little 'teeth' or direct focus – or, conversely, where gender is everywhere and therefore nowhere – risk that gender receives less attention and follow-through, accountability or stricter monitoring. Notably, UNESCO states that 'concern for gender equality is sometimes bundled as a subset of a general concern for bias in the development of algorithms, in the datasets used for their training, and in their use in decision-making. However, gender is a much larger concern, which includes, among others, women's empowerment linked with the representation of women among developers, researchers and company leaders, as well as access to initial education and training opportunities for women, which all require the consideration of gender equality as a stand-alone principle.'47

Whole of Society, Systems, and Lifecycle Approach

Issues of gender equality and AI need to be considered with a lifecycle approach in terms of technology development and implementation, and a systems approach in terms of how AI is contextualized in structural issues and in the broader technology and society discussion. This is not just about specific algorithms or datasets. One aspect of this is recognizing relationships and power between actors, from local to larger scales, around private and public sectors, and


within society – between men and women in all spheres.48 The Harvard study notes that some approaches are 'technochauvinistic', seeking solutions in the data and technology itself. Instead, it recommends the approach of the Toronto Declaration: 'All actors, public and private, must prevent and mitigate against discrimination risks in the design, development and application of machine learning technologies. They must also ensure that there are mechanisms allowing for access to effective remedy in place before deployment and throughout a system's lifecycle.'49

Addressing Gender Equality in AI: Avoiding Harm, Increasing Visibility, and Contributing to Empowerment

Avoiding Harm: Much of the standard accounting for gender equality is around avoiding or mitigating harm. While some examples may be obvious, avoiding harm still requires considerable nuance, 'un-packing', and the input of gender equality experts, broadly and in specific domains. This requires proactive mitigation, monitoring, and bringing to light negative real-world impacts, as well as accepting that some things may not be fixable and therefore should not be done at all, or should ultimately be abandoned (e.g. Amazon's hiring algorithm, which remained biased after multiple attempts to fix it). Moreover, while eliminating or reducing harm is of utmost importance, it is too reductive to equate gender equality with this one objective.

Making the Invisible Visible: Issues of gender equality and women's empowerment can also be simply omitted and passed over. Although this is often discussed in the context of the unrepresentativeness of data, women and gender issues can also be invisible depending on which questions are being addressed, by whom, and where priorities for AI lie. Omission and failure to consider gender equality can also come through ignorance (willful or not) and unconscious bias. In the UNESCO Dialogue, a number of experts noted that they had seen instances of people making incorrect assumptions about gender – even amongst those who were trying to account for it – leading to missed opportunities or negative impacts.

Empowerment: Finally, there is a need to better understand and activate AI for emancipation and women's empowerment, for challenging norms and stereotypes, power dynamics, and so on. Can AI be used to create more empowering narratives and realities for girls and women? Can it be used to root out instances and patterns of discrimination and negative norms? Can it be used affirmatively to elevate girls and women? How can it be used to address positive goals around gender equality, like combatting gender-based violence, access to education, health, economic empowerment, political participation, peace and security, etc.?

A few of the UNESCO Dialogue participants insisted that we should be cautious when thinking about emancipatory or transformative AI (AI systems that would actively redress gender inequalities). In this respect, Gardner, co-founder of Women Leading in AI, said that it is important to 'understand the limitation of what can be achieved' with technology.

Smith argued that an 'emancipatory AI' could be dangerous if it means that gender biases are hidden behind good intentions. She wrote: 'Many Machine Learning (ML) systems seek to "do good" by tackling existing gender issues/inequalities through use of AI. This is great; however, they are still at risk of perpetuating harmful bias. As Ruha Benjamin, author of Race After Technology, notes, this "techno-benevolence" can cause significant harm, as [it] can be even harder to detect when the stated purpose is to override human [bias] or do good.'

Finally, representatives of pymetrics, a technology company that has developed an emancipatory AI-powered hiring tool, stressed that although emancipatory AI tools can have a positive and powerful impact for greater gender equality, they may also lead people to think that technology is a silver bullet that can solve problems of gender inequality on its own. The risk is that people become complacent and leave issues to technology. In the words of Sara Kassir from pymetrics: 'stakeholders may sometimes overestimate the extent to which an AI tool can fully address a complex issue like equity in the workforce. As the first stage in the talent pipeline, pymetrics takes its role in the talent selection process very seriously, but the simple reality is that an assessment platform is only one part of a very large ecosystem. Organizations that are committed to promoting gender equality need to recognize the importance of multi-pronged solutions. Fair AI evaluations


are not enough; disparities in education, training, retention, and promotion need to be addressed, and inclusive cultures need to be fostered.' She adds: 'technology is an important tool to mitigate bias, but it is not a comprehensive solution to social problems.'

Across these three dimensions of avoiding harm, increasing visibility and contributing to empowerment, it would be useful to take the strongest elements of existing frameworks that respond to these imperatives and use them to inform action.

Provision for Differentiated Impact on Women, Girls and Intersectionality (and Multiple Forms of Discrimination)

Whether or not gender and women are explicitly mentioned, there is still a need to further unpack the differentiated experience and impacts of AI for women and girls. These include things like redress for harm, where centuries of lessons, and more recent experiences of cyber-violence, show that access to justice can be challenging for women to realize. The Harvard analysis of AI frameworks likewise called for 'additional mapping projects that might illustrate narrower or different versions of the themes with regard to particular geographies or stakeholder groups.'50

Provision for Participation of Women/Girls and Gender Equality Experts

These affected groups (at the macro societal level, and at the micro level with specific use cases) should be included as developers of technology; as beneficiaries (if women and gender equality practitioners in various sectors use AI within their work); as informed and informing citizens and public (understanding the implications for women, and their ability to engage in public comment, conversations and policy decisions); and as monitors and within accountability mechanisms (impact audits, public bodies, etc.).

Points of Prioritization and Tradeoffs

There is a need to understand the complexity of, and tradeoffs between, issues of representative data collection versus privacy, monitoring versus censorship, data ownership and exploitation, and women's and gender equality advocates' personal security. For example, Buolamwini suggests that 'a common response to uncovering severe data imbalances is to collect more data; however, how data is collected, categorized, and distributed presents ethical challenges around consent and privacy along with societal challenges with the politics of classifications where social categories like race and gender become reified into the technical systems that increasingly shape society.'51

What is the process for considering gender issues in ‘negotiation’ of different principles and their realization? How do we ensure that women are not deprioritized?52


GENDER TRANSFORMATIVE OPERATIONALIZATION OF AI PRINCIPLES

While participants in UNESCO’s Dialogue acknowledged the importance of principles such as fairness and transparency, they lamented the lack of guidelines regarding their concrete implementation (Smith, Schiebinger, Chow). For example, Christine Chow asked: ‘How do we operationalize these principles at the industry-level?’ In her opinion, there is an urgent need to translate the ‘rhetoric and big statements’ into ‘action’. Tannenbaum et al. also state: ‘Numerous groups have articulated “principles” for human- centred AI. (…) What we lack are mechanisms for technologists to put these principles into practice.’53

Many others have also noted that there is a gap in putting principles into action, and that this needs to be remedied. How we operationalize gender-responsive principles will result – or not – in gender transformative AI. The following are illustrative actions that can be taken to begin to develop an effective AI for Gender Equality ecosystem. A comprehensive account of possibilities and steps should be developed.

AWARENESS, EDUCATION AND SKILLS

'The public is largely unaware of the ways in which AI shapes their lives, and there are few regulations that require disclosure about the use of the technology. Without awareness about the uses, risks, and limitations of AI, we remain at the mercy of entities that benefit from opaque AI systems, even when they propagate structural inequalities and violate civil rights and [liberties].' – Joy Buolamwini54

In order for society at large, girls and women, and the AI industry to truly understand, reap the benefits of, and prevent the negative impacts of AI, a robust approach is required to raise awareness and literacy, develop technical and ethical education and skills, and build capacities for AI application. Most UNESCO Dialogue participants made the point that more training and awareness raising was needed in order to educate people about AI: what it is, how it works and how it can impact gender equality.

The following are some suggested steps to address gender inequalities in AI applications:

• Shift the narrative, language and framing of AI to improve public understanding and discourse. Media, journalism and academia play critical roles in demystifying AI and promoting accuracy, an evidence base and accountability.55

• Improve education on AI and society, bias, and ethics, for technical and non-technical audiences, particularly among professions and in areas where there are gaps – e.g. engineers and ethics/social sciences. As UNESCO notes, 'global engineering education today is largely focused on scientific and technological courses that are not intrinsically related to the analysis of human values nor are overtly designed to positively increase human and environmental wellbeing. It is most important to address


this issue and to educate future engineers and computer scientists to develop ethically aligned design of AI systems. This requires an explicit awareness of the potential societal and ethical implications and consequences of the technology-in-design, and of its potential misuse.'56 It is similarly important to provide opportunities for gender equality advocates and practitioners to learn about and develop an understanding of what AI is, how it works, and the entry points for influence over its development and its use to advance their goals.

• Create programmes for the development of AI literacy, or 'algorithm awareness', for individual girls and women to critically assess personal and societal risk and benefit from the impacts of AI, as well as to empower them as public advocates and agents in its development.

• Enable gender equality advocates, practitioners and women in different sectors to identify opportunities and build adequate skills to integrate and apply AI in different situations, for emancipatory purposes and on a lifelong basis.

• Improve access to quality STEM education and careers for girls and women. There remains significant horizontal and vertical gender segregation57 in STEM education and careers, and in access to technology for learning and productive purposes. There are barriers across the STEM ecosystem preventing women from taking advantage of, and contributing to, opportunities in technology. AI is the latest example of these trends, which must be reversed. See UNESCO's Cracking the Code and I'd Blush if I Could reports for more detail on the gender divide in STEM and digital skills.

• Adopt a forward-looking view, so that girls and women are not playing catch-up in understanding and skills, and are not relegated to a reactive position.

PRIVATE SECTOR/INDUSTRY

Some of the following suggestions are 'generic' and cut across a number of ethical implications and interest groups beyond women. What is key is that a 'gender lens' is adopted where appropriate. Some of these topics are just suggestions; others are already being implemented in places. There is, however, further thought and discussion required in order to respond to many of the issues raised above.

THE BIGGER PICTURE OF ETHICS

There is a clear imperative for the AI industry to embrace robust and gender transformative ethical principles. To this end, Mozilla suggests that this requires shifting the conversation from putting the onus on individuals and 'personal behavior' to system change.58 In addition, Buolamwini notes the importance of reducing conflicts of interest between profit and AI ethics work.59

At the [industry] level, Christine Chow, Director, Global Tech Lead and Head of Asia and Emerging Markets at Federated Hermes, emphasized the importance of a coordinated top-down/bottom-up process for the drafting of AI principles. She pointed to Intel, IBM and Microsoft as good examples of such an approach. She also highlighted the importance of being forward-looking: principles should not chase the problems of yesterday, but should rather anticipate what technology will be developed tomorrow, how it will be deployed and who will use it.

Collett and Dillon make the point that gender-specific AI guidelines should be research-informed and as specific and relevant as possible to AI systems.60 Guidelines need to be context specific: either about AI in crime and policing, or AI and health, or financial services, etc.

Finally, UNESCO's role in establishing global recommendations regarding the ethics of AI was welcomed by participants of the Dialogue (Axente and Braga). Axente, for example, noted that UNESCO's value added was its global mandate.

DATA

• Increase awareness within the industry and address issues of lack of representativeness and bias within training data. Data that is representative by one standard (e.g. equal numbers of women) may still be biased by another standard (e.g. those equal numbers of women are stereotyped in a negative way).

• Increase collection and use of intersectional data, especially from under-represented groups. For example, DefinedCrowd provides custom-built training data for its clients and

20 ARTIFICIAL INTELLIGENCE AND GENDER EQUALITY

the Algorithmic Justice League, founded by Joy Buolamwini, provides such data sets for research.

• Correcting for data bias: Where possible, commit to using data sets that have been developed with a gender equality lens (despite the potential cost increase), detecting and correcting for data bias, and ensuring that data sets (including training data) represent the populations a machine learning application will affect, including by consulting with domain experts.61 Dillon and Collett call for context-specific and gender-specific guidelines for best practice regarding data.62 Guidelines would cover data collection, data handling and subject-specific trade-offs.

• Address issues of consent and confirmation of ethical use of data, privacy and security for women and girls (addressing parents as well), and understand the specific vulnerabilities which they face (Gardner).

• Data quality control and development of common standards for assessing the adequacy of training data and its potential bias through a multi-stakeholder approach.63 Please also refer to the discussion below on Standards and Policies, and on Technical Solutions and Transparency.

• Address socio-cultural dimensions of data selection, classification, Natural Language Processing (NLP) and labels, among others.

• Support gender equality advocates in their broader work around women in STEM, digital literacy (including content development), and gender equality across all disciplines that inform AI data training.

STANDARDS AND POLICIES

Codes of Conduct: Codes for professional conduct, ethics and establishing responsibility need to be created for AI and software developers, similar to the medical profession, lawyers, journalists or the aviation industry.64

Human Rights: Create standards that address human rights risks in Machine Learning system design.65

Certification: According to Gardner, a certification in trained AI implementation should have these main components:
• Definition of data streams, and techniques to remove bias;
• Compliance with existing equality laws and regulations;
• Ethical use of data, with confirmation agreement if required;
• Confidence level measurement through testing and deployment usage;
• Corrective algorithms to counteract imbalance of predictive capability; and
• Conscience algorithms to counteract heteronomy (actions that are influenced by a force outside the individual), to flag abuse, and to remove overstimulation within the AI (through adversarial behaviours).

As an example of certification, Gardner points to the IEEE standard P7003 on Algorithmic Bias Considerations, which is currently being developed.

Policies: In his contribution to UNESCO's Dialogue, McCawley from IBM put the onus on the AI industry itself in helping policymakers develop AI policies: 'the industry must work extra hard to make AI accessible and explainable to regulators to help them develop policy' and 'get better at governing AI'. Finally, he emphasized the importance for governments to get it right, saying he feared either too much or too little AI regulation.

Gender Scorecard or Seal: This could be developed by a body with deep expertise on gender equality that addresses a range of criteria leading to a systems-level approach.

INTERNAL MITIGATION AT COMPANY LEVEL

CORPORATE GOVERNANCE

• Create or enhance company governance models and mechanisms for ethical compliance that include reflections on gender equality and the involvement of women and gender advocates.66 Examples include creating an 'Ethical Use Advisory Council' with experts and civil society groups, which integrates gender equality concerns. For example, Microsoft launched an internal high-level group on AI and Ethics in Engineering and Research (AETHER) which examines how its software should, or should not, be used.
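The DATA recommendations note that data which is representative by one standard (e.g. equal counts of women and men) may still be biased by another (e.g. stereotyped labels). That two-level check can be sketched in a few lines of plain Python; the field names and toy records below are invented for illustration and are not taken from any particular toolkit:

```python
from collections import Counter

def representation_and_label_skew(records, group_key="gender", label_key="label"):
    """Report group representation and per-group label distribution."""
    group_counts = Counter(r[group_key] for r in records)
    total = sum(group_counts.values())
    representation = {g: n / total for g, n in group_counts.items()}

    # Equal counts alone are not enough: also check how labels are
    # distributed within each group (e.g. stereotyped annotations).
    label_dist = {}
    for g in group_counts:
        labels = Counter(r[label_key] for r in records if r[group_key] == g)
        n = sum(labels.values())
        label_dist[g] = {lbl: c / n for lbl, c in labels.items()}
    return representation, label_dist

# Toy data set: equal numbers of women and men, but stereotyped labels.
data = (
    [{"gender": "F", "label": "assistant"}] * 40
    + [{"gender": "F", "label": "engineer"}] * 10
    + [{"gender": "M", "label": "assistant"}] * 10
    + [{"gender": "M", "label": "engineer"}] * 40
)
rep, skew = representation_and_label_skew(data)
print(rep)   # representation is balanced (0.5 / 0.5)...
print(skew)  # ...yet labels remain strongly skewed by gender
```

A real audit would, as the report stresses, involve domain experts and multi-stakeholder standards; this sketch only shows why headcount parity is an insufficient test of a training set.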

21 GENDER TRANSFORMATIVE OPERATIONALIZATION OF AI PRINCIPLES

• Create highest-level policies that signal management support for advancing gender equality through corporate products and services.

• Enhance and leverage existing processes and mechanisms to increase attention to gender equality rather than creating additional layers: 'What can be done more readily, make as easy as possible to pick up'.67 On the other hand, harder steps could be taken, as this is necessary for culture change.

• Create incentives for non-biased products, including by linking this to job promotions and opportunities to participate in development teams (IBM does this based on participation in bias training).

Sponsors and Supporters: Identify sponsors and supporters throughout the organization and across teams.68 This is not the same as Affinity Groups (which typically focus on workforce development – an important factor in gender transformative AI, but just one dimension of what needs to be done) but rather is about getting support in the area of gender responsive products and services at all different stages of the AI life cycle.

EDUCATE, PROMOTE, CONTEXTUALIZE

Create education and training opportunities for the workforce to understand the issues at play. This should be contextualized to the work being done by the company so it resonates more deeply. 'Give broad issues but then contextualize for your own work. Demonstrate how principles and policies apply to their role, work, product'.69

Axente suggests that an effective argument is that reducing bias reduces risk (which companies do not like) and can lead to important reputational gains and therefore increased competitiveness.

In the UNESCO Dialogue, Smith argued that the education of technical teams is required in order for them to design and implement AI systems that ensure gender equality. Braga also said that the technology sector needs a lot more education. According to her, 'people who are building these AI systems are trained engineers that were never made aware of [questions surrounding bias and gender equality] during their trainings and careers. Managers come from different fields.' She says: 'If you are an expert you don't go through an ethical training. If you're a manager, you did an MBA. Maybe you've heard about bias but don't know what to do about it.'

Axente also pointed out the need for 'a continuous educational process explaining why [gender equality in AI] is important'. She says: 'We need to raise awareness, make sure they understand the value of fairness [...].' In their field of AI, pymetrics is also of the opinion that it is 'crucial to educate the entire ecosystem around Human Resource technology on the importance of auditing talent selection for fairness.' Chow also pointed out that those writing AI principles within companies need to be adequately trained. She mentioned that at times, those in charge of drafting AI principles within a company do not fully understand how AI technology is developed and deployed. When this is the case, those drafting AI principles may not recognize the different types of AI technology that exist and could be deployed, and may therefore narrowly focus their principles on one specific application of AI rather than on AI as a whole.

Mike McCawley explained that IBM makes time and creates incentives for staff to pursue training: each staff member needs to set aside a minimum of 30 hours per year for training. Accumulating training accomplishments is rewarded by giving staff opportunities to participate in new projects, and for promotions. According to McCawley, training is important for:
• Staff within the technology industry, to build and design better AI;
• AI customers, to understand how good the AI that they are using is;
• Government policymakers, to get better at governing AI and developing policy; and
• The public at large, to understand what AI is, how it works, and how it impacts their lives.

Educating and raising awareness could be done through:
• Technology companies providing ethics training for their staff;
• Affinity groups such as Women in Machine Learning and the Black in AI initiative that organize awareness-raising events and push towards adequate inclusion of all women within the AI research ecosystem (Abebe); and
• UNESCO, which reaches the public at large through conferences, reports, videos and could
possibly develop curricula for schools and journalists about AI and about AI's impacts on gender equality.

DIVERSE DEVELOPMENT TEAMS AND DESIGN PROCESS

'Privileged Ignorance: The vast majority of researchers, practitioners, and educators in the field are shielded or removed from the harm that can result from the use of AI systems, leading to undervaluation, deprioritization, and ignorance of problems along with decontextualized solutions. The communities most likely to be harmed by AI systems are least likely to be involved in the teaching, design, development, deployment, and governance of AI; even when underrepresented individuals enter previously inaccessible spaces, we face existing practices, norms, and standards that require system-wide not just individual change.' – Joy Buolamwini70

• Substantially increase and bring to positions of parity women coders, developers and decision-makers, with intersectionality in mind. This is not just a matter of numbers, but also a matter of culture and power, with women actually having the ability to exert influence. See more in the discussion below on Women in Tech;

• Build multi-disciplinary teams, including gender experts, representatives of the social sciences and other domain experts;

• Institute residency programmes (engineers to visit non-profit organizations; end users, domain experts and non-profits to visit companies).

The multidisciplinary nature of AI was discussed by many Dialogue participants (Aseffa, Braga, Axente, Smith). Considering the many different fields of AI, such as dialogue systems, image processing, social media filtration, big data analysis and perception synthesizers, hiring multidisciplinary teams seems to be a prerequisite. Braga illustrates this point. She came from the fields of linguistics and literature, and then shifted to engineering and voice design. She wrote her PhD about the need for multidisciplinarity in AI and then founded her company DefinedCrowd, which is in her words 'all about multidisciplinarity'.

According to Burnett, for Explainable AI to work in a truly effective manner, there is a need for 'integrating contributions from AI foundations with contributions from foundational human sciences'. In relation to gender equality, Smith argued that hiring diverse and multidisciplinary teams would help ensure that technology companies design and implement AI systems that integrate gender equality considerations. She explicitly referred to the 'need to integrate the knowledge of social scientists and gender experts into the teams developing, managing and using AI systems'. By mentioning 'gender experts', Smith makes the implicit case that hiring more women in technology is not sufficient, because women are not necessarily gender experts.

Keeping a human in the loop is also often recommended, in order to ensure there is human oversight.71 But it is not just about keeping 'a' human in the loop; it is about keeping the right human in the loop: one with empathy,72 and one with an understanding of the need to account for gender equality. As noted by UNESCO, however, 'even having a human "in the loop" to moderate a machine decision may not be sufficient to produce a "good" decision'.73

Auditing the design process and reflecting good practices in 'gendered innovations' is also essential. This refers to employing methods of sex and gender analysis as a resource to create new knowledge and stimulate novel design. Within technology, this applies to hardware and software components and is a lens through which to undertake all work. Saralyn Mark, Founder and President of iGiant, a non-profit accelerator for gender-equal technologies, provides a seal of approval for companies that use a gender lens when designing technology. iGiant assesses the design process, rather than the resulting design itself. Companies can find commercial value in this, and it may encourage them to become more committed to gender equality. Smith was also in favour of using auditing tools right from the beginning of the design process. She said: 'There are also some great qualitative tools that could be adapted to be gender specific – such as fairness checklists that should be completed before an AI system is developed and at certain points in the development process. These are currently quite broad, and could be adapted towards a justice and equality perspective for gender.'

While AI systems are composed of algorithms and training data contained in software, they can
also be integrated into hardware such as robots or smart speakers, like Amazon's Alexa. When AI hardware exists, it is important to closely examine its design, whether the user experience of people varies according to their sex or gender, and whether the design of the AI hardware participates in spreading harmful stereotypes about a specific gender. Tannenbaum et al. point out, for example, that robots can be gendered with their names, colour, appearance, voice, personality, and whether the robot is programmed to complete gender-stereotypical tasks (for example, a female-looking/sounding robot designed to clean).74 In line with the analysis provided in I'd Blush if I Could, Gardner pointed out that: 'Robots are generally created to appear human, and pervasively in female form; this must be removed: robots should not have a gender or, failing that, not have a gender-specific role – for example, robots as care assistants reinforce that women should only have subservient roles.' In their guidelines for human-AI interaction design, Microsoft also tackles the issue of gender biases.75

ETHICS PANELS AND IMPACT MAPPING

Companies engineering and/or implementing machine learning systems have a responsibility to map human rights risks before and during the life cycle of the product – from development to deployment and use.76 High-risk areas should be subject to review.77 The responsible technology think tank Doteveryone calls for Consequence Scanning Workshops during the feature planning stage to think about unintended consequences of features and plan mitigation strategies. UNESCO suggests the use of ethical impact assessments.78 Similar calls have been made for holding ethics panels that review and assess work. Others suggest creating checklists (though this can risk a superficial approach). The AI Now Institute, an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence, calls for rigorous testing in sensitive domains.79

In her contribution to UNESCO's Dialogue, Gardner states that she wishes to see the development of Algorithmic Impact Assessments (AIAs) as a more holistic way of regulating and auditing AI. In her view: 'A component of these AIAs would be a requirement for publishing the gender balance of the project team (and roles), whether data has been tested for gender bias and what methods were used to address this, [or] fairness tests of algorithmic outcomes using available tools such as [IBM's AI Fairness 360 toolkit] for example. Likewise, evaluating outcomes and impact on stakeholders [that are] subject to the outcome, plus mitigations. Impact on the workforce and its skills, and appropriate mitigations, from a gender impact viewpoint, should also be listed. Governance is key, so if the AIA notes a significant area of risk then approval must be sought from the regulator(s) before deployment.'

As noted above, a level of ignorance and a failure to check assumptions on gender can lead to incomplete risk assessments. A process for breaking down assumptions is therefore also essential. This requires the participation of gender equality and women's rights experts in panels and exercises (going beyond gender balance in teams).

CONSULTATION WITH AFFECTED END USERS

Affected users should be involved to mitigate the potential for unanticipated or unintended harms, through community review boards.80 Additionally, Gardner notes that the role and authority of an ethics panel and/or a citizens' panel should be required and publicized. These panels should have meaningful authority, including the power to say 'No Go'.

TECHNICAL APPROACHES

In her contribution to UNESCO's Dialogue, Gardner emphasized that rigorous evaluations of AI were needed at every step of the way: in design, training, testing, implementation/deployment and maintenance (i.e. post-implementation).

Participants in the UNESCO Dialogue mentioned a variety of existing and easily accessible tools, as well as tools currently being developed, to remove bias from AI datasets, algorithms, designs and design processes. Some also emphasized the importance of making these tools open source through platforms like GitHub, so that as many people as possible can access and use them (although GitHub has its own history of gender bias81). For example, IBM's AI Fairness 360 toolkit is an open-sourced structured set of tools and algorithms designed for the technology community. The toolkit provides prescriptions (specific dance steps) to follow that will reveal how fair or robust an AI system is. It enables people to find out whether their AI has sufficient
coverage or hidden biases, and thus seeks to increase AI fairness.

The following are existing or suggested tools to remove bias in AI systems:

• Data checks: Various standard methods are available to check the quality of AI data and training data.82 Getnet Aseffa mentioned the need to use 'gender detection software' in order to check whether data is balanced in terms of gender. He also said that the people cleaning and organizing data have to make sure that gender equality is taken into account.

• Fairness audits may compare an AI's results to vetted standards found in the real world. For example, in her contribution to UNESCO's Dialogue, Sara Kassir explained how pymetrics conducts fairness audits of their AI-powered hiring tool (open-sourced on GitHub): 'When a hiring tool does create systematic disadvantage (i.e. favoring men over women), the consequence is known as "adverse impact". Pymetrics audits our models for evidence of adverse impact as a means of ensuring that we are sufficiently promoting gender equality in hiring. Our audits are conducted using a specific standard found in U.S. employment regulations called "the 4/5ths rule". For example, assume a hiring assessment is used to evaluate 200 people, 100 men and 100 women. If the assessment "passes" 50 men, it must pass no less than 40 women, indicating an adverse impact ratio of no lower than 0.8 or 4/5ths.'

• Counterfactual analyses examine the extent to which variables such as 'gender' impact the decision made by an AI. Tannenbaum et al. explain this in the following way: 'Consider Google Search, in which men are five times more likely than women to be offered ads for high-paying executive jobs. The algorithm that decides which ad to show inputs features about the individual making the query and outputs a set of ads predicted to be relevant. The counterfactual would test the algorithm in silico83 by changing the gender of each individual in the data and then studying how predictions change. If simply changing an individual from "woman" to "man" systematically leads to higher-paying job ads, then the predictor is—indeed—biased.'84

• Multi-accuracy auditing improves the transparency of AI systems by quantifying how their performance varies across race, age, sex and intersections of these attributes. Tannenbaum et al. describe this auditing technique as follows: 'In multi-accuracy, the goal is to ensure that the algorithm achieves good performance not only in the aggregate but also for specific subpopulations—for example, "elderly Asian" or "Native American" subpopulations. The multi-accuracy auditor takes a complex machine-learning algorithm and systematically identifies whether the current algorithm makes more mistakes for any subpopulation. In a recent paper, the neural network used for facial recognition was audited and the specific combinations of artificial neurons that responded to the images of darker-skinned women were identified that are responsible for the misclassifications.'85

• Auditing user interfaces for gender biases: Burnett's research reveals that there are gender differences in how people interact with, use and learn from technology. According to her findings, men and women differ in their motivations (why they use technology), their information-processing style, their self-confidence in their digital skills, their aversion to risk, and their learning style (whether they will enjoy tinkering to find a solution or whether they just want to complete a task quickly). Her research also reveals that if you make technology easier to use for people at the margins, you end up improving the user experience of everyone. She therefore developed the Gender Inclusiveness Magnifier, called GenderMag or GMAG, which is a method for identifying gender biases in user interfaces. It works for apps, websites and all kinds of software but can also be applied to AI software. GenderMag enables people to solve 'gender inclusiveness bugs' in their software or in the tools designed to help people create software.86

• Other tools exist to test the fairness of AI systems, such as Word Embedding Association Tests, which measure whether removing part of the training data affects the biases of a word-embedding software.87 Another way to increase the transparency and accountability of an AI system is to use White-box Automatic Machine Learning models, which are becoming
increasingly possible and available. White-box models are interpretable models: 'one can explain how they behave, how they produce predictions and what the influencing variables are'.88 Finally, companies should carry out rigorous pre-release trials of their AI systems to check for biases.89 Google, for example, uses 'adversarial testing' in an attempt to remove bias from its products before their launch.90 In January 2020, Margi Murphy reported in The Telegraph that in order to prepare for the launch of its smart speaker and smartphone, Google asked its workers to stress-test the devices by repeatedly insulting them; racism and sexism were actively encouraged. Annie Jean-Baptiste, Head of Product Inclusion at Google, explains that with this adversarial testing technique, they are 'trying to break the product before it launches'.91 While such pre-release trials can reveal the more obvious cases of bias, they may not be able to expose implicit and unconscious biases in a systematic way. It is therefore important to carry them out concurrently with other auditing and testing techniques at all stages of the AI life cycle.

While a number of tools already exist to test the fairness of AI systems, more research is needed. In her Dialogue contribution, Christine Chow stressed that evidence-based research about how to actually remove gender biases in AI is still missing. While she acknowledged that companies like Google and IBM had developed tools to help raise the issue, she argued that there was still a gap between our understanding of the issue and the availability of the tools to fix it.

TRANSPARENCY

• Data/Fact Sheets: In order to increase the reliability and fairness of AI systems, scholars point to the necessity of having a standard document that defines 'what [data] needs to be collected at a minimum so stakeholders can make informed decisions'.92 For example, Tannenbaum et al. note that 'researchers have designed "nutrition labels" to capture metadata about how the dataset was collected and annotated. Useful metadata should summarize statistics on, for example, the sex, gender, ethnicity and geographical location of the participants in the dataset. In many machine-learning studies, the training labels are collected through crowdsourcing, and it is also useful to provide metadata about the demographics of crowd labellers.'93 This echoes the paper 'Datasheets for Datasets' by Gebru et al., in which they describe in detail what information should be contained in such 'datasheets'.94

• Creating Model Cards can also be done for model performance. These should reflect the gender representativeness of data and any potential bias risks in models. IBM, for instance, envisions such documents to contain the purpose, performance, safety, security and provenance of information, and to be completed by AI service providers for examination by consumers. Salesforce also creates model cards for customers and end users.

TRACKING

• Tracking where AI systems are used and for what purposes.95

• Auditing the auditing tools: While some auditing tools exist, there are still no standardized processes or rules regarding AI auditing. Some questions require standardized answers. For example, who decides what needs to be audited? How should it be audited? By whom? And do we need to audit the auditing tools and processes themselves? If so, how? And who should do this?

• Maintenance auditing: In her UNESCO Dialogue contribution, Gardner expressed the need to audit AI systems post-implementation through regular maintenance checks. She suggests: 'it should be built into the maintenance schedule to periodically carry out regression testing using the test data (not training data) to ensure that algorithm weights are still providing the expected outcomes and no algorithm creep or distortion has occurred.'

EXTERNAL MITIGATION

Community Advisory Boards / Panels: While these should be involved proactively (as noted above), they should also be involved in monitoring and accountability mechanisms. These may be independent of a particular company.

Audits and Algorithmic Impact Assessments: While these are undertaken by companies, they should also (preferably) be undertaken by neutral third parties and in real time. Results of audits should be made available to the public
together with responses from the company.96 The Algorithmic Justice League is one organization that will undertake such algorithmic audits.

DIVERSITY IN AI INDUSTRY – WOMEN IN TECH

When asked about gender equality in AI, most UNESCO Dialogue participants started by discussing gender parity in the AI workforce and equal opportunities for men and women. All participants seem to agree that hiring more women in AI is a necessary – albeit not sufficient – step towards greater gender equality in this field. On an individual level, Daniela Braga, the Founder and CEO of DefinedCrowd, is probably the one having achieved most success on this front, with women making up 42% of her company's employees.

Some, like Gardner, argue that hiring more women is not enough. The real objective is to make sure that women are hired in core roles such as development and coding: 'One thing that is missing is a requirement for diverse development teams (that are not just ethics-washing with women in peripheral roles but actually part of the development and coding).'

When discussing the underrepresentation of women in the field of AI, some pointed to the importance of going beyond the gender binary, in order to include those who identify as genderqueer or gender non-conforming, as well as those with multiple intersecting identities based on their race, geography, ability, socio-economic status, etc. For example, Abebe stated: 'We need adequate representation of women and genderqueer and non-binary individuals in AI positions where they can inform and guide the end-to-end cycle of AI.'

In their analysis of gender equality in AI, participants also went further than just looking at hiring practices in the field. For example, Braga, Smith and Axente pointed to how the culture of an organization can either empower or disenfranchise women. To describe work environments in the technology industry that are toxic for women, Smith used the word 'brotopia' – a term coined by Emily Chang in her 2018 book Brotopia: Breaking Up the Boys' Club of Silicon Valley. In a similar vein, the term 'brogrammer culture' is also used to describe a male-dominated and macho work environment that pushes women away from the technology industry.97 Axente noted that solving this issue requires that we go beyond just bringing more women into the industry. In her opinion, technology companies should provide mentoring and coaching opportunities to make the culture more female-friendly and create space for women's contributions. Ultimately, she argues that achieving gender equality is linked to changing power structures within organizations.

In order to rebalance unequal power structures and promote gender equality in the technology sector, Gardner points to positive action as a potential solution. Positive action refers to measures that can be lawfully taken to enable or encourage members of underrepresented groups to overcome their disadvantage, meet their needs or participate in an activity. Speaking on behalf of the Women Leading in AI network – a think tank that addresses gender biases in AI – Gardner declares: 'We want to emphasize that there is a difference between fairness and equality. We view equality not as equal treatment in the spirit of fairness but in battling the inequality and lack of representation [women] currently face within the field. For a future where there is true equality we must address the issue of the obstacles women face in entering and continuing within the field. In order to overcome [this], it may be that we need to inject positive action to redress the problems.'

Sara Kassir, on behalf of pymetrics, also made the point that de-biasing hiring practices is only part of the solution: 'Organizations that are committed to promoting gender equality need to recognize the importance of multi-pronged solutions. Fair evaluations are not enough; disparities in education, training, retention, and promotion need to be addressed, and inclusive cultures need to be fostered.'

There is a long and evolving body of work and lessons on increasing diversity in the workplace. This includes tools like the Women's Empowerment Principles, which were established by the UN Global Compact and UN Women and offer guidance to businesses on how to promote gender equality in the workplace.98 The National Center for Women & Information Technology also offers a plethora of research, tools and initiatives on this topic. These resources respond to a number of gaps identified by Dialogue participants and could be tailored to the unique needs of the AI industry.
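The de-biased hiring audits that Kassir describes rest on the 4/5ths adverse-impact standard quoted earlier. The arithmetic of that rule can be sketched in a few lines; this is a toy illustration using the report's own example figures, not pymetrics' actual audit code:

```python
def adverse_impact_ratio(pass_counts, total_counts, reference_group):
    """Selection rate of each group divided by the reference group's rate."""
    rates = {g: pass_counts[g] / total_counts[g] for g in total_counts}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# The report's example: 200 candidates, 100 men and 100 women.
# If 50 men pass, at least 40 women must pass to stay at or above 4/5ths.
ratios = adverse_impact_ratio(
    pass_counts={"men": 50, "women": 40},
    total_counts={"men": 100, "women": 100},
    reference_group="men",
)
print(ratios["women"])  # 0.8 — exactly the 4/5ths threshold
assert all(r >= 0.8 for r in ratios.values()), "adverse impact detected"
```

Had only 30 women passed, the ratio would fall to 0.6 and the assertion would fail, flagging the assessment for review, which is the substance of the 'adverse impact' audit described above.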


Generally, the literature and practices on promoting women in business and inclusion in the technology industry emphasize the following (illustrative):

INCREASE FUNDING FOR:
• AI startups founded by women;
• Scholarships for women studying in the field of AI;
• Women AI researchers;
• Participation of women’s groups and gender equality experts as affected AI users and stakeholders;
• Corporate Social Responsibility gender equality initiatives.

HIGHEST LEVEL SUPPORT, GOALS AND POLICIES:
• Establish organizational strategies, policies and budgets that are gender transformative;
• Put clear management commitment, messaging and accountability structures in place;
• Address and correct for power asymmetries;99
• Eliminate unfair policies, creating gender equality related policies (e.g. equal pay, harassment, leave/needs);
• Put in place transparent, independent and clear processes for remedial action.

HIRING, RETENTION, PROMOTION TO DECISION-MAKING:
• Review hiring and promotion practices for bias;
• Address intersectionality;
• Review task assignments (who is given visibility and stretch goals?);
• Create mentoring and sponsorship programmes.

SUPPLY CHAIN:
• Support women-owned businesses and gender responsive companies through supply chain management.

CULTURAL CHANGE:
• Provide training on bias;
• Recognize that training on bias alone will not create transformative change. Deeply seated norms and unconscious bias cannot be undone with training;
• Provide support for ally and affinity groups, but without making the burden of change fall on them or individual women;
• Refuse to participate in, or hold, ‘manels’ and ‘manferences’ – male-dominated panels and conferences – and advance women as organizational speakers and representatives;100
• Provide childcare at events.

MEASURING AND REPORTING:
• Publish data on workforce positions, compensation and rates of promotions, and actively address identified gaps.

OTHER STAKEHOLDERS

Although the UNESCO Dialogue and this report are primarily focused on the private sector, achieving ethical and gender transformative AI unequivocally depends on the deep leadership and engagement of multiple stakeholder groups. A proper treatment of the roles and responsibilities of each stakeholder group and recommendations on actions is warranted, and some of the literature has started to define these aspects (e.g. Buolamwini’s 2019 testimony to the US Congress on the role of the government vis-à-vis AI, and the World Wide Web Foundation’s 2018 recommendations to the G20 on gender and AI). The following merely highlights a few entry points.

GENDER EQUALITY ADVOCATES / CIVIL SOCIETY

Learn: Develop opportunities for learning about AI and its implications for gender equality experts, practitioners and those working on girls’ and women’s empowerment.

Advocate: Get involved in advocacy efforts around AI and ethics, including at the global and national level. Similarly, more actively reflect AI implications within the deliberations and work of gender equality bodies – e.g. the UN Commission on the Status of Women, the Beijing Platform for Action – and initiatives. Demand AI that is accountable to women – what it should look like must come from the gender equality community.

Apply/Incorporate: Support the ability of gender equality advocates, women’s groups, and women themselves to apply and incorporate good AI technologies to support girls’ and women’s empowerment.

Contribute and Monitor: Make available gender equality experts, practitioners, and representatives from affected groups to participate in AI development and monitoring.
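The ‘measuring and reporting’ practice recommended above can be made concrete. The sketch below (hypothetical records and field names, not drawn from the report) shows how an organization might compute promotion rates disaggregated by gender before publishing them and addressing any gap:

```python
from collections import Counter

# Hypothetical workforce records: (gender, promoted_this_cycle)
records = [
    ("woman", True), ("woman", False), ("woman", False), ("woman", False),
    ("man", True), ("man", True), ("man", False), ("man", False),
]

def promotion_rates(records):
    """Promotion rate per gender group: promotions / headcount."""
    headcount, promotions = Counter(), Counter()
    for gender, promoted in records:
        headcount[gender] += 1
        if promoted:
            promotions[gender] += 1
    return {g: promotions[g] / headcount[g] for g in headcount}

rates = promotion_rates(records)
print(rates)                                    # {'woman': 0.25, 'man': 0.5}
print(f"gap: {rates['man'] - rates['woman']}")  # gap: 0.25
```

Publishing such disaggregated figures – broken down further by seniority, compensation band and intersecting characteristics – is what turns a transparency commitment into identified gaps that can actually be addressed.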


GOVERNMENT

Priorities and Practices: Set priorities for AI development and application that respond to the needs of girls and women, and that contribute to social good more broadly. Ensure that government use of AI is not driving gender inequality and the perpetuation/exacerbation of bias, and explore affirmative actions to counter bias.101

Policy, Legislation and Regulation: Develop appropriate policies, legislation and regulation that respond to gender equality concerns (e.g. require disclosure). According to Tannenbaum et al., to ensure that AI does not cause harm we need ‘stringent review processes’: ‘Since the Second World War, medical research has been submitted to stringent review processes aimed at protecting participants from harm. AI, which has the potential to influence human life at scale, has yet to be so carefully examined.’102 When explaining what legislative or supervisory bodies need to be put in place to regulate AI, UNESCO Dialogue participants drew parallels with the review processes of the pharmaceutical industry, the Food and Drugs Administration, and the role of the World Health Organization. Braga, for example, said there is a need for an ISO certification for AI, stating that companies that want to be trustworthy will be incentivized to obtain the certification. Any certification should explicitly address bias and harm around gender.

Funding and Investment: Devote public funding to support AI for women and women in AI, including through investment in research and development, scholarships and training, and allocating budgets within agencies/ministries (gender-responsive budgeting). Create an Accountability Fund to support critical research, provide redress for harm (e.g. through a government tax on AI firms) and explore the impacts of AI and ML on women.103 Support investment in open and public models.

Procurement, Hiring and Appointments, and Requirements for Publicly Traded Companies: Ensure that women are represented, including in decision-making positions, in government and on any of its advisory bodies and citizen panels. Review hiring, retaining, and promotion practices and other gender related policies. Utilize the power of the purse and government procurement to secure gender equality commitments and practices from business through Request for Proposal (RFP) processes and requirements for publicly traded companies. For example, Smith suggested that governments make it mandatory for companies that respond to their AI-related RFPs to have gender-specific AI regulation in place. In her words: ‘Since governments are big users of AI tech for public purposes (e.g. education, health, etc.) it would be great to have mandates for completion of [gender-specific qualitative] tools in RFP proposals from companies.’

Infrastructure and Education: Build the digital, data (e.g. collection of sex- and gender-disaggregated data) and educational infrastructure that supports girls’ and women’s access to, use of, benefits from, and contributions towards, the digital society and AI.

EDUCATION SYSTEM / ACADEMIA

Cross-pollination of Ethics and Technology: Review pedagogy and educational curricula for opportunities to better integrate AI, ethics, social sciences and gender equality studies.

Research: ‘Promote deeper collaborations between AI researchers and organizations that work most closely with communities that are most harmed by algorithmic inequality. Fund university/community partnerships both to study AI harm on marginalized groups, and also to do participatory design of AI that is rooted in the needs of marginalized communities.’104

Data Training Sets: Develop gender responsive data training sets (e.g. Cornell University).

Include AI-relevant programmes into public interest technology clinics at degree-granting institutions, similar to how law schools provide students with hands-on experience via public interest law clinics.105 Such clinics would help produce graduates who will engage in public interest activities as AI professionals.

Incorporate AI and ethics into STEM education and computer science classes at the secondary school level and within digital literacy initiatives in formal and informal educational settings.

Be forward looking and incorporate future scenario practices and thinking.
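As one possible entry point for the ‘gender responsive data training sets’ recommendation above, here is a minimal sketch (hypothetical labels and an arbitrary threshold, not a method prescribed by the report) of auditing the gender balance of a labelled dataset before training:

```python
def representation_report(samples, key="gender", min_share=0.4):
    """Share of each group in a dataset, plus the groups falling
    below an illustrative minimum share (min_share is arbitrary)."""
    counts = {}
    for sample in samples:
        group = sample.get(key, "unknown")
        counts[group] = counts.get(group, 0) + 1
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    under = [group for group, share in shares.items() if share < min_share]
    return shares, under

# Hypothetical training samples carrying a gender annotation
data = [{"gender": "woman"}] * 3 + [{"gender": "man"}] * 7
shares, under = representation_report(data)
print(shares)  # {'woman': 0.3, 'man': 0.7}
print(under)   # ['woman'] -> group to rebalance before training
```

A real audit would go further – intersectional breakdowns, label quality, and downstream error rates by group – but even this simple disaggregation operationalizes the sex- and gender-disaggregated data collection the report calls for.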

ACTION PLAN AND NEXT STEPS

In the UNESCO Dialogue, Getnet Aseffa reminded us that once AI has boomed, it will be too late to address gender equality issues. The window of opportunity is now. It is therefore recommended that a dedicated, cross-disciplinary initiative be swiftly created to catalyze deep thinking and action on gender equality and AI.

AWARENESS

It is crucial to generate awareness regarding the importance and urgency of these issues, what gender responsive AI would look like, and what is required to develop and deploy such AI systems.

• Development of primers for different audiences in accessible language with real world examples, and create briefs for the gender equality community on the implications of AI and how it impacts their work. Create briefs also for the AI industry on the dimensions of gender equality and how gender biases can potentially manifest themselves through AI. Eventually, briefs should be created at a more micro level, e.g. on access to justice, education, employment, etc.

• Create Outreach Materials and Campaigns to develop a shared sense of understanding and collective response, and create core reinforcing messages and other outreach materials. Launching creative campaigns to draw attention to these issues, including for the public, should be considered.

• Visualization – develop a map of the AI ecosystem and life cycle from a gender equality perspective, which will show ‘at a glance’ the different levels of challenges and opportunities, the actors, and the various levers of influence and action.

• Engage influencers – identify and recruit trusted influencers in key communities to broadcast messages and encourage engagement.

FRAMEWORK

As AI ethics principles and frameworks are developed and implemented, it is critical that women and gender equality issues are represented.

• Contribution to Broader AI and Ethics Dialogue: Ensure that gender equality issues are continually fed into the process of developing AI ethics principles and frameworks, bringing in a more impactful approach than has been taken to date. This could be done by providing substantive recommendations and methodologies for adapting or contextualizing recommendations, and advocating for establishing a gender responsive process for their implementation.

• Develop an Action Plan for Gender Equality and AI: At the next level of detail, create a detailed Gender Equality and AI Action Plan that can inform a broader initiative and help guide different stakeholders. This could outline key priorities, timelines, stakeholder roles, existing and needed organizational expertise, and identify and secure funding.

COALITION BUILDING

‘The complexity of the ethical issues surrounding AI requires equally complex responses that necessitate the cooperation of multiple stakeholders across the various levels and sectors of the international, regional and national communities. Global cooperation on the ethics of AI and global inter-cultural dialogue are therefore indispensable for arriving at complex solutions.’ UNESCO106


This statement holds equally true for multi-stakeholder cooperation on gender equality and AI. Many experts have called for the development of a broad coalition to address the layers and intricacies of these issues. Its features and functions could include:

• Multi-stakeholder Participation: Include stakeholders from different sectors (academia, government, civil society, private sector) and disciplines (technology, data, gender equality, ethics, human rights), and engage the full spectrum of those involved in gender equality – from the women in the technology workforce, to the girls and women in STEM movements, to gender equality and women’s empowerment experts and feminists, to those working on gender equality and women’s and girls’ empowerment in specific contexts (access to justice, economic empowerment, ending violence against women, education, health, etc.).

In the UNESCO Dialogue, Chow was of the opinion that the AI industry needs stronger ties with academia. She said that ‘academic collaborations’ and ‘more authentic and fundamental research’ would enable the AI industry to address the root causes of gender biases in AI rather than just fixing symptoms. Similarly, Axente talked about the need to ‘connect the dots’ between the gender experts and the technologists. She also said that the AI industry is in need of both women data scientists and ethicists. She suggested UNESCO could have a crucial role to play in this respect thanks to its unique convening power.

The need to ‘connect the dots’ echoes the findings of Collett and Dillon’s 2019 report, in which the authors’ first recommendation for future research in AI and Gender was to ‘bridge gender theory and AI practice’.107 They recognized the need for a dialogue between gender theorists and technologists, using a quote from Leavy: gender theory and AI practice ‘are speaking completely different languages’.108 The authors also make reference to Gina Neff from the University of Oxford: ‘Gina Neff highlights this problem of the growing distance between those who are designing and deploying these systems, and those who are affected by these systems’.109 In an attempt to bridge this gap, Gina Neff partnered with the Women’s Forum for the Economy and Society to establish a new doctoral research opportunity in Gender and AI at the University of Oxford.110

• Intergenerational Participation: Engage young people, including at the secondary school level, as well as older adults, to create intergenerational approaches, tapping into differing perspectives, thinking, and lived experiences. Young people who will be living in the future being created now should have an important voice around ethics and the purpose of AI;

• Relationship Building and Dialogue: Develop critical relationships and break down silos across groups, and unpack and generate a baseline understanding of AI and gender equality. Creating a baseline understanding also means breaking down one’s assumptions, acknowledging what you do not know, and identifying where collaboration with discipline experts should be ongoing and built into approaches. This is particularly necessary in order to embrace a global approach that includes the whole of society. While women in the technology workforce – and their allies – are critical protagonists and part of the solution, they should not be treated as proxies for gender equality experts, particularly when it comes to understanding structural issues and risks;111

• Networking: Create ongoing opportunities (online and through events/conferences) for conversation, interaction, collaboration, and information sharing;

• Collective Impact and Action Oriented: While developing an action plan and its implementation framework, seek ways to build collective action, to fill gaps in action, to connect to and leverage the work of others, and to join forces for greater impact. Fundamentally, a coalition should be action oriented – a ‘think’ and ‘do’ shop.

CAPACITY BUILDING, TECHNICAL ASSISTANCE AND FUNDING

Practical mechanisms for translating principles into reality, to enable cooperation, and to drive concrete action are of highest priority. They should together form a package that enables stakeholders to make transformative rather than marginal or superficial change:112


• Translators and Experts: Explore the possibility of creating gender equality and AI ‘translators’ who are able to navigate both worlds and find common language between gender equality experts, ethicists and technologists. Explore the possibility of providing training and/or of creating a roster of technical advisors available to stakeholders on critical issues, policy and AI applications;

• Resource Base and Tools: Collect and further build a resource base on gender equality and AI. Where they do not exist, develop a suite of tools and guidance for the AI industry, as well as for civil society and gender equality advocates. Also, participate in external tool development, bringing in a gender equality component;

• Policy Dialogues: Create or participate in policy dialogues on AI (e.g. through the work of the UN, the OECD or the EU) and on Gender Equality (e.g. through the UN Commission on the Status of Women);

• Capacity Building: Develop capacity building opportunities on gender and AI for different stakeholder groups, e.g. workshops or other projects that provide rich learning opportunities;

• Seal: Explore the creation of a ‘Gender Equality AI Seal’ that adopts a holistic system/society approach and considers process, content, expertise, funding, etc. Critically, this would provide monitoring and accountability mechanisms and be awarded based on effective action, not merely a pledge;

• Fund: Participate in the establishment and implementation of a fund that would support concrete and sustained action on gender equality and AI. Adequate resources are necessary for achieving these goals and should be budgeted at institutional/organizational levels, as well as for collaborative initiatives, and to support the participation of gender equality practitioners who are under-resourced.

RESEARCH, MONITORING AND LEARNING

As society moves rapidly into new horizons with greater complexity and many unknowns, research, monitoring, learning and evaluation become ever more critical:

• Learning from Recent Efforts: What can we learn and apply from past and current gender and technology efforts? What has failed (over-emphasis on the ‘pipeline’,113 piecemeal efforts, ‘corporate …’,114 vague language, siloed efforts, hype and under-estimation of deeply rooted norms and power asymmetries, disinterest in gender on the side of technologists and for technology on the side of gender experts, etc.)? What efforts have led to success, and what is fundamentally different about AI?

• Defining and Undertaking a Research Agenda: Create partnerships to develop and undertake a robust research agenda.115

• Monitoring and Evidence: Conduct quantitative and qualitative analyses regarding the impacts of AI on gender equality, the status of funding and investment, the roles of women in decision-making in AI, and other metrics. Participate in, and hold others accountable for, monitoring, transparency and reporting. Develop bottom-up feedback loops and more real-time monitoring possibilities;

• Visibility: Ensure that the implications of AI for gender equality are presented in real world cases and accessible languages for the public.


SUMMARY OF RECOMMENDATIONS

Adequately Frame the Overarching Landscape of AI and Society and the Imperatives of Gender Equality

Shift the narrative of AI as something ‘external’ or technologically deterministic, to something ‘human’ that is not happening to us but is created, directed, controlled by human beings and reflective of society. Establish a baseline understanding of AI, where and how it should be used for advancing a humanist framework and societal goals, and develop mechanisms for its governance. Similarly, establish a baseline understanding of the imperatives of gender equality and how AI as a socio-technical system can reinforce or challenge inequality.

Significantly Strengthen Gender Equality in AI Principles

• Integrate feminist theory and frameworks rather than merely adding in ‘women’ as a target group. Treat gender equality as a way of thinking, a lens, an ethos and a constant – not a checklist.

• Ensure participation of gender equality advocates and practitioners, affected groups, and organizations working on specific discipline areas in principle development and implementation.

• Make gender equality more explicit and position gender equality principles in a way that provides for greater accountability for their prioritization and realization.

• Take systemic approaches that take into account society as a whole and the entirety of the AI ecosystem, and address structural gender inequality.

• Unpack and substantiate what vague, undefined, or potentially contradictory principles mean in concrete terms and with consideration to intersectionality and compounded discrimination – particularly for principles that are implicit or without obvious gender connections but where there are still differentiated impacts.

• Ensure that gender equality is understood in terms of addressing harm, increasing the visibility of the gendered implications of AI, and encouraging AI’s positive applications and beneficial impacts on women’s empowerment.

Operationalize Gender Equality and AI Principles

• Increase Awareness, Education and Skills in Society
– Awareness and Participation: Increase general awareness within society at large, the gender equality community, and the AI industry, regarding the positive and negative implications of AI for girls, women and gender non-binary people. Greater awareness should enhance public and stakeholder participation in AI governance.
– Contributors/Developers/Leaders: Promote more girls and women in STEM and in AI education, careers and business. This includes creating lifelong learning and skill development opportunities for women in technology and AI, particularly for those displaced by automation.
– Personal Literacy: Expand digital literacy to include AI and ‘algorithmic awareness’, with attention to the specific impacts for girls and women.
– Use and Applications: Advance women’s and girls’ access to and ability to use and apply AI to realize gender equality goals.


• Industry Action
– Advocacy for and commitment to human rights-based and gender responsive ethical frameworks, and support for the creation of industry standards and policies – including in cooperation with governments at the national level – that promote professional codes of conduct, that adhere to certain certifiable components, and that respond to gender equality criteria.
– Commitment to use of gender representative data, mitigating data bias, and advancing data quality and rights.
– Engage with external and independent mitigation efforts such as Community Advisory Boards, External Audits, and Algorithm Impact Assessments that reflect gender equality issues.
– Organizational Level Internal Mitigation, including putting in place contextualized and high-level corporate governance mechanisms, policies and incentives for gender responsive product development; building organizational capacity, creating diverse teams and processes (e.g. participation of affected users and gender experts, transparency cards/sheets) to mitigate bias and amplify benefits; and creating or using technical tools before, during and after product development that check for bias and harm.
– Ramp up gender diversity in the AI industry through good and evolving practices that address corporate policies, hiring, promotion and retention practices, organizational culture and power structures, educational and mentoring programmes, and monitoring and reporting.

• Actions by Other Stakeholders
– Gender Equality Advocates and Civil Society: Increase the capacities and engagement of gender advocates and those working on girls’ and women’s empowerment with AI. Scale up advocacy and policy development efforts (in AI and women’s rights frameworks and bodies) promoting access to and application of AI for the benefit of girls and women, and critically, for monitoring AI’s impacts.
– Government: Commit to policies, regulations, and mechanisms (proactively and through redress) that promote gender equality in and through AI; encourage the development of AI applications that do not perpetuate bias or negatively impact girls and women but that rather respond to their needs and experiences; create funding mechanisms for participatory AI, access to AI and AI education for girls and women; promote diversity and equality through hiring, supply chain and related practices; and contribute to the collection of sex-disaggregated data.
– Academia and the Education System: Develop curricula and pedagogy that better integrate cross-disciplinary social sciences, ethics and technology literacy and learning at the secondary and tertiary educational levels. Employ the research and development and other capacities of universities (e.g. public interest clinics) to address gender bias in data, algorithms and other elements of the AI ecosystem.

Create Gender and AI Initiatives: Action Plan and Next Steps

• Coalition Building: Establish a multi-disciplinary and inter-generational coalition that builds partnerships across sectors and groups for a holistic society/AI ecosystem approach. Create avenues for dialogue and learning by creating a common understanding and language and through collaborations and collective impact models.

• Framework Development: Contribute gender perspectives to the broader development of AI and ethics frameworks and principles and their implementation. Additionally, create a comprehensive Action Plan on Gender Equality and AI that can inform initiatives and guide concrete stakeholder efforts.

• Awareness Raising: Develop materials that provide an overview and targeted analyses (e.g. on impact areas or for different stakeholders), as well as channels for awareness raising, outreach and engagement.

• Capacity Building, Technical Assistance and Funding: Establish mechanisms that enable operationalization of gender and AI principles, including through the development of expertise, resources and tools, capacity building programmes, and a fund that can support programming and participation; and explore the development of a seal that provides criteria for gender equality in and through AI with accountability.

• Research, Monitoring and Learning: Define a research agenda for gender equality and AI; learn from past experiences – successes, gaps and failures – around gender equality and technology advocacy and initiatives; develop robust monitoring and learning mechanisms; and ensure that gender equality and AI work is publicly accessible and visible.116

The following is a ‘food for thought’ map of a more holistic approach to gender equality and AI.

ANNEXES

Annex 1: Explicit references to gender equality in existing AI ethics principles

Below are examples of explicit references to gender in selected texts (emphasis added in italics). f The Union Network International (UNI) Global Union – Principle 3 Make AI Serve People and Planet: – Principle 4 Ensure a Genderless, Unbiased ‘throughout their entire operational process, AI: ‘In the design and maintenance of AI and AI systems [to] remain compatible and artificial systems, it is vital that the system increase the principles of human dignity, is controlled for negative or harmful human- integrity, , privacy and cultural and bias, and that any bias—be it gender, race, gender diversity, as well as ... fundamental sexual orientation or age, etc.—is identified human rights. In addition, AI systems must and is not propagated by the system.’118 protect and even improve our planet’s ecosystems and biodiversity.’117 f The Organisation for Economic Co-operation and Development (OECD) – Inclusive growth, sustainable development advancing inclusion of underrepresented and well-being (Principle 1.1): Stakeholders populations, reducing economic, social, should proactively engage in responsible gender and other inequalities, and protecting stewardship of trustworthy AI in pursuit natural environments, thus invigorating of beneficial outcomes for people and inclusive growth, sustainable development the planet, such as augmenting human and well-being.119 capabilities and enhancing creativity, f The Institute of Electrical and Electronics Engineers (IEEE) – Principle 1 Human Rights: ‘Human benefit is different across social groups, including a crucial goal of Autonomous and intelligent gender, race, ethnicity, and disability status.’121 systems (A/IS), as is respect for human rights – Affective Computing: ‘Intimate systems must set out in works including, but not limited not be designed or deployed in ways that to: (…) the Convention on the Elimination of all contribute to stereotypes, gender or racial 120 forms of Discrimination against Women’ inequality, or the 
exacerbation of human – Implementing Well-Being: ‘“Well-being” misery.’122 will be defined differently by different – A/IS for Sustainable Development: ‘The ethical groups affected by A/IS. The most relevant imperative driving this chapter is that A/ indicators of well-being may vary according IS must be harnessed to benefit humanity, to country, with concerns of wealthy nations promote equality, and realize the world being different than those of low- and community’s vision of a sustainable future middle-income countries. Indicators may and the SDGs: (...) of universal respect for vary based on geographical region or unique human rights and human dignity, the rule of circumstances. The indicators may also be law, justice, equality and nondiscrimination;

36 ARTIFICIAL INTELLIGENCE AND GENDER EQUALITY

of respect for race, ethnicity and cultural assessing the success of evaluating the diversity; and of equal opportunity permitting A/IS performance on those tasks. Such the full realization of human potential and tasks would assess, for example, whether contributing to shared prosperity. A world the A/IS apply norms in discriminatory which invests in its children and in which ways to different races, ethnicities, , every grows up free from violence ages, body shapes, or to people who use and exploitation. A world in which every wheelchairs or prosthetics, and so on.’124 woman and enjoys full gender equality – Law: Illustration on sentencing algorithm and all legal, social and economic barriers and bias case (Loomis) in which the to their empowerment have been removed. defendant claimed that gender (higher risk A just, equitable, tolerant, open and socially as a male) was wrongly considered. ‘The inclusive world in which the needs of the most court reasoned that knowing the inputs 123 vulnerable are met.’ and output of the tool, and having access – Embedding Values into A/IS: ‘unanticipated to validating studies of the tool’s accuracy, or undetected biases should be further were sufficient to prevent infringement of reduced by including members of diverse Loomis’ due process. Regarding the use social groups in both the planning of gender—a protected class in the United and evaluation of A/IS and integrating States—the court said he did not show that community outreach into the evaluation there was a reliance on gender in making process (...). Behavioral scientists and the output or sentencing decision. 
Without members of the target populations will the ability to interrogate the tool and know be particularly valuable when devising how gender is used, the court created a criterion tasks for system evaluation and paradox with its opinion.’125 f Microsoft126 – Fairness: Reduce unfairness rather than Increasing fairness requires everyone to think making it worse or maintaining status quo. about it, not delegate to others.127 This relates to technical components as – Inclusiveness: Be intentionally inclusive well as the societal system in which it is and diverse. Designing for the 3% can at the deployed. Responding to socio-technological same time reach the 97%. Reference to trans challenges and realities require greater women and planning, testing, building with diversity of AI development and deployment. diverse groups. f The (EC) – Introduction: ‘In particular, AI systems can preferences, but also their sexual orientation, help to facilitate the achievement of the UN’s age, gender, (…)’130 Sustainable Development Goals, such as – 1.5 Diversity, Non-discrimination and 128 promoting gender balance.’ Fairness: ‘Particularly in business-to- – (Glossary) Vulnerable Persons and Groups: consumer domains, systems should be user- ‘What constitutes a vulnerable person or centric and designed in a way that allows group is often context-specific. (…) factors all people to use AI products or services, linked to one’s identity (such as gender, regardless of their (…) gender (…).’ 131 (...)) or other factors can play a role. 
The – 2.1 Fundamental rights as a basis for Charter of Fundamental Rights of the EU Trustworthy AI: Equality, non-discrimination encompasses under Article 21 on non- and solidarity – ‘This also requires adequate discrimination the following grounds, which respect for potentially vulnerable persons and can be a reference point amongst others: groups, such as (…) women, (…).’ 132 namely sex (…) and sexual orientation.’129 – 2.2 Non-Technical Methods: Diversity and – 1.3 Privacy and Data Protection: ‘Digital Inclusive Design Teams – ‘Ideally, teams are records of human behaviour may allow not only diverse in terms of gender, culture, AI systems to infer not only individuals’ age, but also in terms of professional backgrounds and skill sets.’133

37 ANNEXES

The United Nations System Chief Executives Board for Coordination (CEB)
– Principle on Inclusiveness: 'People and particularly those farthest behind, including women and girls, should be at the centre of all artificial intelligence-related capacity-building programming and decision-making processes.'134
– 'All artificial intelligence-related capacity-building programming by United Nations entities should be gender transformative. Gender and age transformative approaches need to be embedded in all artificial intelligence-related capacity-building programming and decision-making processes. The particular effects of artificial intelligence on women and girls, and on the increasing digital gender and age divide, should also be taken into account.'135
– In addition to broad principles, the UN CEB focuses on four needed levels of capacity building: infrastructure (specifically mentioning the …); data; and social capabilities (with specific attention to recruiting girls and women in STEM); and policy, law and human rights (with specific reference to human rights and gender equality).136
– In its Road Map for Action, explicit reference to gender comes in a number of recommended commitments and corresponding measures:
– 1.3. 'Develop templates and guidelines for public-private investment agreements that facilitate greater investments in Internet infrastructure, ensuring that the benefits of such investments are shared widely across society, with a particular focus on those groups that are most likely to be left behind, including women and girls, persons with disabilities, migrants and refugees, rural people and indigenous people.'137
– 4.2. 'Promote and support more inclusive multi-stakeholder participation in both United Nations-convened and externally organized platforms and organizations related to artificial intelligence. In this regard, launch initiatives to lower the financial, knowledge, accessibility and social barriers to the effective participation of all stakeholders, with a focus on increasing participation from developing countries, as well as increased participation by women and girls.'138
– 5.1. 'Build a repository of artificial intelligence policy challenges and successes from diverse stakeholders, including the various solutions tried and their impacts, especially those solutions that are focused on the bottom billion and on those at greatest risk of being left behind, including women and girls.'139
– 6. 'Increase United Nations System and Member State capacity, particularly in developing countries, to collect, analyse and share open, interoperable sex-disaggregated data sets, as well as artificial intelligence tools to support both artificial intelligence innovation and the monitoring of the impacts of artificial intelligence.'140
– 9. 'Maintain strong ethical and human rights guardrails, ensuring that artificial intelligence developments do not exceed the capacity to protect society, particularly marginalized, vulnerable and the poorest populations, including women and girls.'141 A corresponding measure, 9.4, calls to 'Develop, building further on the existing efforts, policy and legal toolkits (with input from diverse stakeholders) that aim to ensure that artificial intelligence systems fully respect human and workers' rights, take into consideration local norms and ethics and do not contribute to, replicating or exacerbating biases including on the basis of gender, race, age and nationality, and in areas such as crime prevention.'142

UNESCO and the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST)
– 'Respect for human rights and fundamental freedoms should be considered as one of the foundational values as development, deployment and uptake of AI technologies must occur in accordance with international human rights standards. (…) In order to ensure an inclusive AI, it is crucial that issues such as discrimination and bias, including on the basis of gender, as well as diversity, digital and knowledge divides are addressed. This is why leaving no one behind could be considered as another foundational value throughout the AI system lifecycle. Thus, the development and use of AI systems must be compatible


with maintaining social and cultural diversity, different value systems, take into consideration the specific needs of different age groups, persons with disabilities, women and girls, disadvantaged, marginalized and vulnerable populations and must not restrict the scope of lifestyle choices or personal experiences. This also raises concerns about neglecting local knowledge, cultural pluralism and the need to ensure equality. The economic prosperity created by AI should be distributed broadly and equally, to benefit all of humanity. Particular attention must be paid to the lack of necessary technological infrastructure (…).'143
– 'Gender bias should be avoided in the development of algorithms, in the datasets used for their training, and in their use in decision-making.'144
– Value of Justice (Equality): 'The value of justice is also related to non-discrimination. Roboticists should be sensitised to the reproduction of gender bias and sexual stereotype in robots. The issue of discrimination and stigmatisation through data mining collected by robots is not a trivial issue. Adequate measures need to be taken by States.'145
– In its summary of possible policy actions to guide reflection, UNESCO highlights the following explicit recommendations with respect to gender:
– Educating about cost-benefit and inequalities: 'Working to reduce digital divides, including gender divides, in regard to AI access (…)'146
– Practicing multi-stakeholder governance: 'Ensuring gender equality, linguistic and regional diversity as well as the inclusion of youth and marginalized groups in multi-stakeholder ethical dialogues on AI issues.'147
– Ensuring education: 'Gender equitable AI and AI for gender equity';148 'Increase artificial intelligence-related human capacity by supporting high quality and inclusive education, learning and training policies and programmes as well as reskilling and retraining of workers, including women and girls';149 and 'Strengthen gender diversity in AI research both in academia and the private sector.'150
– Ensuring a Gender Sensitive Approach: 'Adopt sustained, varied and life-wide approaches; Establish incentives, targets and quotas; Embed ICT in formal education; Support engaging experiences; Emphasize meaningful use and tangible benefits; Encourage collaborative and peer learning; Create safe spaces and meet women where they are; Examine exclusionary practices and language; Recruit and train gender-sensitive teachers; Promote role models and mentors; Bring parents on board; Leverage community connections and recruit allies; Support technology autonomy and women's digital rights; Use universal service and access funds; Collect and use data, and set actionable indicators and targets.'151
– Fighting the Digital Divide: 'Work to reduce digital divides, including gender divides, in AI access, and establish mechanisms for continuous monitoring of the differences in access. Ensure that individuals, groups and countries that are least likely to have access to AI are active participants in multi-stakeholder dialogues on the digital divide by emphasizing the importance of gender equality, linguistic and regional diversity as well as the inclusion of youth and marginalized groups.'152


Annex 2: Selected initiatives that address gender equality in AI

The following initiatives seek to address gender equality in AI and the ethics of AI. They are listed in alphabetical order.

Gender and AI

– Catalyst: Catalyst focuses on creating workplaces that work for women and undertakes analysis on gender and AI. https://www.catalyst.org/
– Coding Rights: Coding Rights is an organization bringing an intersectional feminist approach to defend human rights in the development, regulation and use of technologies. The organization acts collectively and in networks, using creativity and hacker knowledge to question the present and reimagine a future based on transfeminist and decolonial values. Their mission is to expose and challenge technologies which reinforce power asymmetries, with focus on gender inequalities and its (…). https://www.codingrights.org/
– Equal AI: Equal AI is an initiative focused on correcting and preventing unconscious bias in the development of artificial intelligence. With leaders across business, technology, and academia, Equal AI is developing guidelines, standards and tools to ensure equal representation in the creation of AI. https://www.equalai.org/
– Feminist AI: Founded in 2016, Feminist.AI is a community AI research and design group focused on critical making as a response to hegemonic AI. Rather than simply criticize the lack of diversity in AI design and development, the group proposes an alternative by co-designing intelligent products, experiences and futures from a feminist post-humanist approach. They do this by using AI Art and Design projects to create AI products, experiences and systems. https://www.feminist.ai/
– Gendered Innovations, Stanford University: The Gendered Innovations project harnesses the creative power of sex and gender analysis for innovation and discovery. Considering gender may add a valuable dimension to research, it may take research in new directions. The peer-reviewed Gendered Innovations project: 1) develops practical methods of sex and gender analysis for scientists and engineers and 2) provides case studies as concrete illustrations of how sex and gender analysis leads to innovation. https://genderedinnovations.stanford.edu/
– The Center for Gender, Equity and Leadership (EGAL) of the Haas School of Business, UC Berkeley: EGAL educates Equity Fluent Leaders in order for them to use their power to address barriers, increase access, and drive change for positive impact. EGAL recently published Mitigating Bias in Artificial Intelligence: an Equity Fluent Leadership Playbook. The Playbook provides a framework on how bias manifests in datasets and algorithms, breaking down concepts to remove the gaps between the technical side of the AI field and the business leaders who manage and govern it. After highlighting why bias in AI is a pervasive and critical problem, including the material impacts for businesses, the Playbook outlines evidence-based strategies ("plays") that business leaders can implement to holistically tackle the issue. https://haas.berkeley.edu/equity/
– The Feminist Internet: The Feminist Internet is a non-profit organization on a mission to make the internet a more equal space for women and other marginalized groups through creative, critical practice. Projects include creating F'xa, a feminist chatbot designed to provide the general public with a playful guide to AI bias. https://feministinternet.com/
– Women in Big Data: The goal of the Women in Big Data forum is to strengthen diversity in the big data field. The aim is to encourage and attract more female talent to the big data and analytics field and help them connect, engage and grow. https://www.womeninbigdata.org/
– Women in Data Science (WiDS): WiDS is an initiative that aims to inspire and educate data scientists worldwide, regardless of gender, and to support women in the field. WiDS organizes conferences, datathons, a podcast series and education outreach. https://www.widsconference.org/


– Women in ML and Data Science: Its mission is to support and promote women and gender minorities who are practicing, studying or interested in the fields of machine learning and data science. It creates opportunities for members to engage in technical and professional conversations in a positive, supportive environment by hosting talks by women and gender minority individuals working in data science or machine learning, as well as hosting technical workshops, networking events and hackathons. The network is inclusive to anyone who supports its cause regardless of (…) or technical background. http://wimlds.org/
– Women Leading in AI: Women Leading in AI is a global 'think tank' for women in AI with the aim to address the bias that can occur within algorithms due to a lack of diversity and inclusivity within the field of Artificial Intelligence. It particularly focuses on the good governance of AI as a means to ensure that the changes in society likely to occur in the 4th industrial revolution will be of benefit to all people and not further embed societal prejudice in our systems. The network also promotes opportunities for women to support other women and men who support women in this field. https://womenleadinginai.org/

AI, Ethics and Justice

– ACM Fairness, Accountability and Transparency Conference (ACM FAccT): ACM FAccT is an interdisciplinary conference organized by the Association for Computing Machinery dedicated to bringing together a diverse community of scholars from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. It particularly seeks to evaluate technical solutions with respect to existing problems, reflecting upon their benefits and risks; to address pivotal questions about economic incentive structures, perverse implications, distribution of power, and redistribution of welfare; and to ground research on fairness, accountability, and transparency in existing legal requirements. https://facctconference.org/
– AI4All: AI4ALL is a US-based nonprofit dedicated to increasing diversity and inclusion in AI education, research, development, and policy. AI4ALL organizes summer outreach programs, particularly for people of colour, young women and low-income high-schoolers, to learn about human-centered AI. https://ai-4-all.org/
– AI, Ethics and Society Conference (AAAI/ACM): The Association for the Advancement of Artificial Intelligence (AAAI) and the Association for Computing Machinery (ACM) joined forces in 2018 to start the annual AAAI/ACM Conference on AI, Ethics, and Society. The aim of this conference is to provide a platform for multi-disciplinary participants to address the ethical concerns and challenges regarding issues such as privacy, safety and security, surveillance, inequality, data handling and bias, personal agency, power relations, effective modes of regulation, accountability, sanctions, and workforce displacement. https://www.aies-conference.com/2020/
– AI Ethics Lab: AI Ethics Lab aims to detect and solve ethical issues in building and using AI systems to enhance technology development. In research, the Lab functions as an independent center where multidisciplinary teams of philosophers, computer scientists, legal scholars, and other experts focus on analyzing ethical issues related to AI systems. Its teams work on various projects ranging from research ethics in AI to global guidelines in AI ethics. http://aiethicslab.com/big-picture/
– AI For People: AI For People gathers a diverse team of motivated individuals dedicated to bringing AI policy to the people, in order to create positive change in society with technology, through and for the public. Its mission is to learn, pose questions and take initiative on how AI technology can be used for the social good. It conducts impact analyses, projects and democratic policies that act at the crossing of AI and society. https://www.aiforpeople.org/
– Alliance for Inclusive Algorithms: Organized by two feminist organizations, Women@thetable and Ciudadanía Inteligente, the Alliance for Inclusive Algorithms calls on governments, the private sector, and civil society organizations to take proactive steps to remove biases from AI and increase the representation of women in the field of AI. https://aplusalliance.org/


– Association for Progressive Communications (APC): APC is an international network of civil society organizations founded in 1990, dedicated to empowering and supporting people working for peace, human rights, development and protection of the environment through the strategic use of information and communication technologies (ICTs). Its aim is to build a world in which all people have easy, equal and affordable access to the creative potential of ICTs to improve their lives and create more democratic and egalitarian societies. https://www.apc.org
– Black in AI: Black in AI (BAI) is a multi-institutional, transcontinental initiative creating a space for sharing ideas, fostering collaborations, and discussing initiatives to increase the presence of Black individuals in the field of AI. To this end, BAI holds an annual technical workshop series, runs mentoring programs, and maintains various forums for fostering partnerships and collaborations. https://blackinai.github.io/
– Data & Society: Data & Society studies the social implications of data-centric technologies and automation. It produces original research on topics including AI and automation, the impacts of technology on labor and health, and online disinformation. https://datasociety.net/
– EQUALS: The EQUALS Global Partnership for Gender Equality in the Digital Age is a committed group of corporate leaders, governments, businesses, not-for-profit organizations, academic institutions, NGOs and community groups around the world dedicated to promoting gender balance in the technology sector by championing equality of access, skills development and opportunities for women and men alike. https://www.equals.org/
– Generation AI, UNICEF: UNICEF launched Generation AI with a diverse set of partners, including the (…), UC Berkeley, Article One, Microsoft and others, to set and lead the global agenda on AI and children - outlining the opportunities and challenges, as well as engaging stakeholders to build AI-powered solutions that help realize and uphold child rights. https://www.unicef.org/innovation/GenerationAI
– Global Partnership on Artificial Intelligence: The OECD will host the Secretariat of the new Global Partnership on AI (GPAI), a coalition that aims at ensuring that Artificial Intelligence is used responsibly, respecting human rights and democratic values. The GPAI brings together experts from industry, government, civil society and academia to conduct research and pilot projects on AI. Its objective is to bridge the gap between theory and practice on AI policy. GPAI will create a strong link between international policy development and technical discourse on AI. http://www.oecd.org/going-digital/ai/OECD-to-host-Secretariat-of-new-Global-Partnership-on-Artificial-Intelligence.htm
– IT for Change: IT for Change is an NGO based in Bengaluru, India. It works in the areas of education, gender, governance, community informatics and internet/digital policies to push the boundaries of existing vocabulary and practice, exploring new development and social change frameworks. https://itforchange.net/
– Leverhulme Centre for the Future of Intelligence: The Leverhulme Centre for the Future of Intelligence (CFI) builds a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal: to work together to ensure that humans make the best of the opportunities of artificial intelligence as it develops over coming decades. http://lcfi.ac.uk/
– OpenAI: OpenAI is an AI research and deployment company based in San Francisco, California. Its mission is to ensure that artificial general intelligence benefits all of humanity. The OpenAI Charter describes the principles that guide its work. https://openai.com/about/
– Oxford Internet Institute: The Oxford Internet Institute is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. Students and staff examine gender and AI in projects, courses and research. https://www.oii.ox.ac.uk/


– Partnership on AI: The Partnership on AI conducts research, organizes discussions, shares insights, provides thought leadership, consults with relevant third parties, responds to questions from the public and media, and creates educational material that advances the understanding of AI technologies including machine perception, learning, and automated reasoning. It recently initiated "Closing Gaps in Responsible AI" – a multiphase, multi-stakeholder project aimed at surfacing the collective wisdom of the community to identify salient challenges and evaluate potential solutions to operationalizing AI Principles. These insights can in turn inform and empower the change makers, activists, and policymakers working to develop and manifest responsible AI. https://www.partnershiponai.org/about/
– Teens in AI: The Teens In AI initiative exists to inspire the next generation of AI researchers, entrepreneurs and leaders who will shape the world of tomorrow. It aims to give young people early exposure to AI being developed and deployed for social good. Through a combination of hackathons, accelerators, and bootcamps, together with expert mentoring, talks, company tours, workshops, and networking opportunities, the program creates the platform for young people aged 12-18 to explore AI, machine learning, and data science. https://www.teensinai.com/
– The AI Now Institute: The AI Now Institute at (…) University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Their work focuses on four core domains: rights and liberties; labor and automation; bias and inclusion; safety and critical infrastructure. They have published numerous reports and held discussions on gender equality and bias. Their new initiative 'Data Genesis' is developing new approaches to study and understand the role of training data in the machine learning field. https://ainowinstitute.org/
– The AI for Good Global Summit: The AI for Good Global Summit is a leading action-oriented, global and inclusive United Nations platform on AI. The Summit is organized by the ITU with the XPRIZE Foundation, in partnership with UN Sister Agencies and ACM. https://aiforgood.itu.int/
– The Algorithmic Justice League: The Algorithmic Justice League's mission is to raise awareness about the impacts of AI, equip advocates with empirical research, build the voice and choice of the most impacted communities, and galvanize researchers, policy makers, and industry practitioners to mitigate AI harms and biases. It is building a movement to shift the AI ecosystem towards equitable and accountable AI. https://www.ajlunited.org/
– The Institute of Electrical and Electronics Engineers' (IEEE) Global Initiative: The IEEE Global Initiative's mission is 'to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.' https://standards.ieee.org/industry-connections/ec/autonomous-systems.html

REFERENCES

1 Brussevich, M., Dabla-Norris, E. and Khalid, S. 2019. Is Technology Widening the Gender Gap? Automation and the Future of Female Employment. IMF Working Papers, Working Paper No. 19/91. Washington, DC: International Monetary Fund.
2 Hegewisch, A., Childers, C. and Hartmann, H. 2019. Women, Automation and the Future of Work. Washington, DC: The Institute for Women's Policy Research.
3 Buolamwini, J. 2019. Hearing on: Artificial Intelligence: Societal and Ethical Implications. Washington, DC: House Committee on Science, Space and Technology. Joy Adowaa Buolamwini is a Ghanaian-American computer scientist and digital activist based at the MIT Media Lab. Her 2018 paper Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification uncovered large racial and gender bias in AI facial recognition systems from companies like Microsoft, IBM, and Amazon. Subsequently, she founded the Algorithmic Justice League, an organization that looks to challenge bias in decision-making software.
4 For more information about Google's AI for Social Good programme visit https://ai.google/social-good/.
5 Gurumurthy, A. and Chami, N. 2019. Radicalising the AI governance agenda. Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development. Johannesburg, South Africa: APC.
6 Peña, P. and Varon, J. 2019. Decolonising AI: A transfeminist approach to data and social justice. Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development. Johannesburg, South Africa: APC.
7 Kwet, M. 2019. Digital colonialism is threatening the Global South. Aljazeera, 13 March 2019.
8 ORBIT. 2019. 100 Brilliant Women in AI and Ethics: Conference Report. Leicester, UK: Observatory for Responsible Research and Innovation in ICT. Collett, C. and Dillon, S. 2019. AI and Gender: Four Proposals for Future Research. Cambridge, UK: The Leverhulme Centre for the Future of Intelligence. GIS Watch. 2019. Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development. Johannesburg, South Africa: APC.
9 Buolamwini, op. cit. ORBIT, op. cit.
10 Chui, M. et al. 2018. Notes from the AI Frontier: Applying AI for Social Good. McKinsey Global Institute Discussion Paper, December 2018. Washington, DC: McKinsey Global Institute.
11 See GIS Watch 2019 for a discussion on radical AI governance.
12 CEDAW: https://www.ohchr.org/en/hrbodies/cedaw/pages/cedawindex.aspx; UN Commission on the Status of Women: https://www.unwomen.org/en/csw
13 SDGs and Women: https://www.unwomen.org/en/news/in-focus/women-and-the-sdgs and https://sdgs.un.org/goals/goal5.
14 Intersectionality refers to the complex, cumulative way in which the effects of multiple forms of discrimination (such as sexism, racism, and classism) combine, overlap, or intersect, especially in the experiences of marginalized individuals or groups.
15 In the 2019 GIS Watch report, P. Peña and J. Varon refer to '(…)' as an epistemological tool that aims to re-politicise and de-essentialise global feminist movements that have been used to legitimise policies of exclusion on the basis of gender, migration, miscegenation, race and class.
16 For example, women make up only 17% of the biographies on Wikipedia and 15% of its editors, and there is evidence of gender bias in the language of Wikipedia entries (Wikipedia - https://en.wikipedia.org/wiki/Gender_bias_on_Wikipedia); 26% of subjects and sources in mainstream Internet news stories are women (Who Makes the News - http://whomakesthenews.org/gmmp-campaign); there are significant gender gaps across the stages of academic publishing, citation and comment (NIH - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7112170/); and the rather dismal figures go on, particularly looking at historical data. Women's access to and different use of ICT (phones, internet access, digital literacy, etc.) is another significant factor for the representativeness of data. In 2017 there were 250 million fewer women online (EQUALS - https://www.equals.org/post/2018/10/17/beyond-increasing-and-deepening-basic-access-to-ict-for-women).
17 The digital gender gap or divide refers to the differences between men and women and between girls and boys in their ability to access and use digital technologies and participate fully in the online world. Worldwide, women and girls are disproportionately affected by the lack of access to information and communication technologies (ICTs) and lack of digital skills to use them effectively. For more information about the growing digital gender divide see the 2019 UNESCO report I'd Blush if I Could: closing gender divides in digital skills through education.


18 Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. and Srikumar, M. 2020. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. Cambridge, MA: Berkman Klein Center for Internet & Society. Jobin, A., Ienca, M. and Vayena, E. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence, Vol. 1, pp. 389–399. AI Ethics Lab. 2020. A Guide to the Dynamics of AI Principles: Our Toolbox for Understanding & Using AI Principles. February 2020. UNESCO. 2020. Outcome Document: First Version of a Draft Text of a Recommendation on the Ethics of Artificial Intelligence. SHS/BIO/AHEG-AI/2020/4REV. Paris: UNESCO.
19 UNESCO, 2020, Outcome Document, op. cit. European Commission High-Level Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy AI. Brussels, Belgium: European Commission. Jobin et al., op. cit. UNESCO, 2020, Working document, op. cit.
20 The World Commission on Ethics of Scientific Knowledge and Technology (COMEST) is a multidisciplinary scientific advisory body of UNESCO, made up of independent experts. The work of COMEST has built on and complemented work on AI being done within the United Nations system, other international organizations, nongovernmental organizations, academia and others.
21 UNESCO. 2020. Working Document: Toward a Draft Text of a Recommendation on the Ethics of Artificial Intelligence. SHS/BIO/AHEG-AI/2020/3REV. Paris: UNESCO.
22 Amrute, S. 2019. Of Techno-Ethics and Techno-Affects. Feminist Review, Vol. 123, No. 1, pp. 56–73.
23 GIS Watch, op. cit. Amrute, op. cit., p. 70.
24 Ibid., p. 70.
25 Ibid., p. 59.
26 Ibid., p. 57.
27 Ibid., p. 58.
28 Ibid., p. 57.
29 See https://feministinternet.org/
30 E.g.: Bellamy, R. K. E., et al. 2018. AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. IBM Journal of Research and Development, Vol. 63, No. 4/5. Weinberger, D. 2019. Where fairness ends. Harvard Business School Digital Initiative, 26 March.
31 Fjeld et al., op. cit., p. 47.
32 Tannenbaum, C., Ellis, R. P., Eyssel, F., Zou, J. and Schiebinger, L. 2019. Sex and gender analysis improves science and engineering. Nature, Vol. 575, pp. 137–146.
33 Collett and Dillon, op. cit., p. 23.
34 Tannenbaum et al., op. cit.
35 Fjeld et al., op. cit., p. 50.
36 Ibid., p. 51.
37 Ibid., p. 51.
38 Ibid., p. 52.
39 Ibid., p. 60.
40 Ibid., p. 61.
41 Ibid., p. 64.
42 Jobin et al., op. cit., p. 394.
43 Ibid., p. 395.
44 Ibid.
45 Ibid., p. 396.
46 Tannenbaum et al., op. cit., p. 141.
47 UNESCO, 2020, Working document, op. cit.
48 ORBIT, op. cit.
49 Fjeld et al., op. cit., p. 48.
50 Ibid., p. 66.
51 Buolamwini, op. cit.
52 GIS Watch, op. cit.
53 Tannenbaum et al., op. cit., p. 4.
54 Buolamwini, op. cit.
55 ORBIT, op. cit.
56 UNESCO. 2019. Preliminary Study on a Possible Standard-Setting Instrument on the Ethics of Artificial Intelligence, 40 C/67. Paris: UNESCO. p. 15.
57 Gender segregation refers to the unequal distribution of men and women in the occupational structure. 'Vertical segregation' describes the clustering of men at the top of occupational hierarchies and of women at the bottom; 'horizontal segregation' describes the fact that at the same occupational level (that is, within occupational classes, or even occupations themselves) men and women have different job tasks. See: Gender Segregation (in employment). A Dictionary of Sociology. Encyclopedia.com. 11 August 2020. https://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/gender-segregation-employment
58 Mozilla Foundation. 2020. Internet Health: Trustworthy Artificial Intelligence.
59 Buolamwini, op. cit.
60 Collett and Dillon, op. cit.
61 World Economic Forum Global Future Council on Human Rights 2016-2018. 2018. How to Prevent Discriminatory Outcomes in Machine Learning. White Paper, March 2018. Geneva: World Economic Forum.
62 Collett and Dillon, op. cit.
63 World Economic Forum Global Future Council on Human Rights 2016-2018, op. cit.
64 ORBIT, op. cit.
65 World Economic Forum Global Future Council on Human Rights 2016-2018, op. cit.
66 Ibid.
67 Baxter, K. 2020. AI Ethics at Salesforce. Women in Big Data Webinar on AI and Ethics, 21 May.
68 Ibid.
69 Ibid.
70 Buolamwini, op. cit.
71 ORBIT, op. cit.


72 Herrera, C. 2020. TIBCO: The good, the bad, and the human sides of AI. Women in Big Data Webinar on AI and Ethics, 21 May.
73 UNESCO, 2019, op. cit., p. 14.
74 Tannenbaum et al., op. cit.
75 Amershi, S. et al. 2019. Guidelines for Human-AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ‘19). Association for Computing Machinery, New York, Paper 3, pp. 1–13.
76 World Economic Forum Global Future Council on Human Rights 2016-2018, op. cit.
77 Baxter, op. cit.
78 UNESCO, 2020, Working document, op. cit.
79 West, S. M., Whittaker, M. and Crawford, K. 2019. Discriminating Systems: Gender, Race and Power in AI. New York: AI Now Institute.
80 Buolamwini, op. cit.; World Economic Forum Global Future Council on Human Rights 2016-2018, op. cit.; AI Ethics Lab, op. cit.
81 Wong, J. C. 2016. Women considered better coders – but only if they hide their gender. The Guardian, 12 February.
82 E.g. Pang, W. 2019. How to Ensure Data Quality for AI. insideBIGDATA, 17 November.
83 In silico (pseudo-Latin for ‘in silicon’, alluding to the mass use of silicon for computer chips) is an expression meaning ‘performed on computer or via computer simulation’.
84 Tannenbaum et al., op. cit., p. 5.
85 Ibid.
86 Burnett, M. 2020. Doing Inclusive Design: From GenderMag to InclusiveMag. Human-Computer Interaction Institute Seminar Series. Pittsburgh, PA: Carnegie Mellon University.
87 Swinger, N., De-Arteaga, M., Heffernan IV, N. T., Leiserson, M. D. M., Kalai, A. T. 2019. What are the biases in my word embedding? Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), Honolulu, HI, USA, 2019.
88 Sciforce. 2020. Introduction to the White-Box AI: the Concept of Interpretability. Medium, 31 January.
89 West et al., op. cit.
90 Murphy, M. 2020. Inside Google’s struggle to control its ‘racist’ and ‘sexist’ AI. The Telegraph, 16 January.
91 Ibid.
92 Buolamwini, op. cit.
93 Tannenbaum et al., op. cit., p. 5.
94 Gebru, T. et al. 2018. Datasheets for Datasets. Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Stockholm, Sweden, 2018.
95 West et al., op. cit.
96 World Economic Forum Global Future Council on Human Rights 2016-2018, op. cit.
97 Litt, M. 2018. I accidentally built a brogrammer culture. Now we’re undoing it. Fast Company, 2 February.
98 See https://www.weps.org/.
99 West et al., op. cit.
100 Else, H. 2019. How to banish manels and manferences from scientific meetings. Nature, Vol. 573, pp. 184–186.
101 Avila, R., Brandusescu, A., Ortiz Freuler, J., Thakur, D. 2018. Policy Brief W20 Argentina: Artificial Intelligence: Open Questions about Gender Inclusion. Geneva: World Wide Web Foundation.
102 Tannenbaum et al., op. cit., p. 4.
103 Buolamwini, op. cit.; Avila et al., op. cit.
104 Buolamwini, op. cit.
105 Ibid.
106 UNESCO, 2020, Working document, op. cit.
107 Collett and Dillon, op. cit., p. 8.
108 Ibid., p. 10.
109 Ibid.
110 See https://www.oii.ox.ac.uk/blog/doctoral-research-opportunity-in-gender-and-ai-applications-open/
111 To be avoided is a scenario where a woman in technology is treated as a spokesperson and source for all issues related to gender. The technology sector needs to dig deeper. Similarly, gender equality practitioners cannot outsource conversations on technology to women in the technology workforce and those working to advance women in STEM. Rather, early on, they must play a fundamental role in shaping understanding on gender and become very conversant themselves on the implications of AI.
112 Learning from current efforts on gender and ICT, we see that piecemeal approaches, simplistic checklists, one-off trainings, or efforts lacking accountability have been insufficient in creating meaningful change.
113 The pipeline argument postulates that the gross underrepresentation of women in the technology sector is due to a low female pool of talent in STEM fields. This argument tends to ignore the fact that the underrepresentation of women in the technology sector is also due to technology companies failing to attract and retain female talent.
114 The term ‘corporate feminism’ is used by some to refer to a specific brand of feminism that encourages assimilation into the corporate mainstream rather than a complete deconstruction (or at least re-thinking) of the system as a whole. It puts the onus on the individual to break the glass ceiling, rather than addressing systemic mechanisms of oppression (based on class, sexual orientation, race, religion, ability, etc.). Sheryl Sandberg, author of the book Lean In: Women, Work and the Will to Lead, is often cited as one of the figureheads of corporate feminism.
115 E.g. Collett and Dillon, op. cit., for some questions that have already been asked.
116 Ibid.
117 UNI Global Union. 2017. Top 10 Principles For Ethical Artificial Intelligence. Nyon, Switzerland: UNI Global Union. p. 7.

ARTIFICIAL INTELLIGENCE AND GENDER EQUALITY

118 Ibid., p. 8.
119 OECD. 2020. Recommendation of the Council on Artificial Intelligence. OECD Legal Instruments, OECD/LEGAL/0449. Paris: OECD. p. 7.
120 IEEE. 2019. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. First Edition. New York: IEEE. p. 19.
121 Ibid., p. 82.
122 Ibid., p. 96.
123 Ibid., p. 140.
124 Ibid., p. 188.
125 Ibid., pp. 241–242.
126 Microsoft. 2020. Microsoft AI Principles.
127 While gender and women were not explicitly mentioned, there were images of women provided in the video discussion of this principle.
128 European Commission High-Level Group on Artificial Intelligence, op. cit., p. 4.
129 Ibid., p. 38.
130 Ibid., p. 17.
131 Ibid., p. 18.
132 Ibid., p. 11.
133 Ibid., p. 23.
134 United Nations System Chief Executive Board for Coordination (CEB). 2019. A United Nations system-wide strategic approach and road map for supporting capacity development on artificial intelligence. CEB/2019/1/Add.3. Geneva: CEB. p. 3.
135 Ibid.
136 Ibid., p. 5.
137 Ibid., p. 7.
138 Ibid., p. 9.
139 Ibid.
140 Ibid.
141 Ibid., p. 11.
142 Ibid.
143 UNESCO, 2020, Working Document, op. cit., p. 11.
144 COMEST. 2019. Preliminary study on the Ethics of Artificial Intelligence. SHS/COMEST/EXTWG-ETHICS-AI/2019/1. Paris: UNESCO. p. 25.
145 COMEST. 2017. Report of COMEST on Robotics Ethics. SHS/YES/COMEST-10/17/2 REV. Paris: UNESCO. p. 52.
146 UNESCO, 2020, Working Document, op. cit., p. 39.
147 Ibid., p. 40.
148 Ibid., p. 42.
149 Ibid., p. 43.
150 Ibid., p. 44.
151 UNESCO and EQUALS. 2019. I’d Blush if I Could: Closing Gender Divides in Digital Skills Through Education. Geneva: EQUALS. p. 9.
152 UNESCO, 2020, Working Document, op. cit., p. 48.

BIBLIOGRAPHY

AI Ethics Lab. 2020. A Guide to the Dynamics of AI Principles: Our Toolbox for Understanding & Using AI Principles. https://aiethicslab.com/big-picture/

Amershi, S. et al. 2019. Guidelines for Human-AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ‘19). Association for Computing Machinery, New York, Paper 3, pp. 1–13. DOI: 10.1145/3290605.3300233

Amrute, S. 2019. Of Techno-Ethics and Techno-Affects. Feminist Review, Vol. 123, No. 1, pp. 56–73. DOI: 10.1177/0141778919879744.

Avila, R., Brandusescu, A., Ortiz Freuler, J., Thakur, D. 2018. Policy Brief W20 Argentina: Artificial Intelligence: Open Questions about Gender Inclusion. Geneva: World Wide Web Foundation. http://webfoundation.org/docs/2018/06/AI-Gender.pdf

Baxter, K. 2020. AI Ethics at Salesforce. Women in Big Data Webinar on AI and Ethics, 21 May. https://www.youtube.com/watch?v=qLFIfHoRe_E

Bellamy, R. K. E., et al. 2018. AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. IBM Journal of Research and Development, Vol. 63, No. 4/5. https://arxiv.org/abs/1810.01943

Brussevich, M., Dabla-Norris, E. and Khalid, S. 2019. Is Technology Widening the Gender Gap? Automation and the Future of Female Employment. IMF Working Papers, Working Paper No. 19/91. Washington, DC: International Monetary Fund. https://www.imf.org/en/Publications/WP/Issues/2019/05/06/Is-Technology-Widening-the-Gender-Gap-Automation-and-the-Future-of-Female-Employment-46684

Buolamwini, J. 2019. Hearing on: Artificial Intelligence: Societal and Ethical Implications. Washington, DC: United States House Committee on Science, Space and Technology. https://science.house.gov/imo/media/doc/Buolamwini%20Testimony.pdf

Burnett, M. 2020. Doing Inclusive Design: From GenderMag to InclusiveMag. Human-Computer Interaction Institute Seminar Series. Pittsburgh, PA: Carnegie Mellon University. https://scs.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=1fa67161-215d-45e8-b228-ab3e00f844ce

Chui, M. et al. 2018. Notes from the AI Frontier: Applying AI for Social Good. McKinsey Global Institute Discussion Paper, December 2018. Washington, DC: McKinsey Global Institute. https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good#

Collett, C. and Dillon, S. 2019. AI and Gender: Four Proposals for Future Research. Cambridge, UK: The Leverhulme Centre for the Future of Intelligence. https://www.repository.cam.ac.uk/handle/1810/294360

COMEST. 2017. Report of COMEST on Robotics Ethics. SHS/YES/COMEST-10/17/2 REV. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000253952

COMEST. 2019. Preliminary study on the Ethics of Artificial Intelligence. SHS/COMEST/EXTWG-ETHICS-AI/2019/1. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000367823

Else, H. 2019. How to banish manels and manferences from scientific meetings. Nature, Vol. 573, pp. 184–186. https://www.nature.com/articles/d41586-019-02658-6

European Commission High-Level Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy AI. Brussels, Belgium: European Commission. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. and Srikumar, M. 2020. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. Cambridge, MA: Berkman Klein Center for Internet & Society. http://nrs.harvard.edu/urn-3:HUL.InstRepos:42160420

Gebru, T. et al. 2018. Datasheets for Datasets. Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Stockholm, Sweden, 2018. https://arxiv.org/abs/1803.09010

GIS Watch. 2019. Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development. Johannesburg, South Africa: APC. https://www.apc.org/en/pubs/global-information-society-watch-2019-artificial-intelligence-human-rights-social-justice-and

Gurumurthy, A. and Chami, N. 2019. Radicalising the AI governance agenda. Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development. Johannesburg, South Africa: APC. https://www.giswatch.org/2019-artificial-intelligence-human-rights-social-justice-and-development

Hegewisch, A., Childers, C. and Hartmann, H. 2019. Women, Automation and the Future of Work. Washington, DC: The Institute for Women’s Policy Research. https://iwpr.org/iwpr-issues/employment-and-earnings/women-automation-and-the-future-of-work/

Herrera, C. 2020. TIBCO: The good, the bad, and the human sides of AI. Women in Big Data Webinar on AI and Ethics, 21 May. https://www.youtube.com/watch?v=qLFIfHoRe_E


IEEE. 2019. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. First Edition. New York: IEEE. https://ethicsinaction.ieee.org/

Jobin, A., Ienca, M. and Vayena, E. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence, Vol. 1, pp. 389–399. DOI: 10.1038/s42256-019-0088-2.

Kwet, M. 2019. Digital colonialism is threatening the Global South. Aljazeera, 13 March 2019. https://www.aljazeera.com/indepth/opinion/digital-colonialism-threatening-global-south-190129140828809.html

Litt, M. 2018. I accidentally built a brogrammer culture. Now we’re undoing it. Fast Company, 2 February. https://www.fastcompany.com/40524955/i-accidentally-built-a-brogrammer-culture-now-were-undoing-it

Microsoft. 2020. Microsoft AI Principles. https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6

Mozilla Foundation. 2020. Internet Health: Trustworthy Artificial Intelligence. https://foundation.mozilla.org/en/internet-health/trustworthy-artificial-intelligence/

Murphy, M. 2020. Inside Google’s struggle to control its ‘racist’ and ‘sexist’ AI. The Telegraph, 16 January. https://www.telegraph.co.uk/technology/2020/01/16/google-asking-employees-hurl-insults-ai/

OECD. 2020. Recommendation of the Council on Artificial Intelligence. OECD Legal Instruments, OECD/LEGAL/0449. Paris: OECD. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

ORBIT. 2019. 100 Brilliant Women in AI and Ethics: Conference Report. Leicester, UK: Observatory for Responsible Research and Innovation in ICT. https://www.orbit-rri.org/100-plus-women/

Pang, W. 2019. How to Ensure Data Quality for AI. insideBIGDATA, 17 November. https://insidebigdata.com/2019/11/17/how-to-ensure-data-quality-for-ai/

Peña, P. and Varon, J. 2019. Decolonising AI: A transfeminist approach to data and social justice. Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development. Johannesburg, South Africa: APC. https://www.giswatch.org/2019-artificial-intelligence-human-rights-social-justice-and-development

Sciforce. 2020. Introduction to the White-Box AI: the Concept of Interpretability. Medium, 31 January. https://medium.com/sciforce/introduction-to-the-white-box-ai-the-concept-of-interpretability-5a31e1058611

Swinger, N., De-Arteaga, M., Heffernan IV, N. T., Leiserson, M. D. M., Kalai, A. T. 2019. What are the biases in my word embedding? Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), Honolulu, HI, USA, 2019. https://arxiv.org/pdf/1812.08769.pdf

Tannenbaum, C., Ellis, R. P., Eyssel, F., Zou, J. and Schiebinger, L. 2019. Sex and gender analysis improves science and engineering. Nature, Vol. 575, pp. 137–146. DOI: 10.1038/s41586-019-1657-6.

UNESCO and EQUALS. 2019. I’d Blush if I Could: Closing Gender Divides in Digital Skills Through Education. Geneva: EQUALS. https://unesdoc.unesco.org/ark:/48223/pf0000367416

UNESCO. 2019. Preliminary Study on a Possible Standard-Setting Instrument on the Ethics of Artificial Intelligence, 40 C/67. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000367823

UNESCO. 2020. Outcome Document: First Version of a Draft Text of a Recommendation on the Ethics of Artificial Intelligence. SHS/BIO/AHEG-AI/2020/4REV. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000373434

UNESCO. 2020. Working Document: Toward a Draft Text of a Recommendation on the Ethics of Artificial Intelligence. SHS/BIO/AHEG-AI/2020/3REV. Paris: UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000373199

UNI Global Union. 2017. Top 10 Principles For Ethical Artificial Intelligence. Nyon, Switzerland: UNI Global Union. http://www.thefutureworldofwork.org/opinions/10-principles-for-ethical-ai/

United Nations System Chief Executive Board for Coordination (CEB). 2019. A United Nations system-wide strategic approach and road map for supporting capacity development on artificial intelligence. CEB/2019/1/Add.3. Geneva: CEB. https://undocs.org/en/CEB/2019/1/Add.3

Weinberger, D. 2019. Where fairness ends. Harvard Business School Digital Initiative, 26 March. https://digital.hbs.edu/artificial-intelligence-machine-learning/where-fairness-ends/

West, S. M., Whittaker, M. and Crawford, K. 2019. Discriminating Systems: Gender, Race and Power in AI. New York: AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html

Wong, J. C. 2016. Women considered better coders – but only if they hide their gender. The Guardian, 12 February. https://www.theguardian.com/technology/2016/feb/12/women-considered-better-coders-hide-gender-github

World Economic Forum Global Future Council on Human Rights 2016-2018. 2018. How to Prevent Discriminatory Outcomes in Machine Learning. White Paper, March 2018. Geneva: World Economic Forum. https://www.weforum.org/whitepapers/how-to-prevent-discriminatory-outcomes-in-machine-learning
