A Survey on Bias in Visual Datasets
Simone Fabbrizzi¹, Symeon Papadopoulos¹, Eirini Ntoutsi², and Ioannis Kompatsiaris¹
¹CERTH-ITI, Thessaloniki, Greece
²Freie Universität Berlin, Germany

Abstract

Computer Vision (CV) has achieved remarkable results, outperforming humans in several tasks. Nonetheless, it may result in major discrimination if not handled with proper care. CV systems depend heavily on the data they are fed and can learn and amplify biases within such data. Thus, the problems of understanding and discovering biases are both of utmost importance. Yet, to date there is no comprehensive survey on bias in visual datasets. To this end, this work aims to: i) describe the biases that can affect visual datasets; ii) review the literature on methods for bias discovery and quantification in visual datasets; iii) discuss existing attempts to collect bias-aware visual datasets. A key conclusion of our study is that the problem of bias discovery and quantification in visual datasets is still open, and there is room for improvement both in the methods and in the range of biases that can be addressed. Moreover, there is no such thing as a bias-free dataset, so scientists and practitioners must become aware of the biases in their datasets and make them explicit. To this end, we propose a checklist that can be used to spot different types of bias during visual dataset collection.

1 Introduction

In the fields of Artificial Intelligence (AI), Algorithmic Fairness and (Big) Data Ethics, the term bias has many different meanings: it might refer to a statistically biased estimator, to a systematic error in a prediction, to a disparity among demographic groups, or even to an undesired causal relationship between a protected attribute and another feature. Ntoutsi et al. [64] define bias as "the inclination or prejudice of a decision made by an AI system which is for or against one person or group, especially in a way considered to be unfair", but also identify several ways in which bias is encoded in the data (e.g. via spurious correlations, causal relationships among the variables, and unrepresentative data samples). The aim of this work is to provide the reader with a survey on the latter problem in the context of visual data (i.e. images and videos).

Thanks to Deep Learning technology, Computer Vision (CV) gained unprecedented momentum and reached performance levels that were unimaginable before. For example, in tasks like object detection, image classification and image segmentation, CV achieves great results and sometimes even outperforms humans (e.g. in object classification tasks). Nonetheless, visual data, on which CV relies heavily for both training and evaluation, remain a challenging data type to analyse. An image encapsulates many features that require human interpretation and context to make sense of: the human subjects, the way they are depicted and their reciprocal position in the image frame, implicit references to culture-specific notions and background knowledge, etc. Even the colouring scheme can be used to convey different messages. Thus, making sense of visual content remains a very complex task.

Furthermore, CV has recently drawn attention because of its ethical implications when deployed in a number of application settings, ranging from targeted advertising to law enforcement.
There has been mounting evidence that deploying CV systems without a comprehensive ethical assessment may result in major discrimination against minorities and protected groups. Indeed, facial recognition technologies [68], gender classification algorithms [11], and even autonomous driving systems [84] have been shown to exhibit discriminatory behaviour. While bias in AI systems is a well-studied field, research on biased CV is more limited, despite the widespread analysis of image data in the ML community and the abundance of visual data produced nowadays. Moreover, to the best of our knowledge, there is no comprehensive survey on bias in visual datasets ([78] represents a seminal work in the field, but it is limited to object detection datasets). Hence, the contributions of the present work are: i) to explore and discuss the different types of bias that arise in different visual datasets; ii) to systematically review the work that the CV community has done so far for addressing and measuring bias in visual datasets; and iii) to discuss some attempts to compile bias-aware datasets. We believe this work to be a useful tool for helping scientists and practitioners both to develop new bias-discovery methods and to collect data that are as unbiased as possible. To this end, we propose a checklist that can be used to spot the different types of bias that can enter the data during the collection process (Table 6).

The structure of the survey is as follows. First, we provide some background about bias in AI systems, the life cycle of visual content, and how biases can enter at different steps of this cycle (Section 2). Second, we describe in detail the different types of bias that might affect visual datasets (Section 3) and provide concrete examples of CV applications that are affected by those biases. Third, we systematically review the methods for bias discovery in visual content proposed in the literature and provide a brief summary of each (Section 4). We also outline future streams of research based on our review. Finally, in Section 5, we discuss the weaknesses and strengths of some bias-aware visual benchmark datasets.

2 Background

In this section, we provide some background knowledge on bias in AI systems (Section 2.1) and describe how different types of bias appear during the life cycle of visual content (Section 2.2).

2.1 Bias in AI Systems

In the field of AI ethics, bias is the prejudice of an automated decision system towards individuals or groups of people on the basis of protected attributes like gender, race, age, etc. [64]. Instances of this prejudice have caused discrimination in many fields, including recidivism scoring [1], online advertisement [75], gender classification [11], and credit scoring [7]. While algorithms may also be responsible for the amplification of pre-existing biases in the training data [8], the quality of the data itself contributes significantly to the development of discriminatory AI applications, such as those mentioned above. Ntoutsi et al. [64] identified two ways in which bias is encoded in the data: correlations and causal influences between the protected attributes and other features; and the lack of representation of protected groups in the data. They also noted that bias can manifest in ways that are specific to the data type. In Section 2.2, and in more detail in Section 3, we explore bias specifically for visual data.

Furthermore, it is important to note that defining the concepts of bias and fairness in mathematical terms is not a trivial task. Indeed, Verma & Rubin [80] survey more than 20 different measures of algorithmic fairness, many of which are incompatible with each other [13, 49]. This incompatibility, the so-called impossibility theorem [13, 49], forces scientists and practitioners to choose the measures they use based on their personal beliefs or other constraints (e.g. business models) on what is to be considered fair for the particular problem/domain.
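To make this incompatibility concrete, consider two widely used group fairness criteria, written here in our own notation (the survey itself does not formalise them) for a binary classifier Ŷ, ground truth Y and protected attribute A:

```latex
% Demographic parity: equal positive prediction rates across groups
P(\hat{Y}=1 \mid A=0) \;=\; P(\hat{Y}=1 \mid A=1)

% Equalised odds: equal true and false positive rates across groups
P(\hat{Y}=1 \mid Y=y,\, A=0) \;=\; P(\hat{Y}=1 \mid Y=y,\, A=1),
\qquad y \in \{0,1\}
```

If the base rates P(Y=1 | A=a) differ between the two groups, an imperfect classifier cannot satisfy both criteria simultaneously; analogous tensions between calibration and error-rate balance underlie the impossibility results cited above [13, 49].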
Given the impact of AI, the mitigation of bias is a crucial task. It can be achieved in several different ways, including pro-active approaches, mitigation approaches, and retroactive approaches [64]. Our work falls into the first category, namely proactive bias-aware data collection approaches (Section 5). Bias mitigation approaches can be further categorised into pre-processing, in-processing and post-processing approaches (further details can be found in [64]). Finally, explainability of black-box models [30] is among the most prominent retrospective approaches, especially since the EU introduced the "right to explanations" as part of the General Data Protection Regulation (GDPR) (see also the Association for Computing Machinery's statement on Algorithmic Transparency and Accountability). According to this view, it is important to understand why models make certain decisions instead of others, both for debugging and improving the models themselves and for providing the final recipients of those decisions with meaningful feedback.
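As an illustration of the pre-processing category (our example; the survey only names the categories), consider reweighing in the style of Kamiran & Calders, which weights each (group, label) combination so that the protected attribute and the label become statistically independent in the weighted training set. A minimal sketch, assuming instances are given as (group, label) pairs:

```python
from collections import Counter

def reweighing(samples):
    """Assign a weight to every (group, label) combination so that
    group membership and label are independent in the weighted data:
    w(g, y) = P(G=g) * P(Y=y) / P(G=g, Y=y).
    `samples` is a list of (group, label) pairs, one per instance."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)   # marginal of G
    label_counts = Counter(y for _, y in samples)   # marginal of Y
    joint_counts = Counter(samples)                 # joint of (G, Y)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n)
                / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }
```

In-processing methods instead alter the learning objective (e.g. via a fairness regulariser), while post-processing methods adjust the outputs of an already trained model.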
Figure 1: Simplified illustration of the visual content life cycle and associated sources of bias. Steps and the biases they can introduce: 1. Real world (historical discrimination); 2. Capture (selection bias; framing bias); 3. Editing (framing bias); 4. Dissemination (selection bias; framing bias); 5. Data collection (selection bias; label bias); 6. Algorithms (algorithmic bias), leading to discrimination and to the generation of new biased visual data. Actors/structures involved: society, photographers/video makers, mainstream/social media, scientists/businesses.

2.2 Visual Content Life Cycle and Bias

Figure 1 gives an overview of the life cycle of visual content. It additionally depicts how biases can enter at any step of this cycle and can be amplified in consecutive iterations.

Real world. The journey of visual content alongside bias starts even before the actual content is generated. Our world is undeniably shaped by inequalities, and this is reflected in the generation of data in general and, in particular, in the generation of visual content. For example, Zhao et al. [89] found that MS-COCO [54], a large-scale object detection, segmentation and captioning dataset used as a benchmark in CV competitions, was more likely to associate kitchen objects with women. While both image capturing and dataset collection come at a later stage in the life cycle of Figure 1, it is clear that in this instance such bias has roots in the gender division between productive and reproductive/care labour. Nevertheless, as shown in the following paragraphs, each step of the life cycle of visual content can reproduce or amplify historical discrimination, as well as introduce new biases.
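Skews of this kind can be made measurable with simple co-occurrence statistics. The sketch below is our illustration, loosely in the spirit of the bias score used by Zhao et al. [89]; the data format, function name and toy numbers are assumptions, not the authors' code:

```python
from collections import Counter

def cooccurrence_bias(annotations, group_values=("man", "woman")):
    """For each object category, return the fraction of its
    person co-occurrences involving the first group value.
    `annotations` is an iterable of (object_label, group_label)
    pairs, e.g. one pair per image in which the object appears
    together with a person of the annotated group.
    A value of 0.5 means balance; values near 0 or 1 flag skew."""
    counts = Counter(annotations)            # (object, group) -> count
    objects = {obj for obj, _ in counts}
    bias = {}
    for obj in objects:
        per_group = [counts[(obj, g)] for g in group_values]
        total = sum(per_group)
        if total > 0:
            bias[obj] = per_group[0] / total
    return bias

# Toy example (fabricated counts, for illustration only):
pairs = [("kitchen", "woman")] * 70 + [("kitchen", "man")] * 30 \
      + [("snowboard", "man")] * 80 + [("snowboard", "woman")] * 20
print(cooccurrence_bias(pairs))
# {'kitchen': 0.3, 'snowboard': 0.8} -> both far from the balanced 0.5
```

Zhao et al. [89] additionally compare such training-set skews against the skews of model predictions, which lets them quantify how much a trained model amplifies the bias already present in the data.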