
The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)

Learning from Crowds by Modeling Common Confusions

Zhendong Chu, Jing Ma, Hongning Wang
Department of Computer Science, University of Virginia
{zc9uy, jm3mr, hw5x}@virginia.edu

Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Crowdsourcing provides a practical way to obtain large amounts of labeled data at a low cost. However, the annotation quality of annotators varies considerably, which imposes new challenges in learning a high-quality model from the crowdsourced annotations. In this work, we provide a new perspective to decompose annotation noise into common noise and individual noise, and differentiate the source of confusion based on instance difficulty and annotator expertise on a per-instance-annotator basis. We realize this new crowdsourcing model by an end-to-end learning solution with two types of noise adaptation layers: one is shared across annotators to capture their commonly shared confusions, and the other one pertains to each annotator to realize individual confusion. To recognize the source of noise in each annotation, we use an auxiliary network to choose from the two noise adaptation layers with respect to both instances and annotators. Extensive experiments on both synthesized and real-world benchmarks demonstrate the effectiveness of our proposed common noise adaptation solution.

Introduction

The availability of large amounts of labeled data is often a prerequisite for applying supervised learning solutions in practice. Crowdsourcing makes it possible to collect massive labeled data in a time- and cost-efficient manner (Buecheler et al. 2010). However, because of the varying and unknown expertise of annotators, crowdsourced labels are usually noisy, which naturally leads to an important research problem: how to train an accurate learning model with only crowdsourced annotations?

The first step to estimating an accurate learning model from crowdsourced annotations is to properly model the generation of such data. In this work, we focus on the crowdsourced classification problem. The seminal work of Dawid and Skene (1979) (known as the DS model) assumes that each annotator has his/her own class-dependent confusion when providing annotations to instances. This is modeled by an annotator-specific confusion matrix, whose entries are the probabilities of flipping one class into another. The DS model has become the cornerstone of most learning-from-crowds solutions, and mainstream solutions perform label aggregation prior to classifier training: their key difference lies in the different label aggregation methods based on the DS model (Venanzi et al. 2014; Zhang et al. 2014; Whitehill et al. 2009). Recent developments focus more on unified solutions, where variants of the Expectation-Maximization (EM) algorithm are proposed to integrate label aggregation and classifier training (Albarqouni et al. 2016; Cao et al. 2019; Raykar et al. 2010). Typically, such solutions treat the classifier's predictions as latent variables, which are then mapped to the observed crowdsourced labels using individual confusion matrices of annotators. Rodrigues and Pereira (2018) further fuse label inference and classifier training in an end-to-end approach using neural networks, where the gradient from label aggregation is directly propagated to estimate the annotators' confusion matrices. Tanno et al. (2019) propose a similar solution but encourage the annotator confusion matrix to be close to an identity matrix by trace regularization.
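To make this background concrete, the following is a minimal, hypothetical PyTorch-style sketch of such a per-annotator noise adaptation layer: the classifier's belief over the latent true class is pushed through one row-stochastic confusion matrix per annotator to predict what each annotator would answer. The module and variable names (AnnotatorConfusion, num_annotators, etc.) are ours for illustration and are not taken from any of the cited implementations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnnotatorConfusion(nn.Module):
    """DS-style noise model: one learnable confusion matrix per annotator."""

    def __init__(self, num_annotators, num_classes):
        super().__init__()
        # Initialize near the identity so training starts from "annotators are reliable".
        init = torch.eye(num_classes).repeat(num_annotators, 1, 1)
        self.confusion_logits = nn.Parameter(init * 5.0)

    def forward(self, class_probs):
        # class_probs: (batch, num_classes), the classifier's belief over the true label.
        # Row-normalize so entry (i, j) is the probability of flipping class i into class j.
        confusion = F.softmax(self.confusion_logits, dim=-1)        # (R, C, C)
        # Expected annotation distribution for every annotator.
        return torch.einsum('bc,rck->brk', class_probs, confusion)  # (batch, R, C)
```

Training then minimizes the cross-entropy between these per-annotator predictions and the annotations each annotator actually provided (with missing annotations masked out), so the gradient from label aggregation flows back into the classifier, which is how the end-to-end variants above couple the two stages.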
All existing DS-model-based solutions assume that noise in crowdsourced labels is only caused by individual annotators' expertise. However, it is not uncommon that different annotators share common confusions about the labels. For example, when a bird in an image is too small, every annotator has a chance to confuse it with an airplane because of the background sky. We hypothesize that on an instance an annotator is confident about, he/she is more likely to use his/her expertise to provide a label (i.e., introducing individualized noise), while he/she would use common sense to label the unconfident ones.

We empirically evaluate this hypothesis on two public crowdsourcing datasets, one for image labeling and one for music genre classification (more details of the datasets can be found in the Experiment Section), and visualize the results in Figure 1. On both datasets, there are quite a few commonly made mistakes across annotators. For example, on the image labeling dataset LabelMe, 61.0% of annotators mistakenly labeled street as inside city and 44.1% of them mislabeled open country as forest; on the music classification dataset, 63.6% of annotators mislabeled metal as rock and 38.6% of them mislabeled disco as pop. The existence of such shared confusions across annotators directly affects label aggregation: the majority of annotators are not necessarily correct, as their mistakes are no longer independent (e.g., those large off-diagonal entries in Figure 1). This is against the fundamental assumption of the DS model, and strongly urges new noise modeling to better handle real-world crowdsourced data.

Figure 1: Analysis of commonly made mistakes across annotators on two real-world crowdsourcing datasets: (a) LabelMe, (b) Music. The value of each entry in the heatmap denotes the percentage of annotators with this confusion pair (e.g., mistakenly labeling street as inside city on the LabelMe dataset).
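The paper does not spell out exactly how the Figure 1 statistics are computed. One plausible reading of "the percentage of annotators with this confusion pair" is the fraction of annotators who make a given mistake at least once, which could be reproduced roughly as follows (the function and its arguments are illustrative assumptions, not the authors' code):

```python
import numpy as np

def shared_confusion_heatmap(annotations, true_labels, num_classes, missing=-1):
    """Percentage of annotators who ever mislabel class i as class j.

    annotations: (num_annotators, num_instances) array of crowd labels,
                 with `missing` where an annotator skipped an instance.
    true_labels: (num_instances,) reference labels.
    """
    num_annotators = annotations.shape[0]
    made_mistake = np.zeros((num_annotators, num_classes, num_classes), dtype=bool)
    for r in range(num_annotators):
        observed = annotations[r] != missing
        for t, a in zip(true_labels[observed], annotations[r][observed]):
            if t != a:
                made_mistake[r, t, a] = True
    # Fraction of annotators exhibiting each confusion pair, as a percentage.
    return 100.0 * made_mistake.mean(axis=0)
```

Diagonal entries stay at zero by construction, so large off-diagonal values in the returned matrix correspond directly to the shared confusions discussed above.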
Moving beyond the independent noise assumption in the family of DS models (Dawid and Skene 1979; Rodrigues and Pereira 2018), we decompose annotation noise into two sources, common noise and individual noise, and differentiate the source of noise based on both annotators and instances. We refer to the annotation confusions shared across annotators as common noise, and model it by a global confusion matrix shared by all annotators. Meanwhile, we also maintain annotator-specific confusion matrices for individual noise modeling. We still treat the ground-truth labels of instances as latent variables, but map them to noisy annotations by two parallel confusion matrices to capture these different sources of noise. We determine the choice of confusion matrices on a per-instance-annotator basis, by explicitly modeling annotator expertise and instance difficulty (Whitehill et al. 2009; Yin et al. 2017). To leverage the power of representation learning to model annotator expertise and instance difficulty, we realize all our model components using neural networks. In particular, we model the two types of confusion matrices as two parallel noise adaptation layers (Goldberger and Ben-Reuven 2016). For each annotator-instance pair, the classifier first maps the instance to a latent class label; then an auxiliary network decides which noise adaptation layer maps the latent class label to the observed annotation. Cross-entropy loss is computed on the predicted annotations for end-to-end training of these components. We name this approach CoNAL - learning from crowds with common noise adaptation layers. Extensive experiments show considerable improvement

... entropy principle on a probability distribution over annotators, instances and annotations, in which by minimizing entropy instance confusability and annotator expertise are naturally inferred. Khetan and Oh (2016) and Shah, Balakrishnan, and Wainwright (2016) consider generalized DS models which model instance difficulty. Instead of simply using a single scalar to model instance difficulty and annotator expertise as in previous works, we model them by learning their corresponding representations via an auxiliary network, which can better capture the shared statistical pattern across observed annotations.

Our method is closely related to several existing DS-based models that consider relations among annotators, but it is also clearly distinct from them. Kamar, Kapoor, and Horvitz (2015) use a global confusion matrix to capture the identical mistakes made by all annotators, and it is designed to replace the individual matrix when observations of an annotator are rare. Moreover, the choice of confusion matrix in this solution only depends on the number of annotations an annotator provided. This does not necessarily reflect the annotator's expertise, as the task assignment is typically out of their control in crowdsourcing. Venanzi et al. (2014) and Imamura, Sato, and Sugiyama (2018) cluster annotators to generate their own confusion matrices from a shared community-wide confusion matrix. However, the above approaches still assume a single underlying noise source, and thus they do not consider the difference between global (or community-level) and individual confusions. Li, Rubinstein, and Cohn (2019) explore the correlation of annotations across annotators by classifying them into auxiliary subtypes under different ground-truth classes. However, the characteristics of each annotator are missing, since each of them is only represented by a specific subtype. In our work, we still characterize individual annotators by modeling their own confusions.

Common Confusion Modeling in Crowdsourced Data

In this section, we formulate our problem-solving framework for training classifiers directly from crowdsourced labels, based on the insight of common confusion modeling across annotators. We first describe the notations and our probabilistic modeling of the noisy annotation process, con-
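As a concrete illustration of the architecture outlined in the introduction, here is a minimal, hypothetical PyTorch-style sketch of a CoNAL-like forward pass. It is our own reading of the design, not the authors' released implementation: the backbone, the embedding size emb_dim, the near-identity initialization, and the sigmoid gate over an instance-annotator similarity are placeholder choices standing in for the auxiliary network the paper defines.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoNALSketch(nn.Module):
    """Classifier + common/individual noise adaptation layers + auxiliary gate."""

    def __init__(self, backbone, feat_dim, num_annotators, num_classes, emb_dim=20):
        super().__init__()
        self.backbone = backbone                     # maps raw input -> feature vector
        self.clf = nn.Linear(feat_dim, num_classes)  # predicts the latent true label
        eye = torch.eye(num_classes)
        # Common confusion shared by all annotators, plus one matrix per annotator.
        self.common = nn.Parameter(eye.clone() * 5.0)
        self.individual = nn.Parameter(eye.repeat(num_annotators, 1, 1) * 5.0)
        # Auxiliary network: instance and annotator representations -> gating weight.
        self.inst_proj = nn.Linear(feat_dim, emb_dim)
        self.annot_emb = nn.Embedding(num_annotators, emb_dim)

    def forward(self, x):
        feat = self.backbone(x)                          # (B, feat_dim)
        class_probs = F.softmax(self.clf(feat), dim=-1)  # (B, C) belief over true label
        common = F.softmax(self.common, dim=-1)          # (C, C) row-stochastic
        individual = F.softmax(self.individual, dim=-1)  # (R, C, C)
        # Gate in [0, 1] per (instance, annotator) pair: how much common noise to use.
        gate = torch.sigmoid(self.inst_proj(feat) @ self.annot_emb.weight.T)  # (B, R)
        g = gate[..., None, None]
        mixed = g * common + (1.0 - g) * individual      # (B, R, C, C)
        # Predicted annotation distribution for every annotator.
        annot_probs = torch.einsum('bc,brck->brk', class_probs, mixed)
        return annot_probs, class_probs
```

Training would minimize the cross-entropy between annot_probs and the observed annotations, with annotators who did not label an instance masked out, so that the classifier, both noise adaptation layers, and the gate are learned end-to-end; at test time only class_probs is kept.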