Semantic Embeddings of Generic Objects for Zero-Shot Learning
Tristan Hascoet1*, Yasuo Ariki1,2 and Tetsuya Takiguchi1,2

Hascoet et al. EURASIP Journal on Image and Video Processing (2019) 2019:13
https://doi.org/10.1186/s13640-018-0371-x

RESEARCH (Open Access)

Abstract

Zero-shot learning (ZSL) models use semantic representations of visual classes to transfer the knowledge learned from a set of training classes to a set of unknown test classes. In the context of generic object recognition, previous research has mainly focused on developing custom architectures, loss functions, and regularization schemes for ZSL using word embeddings as semantic representations of visual classes. In this paper, we focus exclusively on the effect of different semantic representations on the accuracy of ZSL. We first conduct a large-scale evaluation of semantic representations learned from either words, text documents, or knowledge graphs on the standard ImageNet ZSL benchmark. We show that, using appropriate semantic representations of visual classes, a basic linear regression model outperforms the vast majority of previously proposed approaches. We then analyze the classification errors of our model to provide insights into the relevance and limitations of the different semantic representations we investigate. Finally, our investigation helps us understand the reasons behind the success of recently proposed approaches based on graph convolutional networks (GCN), which have shown dramatic improvements over previous state-of-the-art models.

Keywords: ZSL, semantic embedding

*Correspondence: [email protected]
1 Kobe University, 1-1 Rokkodaicho, Nada Ward, Kobe 657-0013, Japan
Full list of author information is available at the end of the article.

© The Author(s). 2019 Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

1 Introduction

Recent successes in generic object recognition have largely been driven by the successful application of convolutional neural networks (CNN) trained in a supervised manner on large image datasets. One main drawback of these approaches is that they require a large amount of annotated data to successfully generalize to unseen image samples. The collection and annotation of such datasets for custom applications can be prohibitively complex and/or expensive, which hinders their application to many real-world practical scenarios. To reduce the number of training samples needed for efficient learning, few-shot learning techniques are being actively researched. The zero-shot learning (ZSL) paradigm represents the extreme case of few-shot learning, in which recognition models are trained to recognize instances of a set of target classes without any training sample to learn from.

To recognize unseen classes, ZSL models use descriptions of the visual classes, i.e., representations of the visual classes in a non-visual modality. Research in ZSL has been driven by relatively small-scale benchmarks [1, 2] for which human-annotated visual attributes are available as visual class descriptions. In the case of generic object recognition, however, manually annotating each and every possible visual class of interest with a set of visual attributes is impractical. Hence, generalizing the zero-shot learning approaches developed on such benchmarks to the more practical case of generic object recognition comes with the additional challenge of collecting suitable descriptions of the visual classes.

Finding such descriptions presents two challenges: first, the collection of these descriptions must be automated so as not to require an expensive human annotation process. Second, the collected descriptions must be visually discriminative enough to enable the zero-shot recognition of generic objects. Word embeddings are learned in an unsupervised manner from large text corpora, so they can be collected at a large scale without human supervision. Furthermore, their successful application to a number of natural language processing (NLP) tasks has shown that word embedding representations encode a number of desirable semantic features, which have been naturally assumed to generalize to vision tasks. For these desirable properties, word embeddings have become the standard visual class descriptions used by recent zero-shot generic object recognition models [3–7].

In addition to word embeddings, we argue that generic objects can also be described by either text documents or knowledge graph data that satisfy our requirements: these descriptions both contain visually discriminative information and are automatically collectible from the web at a large scale, without requiring human intervention.

This paper aims to discuss the role and the effect of semantic representations on zero-shot learning of generic objects. To do so, we conduct an extensive empirical evaluation of different semantic representations on the standard generic object ZSL benchmark. We investigate the use of both different raw descriptions (i.e., different text documents and knowledge graphs collected from the web) and different embedding models in each semantic modality (word, graph, and document embedding models).

The main result of our study is to show that a basic linear regression model using graph embeddings outperforms the previous state-of-the-art ZSL models based on word embeddings. This result highlights the first-class role of semantic representations in ZSL accuracy, a topic that has been relatively little discussed in the recent literature. We believe that our results emphasize the need for a better understanding of the nature of the information needed from semantic representations to recognize unseen classes. To shed some light on these results, we then discuss the efficiency, relevance, and limitations of different semantic representations.

Finally, our investigation allows us to better understand the outstanding results recently presented in [8, 9]. In particular, we show that much of the improvement shown by these models over the previous state of the art can be attributed to their use of explicit knowledge about the relationships between training and test classes, as made available by knowledge graphs.

The remainder of this paper is organized as follows: Section 2 reviews the literature for related work, Section 3 presents the preliminaries needed to understand our results, and Section 4 presents the methodology used in our experiments. Section 5 is the main section of our paper, in which we present and discuss the results of our investigation.

2 Related work

While the majority of the works on zero-shot generic object recognition have used word embeddings as semantic features, some works have explored the use of different semantic features, which we present in this section. […] similarity scores and part attributes, while we evaluate graph embedding models.

Mensink et al. [11] use visual class co-occurrence statistics to perform ZSL. Given a training set of multi-labeled images and similarity scores between known and unknown labels, they use the co-occurrence distribution of known labels to predict the occurrence of unknown labels in test images. Their multi-label classification setting differs from the ZSL setting, in which input images are classified into a unique class.

Mukherjee and Hospedales [12] question the limits of using a single data point (word embedding vectors) as the semantic representation of a visual class, because this setting does not allow the representation of intra-class variance of semantic concepts. They used Gaussian distributions to model both the semantic and visual feature distributions of the visual classes.

More related to this work, [13] investigates different semantic representations for zero-shot action recognition. They compare different representations of documents and videos, while we investigate the application of word, document, and knowledge graph embeddings to the zero-shot recognition of generic objects.

A series of works [14–16] compares the zero-shot classification accuracy obtained with semantic representations derived from words, taxonomy, and manual attribute annotations on fine-grained or small-scale ZSL benchmarks. Our investigation differs in that we are concerned with the more practical task of generic object recognition, and we investigate a broader class of semantic features.

3 Preliminaries

3.1 Semantic data acquisition

To conduct our study, we are heavily dependent on the data available to us in the form of image/semantic description pairs (x, y). We use the ImageNet dataset as our starting point, as it has become the standard evaluation benchmark for generic object ZSL. In ImageNet, visual classes are indexed by WordNet [17] concepts, which are defined by three components that correspond to the three semantic modalities we investigate: their lemmas (a set of synonym words that refer to the concept), a definition in natural language, and a node connected by a set of predicate edges to other concept nodes of the WordNet knowledge graph. Figure 1 illustrates the different semantic descriptions provided by […]

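The basic linear regression baseline behind the paper's main result can be sketched in a few lines. The following is a minimal sketch on synthetic data: the direction of the map (semantic embeddings to visual feature space), the use of per-class visual prototypes, and the nearest-prototype decision rule are assumptions made for illustration, not the paper's exact formulation:

```python
# Sketch of a linear-regression ZSL baseline: fit a linear map W from
# semantic class embeddings to visual features on training classes, then
# classify test images by their nearest predicted class prototype.
# All data and dimensions here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_sem, d_vis = 16, 32                          # semantic / visual dims
n_train, n_test = 10, 5                        # numbers of classes

S_train = rng.normal(size=(n_train, d_sem))    # train-class embeddings
V_train = rng.normal(size=(n_train, d_vis))    # train-class visual prototypes
S_test = rng.normal(size=(n_test, d_sem))      # unseen-class embeddings

# Least-squares fit of V_train ~ S_train @ W (no regularization, for brevity)
W, *_ = np.linalg.lstsq(S_train, V_train, rcond=None)

# Predicted visual prototypes for the unseen classes
P_test = S_test @ W                            # shape (n_test, d_vis)

def classify(x):
    """Assign an image feature x to the nearest unseen-class prototype."""
    return int(np.argmin(np.linalg.norm(P_test - x, axis=1)))

# An image feature lying exactly on prototype 3 is recovered as class 3
assert classify(P_test[3]) == 3
```

Swapping the semantic input `S_train`/`S_test` between word, document, and graph embeddings, while keeping this model fixed, is the kind of controlled comparison the study describes.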