Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)

How a General-Purpose Commonsense Ontology Can Improve Performance of Learning-Based Image Retrieval∗

Rodrigo Toro Icarte†, Jorge A. Baier‡,§, Cristian Ruz‡, Alvaro Soto‡
§Chilean Center for Semantic Web Research, ‡Pontificia Universidad Católica de Chile, †University of Toronto
[email protected], {jabaier, cruz, [email protected]}

∗This work was partially funded by grants FONDECYT 1151018 and 1150328. Rodrigo also gratefully acknowledges funding from CONICYT (Becas Chile). We thank the IJCAI-17 reviewers and Margarita Castro for providing valuable insights into this research.

Abstract

The knowledge representation community has built general-purpose ontologies which contain large amounts of commonsense knowledge over relevant aspects of the world, including useful visual information, e.g.: "a ball is used by a football player", "a tennis player is located at a tennis court". Current state-of-the-art approaches for visual recognition do not exploit these rule-based knowledge sources. Instead, they learn recognition models directly from training examples. In this paper, we study how general-purpose ontologies—specifically, MIT's ConceptNet ontology—can improve the performance of state-of-the-art vision systems. As a testbed, we tackle the problem of sentence-based image retrieval. Our retrieval approach incorporates knowledge from ConceptNet on top of a large pool of object detectors derived from a deep learning technique. In our experiments, we show that ConceptNet can improve performance on a common benchmark dataset. Key to our performance is the use of the ESPGAME dataset to select visually relevant relations from ConceptNet. Consequently, a main conclusion of this work is that general-purpose commonsense ontologies improve performance on visual reasoning tasks when properly filtered to select meaningful visual relations.

Figure 1: Left: an image and one of its associated sentences from the MS COCO dataset. Among its words, the sentence features the word Chef, for which there is no visual detector available. Right: part of the hypergraph at distance 1 from the word Chef in ConceptNet. Among the nodes related to the concept Chef, there are several informative concepts for which visual detectors are available.

1 Introduction

The knowledge representation community has recognized that commonsense knowledge bases are needed for reasoning in the real world. Cyc [Lenat, 1995] and ConceptNet (CN) [Havasi et al., 2007] are two well-known examples of large, publicly available commonsense knowledge bases.

CN has been used successfully for tasks that require rather complex commonsense reasoning, including a recent study showing that the information in CN can be used to score as well as a four-year-old on an IQ test [Ohlsson et al., 2013]. CN also contains many assertions that seem visually relevant, such as "a chef is (usually) located at the kitchen".

State-of-the-art approaches to visual recognition tasks are mostly based on learning techniques. Some use mid-level representations [Singh et al., 2012; Lobel et al., 2013], others deep hierarchical layers of composable features [Ranzato et al., 2008; Krizhevsky et al., 2012]. Their goal is to uncover visual spaces where visual similarities carry enough information to achieve robust visual recognition. While some approaches exploit knowledge and semantic information [Liu et al., 2011; Espinace et al., 2013], none of them utilize large-scale ontologies to improve performance.

In terms of CN, previous works have suggested that incorporating CN knowledge into visual applications is nontrivial [Le et al., 2013; Xie and He, 2013; Snoek et al., 2007]. Indeed, the poor results in [Le et al., 2013] and [Xie and He, 2013] can be attributed to a non-negligible rate of noisy relations in CN. The work in [Snoek et al., 2007] helps to support this claim: "...manual process (of CN relations) guarantees high quality links, which are necessary to avoid obscuring the experimental results."

In this paper we study the question of how large and publicly available general-purpose commonsense knowledge repositories, specifically CN, can be used to improve state-of-the-art vision techniques. We focus on the problem of sentence-based image retrieval. We approach the problem by assuming that we have visual detectors for a number of words, and describe a CN-based method to enrich the existing set of detectors. Figure 1 shows an illustrative example: an image retrieval query contains the word Chef, for which there is no visual detector available. In this case, the information contained in the nodes directly connected to the concept Chef in CN provides key information to trigger related visual detectors, such as Person, Dish, and Kitchen, that are highly relevant to retrieve the intended image.

ConceptNet relation                ConceptNet's description
sofa –IsA→ piece of furniture      A sofa is a piece of furniture
sofa –AtLocation→ livingroom       Somewhere sofas can be is livingroom
sofa –UsedFor→ read book           A sofa is for reading a book
sofa –MadeOf→ leather              Sofas are made from leather

Figure 2: A sample of CN relations that involve the concept sofa, together with the English description provided by the CN team on their website.
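The relations sampled in Figure 2 are, in essence, weighted labeled edges between concepts. The sketch below shows one way such assertions could be stored and queried for the distance-1 neighborhood of a concept, as in the Chef example of Figure 1. The assertions and confidence weights here are invented for illustration; they are not CN's actual data.

```python
from collections import defaultdict

# Each assertion is (start, relation, end, weight); the weight expresses
# confidence in the relation. These triples and weights are illustrative.
ASSERTIONS = [
    ("chef", "IsA", "person", 2.0),
    ("chef", "AtLocation", "kitchen", 1.5),
    ("chef", "CapableOf", "cook dish", 1.0),
    ("sofa", "IsA", "piece of furniture", 2.5),
    ("sofa", "AtLocation", "livingroom", 1.8),
]

def build_index(assertions):
    """Index assertions by concept so neighbors can be looked up quickly."""
    index = defaultdict(list)
    for start, rel, end, w in assertions:
        index[start].append((rel, end, w))
        index[end].append((rel, start, w))  # traverse edges in both directions
    return index

def neighbors_at_distance_1(index, concept):
    """Return the concepts directly related to `concept`, with relation and weight."""
    return [(other, rel, w) for rel, other, w in index[concept]]

index = build_index(ASSERTIONS)
for other, rel, w in neighbors_at_distance_1(index, "chef"):
    print(f"chef -{rel}-> {other} (weight {w})")
```

Indexing both endpoints of each assertion means the neighborhood query works regardless of the direction in which a relation was asserted.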
Given a word w for which we do not have a visual detector available, we propose various probabilistic approaches that use CN's relations to estimate the likelihood that there is an object for w in a given image. Key to the performance of our approach is an additional step that uses a complementary source of knowledge, the ESPGAME dataset [Von Ahn and Dabbish, 2004], to filter out noisy and non-visual relations provided by CN. Consequently, a main conclusion of this work is that filtering out relations from CN is very important for achieving good performance, suggesting that future work attempting to integrate pre-existing general knowledge with machine learning techniques should pay close attention to this issue.

The rest of the paper is organized as follows: Section 2 reviews related work; Section 3 describes the elements used in this paper; Sections 4 and 5 motivate and describe our proposed method; Section 6 presents qualitative and quantitative experiments on standard benchmark datasets; finally, Section 7 presents future research directions and concluding remarks.
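The two steps described in the introduction, filtering CN relations with a complementary visual resource and then combining detector responses, can be sketched as follows. The co-occurrence threshold and the noisy-OR combination are our own illustrative assumptions, not necessarily the exact probabilistic models used in the paper, and all detector scores and tag counts below are invented:

```python
# Sketch: estimate the likelihood that an object for the word "chef" appears
# in an image, given detectors only for CN neighbors of "chef".

# Detector responses on one image: P(concept visible | image). Invented values.
detector_scores = {"person": 0.9, "kitchen": 0.7, "read book": 0.1}

# CN neighbors of the query word "chef": (concept, CN confidence weight).
neighbors = [("person", 2.0), ("kitchen", 1.5), ("read book", 0.4)]

# Toy ESPGAME-style co-occurrence counts: how often each concept is tagged
# in the same image as "chef".
cooccurrence_with_w = {"person": 120, "kitchen": 85, "read book": 2}

def filter_relations(neighbors, cooccur, min_count=10):
    """Drop neighbors that rarely co-occur visually with the query word."""
    return [(c, w) for c, w in neighbors if cooccur.get(c, 0) >= min_count]

def noisy_or(neighbors, scores, max_weight=2.0):
    """Combine detector scores as P(w) = 1 - prod_i (1 - r_i * s_i),
    where r_i in [0, 1] scales detector i's score by its CN confidence."""
    p_absent = 1.0
    for concept, weight in neighbors:
        relevance = min(weight / max_weight, 1.0)
        p_absent *= 1.0 - relevance * scores.get(concept, 0.0)
    return 1.0 - p_absent

kept = filter_relations(neighbors, cooccurrence_with_w)
likelihood = noisy_or(kept, detector_scores)
print(f"kept relations: {kept}")
print(f"estimated likelihood of 'chef' in image: {likelihood:.4f}")
```

A noisy-OR has the convenient property that a single strongly firing related detector (here, Person) is already enough to raise the estimate for the undetectable word, while the filtering step removes relations (such as "read book") that are semantically valid but visually uninformative.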
2 Previous Work

The relevance of contextual or semantic information to visual recognition has long been acknowledged and studied by the cognitive psychology and computer vision communities [Biederman, 1972]. In computer vision, the main focus has been on using contextual relations in the form of object co-occurrences and geometrical and spatial constraints. Due to space constraints, we refer the reader to [Marques et al., 2011] for an in-depth review of these topics. As a common issue, these methods do not employ high-level semantic relations such as the ones included in CN.

Knowledge acquisition is one of the main challenges of using a semantic-based approach to object recognition. One common approach to obtaining this knowledge is via text mining [Rabinovich et al., 2007; Espinace et al., 2013] or crowdsourcing [Deng et al., 2009]. As an alternative, Chen et al. [2013] and Divvala et al. [2014] recently presented bootstrapped approaches where an initial set of object detectors and relations is used to mine the web in order to discover new object instances and new commonsense relationships. The new knowledge is in turn used to improve the search for new classifiers and semantic knowledge in a never-ending process. […] wrong semantic inference.

Recently, work on automatic image captioning has made great advances in integrating image and text data [Karpathy and Fei-Fei, 2015; Vinyals et al., 2015; Klein et al., 2015]. These approaches use datasets consisting of images as well as sentences describing their content, such as the Microsoft COCO dataset [Lin et al., 2014]. Coincidentally, the works of Karpathy and Fei-Fei [2015] and Vinyals et al. [2015] share similar ideas, which follow initial work by Weston et al. [2011]. Briefly, these works employ deep neural network models, mainly convolutional and recurrent neural networks, to infer a suitable alignment between sentence snippets and the corresponding image regions that they describe. [Klein et al., 2015], on the other hand, propose to use the Fisher Vector as a sentence representation instead of recurrent neural networks. In contrast to our approach, these methods do not make explicit use of high-level semantic knowledge.

In terms of works that use ontologies to perform visual recognition, [Maillot and Thonnat, 2008] build a custom ontology to perform visual object recognition. [Ordonez et al., 2015] use WordNet and a large set of visual object detectors to automatically predict the natural nouns that people use to name visual object categories. [Zhu et al., 2014] use Markov Logic Networks and a custom ontology to identify several properties related to object affordance in images. In contrast to our work, these methods target different applications. Furthermore, they do not exploit the type of commonsense relations that we want to extract from CN.

3 Preliminaries

ConceptNet. ConceptNet (CN) [Havasi et al., 2007] is a commonsense-knowledge semantic network which represents knowledge in a hypergraph structure. Nodes in the hypergraph correspond to concepts, each represented by a word or a phrase. Hyperarcs represent relations between nodes and are associated with a weight that expresses the confidence in the relation. As stated on its webpage, CN is a knowledge base "containing lots of things computers should know about the world, especially when understanding text written by people." Among the set of relation types in CN, a number of them
