MMFEAT: a Toolkit for Extracting Multi-Modal Features

Douwe Kiela
Computer Laboratory, University of Cambridge
[email protected]

Abstract

Research at the intersection of language and other modalities, most notably vision, is becoming increasingly important in natural language processing. We introduce a toolkit that can be used to obtain feature representations for visual and auditory information. MMFEAT is an easy-to-use Python toolkit, which has been developed with the purpose of making non-linguistic modalities more accessible to natural language processing researchers.

1 Introduction

Distributional models are built on the assumption that the meaning of a word is represented as a distribution over others (Turney and Pantel, 2010; Clark, 2015), which implies that they suffer from the grounding problem (Harnad, 1990). That is, they do not account for the fact that human semantic knowledge is grounded in the perceptual system (Louwerse, 2008). There has been a lot of interest within the Natural Language Processing community in making use of extra-linguistic perceptual information, much of it in a subfield called multi-modal semantics. Such multi-modal models outperform language-only models on a range of tasks, including modelling semantic similarity and relatedness (Bruni et al., 2014; Silberer and Lapata, 2014), improving lexical entailment (Kiela et al., 2015b), predicting compositionality (Roller and Schulte im Walde, 2013), bilingual lexicon induction (Bergsma and Van Durme, 2011) and metaphor identification (Shutova et al., 2016). Although most of this work has relied on vision for the perceptual input, recent approaches have also used auditory (Lopopolo and van Miltenburg, 2015; Kiela and Clark, 2015) and even olfactory (Kiela et al., 2015a) information.

In this demonstration paper, we describe MMFEAT, a Python toolkit that makes it easy to obtain images and sound files and extract visual or auditory features from them. The toolkit includes two standalone command-line tools that do not require any knowledge of the Python programming language: one that can be used for automatically obtaining files from a variety of sources, including Google, Bing and FreeSound (miner.py); and one that can be used for extracting different types of features from directories of data files (extract.py). In addition, the package comes with code for manipulating multi-modal spaces and several demos to illustrate the wide range of applications. The toolkit is open source under the BSD license and available at https://github.com/douwekiela/mmfeat.

2 Background

2.1 Bag of multi-modal words

Although it is possible to ground distributional semantics in perception using e.g. co-occurrence patterns of image tags (Baroni and Lenci, 2008) or surrogates of human semantic knowledge such as feature norms (Andrews et al., 2009), the de facto method for grounding representations in perception has relied on processing raw image data (Baroni, 2016). The traditional method for obtaining visual representations (Feng and Lapata, 2010; Leong and Mihalcea, 2011; Bruni et al., 2011) has been to apply the bag-of-visual-words (BoVW) approach (Sivic and Zisserman, 2003). The method can be described as follows:

1. obtain relevant images for a word or set of words;
2. for each image, get local feature descriptors;
3. cluster the feature descriptors with k-means to find the centroids, a.k.a. the "visual words";
4. quantize the local descriptors by comparing them to the cluster centroids; and
5. combine the relevant image representations into an overall visual representation for a word.

The local feature descriptors in step (2) tend to be variants of the dense scale-invariant feature transform (SIFT) algorithm (Lowe, 2004), where an image is laid out as a dense grid and feature descriptors are computed for each keypoint.

A similar method has recently been applied to the auditory modality (Lopopolo and van Miltenburg, 2015; Kiela and Clark, 2015), using sound files from FreeSound (Font et al., 2013). Bag-of-audio-words (BoAW) uses mel-frequency cepstral coefficients (MFCCs) (O'Shaughnessy, 1987) for the local descriptors, although other local frame representations may also be used. In MFCC, frequency bands are spaced along the mel scale (Stevens et al., 1937), which has the advantage that it approximates human auditory perception more closely than e.g. linearly-spaced frequency bands.
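The clustering, quantization and aggregation steps (3-5) are identical for the visual and auditory variants; only the local descriptors differ. The following is a minimal sketch of these steps using scikit-learn's k-means, assuming the per-file descriptor matrices and the word-to-file index have already been obtained; the function and variable names are ours, and the sketch illustrates the general recipe rather than the toolkit's own implementation.

    # Minimal sketch of BoVW/BoAW steps 3-5: cluster local descriptors into a
    # codebook, quantize each file against the codebook, and aggregate the file
    # histograms per word. Purely illustrative; not the toolkit's implementation.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def build_bow_space(descriptors, index, k=100):
        # `descriptors`: dict mapping each file to an (n_frames, n_dims) array of
        # local descriptors (dense SIFT for images, MFCC frames for sound files).
        # `index`: dict mapping each word to the list of files retrieved for it.
        all_desc = np.vstack(list(descriptors.values()))
        kmeans = MiniBatchKMeans(n_clusters=k).fit(all_desc)   # step 3: centroids

        def quantize(desc):
            # step 4: histogram of nearest-centroid assignments for one file
            assignments = kmeans.predict(desc)
            return np.bincount(assignments, minlength=k).astype(float)

        # step 5: aggregate (here: sum) the file histograms into one vector per word
        return {word: sum(quantize(descriptors[f]) for f in files)
                for word, files in index.items()}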
2.2 Convolutional neural networks

In computer vision, the BoVW method has been superseded by deep convolutional neural networks (CNNs) (LeCun et al., 1998; Krizhevsky et al., 2012). Kiela and Bottou (2014) showed that such networks learn high-quality representations that can successfully be transferred to natural language processing tasks. Their method works as follows:

1. obtain relevant images for a word or set of words;
2. for each image, do a forward pass through a CNN trained on an image recognition task and extract the pre-softmax layer;
3. combine the relevant image representations into an overall visual representation for a word.

They used the pre-softmax layer (referred to as FC7) from a CNN trained by Oquab et al. (2014), which was an adaptation of the well-known CNN by Krizhevsky et al. (2012) that played a key role in the deep learning revolution in computer vision (Razavian et al., 2014; LeCun et al., 2015). Such CNN-derived representations perform much better than BoVW features and have since been used in a variety of NLP applications (Kiela et al., 2015c; Lazaridou et al., 2015; Shutova et al., 2016; Bulat et al., 2016).
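As an illustration of steps 2 and 3, the sketch below extracts the penultimate fully-connected layer (the FC7 analogue) of a pre-trained AlexNet and averages the image vectors per word. It uses torchvision purely for illustration, whereas the toolkit itself relies on Caffe models (see Section 3); the function names are our own.

    # Hedged sketch of CNN feature extraction: forward each image through a
    # pre-trained network with the final classification layer removed, so the
    # output is the penultimate ("FC7"-style) activation vector.
    import torch
    from PIL import Image
    from torchvision import models

    weights = models.AlexNet_Weights.IMAGENET1K_V1
    model = models.alexnet(weights=weights).eval()
    # Drop the final classification layer from the classifier head.
    model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:-1])
    preprocess = weights.transforms()

    def image_vector(path):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return model(img).squeeze(0).numpy()   # 4096-dimensional vector

    def word_vector(image_paths):
        # combine the image representations into one vector per word (averaging)
        return sum(image_vector(p) for p in image_paths) / len(image_paths)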
2.3 Related work

The process for obtaining perceptual representations thus involves three distinct steps: obtaining files relevant to words or phrases, obtaining representations for the files, and aggregating these into visual or auditory representations. To our knowledge, this is the first toolkit that spans this entire process. There are libraries that cover some of these steps. Notably, VSEM (Bruni et al., 2013) is a Matlab library for visual semantics representation that implements BoVW and useful functionality for manipulating visual representations. DISSECT (Dinu et al., 2013) is a toolkit for distributional compositional semantics that makes it easy to work with (textual) distributional spaces. Lopopolo and van Miltenburg (2015) have also released their code for obtaining BoAW representations (https://github.com/evanmiltenburg/soundmodels-iwcs).

3 MMFeat Overview

The MMFeat toolkit is written in Python. There are two command-line tools (described below) for obtaining files and extracting representations that do not require any knowledge of Python. The Python interface maintains a modular structure and contains the following modules:

  • mmfeat.miner
  • mmfeat.bow
  • mmfeat.cnn
  • mmfeat.space

Source files (images or sounds) can be obtained with the miner module, although this is not a requirement: it is straightforward to build an index of a data directory that matches words or phrases with relevant files. The miner module automatically generates this index, a Python dictionary mapping labels to lists of filenames, which is stored as a Python pickle file index.pkl in the data directory. The index is used by the bow and cnn modules, which together form the core of the package for obtaining perceptual representations. The space package allows for the manipulation and combination of multi-modal spaces.
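For example, a minimal way to build such an index by hand, assuming (purely for illustration) that the data directory contains one subdirectory of files per label:

    # Build the index described above by hand: a dictionary mapping each label to
    # the list of files relevant to it, pickled as index.pkl in the data directory.
    # The one-subdirectory-per-label layout is an assumption made for this sketch.
    import os
    import pickle

    def build_index(data_dir):
        index = {}
        for label in sorted(os.listdir(data_dir)):
            subdir = os.path.join(data_dir, label)
            if os.path.isdir(subdir):
                index[label] = [os.path.join(subdir, fname)
                                for fname in sorted(os.listdir(subdir))]
        with open(os.path.join(data_dir, 'index.pkl'), 'wb') as f:
            pickle.dump(index, f)
        return index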
miner: Three data sources are currently supported: Google Images (https://images.google.com; GoogleMiner), Bing Images (https://www.bing.com/images; BingMiner) and FreeSound (https://www.freesound.org; FreeSoundMiner). All three of them require API keys, which can be obtained online and are stored in the miner.yaml settings file in the root folder.

bow: The bag-of-words methods are contained in this module. BoVW and BoAW are accessible through the mmfeat.bow.vw and mmfeat.bow.aw modules respectively, through the BoVW and BoAW classes. These classes obtain feature descriptors and perform clustering and quantization through a standard set of methods. BoVW uses dense SIFT for its local feature descriptors; BoAW uses MFCC. The modules also contain an interface for loading local feature descriptors from Matlab, allowing for simple integration with e.g. VLFeat. The centroids obtained by the clustering (sometimes also called the "codebook") are stored in the data directory for re-use at a later stage.

cnn: The CNN module uses Python bindings to the Caffe deep learning framework (Jia et al., 2014). It supports the pre-trained reference adaptation of AlexNet (Krizhevsky et al., 2012), GoogLeNet (Szegedy et al., 2015) and VGGNet (Simonyan and Zisserman, 2015). The interface is identical to the bow interface.

space: An additional module is provided for making it easy to manipulate perceptual representations.

4 Tools

MMFeat comes with two easy-to-use command-line tools for those unfamiliar with the Python programming language.

4.1 Mining: miner.py

The miner.py tool takes three arguments: the data source (bing, google or freesound), a query file that contains a line-by-line list of queries, and a data directory to store the mined image or sound files in. Its usage is as follows:

    miner.py {bing,google,freesound} \
        query_file data_dir [-n int]

The -n option can be used to specify the number of images to download per query. The following examples show how to use the tool to get 10 images from Bing and 100 sound files from FreeSound for the queries "dog" and "cat":

    $ echo -e "dog\ncat" > queries.txt
    $ python miner.py -n 10 bing \
        queries.txt ./img_data_dir
    $ python miner.py -n 100 freesound \
        queries.txt ./sound_data_dir

4.2 Feature extraction: extract.py