
Fashion Style in 128 Floats: Joint Ranking and Classification using Weak Data for Feature Extraction

Edgar Simo-Serra and Hiroshi Ishikawa
Department of Computer Science and Engineering, Waseda University, Tokyo, Japan
[email protected] [email protected]

Abstract

We propose a novel approach for learning features from weakly-supervised data by joint ranking and classification. In order to exploit data with weak labels, we jointly train a feature extraction network with a ranking loss and a classification network with a cross-entropy loss. We obtain high-quality compact discriminative features with few parameters, learned on relatively small datasets without additional annotations. This enables us to tackle tasks with specialized images not very similar to the more generic ones in existing fully-supervised datasets. We show that the resulting features in combination with a linear classifier surpass the state-of-the-art on the Hipster Wars dataset despite using features only 0.3% of the size. Our proposed features significantly outperform those obtained from networks trained on ImageNet, despite being 32 times smaller (128 single-precision floats), trained on noisy and weakly-labeled data, and using only 1.5% of the number of parameters.1

Figure 1: Overview of the proposed feature learning approach. We train a feature extraction network on weakly annotated data by jointly training a feature extractor network with a classification network. For training, an anchor image (center) is provided in conjunction with a similar image (right) and a dissimilar image (left) according to a metric provided on the weak noisy annotations. The classification loss lC serves to learn useful feature maps, while the ranking loss lR on the triplet of Feature CNNs encourages them to learn a discriminative feature representation.

1. Introduction

With the emergence of large-scale datasets and the appearance of deep networks with millions of parameters, researchers have started to replace hand-crafted global image features such as GIST [21] with those obtained from intermediate representations of deep networks trained for classification on large datasets [44]. Although this has led to a great improvement over the previous generation of features, these networks are learned in a fully-supervised manner on large amounts of data with very costly and time-consuming annotation. Features learned on one dataset can be used on another, but naturally not all datasets are equal [32], and thus features taken from networks trained on ImageNet [7] will not work as well on datasets with very different visual characteristics, such as the scene classification dataset Places [49], and vice versa. While unsupervised feature learning exists as an alternative to supervised learning, the lack of guidance on what to learn given by explicit labels makes it a much more complex task [19].

However, images obtained from the Internet usually have associated metadata which, although often inaccurate, can be used as weak labels. In this paper, we study how to exploit data with weak labels in order to obtain high-quality compact discriminative features without additional annotations. With such features, we tackle tasks in which the images are more specific and not very similar to those of existing fully-supervised datasets such as ImageNet or Places. In this work, we focus on the domain of fashion images, which have only recently become the focus of research [40].
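Concretely, the weak tags attached to each image can be turned into a similarity metric for sampling training triplets. The following is a minimal illustrative sketch only: the actual metric, tag format, and sampling strategy used by the authors are not specified here, and the Jaccard similarity over tag sets is an assumption for the example.

```python
def jaccard(tags_a, tags_b):
    """Similarity of two weak tag sets: intersection over union."""
    a, b = set(tags_a), set(tags_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def pick_triplet(anchor_tags, candidates):
    """Choose the most similar and the least similar candidate for a
    given anchor, based only on noisy user-provided tags.
    `candidates` is a list of (image_id, tags) pairs (hypothetical)."""
    scored = sorted(candidates, key=lambda c: jaccard(anchor_tags, c[1]))
    dissimilar, similar = scored[0], scored[-1]
    return similar, dissimilar

# Toy example with hypothetical weak labels:
anchor = ["floral-dress", "heels", "pastel"]
cands = [("img1", ["floral-dress", "pastel", "bag"]),
         ("img2", ["leather-jacket", "boots"]),
         ("img3", ["heels", "pastel"])]
sim, dis = pick_triplet(anchor, cands)
```

Because the tags are noisy, individual triplets can be mislabeled; the approach relies on the ranking signal being correct only on average over many sampled triplets.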
1 Models available at http://hi.cs.waseda.ac.jp/~esimo/research/stylenet/

These images have several characteristics that make them very challenging to tackle with computer vision. On the one hand, they have small local details such as accessories that depend on only a very small part of the image, making segmentation very challenging [26, 40, 43]. On the other hand, we still have to consider more global properties such as the fashion style [16, 27, 34], which depend jointly on the various items in the image. These difficulties, along with the fact that the images generally have a 3:4 aspect ratio and tend to have much brighter colors, cause features taken from networks trained on ImageNet or Places to generalize poorly to fashion-oriented tasks. While no large fully-annotated fashion dataset exists, there are many datasets, such as the Paperdoll [41] and Fashion144k [27] datasets, that have a large amount of weak labels exploitable for learning. In this work, we take advantage of these noisy labels to learn compact discriminative features with deep networks that can then be used in other challenging fashion-related tasks, such as style classification [16], in which they greatly outperform the state-of-the-art and other pretrained CNN baselines, while having 1.5% of the number of parameters and being the size of a SIFT descriptor [20]: 128 floats, 32× smaller than the best competing approach.

Instead of training networks for classification and using an intermediate-layer representation as a feature vector, we propose performing joint classification and ranking as shown in Fig. 1. The ranking acts as a soft constraint on the intermediate layer and encourages the model to learn more representative features guided by the classifier. We perform ranking by considering three images simultaneously: we first pick an anchor image, and then pick an image that is similar to the anchor and one that is different. We establish a similarity metric by exploiting the weak user-provided labels, which are also used as a classification target. By simultaneously considering both ranking and classification, we are able to outperform approaches that use either ranking or classification alone. Our approach allows us to efficiently make use of weak labels to learn features on datasets closer to the target application, as shown in Fig. 2.

Figure 2: We show example images and the mean image from the Hipster Wars [16] (1,893 images), Fashion144k [27] (277,527 images), Places [49] (2,469,371 images), and ImageNet [7] (1,331,167 images) datasets. In both fashion-related datasets we can make out a human silhouette, although it is significantly more diffuse in Fashion144k due to the much larger pose variation. In the Places mean image we can see a gradient in which the top of the image is lighter, likely corresponding to the sky in many images, while in the ImageNet mean we see a much more uniform image with a slightly lighter area in the center. Unlike the other datasets, Fashion144k has only weak labels. With our approach we are able to exploit Fashion144k for training and evaluate on the Hipster Wars dataset, outperforming fully supervised approaches that use larger datasets such as ImageNet or Places.

In summary, our novel approach for feature learning:
• Exploits large amounts of weakly-labeled data commonly found on the Internet.
• Learns a compact discriminative representation with few parameters on relatively small datasets.
• Allows for efficient comparisons by Euclidean distances.

We complement our in-depth quantitative analysis with visualizations for qualitative analysis. In addition to our feature extraction network learning approach, we present a novel visualization approach for comparing the similarity between two images by exploiting the change in the extracted features when they are partially occluded.

2. Related Work

Fashion: Interest in fashion has been growing in the computer vision community. Some of the more traditional problems have been semantic segmentation of garments [26, 41, 43], image retrieval [12, 15], and classification of garments and styles [2, 5, 16, 35, 42]. Directly predicting more esoteric measurements, such as popularity [38] or fashionability [27], has also been studied recently. As defining absolute metrics is rather complicated in such a subjective domain as fashion, exploiting relative attributes [17] and learning image similarity [15, 34] have also been proposed. Many of these approaches rely on datasets created by crawling the Internet, which have large amounts of exploitable weak labels [27, 39]. Yamaguchi et al. [39] used these tags to provide priors for semantic segmentation. Our method is complementary to these approaches and can provide features for greater performance.

Deep Learning: There has been a flurry of new research focusing on exploiting deep learning in computer vision. Much of it focuses on improving classification results on large datasets such as ImageNet [7] or Places [49]. This has led from initial models such as Alexnet [18] to more sophisticated ones such as the VGG models [29] with up to 19 layers, or the Googlenet models [31], which jointly use convolutions of different sizes in each layer. A more structured analysis of different networks was presented by Chatfield et al. [4], who also analyzed using bottleneck layers to provide features of different sizes. All these approaches rely on fully supervised datasets. Xiao et al. [37] extended learning to partially noisy labels. However, they still require roughly 50% of the labels to be correct and need to learn another network to correct the noisy labels, while only noisy labels suffice for our approach.

3.1. Joint Ranking and Classification

We formulate the problem as a joint ranking and classification problem. A ranking loss on triplets of images is applied on a feature extraction network, while a classification loss is employed on a classification network that uses the output of the feature extraction network. For training, we assume that we have a set of images with associated weak labels with large amounts of noise.

For ranking, we take three images as input simultaneously, as shown in Fig. 1. One image (center) is the anchor or reference image, to which the second image (right) is similar.
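The joint objective described above can be sketched as a triplet ranking hinge loss on the extracted features plus a softmax cross-entropy loss on the classifier output. The NumPy sketch below is illustrative only: the margin, the loss weighting `alpha`, and the toy 128-D feature vectors are placeholder assumptions, not the paper's actual choices, and the feature and classifier networks themselves are omitted.

```python
import numpy as np

def ranking_loss(f_anchor, f_sim, f_dis, margin=0.2):
    """Triplet hinge loss: the similar image should be closer to the
    anchor than the dissimilar one by at least `margin`, measured by
    Euclidean distance on the extracted feature vectors."""
    d_pos = np.linalg.norm(f_anchor - f_sim)
    d_neg = np.linalg.norm(f_anchor - f_dis)
    return max(0.0, margin + d_pos - d_neg)

def cross_entropy(logits, label):
    """Softmax cross-entropy on the classification network's output."""
    z = logits - logits.max()  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def joint_loss(f_a, f_s, f_d, logits, label, alpha=1.0):
    """Weighted sum of the ranking and classification losses
    (the weighting scheme here is an assumption)."""
    return ranking_loss(f_a, f_s, f_d) + alpha * cross_entropy(logits, label)

# Toy 128-D features: the similar image is much closer to the anchor,
# so the margin is already satisfied and the ranking loss vanishes.
f_a = np.zeros(128)
f_s = np.zeros(128); f_s[0] = 0.1
f_d = np.zeros(128); f_d[0] = 1.0
```

Because the ranking term depends only on relative distances, it constrains the geometry of the 128-float feature space without requiring the noisy labels to be individually correct, while the classification term keeps the features informative about the weak label categories.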