Unsupervised Learning from Video with Deep Neural Embeddings

Chengxu Zhuang1  Tianwei She1  Alex Andonian2  Max Sobol Mark1  Daniel Yamins1
1Stanford University  2MIT
{chengxuz, shetw, joelmax, yamins}@stanford.edu  [email protected]

Abstract

Because of the rich dynamical structure of videos and their ubiquity in everyday life, it is a natural idea that video data could serve as a powerful unsupervised learning signal for visual representations. However, instantiating this idea, especially at large scale, has remained a significant artificial intelligence challenge. Here we present the Video Instance Embedding (VIE) framework, which trains deep nonlinear embeddings on video sequence inputs. By learning embedding dimensions that identify and group similar videos together, while pushing inherently different videos apart in the embedding space, VIE captures the strong statistical structure inherent in videos, without the need for external annotation labels. We find that, when trained on a large-scale video dataset, VIE yields powerful representations both for action recognition and single-frame object categorization, substantially improving on the state of the art wherever direct comparisons are possible. We show that a two-pathway model with both static and dynamic processing pathways is optimal, provide analyses indicating how the model works, and perform ablation studies showing the importance of key architecture and loss function choices. Our results suggest that deep neural embeddings are a promising approach to unsupervised video learning for a wide variety of task domains.

1. Introduction

A video's temporal sequence often contains information about dynamics and events in the world that is richer than that in its unordered set of frames. For example, as objects and agents move and interact with each other, they give rise to characteristic patterns of visual change that strongly correlate with their visual and physical identities, including object category, geometric shape, texture, mass, deformability, motion tendencies, and many other properties. It is thus an attractive hypothesis that ubiquitously available natural videos could serve as a powerful signal for unsupervised learning of visual representations for both static and dynamic visual tasks.

However, it has been challenging to embody this hypothesis in a concrete neural network that can consume unlabelled video data to learn useful feature representations, especially in the context of at-scale real-world applications. Perhaps the biggest source of difficulty in making progress on unsupervised video learning is that unsupervised learning has presented a formidable challenge even for single static images, where the gap in representational power between features learned by unsupervised and supervised neural networks has been substantial, to the point where the former were unsuitable for use in any at-scale visual task. However, recent advances in learning with deep visual embeddings have begun to produce unsupervised representations that rival the visual task transfer power of representations learned by their supervised counterparts [37, 36, 40, 3]. These methods leverage simple but apparently strong heuristics about data separation and clustering to iteratively bootstrap feature representations that capture increasingly subtle natural image statistics. As a result, it is now possible to obtain unsupervised deep convolutional neural networks that outperform "early modern" deep networks (such as AlexNet in [13]) on challenging recognition tasks such as ImageNet, even when the latter are trained in a supervised fashion. Moreover, work in supervised video classification, action recognition, and video captioning has proposed novel combinations of two- and three-dimensional convolutional structures that are increasingly well-suited to video feature extraction [5].

In this work, we show how the idea of deep unsupervised embeddings can be used to learn features from videos, introducing the Video Instance Embedding (VIE) framework. In VIE, videos are projected into a compact latent space via a deep neural network, whose parameters are then tuned to optimally distribute embedded video instances so that similar videos aggregate while dissimilar videos separate. We find that VIE learns powerful representations for transfer learning to action recognition in the large-scale Kinetics dataset, as well as for single-frame object classification in the ImageNet dataset. Moreover, where direct comparison to previous methods is possible, we find that VIE substantially improves on the state of the art. We evaluate several possibilities for the unsupervised VIE loss function, finding that those that have been shown to be most effective in single-frame unsupervised learning [40] are also most effective in the video context. We also explore several neural network embedding and frame sampling architectures, finding that different temporal sampling statistics are better priors for different transfer tasks, and that a two-pathway static-dynamic architecture is optimal. Finally, we present analyses of the learned representations giving some intuition as to how the models work, and a series of ablation studies illustrating the importance of key architectural choices. Code can be found at https://github.com/neuroailab/VIE.
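As a rough, self-contained illustration of the embedding idea described above, the following PyTorch-style sketch pulls two differently augmented clips of the same video together while pushing clips of different videos apart. It is an assumption-laden toy, not the paper's implementation: the names (ClipEncoder, instance_nce_loss), the layer sizes, and the temperature value are hypothetical, and the actual VIE objectives build on the IR and LA losses discussed in Section 2 rather than this simple in-batch contrastive form.

```python
# Minimal sketch of an instance-embedding objective for video clips (PyTorch).
# All names and hyperparameters are hypothetical; VIE's actual losses differ in detail.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClipEncoder(nn.Module):
    """Toy 3D-conv encoder mapping a clip (B, C, T, H, W) to a unit-norm embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Conv3d(3, 16, kernel_size=3, stride=2, padding=1)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(16, dim)

    def forward(self, x):
        h = F.relu(self.conv(x))
        h = self.pool(h).flatten(1)
        return F.normalize(self.fc(h), dim=1)

def instance_nce_loss(z_a, z_b, temperature=0.07):
    """Pull two clips of the same video together; push clips of different videos apart.
    z_a, z_b: (B, dim) embeddings of two differently augmented clips per video."""
    logits = z_a @ z_b.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    encoder = ClipEncoder()
    clips_a = torch.randn(8, 3, 16, 64, 64)  # batch of 8 clips, 16 frames each
    clips_b = torch.randn(8, 3, 16, 64, 64)  # second augmentation of the same videos
    loss = instance_nce_loss(encoder(clips_a), encoder(clips_b))
    loss.backward()
    print(float(loss))
```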
2. Related Work

Unsupervised Learning of Deep Visual Embeddings. In this work, we employ a framework derived from ideas first introduced in the recent literature on unsupervised learning of embeddings for images [37]. In the Instance Recognition (IR) task, a deep nonlinear image embedding is trained to maximize the distances between different images while minimizing distances between augmentations (e.g. crops) of a given image, thus maximizing the network's ability to recognize individual image instances. In the Local Aggregation (LA) task [40], the embedding loss also allows selected groups of images to aggregate, dynamically determining the groupings based on a local clustering measure. Conceptually, the LA approach resembles a blending of IR and the also-recent DeepCluster method [3], and is more powerful than either IR or DeepCluster alone, achieving state-of-the-art results on unsupervised learning with images. Another task, Contrastive Multiview Coding (CMC) [28], achieves performance similar to that of LA within this embedding framework while aggregating different views of a given image. CMC has also been directly applied to videos, where a single frame is clustered with its future frame and its corresponding optical-flow image. The VIE framework allows the use of any of these embedding objectives, and we test several of them here. Other recently proposed methods such as [12] and [2] have achieved results comparable to LA and CMC by optimizing the mutual information between different views of the images, though they use much deeper and more complex network architectures.

Deep neural networks have also been applied to a variety of video tasks, including captioning [19] and 3D shape extraction [23, 1], but the architectures deployed in those works are quite different from those used here. The structures we employ are more directly inspired by work on supervised action recognition. A core architectural choice explored in this literature is how and where to use 2D single-frame vs. 3D multi-frame convolution. A purely 2D approach is the Temporal Relational Network (TRN) [39], which processes aggregates of 2D convolutional features using MLP readouts. Methods such as I3D [4] have shown that combinations of both 2D and 3D can be useful, deploying 2D processing on RGB videos and 3D convolution on an optical-flow component. A current high-performing architecture for action recognition is the SlowFast network [5], which computes mixed 2D-3D convolutional features from sequences of images at multiple time scales, including a slow branch for low-frequency events and a fast branch for higher-frequency events. The dynamic branch of our two-pathway architecture is chosen to mimic the most successful SlowFast network. However, we find it useful to include in our architecture a static pathway that is not equivalent to either of the SlowFast branches.
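To make the static/dynamic pathway distinction concrete, the following is a schematic sketch, not the paper's architecture: a static pathway applies 2D convolution to a single frame (appearance), a dynamic pathway applies 3D convolution to the full clip (motion), and the two features are fused into one embedding. The toy layer sizes, the middle-frame choice, and the concatenation-based fusion are assumptions for illustration; the actual VIE and SlowFast models are far deeper and handle sampling and fusion differently.

```python
# Schematic two-pathway encoder: static (single-frame, 2D) plus dynamic (multi-frame, 3D).
# Hypothetical toy layers; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPathwayEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Static pathway: 2D convolution over one frame, capturing appearance.
        self.static = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Dynamic pathway: 3D convolution over the whole clip, capturing motion.
        self.dynamic = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16 + 16, dim)

    def forward(self, clip):
        # clip: (B, C, T, H, W); the static pathway sees only the middle frame here.
        mid_frame = clip[:, :, clip.size(2) // 2]   # (B, C, H, W)
        s = self.static(mid_frame).flatten(1)       # (B, 16)
        d = self.dynamic(clip).flatten(1)           # (B, 16)
        return F.normalize(self.fc(torch.cat([s, d], dim=1)), dim=1)

if __name__ == "__main__":
    model = TwoPathwayEncoder()
    clip = torch.randn(2, 3, 16, 64, 64)
    print(model(clip).shape)  # torch.Size([2, 128])
```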
Unsupervised Learning on Videos. The literature on unsupervised video learning is too extensive to review comprehensively here, so we focus our discussion on several of the most relevant approaches. Temporal autoencoders such as PredNet [22], PredRNN [35], and PredRNN++ [34] are intriguing but have not yet evidenced substantial transfer learning performance at scale. Transfer learning results have been generated from a variety of approaches, including the Geometry-Guided CNN [6], motion masks [26], VideoGAN [31], a pairwise-frame siamese triplet network [33], the Shuffle and Learn approach [24], and the Order Prediction Network (OPN) [21]. More recent works, including the Video Rotations Prediction task (3DRotNet) [16], the Video Motion and Appearance task (MoAp) [32], the Space-Time Puzzle task (ST-puzzle) [18], and the Dense Predictive Coding (DPC) task [9], have reported improved performance, with the help of pretraining on large-scale datasets and the use of spatiotemporal network architectures. All of these works operate only on relationships defined within a single video, differentiating them from VIE, which exploits relationships both within and between different videos through a loss function defined on the distribution of video embeddings.

3. Methods

VIE Embedding framework. The general prob-
