The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis
Zhongkai Sun,1 Prathusha K Sarma,2∗ William A Sethares,1 Yingyu Liang1
1University of Wisconsin-Madison, 2Curai
{zsun227, sethares}@wisc.edu, [email protected], [email protected]
∗Work done while at UW-Madison.

Abstract

Multimodal language analysis often considers relationships between features based on text and those based on acoustical and visual properties. Text features typically outperform non-text features in sentiment analysis or emotion recognition tasks, in part because the text features are derived from advanced language models or word embeddings trained on massive data sources, while audio and video features are human-engineered and comparatively underdeveloped. Given that the text, audio, and video describe the same utterance in different ways, we hypothesize that multimodal sentiment analysis and emotion recognition can be improved by learning (hidden) correlations between features extracted from the outer product of text and audio (we call this text-based audio) and analogous text-based video. This paper proposes a novel model, the Interaction Canonical Correlation Network (ICCN), to learn such multimodal embeddings. ICCN learns correlations between all three modes via deep canonical correlation analysis (DCCA), and the proposed embeddings are then tested on several benchmark datasets and against other state-of-the-art multimodal embedding algorithms. Empirical results and ablation studies confirm the effectiveness of ICCN in capturing useful information from all three views.

1 Introduction

Human language communication occurs in several modalities: via words that are spoken, by tone of voice, and by facial and bodily expressions. Understanding the content of a message thus requires understanding all three modes.

With the explosive growth in the availability of data, several machine learning algorithms have been successfully applied to multimodal tasks such as sentiment analysis (Morency, Mihalcea, and Doshi 2011; Soleymani et al. 2017), emotion recognition (Haq and Jackson 2011), image-text retrieval (Wang, Li, and Lazebnik 2016), and aiding medical diagnosis (Liu et al. 2019; Lee et al. 2014). Among multimodal language sentiment or emotion experiments involving unimodal features (Zadeh et al. 2016; 2018c; Tsai et al. 2018; 2019), it is commonly observed that text-based features perform better than visual or auditory modes. This is plausible for at least three reasons: 1) Text itself contains considerable sentiment-related information. 2) Visual or acoustic information may sometimes confuse the sentiment or emotion analysis task. For instance, "angry" and "excited" may have similar acoustic characteristics (high volume and high pitch) even though they belong to opposite sentiments; similarly, "sad" and "disgusted" may have different visual features even though both belong to the negative sentiment. 3) Algorithms for text analysis have a richer history and are better studied.

Based on this observation, learning the hidden relationship between verbal and non-verbal information is a key point in multimodal language analysis. This can be approached by looking at different ways of combining multimodal features.
The simplest way to combine text (T), audio (A), and video (V) for feature extraction and classification is to concatenate the A, V, and T vectors. An alternative is to use the outer product (Liu et al. 2018; Zadeh et al. 2017), which can represent the interaction between pairs of features, resulting in 2D or 3D arrays that can be processed using advanced methods such as convolutional neural networks (CNNs) (Lawrence et al. 1997). Other approaches (Zadeh et al. 2018a; Liang et al. 2018; Zadeh et al. 2018c; Wang et al. 2018) study multimodal interactions and intra-actions using either graph or temporal memory networks with a sequential neural network such as the LSTM (Gers, Schmidhuber, and Cummins 1999). While all of these have contributed towards learning multimodal features, they typically ignore the hidden correlation between text-based audio and text-based video. Individual modalities are either combined via neural networks or passed directly to the final classifier stage. However, attaching both audio and video features to the same textual information may enable the non-text information to be better understood, and in turn the non-text information may impart greater meaning to the text. Thus, it is reasonable to study the deeper correlations between text-based audio and text-based video features.

This paper proposes a novel model which uses the outer product of feature pairs along with Deep Canonical Correlation Analysis (DCCA) (Andrew et al. 2013) to learn useful multimodal embedding features. The effectiveness of using an outer product to extract cross-modal information has been explored in (Zadeh et al. 2017; Liu et al. 2018). Accordingly, features from each mode are first extracted independently at the sentence (or utterance) level, and two outer-product matrices (T ⊗ V and T ⊗ A) are built to represent the interactions between text and video and between text and audio. Each outer-product matrix is connected to a convolutional neural network (CNN) for feature extraction. The outputs of these two CNNs can be considered feature vectors for text-based audio and text-based video, and should be correlated.
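As a concrete illustration, the following is a minimal PyTorch sketch of the two outer products and their CNN feature extractors. It is an assumption-level sketch rather than the architecture used in the paper: the feature dimensions (d_t, d_a, d_v), the layer sizes, and the OuterProductCNN class are all illustrative.

```python
# Minimal sketch (assumptions, not the paper's exact architecture) of building
# the text-based audio (T ⊗ A) and text-based video (T ⊗ V) matrices and
# extracting a feature vector from each with a small CNN.
import torch
import torch.nn as nn

d_t, d_a, d_v = 300, 74, 35            # assumed utterance-level feature sizes

class OuterProductCNN(nn.Module):
    """Maps a (d_t x d_other) outer-product matrix to a fixed-size feature vector."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.fc = nn.Linear(8 * 8 * 8, out_dim)

    def forward(self, m):               # m: (batch, d_t, d_other)
        h = self.conv(m.unsqueeze(1))   # add a channel dimension for the CNN
        return self.fc(h.flatten(1))

t = torch.randn(4, d_t)                # text sentence embeddings
a = torch.randn(4, d_a)                # audio utterance features
v = torch.randn(4, d_v)                # video utterance features

ta = torch.einsum('bi,bj->bij', t, a)  # text-based audio: T ⊗ A
tv = torch.einsum('bi,bj->bij', t, v)  # text-based video: T ⊗ V

h_ta = OuterProductCNN()(ta)           # feature vector for text-based audio
h_tv = OuterProductCNN()(tv)           # feature vector for text-based video
```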
In order to better correlate the above text-based audio and text-based video, we use Canonical Correlation Analysis (CCA) (Hotelling 1936), a well-known method for finding a linear subspace in which two inputs are maximally correlated. Unlike cosine similarity or Euclidean distance, CCA is able to learn the direction of maximum correlation over all possible linear transformations and is not limited by the original coordinate systems. However, one limitation of CCA is that it can only learn linear transformations. An extension of CCA named Deep CCA (DCCA) (Andrew et al. 2013) uses deep neural networks to allow non-linear relationships in the CCA transformation.
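For reference, the standard CCA objective (which DCCA inherits) can be written as follows; this is the textbook formulation rather than an equation reproduced from this paper. Given two views X_1 and X_2 with covariance matrices Σ_11 and Σ_22 and cross-covariance Σ_12, CCA seeks projection directions

\[
(w_1^{*}, w_2^{*}) \;=\; \operatorname*{arg\,max}_{w_1, w_2}\; \operatorname{corr}\!\left(w_1^{\top} X_1,\; w_2^{\top} X_2\right)
\;=\; \operatorname*{arg\,max}_{w_1, w_2}\; \frac{w_1^{\top} \Sigma_{12}\, w_2}{\sqrt{w_1^{\top} \Sigma_{11}\, w_1}\;\sqrt{w_2^{\top} \Sigma_{22}\, w_2}} .
\]

DCCA keeps this objective but applies it to the outputs f_1(X_1; θ_1) and f_2(X_2; θ_2) of two neural networks, so the network weights are trained, together with the projections, to maximize the resulting correlation.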
2018) relation over all possible linear transformations and is not learned multistage fusion at each LSTM step so that the limited by the original coordinate systems. However, one multi-modal fusion can be decomposed into several sub- limitation of CCA is that it can only learn linear transfor- problems and then solved in a specialized and effective way. mations. An extension to CCA named Deep CCA (DCCA) A multimodal transformer is proposed by (Tsai et al. 2019) (Andrew et al. 2013) uses a deep neural network to al- that uses attention based cross-modal transformers to learn low non-linear relationships in the CCA transformation. Re- interactions between modalities. cently several authors (Rotman, Vulic,´ and Reichart 2018; Cross-modal relationship learning via CCA: Canoni- Hazarika et al. 2018) have shown the advantage of using cal Correlation Analysis(CCA) (Hotelling 1936) learns the CCA-based methods for studying correlations between dif- maximum correlation between two variables by mapping ferent inputs. Inspired by these, we use DCCA to correlate them into a new subspace. Deep CCA (DCCA) (Andrew et text-based audio and text-based video. Text-based audio and al. 2013) improves the performance of CCA by using feed- text-based video features derived from the two CNNs are in- forward neural networks in place of the linear transformation put into a CCA layer which consists of two projections and in CCA. a CCA Loss calculator. The two CNNs and the CCA layer A survey of recent literature sees applications of CCA- then form a DCCA, the weights of the two CNNs and the based methods in analyzing the potential relationship be- projections are updated by minimizing the CCA Loss. In this tween different variables. For example, a CCA based model way, the two CNNs are able to extract useful features from to combine domain knowledge and universal word embed- the outer-product matrices constrained by the CCA loss. Af- dings is proposed by (Sarma, Liang, and Sethares 2018). ter optimizing the whole network, outputs of the two CNNs Models proposed by (Rotman, Vulic,´ and Reichart 2018) are concatenated with the original text sentence embedding use Deep Partial Canonical Correlation Analysis (DPCCA), as the final multi-modal embedding, which can be used for a variant of DCCA, for studying the relationship between the classification. two languages based on the same image they are describ- We evaluate our approach on three benchmark multi- ing. Work by (Sun et al. 2019) investigates the application modal sentiment analysis and emotion recognition datasets: of DCCA to simple concatenations of multimodal-features, CMU-MOSI (Zadeh et al. 2016), CMU-MOSEI (Zadeh et while (Hazarika et al. 2018) applied CCA methods to learn al. 2018c), and IEMOCAP(Busso et al. 2008). Additional joint-representation for detecting sarcasm. Both approaches experiments are presented to illustrate the performance of show the effectiveness of CCA methods towards learning the ICCN algorithm. The rest of the paper is organized as potential correlation between two input variables. follows: Section 2 presents related work, Section 3 intro- duces our proposed model and Section 4 describes our ex- 3 Methodology perimental setup.

We evaluate our approach on three benchmark multimodal sentiment analysis and emotion recognition datasets: CMU-MOSI (Zadeh et al. 2016), CMU-MOSEI (Zadeh et al. 2018c), and IEMOCAP (Busso et al. 2008). Additional experiments are presented to illustrate the performance of the ICCN algorithm. The rest of the paper is organized as follows: Section 2 presents related work, Section 3 introduces our proposed model, and Section 4 describes our experimental setup.

2 Related Work

Several lines of work combine all three modalities. (Chen et al. 2017) propose improvements to multimodal embeddings, using reinforcement learning to align the embeddings at the word level by removing noise. A multimodal tensor fusion network is built in (Zadeh et al. 2017) by calculating the outer product of text, audio, and video features to represent comprehensive features; however, this method is limited by the large computational resources needed to calculate the outer product. (Liu et al. 2018) develop an efficient low-rank method for building tensor networks which reduces computational complexity and achieves competitive results. A Memory Fusion Network (MFN) is proposed by (Zadeh et al. 2018a), which memorizes temporal and long-term interactions and intra-actions across modalities; this memory can be stored and updated in an LSTM. (Liang et al. 2018) learn multistage fusion at each LSTM step so that multimodal fusion can be decomposed into several sub-problems and then solved in a specialized and effective way. A multimodal transformer is proposed by (Tsai et al. 2019) that uses attention-based cross-modal transformers to learn interactions between modalities.

Cross-modal relationship learning via CCA: Canonical Correlation Analysis (CCA) (Hotelling 1936) learns the maximum correlation between two variables by mapping them into a new subspace. Deep CCA (DCCA) (Andrew et al. 2013) improves the performance of CCA by using feed-forward neural networks in place of the linear transformations in CCA.

A survey of recent literature shows applications of CCA-based methods in analyzing the potential relationships between different variables. For example, a CCA-based model to combine domain knowledge and universal word embeddings is proposed by (Sarma, Liang, and Sethares 2018). Models proposed by (Rotman, Vulić, and Reichart 2018) use Deep Partial Canonical Correlation Analysis (DPCCA), a variant of DCCA, to study the relationship between two languages based on the same image they are describing. Work by (Sun et al. 2019) investigates the application of DCCA to simple concatenations of multimodal features, while (Hazarika et al. 2018) apply CCA methods to learn a joint representation for detecting sarcasm. Both approaches show the effectiveness of CCA methods for learning the potential correlation between two input variables.

3 Methodology