Speech Emotion Classification Using Raw Audio Input and Transcriptions

Gabriel Lima and JinYeong Bak
KAIST, Daejeon, Republic of Korea
+82 42 350 7749
{gcamilo, jy.bak}@kaist.ac.kr

SPML '18, November 28–30, 2018, Shanghai, China
© 2018 ACM. ISBN 978-1-4503-6605-2/18/11. DOI: https://doi.org/10.1145/3297067.3297089

ABSTRACT
As new gadgets that interact with the user through voice become accessible, not only does the content of the speech grow in importance, but so does the way in which the user has spoken. Even though many techniques have been developed to detect emotion in speech, none of them can fully grasp the real emotion of the speaker. This paper presents a neural network model capable of predicting emotions in conversations by analyzing transcriptions and raw audio waveforms, focusing on feature extraction using convolutional layers and on feature combination. The model achieves an accuracy of over 71% across four classes: Anger, Happiness, Neutrality and Sadness. We also analyze the effect of audio and textual features on the classification task by interpreting attention scores and parts of speech. This paper explores the use of raw audio waveforms, which, to the best of our knowledge, have not yet been deeply explored in the emotion classification task, achieving results close to the state of the art.

CCS Concepts
• Computing methodologies → Machine learning; Neural networks; Machine learning approaches

Keywords
Emotion Classification; Feature Extraction; Signal Processing; Neural Networks; Convolutional Layers

1. INTRODUCTION
With the introduction of new technologies, human-computer interaction has increased immensely. Tasks such as asking your cellphone to automatically set calendar events or alarms, using a home assistant to control your appliances, or simply ordering food online with your voice have become routine. All of this was developed to improve and facilitate everything from the simplest to the most complex tasks people complete every day. However, these technologies still lack a basic human ability: empathy. In order to develop empathy, machines must first be able to understand and analyze emotions from their users, allowing them to change their actions and speech according to the situation.

In this paper, we propose a model that extracts features from raw audio waveforms of speech and their transcriptions and classifies them into emotion classes. By utilizing convolutional layers on audio waves and word embeddings, as proposed by [1; 2; 3], we can extract features that, when combined in different ways, can classify the emotion of speech.

In later sections, we analyze the importance of the extracted textual and audio features by interpreting attention scores for both elements. These attention scores represent how much focus the neural network should put on each feature, allowing it to efficiently utilize the most important ones. We also show words and expressions that the model has learned as characteristic of certain emotions.

For all results in this paper, we used a multimodal dataset named IEMOCAP [4], which consists of two-way conversations among 10 speakers. The conversations are segmented into utterances that are annotated with 4 emotion classes: Anger, Happiness, Neutrality and Sadness. We used 10% of the dataset as the test set and the other 90% for training, resulting in 555 and 4976 utterances, respectively. To the best of our knowledge, the state-of-the-art performance in the emotion classification task on this dataset is an accuracy of 0.721 [5].

The main contributions of this paper are 1) a deep learning model that classifies emotional speech into its respective emotion using raw audio waveforms and transcriptions, 2) an audio model capable of extracting audio features from raw audio waveforms, 3) a study of the importance of acoustic and textual features in the emotion classification task using attention models, and 4) an analysis of possible emotional words in the IEMOCAP dataset. We address the growing popularity of human-computer interaction by proposing a model capable of classifying speech emotion, allowing systems to understand users' emotions and adapt their behavior to the user.

2. RELATED WORK
Many works in emotion recognition and classification use textual, acoustic and visual features as input for the task [6; 7]. Acoustic features from audio data, such as Mel-frequency coefficients, are often extracted [8] using tools not embedded inside the classification model. However, raw audio waveforms have achieved great results in speech generation [9], modelling [10] and recognition [11], but have not yet been fully explored in emotion classification. In those areas, work with raw waveforms has matched or surpassed previous results obtained with features extracted outside the model. By using convolutional layers, we believe that it is possible to extract features that are useful for the classification task as well.

As shown by Hazarika et al. [5], it is possible to achieve better results on the emotion recognition task by mixing acoustic and textual features in different ways. The authors also analyzed many models that tackle feature-level fusion in emotion classification, such as attention, and their counterparts without such fusion, inspiring a part of this paper. Attention models have also achieved good results in tasks such as image [12] and document [13] classification, by allowing models to focus on the most important features of the input.

[Figure 1. Graphical representation of our model.]

3. MODEL
Our model has two different networks, one for the transcriptions and another for raw audio waveforms, which are combined with the different methods explained in Section 3.3. After combining the features extracted by these networks, we use a fully connected layer to classify the sentence into an emotion class. All networks are trained at the same time with the Adam optimizer and L2 regularization, using the PyTorch framework. The acoustic and textual feature extraction models are presented in Sections 3.1 and 3.2, respectively.
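To make the overall wiring concrete, the following is a minimal PyTorch sketch of how the two feature extractors, a simple feature combination and the fully connected classifier could be assembled and trained jointly; the module names (AudioEncoder, TextEncoder), the concatenation used for fusion and all hyperparameter values are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    """Joint model: an audio encoder, a text encoder, feature combination,
    and a fully connected layer over the four emotion classes."""

    def __init__(self, audio_encoder, text_encoder, fused_dim, n_classes=4):
        super().__init__()
        self.audio_encoder = audio_encoder  # network of Section 3.1 (assumed interface)
        self.text_encoder = text_encoder    # network of Section 3.2 (assumed interface)
        self.classifier = nn.Linear(fused_dim, n_classes)

    def forward(self, waveform, tokens):
        x_audio = self.audio_encoder(waveform)    # acoustic features
        x_text = self.text_encoder(tokens)        # textual features
        y = torch.cat([x_text, x_audio], dim=-1)  # one possible combination (Eq. 2)
        return self.classifier(y)                 # logits over emotion classes

# Both sub-networks and the classifier are trained jointly; L2 regularization
# can be expressed through Adam's weight_decay (illustrative values below).
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
# loss_fn = nn.CrossEntropyLoss()
```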
3.1 Raw Audio Waveforms
As explained in Section 2, raw audio waveforms have shown good results in speech generation [9], modelling [10] and recognition [11], and in this paper we propose the use of these raw waveforms for audio classification. Even though features for this task have usually been extracted with outside tools, for example as audio coefficients, we believe that convolutional layers, which have proven effective not only in Computer Vision [14; 15] but also in Natural Language Processing [2], can learn meaningful and complex features for audio classification as well.

The raw audio waveforms are first padded with zeros within a batch so that they have the same length. Our model has two convolutional layers with different filter sizes ($k_1$ and $k_2$) and numbers of channels ($c_1$ and $c_2$), along with Batch Normalization, ReLU and a residual connection. The extracted features are then pooled with Adaptive Max Pooling, which outputs a fixed $n_{pool}$-sized vector for each waveform. We use a GRU layer with $n_{GRU}$ units to capture the temporal dependency of the waveform. Finally, the extracted audio features are scaled to $x_{audio} \in \mathbb{R}^{2 \times emb\_size}$ in order to be used in our attention model.
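A possible PyTorch realization of this audio network is sketched below. The kernel sizes, stride, channel counts and the pooling, GRU and projection sizes are placeholders for the hyperparameters $k_1$, $k_2$, $c_1$, $c_2$, $n_{pool}$ and $n_{GRU}$, and the sketch assumes $c_1 = c_2$ so that the residual sum is valid.

```python
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Feature extractor over zero-padded raw waveforms: two 1-D convolutions
    with Batch Normalization and ReLU, a residual connection, Adaptive Max
    Pooling to a fixed length, a GRU over the pooled sequence, and a final
    projection so the features can be used by the attention model."""

    def __init__(self, c1=64, c2=64, k1=80, k2=3, n_pool=128, n_gru=128, emb_size=300):
        super().__init__()
        # Stride on the first convolution is an assumption to shorten the sequence.
        self.conv1 = nn.Sequential(
            nn.Conv1d(1, c1, kernel_size=k1, stride=4),
            nn.BatchNorm1d(c1), nn.ReLU())
        self.conv2 = nn.Sequential(
            nn.Conv1d(c1, c2, kernel_size=k2, padding=k2 // 2),
            nn.BatchNorm1d(c2), nn.ReLU())
        self.pool = nn.AdaptiveMaxPool1d(n_pool)        # fixed n_pool-sized output
        self.gru = nn.GRU(c2, n_gru, batch_first=True)  # temporal dependency
        self.proj = nn.Linear(n_gru, 2 * emb_size)      # scale to IR^(2 x emb_size)

    def forward(self, wave):                   # wave: (batch, 1, samples), zero-padded
        h1 = self.conv1(wave)
        h2 = self.conv2(h1)                    # same length and channel count as h1
        h = self.pool(h1 + h2)                 # residual connection, then pooling
        _, last = self.gru(h.transpose(1, 2))  # (batch, n_pool, c2) -> last GRU state
        return self.proj(last.squeeze(0))      # x_audio: (batch, 2 * emb_size)
```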
3.2 Transcriptions
Each word of the sentence is embedded either with an embedding matrix trained alongside the model or with a pre-trained Word2Vec [16], yielding $x \in \mathbb{R}^{emb\_size}$. We use convolutional layers as proposed by Kim [2] to extract features from the sentences.

A sentence, which is the concatenation of its words, of length $n$ is first padded with zeros inside its batch and convolved with a filter $w \in \mathbb{R}^{emb\_size \times p}$, $p \in \{f_1, f_2\}$. Each sentence is convolved twice with the same number of channels $n_{fms}$ and max-pooled, resulting in vectors $x_{text,i} \in \mathbb{R}^{n_{fms}}$, $i = 1, 2$. The vectors $x_{text,i}$ are then concatenated into $x_{text}$ and used for the attention model explained in Section 3.3.

We also use subsampling of frequent words in order to compensate for the imbalance between frequent and rare words: each word $w_i$ is discarded with probability $P_i$ as in Equation 1, where $f(w_i)$ is the frequency of the word in the dataset. We set the parameter $t$ to 8000.

$P_i(w_i) = \sqrt{f(w_i)/t}$   (1)

3.3 Combining Text and Audio
As methods for combining text and audio, we propose attention models along with trivial concatenation (Equation 2) and addition (Equation 3). The attention models take as their starting point the work of Hazarika et al. [5].

$y = x_{text} \oplus x_{audio}$   (2)

$y = x_{text} + x_{audio}$   (3)

In Equation 2, $\oplus$ represents trivial concatenation.

As for the attention models, we calculate the attention scores for textual and audio features by using matrix multiplication and projecting the scores onto either $\mathbb{R}^n$, where $n$ is the number of features, or $\mathbb{R}^1$. In the former, each feature has its own attention score, while in the latter the attention score is shared across all dimensions. Equations 4-8 show the methods for calculating the attention score $a_i$ of the audio and text features. Let $W_i^{m \times n}$ be $m \times n$ weights trained alongside the model, $x_i \in \mathbb{R}^n$ be either the audio or the text features, and $f$ the ReLU function.

$Att_s^1: a_i = f(W_1^{1 \times n} x_i)$   (4)

$Att_d^1: a_i = W_2^{1 \times n} f(W_1^{n \times n} x_i)$   (5)

$Att_s^n: a_i = f(W_1^{n \times n} x_i)$   (6)

$Att_d^n: a_i = W_2^{n \times n} f(W_1^{n \times n} x_i)$   (7)
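The fusion operators of Equations 2-3 and the attention scores of Equations 4-7 can be expressed compactly in PyTorch as sketched below. This assumes the text and audio features have already been extracted, and it only computes the raw scores $a_i$; how the scores are normalized and applied to the features is not covered by the excerpt above and is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_concat(x_text, x_audio):
    """Equation 2: trivial concatenation of text and audio features."""
    return torch.cat([x_text, x_audio], dim=-1)

def fuse_add(x_text, x_audio):
    """Equation 3: element-wise addition (assumes equal dimensionality)."""
    return x_text + x_audio

class AttentionScore(nn.Module):
    """Attention scores of Equations 4-7 for one modality (text or audio).
    shared=True projects onto IR^1 (one score shared across all dimensions),
    shared=False onto IR^n (one score per feature); deep=True adds the second
    weight matrix of the two-layer variants."""

    def __init__(self, n, shared=False, deep=False):
        super().__init__()
        out = 1 if shared else n
        self.w1 = nn.Linear(n, n if deep else out, bias=False)
        self.w2 = nn.Linear(n, out, bias=False) if deep else None

    def forward(self, x_i):            # x_i: text or audio features, shape (batch, n)
        a_i = F.relu(self.w1(x_i))     # f(W1 x_i)
        if self.w2 is not None:
            a_i = self.w2(a_i)         # W2 f(W1 x_i)
        return a_i                     # raw attention score a_i
```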
