Audio Word2Vec: Unsupervised Learning of Audio Segment Representations using Sequence-to-sequence Autoencoder

INTERSPEECH 2016, September 8–12, 2016, San Francisco, USA

Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, Hung-Yi Lee, Lin-Shan Lee
College of Electrical Engineering and Computer Science, National Taiwan University
{b01902040, b01902038, r04921047, hungyilee}@ntu.edu.tw, [email protected]

Abstract

The vector representations of fixed dimensionality for words (in text) offered by Word2Vec have been shown to be very useful in many application scenarios, in particular due to the semantic information they carry. This paper proposes a parallel version, Audio Word2Vec, which offers vector representations of fixed dimensionality for variable-length audio segments. These vector representations are shown to describe the sequential phonetic structures of the audio segments to a good degree, with very attractive real-world applications such as query-by-example Spoken Term Detection (STD). In this STD application, the proposed approach significantly outperformed the conventional Dynamic Time Warping (DTW) based approaches at significantly lower computation requirements. We propose unsupervised learning of Audio Word2Vec from audio data without human annotation using a Sequence-to-sequence Autoencoder (SA). SA consists of two RNNs equipped with Long Short-Term Memory (LSTM) units: the first RNN (encoder) maps the input audio sequence into a vector representation of fixed dimensionality, and the second RNN (decoder) maps the representation back to the input audio sequence. The two RNNs are jointly trained by minimizing the reconstruction error. A Denoising Sequence-to-sequence Autoencoder (DSA) is further proposed for more robust learning.

1. Introduction

Word2Vec [1, 2, 3], which transforms each word (in text) into a vector of fixed dimensionality, has been shown to be very useful in various applications in natural language processing. In particular, the vector representations obtained in this way usually carry plenty of semantic information about the word. It is therefore interesting to ask: can we transform the audio segment of each word into a vector of fixed dimensionality? If so, what kind of information can these vector representations carry, and in what kinds of applications can this "audio version of Word2Vec" be useful? This paper tries to answer these questions, at least partially.

Representing variable-length audio segments by vectors of fixed dimensionality has been very useful for many speech applications. For example, in speaker identification [4], audio emotion classification [5], and spoken term detection (STD) [6, 7, 8], audio segments are usually represented as feature vectors that are fed to standard classifiers to determine the speaker or emotion label, or whether the segment contains the input query. In query-by-example STD, representing each word segment as a vector enables easier indexing and therefore much more efficient retrieval [9, 10, 11].

But audio segment representation is still an open problem. It is common to use i-vectors to represent utterances in speaker identification [4], and several approaches have been successfully used in STD [10, 6, 7, 8]. But these vector representations may not precisely describe the sequential phonetic structures of the audio segments as we wish to have in this paper, and these approaches were developed primarily in heuristic ways rather than learned from data. Deep learning has also been used for this purpose [12, 13]. By training a Recurrent Neural Network (RNN) with an audio segment as the input and the corresponding word as the target, the outputs of the hidden layer at the last few time steps can be taken as the representation of the input segment [13]. However, this approach is supervised and therefore needs a large amount of labeled training data.

On the other hand, the autoencoder has been a very successful machine learning technique for extracting representations in an unsupervised way [14, 15], but its input should be vectors of fixed dimensionality. This is a severe limitation, because audio segments are intrinsically expressed as sequences of arbitrary length. A general framework was proposed to encode a sequence using a Sequence-to-sequence Autoencoder (SA), in which one RNN encodes the input sequence into a fixed-length representation, and another RNN decodes the input sequence out of that representation. This general framework has been applied in natural language processing [16, 17] and video processing [18], but, to our knowledge, not yet on speech signals.

In this paper, we propose to use the Sequence-to-sequence Autoencoder (SA) to represent variable-length audio segments by vectors of fixed dimensionality. We hope the vector representations obtained in this way can describe more precisely the sequential phonetic structures of the audio signals, so that audio segments that sound alike have vector representations close to each other in the space. This is referred to as Audio Word2Vec in this paper. Different from the previous work [12, 13, 11], learning SA does not need any supervision; only audio segments without human annotation are needed, which is good for low-resource scenarios. Inspired by the denoising autoencoder [19, 20], an improved version, the Denoising Sequence-to-sequence Autoencoder (DSA), is also proposed.

Among the many possible applications, we chose query-by-example STD for this preliminary study, and show that the proposed Audio Word2Vec can be very useful. Query-by-example STD using Audio Word2Vec is much more efficient than the conventional Dynamic Time Warping (DTW) based approaches, because only the similarity between two single vectors is needed, in addition to the significantly better retrieval performance obtained.

2. Proposed Approach

The goal here is to transform an audio segment, represented by a variable-length sequence of acoustic features such as MFCCs, x = (x1, x2, ..., xT), where xt is the acoustic feature at time t and T is the length, into a vector representation of fixed dimensionality z ∈ R^d, where d is the dimensionality of the encoded space. It is desired that this vector representation describes, to some degree, the sequential phonetic structure of the original audio segment. Below we first give a recap of the RNN Encoder-Decoder framework [21, 22] in Section 2.1, followed by the formal presentation of the proposed Sequence-to-sequence Autoencoder (SA) in Section 2.2, and its extension in Section 2.3.

2.1. RNN Encoder-Decoder framework

RNNs are neural networks whose hidden neurons form a directed cycle. Given a sequence x = (x1, x2, ..., xT), an RNN updates its hidden state ht according to the current input xt and the previous state ht−1. The hidden state ht acts as an internal memory at time t that enables the network to capture dynamic temporal information, and also allows the network to process sequences of variable length. In practice, RNNs do not seem to learn long-term dependencies well [23], so LSTM [24] has been widely used to overcome this difficulty. Because many impressive results have been achieved with LSTM, RNNs are now widely equipped with LSTM units [25, 26, 27, 28, 22, 29, 30].

The RNN Encoder-Decoder [22, 21] consists of an Encoder RNN and a Decoder RNN. The Encoder RNN reads the input sequence x = (x1, x2, ..., xT) sequentially, and the hidden state ht of the RNN is updated accordingly. After the last symbol xT is processed, the hidden state hT is interpreted as the learned representation of the whole input sequence. Then, by taking hT as input, the Decoder RNN generates the output sequence y = (y1, y2, ..., yT'), where the lengths T and T' can be different.
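To make the recurrence concrete, below is a minimal NumPy sketch of the update ht = tanh(Wx·xt + Wh·ht−1 + b). It uses a vanilla RNN cell rather than the LSTM units adopted in this paper, and the feature and hidden dimensions, random initialization, and tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

# A vanilla RNN encoder cell: a simplified stand-in for the paper's
# LSTM-equipped Encoder RNN. All sizes and initializations are illustrative.
input_dim, hidden_dim = 39, 100          # e.g., 39-dim MFCCs (assumed)
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

def encode(x_seq):
    """Read a (T, input_dim) sequence and return the final hidden state hT.

    The same weights are reused at every time step, which is what allows
    a single network to process sequences of any length T.
    """
    h = np.zeros(hidden_dim)
    for x_t in x_seq:                    # ht depends on xt and h(t-1)
        h = np.tanh(W_x @ x_t + W_h @ h + b)
    return h                             # hT: a fixed-size summary of x

h_T = encode(rng.normal(size=(12, input_dim)))   # a 12-frame segment
```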
2.2. Sequence-to-sequence Autoencoder (SA)

The proposed Sequence-to-sequence Autoencoder (SA), depicted in Figure 1, consists of an Encoder RNN (the left part of Figure 1) and a Decoder RNN (the right part). Given an audio segment represented as an acoustic feature sequence x = (x1, x2, ..., xT) of any length T, the Encoder RNN reads each acoustic feature xt sequentially, and the hidden state ht is updated accordingly. After the last acoustic feature xT has been read and processed, the hidden state hT of the Encoder RNN is viewed as the learned representation z of the input sequence (the small red block in Figure 1). The Decoder RNN takes hT as its first input and generates its first output y1; it then takes y1 as input to generate y2, and so on. Based on the principles of the autoencoder [14, 15], the target of the output sequence y = (y1, y2, ..., yT) is the input sequence x = (x1, x2, ..., xT). In other words, the RNN Encoder and Decoder are jointly trained by minimizing the reconstruction error, measured by the mean squared error Σ_{t=1}^{T} ||xt − yt||^2. Because the input sequence is taken as the learning target, the training process does not need any labeled data. The fixed-length vector representation z is a meaningful representation for the input audio segment x, because the whole input sequence x can be reconstructed from z by the RNN Decoder.
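As a rough sketch of how the SA just described could be implemented, the following PyTorch code pairs an LSTM encoder with an LSTM decoder trained under the reconstruction loss. This is not the authors' implementation: the dimensions are arbitrary, and the decoder here takes the representation as its initial state with a zero vector as its first input, a common simplification of the scheme above in which hT is fed to the decoder as its first input.

```python
import torch
import torch.nn as nn

# A minimal sketch of the Sequence-to-sequence Autoencoder (SA); layer sizes
# and the zero-vector first decoder input are assumptions, not paper details.
class SeqAutoencoder(nn.Module):
    def __init__(self, feat_dim=39, z_dim=100):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, z_dim, batch_first=True)
        self.decoder = nn.LSTM(feat_dim, z_dim, batch_first=True)
        self.out = nn.Linear(z_dim, feat_dim)    # hidden state -> feature

    def forward(self, x):                        # x: (1, T, feat_dim)
        _, (h_T, c_T) = self.encoder(x)          # h_T plays the role of z
        y_t = torch.zeros_like(x[:, :1, :])      # first decoder input
        state, ys = (h_T, c_T), []
        for _ in range(x.size(1)):               # generate y1 ... yT
            o, state = self.decoder(y_t, state)  # feed y(t-1) back in
            y_t = self.out(o)
            ys.append(y_t)
        return torch.cat(ys, dim=1), h_T.squeeze(0)

model = SeqAutoencoder()
x = torch.randn(1, 20, 39)                       # one 20-frame segment
y, z = model(x)
loss = nn.functional.mse_loss(y, x)              # reconstruction error vs. x
loss.backward()                                  # encoder/decoder trained jointly
```

The last two lines show the unsupervised nature of the training: the loss compares the output sequence against the input itself, so no labels are involved, and the representation z falls out as a by-product.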
2.3. Denoising Sequence-to-sequence Autoencoder (DSA)

To learn a more robust representation, we further apply the denoising criterion [20] to the SA learning proposed above. Random noise is added to the input acoustic feature sequence x, yielding a corrupted version x̃. The input to the SA is x̃, and the SA is expected to generate an output y as close as possible to the original x. The SA learned with this denoising criterion is referred to as the Denoising SA (DSA) below.
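In code, the denoising criterion changes only what the model reads versus what it must reconstruct. Reusing the SeqAutoencoder sketch above, a possible DSA training loss is shown below; additive Gaussian noise and its scale are assumptions, as the paper does not specify the corruption.

```python
import torch

# A sketch of the DSA criterion on top of the (hypothetical) SeqAutoencoder
# from the previous sketch; Gaussian corruption with sigma=0.1 is assumed.
def dsa_loss(model, x, sigma=0.1):
    x_tilde = x + sigma * torch.randn_like(x)   # corrupted version of x
    y, _ = model(x_tilde)                       # reconstruct from corrupted input ...
    return torch.nn.functional.mse_loss(y, x)   # ... but target the clean x

# Once SA or DSA is trained, matching a spoken query against a candidate
# segment reduces to comparing two single vectors instead of running DTW:
#   _, z_q = model(query); _, z_c = model(candidate)
#   score = torch.nn.functional.cosine_similarity(z_q, z_c, dim=-1)
```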
