Autoencoders and Self-Supervision


COMP 451 – Fundamentals of Machine Learning
Lecture 25 – Autoencoders and self-supervision
William L. Hamilton, McGill University and Mila

* Unless otherwise noted, all material posted for this course is copyright of the instructor and cannot be reused or reposted without the instructor's written permission.

Autoencoders and self-supervision
§ Two approaches to dimensionality reduction using deep learning.
§ This is a rough categorization and not a strict division!
§ Autoencoders:
§ Optimize a "reconstruction loss."
§ The encoder maps the input to a low-dimensional space, and the decoder tries to recover the original data from that low-dimensional space.
§ "Self-supervision":
§ Try to predict some parts of the input from other parts of the input.
§ I.e., make up labels from x.

Autoencoding: the basic idea
Image credit: https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html

Learning an autoencoder function
§ Goal: Learn a compressed representation of the input data.
§ We have two functions (usually neural networks):
§ Encoder: z = g_φ(x)
§ Decoder: x̂ = f_θ(z)
§ This is only interesting when z has much smaller dimension than x!
§ Train using a reconstruction loss:
J(x, x̂) = ‖x − x̂‖² = ‖x − f_θ(g_φ(x))‖²
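To make the setup concrete, here is a minimal PyTorch sketch of an encoder/decoder pair trained with this reconstruction loss (not from the lecture; the layer sizes, the 784-dimensional input, and the random stand-in batch are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Encoder g_phi compresses x to a code z; decoder f_theta maps z back
# to a reconstruction x_hat.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)        # z = g_phi(x)
        return self.decoder(z)     # x_hat = f_theta(z)

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)            # stand-in batch; real data would go here
optimizer.zero_grad()
x_hat = model(x)
loss = ((x - x_hat) ** 2).sum(dim=1).mean()   # J(x, x_hat) = ||x - x_hat||^2
loss.backward()
optimizer.step()
```

Note the bottleneck: `code_dim` is much smaller than `input_dim`, matching the remark above that the setup is only interesting when z is much lower-dimensional than x.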
Recall: Principal Component Analysis (PCA)
§ Idea: Project the data into a lower-dimensional subspace, R^m → R^{m′}, where m′ < m.
§ Consider a linear mapping, x_i → Wᵀx_i.
§ W is the compression matrix, with dimension R^{m×m′}.
§ Assume there is a decompression matrix U ∈ R^{m×m′}.
§ Solve the following problem: argmin_{W,U} Σ_{i=1}^{n} ‖x_i − UWᵀx_i‖²
§ Equivalently: argmin_{W,U} ‖X − XWUᵀ‖²
§ The solution is given by the eigendecomposition of XᵀX.
§ W is the m×m′ matrix whose columns are the first m′ eigenvectors of XᵀX (sorted in descending order by the magnitude of the eigenvalue).
§ Equivalently: W is the m×m′ matrix containing the first m′ right singular vectors of X.
§ Note: The columns of W are orthogonal!

PCA vs. autoencoders
In the case of a linear encoder and decoder,
f_W(x) = Wx,  g_{W′}(h) = W′h,
with a squared-error reconstruction loss, we can show that the minimum-error solution W yields the same subspace as PCA.
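As a sanity check of this recipe, here is a small NumPy sketch (illustrative only: the sizes and the random data are placeholders):

```python
import numpy as np

n, m, m_prime = 100, 10, 3           # illustrative sizes
X = np.random.randn(n, m)            # rows are the data points x_i

# The first m' right singular vectors of X are the top eigenvectors of
# X^T X (np.linalg.svd returns singular values in descending order).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:m_prime].T                   # compression matrix, shape (m, m')

Z = X @ W                            # compressed representation, shape (n, m')
X_hat = Z @ W.T                      # reconstruction: taking U = W is optimal
                                     # because W has orthonormal columns
reconstruction_error = np.sum((X - X_hat) ** 2)
print(reconstruction_error)
```

Training the linear autoencoder above with a squared-error loss should recover the same m′-dimensional subspace (up to a rotation within it), matching the PCA-vs-autoencoders claim.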
More advanced encoders and decoders
§ What to use as encoders and decoders?
§ Most data (e.g., arbitrary real-valued or categorical features): the encoder and decoder are feed-forward neural networks.
§ Sequence data: the encoder and decoder are RNNs.
§ Image data: the encoder is a CNN; the decoder is a deconvolutional network.

Aside: Deconvolutions
§ A "deconvolution" is just a transposed convolution.

Regularization of autoencoders
§ How can we generate sparse autoencoders? (And also, why?)
§ Weight tying of the encoder and decoder weights (W′ = Wᵀ) to explicitly constrain (regularize) the learned function.
§ Directly penalize the output of the hidden units (e.g., with an L1 penalty) to introduce sparsity in the learned representation.
§ Penalize the average output (over a batch of data) to encourage it to approach a fixed target.

Denoising autoencoders
From "Extracting and Composing Robust Features with Denoising Autoencoders" (Vincent et al., 2008):

2.3. The Denoising Autoencoder. To test our hypothesis and enforce robustness to partially destroyed inputs we modify the basic autoencoder we just described. We will now train it to reconstruct a clean "repaired" input from a corrupted, partially destroyed one. This is done by first corrupting the initial input x to get a partially destroyed version x̃ by means of a stochastic mapping x̃ ∼ q_D(x̃ | x). In our experiments, we considered the following corrupting process, parameterized by the desired proportion ν of "destruction": for each input x, a fixed number νd of components are chosen at random, and their value is forced to 0, while the others are left untouched. […] towards reconstructing the uncorrupted version from the corrupted version. Note that in this way, the autoencoder cannot learn the identity, unlike the basic autoencoder, thus removing the constraint that d′ < d or the need to regularize specifically to avoid such a trivial solution.

2.4. Layer-wise Initialization and Fine Tuning. The basic autoencoder has been used as a building block to train deep networks (Bengio et al., 2007), with the representation of the k-th layer used as input for the (k+1)-th, and the (k+1)-th layer trained after the k-th has been trained. After a few layers have been trained […]
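The corruption process q_D described in the excerpt — choose νd components of each input at random and force them to 0 — might be sketched as follows (a paraphrase of the paper's description, not the authors' code; the batch shape and ν = 0.25 are illustrative):

```python
import torch

def corrupt(x: torch.Tensor, nu: float) -> torch.Tensor:
    """Force a fixed number nu*d of randomly chosen components of each
    input to 0; leave the others untouched."""
    batch_size, d = x.shape
    num_destroyed = int(nu * d)
    x_tilde = x.clone()
    for i in range(batch_size):
        idx = torch.randperm(d)[:num_destroyed]  # components to destroy
        x_tilde[i, idx] = 0.0
    return x_tilde

# A denoising autoencoder reconstructs the *clean* x from the corrupted input:
#     loss = ((x - model(corrupt(x, nu=0.25))) ** 2).sum(dim=1).mean()
x = torch.rand(64, 784)
x_tilde = corrupt(x, nu=0.25)
```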
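And the greedy layer-wise scheme of Section 2.4 might look roughly like the following sketch (an illustration, not the paper's implementation; the layer widths, learning rate, 100-step inner loop, and random stand-in data are all assumptions):

```python
import torch
import torch.nn as nn

dims = [784, 256, 64]                      # illustrative layer widths
inputs = torch.randn(128, dims[0])         # stand-in training batch
pretrained_layers = []

for d_in, d_out in zip(dims[:-1], dims[1:]):
    # Train one autoencoder for the k-th layer on the previous layer's codes.
    enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                           lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        x_hat = dec(torch.relu(enc(inputs)))
        loss = ((inputs - x_hat) ** 2).mean()
        loss.backward()
        opt.step()
    pretrained_layers.append(enc)              # keep the encoder as layer k
    inputs = torch.relu(enc(inputs)).detach()  # its codes feed layer k+1

# The stacked encoders would then be fine-tuned end-to-end on a supervised task.
```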