Imputation and Generation of Multidimensional Market Data


IMPUTATION AND GENERATION OF MULTIDIMENSIONAL MARKET DATA

Master Thesis, 30 credits
Tobias Wall† ([email protected]) and Jacob Titus† ([email protected])
Department of Mathematics and Mathematical Statistics
Spring Term 2021

Copyright © by Tobias Wall and Jacob Titus, 2021. All rights reserved.

Supervisors: Jonas Nylén (Nasdaq Inc.) and Armin Eftekhari (Umeå University)
Examiner: Jianfeng Wang (Umeå University)

Master of Science Thesis in Industrial Engineering and Management, 30 ECTS
Department of Mathematics and Mathematical Statistics
Umeå University, SE-901 87 Umeå, Sweden

†Equal contribution. The order of the contributors' names was chosen by a bootstrapping procedure in which the names were drawn 100 times.

Abstract

Market risk is one of the most prevalent risks to which financial institutions are exposed. The most popular approach to quantifying market risk is Value at Risk. Organisations and regulators often require a long historical horizon of the relevant financial variables to estimate risk exposures. A long horizon strains the completeness of the available data, which is something risk applications need to handle.

The goal of this thesis is to evaluate and propose methods for imputing financial time series. The performance of the methods is measured with respect to both price and risk-metric replication. Two use cases are evaluated: values missing at random throughout a time series, and consecutive missing values at the end-point of a time series. Five models are applied to each use case.

For the first use case, the results show that all models perform better than the naive approach; the Lasso model lowered the price-replication error by 35% compared to the naive model. The results for the second use case are ambiguous, but all models performed better than the naive model with respect to risk-metric replication. In general, all models systematically underestimated the downstream risk metrics, implying that they failed to replicate the fat-tailed property of the price movements.

Keywords: Time Series Imputation, Financial Time Series, Machine Learning, Deep Learning, Value at Risk, Expected Shortfall

Sammanfattning (Swedish abstract)

Market risk is one of the most significant risks to which financial institutions are exposed. The most popular way to quantify market risk is through Value at Risk. Organisations and regulators often require a long historical horizon for the market variables involved in these calculations. A long horizon increases the risk of incompleteness in the available data, which is something risk applications need to handle.

The goal of this thesis is to evaluate and propose methods for imputing financial time series. The performance of the methods is measured with respect to both price and risk replication. Two scenarios are evaluated: values missing at random in a time series, and consecutive missing values at the end of a time series. In total, five models were applied to each scenario. In the first scenario, the results show that all models perform better than the naive approach; the Lasso model lowered the price-replication error by 35% compared to the naive model. The results from the second scenario are ambiguous.
Still, we can conclude that all models performed better than the naive model with respect to risk-metric replication. In general, all models systematically underestimated the risk metrics, which suggests that they failed to replicate the fat-tailed property of the distribution of price movements.

Keywords: Time Series Imputation, Financial Time Series, Machine Learning, Deep Learning, Value at Risk, Expected Shortfall

Acknowledgement

We would like to extend our gratitude to Jonas Nylén, Anders Stäring, Markus Nyberg, and Oskar Janson at Nasdaq Inc., who have given us the opportunity to do this thesis work and have provided supervision and support throughout the entire project.

We would also like to thank our supervisor at the Department of Mathematics and Mathematical Statistics, Assistant Professor Armin Eftekhari, for guidance and valuable advice during the project.

Finally, we would like to thank our families and friends for their support and words of encouragement throughout our time at Umeå University, which comes to an end with the completion of this thesis.

Tobias Wall and Jacob Titus
Umeå, May 26, 2021

Contents

1 Introduction
  1.1 Problem Definition
  1.2 Dataset
2 Background
  2.1 Market Risk
    2.1.1 Value at Risk
    2.1.2 Expected Shortfall
  2.2 Financial Variables
    2.2.1 Futures
    2.2.2 Discount Rates
    2.2.3 Foreign Exchange Rates
    2.2.4 Options
    2.2.5 Volatility
  2.3 Financial Time Series
    2.3.1 Stylised Facts
  2.4 Related Work
    2.4.1 Autoregressive Models
    2.4.2 State-Space Models
    2.4.3 Expectation Maximisation
    2.4.4 Key Points
3 Theory
  3.1 Nearest Neighbour Imputation
  3.2 Linear Interpolation
  3.3 Lasso
  3.4 Random Forest
  3.5 Bayesian Inference
    3.5.1 Bayes' Rule
    3.5.2 Multivariate Normal Distribution
    3.5.3 Conditional Distribution
    3.5.4 Bayesian Linear Regression
    3.5.5 Feature Space Projection
    3.5.6 The Kernel Trick
  3.6 Gaussian Processes
    3.6.1 Choice of Covariance Function
    3.6.2 Optimising the Hyperparameters
  3.7 Artificial Neural Networks
    3.7.1 Multilayer Perceptron
    3.7.2 Training Neural Networks
  3.8 Recurrent Neural Networks
    3.8.1 Long-Short Term Memory
  3.9 Convolutional Neural Networks
  3.10 WaveNet
  3.11 Batch Normalisation
4 Method
  4.1 Notation
  4.2 Problem Framing
    4.2.1 Use Case One
    4.2.2 Use Case Two
  4.3 Dataset
  4.4 Data Preparation
    4.4.1 Handling of Missing Values
    4.4.2 Converting to Prices
    4.4.3 Training and Test Split
    4.4.4 Sliding Windows and Forward Validation
  4.5 Data Post-Processing
  4.6 Experiment Design
  4.7 Evaluation
    4.7.1 Mean Absolute Scaled Error
    4.7.2 Relative Deviation of VaR
    4.7.3 Relative Deviation of ES
  4.8 Models
    4.8.1 Nearest Neighbour Imputation
    4.8.2 Linear Interpolation
    4.8.3 Lasso
    4.8.4 Random Forest
    4.8.5 Gaussian Process
    4.8.6 Multilayer Perceptron
    4.8.7 WaveNet
    4.8.8 SeriesNet
5 Results
  5.1 Use Case One
  5.2 Use Case Two
6 Discussion and Reflection
  6.1 Risk Underestimation
  6.2 Time Component
  6.3 Fallback Logic
  6.4 Error Measures
  6.5 Complexity
  6.6 Use Case Framing
  6.7 Excluded Models
  6.8 Improvements and Extensions
7 Conclusion
Appendices
  Appendix A Removed Holidays
  Appendix B Dataset
  Appendix C Stylised Facts
  Appendix D Example of a WaveNet-architecture
  Appendix E Explanatory Data Analysis
  Appendix F Asset Class Results Use Case One
  Appendix G Asset Class Results Use Case Two
  Appendix H Example of Imputation

Chapter 1

Introduction

Market risk is one of the most prevalent risks to which financial institutions are subjected. It is the potential loss that investments incur through uncertainties in market variables [24]. Risk management is about identifying, quantifying, and analysing these risks in order to decide whether market risk exposures should be avoided, accepted, or hedged. The most common approach to quantifying market risk is to look at how the relevant market variables, e.g. prices, have moved historically, and to use that knowledge to estimate how large losses could be in the future.

Value at Risk, henceforth VaR, is one of the most widely used market risk metrics. There are several ways to calculate VaR, but we will focus on a non-parametric approach using historical simulation from observed market data. VaR aims to make the following statement about an investment: "We are X percent certain that we will not lose more than V dollars in time T."

Suppose we would like to calculate the 1-day 99% VaR of a USD 1 000 000 investment in the American stock index S&P500, using seven years of historical prices from 2014 to the end of 2020. We start by computing the daily price returns over the period, find the return at the 1st percentile, and multiply that return by the current value of the investment. This yields a 1-day 99% VaR of USD 32 677; a short code sketch of this procedure is given below.

But what if the price series were incomplete over the period, with price data missing on several days? Assume our dataset lacked the desired long-term data and only five years of history were available, i.e. from the beginning of 2016, as illustrated by the dashed line in Figure 1.1. Calculating the 1-day 99% VaR from 2016 onwards results in a value of USD 35 675, which is 9.17% higher than for the complete dataset. This discrepancy is intuitive when analysing the price and logarithmic return processes presented in Figure 1.1. The period from February 19th to March 23rd of 2020 was turbulent in many ways, but mainly it was the start of the COVID-19 pandemic. The financial markets fell, with the S&P500 dropping 34% and the Swedish stock index OMX30 dropping 31%, leaving no market unaffected. The period contains two "Black Mondays", the 9th and 16th of March, when markets fell 8% and 13% respectively, and one "Black Thursday" on the 12th of March, when markets fell 10%.
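To make the historical-simulation procedure concrete, the following is a minimal Python sketch of the 1-day 99% VaR calculation described above. It is an illustration under stated assumptions rather than the thesis's actual code: the function name historical_var, the synthetic price series, and the exact window lengths are assumptions, and the printed figures will not match the USD 32 677 and USD 35 675 reported for the real S&P500 data.

```python
import numpy as np
import pandas as pd


def historical_var(prices: pd.Series, value: float, level: float = 0.99) -> float:
    """1-day historical-simulation VaR of a position worth `value`.

    The VaR is the daily return at the (1 - level) percentile of the
    empirical return distribution, scaled by the position value and
    reported as a positive loss.
    """
    returns = prices.pct_change().dropna()            # daily simple returns
    worst_return = np.percentile(returns, 100 * (1 - level))
    return -worst_return * value


# Illustrative usage on a synthetic series standing in for seven years
# (~1750 trading days) of daily S&P500 closing prices:
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 1750))))

full = historical_var(prices, value=1_000_000)                 # seven-year window
short = historical_var(prices.iloc[-1250:], value=1_000_000)   # five-year window
print(f"1-day 99% VaR, full history:      USD {full:,.0f}")
print(f"1-day 99% VaR, truncated history: USD {short:,.0f}")
print(f"Relative difference:              {100 * (short / full - 1):+.2f}%")
```

Truncating the window changes the empirical 1st percentile of the returns, and with it the VaR estimate; this is exactly the mechanism behind the 9.17% discrepancy discussed above, where the shorter window retains the turbulent 2020 period but drops the calmer early years.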