Anomaly Detection in Raw Audio Using Deep Autoregressive Networks
16 pages, PDF, 1,020 KB
Recommended publications
A Review on Outlier/Anomaly Detection in Time Series Data
Ane Blázquez-García and Angel Conde (Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA), Spain), Usue Mori (Intelligent Systems Group (ISG), Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Spain), and Jose A. Lozano (ISG, UPV/EHU, and Basque Center for Applied Mathematics (BCAM), Spain).

Abstract: Recent advances in technology have brought major breakthroughs in data collection, enabling a large amount of data to be gathered over time and thus generating time series. Mining this data has become an important task for researchers and practitioners in the past few years, including the detection of outliers or anomalies that may represent errors or events of interest. This review aims to provide a structured and comprehensive state of the art on outlier detection techniques in the context of time series. To this end, a taxonomy is presented based on the main aspects that characterize an outlier detection technique.

Keywords: outlier detection, anomaly detection, time series, data mining, taxonomy, software.

From the introduction: Recent advances in technology allow us to collect a large amount of data over time in diverse research areas. Observations that have been recorded in an orderly fashion and which are correlated in time constitute a time series. Time series data mining aims to extract all meaningful knowledge from this data, and several mining tasks (e.g., classification, clustering, forecasting, and outlier detection) have been considered in the literature [Esling and Agon 2012; Fu 2011; Ratanamahatana et al. ...]
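The review surveys a taxonomy of techniques rather than any single method, but as a concrete point of reference, the sketch below shows one of the simplest point-outlier detectors for a univariate time series: flag observations whose deviation from a trailing moving average exceeds a rolling z-score threshold. This is an illustrative example only; the window size and threshold are arbitrary choices, not values taken from the paper.

```python
import numpy as np

def rolling_zscore_outliers(x, window=50, threshold=3.0):
    """Flag points whose deviation from a trailing moving average
    exceeds `threshold` rolling standard deviations."""
    x = np.asarray(x, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        history = x[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(x[t] - mu) > threshold * sigma:
            flags[t] = True
    return flags

# Example: white noise with two injected spikes.
rng = np.random.default_rng(0)
series = 0.1 * rng.normal(size=1000)
series[400] += 1.0
series[750] -= 1.0
# Flagged indices: the injected spikes, plus possibly a few chance exceedances.
print(np.flatnonzero(rolling_zscore_outliers(series)))
```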
StrawNet: Self-Training WaveNet for TTS in Low-Data Regimes
Manish Sharma, Tom Kenter, Rob Clark (Google UK).

Abstract: Recently, WaveNet has become a popular choice of neural network to synthesize speech audio. Autoregressive WaveNet is capable of producing high-fidelity audio, but is too slow for real-time synthesis. As a remedy, Parallel WaveNet was proposed, which can produce audio faster than real time through distillation of an autoregressive teacher into a feedforward student network. A shortcoming of this approach, however, is that a large amount of recorded speech data is required to produce high-quality student models, and this data is not always available. In this paper, we propose StrawNet: a self-training approach to train a Parallel WaveNet. Self-training is performed using the synthetic examples generated by the autoregressive WaveNet teacher.

From the introduction: [...] the quality degrades when the number of recordings is further decreased. To reduce the voice artefacts observed in WaveNet student models trained under a low-data regime, we aim to leverage both the high-fidelity audio produced by an autoregressive WaveNet and the faster-than-real-time synthesis capability of a Parallel WaveNet. We propose a training paradigm, called StrawNet, which stands for "Self-Training WaveNet". The key contribution lies in using high-fidelity speech samples produced by an autoregressive WaveNet to self-train first a new autoregressive WaveNet and then a Parallel WaveNet model. We refer to models distilled this way as StrawNet student models. We evaluate StrawNet by comparing it to a baseline [...]
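A minimal sketch of the self-training schedule the excerpt describes: an autoregressive teacher trained on limited real recordings synthesizes extra speech, and that synthetic corpus is used to train the next models. The training, synthesis, and distillation routines are passed in as callables because they are placeholders for this sketch, not functions from the authors' code.

```python
def strawnet_pipeline(real_corpus, train_ar_wavenet, synthesize, distill_parallel,
                      synth_hours=100):
    """Illustrative StrawNet-style self-training loop (structure only).

    Assumed caller-supplied helpers (not from the paper's released code):
      - train_ar_wavenet(corpus) -> autoregressive WaveNet model
      - synthesize(model, hours) -> list of (text, audio) training pairs
      - distill_parallel(teacher, corpus) -> fast feedforward Parallel WaveNet
    """
    # 1. Train an autoregressive teacher on the small real dataset.
    teacher = train_ar_wavenet(real_corpus)

    # 2. Use the teacher to generate a much larger synthetic corpus.
    synthetic_corpus = synthesize(teacher, hours=synth_hours)

    # 3. Self-train a new autoregressive WaveNet on real + synthetic data.
    self_trained_teacher = train_ar_wavenet(real_corpus + synthetic_corpus)

    # 4. Distill the self-trained teacher into a Parallel WaveNet student
    #    that can synthesize faster than real time.
    student = distill_parallel(self_trained_teacher, real_corpus + synthetic_corpus)
    return student
```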
Image Anomaly Detection Using Normal Data Only by Latent Space Resampling
Applied Sciences (article). Lu Wang, Dongkai Zhang, Jiahao Guo and Yuexing Han (School of Computer Engineering and Science, Shanghai University, Shanghai, China; Han also with the Shanghai Institute for Advanced Communication and Data Science, Shanghai University). Received 29 September 2020; accepted 30 November 2020; published 3 December 2020.

Abstract: Detecting image anomalies automatically in industrial scenarios can improve economic efficiency, but the scarcity of anomalous samples increases the challenge of the task. Recently, the autoencoder has been widely used in image anomaly detection without using anomalous images during training. However, it is hard to determine the proper dimensionality of the latent space, and this often leads to unwanted reconstructions of the anomalous parts. To solve this problem, we propose a novel method based on the autoencoder. In this method, the latent space of the autoencoder is estimated using a discrete probability model. With the estimated probability model, the anomalous components in the latent space can be well excluded and undesirable reconstruction of the anomalous parts can be avoided. Specifically, we first adopt VQ-VAE as the reconstruction model to obtain a discrete latent space of normal samples. Then, PixelSNAIL, a deep autoregressive model, is used to estimate the probability model of the discrete latent space. In the detection stage, the autoregressive model determines the parts that deviate from the normal distribution in the input latent space.
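The detection logic described in the abstract, scoring discrete latent codes by their likelihood under an autoregressive prior, can be summarized in a few lines. The sketch below assumes a trained VQ-VAE encoder and a trained autoregressive prior are already available as black boxes; `code_logprob` and the threshold are illustrative placeholders, not the paper's released implementation.

```python
import numpy as np

def score_latent_anomalies(code_grid, code_logprob, threshold=-6.0):
    """Flag improbable latent positions under an autoregressive prior.

    `code_grid` is the grid of discrete codes from a VQ-VAE encoder and
    `code_logprob(code_grid)` returns the per-position log-probability assigned
    by the prior (e.g. a PixelSNAIL-style model). `threshold` is an arbitrary
    log-probability cut-off chosen for illustration only.
    """
    logp = np.asarray(code_logprob(code_grid))     # same spatial shape as the code grid
    anomalous_mask = logp < threshold              # low-likelihood latent positions
    score = float(-logp[anomalous_mask].sum()) if anomalous_mask.any() else 0.0
    return anomalous_mask, score
```

The abstract suggests that positions flagged this way are then excluded (resampled from the prior, per the paper's title) before decoding, so the decoder reconstructs a normal-looking image and anomalies show up in the reconstruction difference.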
Unsupervised Speech Representation Learning Using WaveNet Autoencoders
Jan Chorowski, Ron J. Weiss, Samy Bengio, Aäron van den Oord.

Abstract: We consider the task of unsupervised extraction of meaningful latent representations of speech by applying autoencoding neural networks to speech waveforms. The goal is to learn a representation able to capture high-level semantic content from the signal, e.g. phoneme identities, while being invariant to confounding low-level details in the signal such as the underlying pitch contour or background noise. Since the learned representation is tuned to contain only phonetic content, we resort to using a high-capacity WaveNet decoder to infer information discarded by the encoder from previous samples. Moreover, the behavior of autoencoder models depends on the kind of constraint that is applied to the latent representation. We compare three variants: a simple dimensionality-reduction bottleneck, a Gaussian Variational Autoencoder (VAE), and a discrete Vector Quantized VAE (VQ-VAE). We analyze the quality of learned representations in terms of speaker independence, the ability to predict phonetic content, and the ability to accurately reconstruct [...]

From the introduction: [...] speaker gender and identity from phonetic content, properties which are consistent with internal representations learned by speech recognizers [13], [14]. Such representations are desired in several tasks, such as low-resource automatic speech recognition (ASR), where only a small amount of labeled training data is available. In such a scenario, limited amounts of data may be sufficient to learn an acoustic model on the representation discovered without supervision, but insufficient to learn the acoustic model and a data representation in a fully supervised manner [15], [16]. We focus on representations learned with autoencoders applied to raw waveforms and spectrogram features and investigate the quality of learned representations on LibriSpeech [17].
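Of the three bottlenecks compared in the abstract, the VQ-VAE is the one that forces a discrete representation. The snippet below is a minimal sketch of the vector-quantization step itself (nearest-codebook-entry lookup; the straight-through gradient and codebook/commitment losses used in training are omitted). It is a generic illustration, not the authors' implementation.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map encoder outputs z_e (T, D) to their nearest codebook entries (K, D).

    Returns the quantized vectors z_q and the chosen indices. A full VQ-VAE
    copies gradients straight through from z_q to z_e and adds codebook and
    commitment losses; those training details are left out here.
    """
    # Squared Euclidean distance between every frame and every codebook entry.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (T, K)
    indices = dists.argmin(axis=1)                                    # (T,)
    z_q = codebook[indices]                                           # (T, D)
    return z_q, indices

# Example: 100 encoder frames of dimension 64, codebook of 320 entries.
rng = np.random.default_rng(0)
z_e = rng.normal(size=(100, 64))
codebook = rng.normal(size=(320, 64))
z_q, ids = vector_quantize(z_e, codebook)
print(z_q.shape, ids[:10])
```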
Unsupervised Speech Representation Learning Using Wavenet Autoencoders
Slides by Jan Chorowski (University of Wrocław), 06.06.2019, accompanying the paper at https://arxiv.org/abs/1901.08810.

A deep model is a hierarchy of concepts (cat, dog, ..., moon, banana), cf. M. Zeiler, "Visualizing and Understanding Convolutional Networks". Deep learning history: 2006, stacked RBMs (Hinton and Salakhutdinov, "Reducing the Dimensionality of Data with Neural Networks"); 2012, AlexNet reaches state of the art on ImageNet with fully supervised training.

Deep learning recipe: (1) get a massive labeled dataset D = {(x, y)} (computer vision: ImageNet, 1M images; machine translation: Europarl data and CommonCrawl, several million sentence pairs; speech recognition: 1,000 h LibriSpeech, 12,000 h Google Voice Search; question answering: SQuAD, 150k questions with human answers); (2) train the model to maximize log p(y|x).

Value of labeled data: labeled data is crucial for deep learning, but labels carry little information. Example: an ImageNet model has 30M weights, while ImageNet is about 1M images from 1,000 classes; the labels amount to roughly 1M x 10 bits = 10 Mbits, whereas the raw data (128 x 128 images) is about 500 Gbits.

Value of unlabeled data: "The brain has about 10^14 synapses and we only live for about 10^9 seconds. So we have a lot more parameters than data. This motivates the idea that we must do a lot of unsupervised learning since the perceptual input (including proprioception) is the only place we can get 10^5 dimensions of constraint per second." (Geoff Hinton, https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/)

Unsupervised learning recipe: (1) get a massive unlabeled dataset D = {x}; easy, since unlabeled data is nearly free; (2) train the model to... what? What is the task? What is the loss function? Unsupervised learning by modeling the data distribution: train the model to minimize -log p(x). E.g. [...]
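The labels-versus-raw-data comparison on the slides is simple arithmetic; the small calculation below reproduces it, assuming 8-bit RGB pixels (an assumption of this sketch, not something stated on the slides).

```python
import math

num_images = 1_000_000
num_classes = 1_000

# Information carried by the labels: log2(1000) is roughly 10 bits per image.
label_bits = num_images * math.log2(num_classes)
print(f"labels: {label_bits / 1e6:.0f} Mbits")      # ~10 Mbits

# Information in the raw pixels at 128x128 resolution, 3 channels, 8 bits each.
pixel_bits = num_images * 128 * 128 * 3 * 8
print(f"raw images: {pixel_bits / 1e9:.0f} Gbits")  # ~393 Gbits, i.e. roughly the
                                                    # "ca 500 Gbits" quoted on the slide
```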
Real-Time Black-Box Modelling with Recurrent Neural Networks
Proceedings of the 22nd International Conference on Digital Audio Effects (DAFx-19), Birmingham, UK, September 2-6, 2019. Alec Wright, Eero-Pekka Damskägg, and Vesa Välimäki (Acoustics Lab, Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland).

Abstract: This paper proposes to use a recurrent neural network for black-box modelling of nonlinear audio systems, such as tube amplifiers and distortion pedals. As a recurrent unit structure, we test both Long Short-Term Memory and a Gated Recurrent Unit. We compare the proposed neural network with a WaveNet-style deep neural network, which has been suggested previously for tube amplifier modelling. The neural networks are trained with several minutes of guitar and bass recordings, which have been passed through the devices to be modelled. A real-time audio plugin implementing the proposed networks has been developed in the JUCE framework [...]

From the introduction: In [14] it was shown that the WaveNet model of several distortion effects was capable of running in real time. The resulting deep neural network model, however, was still fairly computationally expensive to run. In this paper, we propose an alternative black-box model based on an RNN. We demonstrate that the trained RNN model is capable of achieving the accuracy of the WaveNet model, whilst requiring considerably less processing power to run. The proposed neural network, which consists of a single recurrent layer and a fully connected layer, is suitable for real-time emulation of tube amplifiers and distortion pedals.
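The architecture described above, a single recurrent layer followed by a fully connected layer operating on the sample stream, is small enough to sketch directly. The PyTorch module below is an illustrative reconstruction from that description (layer sizes and the absence of any residual path are arbitrary choices here), not the authors' released code.

```python
import torch
import torch.nn as nn

class RNNDistortionModel(nn.Module):
    """One recurrent layer (LSTM or GRU) plus a fully connected output layer,
    mapping a raw input audio stream to the modelled device's output."""

    def __init__(self, hidden_size=32, cell="LSTM"):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "LSTM" else nn.GRU
        self.rnn = rnn_cls(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x, state=None):
        # x: (batch, num_samples, 1) raw audio; `state` is carried over between
        # buffers so the model can run on a continuous real-time stream.
        h, state = self.rnn(x, state)
        return self.fc(h), state

# Example: process one second of 44.1 kHz audio in a single buffer.
model = RNNDistortionModel(hidden_size=32, cell="GRU")
dry = torch.randn(1, 44100, 1)
wet, _ = model(dry)
print(wet.shape)  # torch.Size([1, 44100, 1])
```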
Unsupervised Network Anomaly Detection
Johan Mazel. Unsupervised network anomaly detection. PhD thesis, Networking and Internet Architecture, INSA de Toulouse, 2011. HAL Id: tel-00667654, https://tel.archives-ouvertes.fr/tel-00667654, submitted on 8 February 2012.

Thesis defended on Monday 19 December 2011 for the Doctorat de l'Université de Toulouse, delivered by the Institut National des Sciences Appliquées de Toulouse (INSA de Toulouse), doctoral school MITT (STIC domain: networks, telecoms, systems and architecture), research unit LAAS-CNRS. Thesis supervisors: Philippe Owezarski and Yann Labit. Reviewers: Olivier Festor, Guy Leduc, Kensuke Fukuda. Other jury members: Christophe Chassot, Sandrine Vaton.

Acknowledgements (excerpt): I would like to first thank my PhD advisors for their help, support and for letting me lead my research toward my own direction. Their inputs and comments along the stages of this thesis have been highly valuable. I want to especially thank them for having been able to dive into my work without drowning, and then provide me useful remarks.
Linear Prediction-Based Wavenet Speech Synthesis
LP-WaveNet: Linear Prediction-based WaveNet Speech Synthesis. Min-Jae Hwang (Search Solution, Seongnam, South Korea), Frank Soong and Xi Wang (Microsoft, Beijing, China), Eunwoo Song (Naver Corporation, Seongnam, South Korea), Hyeonjoo Kang and Hong-Goo Kang (Yonsei University, Seoul, South Korea).

Abstract: We propose a linear prediction (LP)-based waveform generation method within the WaveNet vocoding framework. A WaveNet-based neural vocoder has significantly improved the quality of parametric text-to-speech (TTS) systems. However, it is challenging to effectively train the neural vocoder when the target database contains a massive amount of acoustical information such as prosody, style or expressiveness. As a solution, approaches that generate only the vocal source component with a neural vocoder have been proposed. However, they tend to generate synthetic noise because the vocal source component is handled independently, without considering the entire speech production process, so a mismatch between the vocal source and the vocal tract filter is inevitable.

From the introduction: [...] than the speech signal, the training and generation processes become more efficient. However, the synthesized speech is likely to be unnatural when the prediction errors in estimating the excitation are propagated through the LP synthesis process. As the effect of LP synthesis is not considered in the training process, the synthesis output is vulnerable to variation of the LP synthesis filter. To alleviate this problem, we propose LP-WaveNet, which enables joint training of the complicated interactions [...]
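For context on the LP synthesis step the excerpt refers to: in linear prediction, speech is modelled as an excitation signal passed through an all-pole vocal-tract filter, x[n] = sum_{k=1}^{p} a_k x[n-k] + e[n]. The snippet below is a generic illustration of that synthesis filter using SciPy; it is background for the excerpt, not code from the paper.

```python
import numpy as np
from scipy.signal import lfilter

def lp_synthesize(excitation, lp_coeffs):
    """All-pole LP synthesis: x[n] = sum_k a_k * x[n-k] + e[n].

    `lp_coeffs` holds a_1..a_p; the filter denominator is [1, -a_1, ..., -a_p].
    """
    a = np.concatenate(([1.0], -np.asarray(lp_coeffs, dtype=float)))
    return lfilter([1.0], a, excitation)

# Example: white-noise excitation through a stable 2nd-order resonant filter.
rng = np.random.default_rng(0)
e = rng.normal(size=16000)
x = lp_synthesize(e, lp_coeffs=[1.3, -0.8])   # complex pole pair inside the unit circle
print(x[:5])
```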
Machine Learning and Extremes for Anomaly Detection
Doctoral school ED130 "Informatique, télécommunications et électronique de Paris". Thesis for the degree of doctor awarded by Télécom ParisTech, specialty "Signal and Images", presented and publicly defended by Nicolas Goix on 28 November 2016. LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013 Paris, France. Jury: Gérard Biau (Professor, Université Pierre et Marie Curie, examiner), Stéphane Boucheron (Professor, Université Paris Diderot, reviewer), Stéphan Clémençon (Professor, Télécom ParisTech, advisor), Stéphane Girard (Research Director, Inria Grenoble Rhône-Alpes, reviewer), Alexandre Gramfort (Associate Professor, Télécom ParisTech, examiner), Anne Sabourin (Associate Professor, Télécom ParisTech, co-advisor), Jean-Philippe Vert (Research Director, Mines ParisTech, examiner).

List of contributions. Journal: "Sparse Representation of Multivariate Extremes with Applications to Anomaly Detection" (under review for the Journal of Multivariate Analysis), by Goix, Sabourin, and Clémençon. Conferences: "On Anomaly Ranking and Excess-Mass Curves" (AISTATS 2015), by Goix, Sabourin, and Clémençon; "Learning the dependence structure of rare events: a non-asymptotic study" (COLT 2015), by Goix, Sabourin, and Clémençon; "Sparse Representation of Multivariate Extremes with Applications to Anomaly Ranking" (AISTATS 2016), by Goix, Sabourin, and Clémençon; "How to Evaluate the Quality of Unsupervised Anomaly Detection Algorithms?" (to be submitted), by Goix and Thomas; "One-Class Splitting Criteria for Random Forests with Application to Anomaly Detection" (to be submitted), by Goix, Brault, Drougard and Chiapino. Workshops: "Sparse Representation of Multivariate Extremes with Applications to Anomaly Ranking" (NIPS 2015 Workshop on Nonparametric Methods for Large Scale Representation Learning), by Goix, Sabourin, and Clémençon.
Machine Learning for Anomaly Detection and Categorization in Multi-cloud Environments
Tara Salman and Raj Jain (Washington University in St. Louis, USA), Deval Bhamare, Aiman Erbad and Mohammed Samaka (Qatar University, Doha, Qatar).

Abstract: Cloud computing has been widely adopted by application service providers (ASPs) and enterprises to reduce both capital expenditures (CAPEX) and operational expenditures (OPEX). Applications and services previously running on private data centers are now being migrated to private or public clouds. Since most ASPs and enterprises have globally distributed user bases, their services need to be distributed across multiple clouds, spread across the globe, which can achieve better performance in terms of latency, scalability and load balancing. The shift has eventually led the research community to study multi-cloud environments. However, the widespread acceptance of such environments has been hampered by major security concerns.

From the introduction: [...] is getting popular within organizations, clients, and service providers [2]. Despite this fact, data security is a major concern for the end-users in multi-cloud environments. Protecting such environments against attacks and intrusions is a major concern in both research and industry [3]. Firewalls and other rule-based security approaches have been used extensively to provide protection against attacks in data centers and contemporary networks. However, large distributed multi-cloud environments would require a significantly large number of complicated rules to be configured, which could be costly, time-consuming and error-prone [...]
CSE601 Anomaly Detection
Lecture slides by Jing Gao, SUNY Buffalo.

Anomaly detection: anomalies are objects that are considerably dissimilar from the remainder of the data; they occur relatively infrequently, but when they do occur, their consequences can be quite dramatic, and quite often in a negative sense ("mining needle in a haystack: so much hay and so little time").

Definition: an anomaly is a pattern in the data that does not conform to the expected behavior; also referred to as outliers, exceptions, peculiarities, surprises, etc. Anomalies translate to significant (often critical) real-life entities such as cyber intrusions and credit card fraud. Real-world anomalies: credit card fraud (an abnormally high purchase made on a credit card); cyber intrusions (a computer virus spreading over the Internet).

Simple example (2-D figure): N1 and N2 are regions of normal behavior; points o1 and o2, and the points in region O3, are anomalies.

Related problems: rare class mining, chance discovery, novelty detection, exception mining, noise removal.

Key challenges: defining a representative normal region is challenging; the boundary between normal and outlying behavior is often not precise; the exact notion of an outlier differs across application domains; limited availability of labeled data for training/validation; malicious adversaries; data might contain noise; normal behaviour keeps evolving.

Aspects of the anomaly detection problem: nature of input data; availability of supervision; type of anomaly: point, contextual, structural [...]
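To make the 2-D example concrete, the sketch below scores points by their mean distance to their k nearest neighbours, a simple unsupervised criterion under which isolated points (like o1 and o2 in the figure) receive the highest scores. This is a generic illustration, not code from the course.

```python
import numpy as np

def knn_outlier_scores(X, k=5):
    """Score each point by its mean Euclidean distance to its k nearest neighbours.
    Higher scores indicate more anomalous points."""
    X = np.asarray(X, dtype=float)
    # Full pairwise distance matrix (fine for small, illustrative datasets).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # ignore each point's distance to itself
    knn = np.sort(d, axis=1)[:, :k]        # k smallest distances per point
    return knn.mean(axis=1)

# Two dense normal clusters (like N1, N2) plus two isolated anomalies (like o1, o2).
rng = np.random.default_rng(0)
N1 = rng.normal(loc=[0, 0], scale=0.3, size=(100, 2))
N2 = rng.normal(loc=[5, 5], scale=0.3, size=(100, 2))
outliers = np.array([[2.5, 8.0], [8.0, 0.0]])
X = np.vstack([N1, N2, outliers])

scores = knn_outlier_scores(X, k=5)
print(np.argsort(scores)[-2:])   # indices of the two highest-scoring points: the outliers
```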
Anomaly Detection in Data Mining: a Review Jagruti D
International Journal of Advanced Research in Computer Science and Software Engineering, Volume 7, Issue 4, April 2017, ISSN 2277-128X, available online at www.ijarcsse.com. Jagruti D. Parmar (M.E. IT, SVMIT, Bharuch, Gujarat, India) and Prof. Jalpa T. Patel (CSE & IT, SVMIT, Bharuch, Gujarat, India).

Abstract: Anomaly detection is a current research topic and a key domain for the future of data mining. The term "data mining" refers to methods and algorithms for extracting and analyzing data so as to find rules and patterns that describe its characteristic properties. Data mining techniques can be applied to any type of data to learn more about hidden structures and connections. In the present world, vast amounts of data are stored and transported from one location to another, and this data is exposed to attack while stored or in transit. Although many techniques and applications are available to secure data, weaknesses remain. Consequently, data mining techniques have emerged to analyze data and to determine different types of attack. Anomaly detection uses data mining techniques to detect surprising or unexpected behaviour hidden within the data, increasing the chance that intrusions or attacks are identified. This paper focuses on anomaly detection in data mining; the main goal is to detect anomalies in time series data using machine learning techniques.

Keywords: anomaly detection; data mining; time series data; machine learning techniques.