Bachelor Thesis Project


Electrocardiographic deviation detection - Using long short-term memory recurrent neural networks to detect deviations within electrocardiographic records

Author: Michael Racette Olsén
Supervisor: Anders Haggren
Semester: VT2017
Subject: Computer Engineering

Abstract

Artificial neural networks have been gaining attention in recent years due to their impressive ability to map out complex nonlinear relations within data. In this report, an attempt is made to use a long short-term memory neural network for detecting anomalies within electrocardiographic records. The hypothesis is that if a neural network is trained on records of normal ECGs to predict future ECG sequences, it is expected to have trouble predicting abnormalities not previously seen in the training data. Three different LSTM model configurations were trained using records from the MIT-BIH Arrhythmia database. Afterwards, the models were evaluated for their ability to predict previously unseen normal and anomalous sections. This was done by measuring the mean squared error of each prediction and the uncertainty of overlapping predictions. The preliminary results of this study demonstrate that recurrent neural networks with the use of LSTM units are capable of detecting anomalies.

Keywords: ECG, LSTM, RNN, Neural Network, Deeplearning4j, Time Series

Preface

I would like to thank HIQ and KIWOK for providing me with the opportunity to do research in a field I find very interesting.
Contents

1 Introduction
  1.1 Background
  1.2 Hypothesis
  1.3 Problem formulation
  1.4 Objectives
  1.5 Scope/Limitations
  1.6 Target group
  1.7 Outline
2 Theory
  2.1 Electrocardiography (ECG)
  2.2 Supervised learning with features and labels
  2.3 Artificial Neural Network
    2.3.1 Artificial neuron
    2.3.2 Hidden Layers
    2.3.3 Gradient descent
    2.3.4 Recurrent neural network (RNN)
    2.3.5 Long short-term memory (LSTM)
  2.4 Preparing the data
  2.5 Training the network
    2.5.1 Hyper-parameter optimization
    2.5.2 Generalization and overfitting
3 Method
  3.1 Collecting data for training
  3.2 Evaluation
  3.3 Reliability and Validity
4 Implementation
  4.1 Preparing the data
  4.2 Structuring the neural network
  4.3 Preventing overfitting by using early stopping
  4.4 Evaluation
5 Results
  5.1 Sequence to One
  5.2 Sequence to Vector
  5.3 Sequence to Sequence
6 Analysis
7 Discussion
8 Conclusion
References

1 Introduction

The eHealth company Kiwok AB has developed a system called BodyKom for remotely monitoring a person's ECG.
The recordings are measured with a soft electrode shirt and sent via a cellphone to a server, where they can be downloaded by a caregiver for analysis. Kiwok AB has asked HIQ to develop an automatic system for real-time analysis of ECGs. The goal is to report deviations directly to a healthcare provider, working preventively to avoid future cardiac complications. Kiwok has stated that they are particularly interested in a solution that involves machine learning.

1.1 Background

Studies show that better screening methods are becoming increasingly important as the older population continues to grow. A mass screening of individuals aged 75 to 76 revealed that a significant proportion of the participants had untreated atrial fibrillation [1]. T. Lindberg et al. later confirmed these findings using BodyKom. In their conclusion they state:

...many older outpatients have undiagnosed and thus untreated persistent and paroxysmal AF. This is a challenge for health care providers, and it is essential to develop more effective strategies for the detection, treatment, and prevention of arrhythmias. This study confirms that the long-term wireless ECG recorder BodyKom has good feasibility for arrhythmia screening in older outpatient populations. [2, p. 1089]

Visually analyzing an ECG is a very time-consuming task. An ECG record of 72 hours contains approximately 300 thousand heartbeats. Being able to detect anomalies and identify at-risk individuals remotely with the use of machine learning would make ECG monitoring available to far more people. The goal of the requested system is twofold: general deviation detection and individually adapted deviation detection. The latter is meant to recognize what is normal for a particular individual and detect changes as the individual ages.
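The 300-thousand figure above can be sanity-checked with quick arithmetic; the resting heart rate of roughly 70 beats per minute used here is an illustrative assumption, not a value from the thesis:

```python
# Rough check of the heartbeat count in a 72-hour ECG record.
hours = 72
beats_per_minute = 70  # assumed average resting heart rate
beats = hours * 60 * beats_per_minute
print(beats)  # 302400, i.e. roughly 300 thousand beats
```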
Long short-term memory neural networks have proven to be exceptionally good at learning long-term temporal dependencies within time-series data. In 2015 Malhotra et al. demonstrated how LSTM neural networks can be used to detect anomalies:

A network is trained on non-anomalous data and used as a predictor over a number of time steps. The resulting prediction errors are modeled as a multivariate Gaussian distribution, which is used to assess the likelihood of anomalous behavior. The efficacy of this approach is demonstrated on four datasets: ECG, space shuttle, power demand, and multi-sensor engine dataset. [3]

In this report a similar approach is evaluated using three different structures of LSTM neural networks, which are trained to predict future sequences of ECG data from the MIT-BIH Arrhythmia database. Their prediction accuracy and ability to detect anomalies are compared to each other. Additionally, two different metrics for anomaly detection are compared: prediction difficulty, measured as squared error, and prediction uncertainty, measured as variance in overlapping predictions.

1.2 Hypothesis

When a neural network model is trained to accurately predict sequential data a few steps ahead of time, it should encounter great difficulty in predicting anomalies not previously seen in the training data. The anomalies can then be identified by either computing an error vector or measuring the uncertainty in overlapping predictions.

1.3 Problem formulation

In order for the network to identify anomalous sections of data, it first needs to be able to accurately predict future sequences of non-anomalous ECG data. The three different network compositions need to be reliably evaluated against each other to answer the following questions:

• Which architecture has the most accurate predictions?
• Are the trained models capable of detecting anomalies?
• Which architecture has the best performance in terms of analysis speed?
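The two anomaly metrics compared in this report, squared prediction error and variance across overlapping predictions, can be sketched as follows. This is only an illustration: NumPy stands in for the actual Deeplearning4j models, and the window size, prediction horizon, naive predictor, and injected anomaly are all assumptions, not details from the thesis.

```python
import numpy as np

def anomaly_scores(signal, predict, window=16, horizon=4):
    """Slide a window over the signal; at each step, predict the next
    `horizon` samples. Because the window advances one sample at a time,
    each sample receives several overlapping predictions. Returns the
    per-sample mean squared error and the variance of the overlapping
    predictions for each sample."""
    n = len(signal)
    preds = [[] for _ in range(n)]  # overlapping predictions per sample
    sq_err = np.zeros(n)
    counts = np.zeros(n)
    for start in range(0, n - window - horizon + 1):
        context = signal[start:start + window]
        yhat = predict(context)  # shape (horizon,)
        for k in range(horizon):
            t = start + window + k
            preds[t].append(yhat[k])
            sq_err[t] += (signal[t] - yhat[k]) ** 2
            counts[t] += 1
    mse = np.divide(sq_err, counts, out=np.zeros(n), where=counts > 0)
    var = np.array([np.var(p) if len(p) > 1 else 0.0 for p in preds])
    return mse, var

# Dummy "predictor" that just repeats the last observed value.
naive = lambda ctx: np.repeat(ctx[-1], 4)
sig = np.sin(np.linspace(0, 20, 200))
sig[120:125] += 2.0  # injected anomaly: a sudden level shift
mse, var = anomaly_scores(sig, naive)
print(mse[120:125].mean() > mse[:100].mean())  # True: anomalous region scores higher
```

Even this trivial last-value predictor assigns a much larger error to the injected anomaly than to the smooth regions it can extrapolate, which is the intuition behind using prediction difficulty as an anomaly signal.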
1.4 Objectives

O1 Implement the different architectures using Deeplearning4j.
O2 Compare the different architectures to each other.
O3 Evaluate the effectiveness of the methods.

1.5 Scope/Limitations

This report will include a brief explanation of how recurrent neural networks and long short-term memory neural networks work and the difference between them. How LSTMs can be applied to electrocardiographic data and their effectiveness in detecting deviations will also be discussed.

1.6 Target group

This report is directed mainly at HIQ and KIWOK as an evaluation of a proposed development path for their live monitoring system.

1.7 Outline

The report begins with chapter 2, where the theory is outlined. Chapter 3 (Method) describes how the system will be evaluated and the reliability and validity of the results. Chapter 4 (Implementation) describes how the systems were designed and implemented. It contains a brief description of the library used, the methods for finding optimal structures, and the methods that were used for evaluating the structures. In chapter 5 the results are presented for each of the different models. Chapters 6, 7 and 8 contain the analysis, discussion and conclusion.

2 Theory

2.1 Electrocardiography (ECG)

Electrocardiography is the process of measuring and recording the electrical activity of the heart over time by attaching small electrodes to the surface of the skin. The electrodes measure the small electrical changes in the skin caused by the depolarization and repolarization of the different compartments within the heart muscle. These events form a series of waves, starting with the P-wave representing the depolarization of the atria, followed by the QRS-complex representing the depolarization of the ventricles, and finally the T-wave representing the repolarization of the ventricles [4].
Figure 2.1: ECG of a single heartbeat in normal sinus rhythm

2.2 Supervised learning with features and labels

Supervised learning is useful when a system's inputs and outputs are known in advance and can be measured. The measured characteristics of the input data are called "features", and the measurements of the known output data are called "labels". Features and labels are prepared in pairs. By using algorithms for supervised learning, the neural network is able to map the relation between features and labels.

2.3 Artificial Neural Network

Artificial neural networks consist of interconnected artificial neurons, typically organized in layers.