Deep Learning on the 2-Dimensional Ising Model to Extract the Crossover Region with a Variational Autoencoder

Nicholas Walker, Ka-Ming Tam & Mark Jarrell
Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803, USA. Email: nwalk19@lsu.edu
Scientific Reports (2020) 10:13047, https://doi.org/10.1038/s41598-020-69848-5

The 2-dimensional Ising model on a square lattice is investigated with a variational autoencoder in the non-vanishing field case for the purpose of extracting the crossover region between the ferromagnetic and paramagnetic phases. The encoded latent variable space is found to provide suitable metrics for tracking the order and disorder in the Ising configurations, extending to the extraction of a crossover region in a way that is consistent with expectations. The extracted results achieve an exceptional prediction for the critical point as well as agreement with previously published results on the configurational magnetizations of the model. The performance of this method provides encouragement for the use of machine learning to extract meaningful structural information from complex physical systems where little a priori data is available.

Machine learning (ML), and consequently data science as a whole, has seen rapid development over the last decade or so, due largely to considerable advances in implementations and hardware that have made computations more accessible. Conceptually, the ML approach can be regarded as a data modeling approach employing algorithms that eschew explicit instructions in favor of strategies based on pattern extraction and inference driven by statistical analysis. This presents a colossal opportunity for modern scientific investigations, particularly numerical studies, as they naturally involve large data sets and complex systems where obvious explicit instructions for analysis can be elusive. Conventional approaches often neglect possible nuance in the structure of the data in favor of rather simple measurements that are often untenable for sufficiently complex problems. Some ML methods, such as inference methods, have been routinely applied to certain physical problems, for instance the maximum likelihood method and the maximum entropy method [1,2], but applications utilizing ML methods have only recently attracted broader attention in the physical sciences, particularly for the study of interacting systems on both classical and quantum scales [3]. There is a unique opportunity to take advantage of the advances in ML algorithms and implementations to provide interesting new approaches to understanding physical data, and perhaps even to improve upon existing numerical methods [4]. Outstanding problems involving the prediction of transition points and phase diagrams are also of great interest for treatment with ML methods.
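The specific ML tool adopted in this work is the variational autoencoder (VAE). As a concrete illustration, the following is a minimal Keras sketch of a convolutional VAE of the general kind used here; the 32x32 lattice size, layer widths, two-dimensional latent space, and training details are illustrative assumptions for this sketch, not the authors' published architecture. Spins in {-1, +1} are assumed to be rescaled to {0, 1} so that a Bernoulli reconstruction loss applies.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 2  # a 2-d latent space keeps the encoded configurations easy to visualize


class Sampling(layers.Layer):
    """Reparameterization trick: draw z ~ N(mean, exp(log_var)) differentiably."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps


# Encoder: maps a 32x32 spin configuration (rescaled to {0, 1}) to latent statistics.
enc_in = keras.Input(shape=(32, 32, 1))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(LATENT_DIM, name="z_mean")(x)
z_log_var = layers.Dense(LATENT_DIM, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(enc_in, [z_mean, z_log_var, z], name="encoder")

# Decoder: reconstructs a configuration from a latent sample.
dec_in = keras.Input(shape=(LATENT_DIM,))
x = layers.Dense(8 * 8 * 64, activation="relu")(dec_in)
x = layers.Reshape((8, 8, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
dec_out = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = keras.Model(dec_in, dec_out, name="decoder")


class VAE(keras.Model):
    """Standard VAE: Bernoulli reconstruction loss plus a KL divergence penalty."""
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            recon = self.decoder(z)
            rec_loss = tf.reduce_mean(tf.reduce_sum(
                keras.losses.binary_crossentropy(data, recon), axis=(1, 2)))
            kl_loss = -0.5 * tf.reduce_mean(tf.reduce_sum(
                1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
            loss = rec_loss + kl_loss
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": loss, "rec_loss": rec_loss, "kl_loss": kl_loss}


# Illustrative usage, assuming `configs` holds sampled Ising configurations of
# shape (num_samples, 32, 32, 1) with spins mapped from {-1, +1} to {0, 1}:
#   vae = VAE(encoder, decoder)
#   vae.compile(optimizer="adam")
#   vae.fit(configs, epochs=50, batch_size=64)
#   z_mean, _, _ = encoder.predict(configs)  # latent coordinates per configuration
```

Scanning the encoded z_mean coordinates of configurations sampled across temperature and external field is the kind of latent-space diagnostic the paper uses to track order and disorder and, from there, delineate the crossover region.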
In order to utilize ML approaches for studying phase transitions, one must assume that there is some pattern change in the measured data across the phase transition. Fortunately, this is in fact exactly what happens in most phase transitions. The widely adopted Lindemann parameter, for example, is essentially a measure of the deviations of atomic positions in the system from their equilibrium positions and is often used to characterize the melting of a crystal structure [5]. Similar forms of pattern change in the positions of the constituent atoms are often present in molecular systems in general. Perhaps more importantly, some sufficiently complex systems have phase transitions with no obvious order parameters, often prohibiting the detection of such pattern changes using conventional methods.

This is not a hypothetical situation; indeed, hidden orderings for some interesting materials, such as heavy fermion materials and cuprate superconductors, have been proposed for a long time [6-8]. Other systems may not even exhibit a true phase transition, but rather a crossover region in which there is no singularity separating the different phases, which can be difficult to characterize with conventional methods. A conventional phase transition can be identified in two ways, the first being a singularity in a derivative of the free energy, as proposed by Ehrenfest, and the second being a broken symmetry exhibited by an order parameter, as proposed by Landau. Unlike a conventional phase transition, a crossover is not identified by a singularity in the free energy. There is also no broken symmetry in such a situation, and thus no order parameter is associated with a crossover. The order parameter and the singularity in the free energy are presumably sharp and obvious features which can be rather easily identified; the absence of such features clearly presents a challenging situation for the prediction of a crossover region by ML. ML offers a new route to studying these systems by searching for hidden patterns in the measured data where readily applicable a priori information is in short supply.

A viable ML method for detecting a crossover would find use in many interesting systems related to the quantum phase transition [9]. While the quantum phase transition is a second-order phase transition controlled by non-thermal parameters at zero temperature, all experiments and most numerical simulations are conducted at finite, albeit low, temperatures for practical reasons. As a consequence of these thermal conditions, quantum critical points at low temperatures behave as crossover phenomena. It is widely believed that many interesting materials, particularly high-temperature cuprate superconductors, harbor a quantum critical point. An ML approach for detecting the crossover phenomenon can thus be an important tool for studying quantum critical points.

Work has been done on various problems to characterize phase transitions in physical systems using ML methods, including the Ising model in the vanishing field case [3,10-20]. This work uses a similar approach to those papers, but focuses on the crossover regions that are introduced in the non-vanishing field case of the 2-dimensional Ising model instead of seeking only the exactly known transition point in the vanishing field case [21]. This is a somewhat more difficult problem, as there is no explicit transition to be found, but it remains an interesting problem nonetheless and possibly carries much greater implications for crossover regions in more complicated problems.
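To make the Landau picture concrete for the model studied here, recall the standard order parameter of the Ising ferromagnet (a textbook definition, stated here for reference): the magnetization per spin of a configuration of $N$ spins,

$$m = \frac{1}{N} \sum_{i=1}^{N} s_i .$$

At vanishing field, $m$ is zero in the paramagnetic phase and becomes nonzero below the critical temperature, signaling the broken spin-flip ($\mathbb{Z}_2$) symmetry. At non-vanishing field, however, the symmetry is explicitly broken and $m$ is nonzero at all temperatures, so no sharp onset survives; this is precisely why only a crossover region remains to be extracted in the case considered here.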
The Ising model itself is a mathematical model of ferromagnetism that is often explored in statistical mechanics to describe magnetic phenomena [22]. Originally, the model was developed to investigate magnetic phenomena, as mentioned earlier. With the discovery of electron spins, the model was used to determine whether or not local interactions between magnetic spins could induce a large fraction of the electronic spins in a material to align in order to produce a macroscopic net magnetic moment. It is expressed in the form of a multidimensional array of spins $s_i$ that represent a discrete arrangement of the magnetic dipole moments of atomic spins [22]. The spins are restricted to spin-up or spin-down alignments, such that $s_i \in \{-1, +1\}$. The spins interact with their nearest neighbors with an interaction strength given by $J_{ij}$ for neighbors $s_i$ and $s_j$, and can additionally interact with an applied external magnetic field $H_i$ (where the magnetic dipole moment $\mu$ has been absorbed). The full Hamiltonian describing the system is thus expressed as

$$H = -\sum_{\langle i,j \rangle} J_{ij} s_i s_j - \sum_i H_i s_i \qquad (1)$$

where $\langle i,j \rangle$ indicates a sum over adjacent spins. For $J_{ij} > 0$, the interaction between the spins is ferromagnetic; for $J_{ij} < 0$, the interaction is antiferromagnetic; and for $J_{ij} = 0$, the spins are noninteracting. Furthermore, if $H_i > 0$, the spin at site $i$ tends to prefer spin-up alignment; if $H_i < 0$, the spin at site $i$ tends to prefer spin-down alignment; and if $H_i = 0$, there is no external magnetic field influence on the spin at site $i$. The model has seen extensive use in investigating magnetic phenomena in condensed matter physics [24-29].

Additionally, the model can be equivalently expressed as the lattice gas model, described by the following Hamiltonian

$$H = -4J \sum_{\langle i,j \rangle} n_i n_j - \mu \sum_i n_i \qquad (2)$$

where the external field strength $H$ is reinterpreted as the chemical potential $\mu$, $J$ retains its role as the interaction strength, and $n_i \in \{0, 1\}$ represents the lattice site occupancy. The original Ising Hamiltonian can be recovered, up to a constant, using the relation $\sigma_i = 2n_i - 1$. This model describes a multidimensional array of lattice sites, each of which can be either occupied or unoccupied by a hard-shell atom, disallowing occupancy greater than one. The first term is then interpreted as a short-range attractive interaction, while the second describes the flow of atoms between the system and the reservoir. This is a simple model of density fluctuations and liquid-gas transformations used primarily in chemistry, albeit often with modifications [30,31]. Additionally, modified versions of the lattice gas model have been applied to binding behavior in biology [32-34].

Typically, the model is studied in the case $J_{ij} = J = 1$, and the vanishing field case $H_i = H = 0$ is of particular interest for dimension $d \ge 2$, since a phase transition is exhibited as the critical temperature is crossed. For two dimensions, the critical temperature can be identified by exploiting the Kramers-Wannier duality.
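To ground Eqs. (1) and (2), here is a short numpy sketch (an illustration, not code from the paper) that evaluates both Hamiltonians on a periodic square lattice with uniform couplings. Expanding $n_i = (\sigma_i + 1)/2$ in Eq. (2) with $\mu = 0$ reproduces Eq. (1) with an effective field $H = 4J$ plus a configuration-independent constant, so single-spin-flip energy differences computed from the two forms must agree exactly:

```python
import numpy as np

def ising_energy(s, J=1.0, H=0.0):
    """Eq. (1) with uniform couplings J_ij = J and field H_i = H on a
    periodic square lattice; each nearest-neighbor bond is counted once."""
    bonds = (s * np.roll(s, 1, axis=0)).sum() + (s * np.roll(s, 1, axis=1)).sum()
    return -J * bonds - H * s.sum()

def lattice_gas_energy(n, J=1.0, mu=0.0):
    """Eq. (2): occupancies n_i in {0, 1} with chemical potential mu."""
    bonds = (n * np.roll(n, 1, axis=0)).sum() + (n * np.roll(n, 1, axis=1)).sum()
    return -4.0 * J * bonds - mu * n.sum()

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(16, 16))  # random spin configuration
n = (s + 1) // 2                        # sigma_i = 2 n_i - 1  <=>  n_i = (sigma_i + 1) / 2

# Flip one spin and compare energy differences; the additive constant cancels.
s_flip = s.copy()
s_flip[3, 7] *= -1
n_flip = (s_flip + 1) // 2
dE_ising = ising_energy(s_flip, H=4.0) - ising_energy(s, H=4.0)  # effective field 4J
dE_gas = lattice_gas_energy(n_flip, mu=0.0) - lattice_gas_energy(n, mu=0.0)
assert np.isclose(dE_ising, dE_gas)
```

More generally, matching the field terms under this substitution gives $H = 4J + \mu/2$, which is the sense in which the external field strength is "reinterpreted as the chemical potential" above.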