Privacy-Preserving Reinforcement Learning


Jun Sakuma [email protected]
Shigenobu Kobayashi [email protected]
Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midoriku, Yokohama, 226-8502, Japan

Rebecca N. Wright [email protected]
Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08854, USA

Abstract

We consider the problem of distributed reinforcement learning (DRL) from private perceptions. In our setting, agents’ perceptions, such as states, rewards, and actions, are not only distributed but also should be kept private. Conventional DRL algorithms can handle multiple agents, but do not necessarily guarantee privacy preservation and may not guarantee optimality. In this work, we design cryptographic solutions that achieve optimal policies without requiring the agents to share their private information.

(Appearing in Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008. Copyright 2008 by the author(s)/owner(s).)

1. Introduction

With the rapid growth of computer networks and networked computing, a large amount of information is being sensed and gathered by distributed agents, physically or virtually. Distributed reinforcement learning (DRL) has been studied as an approach to learn a control policy through interactions between distributed agents and environments, for example in sensor networks and mobile robots. DRL algorithms, such as the distributed value function approach (Schneider et al., 1999) and the policy gradient approach (Moallemi & Roy, 2004), typically seek to satisfy two types of physical constraints. One is constraints on communication, such as an unstable network environment or limited communication channels. The other is memory constraints imposed by a huge state/action space. Therefore, the main emphasis of DRL has been to learn good, but sub-optimal, policies with minimal or limited sharing of agents’ perceptions.

In this paper, we consider the privacy of agents’ perceptions in DRL. Specifically, we provide solutions for privacy-preserving reinforcement learning (PPRL), in which agents’ perceptions, such as states, rewards, and actions, are not only distributed but are desired to be kept private. Consider two example scenarios:

Optimized Marketing (Abe et al., 2004): Consider modeling a customer’s purchase behavior as a Markov Decision Process (MDP). The goal is to obtain the optimal catalog mailing strategy, which maximizes the long-term profit. Timestamped histories of customer status and mailing records are used as state variables; purchase patterns are used as actions. Value functions are learned from these records to learn the optimal policy. If these histories are managed separately by two or more enterprises, they may not want to share their histories for privacy reasons (for example, in keeping with privacy promises made to their customers), but might still like to learn a value function from their joint data so that they can all maximize their profits.

Load Balancing (Cogill et al., 2006): Consider load balancing among competing factories. Each factory wants to accept customer jobs, but in order to maximize its own profit, may need to redirect jobs when heavily loaded. Each factory can observe its own backlog, but the factories do not want to share their backlog information with each other for business reasons; still, they would like to make optimal decisions.

Privacy constraints prevent the data from being combined in a single location where centralized reinforcement learning (CRL) algorithms could be applied. Although DRL algorithms work in a distributed setting, they are designed to limit the total amount of data sent between agents, but do not necessarily do so in a way that guarantees privacy preservation. Additionally, DRL often sacrifices optimality in order to learn with low communication. In contrast, we propose solutions that employ cryptographic techniques to achieve optimal policies (as would be learned if all the information were combined into a centralized reinforcement learning (CRL) problem) while also explicitly protecting the agents’ private information. We describe solutions both for data that is “partitioned-by-time” (as in the optimized marketing example) and “partitioned-by-observation” (as in the load balancing example).

Related Work. Private distributed protocols have been considered extensively for data mining, pioneered by Lindell and Pinkas (Lindell & Pinkas, 2002), who presented a privacy-preserving data-mining algorithm for ID3 decision-tree learning. Private distributed protocols have also been proposed for other data mining and machine learning problems, including k-means clustering (Jagannathan & Wright, 2005; Sakuma & Kobayashi, 2008), support vector machines (Yu et al., 2006), boosting (Gambs et al., 2007), and belief propagation (Kearns et al., 2007).

Agent privacy in reinforcement learning has been previously considered by Zhang and Makedon (Zhang & Makedon, 2005). Their solution uses a form of average reward reinforcement learning that does not necessarily guarantee an optimal solution; further, their solution applies only to partitioning by time. In contrast, our solutions guarantee optimality under appropriate conditions, and we provide solutions both when the data is partitioned by time and when it is partitioned by observation.

In principle, private distributed computations such as these can be carried out using secure function evaluation (SFE) (Yao, 1986; Goldreich, 2004), a general and well-studied methodology for evaluating any function privately. However, although asymptotically polynomially bounded, these computations can be too inefficient for practical use, particularly when the input size is large. For the reinforcement learning algorithms we address, we make use of existing SFE solutions for small portions of our computation as part of a more efficient overall solution.

Our Contribution. We introduce the concepts of partitioning by time and partitioning by observation in distributed reinforcement learning (Section 2). We show privacy-preserving solutions for SARSA learning algorithms with random action selection for both kinds of partitioning (Section 4). Additionally, these algorithms are extended to Q-learning with greedy or ε-greedy action selection (Section 5). We provide experimental results in Section 6.

Table 1 provides a qualitative comparison of variants of reinforcement learning in terms of efficiency, learning accuracy, and privacy loss. We compare five approaches: CRL, DRL, independent distributed reinforcement learning (IDRL, explained below), SFE, and our privacy-preserving reinforcement learning solutions (PPRL).

        comp.    comm.    accuracy  privacy
CRL     good     good     good      none
DRL     good     good     medium    imperfect
IDRL    good     good     bad       perfect
PPRL    medium   medium   good      perfect
SFE     bad      bad      good      perfect

Table 1. Comparison of different approaches

In CRL, all the agents send their perceptions to a designated agent, and then centralized reinforcement learning is applied. In this case, the optimal convergence of value functions is theoretically guaranteed when the dynamics of the environment follow a discrete MDP; however, privacy is not provided, as all the data must be shared.

On the opposite end of the spectrum, in IDRL (independent DRL), each agent independently applies CRL using only its own local information; no information is shared. In this case, privacy is completely preserved, but the learning results will be different and independent. In particular, accuracy will be unacceptable if the agents have incomplete but important perceptions of the environment. DRL can be viewed as an intermediate approach between CRL and IDRL, in that the parties share only some information and accordingly reap only some gains in accuracy.

The table also includes the direct use of general SFE and our approach of PPRL. Both PPRL and SFE obtain good privacy and good accuracy. Although our solution incurs a significant cost (as compared to CRL, IDRL, and DRL) in computation and communication to obtain this, it does so with significantly improved computational efficiency over SFE. We provide a more detailed comparison of the privacy, accuracy, and efficiency of our approach and other possible approaches, along with our experimental results, in Section 6.

2. Preliminaries

2.1. Reinforcement Learning and MDP

Let S be a finite state set and A be a finite action set. A policy π is a mapping from a state/action pair (s, a) to the probability π(s, a) with which action a is taken at state s. At time step t, we denote by s_t, a_t, and r_t the state, action, and reward at time t, respectively. A Q-function is the expected return

    Q^π(s, a) = E_π [ \sum_{k=0}^{\infty} γ^k r_{t+k+1} | s_t = s, a_t = a ],

where γ is a discount factor (0 ≤ γ < 1). The goal is to learn the optimal policy π maximizing the Q-function: Q*(s, a) = max_π Q^π(s, a) for all (s, a). In SARSA learning, Q-values are updated at each step as:

    Q(s_t, a_t) ← Q(s_t, a_t) + α ( r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ),

where α is a learning rate.

[Figure 1. Partitioning model in the two-agent case: in the partitioned-by-time model, agents A and B each hold complete tuples (s_t, a_t, r_t, s_{t+1}) for disjoint periods of time; in the partitioned-by-observation model, each agent holds only its own components (s_t^A, a_t^A, r_t^A, s_{t+1}^A) and (s_t^B, a_t^B, r_t^B, s_{t+1}^B) at every step.]

[...] this global reward. The perception of the ith agent at time t is denoted as h_t^i = {s_t^i, a_t^i, r_t^i, s_{t+1}^i, a_{t+1}^i}. The private information of the ith agent is H^i = {h_t^i}. We note that partitioning by observation is more general than partitioning by time, in that one can always represent a sequence that is partitioned by time by one that is partitioned by observation. However, we provide more efficient solutions in the simpler case of partitioning by time.

Let π_c be a policy learned by CRL. Then, informally, the objective of PPRL is stated as follows:

Statement 1.
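As a concrete, non-private illustration of the tabular SARSA update with random action selection discussed above, the following minimal Python sketch runs the update on a toy two-state environment. The environment, function names, and hyperparameter values are illustrative assumptions only; none of the paper's cryptographic machinery appears here.

```python
import random
from collections import defaultdict

def sarsa(env_step, states, actions, episodes=500, steps=50,
          alpha=0.1, gamma=0.9, seed=0):
    """Tabular SARSA with uniformly random action selection.

    env_step(s, a) -> (reward, next_state). With random exploration,
    the Q-table approximates Q^pi for the uniform random policy pi.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = rng.choice(states)
        a = rng.choice(actions)
        for _ in range(steps):
            r, s2 = env_step(s, a)
            a2 = rng.choice(actions)  # random action selection
            # SARSA update: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
    return Q

def demo_step(s, a):
    """Toy two-state chain: staying in state "R" pays 1, all else pays 0."""
    if s == "L":
        return (0.0, "R" if a == "move" else "L")
    return (1.0, "R") if a == "stay" else (0.0, "L")

Q = sarsa(demo_step, ["L", "R"], ["stay", "move"])
```

Note that in a centralized (CRL) run such as this, every tuple (s_t, a_t, r_t, s_{t+1}, a_{t+1}) is visible to one party; the partitioned settings above are precisely those in which this visibility is unacceptable.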