On the Generalization Gap in Reparameterizable Reinforcement Learning


Huan Wang 1  Stephan Zheng 1  Caiming Xiong 1  Richard Socher 1

1 Salesforce Research, Palo Alto CA, USA. Correspondence to: Huan Wang <[email protected]>.

Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s). arXiv:1905.12654v1 [cs.LG] 29 May 2019.

Abstract

Understanding generalization in reinforcement learning (RL) is a significant challenge, as many common assumptions of traditional supervised learning theory do not apply. We focus on the special class of reparameterizable RL problems, where the trajectory distribution can be decomposed using the reparametrization trick. For this problem class, estimating the expected return is efficient and the trajectory can be computed deterministically given peripheral random variables, which enables us to study reparametrizable RL using supervised learning and transfer learning theory. Through these relationships, we derive guarantees on the gap between the expected and empirical return for both intrinsic and external errors, based on Rademacher complexity as well as the PAC-Bayes bound. Our bound suggests the generalization capability of reparameterizable RL is related to multiple factors including “smoothness” of the environment transition, reward and agent policy function class. We also empirically verify the relationship between the generalization gap and these factors through simulations.

1. Introduction

Reinforcement learning (RL) has proven successful in a series of applications such as games (Silver et al., 2016; 2017; Mnih et al., 2015; Vinyals et al., 2017; OpenAI, 2018), robotics (Kober et al., 2013), recommendation systems (Li et al., 2010; Shani et al., 2005), resource management (Mao et al., 2016; Mirhoseini et al., 2018), neural architecture design (Baker et al., 2017), and more. However, some key questions in reinforcement learning remain unsolved. One that draws more and more attention is the issue of overfitting in reinforcement learning (Sutton, 1995; Cobbe et al., 2018; Zhang et al., 2018b; Packer et al., 2018; Zhang et al., 2018a). A model that performs well in the training environment may or may not perform well when used in the testing environment. There is also a growing interest in understanding the conditions for model generalization and developing algorithms that improve generalization.

In general we would like to measure how accurately an algorithm is able to predict on previously unseen data. One metric of interest is the gap between the training and testing loss or reward. It has been observed that such gaps are related to multiple factors: initial state distribution, environment transition, the level of “difficulty” in the environment, model architectures, and optimization. Zhang et al. (2018b) split randomly sampled initial states into training and testing and evaluated the performance gap in deep reinforcement learning. They empirically observed overfitting caused by the randomness of the environment, even if the initial distribution and the transition in the testing environment are kept the same as training. On the other hand, Farebrother et al. (2018); Justesen et al. (2018); Cobbe et al. (2018) allowed the test environment to vary from training, and observed huge differences in testing performance. Packer et al. (2018) also reported very different testing behaviors across models and algorithms, even for the same RL problem.
Although overfitting has been empirically observed in RL from time to time, theoretical guarantees on generalization, especially finite-sample guarantees, are still missing. In this work, we focus on on-policy RL, where agent policies are trained based on episodes of experience that are sampled “on-the-fly” using the current policy in training. We identify two major obstacles in the analysis of on-policy RL. First, the episode distribution keeps changing as the policy gets updated during optimization. Therefore, episodes have to be continuously redrawn from the new distribution induced by the updated policy during optimization. For finite-sample analysis, this leads to a process with complex dependencies. Second, state-of-the-art research on RL tends to mix the errors caused by randomness in the environment and shifts in the environment distribution. We argue that these two types of errors are actually very different. One, which we call intrinsic error, is analogous to overfitting in supervised learning, and the other, called external error, looks more like the errors in transfer learning.

Our key observation is that there exists a special class of RL, called reparameterizable RL, where randomness in the environment can be decoupled from the transition and initialization procedures via the reparameterization trick (Kingma & Welling, 2014). Through reparameterization, an episode’s dependency on the policy is “lifted” to the states. Hence, as the policy gets updated, episodes are deterministic given peripheral random variables. As a consequence, the expected reward in reparameterizable RL is connected to the Rademacher complexity as well as the PAC-Bayes bound. The reparameterization trick also makes the analysis for the second type of errors, i.e., when the environment distribution is shifted, much easier since the environment parameters are also “lifted” to the representation of states.

Related Work  Generalization in reinforcement learning has been investigated a lot both theoretically and empirically. Theoretical work includes bandit analysis (Agarwal et al., 2014; Auer et al., 2002; 2009; Beygelzimer et al., 2011), Probably Approximately Correct (PAC) analysis (Jiang et al., 2017; Dann et al., 2017; Strehl et al., 2009; Lattimore & Hutter, 2014), as well as minimax analysis (Azar et al., 2017; Chakravorty & Hyland, 2003). Most works focus on the analysis of regret and consider the gap between the expected value and the optimal return. On the empirical side, besides the previously mentioned work, Whiteson et al. (2011) propose generalized methodologies that are based on multiple environments sampled from a distribution. Nair et al. (2015) also use random starts to test generalization.

Other research has also examined generalization from a transfer learning perspective. Lazaric (2012); Taylor & Stone (2009); Zhan & Taylor (2015); Laroche (2017) examine model generalization across different learning tasks, and provide guarantees on asymptotic performance. There are also works in robotics for transferring policies from a simulator to the real world and optimizing an internal model from data (Kearns & Singh, 2002), or works trying to solve abstracted or compressed MDPs (Majeed & Hutter, 2018).

Our Contributions:

• A connection between (on-policy) reinforcement learning and supervised learning through the reparameterization trick. It simplifies the finite-sample analysis for RL, and yields Rademacher and PAC-Bayes bounds on Markov Decision Processes (MDPs).

• Identifying a class of reparameterizable RL and providing a simple bound for “smooth” environments and models with a limited number of parameters.

• A guarantee for reparameterized RL when the environment is changed during testing. In particular we discuss two cases of environment shift: a change in the initial distribution of the states, or in the transition function.

2. Notation and Formulation

We denote a Markov Decision Process (MDP) as a 5-tuple (S, A, P, r, P0). Here S is the state space, A is the action space, P(s, a, s′) : S × A × S → [0, 1] is the transition probability from state s to s′ when taking action a, r(s) : S → ℝ represents the reward function, and P0(s) : S → [0, 1] is the initial state distribution. Let π(s) ∈ Π : S → A be the policy map that returns the action a at state s.

We consider episodic MDPs with a finite horizon. Given the policy map π and the transition probability P, the state-to-state transition probability is Tπ(s, s′) = P(s, π(s), s′). Without loss of generality, the length of an episode is T + 1. We denote a sequence of states [s0, s1, ..., sT] as s. The total reward in an episode is R(s) = Σ_{t=0}^{T} γ^t r_t, where γ ∈ (0, 1] is a discount factor and r_t = r(s_t).
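To make the reparameterization idea concrete, the following is a minimal Python sketch in which an episode is a deterministic function of the policy parameters once the peripheral noise (initial-state noise and per-step transition noise) has been drawn. The linear-Gaussian dynamics, quadratic reward, and linear policy are illustrative assumptions, not constructions taken from the paper.

```python
import numpy as np

def rollout_return(theta, xi0, xis, gamma=0.99):
    """Return of one episode as a deterministic function of the policy.

    All randomness lives in the peripheral noise variables:
      xi0 -- noise that fixes the initial state s_0
      xis -- per-step transition noise, shape (T + 1, dim)
    Toy, assumed dynamics (not from the paper): linear-Gaussian transition,
    quadratic reward r(s) = -||s||^2, linear policy a = theta @ s.
    """
    s = xi0.copy()                                      # reparameterized initial state
    total, discount = 0.0, 1.0
    T = xis.shape[0] - 1
    for t in range(T + 1):
        total += discount * (-float(np.sum(s ** 2)))    # gamma^t * r(s_t)
        discount *= gamma
        a = theta @ s                                   # deterministic policy pi_theta(s_t)
        s = 0.9 * s + 0.1 * a + xis[t]                  # s_{t+1} = g(s_t, a_t, xi_t)
    return total

rng = np.random.default_rng(0)
dim, T, n = 4, 20, 8
# Draw the peripheral noise for n episodes once; the episodes are then
# deterministic in theta and can be re-evaluated as the policy changes.
noise = [(rng.normal(size=dim), rng.normal(scale=0.05, size=(T + 1, dim)))
         for _ in range(n)]
theta = np.zeros((dim, dim))
empirical_return = np.mean([rollout_return(theta, xi0, xis) for xi0, xis in noise])
print(empirical_return)
```

Because the noise is sampled once and reused across policies, the same episodes can be re-evaluated (or differentiated through) as the policy is updated, which is the property that lets reparameterizable RL be analyzed with supervised learning tools.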
Denote the joint distribution of the sequence of states in an episode s = [s0, s1, ..., sT] as Dπ. Note Dπ is also related to P and P0. In this work we assume P and P0 are fixed, so Dπ is a function of π. Our goal is to find a policy that maximizes the expected total discounted reward (return):

π∗ = argmax_{π∈Π} E_{s∼Dπ} R(s) = argmax_{π∈Π} E_{s∼Dπ} Σ_{t=0}^{T} γ^t r_t.   (1)

Suppose during training we have a budget of n episodes; then the empirical return is

π̂ = argmax_{π∈Π, s^i∼Dπ} (1/n) Σ_{i=1}^{n} R(s^i),   (2)

where s^i = [s^i_0, s^i_1, ..., s^i_T] is the ith episode of length T + 1. We are interested in the generalization gap

Φ = (1/n) Σ_{i=1}^{n} R(s^i) − E_{s∼D′π̂} R(s).   (3)

Note that in (3) the distribution D′π̂ may be different from Dπ̂ since in the testing environment P′ as well as P0′ may be shifted compared to the training environment.

3. Generalization in Reinforcement Learning vs. Supervised Learning

Generalization has been well studied in the supervised learning scenario. A popular assumption is that samples are independent and identically distributed: (x_i, y_i) ∼ D, ∀i ∈ {1, 2, ..., n}.

The gap Φ can be bounded by two terms using the triangle inequality. The first term in (5) is the concentration error between the empirical reward and its expectation. Since it is caused by the intrinsic randomness of the environment, we call it the intrinsic error. Even if the test environment shares the same distribution with training, in the finite-sample scenario there is still a gap between training and testing.
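The decomposition that the text refers to as (5) is not included in this excerpt; a plausible form, obtained as a sketch by applying the triangle inequality to the gap Φ in (3), separates the intrinsic error from the external (distribution-shift) error described above:

```latex
% Plausible form of the decomposition referenced as (5); inferred from (3), not quoted from the paper.
\left| \frac{1}{n}\sum_{i=1}^{n} R(\mathbf{s}^i) - \mathbb{E}_{\mathbf{s}\sim\mathcal{D}'_{\hat{\pi}}} R(\mathbf{s}) \right|
\;\le\;
\underbrace{\left| \frac{1}{n}\sum_{i=1}^{n} R(\mathbf{s}^i) - \mathbb{E}_{\mathbf{s}\sim\mathcal{D}_{\hat{\pi}}} R(\mathbf{s}) \right|}_{\text{intrinsic error}}
\;+\;
\underbrace{\left| \mathbb{E}_{\mathbf{s}\sim\mathcal{D}_{\hat{\pi}}} R(\mathbf{s}) - \mathbb{E}_{\mathbf{s}\sim\mathcal{D}'_{\hat{\pi}}} R(\mathbf{s}) \right|}_{\text{external error}}
```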