Neural Networks + Backpropagation


10-601 Introduction to Machine Learning
Machine Learning Department, School of Computer Science, Carnegie Mellon University
Matt Gormley, Lecture 13, Oct. 7, 2019

Reminders
• Homework 4: Logistic Regression
– Out: Wed, Sep. 25
– Due: Fri, Oct. 11 at 11:59pm
• Homework 5: Neural Networks
– Out: Fri, Oct. 11
– Due: Fri, Oct. 25 at 11:59pm
• Today's In-Class Poll
– http://p13.mlcourse.org

Q&A
Q: What is mini-batch SGD?
A: A variant of SGD…

Mini-Batch SGD
Three variants of first-order optimization:
• Gradient Descent: compute the true gradient exactly from all N examples
• Mini-Batch SGD: approximate the true gradient by the average gradient of K randomly chosen examples
• Stochastic Gradient Descent (SGD): approximate the true gradient by the gradient of one randomly chosen example

NEURAL NETWORKS

Neural Networks
Chalkboard:
– Example: Neural Network w/ 1 Hidden Layer
– Example: Neural Network w/ 2 Hidden Layers
– Example: Feed-Forward Neural Network

Neural Network Parameters
Question: Suppose you are training a one-hidden-layer neural network with sigmoid activations for binary classification. True or False: there is a unique set of parameters that maximizes the likelihood of the dataset.

ARCHITECTURES

Neural Network Architectures
Even for a basic neural network, there are many design decisions to make:
1. Number of hidden layers (depth)
2. Number of units per hidden layer (width)
3. Type of activation function (nonlinearity)
4. Form of objective function

Building a Neural Net
Q: How many hidden units, D, should we use (for an input of dimension M)?
• D = M: as many hidden units as inputs
• D < M: fewer hidden units than inputs. What method(s) is this setting similar to?
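The three optimization variants described under Mini-Batch SGD above differ only in how many examples feed each gradient estimate. A minimal illustrative sketch (not code from the lecture; the toy objective and all names are made up for illustration):

```python
import numpy as np

def minibatch_sgd(grad, theta, X, y, lr=0.1, batch_size=2, epochs=200, seed=0):
    """Mini-batch SGD: each step uses the *average* gradient of K
    randomly chosen examples (K = batch_size)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            # average per-example gradient over this mini-batch
            g = np.mean([grad(theta, X[i], y[i]) for i in batch], axis=0)
            theta = theta - lr * g
    return theta

# Toy problem: fit y = 2x by minimizing the per-example squared error.
grad = lambda th, x, y: 2 * (th * x - y) * x
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X
theta = minibatch_sgd(grad, 0.0, X, y, lr=0.05, batch_size=2)
# theta converges to approximately 2.0
```

Setting batch_size = len(X) recovers full gradient descent and batch_size = 1 recovers plain SGD, so one implementation covers all three variants.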
Building a Neural Net
Q: How many hidden units, D, should we use?
• D > M: more hidden units than inputs. What method(s) is this setting similar to?

Deeper Networks
Q: How many layers should we use?
• Theoretical answer:
– A neural network with 1 hidden layer is a universal function approximator.
– Cybenko (1989): for any continuous function g(x), there exists a 1-hidden-layer neural net hθ(x) s.t. |hθ(x) – g(x)| < ϵ for all x, assuming sigmoid activation functions.
• Empirical answer:
– Before 2006: "Deep networks (e.g. 3 or more hidden layers) are too hard to train."
– After 2006: "Deep networks are easier to train than shallow networks (e.g. 2 or fewer layers) for many problems."
– Big caveat: you need to know and use the right tricks.

Different Levels of Abstraction
• We don't know the "right" levels of abstraction
• So let the model figure it out!
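Cybenko's result can be illustrated numerically. A hypothetical demo (not from the lecture): fix random hidden-layer weights, fit only the output weights by least squares, and check how closely a wide one-hidden-layer sigmoid net matches a continuous target such as sin(x):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

x = np.linspace(-3, 3, 200)            # evaluation grid
g = np.sin(x)                          # continuous target function

D = 50                                 # number of hidden units
alpha = rng.normal(scale=3.0, size=D)  # random hidden weights (held fixed)
bias = rng.normal(scale=3.0, size=D)   # random hidden biases (held fixed)
Z = sigmoid(np.outer(x, alpha) + bias) # hidden activations, shape (200, D)

# Fit only the output-layer weights by least squares.
beta, *_ = np.linalg.lstsq(Z, g, rcond=None)
err = np.max(np.abs(Z @ beta - g))     # worst-case error on the grid
```

This does not prove the theorem (the hidden weights are random rather than well chosen), but the maximum error on the grid should already be far below the target's unit amplitude with only 50 hidden units.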
Different Levels of Abstraction
Face recognition example from Honglak Lee (NIPS 2010):
– A deep network can build up increasingly higher levels of abstraction
– Lines, parts, regions
(Figures: learned features at Hidden Layers 1–3.)

Activation Functions
Neural network with sigmoid activation functions:
(A) Input: given x_i, ∀i
(B) Hidden (linear): a_j = Σ_{i=0}^{M} α_{ji} x_i, ∀j
(C) Hidden (sigmoid): z_j = 1 / (1 + exp(−a_j)), ∀j
(D) Output (linear): b = Σ_{j=0}^{D} β_j z_j
(E) Output (sigmoid): y = 1 / (1 + exp(−b))
(F) Loss: J = ½ (y − y*)²

Neural network with arbitrary nonlinear activation functions: as above, but with (C) z_j = σ(a_j), ∀j and (E) y = σ(b).

Sigmoid / Logistic Function
So far, we've assumed that the activation function (nonlinearity) is always the sigmoid function: logistic(u) ≡ 1 / (1 + e^(−u))

Activation Functions
• A new change: modifying the nonlinearity
– The logistic is not widely used in modern ANNs
• Alternate 1: tanh. Like the logistic function but shifted to the range [−1, +1].
(Slide from William Cohen; a figure compares sigmoid vs. tanh activations in a depth-4 network.)
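The layer equations (A)–(F) above translate directly into code. A minimal numpy sketch; the bias convention (index 0 carries the intercept, with x_0 = z_0 = 1) is an assumption consistent with the sums starting at i = 0 and j = 0:

```python
import numpy as np

def forward(x, alpha, beta, y_star):
    """One forward pass through the 1-hidden-layer sigmoid network,
    following steps (A)-(F). alpha is D x (M+1), beta has D+1 entries;
    index 0 plays the role of the bias term."""
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    x = np.concatenate(([1.0], x))      # (A) input, with x_0 = 1
    a = alpha @ x                       # (B) a_j = sum_i alpha_ji x_i
    z = sigmoid(a)                      # (C) z_j = 1 / (1 + exp(-a_j))
    z = np.concatenate(([1.0], z))      # z_0 = 1 for the output bias
    b = beta @ z                        # (D) b = sum_j beta_j z_j
    y = sigmoid(b)                      # (E) y = 1 / (1 + exp(-b))
    J = 0.5 * (y - y_star) ** 2         # (F) quadratic loss
    return y, J

# Sanity check: with all-zero weights every sigmoid gives 0.5,
# so y = 0.5 and J = 0.5 * (0.5 - 1)**2 = 0.125.
y, J = forward(np.array([1.0, 2.0]), np.zeros((2, 3)), np.zeros(3), y_star=1.0)
```

The all-zero-weight case is a convenient quick test when implementing the forward pass.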
(Figure from Glorot & Bengio, 2010: sigmoid vs. tanh.)

Activation Functions
• A new change: modifying the nonlinearity
– ReLU is often used in vision tasks
• Alternate 2: rectified linear unit (ReLU). Linear with a cutoff at zero. (Implementation: clip the gradient when you pass zero.)
– Soft version (softplus): log(exp(x) + 1)
– Doesn't saturate (at one end)
– Sparsifies outputs
– Helps with the vanishing gradient
Slide from William Cohen

Decision Functions
Neural Network for Classification: layers (A)–(D) as before; the output (E) is a sigmoid, y = 1 / (1 + exp(−b)); the loss (F) is J = ½ (y − y^(d))².
Neural Network for Regression: the output is linear, y = b = Σ_{j=0}^{D} β_j z_j; the loss is J = ½ (y − y^(d))².

Objective Functions for NNs
1. Quadratic Loss:
– the same objective as Linear Regression
– i.e. mean squared error
2. Cross-Entropy:
– the same objective as Logistic Regression
– i.e. negative log likelihood
– This requires probabilities, so we add an additional "softmax" layer at the end of our network

Forward and backward computations for each loss:
• Quadratic: J = ½ (y − y*)², dJ/dy = y − y*
• Cross-Entropy: J = −[y* log(y) + (1 − y*) log(1 − y)], dJ/dy = −y*/y + (1 − y*) / (1 − y)

Objective Functions for NNs
(A figure compares training with cross-entropy vs. quadratic loss.)
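The forward/backward table above is easy to check at a point. An illustrative sketch (not lecture code) implementing both losses:

```python
import numpy as np

def quadratic_loss(y, y_star):
    """Forward and backward for J = 1/2 (y - y*)^2."""
    return 0.5 * (y - y_star) ** 2, y - y_star

def cross_entropy_loss(y, y_star):
    """Forward and backward for J = -[y* log y + (1 - y*) log(1 - y)]."""
    J = -(y_star * np.log(y) + (1 - y_star) * np.log(1 - y))
    dJ_dy = -y_star / y + (1 - y_star) / (1 - y)
    return J, dJ_dy

# At y = 0.8 with true label y* = 1:
J_q, dJ_q = quadratic_loss(0.8, 1.0)        # J = 0.02, dJ/dy = -0.2
J_ce, dJ_ce = cross_entropy_loss(0.8, 1.0)  # J = -log(0.8), dJ/dy = -1.25
```

Note that the cross-entropy gradient grows without bound as y moves away from y* toward 0 or 1, while the quadratic gradient stays bounded on (0, 1); this difference is what figures like the Glorot & Bengio comparison reflect.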
(Figure from Glorot & Bengio, 2010: cross-entropy vs. quadratic loss.)

Multi-Class Output
(A) Input: given x_i, ∀i
(B) Hidden (linear): a_j = Σ_{i=0}^{M} α_{ji} x_i, ∀j
(C) Hidden (nonlinear): z_j = σ(a_j), ∀j
(D) Output (linear): b_k = Σ_{j=0}^{D} β_{kj} z_j, ∀k
(E) Output (softmax): y_k = exp(b_k) / Σ_{l=1}^{K} exp(b_l)
(F) Loss (softmax cross-entropy): J = −Σ_{k=1}^{K} y*_k log(y_k)

Neural Network Errors
Question A: On which of the datasets below could a one-hidden-layer neural network achieve zero classification error? Select all that apply.
Question B: On which of the datasets below could a one-hidden-layer neural network for regression achieve nearly zero MSE? Select all that apply.
(Four candidate datasets, A–D, are shown as figures for each question.)

DECISION BOUNDARY EXAMPLES
A series of figures steps through learned decision boundaries and individual hidden-unit activations for four 2-D datasets:
• Example #1: Diagonal Band
• Example #2: One Pocket
• Example #3: Four Gaussians
• Example #4: Two Pockets

Neural Networks Objectives
You should be able to…
• Explain
the biological motivations for a neural network
• Combine simpler models (e.g. linear regression, binary logistic regression, multinomial logistic regression) as components to build up feed-forward neural network architectures
• Explain the reasons why a neural network can model nonlinear decision boundaries for classification
• Compare and contrast feature engineering with learning features
• Identify (some of) the options available when designing the architecture of a neural network
• Implement a feed-forward neural network

Computing Gradients
DIFFERENTIATION

Background: A Recipe for Machine Learning
1. Given training data
2. Choose each of these:
– Decision function
– Loss function
3. Define goal
4. Train with SGD (take small steps opposite the gradient)

Approaches to Differentiation
• Question 1: When can we compute the gradients for an arbitrary neural network?
• Question 2: When can we make the gradient computation efficient?

1. Finite Difference Method
– Pro: Great for testing implementations of backpropagation
– Con: Slow for high-dimensional inputs / outputs
– Required: Ability to call the function f(x) on any input x
2.
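The Finite Difference Method above is easy to implement, which is exactly why it is great for testing backpropagation. A minimal sketch (names are illustrative, not from the lecture):

```python
import numpy as np

def finite_difference_grad(f, theta, eps=1e-5):
    """Approximate the gradient of f at theta with centered differences.
    Requires only the ability to call f(x) on any input x; costs two
    evaluations of f per parameter, hence slow in high dimensions."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return grad

# Example: check against the exact gradient of f(theta) = theta^T theta,
# which is 2 * theta. The two agree to high precision.
theta = np.array([1.0, -2.0, 0.5])
approx = finite_difference_grad(lambda t: t @ t, theta)
exact = 2 * theta
```

In practice one compares `approx` against the gradients produced by a backpropagation implementation; a large discrepancy signals a bug.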