Lecture 4 – Discriminant Analysis, K-Nearest Neighbors


Johan Wågberg
Division of Systems and Control, Department of Information Technology, Uppsala University
Email: [email protected]

The mini project has now started!
• Task: Classify songs according to whether Andreas likes them or not (based on features from the Spotify Web API).
• Training data: 750 songs labeled as like or dislike.
• Test data: 200 unlabeled songs. Upload your predictions to the leaderboard!
• Examination: A written report (max 6 pages).
• Instructions and data are available at the course homepage: http://www.it.uu.se/edu/course/homepage/sml/project/
• You will receive your group password by e-mail tomorrow.
• Deadline for submitting your final solution: February 21.

Summary of Lecture 3 (I/VI)

The classification problem amounts to modeling the relationship between the input x and a categorical output y, i.e., the output belongs to one out of M distinct classes. A classifier is a prediction model ŷ(x) that maps any input x into a predicted class ŷ ∈ {1, ..., M}. A common classifier predicts each input as belonging to the most likely class according to the conditional probabilities p(y = m | x) for m ∈ {1, ..., M}.

Summary of Lecture 3 (II/VI)

For binary (two-class) classification, y ∈ {−1, 1}, the logistic regression model is

    p(y = 1 | x) ≈ g(x) = e^(θᵀx) / (1 + e^(θᵀx)).

The model parameters θ are found by maximum likelihood, i.e., by (numerically) maximizing the log-likelihood function

    log ℓ(θ) = − Σᵢ₌₁ⁿ log(1 + e^(−yᵢ θᵀxᵢ)).

Summary of Lecture 3 (III/VI)

Using the common classifier, we get the prediction model

    ŷ(x) = 1 if g(x) > 0.5 (equivalently, θ̂ᵀx > 0), and ŷ(x) = −1 otherwise.

This attempts to minimize the total misclassification error. More generally, we can use

    ŷ(x) = 1 if g(x) > r, and ŷ(x) = −1 otherwise,

where 0 ≤ r ≤ 1 is a user-chosen threshold.

Summary of Lecture 3 (IV/VI)

Confusion matrix:

                          Predicted condition
                          ŷ = 0    ŷ = 1    Total
    True       y = 0      TN       FP       N
    condition  y = 1      FN       TP       P
               Total      N*       P*

For the thresholded classifier above, the numbers in the confusion matrix depend on the threshold r:
• Decreasing r ⇒ TN, FN decrease and FP, TP increase.
• Increasing r ⇒ TN, FN increase and FP, TP decrease.

Summary of Lecture 3 (V/VI)

Some commonly used performance measures are:
• True positive rate: TPR = TP/P = TP/(FN + TP) ∈ [0, 1]
• False positive rate: FPR = FP/N = FP/(FP + TN) ∈ [0, 1]
• Precision: Prec = TP/P* = TP/(FP + TP) ∈ [0, 1]

Summary of Lecture 3 (VI/VI)

[Figure: ROC curve for logistic regression, plotting true positive rate against false positive rate.]

• ROC: plot of TPR vs. FPR as r ranges from 0 to 1. (It is possible to draw ROC curves based on other tuning parameters as well.)
• Area Under Curve (AUC): condensed performance measure for the classifier, taking all possible thresholds into account.
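To make this summary concrete, here is a minimal Python/NumPy sketch (not from the slides; the function names and the synthetic data are illustrative) that fits a binary logistic regression by numerically maximizing the log-likelihood above, then sweeps the threshold r to trace ROC points and a trapezoidal estimate of the AUC:

```python
# Sketch: logistic regression by maximum likelihood + ROC/AUC over thresholds r.
# Assumes X (n x p, first column all ones for an intercept) and labels y in {-1, +1}.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, X, y):
    # -log l(theta) = sum_i log(1 + exp(-y_i * theta^T x_i)), computed stably
    margins = y * (X @ theta)
    return np.sum(np.logaddexp(0.0, -margins))

def fit_logistic(X, y):
    theta0 = np.zeros(X.shape[1])
    res = minimize(neg_log_likelihood, theta0, args=(X, y), method="BFGS")
    return res.x

def roc_curve_points(X, y, theta, thresholds):
    g = 1.0 / (1.0 + np.exp(-(X @ theta)))       # g(x) = p(y = +1 | x)
    P, N = np.sum(y == 1), np.sum(y == -1)
    tpr, fpr = [], []
    for r in thresholds:
        yhat = np.where(g > r, 1, -1)            # predict +1 if g(x) > r
        tpr.append(np.sum((yhat == 1) & (y == 1)) / P)   # TP / P
        fpr.append(np.sum((yhat == 1) & (y == -1)) / N)  # FP / N
    return np.array(fpr), np.array(tpr)

# Example usage on synthetic data:
rng = np.random.default_rng(0)
n = 200
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, 2))])
y = np.where(X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n) > 0, 1, -1)
theta_hat = fit_logistic(X, y)
fpr, tpr = roc_curve_points(X, y, theta_hat, np.linspace(0, 1, 101))
auc = -np.trapz(tpr, fpr)   # FPR decreases as r grows, hence the sign flip
print(theta_hat, auc)
```

For real use one would supply a gradient to the optimizer and lean on library routines (e.g., sklearn.metrics.roc_auc_score), but the sketch keeps the correspondence with the formulas visible.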
Contents – Lecture 4

1. Summary of lecture 3
2. Generative models
3. Linear Discriminant Analysis (LDA)
4. Quadratic Discriminant Analysis (QDA)
5. A nonparametric classifier – k-Nearest Neighbors (kNN)

Generative models

A discriminative model describes how the output y is generated given the input x, i.e., it models p(y | x). A generative model describes how both the output y and the input x are generated, via the joint distribution p(y, x). Predictions can be derived using the laws of probability,

    p(y | x) = p(y, x) / p(x) = p(y) p(x | y) / Σ_y′ p(y′, x).

Generative models for classification

We have built classifiers based on models g_m(x) of the conditional probabilities p(y = m | x). A generative model for classification can be expressed as

    p(y = m | x) = p(y = m) p(x | y = m) / Σ_{m=1}^{M} p(y = m) p(x | y = m),

where
• p(y = m) denotes the marginal (prior) probability of class m ∈ {1, ..., M},
• p(x | y = m) denotes the conditional probability density of x for an observation from class m.

Multivariate Gaussian density

The p-dimensional Gaussian (normal) probability density function with mean vector µ and covariance matrix Σ is

    N(x | µ, Σ) = 1 / ((2π)^(p/2) |Σ|^(1/2)) · exp(−½ (x − µ)ᵀ Σ⁻¹ (x − µ)),

where µ is a p × 1 vector and Σ is a p × p positive definite matrix. Let x = (x₁, ..., x_p)ᵀ ∼ N(µ, Σ). Then
• µ_j is the mean of x_j,
• Σ_jj is the variance of x_j,
• Σ_ij (i ≠ j) is the covariance between x_i and x_j.

[Figure: two surface plots of the bivariate Gaussian PDF, for Σ = (1, 0; 0, 1) and Σ = (1, 0.4; 0.4, 0.5).]

Linear discriminant analysis

We need:
1. The prior class probabilities π_m ≜ p(y = m), m ∈ {1, ..., M}.
2. The conditional probability densities of the input x, f_m(x) ≜ p(x | y = m), for each class m.

This gives us the model

    g_m(x) = π_m f_m(x) / Σ_{m=1}^{M} π_m f_m(x).

For the first task, a natural estimator is the proportion of training samples in the mth class,

    π̂_m = (1/n) Σᵢ₌₁ⁿ I{yᵢ = m} = n_m / n,

where n is the size of the training set and n_m is the number of training samples of class m.

For the second task, a simple model is to assume that f_m(x) is a multivariate normal density with mean vector µ_m and covariance matrix Σ_m,

    f_m(x) = 1 / ((2π)^(p/2) |Σ_m|^(1/2)) · exp(−½ (x − µ_m)ᵀ Σ_m⁻¹ (x − µ_m)).

If we further assume that all classes share the same covariance matrix, Σ ≜ Σ₁ = ··· = Σ_M, the remaining parameters of the model are µ₁, µ₂, ..., µ_M and Σ. These parameters are naturally estimated as the (within-class) sample means and sample covariance, respectively:

    µ̂_m = (1/n_m) Σ_{i : yᵢ = m} xᵢ,   m = 1, ..., M,

    Σ̂ = 1/(n − M) Σ_{m=1}^{M} Σ_{i : yᵢ = m} (xᵢ − µ̂_m)(xᵢ − µ̂_m)ᵀ.

Modeling the class probabilities using the normal assumptions and these parameter estimates is referred to as Linear Discriminant Analysis (LDA).

Linear discriminant analysis, summary

The LDA classifier assigns a test input x to the class m for which

    δ̂_m(x) = xᵀ Σ̂⁻¹ µ̂_m − ½ µ̂_mᵀ Σ̂⁻¹ µ̂_m + log π̂_m

is largest, with π̂_m, µ̂_m and Σ̂ estimated as above. LDA is a linear classifier (its decision boundaries are linear).

ex) LDA decision boundary

[Figure: illustration of the LDA decision boundary. The level curves of two Gaussian PDFs with the same covariance matrix intersect along a straight line, δ̂₁(x) = δ̂₂(x).]
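The LDA estimators and discriminant above translate almost line by line into code. The following Python/NumPy sketch (illustrative, not course code; it assumes integer class labels and the function names are mine) estimates π̂_m, µ̂_m and the pooled Σ̂ from training data, and classifies test points by maximizing δ̂_m(x):

```python
# Sketch: LDA with the estimators and discriminant from the slides.
# Assumes X (n x p) and integer labels y; X_test is an (n_test x p) array.
import numpy as np

def lda_fit(X, y):
    n, p = X.shape
    classes = np.unique(y)
    M = len(classes)
    pi_hat = np.zeros(M)
    mu_hat = np.zeros((M, p))
    Sigma_hat = np.zeros((p, p))
    for k, m in enumerate(classes):
        Xm = X[y == m]
        pi_hat[k] = len(Xm) / n              # pi_m = n_m / n
        mu_hat[k] = Xm.mean(axis=0)          # within-class sample mean
        R = Xm - mu_hat[k]
        Sigma_hat += R.T @ R                 # sum of (x_i - mu_m)(x_i - mu_m)^T
    Sigma_hat /= (n - M)                     # pooled covariance estimate
    return classes, pi_hat, mu_hat, Sigma_hat

def lda_predict(X_test, classes, pi_hat, mu_hat, Sigma_hat):
    Sinv = np.linalg.inv(Sigma_hat)
    # delta_m(x) = x^T Sinv mu_m - 0.5 mu_m^T Sinv mu_m + log pi_m, all m at once
    lin = X_test @ Sinv @ mu_hat.T                               # (n_test x M)
    const = -0.5 * np.sum((mu_hat @ Sinv) * mu_hat, axis=1) + np.log(pi_hat)
    return classes[np.argmax(lin + const, axis=1)]
```

In practice one would solve linear systems (np.linalg.solve) rather than form Σ̂⁻¹ explicitly, but the inverse is kept here to mirror δ̂_m(x).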
ex) Simple spam filter

We will use LDA to construct a simple spam filter:
• Output: y ∈ {spam, ham}.
• Input: x = 57-dimensional vector of “features” extracted from the email:
  • Frequencies of 48 predefined words (make, address, all, ...)
  • Frequencies of 6 predefined characters (;, (, [, !, $, #)
  • Average length of uninterrupted sequences of capital letters
  • Length of the longest uninterrupted sequence of capital letters
  • Total number of capital letters in the e-mail
• The dataset consists of 4,601 emails classified as either spam or ham (split into 75% training and 25% testing).

UCI Machine Learning Repository — Spambase Dataset (https://archive.ics.uci.edu/ml/datasets/Spambase)

LDA confusion matrices (test data):

    Threshold r = 0.5:
               ŷ = 0    ŷ = 1
    y = 0      667      23
    y = 1      102      358

    Threshold r = 0.9:
               ŷ = 0    ŷ = 1
    y = 0      687      3
    y = 1      237      223

A linear classifier uses a linear decision boundary to separate the two classes as well as possible. In ℝᵖ this is a (p − 1)-dimensional hyperplane. How can we visualize the classifier? One way: project the (labeled) test inputs onto the normal of the decision boundary.

[Figure: projection on the normal. Histograms of the projected test inputs, shown separately for y = 0 and y = 1.]

Gmail's spam filter

Official Gmail blog, July 9, 2015: “. . . the spam filter now uses an artificial neural network to detect and block the especially sneaky spam—the kind that could actually pass for wanted mail.” — Sri Harsha Somanchi, Product Manager

Official Gmail blog, February 6, 2019: “With TensorFlow, we are now blocking around 100 million additional spam messages every day.” — Neil Kumaran, Product Manager, Gmail Security

Quadratic discriminant analysis

Do we have to assume a common covariance matrix? No! Estimating a separate covariance matrix for each class leads to an alternative method, Quadratic Discriminant Analysis (QDA).
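The slides stop at the idea, so the sketch below uses the standard QDA discriminant from the literature (an assumption here, not course material): δ̂_m(x) = −½ log|Σ̂_m| − ½ (x − µ̂_m)ᵀ Σ̂_m⁻¹ (x − µ̂_m) + log π̂_m, which is quadratic in x because each class keeps its own covariance estimate. Conventions follow the LDA sketch above:

```python
# Sketch: QDA -- one covariance matrix per class (standard textbook discriminant,
# not derived on these slides). Assumes X (n x p) and integer labels y.
import numpy as np

def qda_fit(X, y):
    classes = np.unique(y)
    params = []
    for m in classes:
        Xm = X[y == m]
        mu = Xm.mean(axis=0)
        R = Xm - mu
        Sigma = R.T @ R / (len(Xm) - 1)      # per-class (unbiased) sample covariance
        params.append((len(Xm) / len(X), mu, Sigma))
    return classes, params

def qda_predict(X_test, classes, params):
    deltas = []
    for pi, mu, Sigma in params:
        Sinv = np.linalg.inv(Sigma)
        _, logdet = np.linalg.slogdet(Sigma)
        D = X_test - mu
        quad = np.sum((D @ Sinv) * D, axis=1)   # (x - mu)^T Sigma^{-1} (x - mu)
        deltas.append(-0.5 * logdet - 0.5 * quad + np.log(pi))
    return classes[np.argmax(np.stack(deltas, axis=1), axis=1)]
```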