Introduction to machine learning and pattern recognition
Lecture 1
Coryn Bailer-Jones
http://www.mpia.de/homes/calj/mlpr_mpia2008.html

What is machine learning?

● Data description and interpretation
  – finding simpler relationships between variables (predictors and responses)
  – discovering “natural” groups or latent parameters in data
  – relating observables to physical quantities
● Prediction
  – capturing the relationship between “inputs” and “outputs” for a set of labelled data, with the goal of predicting outputs for unlabelled data (“pattern recognition”)
● Learning from data
  – dealing with noise
  – coping with high dimensions (many potentially relevant variables)
  – fitting models to data
  – generalizing solutions

Parameter estimation from stellar spectra

● learn the mapping from spectra to parameters using labelled examples
● multidimensional (a few hundred dimensions)
● nonlinear
● an inverse problem
[figure from Willemsen et al. (2005)]

Source classification

● distinguish between stars, galaxies and quasars based on colours or low-resolution spectroscopy
● can simulate spectra of known classes
● much variance within these basic classes
[figures from www.astro.princeton.edu and Tsalmantza et al. (2007)]

Course objectives

● learn the basic concepts of machine learning
● learn about some machine learning algorithms
  – classification
  – regression
  – clustering
● appreciate why and when these methods are required
● gain some understanding of techniques used in the literature
● promote the use of useful techniques in your research
● introduction to R

Course methodology

● emphasis is on
  – principles
  – specific techniques
● some examples
● some maths is essential, but few derivations
● the slides both contain more than is covered and are incomplete
● R scripts are on the web page

Course content (nominal plan)

1) supervised vs. unsupervised learning; linear methods; nearest neighbours; curse of dimensionality; regularization & generalization; k-means clustering
2) density estimation; linear discriminant analysis; nonlinear regression; kernels and basis functions
3) neural networks; separating hyperplanes; support vector machines
4) principal components analysis; mixture models; model selection; self-organizing maps

R (S, S-PLUS)

● http://www.r-project.org
● “a language and environment for statistical computing and graphics”
● open source
● runs on Linux, Windows, MacOS
● top-level operations for vectors and matrices
● object-oriented
● large number of statistical and machine learning packages
● can link to external code (e.g. C, C++, Fortran)
● a good book on using R for statistics:
  – Modern Applied Statistics with S, Venables & Ripley, 2002, Springer (“MASS4”)

Two types of learning problem

● supervised learning
  – predictors (x) and responses (y)
  – infer P(y | x), perhaps modelled as f(x; w)
  – learn the model parameters, w, by training on labelled data
  – discrete y is a classification problem; real-valued y is regression
  – examples: trees, nearest neighbours, neural networks, SVMs
● unsupervised learning
  – no distinction between predictors and responses
  – infer P(x), or things about it, e.g.
    ● the number of modes/classes (mixture modelling, peak finding)
    ● low-dimensional projections (descriptions)
    ● outlier detection (discovery)
  – examples: PCA, ICA, k-means clustering, MDS, SOM

Linear regression and least squares

N data vectors, each of p dimensions, regressed against a response variable y.

Data: $(\mathbf{x}_i, y_i)$, where $\mathbf{x} = (x_1, x_2, \ldots, x_j, \ldots, x_p)$
Model: $y = \mathbf{x}^T \beta$
Least squares solution: $\hat\beta = \arg\min_\beta \sum_{i=1}^{N} \big( y_i - \sum_{j=1}^{p} x_{i,j}\,\beta_j \big)^2$

In matrix form this is $\mathrm{RSS}(\beta) = (\mathbf{y} - X\beta)^T (\mathbf{y} - X\beta)$; minimizing with respect to $\beta$, the solution is $\hat\beta = (X^T X)^{-1} X^T \mathbf{y}$.

Linear regression and linear least squares: more details

Fit a linear model to a set of data $\{y, \mathbf{x}\}$:
$y = \beta_0 + \sum_{j=1}^{p} x_j \beta_j = \mathbf{x}^T \beta$
where $\mathbf{x}$ includes a 1 for the intercept, so $\mathbf{x}$ and $\beta$ are $(p+1) \times 1$ column vectors.

Determine the parameters by minimizing the sum-of-squares error over all N training data:
$\mathrm{RSS}(\beta) = \sum_{i=1}^{N} (y_i - \mathbf{x}_i^T \beta)^2$, i.e. $\min_\beta \lVert \mathbf{y} - X\beta \rVert^2 = \min_\beta (\mathbf{y} - X\beta)^T (\mathbf{y} - X\beta)$
where X is an $N \times (p+1)$ matrix.

This is quadratic in $\beta$, so it always has a minimum. Differentiating with respect to $\beta$:
$X^T (\mathbf{y} - X\beta) = 0 \quad\Rightarrow\quad X^T X \beta = X^T \mathbf{y}$
If $X^T X$ (the “information matrix”) is non-singular, then the unique solution is
$\hat\beta = (X^T X)^{-1} X^T \mathbf{y}$
The predicted values are $\hat{\mathbf{y}} = X\hat\beta = X (X^T X)^{-1} X^T \mathbf{y}$, where $H = X (X^T X)^{-1} X^T$ is sometimes called the “hat” matrix.

Linear regression

● see the R scripts on the web page
  – taken from section 1.3 of Venables & Ripley
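The course scripts themselves are on the web page. As a separate, minimal sketch (not one of those scripts; the simulated data and all variable names here are invented for illustration), the normal-equations solution above can be computed directly in R and checked against the built-in lm() fit:

  # simulate N data points with p = 2 predictors; true beta = (3, 2, -1)
  set.seed(1)
  N  <- 100
  x1 <- runif(N)
  x2 <- runif(N)
  y  <- 3 + 2*x1 - x2 + rnorm(N, sd=0.1)

  X    <- cbind(1, x1, x2)                  # N x (p+1) design matrix (1 = intercept)
  beta <- solve(t(X) %*% X, t(X) %*% y)     # solves X^T X beta = X^T y
  H    <- X %*% solve(t(X) %*% X) %*% t(X)  # the 'hat' matrix
  yhat <- H %*% y                           # predicted values

  coef(lm(y ~ x1 + x2))                     # should agree with beta

Note that solve(A, b) solves the linear system directly, which is numerically preferable to forming the explicit inverse $(X^T X)^{-1}$.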
Model testing: cross-validation

● solve for the model parameters using a training data set, but evaluate performance on an independent set to avoid “overtraining”
● to determine the generalization performance and (in particular) to optimize free parameters, we can split/sample the available data (see the R sketch after the next slide):
  – train/test sets
  – N-fold cross-validation (N models)
  – train, test and evaluation sets
  – bootstrapping

Nearest neighbours (χ² minimization)

New object: $s = (s_1, s_2, \ldots, s_i, \ldots, s_I)$. Assign the class (or parameters) of the nearest template in a labelled grid of K object templates $\{x^k\}$, $k = 1, \ldots, K$:
$\min_k \sum_i (s_i - x_i^k)^2$
In general we can apply weights to each dimension:
$\min_k \sum_i \frac{(s_i - x_i^k)^2}{w_i^2}$
χ² minimization is the case where the weights estimate the standard deviation of the noise. In k-nn we estimate the parameters from the average of the k nearest neighbours (see the second sketch below).
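The slides give no code for cross-validation; the following is a minimal sketch in R of the N-fold scheme (here 5 folds, with k denoting the number of folds to avoid clashing with the N data points), using a simple lm() fit. The data and variable names are invented for illustration:

  # 5-fold cross-validation of a linear model (illustrative sketch)
  set.seed(2)
  N <- 100
  k <- 5
  x <- runif(N)
  y <- sin(2*pi*x) + rnorm(N, sd=0.2)
  folds <- sample(rep(1:k, length.out=N))   # random fold assignment

  cv.rss <- 0
  for (i in 1:k) {
    train <- data.frame(x=x[folds != i], y=y[folds != i])
    test  <- data.frame(x=x[folds == i], y=y[folds == i])
    fit   <- lm(y ~ x, data=train)          # train on the other k-1 folds
    pred  <- predict(fit, newdata=test)     # predict on the held-out fold
    cv.rss <- cv.rss + sum((test$y - pred)^2)
  }
  cv.rss/N   # cross-validated mean squared error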
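Similarly, the weighted nearest-template assignment of the previous slide takes only a few lines of R. The template grid, class labels and weights below are invented for illustration:

  # weighted nearest-template (chi^2-style) classification sketch
  set.seed(3)
  K <- 50   # number of templates in the labelled grid
  I <- 10   # dimensions per object
  templates <- matrix(rnorm(K*I), nrow=K, ncol=I)
  labels    <- sample(c("star", "galaxy", "quasar"), K, replace=TRUE)
  w         <- rep(1, I)   # per-dimension weights, e.g. the noise s.d.

  nearest <- function(s) {
    d2 <- colSums((t(templates) - s)^2 / w^2)  # weighted squared distance to each template
    labels[which.min(d2)]
  }
  nearest(rnorm(I))   # classify one new object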
2-class classification: K-nearest neighbours

[figure from Hastie, Tibshirani & Friedman (2001)]

Classification via linear regression

Fit $\hat f_{G_k}(x) = \hat\beta_k^T x$, with the parameters estimated by least squares: $\min_\beta \lVert f - X\beta \rVert^2$. For the two-class example in the figure, the indicator responses are coded 0 for the green class and 1 for the red class; the decision boundary is where $\hat f_{G_1}(x) = \hat f_{G_2}(x) = 0.5$.

[figure from Hastie, Tibshirani & Friedman (2001)]

Classification via linear regression: more details

K classes are represented by K indicators $Y_k$, $k = 1, \ldots, K$, with $Y_k = 1$ if $G = k$ and $Y_k = 0$ otherwise. $Y$ is then the $N \times K$ indicator response matrix. Fitting the linear regression model to all columns of $Y$ simultaneously gives
$\hat Y = X \hat B = X (X^T X)^{-1} X^T Y$
where $\hat B$ is a $(p+1) \times K$ coefficient matrix. To classify a new observation with input $x$ (see the first sketch at the end of these notes):
1. compute the $1 \times K$ vector $\hat y(x) = (1, x^T)\,\hat B$
2. assign $\hat G(x) = \arg\max_k \hat y_k(x)$
This can be interpreted/motivated probabilistically: $E(Y_k \mid X = x) = \Pr(G = k \mid X = x)$.

Comparison

● linear model
  – makes a very strong assumption about the data, viz. that it is well approximated by a globally linear function
  – stable but biased
  – we learn the relationship between (X, y) and encapsulate it in the parameters β
● K-nearest neighbours
  – makes no assumption about the functional form of the relationship (X, y), i.e. it is nonparametric
  – but does assume that the function is well approximated by a locally constant function
  – less stable but less biased
  – no free parameters to learn, so application to new data is relatively slow: a brute-force search for neighbours takes O(N)

Which solution is optimal?

● if we know nothing about how the data were generated (underlying model, noise), we don't know
● if the data are drawn from two uncorrelated Gaussians, a linear decision boundary is “optimal”
● if the data are drawn from a mixture of multiple distributions, a linear boundary is not optimal (nonlinear, disjoint)
● what is optimal?
  – smallest generalization error
  – a simple solution (interpretability)
● more complex models permit lower errors on the training data
  – but we want models to generalize
  – we need to control complexity/nonlinearity (regularization)

Learning, generalization and regularization

● see the R scripts on the web page

Bias-variance decomposition

● the error in a fit or prediction can be divided into three components:
  – error = bias² + variance + irreducible error
● bias measures the degree to which our estimates typically differ from the truth
● variance is the extent to which our estimates vary or scatter (e.g. as a result of using slightly different data, small changes in the parameters, etc.)
  – the expected standard deviation of the estimate around the mean estimate
● in practice, permitting a small amount of bias in a model can lead to a large reduction in variance (and thus in the total error)

Bias-variance trade-off

Local polynomial regression with a variable kernel, using locpoly{KernSmooth} (see the second sketch at the end of these notes):
– black line: the true function; black points: the data set
– blue line: h = 0.02, “complex”/“rough”: low bias, high variance
– red line: h = 0.5, “simple”/“smooth”: high bias, low variance

Bias and variance

[figure from Hastie, Tibshirani & Friedman (2001)]

Limitations of nearest neighbours (and χ² minimization)

● how do we determine appropriate weights $w_i$ in $\min_k \sum_i (s_i - x_i^k)^2 / w_i^2$ for each dimension?
  – we need a “significance” measure, which the noise s.d. is not
● limited to a constant number of neighbours (or a constant volume)
  – no adaptation to the variable density of the grid
● problematic for multiple parameter estimation
  – “strong” vs. “weak” parameters
● curse of dimensionality
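Two minimal R sketches referred to above. First, classification via least-squares regression on an indicator matrix, as on the “more details” slide; the two-Gaussian data set and all names here are invented for illustration:

  # classification by least-squares regression on a K-column indicator matrix
  set.seed(4)
  N <- 100
  X <- rbind(matrix(rnorm(N, mean=0), N/2, 2),   # class 1: Gaussian near (0,0)
             matrix(rnorm(N, mean=2), N/2, 2))   # class 2: Gaussian near (2,2)
  G <- rep(1:2, each=N/2)
  Y <- outer(G, 1:2, "==") * 1                   # N x K indicator response matrix
  Xa <- cbind(1, X)                              # prepend the intercept column
  B  <- solve(t(Xa) %*% Xa, t(Xa) %*% Y)         # (p+1) x K coefficient matrix

  classify <- function(x) which.max(c(1, x) %*% B)  # argmax over the K fitted values
  classify(c(0, 0))   # should give class 1
  classify(c(2, 2))   # should give class 2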
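Second, the bias-variance trade-off figure can be reproduced with locpoly from the KernSmooth package (shipped with R). The true function and data below are invented; only the two bandwidths are taken from the slide:

  # local polynomial regression at two bandwidths (KernSmooth::locpoly)
  library(KernSmooth)
  set.seed(5)
  x <- runif(200)
  y <- sin(4*pi*x) + rnorm(200, sd=0.3)             # noisy data around a smooth function

  plot(x, y, pch=16, cex=0.5)                       # black points: data set
  curve(sin(4*pi*x), add=TRUE)                      # black line: true function
  lines(locpoly(x, y, bandwidth=0.02), col="blue")  # rough fit: low bias, high variance
  lines(locpoly(x, y, bandwidth=0.5),  col="red")   # smooth fit: high bias, low variance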
