Machine Learning Basics Lecture 6: Overfitting
Princeton University COS 495
Instructor: Yingyu Liang

Review: machine learning basics

Math formulation
• Given training data $(x_i, y_i)$, $1 \le i \le n$, drawn i.i.d. from a distribution $D$
• Find $y = f(x) \in \mathcal{H}$ that minimizes the empirical loss $\hat{L}(f) = \frac{1}{n} \sum_{i=1}^{n} l(f, x_i, y_i)$
• s.t. the expected loss $L(f) = \mathbb{E}_{(x,y) \sim D}[l(f, x, y)]$ is small

Machine learning 1-2-3
• Collect data and extract features (feature mapping)
• Build model: choose hypothesis class $\mathcal{H}$ and loss function $l$ (maximum likelihood; Occam's razor)
• Optimization: minimize the empirical loss (gradient descent; convex optimization)
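To make the formulation concrete, here is a minimal sketch of empirical loss minimization (not from the lecture; the toy data distribution, squared loss, and step size are illustrative assumptions), fitting a linear hypothesis by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data (x_i, y_i), 1 <= i <= n, drawn i.i.d. from an assumed toy
# distribution D: y = 2x + 1 plus Gaussian noise.
n = 100
x = rng.uniform(-1, 1, size=n)
y = 2 * x + 1 + 0.1 * rng.normal(size=n)

def empirical_loss(a0, a1):
    """L_hat(f) = (1/n) sum_i l(f, x_i, y_i) with squared loss."""
    return np.mean((a0 + a1 * x - y) ** 2)

# Optimization step of "machine learning 1-2-3": gradient descent on (a0, a1).
a0, a1 = 0.0, 0.0
lr = 0.1  # learning rate, chosen arbitrarily for this sketch
for _ in range(500):
    residual = a0 + a1 * x - y
    a0 -= lr * np.mean(2 * residual)      # d L_hat / d a0
    a1 -= lr * np.mean(2 * residual * x)  # d L_hat / d a1

print(f"fit: y = {a0:.2f} + {a1:.2f}x, training loss = {empirical_loss(a0, a1):.4f}")
```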

Overfitting

Linear vs nonlinear models
• A linear classifier $y = \mathrm{sign}(w^T \phi(x) + b)$ on the degree-2 polynomial-kernel feature mapping $\phi(x) = (x_1^2,\, x_2^2,\, \sqrt{2} x_1 x_2,\, \sqrt{2c}\, x_1,\, \sqrt{2c}\, x_2,\, c)$ is a nonlinear model in the original input $x$

Linear vs nonlinear models
• Linear model: $f(x) = a_0 + a_1 x$
• Nonlinear model: $f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots + a_M x^M$
• Linear model ⊆ nonlinear model (one can always set $a_i = 0$ for $i > 1$)
• It looks as if the nonlinear model can always achieve the same or smaller error
• Why, then, use Occam's razor and choose a smaller hypothesis class?

Example: regression using a polynomial curve
• Data: $t = \sin(2\pi x) + \epsilon$
• Regression using a polynomial of degree $M$: as $M$ increases, the fitted curve tracks the training points ever more closely, eventually fitting the noise rather than $\sin(2\pi x)$
(Figures from Pattern Recognition and Machine Learning, Bishop)
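The following sketch recreates this experiment (the sample sizes, noise level, and degrees tried are my assumptions, not the lecture's), fitting polynomials of increasing degree by least squares and comparing training and test error:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    """Draw (x, t) with t = sin(2*pi*x) + Gaussian noise."""
    x = rng.uniform(0, 1, size=n)
    return x, np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)

x_train, t_train = sample(10)   # small training set, as in Bishop's figures
x_test, t_test = sample(1000)   # large held-out set approximates the expected loss

for M in (0, 1, 3, 9):
    # Least-squares fit of a degree-M polynomial.
    coeffs = np.polyfit(x_train, t_train, deg=M)
    train_err = np.mean((np.polyval(coeffs, x_train) - t_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - t_test) ** 2)
    print(f"M={M}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Typically the training error decreases monotonically in $M$ while the test error eventually increases, which is the overfitting pattern the slides illustrate.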
Prevent overfitting
• Empirical loss and expected loss are different; they are also called training error and test/generalization error
• The larger the data set, the smaller the difference between the two
• The larger the hypothesis class, the easier it is to find a hypothesis that fits the difference between the two, and thus has small training error but large test error (overfitting)
• A larger data set helps! (use experience/data to prune hypotheses)
• Throwing away useless hypotheses also helps! (use prior knowledge/model to prune hypotheses)

Prior vs data

Prior vs experience
• Super strong prior knowledge: $\mathcal{H} = \{f^*\}$, where $f^*$ is the best function; no data is needed!
• Strong prior knowledge: $\mathcal{H} = \{f^*, f_1\}$; a few data points suffice to detect $f^*$
• Super large data set: with infinite data, $\mathcal{H}$ can be all functions!
• Practical scenarios: finite data, an $\mathcal{H}$ of medium capacity, and $f^*$ in or not in $\mathcal{H}$ (illustrated with classes $\mathcal{H}_1$, $\mathcal{H}_2$)
• Practical scenarios lie between the two extreme cases: $\mathcal{H} = \{f^*\}$ ← practice → infinite data

General phenomenon
(Figure from Deep Learning, Goodfellow, Bengio and Courville: training and generalization error as a function of model capacity)

Cross validation

Model selection
• How to choose the optimal capacity? E.g., choose the best degree $M$ for polynomial curve fitting
• This cannot be done with the training data alone
• Create held-out data to approximate the test error, called the validation data set

Model selection: cross validation
• Partition the training data into several groups
• Each time, use one group as the validation set and train on the rest
(Figure from Pattern Recognition and Machine Learning, Bishop)
• Also used for selecting other hyper-parameters of the model/algorithm, e.g., the learning rate or stopping criterion of SGD
• Pros: general, simple
• Cons: computationally expensive; even worse when there are more hyper-parameters
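As a concrete sketch of this procedure (my own illustration; the fold count $k = 5$ and the candidate degrees are arbitrary choices), k-fold cross-validation for selecting the polynomial degree could look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data: t = sin(2*pi*x) + noise, the same toy problem as above.
n = 30
x = rng.uniform(0, 1, size=n)
t = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)

# Partition the training data into k groups (folds), fixed across all models.
k = 5
folds = np.array_split(rng.permutation(n), k)

def cross_val_error(M):
    """Average validation MSE of a degree-M polynomial over the k folds."""
    errs = []
    for i in range(k):
        val = folds[i]  # one group as the validation set
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[tr], t[tr], deg=M)
        errs.append(np.mean((np.polyval(coeffs, x[val]) - t[val]) ** 2))
    return float(np.mean(errs))

scores = {M: cross_val_error(M) for M in range(8)}
best_M = min(scores, key=scores.get)
print({M: round(e, 3) for M, e in scores.items()})
print("selected degree:", best_M)
```

The "computationally expensive" con is visible here: the full fitting procedure runs $k$ times per candidate degree, and the cost multiplies with every additional hyper-parameter searched.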

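Finally, a small experiment sketch for the earlier claim that a larger data set shrinks the gap between training and test error (the sample sizes and the fixed degree are illustrative assumptions): hold the hypothesis class fixed and grow $n$.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample(n):
    x = rng.uniform(0, 1, size=n)
    return x, np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)

x_test, t_test = sample(5000)  # large held-out set approximates the expected loss

# Fix a large hypothesis class (degree-9 polynomials) and grow the training set;
# the gap between training and test error should shrink as n grows.
for n in (10, 100, 1000):
    x_tr, t_tr = sample(n)
    coeffs = np.polyfit(x_tr, t_tr, deg=9)
    train_err = np.mean((np.polyval(coeffs, x_tr) - t_tr) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - t_test) ** 2)
    print(f"n={n:4}: train MSE {train_err:.3f}, test MSE {test_err:.3f}, "
          f"gap {test_err - train_err:.3f}")
```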