Definition of the pliable lasso · Optimization

Statistical Learning and Sparsity, with Applications to Biomedicine
Rob Tibshirani
Departments of Biomedical Data Science & Statistics, Stanford University
AIStats 2019

Outline
1. Some general comments about supervised learning, statistical approaches, and deep learning
2. Example: predicting platelet usage at Stanford Hospital
3. Two recent advances:
• Principal components lasso [combines PC regression and sparsity]
• Pliable lasso [enables the lasso model to vary across the feature space]

For Statisticians: 15 minutes of fame
• 2009: "I keep saying the sexy job in the next ten years will be statisticians." (Hal Varian, Chief Economist, Google)
• 2012: "Data Scientist: The Sexiest Job of the 21st Century" (Harvard Business Review)

Sexiest man alive?

The Supervised Learning Paradigm
Training Data → Fitting → Prediction
Traditional statistics: domain experts work for 10 years to learn good features; they bring the statistician a small, clean dataset.
Today's approach: we start with a large dataset with many features, and use a machine learning algorithm to find the good ones. A huge change.

This talk is about supervised learning: building models from data for predicting an outcome using a collection of input features.
Big data vary in shape, and different shapes call for different approaches.

Wide data (thousands or millions of variables, hundreds of samples): lasso & elastic net. We have too many variables and are prone to overfitting. The lasso fits linear models that are sparse in the variables, and does automatic variable selection.

Tall data (tens or hundreds of variables, thousands or tens of thousands of samples): random forests & gradient boosting. Sometimes simple (linear) models don't suffice.
We have enough samples to fit nonlinear models with many interactions, and not too many variables. A random forest is an automatic and powerful way to do this.

The Elephant in the Room: DEEP LEARNING
Will it eat the lasso and other statistical models?

The Lasso
The lasso is an estimator defined by the following optimization problem:

minimize_{β_0, β}  (1/2) Σ_i (y_i − β_0 − Σ_j x_ij β_j)^2   subject to   Σ_j |β_j| ≤ s

• The ℓ1 penalty ⟹ sparsity (feature selection)
• Convex problem (good for computation and theory)
• Our lab has written an open-source R package called glmnet for fitting lasso models (Friedman, Hastie, Simon, Tibshirani). Available on CRAN; more than one million downloads!

Lasso and black holes
Apparently, sparse modelling and the lasso played an important role in the recent reconstruction of the black hole image, work done in part by Japanese scientists.

Deep Nets / Deep Learning
[Figure: neural network diagram with inputs x1–x4 (layer L1), a single hidden layer (L2), and an output layer (L3) producing f(x).]
The hidden layer derives transformations of the inputs (nonlinear transformations of linear combinations), which are then used to model the output.

Back to the Elephant
What makes deep nets so powerful (and challenging to analyze): it's not one "mathematical model" but a customizable framework, a set of engineering tools that can exploit the special aspects of the problem (weight sharing, convolution, feedback, recurrence, ...).

Will deep nets eat the lasso and other statistical models?
Not in cases where
• we have a moderate number of observations, or wide data (#obs < #features),
• the signal-to-noise ratio is low, or
• interpretability is important.

In Praise of Simplicity
"Simplicity is the ultimate sophistication" (Leonardo da Vinci)
• Many times I have been asked to review a data analysis by a biology postdoc or a company employee. Almost every time, it is unnecessarily complicated: multiple steps, each one poorly justified.
• Why? I think we all like to justify, internally and externally, our advanced degrees. And then there's the "hit everything with deep learning" problem.
• Suggestion: always try simple methods first. Move on to more complex methods only if necessary.

How many units of platelets will the Stanford Hospital need tomorrow?

Allison Zemek, Tho Pham, Saurabh Gombar, Leying Guan, Xiaoying Tian, Balasubramanian Narasimhan

Background
• Each day Stanford Hospital orders some number of units (bags) of platelets from the Stanford Blood Center, based on the estimated need (roughly 45 units).
• The daily needs are estimated "manually".
• Platelets have just 5 days of shelf life; they are safety-tested for 2 days, hence are usable for just 3 days.
• Currently about 1400 units (bags) are wasted each year. That's about 8% of the total number ordered.
• There is rarely any shortage (shortage is bad but not catastrophic).
• Can we do better?

Data overview
[Figure: daily platelet consumption, and the three days' total consumption, plotted over roughly 700 days.]

Data description
Daily platelet use from 2/8/2013 to 2/8/2015.
• Response: number of platelet transfusions on a given day.
• Covariates:
1.
Complete blood count (CBC) data: platelet count, white blood cell count, red blood cell count, hemoglobin concentration, number of lymphocytes, ...
2. Census data: location of the patient, admission date, discharge date, ...
3. Surgery schedule data: scheduled surgery date, type of surgical service, ...
4. ...

Notation
• y_i: actual PLT usage on day i
• x_i: amount of new PLT that arrives on day i
• r_i(k): remaining PLT which can be used in the following k days, k = 1, 2
• w_i: PLT wasted on day i
• s_i: PLT shortage on day i
Overall objective: waste as little as possible, with little or no shortage.

Our first approach
• Build a supervised learning model (via the lasso) to predict usage y_i for the next three days (other methods, like random forests or gradient boosting, didn't give better accuracy).
• Use the estimates ŷ_i to decide how many units x_i to order, adding a buffer to the predictions to ensure there is no shortage. Do this in a "rolling" manner.
• Worked quite well, reducing waste to 2.8%, but the loss function here is not ideal.

More direct approach
This approach minimizes the waste directly:

J(β) = Σ_{i=1}^n w_i + λ ||β||_1    (1)

where

three days' total need:   t_i = z_i^T β,  for all i = 1, 2, ..., n    (2)
number to order:          x_{i+3} = t_i − r_i(1) − r_i(2) − x_{i+1} − x_{i+2}    (3)
waste:                    w_i = [r_{i−1}(1) − y_i]_+    (4)
actual remaining:         r_i(1) = [r_{i−1}(2) + r_{i−1}(1) − y_i − w_i]_+    (5)
                          r_i(2) = [x_i − [y_i + w_i − r_{i−1}(2) − r_{i−1}(1)]_+]_+    (6)
constraint (fresh bags remaining): r_i(2) ≥ c_0    (7)

This can be shown to be a convex problem (a linear program).

Results
We chose sensible features: previous platelet use, day of week, and number of patients in key wards.
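The waste/remaining-stock recursion in equations (4)-(6) can be rolled forward on historical usage to score an ordering policy by its total waste. Below is a minimal Python sketch of that recursion; the function and variable names are illustrative, not taken from the talk's software:

```python
def simulate_inventory(orders, usage):
    """Roll the inventory recursion forward one day at a time.

    orders[i] is x_i (fresh units arriving on day i); usage[i] is y_i
    (units actually transfused on day i). Returns total waste over the horizon.
    """
    r1 = r2 = 0.0        # r_{i-1}(1), r_{i-1}(2): units with 1 or 2 usable days left
    total_waste = 0.0
    for x, y in zip(orders, usage):
        w = max(r1 - y, 0.0)                  # (4): one-day-left bags that expire unused
        new_r1 = max(r2 + r1 - y - w, 0.0)    # (5): two-day bags age into one-day bags
        overflow = max(y + w - r2 - r1, 0.0)  # demand not covered by older stock
        new_r2 = max(x - overflow, 0.0)       # (6): fresh bags left after covering overflow
        total_waste += w
        r1, r2 = new_r1, new_r2
    return total_waste
```

For example, ordering 3 units with zero usage wastes all 3 bags once they reach the end of their usable life, while orders that track usage produce no waste.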
Over 2 years of backtesting: no shortage, and waste reduced from 1400 bags/year (8%) to just 339 bags/year (1.9%). This corresponds to a predicted direct savings at Stanford of $350,000/year; if implemented nationally, it could result in approximately $110 million in savings.

Moving forward
• The system has just been deployed at the Stanford Blood Center (as an R Shiny app).
• We are distributing the software around the world, for other centers to train and deploy.
• See the platelet inventory R package: https://bnaras.github.io/pip/

pcLasso: the lasso meets principal components regression (PCR)
Joint work with Ken Tay and Jerome Friedman
• Given a set of features, principal components regression computes the first few PCs z_1, z_2, ..., z_k and does a regression of y on these derived variables.
• PCR is a powerful way of capturing the main sources of variability, and hopefully signal, in the data. But it doesn't provide sparsity.
• How can we combine PCR and the lasso?

The Principal Components Lasso
• Let X = U D V^T be the singular value decomposition of X. The columns of V contain the PCs. The pcLasso minimizes

J(β) = (1/2n) ||y − Xβ||^2 + λ ||β||_1 + (θ/2) β^T V D_{d_1^2 − d_j^2} V^T β    (9)

• The values d_j^2 are the eigenvalues of X^T X, with d_1 ≥ d_2 ≥ ···
In words: the pcLasso gives no penalty ("a free ride") to the part of β that lines up with the first PC, and increasing penalties to components that line up with the second, third, etc. components.
• The choice D = I results in the ridge penalty θ Σ_j β_j^2 and gives the elastic net.
• The parameter θ ≥ 0 controls the rate of increase in the penalty.

Eigenvalues and shrinkage factors
[Figure]

Contours of penalty functions
[Figure]

Three-dimensional case: θ increases as we move from left to right
[Figure]

Where it gets more interesting: grouped predictors
Suppose our features come in pre-defined groups, like gene pathways, protein networks, or groups formed by clustering.
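The quadratic term in the pcLasso criterion (9) can be evaluated directly from the SVD of X. Below is a minimal numpy sketch with a hypothetical function name; it illustrates the formula on the slide, not the authors' pcLasso package:

```python
import numpy as np

def pclasso_penalty(X, beta, theta):
    """Evaluate (theta/2) * beta^T V D_{d_1^2 - d_j^2} V^T beta from the SVD of X."""
    _, d, Vt = np.linalg.svd(X, full_matrices=False)
    shrink = d[0] ** 2 - d ** 2   # d_1^2 - d_j^2: zero for the first PC, growing after
    z = Vt @ beta                 # coordinates of beta in the principal-component basis
    return 0.5 * theta * np.sum(shrink * z ** 2)
```

Note the "free ride": a coefficient vector aligned with the first principal component incurs zero quadratic penalty, while alignment with the second, third, etc. components is penalized increasingly heavily.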