Word Embedding Techniques for Malware Classification
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Harmless Overfitting: Using Denoising Autoencoders in Estimation of Distribution Algorithms
Malte Probst (Honda Research Institute EU, Offenbach, Germany) and Franz Rothlauf (Johannes Gutenberg-Universität Mainz, Germany). Journal of Machine Learning Research 21 (2020) 1-31.

Abstract: Estimation of Distribution Algorithms (EDAs) are metaheuristics where learning a model and sampling new solutions replace the variation operators recombination and mutation used in standard Genetic Algorithms. The choice of these models, as well as the corresponding training processes, is subject to the bias/variance tradeoff, also known as under- and overfitting: simple models cannot capture complex interactions between problem variables, whereas complex models are susceptible to modeling random noise. This paper suggests using Denoising Autoencoders (DAEs) as generative models within EDAs (DAE-EDA). The resulting DAE-EDA is able to model complex probability distributions. Furthermore, overfitting is less harmful, since DAEs overfit by learning the identity function. This overfitting behavior introduces unbiased random noise into the samples, which is no major problem for the EDA but just leads to higher population diversity. As a result, DAE-EDA runs for more generations before convergence and searches promising parts of the solution space more thoroughly. We study the performance of DAE-EDA on several combinatorial single-objective optimization problems. In comparison to the Bayesian Optimization Algorithm, DAE-EDA requires a similar number of evaluations of the objective function but is much faster and can be parallelized efficiently, making it the preferred choice especially for large and difficult optimization problems.
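To make the model-and-sample loop concrete, here is a minimal sketch of a DAE-EDA on a OneMax-style problem. The network size, corruption rate, and truncation selection below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a DAE-EDA generation loop on OneMax (maximize the number
# of ones). Hyperparameters and selection scheme are assumptions for the demo.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_bits, pop_size, n_gens, noise = 32, 200, 20, 0.1

def fitness(pop):
    return pop.sum(axis=1)          # OneMax: count of ones per solution

def corrupt(bits):
    flip = rng.random(bits.shape) < noise
    return np.where(flip, 1.0 - bits, bits)   # random bit-flip corruption

pop = rng.integers(0, 2, size=(pop_size, n_bits)).astype(float)
for gen in range(n_gens):
    parents = pop[np.argsort(fitness(pop))[-pop_size // 2:]]  # keep best half
    dae = MLPRegressor(hidden_layer_sizes=(n_bits,), max_iter=300)
    dae.fit(corrupt(parents), parents)        # learn to denoise good solutions
    seeds = parents[rng.integers(0, len(parents), size=pop_size)]
    pop = (dae.predict(corrupt(seeds)) > 0.5).astype(float)  # sample new population
print("best fitness:", int(fitness(pop).max()), "of", n_bits)
```

Note how sampling works by corrupting selected solutions and passing them through the denoiser, so an overfit DAE (one that learned the identity) merely returns noisy copies of good solutions, which is exactly the harmless diversity the abstract describes.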
More Perceptron
Instructor: Wei Xu. Some slides adapted from Dan Jurafsky, Brendan O'Connor, and Marine Carpuat.

- A biological neuron: https://www.youtube.com/watch?v=6qS83wD29PY
- A perceptron is an artificial neuron: it computes a weighted sum of its input features.
- Online learning: rather than making a full pass through the data, compute the gradient and update the parameters after each training example. In stochastic gradient descent, the updates (or more precisely, the gradients) will be less accurate, but the overall effect is to move in the right direction; it often works well and converges faster than batch learning. The general template:

      Initialize weight vector w = 0
      Create features
      Loop for K iterations
          Loop for all training examples x_i, y_i
              update_weights(w, x_i, y_i)

- Perceptron algorithm: very similar to logistic regression, but it does not exactly compute a gradient (it is simpler). It is error-driven:

      Initialize weight vector w = 0
      Loop for K iterations
          Loop for all training examples x_i, y_i
              if sign(w * x_i) != y_i
                  w += y_i * x_i

- Error-driven intuition: for a given example, the perceptron makes a prediction, then checks whether this prediction is correct. If the prediction is correct, do nothing; if it is wrong, change the parameters so that it would do better on this example next time around.
- Perceptron vs. LR: the perceptron's only hyperparameter is the maximum number of iterations (LR also needs a learning rate). The perceptron is guaranteed to converge if the data is linearly separable; LR always converges because its objective function is convex.
- What if the data is not linearly separable? In real-world problems, this is nearly always the case.
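As a concrete illustration of the error-driven update above, here is a minimal runnable sketch; the synthetic data and {-1, +1} labels are assumptions for the demo.

```python
# Runnable sketch of the perceptron update rule: predict, and only change the
# weights when the prediction is wrong.
import numpy as np

def perceptron_train(X, y, n_iters=10):
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        for x_i, y_i in zip(X, y):
            if np.sign(w @ x_i) != y_i:   # wrong prediction: error-driven update
                w += y_i * x_i            # otherwise do nothing
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # linearly separable labels
w = perceptron_train(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```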
Incorporating Automated Feature Engineering Routines Into Automated Machine Learning Pipelines by Wesley Runnels
Master of Engineering thesis by Wesley Runnels, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, May 2020. Thesis supervisor: Tim Kraska.

Abstract: Automating the construction of consistently high-performing machine learning pipelines has remained difficult for researchers, especially given the domain knowledge and expertise often necessary for achieving optimal performance on a given dataset. In particular, the task of feature engineering, a key step in achieving high performance for machine learning tasks, is still mostly performed manually by experienced data scientists. In this thesis, building upon the results of prior work in this domain, we present a tool, rl_feature_eng, which automatically generates promising features for an arbitrary dataset. In particular, this tool is specifically adapted to the requirements of augmenting a more general auto-ML framework. We discuss the performance of this tool in a series of experiments highlighting the various options available for use, and finally discuss its performance when used in conjunction with Alpine Meadow, a general auto-ML package.
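The core loop of such a tool can be pictured as generating candidate transformed features and keeping those that improve a cross-validated score. The sketch below is a simplified illustration of that idea, not rl_feature_eng's actual interface; the function name and transform set are assumptions.

```python
# Greedy automated feature generation: try unary transforms of each column,
# keep a candidate feature only if it improves cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def auto_features(X, y, transforms=(np.log1p, np.sqrt, np.square)):
    clf = LogisticRegression(max_iter=1000)
    best = X
    base = cross_val_score(clf, best, y, cv=5).mean()     # baseline score
    for t in transforms:
        for j in range(X.shape[1]):
            cand = np.column_stack([best, t(np.abs(X[:, j]))])  # candidate feature
            score = cross_val_score(clf, cand, y, cv=5).mean()
            if score > base:                              # keep only improvements
                best, base = cand, score
    return best

# usage (hypothetical): X_new = auto_features(*load_breast_cancer(return_X_y=True))
```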
Measuring Overfitting and Misspecification in Nonlinear Models
HEDG Working Paper 11/25, August 2011 (york.ac.uk/res/herc/hedgwp). Marcel Bilger and Willard G. Manning, The Harris School of Public Policy Studies, University of Chicago, USA.

Abstract: We start by proposing a new measure of overfitting expressed on the untransformed scale of the dependent variable, which is generally the scale of interest to the analyst. We then show that with nonlinear models, shrinkage due to overfitting gets confounded by shrinkage (or expansion) arising from model misspecification. Out-of-sample predictive calibration can in fact be expressed as in-sample calibration times 1 minus this new measure of overfitting. We finally argue that re-calibration should be performed on the scale of interest and provide both a simulation study and a real-data illustration based on health care expenditure data.

JEL classification: C21, C52, C53, I11. Keywords: overfitting, shrinkage, misspecification, forecasting, health care expenditure.

1. Introduction: When fitting a model, it is well known that we pick up part of the idiosyncratic characteristics of the data as well as the systematic relationship between the dependent and explanatory variables. This phenomenon is known as overfitting and generally occurs when a model is excessively complex relative to the amount of data available. Overfitting is a major threat to regression analysis in terms of both inference and prediction. When models greatly over-explain the data at hand, they can even show relations which reflect chance only. Consequently, overfitting casts doubt on the true statistical significance of the effects found by the analyst as well as the magnitude of the response.
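The abstract's central identity can be stated compactly. The symbols below are our notation, not necessarily the paper's: $\lambda_{\text{out}}$ is out-of-sample predictive calibration, $\lambda_{\text{in}}$ is in-sample calibration, and $O$ is the proposed overfitting measure.

```latex
% Hedged restatement of the decomposition quoted in the abstract:
% out-of-sample calibration = in-sample calibration times (1 - overfitting).
\[
  \lambda_{\text{out}} \;=\; \lambda_{\text{in}} \times \bigl(1 - O\bigr)
\]
```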
Over-Fitting in Model Selection with Gaussian Process Regression
Rekar O. Mohammed and Gavin C. Cawley, University of East Anglia, Norwich, UK (http://theoval.cmp.uea.ac.uk/).

Abstract: Model selection in Gaussian Process Regression (GPR) seeks to determine the optimal values of the hyper-parameters governing the covariance function, which allows flexible customization of the GP to the problem at hand. An oft-overlooked issue encountered in the model selection process is over-fitting the model selection criterion, typically the marginal likelihood. Over-fitting in machine learning refers to fitting the random noise present in the model selection criterion in addition to the features that improve the generalisation performance of the statistical model. In this paper, we construct several Gaussian process regression models for a range of high-dimensional datasets from the UCI machine learning repository. Afterwards, we compare both the MSE on the test dataset and the negative log marginal likelihood (nlZ), used as the model selection criterion, to find whether the problem of over-fitting in model selection also affects GPR. We found that the squared exponential covariance function with Automatic Relevance Determination (SEard) is better than other kernels, including the squared exponential covariance function with an isotropic distance measure (SEiso), according to the nlZ, but it is clearly not the best according to MSE on the test data, and this is an indication of an over-fitting problem in model selection.

Keywords: Gaussian process; regression; covariance function; model selection; over-fitting.

1. Introduction: Supervised learning tasks can be divided into two main types, namely classification and regression problems.
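The paper's comparison can be reproduced in miniature with scikit-learn, where an RBF kernel with a per-dimension length-scale plays the role of SEard and a single shared length-scale plays SEiso; the synthetic data and settings below are illustrative assumptions (the paper uses UCI datasets).

```python
# Sketch: select by (negative) log marginal likelihood, then check test MSE,
# to see whether the selection criterion and generalisation error disagree.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)   # only one relevant input
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

kernels = {"SEiso": RBF(length_scale=1.0),
           "SEard": RBF(length_scale=np.ones(X.shape[1]))}  # one scale per dim
for name, k in kernels.items():
    gpr = GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X_tr, y_tr)
    nlZ = -gpr.log_marginal_likelihood_value_          # model selection criterion
    mse = mean_squared_error(y_te, gpr.predict(X_te))  # generalisation check
    print(f"{name}: nlZ={nlZ:.1f}  test MSE={mse:.3f}")
```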
On Overfitting and Asymptotic Bias in Batch Reinforcement Learning with Partial Observability
Journal of Artificial Intelligence Research 65 (2019) 1-30; submitted 06/2018, published 05/2019. Vincent François-Lavet, Guillaume Rabusseau, and Joelle Pineau (School of Computer Science, McGill University, Montreal, Canada); Damien Ernst and Raphael Fonteneau (Montefiore Institute, University of Liège, Belgium).

Abstract: This paper provides an analysis of the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data) in the context of reinforcement learning with partial observability. Our theoretical analysis formally characterizes that while potentially increasing the asymptotic bias, a smaller state representation decreases the risk of overfitting. This analysis relies on expressing the quality of a state representation by bounding L1 error terms of the associated belief states. The theoretical results are empirically illustrated when the state representation is a truncated history of observations, both on synthetic POMDPs and on a large-scale POMDP in the context of smart grids, with real-world data. Finally, similarly to known results in the fully observable setting, we also briefly discuss and empirically illustrate how using function approximators and adapting the discount factor may enhance the tradeoff between asymptotic bias and overfitting in the partially observable context.

1. Introduction: This paper studies sequential decision-making problems that may be modeled as Markov Decision Processes (MDPs) but for which the state is partially observable.
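A minimal sketch of the truncated-history representation used in the experiments follows; the tuple encoding is an assumption for illustration. A smaller history length h gives a coarser state (more asymptotic bias) but fewer distinct states to estimate from limited data (less overfitting).

```python
# Build truncated-history states from a stream of observations: the agent's
# state is the tuple of its h most recent observations.
from collections import deque

def history_states(obs_stream, h):
    buf = deque(maxlen=h)            # keeps only the h most recent observations
    for obs in obs_stream:
        buf.append(obs)
        yield tuple(buf)             # hashable state for tabular batch RL

print(list(history_states([0, 1, 1, 0], h=2)))  # [(0,), (0, 1), (1, 1), (1, 0)]
```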
Machine Learning Basics Lecture 6: Overfitting (Princeton University COS 495, Instructor: Yingyu Liang)
Review of machine learning basics. Math formulation: given training data $(x_i, y_i)$, $1 \le i \le n$, drawn i.i.d. from a distribution $D$, find $y = f(x) \in \mathcal{H}$ that minimizes the empirical loss $\hat{L}(f) = \frac{1}{n}\sum_{i=1}^{n} l(f, x_i, y_i)$, such that the expected loss $L(f) = \mathbb{E}_{(x,y)\sim D}[l(f, x, y)]$ is small.

Machine learning 1-2-3: (1) collect data and extract features (feature mapping); (2) build a model: choose a hypothesis class $\mathcal{H}$ and a loss function $l$ (maximum likelihood; Occam's razor); (3) optimization: minimize the empirical loss (gradient descent; convex optimization).

Linear vs. nonlinear models: the quadratic polynomial kernel corresponds to the feature map $\phi(x) = (x_1^2,\; x_2^2,\; \sqrt{2}\,x_1 x_2,\; \sqrt{2c}\,x_1,\; \sqrt{2c}\,x_2,\; c)$ with classifier $y = \mathrm{sign}(w^T \phi(x) + b)$. A linear model is $f(x) = a_0 + a_1 x$; a nonlinear model is $f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots + a_M x^M$. Linear models are a subset of nonlinear models, since we can always set $a_i = 0$ for $i > 1$, so it looks as though a nonlinear model can always achieve the same or smaller error. Why, then, use Occam's razor and choose a smaller hypothesis class?

Example: regression using a polynomial curve of degree $M$ fit to data generated as $t = \sin(2\pi x) + \epsilon$ (figures from Pattern Recognition and Machine Learning, Bishop).
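The lecture's running example can be reproduced in a few lines; the noise level and sample sizes below are assumptions for the demo. Low-degree fits underfit, while a degree-9 polynomial through 10 points interpolates the noise and the test error blows up.

```python
# Fit polynomials of increasing degree M to t = sin(2*pi*x) + noise and
# compare train vs test MSE to expose overfitting.
import numpy as np

rng = np.random.default_rng(0)
x_tr = rng.uniform(0, 1, 10);  t_tr = np.sin(2*np.pi*x_tr) + 0.2*rng.normal(size=10)
x_te = rng.uniform(0, 1, 100); t_te = np.sin(2*np.pi*x_te) + 0.2*rng.normal(size=100)

for M in (1, 3, 9):
    coef = np.polyfit(x_tr, t_tr, deg=M)            # least-squares polynomial fit
    tr = np.mean((np.polyval(coef, x_tr) - t_tr)**2)
    te = np.mean((np.polyval(coef, x_te) - t_te)**2)
    print(f"M={M}: train MSE={tr:.3f}  test MSE={te:.3f}")
```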
Learning from Few Subjects with Large Amounts of Voice Monitoring Data
Jose Javier Gonzalez Ortiz, John V. Guttag, Robert E. Hillman, Daryush D. Mehta, Jarrad H. Van Stan, and Marzyeh Ghassemi.

Challenges of many medical time series: few subjects with large amounts of data lead to overfitting to subjects (addressed here with unsupervised feature extraction); there is no obvious mapping from signal to features, so feature engineering is labor-intensive; and labels exist only at the subject level, so in many cases there is no good way of getting sample-specific annotations (addressed here with multiple instance learning).

Learning features: segment the signal into windows, compute a time-frequency representation, and perform unsupervised feature extraction with a convolutional autoencoder (Conv + BatchNorm + ReLU, pooling, a dense 128 x 64 bottleneck, upsampling, and a sigmoid output).

Classification using multiple instance learning: logistic regression on the learned features with subject labels (raw waveform to spectrogram to encoder to logistic regression to prediction), aggregating window-level predictions into a subject-level prediction via the percentage of positive windows per subject.

Application: voice monitoring data. Voice disorders affect 7% of the US population; data are collected through a neck-placed accelerometer, and one week of monitoring yields roughly 4 billion samples.

Results: previous work relied on expert-designed features [1]; our approach achieves comparable performance without task-specific feature engineering.

| Model | Split | AUC | Accuracy |
|---|---|---|---|
| Expert LR | Train | 0.70 ± 0.05 | 0.71 ± 0.04 |
| Expert LR | Test | 0.68 ± 0.05 | 0.69 ± 0.04 |
| Ours | Train | 0.73 ± 0.06 | 0.72 ± 0.04 |
| Ours | Test | 0.69 ± 0.07 | 0.70 ± 0.05 |

[1] Marzyeh Ghassemi et al.
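A minimal sketch of the multiple-instance aggregation step described above; the decision thresholds are assumptions for illustration.

```python
# Aggregate window-level probabilities into one subject-level prediction via
# the fraction of positive windows per subject.
import numpy as np

def subject_prediction(window_probs, window_thresh=0.5, frac_thresh=0.5):
    positive_frac = np.mean(np.asarray(window_probs) > window_thresh)  # % positive windows
    return int(positive_frac > frac_thresh), positive_frac

probs = [0.2, 0.9, 0.8, 0.4, 0.7]        # toy window scores for one subject
label, frac = subject_prediction(probs)
print(label, frac)                        # -> 1 0.6
```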
Expert Feature-Engineering Vs. Deep Neural Networks: Which Is Better for Sensor-Free Affect Detection?
Yang Jiang (Teachers College, Columbia University), Nigel Bosch and Luc Paquette (University of Illinois at Urbana-Champaign), Ryan S. Baker, Jaclyn Ocumpaugh, and Juliana Ma. Alexandra L. Andres (University of Pennsylvania), Allison L. Moore and Gautam Biswas (Vanderbilt University).

Abstract: The past few years have seen a surge of interest in deep neural networks. The wide application of deep learning in other domains, such as image classification, has driven considerable recent interest and efforts in applying these methods in educational domains. However, there is still limited research comparing the predictive power of the deep learning approach with the traditional feature engineering approach for common student modeling problems such as sensor-free affect detection. This paper aims to address this gap by presenting a thorough comparison of several deep neural network approaches with a traditional feature engineering approach in the context of affect and behavior modeling. We built detectors of student affective states and behaviors as middle school students learned science in an open-ended learning environment called Betty's Brain, using both approaches. Overall, we observed a tradeoff where the feature engineering models were better when considering a single optimized threshold (for intervention), whereas the deep learning models were better when taking model confidence fully into account (for discovery-with-models analyses).
Deep Learning Feature Extraction Approach for Hematopoietic Cancer Subtype Classification
International Journal of Environmental Research and Public Health. Kwang Ho Park and Erdenebileg Batbaatar (Database and Bioinformatics Laboratory, Chungbuk National University, Cheongju, Korea), Yongjun Piao (School of Medicine, Nankai University, Tianjin, China), Nipon Theera-Umpon (Chiang Mai University, Thailand), and Keun Ho Ryu (Chungbuk National University, Korea, and Ton Duc Thang University, Ho Chi Minh City, Vietnam).

Abstract: Hematopoietic cancer is a malignant transformation in immune system cells. Hematopoietic cancer is characterized by the cells that are expressed, so it is usually difficult to distinguish its heterogeneities in the hematopoiesis process. Traditional approaches for cancer subtyping use statistical techniques. Furthermore, due to the overfitting problem of small samples, in the case of a minor cancer there is not enough sample material for building a classification model. Therefore, we propose not only to build a classification model for five major subtypes using two kinds of losses,
Feature Engineering for Predictive Modeling Using Reinforcement Learning
The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). Udayan Khurana, Horst Samulowitz, and Deepak Turaga, IBM Research AI.

Abstract: Feature engineering is a crucial step in the process of predictive modeling. It involves the transformation of a given feature space, typically using mathematical functions, with the objective of reducing the modeling error for a given target. However, there is no well-defined basis for performing effective feature engineering. It involves domain knowledge, intuition, and, most of all, a lengthy process of trial and error. The human attention involved in overseeing this process significantly influences the cost of model generation. We present a new framework to automate feature engineering. It is based on performance-driven exploration of a transformation graph, which systematically and compactly captures the space of given options. A highly efficient exploration strategy is derived through reinforcement learning on past examples.

[Figure 1: Illustration of different representation choices: (a) original data; (b) engineered data.]

Introduction: Predictive analytics are widely used in support of decision making across a variety of domains, including fraud detection, marketing, drug discovery, advertising, and risk management. A motivating example is predicting bike rental demand (kaggle.com/c/bike-sharing-demand), shown in Figure 2(a). Deriving several features (Figure 2(b)) dramatically reduces the modeling error. For instance, extracting the hour of the day from the given timestamp feature helps to capture certain trends such as peak versus non-peak demand. Note that certain valuable features are derived through a composition of multiple simpler functions.
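A minimal sketch of the kind of transformation the bike-sharing example describes: deriving hour-of-day from a timestamp and composing a further feature on top of it. The column names and peak-hour ranges are assumptions for illustration.

```python
# Derive hour-of-day from a timestamp, then compose a peak-hour indicator on
# top of it, mirroring the composed transformations the paper discusses.
import pandas as pd

df = pd.DataFrame({"timestamp": pd.to_datetime(
    ["2011-01-01 05:00", "2011-01-01 17:00", "2011-01-02 08:00"])})
df["hour"] = df["timestamp"].dt.hour                       # simple unary transform
df["is_peak"] = df["hour"].isin(range(7, 10)) | df["hour"].isin(range(16, 19))
print(df)   # hour -> is_peak captures peak vs non-peak demand
```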
Word2Bits - Quantized Word Vectors
Maximilian Lam.

Abstract: Word vectors require significant amounts of memory and storage, posing issues for resource-limited devices like mobile phones and GPUs. We show that high-quality quantized word vectors using 1-2 bits per parameter can be learned by introducing a quantization function into Word2Vec. We furthermore show that training with the quantization function acts as a regularizer. We train word vectors on English Wikipedia (2017) and evaluate them on standard word similarity and analogy tasks and on question answering (SQuAD). Our quantized word vectors not only take 8-16x less space than full-precision (32-bit) word vectors but also outperform them on word similarity tasks and question answering.

1. Introduction: Word vectors are extensively used in deep learning models for natural language processing. Each word vector is typically represented as a 300-500 dimensional vector, with each parameter being 32 bits. As there are millions of words, word vectors may take up to 3-6 GB of memory/storage, a massive amount relative to other portions of a deep learning model [25]. These requirements pose issues for memory- and storage-limited devices like mobile phones and GPUs. Furthermore, word vectors are often re-trained on application-specific data for better performance in application-specific domains [27]. This motivates directly learning high-quality compact word representations rather than adding an extra layer of compression on top of pretrained word vectors, which may be computationally expensive and degrade accuracy. Recent trends indicate that deep learning models can reach a high accuracy even while training in the presence of significant noise and perturbation [5, 6, 9, 28, 32].
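A minimal sketch of the idea of training with a quantization function: the forward pass sees quantized parameters while full-precision values receive the gradient updates (a straight-through-style scheme). The specific quantization levels, toy loss, and update rule here are assumptions in the spirit of the paper, not its exact procedure.

```python
# Toy training-time quantization: quantize weights in the forward pass,
# apply the gradient to the full-precision copy.
import numpy as np

def quantize_1bit(w, level=1/3):
    return np.where(w >= 0, level, -level)   # 1 bit per parameter (assumed levels)

rng = np.random.default_rng(0)
w_full = rng.normal(scale=0.1, size=8)       # full-precision master weights
for _ in range(100):
    w_q = quantize_1bit(w_full)              # quantized weights used in the loss
    grad = 2 * (w_q - 0.5)                   # toy gradient of ||w_q - 0.5||^2
    w_full -= 0.1 * grad                     # straight-through: update full precision
print(quantize_1bit(w_full))                 # the stored vectors are the quantized ones
```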