An Empirical Analysis of Feature Engineering for Predictive Modeling
Jeff Heaton
McKelvey School of Engineering
Washington University in St. Louis
St. Louis, MO 63130
Email: [email protected]

arXiv:1701.07852v2 [cs.LG] 1 Nov 2020

Abstract—Machine learning models, such as neural networks, decision trees, random forests, and gradient boosting machines, accept a feature vector and provide a prediction. These models learn in a supervised fashion where we provide feature vectors mapped to the expected output. It is common practice to engineer new features from the provided feature set. Such engineered features will either augment or replace portions of the existing feature vector. These engineered features are essentially calculated fields based on the values of the other features. Engineering such features is primarily a manual, time-consuming task. Additionally, each type of model will respond differently to different kinds of engineered features. This paper reports empirical research to demonstrate what kinds of engineered features are best suited to various machine learning model types. We provide this recommendation by generating several datasets that we designed to benefit from a particular type of engineered feature. The experiment demonstrates to what degree the machine learning model can synthesize the needed feature on its own. If a model can synthesize a planned feature, it is not necessary to provide that feature. The research demonstrated that the studied models do indeed perform differently with various types of engineered features.

I. INTRODUCTION

Feature engineering is an essential but labor-intensive component of machine learning applications [1]. Most machine-learning performance is heavily dependent on the representation of the feature vector. As a result, data scientists spend much of their effort designing preprocessing pipelines and data transformations [1].

To utilize feature engineering, the model must preprocess its input data by adding new features based on the other features [2]. These new features might be ratios, differences, or other mathematical transformations of existing features. This process is similar to the equations that human analysts design. They construct new features such as body mass index (BMI), wind chill, or the triglyceride/HDL cholesterol ratio to help understand the interactions of existing features.

Kaggle and ACM's KDD Cup have seen feature engineering play an essential part in several winning submissions. Individuals applied feature engineering to the winning KDD Cup 2010 competition entry [3]. Additionally, researchers won the Kaggle Algorithmic Trading Challenge with an ensemble of models and feature engineering. These individuals created their engineered features by hand.

Technologies such as deep learning [4] can benefit from feature engineering. Most research into feature engineering in the deep learning space has been in image and speech recognition [1]. Such techniques are successful in the high-dimensional space of image processing and often amount to dimensionality reduction techniques [5] such as PCA [6] and auto-encoders [7].

II. BACKGROUND AND PRIOR WORK

Feature engineering grew out of the desire to transform non-normally distributed linear regression inputs. Such a transformation can be helpful for linear regression. The seminal work by George Box and David Cox in 1964 introduced a method for determining which of several power functions might be a useful transformation for the outcome of linear regression [8]. This technique became known as the Box-Cox transformation.

The alternating conditional expectation (ACE) algorithm [9] works similarly to the Box-Cox transformation: an individual can apply a mathematical function to each component of the feature vector outcome. However, unlike the Box-Cox transformation, ACE can guarantee optimal transformations for linear regression.

Linear regression is not the only machine-learning model that can benefit from feature engineering and other transformations. In 1999, researchers demonstrated that feature engineering could enhance rule-learning performance for text classification [10]. Feature engineering was also successfully applied to the KDD Cup 2010 competition using a variety of machine learning models.

III. EXPERIMENT DESIGN AND METHODOLOGY

Different machine learning model types have varying degrees of ability to synthesize various types of mathematical expressions. If a model can learn to synthesize an engineered feature on its own, there is no reason to engineer the feature in the first place. Demonstrating empirically a model's ability to synthesize a particular type of expression shows whether engineered features of this type might be useful to that model. To explore these relations, we created ten datasets containing the inputs and outputs that correspond to a particular type of engineered feature. If a machine-learning model can learn to reproduce that feature with a low error, it means that the particular model could have learned that engineered feature without assistance.

For this research, we considered only regression machine learning models. We chose the following four machine learning models because of their relative popularity and differences in approach:

• Deep Neural Networks (DANN)
• Gradient Boosted Machines (GBM)
• Random Forests
• Support Vector Machines for Regression (SVR)

To mitigate the stochastic nature of some of these machine learning models, each experiment was run 5 times, and the best run's outcome was used for the comparison. These experiments were conducted in the Python programming language, using the following third-party packages: Scikit-Learn [11] and TensorFlow [12]. Using this combination of packages, model types of support vector machine (SVM) [13][14], deep neural network [15], random forest [16], and gradient boosting machine (GBM) [17] were evaluated against the following sixteen selected engineered features:

• Counts
• Differences
• Distance Between Quadratic Roots
• Distance Formula
• Logarithms
• Max of Inputs
• Polynomials
• Power Ratio (such as BMI)
• Powers
• Ratio of a Product
• Rational Differences
• Rational Polynomials
• Ratios
• Root Distance
• Root of a Ratio (such as Standard Deviation)
• Square Roots

The techniques used to create each of these datasets are described in the following sections. The Python source code for these experiments can be downloaded from the author's GitHub page [18] or Kaggle [19].

A. Counts

The count engineered feature counts the number of elements in the feature vector that satisfy a certain condition. For example, the program might generate a count feature that counts other features above a specified threshold, such as zero. Equation 1 defines how a count feature might be engineered.

y = Σ_{i=1}^{n} (1 if x_i > t else 0)   (1)

The x-vector represents the input vector of length n. The resulting y contains an integer equal to the number of x values above the threshold (t). The resulting y-count was uniformly sampled from integers in the range [1, 50], and the program creates the corresponding input vectors needed to generate a count dataset. Algorithm 1 demonstrates this process.

Algorithm 1 Generate count test dataset
1: INPUT: The number of rows to generate, r.
2: OUTPUT: A dataset where y contains random integers sampled from [0, 50], and x contains 50 columns randomly chosen to sum to y.
3: METHOD:
4: x ← []
5: y ← []
6: for n ← 1 to r do
7:   v ← zeros(50)                  ▷ Vector of length 50
8:   o ← uniform_random_int(0, 50)  ▷ Outcome (y)
9:   rem ← o                        ▷ remaining
10:  while rem > 0 do
11:    i ← uniform_random_int(0, len(v) − 1)
12:    if v[i] = 0 then
13:      v[i] ← 1
14:      rem ← rem − 1
15:  x.append(v)
16:  y.append(o)
17: return [x, y]

TABLE I
COUNTS TRANSFORMATION

x1  x2  x3  ...  x50  y1
1   0   1   ...  0    2
1   1   1   ...  1    12
0   1   0   ...  1    8
1   0   0   ...  1    5

Several example rows of the count input vector are shown in Table I. The y1 value simply holds the count of the number of features x1 through x50 that contain a value greater than 0.

B. Differences and Ratios

Differences and ratios are common choices for feature engineering. To evaluate this feature type, a dataset is generated with x observations uniformly sampled from the real-number range [0, 1]; a single y prediction is also generated that is various differences and ratios of the observations. When sampling uniform real numbers for the denominator, the range [0.1, 1] is used to avoid division by zero. The equations chosen are the simple difference (Equation 2), simple ratio (Equation 3), power ratio (Equation 4), product power ratio (Equation 5), and ratio of a polynomial (Equation 6).

y = x_1 − x_2   (2)

y = x_1 / x_2   (3)

y = x_1 / x_2^2   (4)

y = (x_1 x_2) / x_3^2   (5)

y = 1 / (5x + 8x^2)   (6)

C. Distance Between Quadratic Roots

It is also useful to see how capable the four machine learning models are at synthesizing ordinary mathematical equations. We generate the final synthesized feature from the distance between the roots of a quadratic equation. The distance between the roots of a quadratic equation can easily be calculated by taking the difference of the two outputs of the quadratic formula, as given in Equation 7, in its unsimplified form.

y = (−b + √(b² − 4ac)) / (2a) − (−b − √(b² − 4ac)) / (2a)   (7)

The dataset for the transformation represented by Equation 7 is generated by uniformly sampling x values from the real-number range [−10, 10]. We discard any invalid results.

G. Polynomials

Engineered features might take the form of polynomials. This paper investigated the machine learning models' ability to synthesize features that follow the polynomial given by Equation 13.

y = 1 + 5x + 8x^2   (13)

An equation such as this shows the models' ability to synthesize features that contain several multiplications and an exponent. The dataset was generated by uniformly sampling x from real numbers in the range [0, 2). The y1 value is simply calculated based on x1 as input to Equation 13.

H. Rational Differences and Polynomials

Useful features might also come from combinations of
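The count-dataset generation procedure of Algorithm 1 can be sketched in Python with NumPy. This is a minimal sketch, not the author's released code; the function name and seeding scheme are illustrative.

```python
import numpy as np

def generate_count_dataset(rows, seed=42):
    """Algorithm 1: each row of x has exactly y ones among 50 binary columns."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for _ in range(rows):
        v = np.zeros(50, dtype=int)          # vector of length 50
        o = int(rng.integers(0, 51))         # outcome y, sampled from [0, 50]
        remaining = o
        while remaining > 0:
            i = int(rng.integers(0, 50))     # pick a random column
            if v[i] == 0:                    # only flip columns still at zero
                v[i] = 1
                remaining -= 1
        xs.append(v)
        ys.append(o)
    return np.array(xs), np.array(ys)
```

By construction, summing the ones in any row of x recovers that row's y, which is exactly the relationship the models are asked to learn.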
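The difference and ratio targets of Equations 2–6 can be generated in a few lines of NumPy. A sketch, with illustrative variable names; as described above, any input that appears in a denominator is sampled from [0.1, 1] rather than [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

x1 = rng.uniform(0.0, 1.0, n)   # numerator-style input from [0, 1]
x2 = rng.uniform(0.1, 1.0, n)   # denominator input; [0.1, 1] avoids division by zero
x3 = rng.uniform(0.1, 1.0, n)
xd = rng.uniform(0.1, 1.0, n)   # input for Equation 6, which sits in a denominator

y_diff          = x1 - x2                  # Equation 2: simple difference
y_ratio         = x1 / x2                  # Equation 3: simple ratio
y_power_ratio   = x1 / x2**2               # Equation 4: power ratio
y_product_ratio = (x1 * x2) / x3**2        # Equation 5: product power ratio
y_rational_poly = 1.0 / (5*xd + 8*xd**2)   # Equation 6: ratio of a polynomial
```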
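The root-distance dataset of Equation 7, including the discard step, can be sketched as follows. One interpretation is assumed here: the sampled x values are the coefficients a, b, c, and "invalid results" are rows with a negative discriminant (complex roots) or a = 0.

```python
import numpy as np

def generate_root_distance_dataset(rows, seed=0):
    """Target is Equation 7 in its unsimplified form:
    (-b + sqrt(b^2-4ac))/(2a) - (-b - sqrt(b^2-4ac))/(2a)."""
    rng = np.random.default_rng(seed)
    a, b, c = rng.uniform(-10.0, 10.0, (3, rows))
    disc = b**2 - 4*a*c
    valid = (disc >= 0) & (a != 0)           # discard complex-root / degenerate rows
    a, b, c, disc = a[valid], b[valid], c[valid], disc[valid]
    y = (-b + np.sqrt(disc)) / (2*a) - (-b - np.sqrt(disc)) / (2*a)
    return np.column_stack([a, b, c]), y
```

Note that the unsimplified difference reduces algebraically to √(b² − 4ac) / a, so y is negative whenever a is; the model is still given only a, b, c and must synthesize the whole expression.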
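The polynomial dataset of Equation 13 is the simplest of the transformations to sketch; sampling from `uniform(0, 2)` matches the half-open range [0, 2) described above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 1000)   # x sampled from [0, 2)
y = 1 + 5*x + 8*x**2              # Equation 13
```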
