Predicting House Prices on the Countryside Using Boosted Decision Trees


DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

Predicting House Prices on the Countryside using Boosted Decision Trees
WAR REVEND
KTH ROYAL INSTITUTE OF TECHNOLOGY, SCHOOL OF ENGINEERING SCIENCES
Degree Project in Mathematical Statistics (30 ECTS credits)
Degree Programme in Applied and Computational Mathematics (120 credits)
KTH Royal Institute of Technology, year 2020
Supervisor at Booli Search Technologies AB: Christopher Madsen
Supervisor at KTH: Joakim Andén-Pantera
Examiner at KTH: Joakim Andén-Pantera
TRITA-SCI-GRU 2020:302, MAT-E 2020:075
Royal Institute of Technology, School of Engineering Sciences
KTH SCI, SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci

Abstract

This thesis evaluates the feasibility of supervised learning models for predicting house prices in the countryside of southern Sweden. Accurate housing valuation algorithms are essential for mortgage lenders, and the current model offered by Booli is not accurate enough when valuing residences in the countryside. Different types of boosted decision trees were implemented to address this issue, and their performance was compared to that of traditional machine learning methods. These supervised learning models were implemented in order to find the best model with respect to relevant evaluation metrics such as root-mean-squared error (RMSE) and mean absolute percentage error (MAPE). The implemented models were ridge regression, lasso regression, random forest, AdaBoost, gradient boosting, CatBoost, XGBoost, and LightGBM. All of these models were benchmarked against Booli's current housing valuation algorithm, which is based on a k-NN model. The results indicate that LightGBM is the optimal model, as it had the best overall performance with respect to the chosen evaluation metrics. When comparing the LightGBM model to the benchmark, the performance was better overall: the LightGBM model had an RMSE of 0.330 compared to 0.358 for the Booli model, indicating that boosted decision trees have the potential to improve the predictive accuracy of countryside residence prices.

Sammanfattning

This thesis evaluates the feasibility of different supervised learning models for predicting house prices in the countryside of southern Sweden. It is important for mortgage lenders to have accurate algorithms when valuing residences; the current model offered by Booli has poor precision when it comes to valuing residences in the countryside. Different types of boosted decision trees were implemented to address this issue, and their performance was compared with traditional machine learning methods. These supervised learning models were implemented to find the best model with respect to relevant performance measures such as root-mean-squared error (RMSE) and mean absolute percentage error (MAPE). The supervised learning models were ridge regression, lasso regression, random forest, AdaBoost, gradient boosting, CatBoost, XGBoost, and LightGBM. The performance of all algorithms was compared against Booli's current housing valuation algorithm, which is based on a k-NN model. The results of this thesis show that the LightGBM model is the optimal model for valuing houses in the countryside, since it had the best overall performance with respect to the chosen evaluation metrics. When the LightGBM model was compared against the Booli model, its performance was better overall: the LightGBM model had an RMSE of 0.330, compared to 0.358 for the Booli model. This indicates that there is potential in using boosted decision trees to improve the accuracy of house price predictions in the countryside.

Acknowledgements

First, I wish to thank my supervisor at KTH, Joakim Andén-Pantera, for his excellent patience, guidance, and support in completing this thesis. I would also like to express my gratitude to Christopher Madsen, my supervisor at Booli Search Technologies AB, for inspiration and support. Further, I want to thank Johan Mattsson and Olof Sjöbergh at Booli for additional advice and for providing the opportunity for this thesis.

Contents

1 Introduction
  1.1 Background and problem formulation
  1.2 Purpose
  1.3 Research question
  1.4 Scope
  1.5 Limitation
2 Background
  2.1 Supervised Learning
  2.2 Regression
    2.2.1 Simple linear regression
    2.2.2 Multiple linear regression
  2.3 Shrinkage Methods
    2.3.1 Ridge regression
    2.3.2 Lasso regression
  2.4 k-Nearest Neighbors Algorithm
  2.5 Decision Trees
    2.5.1 Regression Trees
  2.6 Random Forest
  2.7 Boosting
    2.7.1 AdaBoost
    2.7.2 Gradient Boosting
    2.7.3 Categorical Boosting: CatBoost
    2.7.4 XGBoost
    2.7.5 LightGBM
  2.8 Hyper-Parameter Tuning
    2.8.1 Cross-validation
  2.9 Metrics of interest
3 Methods
  3.1 Data
    3.1.1 Overview of the available data
    3.1.2 Preprocessing
  3.2 Model Implementation
    3.2.1 Hyper-Parameter Tuning of the Models
4 Results
5 Discussion
  5.1 Results Evaluation
    5.1.1 Model Comparison
    5.1.2 Benchmark Comparison
6 Conclusion
  6.1 Answering Research Questions
  6.2 Future work
Bibliography
A Scatterplots of variables vs Absolute Percentage Error

List of Tables

3.1 Description of the variables available in the data set obtained from Booli
3.2 Description of the variables available in the data set obtained from SCB
3.3 Hyper-parameter set for lasso and ridge regression
3.4 Hyper-parameter set for gradient boosting regression
3.5 Hyper-parameter set for random forest
3.6 Hyper-parameter set for AdaBoost regression
3.7 Hyper-parameter set for LightGBM
3.8 Hyper-parameter set for XGBoost
3.9 Hyper-parameter set for CatBoost
4.1 Performance of the models evaluated with metrics used by Booli

List of Figures

2.1 Illustrating leaf-wise and level-wise tree growth
2.2 Illustrating 5-fold cross-validation on a data set
3.1 Illustrating which variables in the data set have NaN values and the percentage of missing values
3.2 Illustrating one-hot encoding
3.3 Distribution of the adjusted residence price together with a Q-Q plot
3.4 Distribution of the log-transformed adjusted residence price together with a Q-Q plot
3.5 Heat map between numerical variables in the data set
3.6 Heat map of the ten numerical variables most strongly correlated with the residence price in the data set
4.1 Barplot of the RMSE score on the train set when evaluating different scaling and transformation methods
4.2 Barplot of the RMSE score on the test set when evaluating different scaling and transformation methods
4.3 Evaluating MSE and MAPE as loss functions for LightGBM and CatBoost
4.4 Barplot of the RMSE score on the train set when evaluating sample weights
4.5 Barplot of the RMSE score on the test set when evaluating sample weights
4.6 Barplot of the RMSE score on the train and test set
4.7 Barplot of the MAPE score on the train and test set
4.8 Barplot of the MdAPE score on the train and test set
4.9 Histogram of the percentage error between the Booli and LightGBM models
4.10 Map of the mean percentage error in each DeSO area using the LightGBM and Booli models
4.11 Map of the mean percentage error in each DeSO area using XGBoost
4.12 Scatterplot of the spread of house prices in each DeSO area vs MAPE in that area, using LightGBM
4.13 Scatterplot of the spread of house prices in each DeSO area vs MAPE in that area, using the Booli model
4.14 Scatterplot of the spread of house prices in each DeSO area vs MAPE in that area, using lasso
4.15 Scatterplots of the variables assessedValue and assessedValueBuilding vs absolute percentage error, using the LightGBM model
4.16 Scatterplots of the variables assessedValuePlot and assessmentPoints vs absolute percentage error, using the LightGBM model
A.1 Scatterplots of the variables constructionYear and delta_Date vs absolute percentage error, using the LightGBM model
A.2 Scatterplots of the variables latitude, longitude, totalArea, livingArea, plotArea and rooms vs absolute percentage error, using the LightGBM model
A.3 Scatterplots of the variables distanceToOceanFront and distanceToWater vs absolute percentage error, using the LightGBM model

Chapter 1: Introduction

This chapter provides an overview of the aim of the thesis. The topics discussed within this chapter are the thesis's background, purpose, scope, and limitations.

1.1 Background and problem formulation

For most people, purchasing a residence is the biggest financial commitment of their lives, so ensuring that homeowners have a trusted way of monitoring the value of their assets is important. Even companies such as Zillow, an American online real estate database company, offered a competition in 2018 with a one-million-dollar grand prize for improving their valuation algorithm [1]. The valuation algorithm is an important feature that Booli offers to its consumers, but it is also used by Booli's owner, SBAB Bank AB, when it needs to value residences across Sweden. Typically, bank customers want a valuation of their residence when they want to take out a loan on the house or when they need a new mortgage for purchasing a new property. The thesis ranks its candidate models by the metrics named in the abstract; a minimal sketch of those metrics follows.
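As a hedged illustration of the two headline metrics from the abstract (not the thesis's actual code; the toy arrays below stand in for actual and predicted prices and are purely hypothetical):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Toy example: hypothetical actual vs predicted prices (in millions).
y_true = np.array([2.4, 1.1, 3.8, 0.9])
y_pred = np.array([2.1, 1.3, 3.5, 1.0])
print(f"RMSE: {rmse(y_true, y_pred):.3f}")
print(f"MAPE: {mape(y_true, y_pred):.2f}%")
```

MAPE is scale-free, which makes it a natural complement to RMSE when price levels vary as much as they do between urban and countryside listings.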
Recommended publications
  • Benchmarking and Optimization of Gradient Boosting Decision Tree Algorithms
Benchmarking and Optimization of Gradient Boosting Decision Tree Algorithms. Andreea Anghel, Nikolaos Papandreou, Thomas Parnell, Alessandro de Palma, Haralampos Pozidis. IBM Research – Zurich, Rüschlikon, Switzerland. {aan,npo,tpa,les,hap}@zurich.ibm.com

Abstract: Gradient boosting decision trees (GBDTs) have seen widespread adoption in academia, industry and competitive data science due to their state-of-the-art performance in many machine learning tasks. One relative downside to these models is the large number of hyper-parameters that they expose to the end-user. To maximize the predictive power of GBDT models, one must either manually tune the hyper-parameters or utilize automated techniques such as those based on Bayesian optimization. Both of these approaches are time-consuming, since they involve repeatedly training the model for different sets of hyper-parameters. A number of software GBDT packages have started to offer GPU acceleration, which can help to alleviate this problem. In this paper, we consider three such packages: XGBoost, LightGBM and CatBoost. Firstly, we evaluate the performance of the GPU acceleration provided by these packages using large-scale datasets with varying shapes, sparsities and learning tasks. Then, we compare the packages in the context of hyper-parameter optimization, both in terms of how quickly each package converges to a good validation score, and in terms of generalization performance.

1 Introduction: Many powerful techniques in machine learning construct a strong learner from a number of weak learners. Bagging combines the predictions of the weak learners, each using a different bootstrap sample of the training data set [1]. Boosting, an alternative approach, iteratively trains a sequence of weak learners, whereby the training examples for the next learner are weighted according to the success of the previously-constructed learners. The two schemes are contrasted in the sketch below.
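To make that bagging-versus-boosting contrast concrete, here is a minimal sketch using scikit-learn's reference implementations on a synthetic task (the data and settings are illustrative, not those benchmarked in the paper):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)

# Bagging: each tree sees an independent bootstrap sample; predictions are averaged.
bagging = BaggingRegressor(DecisionTreeRegressor(max_depth=4),
                           n_estimators=100, random_state=0)

# Boosting: trees are fit sequentially, re-weighting the examples
# that earlier trees predicted poorly.
boosting = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
                             n_estimators=100, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```

The structural point: bagging's trees are independent and could be trained in parallel, while boosting's reweighting makes each tree depend on its predecessors.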
  • A Hybrid Machine Learning/Deep Learning COVID-19 Severity Predictive Model from CT Images and Clinical Data
A hybrid machine learning/deep learning COVID-19 severity predictive model from CT images and clinical data. Matteo Chieregato, Fabio Frangiamore, Mauro Morassi, Claudia Baresi, Stefania Nici, Chiara Bassetti, Claudio Bnà, and Marco Galelli. Fondazione Poliambulanza Istituto Ospedaliero, Brescia, Italy; Tattile s.r.l., Mairano (BS), Italy.

Abstract: COVID-19 clinical presentation and prognosis are highly variable, ranging from asymptomatic and paucisymptomatic cases to acute respiratory distress syndrome and multi-organ involvement. We developed a hybrid machine learning/deep learning model to classify patients into two outcome categories, non-ICU and ICU (intensive care admission or death), using 558 patients admitted to a northern Italy hospital between February and May 2020. A fully 3D patient-level CNN classifier on baseline CT images is used as a feature extractor. The extracted features, along with laboratory and clinical data, are fed for selection into a Boruta algorithm with SHAP game-theoretical values. A classifier is then built on the reduced feature space using the CatBoost gradient boosting algorithm, reaching a probabilistic AUC of 0.949 on the holdout test set. The model aims to provide clinical decision support to medical doctors, with a probability score of belonging to an outcome class and with case-based SHAP interpretation of feature importance.

Introduction: To date (May 2021), more than one hundred million individuals have been reported as affected by COVID-19. The SHAP-driven selection step is sketched below.
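The abstract describes selecting features by SHAP importance and then fitting CatBoost on the reduced space. A rough sketch of that idea using CatBoost's built-in SHAP computation (synthetic data; the paper's Boruta-based procedure is more elaborate than this simple top-features cut):

```python
import numpy as np
from catboost import CatBoostClassifier, Pool

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = CatBoostClassifier(iterations=200, depth=4, verbose=False)
model.fit(X, y)

# SHAP values have shape (n_samples, n_features + 1); the last column
# is the expected value, so it is dropped before averaging.
shap = model.get_feature_importance(Pool(X, y), type="ShapValues")
mean_abs_shap = np.abs(shap[:, :-1]).mean(axis=0)

# Keep features whose mean |SHAP| exceeds the average, then retrain on them.
selected = np.where(mean_abs_shap > mean_abs_shap.mean())[0]
reduced = CatBoostClassifier(iterations=200, depth=4, verbose=False)
reduced.fit(X[:, selected], y)
print("selected feature indices:", selected)
```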
  • New Directions in Automated Traffic Analysis
New Directions in Automated Traffic Analysis. Jordan Holland, Paul Schmitt, Prateek Mittal (Princeton University); Nick Feamster (University of Chicago). https://nprint.github.io/nprint

Abstract: Despite the use of machine learning for many network traffic analysis tasks in security, from application identification to intrusion detection, the aspects of the machine learning pipeline that ultimately determine the performance of the model (feature selection and representation, model selection, and parameter tuning) remain manual and painstaking. This paper presents a method to automate many aspects of traffic analysis, making it easier to apply machine learning techniques to a wider variety of traffic analysis tasks. We introduce nPrint, a tool that generates a unified packet representation that is amenable for representation learning and model training. We integrate nPrint with automated machine learning (AutoML), resulting in nPrintML, a public system that largely eliminates feature extraction and model tuning for a wide variety of traffic analysis tasks.

This paper reconsiders long-held norms in applying machine learning to network traffic analysis; namely, we seek to reduce reliance upon human-driven feature engineering. To do so, we explore whether and how a single, standard representation of a network packet can serve as a building block for the automation of many common traffic analysis tasks. Our goal is not to retread any specific network classification problem, but rather to argue that many of these problems can be made easier, and in some cases completely automated, with a unified representation of traffic that is amenable for input to existing automated machine learning (AutoML) pipelines [14]. To demonstrate this capability, we designed a standard packet representation, nPrint, that encodes each packet in an inherently normalized, binary representation while preserving the underlying semantics of each packet.
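The core idea, a packet as a fixed-length normalized vector of bits, can be illustrated with a toy encoder (a conceptual sketch, not the nPrint format itself; the 64-byte window and the -1 fill value are assumptions of this example):

```python
def packet_to_bits(payload: bytes, max_len: int = 64) -> list[int]:
    """Encode a packet as a fixed-length bit vector.

    Each byte expands to 8 bits, most significant first; positions past
    the end of the packet are filled with -1 so that 'absent' stays
    distinguishable from a 0 bit.
    """
    bits = []
    for byte in payload[:max_len]:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    bits.extend([-1] * (8 * max_len - len(bits)))
    return bits

# A hypothetical truncated IPv4 header: every packet maps to the same
# 512 slots, so rows stack directly into a feature matrix for AutoML.
vec = packet_to_bits(bytes.fromhex("4500003c1c46400040067d6d"))
print(len(vec), vec[:16])
```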
  • ISBN # 1-60132-514-2; American Council on Science & Education / CSCE 2021
CSCI 2021 BOOK of ABSTRACTS. The 2021 World Congress in Computer Science, Computer Engineering, and Applied Computing (CSCE 2021). https://www.american-cse.org/csce2021/. July 26-29, 2021, Luxor Hotel (MGM Property), 3900 Las Vegas Blvd. South, Las Vegas, 89109, USA. ISBN # 1-60132-514-2; American Council on Science & Education.

Table of Contents:
Keynote Addresses
Int'l Conf. on Applied Cognitive Computing (ACC)
Int'l Conf. on Bioinformatics & Computational Biology (BIOCOMP)
Int'l Conf. on Biomedical Engineering & Sciences (BIOENG)
Int'l Conf. on Scientific Computing (CSC)
SESSION: Military & Defense Modeling and Simulation
Int'l Conf. on e-Learning, e-Business, EIS & e-Government (EEE)
SESSION: Agile IT Service Practices for the cloud
Int'l Conf. on Embedded Systems, CPS & Applications (ESCS)
Int'l Conf. on Foundations of Computer Science (FCS)
Int'l Conf. on Frontiers
  • CatBoost for Big Data: an Interdisciplinary Review
CatBoost for Big Data: an Interdisciplinary Review. John T. Hancock ([email protected], https://orcid.org/0000-0003-0699-3042) and Taghi M. Khoshgoftaar ([email protected]). Florida Atlantic University, 777 Glades Road, Boca Raton, FL, USA. Survey paper. Keywords: CatBoost, Big Data, Categorical Variable Encoding, Ensemble Methods, Machine Learning, Decision Tree. Posted: October 24th, 2020. DOI: https://doi.org/10.21203/rs.3.rs-54646/v2. License: Creative Commons Attribution 4.0 International. Version of Record: a version of this preprint was published on November 4th, 2020; see https://doi.org/10.1186/s40537-020-00369-8.

Abstract: Gradient Boosted Decision Trees (GBDTs) are a powerful tool for classification and regression tasks in Big Data. Researchers should be familiar with the strengths and weaknesses of current implementations of GBDTs in order to use them effectively and make successful contributions. CatBoost is a member of the family of GBDT machine learning ensemble techniques. Since its debut in late 2018, researchers have successfully used CatBoost for machine learning studies involving Big Data. We take this opportunity to review recent research on CatBoost as it relates to Big Data, and learn best practices from studies that cast CatBoost in a positive light, as well as studies where CatBoost does not outshine other techniques, since we can learn lessons from both types of scenarios.
  • XGBoost Add-In for JMP Pro
XGBoost Add-In for JMP Pro

Overview: The XGBoost Add-In for JMP Pro provides a point-and-click interface to the popular XGBoost open-source library for predictive modeling with extreme gradient boosted trees. Value-added functionality includes:
- Repeated k-fold cross validation with out-of-fold predictions, plus a separate routine to create optimized k-fold validation columns
- Ability to fit multiple Y responses in one run
- Automated parameter search via JMP Design of Experiments (DOE) Fast Flexible Filling Design
- Interactive graphical and statistical outputs
- Model comparison interface
- Profiling
- Export of JMP Scripting Language (JSL) and Python code for reproducibility

What is XGBoost? Why use it? XGBoost is a scalable, portable, distributed, open-source C++ library for gradient boosted tree prediction written by the dmlc team; see https://github.com/dmlc/xgboost and "XGBoost: A Scalable Tree Boosting System". The original theory and applications were developed by Leo Breiman and Jerry Friedman in the late 1990s. XGBoost sprang from a research project at the University of Washington around 2015 and is now sponsored by Amazon, NVIDIA, and others. XGBoost has grown dramatically in popularity due to successes in nearly every major Kaggle competition, and in other competitions with tabular data, over the past five years. It has also been the top performer in several published studies, including results reported at https://github.com/rhiever/sklearn-benchmarks. We have done extensive internal testing of XGBoost within JMP R&D and obtained similar results. Two alternative open-source libraries with similar methodologies, LightGBM from Microsoft and CatBoost from Yandex, have also enjoyed strong and growing popularity and further validate the effectiveness of the advanced boosted tree methods available in XGBoost. A sketch of out-of-fold prediction follows.
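Out-of-fold prediction, which the add-in automates, is straightforward to sketch with the XGBoost Python package and scikit-learn (a minimal illustration on synthetic data, not the JMP add-in's internal routine):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)
oof = np.zeros_like(y)

# Each observation is predicted by a model that never saw it during training.
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X[train_idx], y[train_idx])
    oof[test_idx] = model.predict(X[test_idx])

rmse = np.sqrt(np.mean((y - oof) ** 2))
print(f"out-of-fold RMSE: {rmse:.2f}")
```

Because every prediction is out-of-fold, the resulting error estimate is an honest proxy for generalization, unlike in-sample residuals.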
  • Machine Learning Classification of Price Extrema Based on Market Microstructure and Price Action Features
Machine Learning Classification of Price Extrema Based on Market Microstructure and Price Action Features: A Case Study of S&P500 E-mini Futures (arXiv:2009.09993v3 [q-fin.TR], 14 May 2021). Artur Sokolovsky (Newcastle University, School of Computing, Newcastle upon Tyne, UK) and Luca Arnaboldi (The University of Edinburgh, School of Informatics, Edinburgh, UK).

Abstract: The study introduces an automated trading system for S&P500 E-mini futures (ES) based on state-of-the-art machine learning. Concretely, we extract a set of scenarios from the tick market data to train the models, and further use the predictions to statistically assess the soundness of the approach. We define the scenarios from the local extrema of the price action. Price extrema is a commonly traded pattern; however, to the best of our knowledge, there is no study presenting a pipeline for automated classification and profitability evaluation. Additionally, we evaluate the approach in a simulated trading environment on historical data. Our study fills this gap by presenting a broad evaluation of the approach, supported by statistical tools that make it generalisable to unseen data and comparable to other approaches.

1 Introduction: As machine learning (ML) changes and takes over virtually every aspect of our lives, we are now able to automate tasks that previously were only possible with human intervention. A field in which it has quickly gained traction and popularity is finance [18]. This field, often dominated by organisations with extreme expertise, knowledge and assets, is often considered out of reach for individuals due to the complex decision making and high risks. Extracting local price extrema, the starting point of such a pipeline, is sketched below.
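Labeling local extrema in a price series, the first step of the pipeline described above, can be sketched in a few lines (a simplified illustration using scipy; the paper's actual extrema definition and feature set are more involved):

```python
import numpy as np
from scipy.signal import argrelextrema

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=500)) + 100.0  # synthetic random-walk "price"

# A point is a local max/min if it is the largest/smallest
# within +/- `order` ticks of itself.
order = 10
maxima = argrelextrema(prices, np.greater, order=order)[0]
minima = argrelextrema(prices, np.less, order=order)[0]

print(f"{len(maxima)} local maxima, {len(minima)} local minima")
# Each extremum index could then seed a scenario window for feature extraction.
```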
  • Minimal Variance Sampling in Stochastic Gradient Boosting
Minimal Variance Sampling in Stochastic Gradient Boosting. Bulat Ibragimov (Yandex, Moscow, Russia; Moscow Institute of Physics and Technology) and Gleb Gusev (Sberbank, Moscow, Russia).

Abstract: Stochastic Gradient Boosting (SGB) is a widely used approach to regularization of boosting models based on decision trees. It was shown that, in many cases, random sampling at each iteration can lead to better generalization performance of the model and can also decrease the learning time. Different sampling approaches were proposed, where probabilities are not uniform, and it is not currently clear which approach is the most effective. In this paper, we formulate the problem of randomization in SGB in terms of optimization of sampling probabilities to maximize the estimation accuracy of split scoring used to train decision trees. This optimization problem has a closed-form nearly optimal solution, and it leads to a new sampling technique, which we call Minimal Variance Sampling (MVS). The method both decreases the number of examples needed for each iteration of boosting and increases the quality of the model significantly as compared to state-of-the-art sampling methods. The superiority of the algorithm was confirmed by introducing MVS as a new default option for subsampling in CatBoost, a gradient boosting library achieving state-of-the-art quality on various machine learning tasks.

1 Introduction: Gradient boosted decision trees (GBDT) [16] is one of the most popular machine learning algorithms, as it provides high-quality models in a large number of machine learning problems containing heterogeneous features, noisy data, and complex dependencies [31]. The flavor of gradient-based sampling is sketched below.
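To give the flavor of non-uniform, gradient-based subsampling (a loose sketch of the general idea, not the paper's closed-form MVS solution; the regularizer `lam` and the normalization are assumptions of this example):

```python
import numpy as np

def sample_by_gradient(grads, sample_rate=0.3, lam=1.0, rng=None):
    """Keep examples with probability roughly proportional to a
    regularized gradient magnitude, capped at 1; weight kept examples
    by 1/p so split-score estimates remain unbiased on average."""
    rng = rng or np.random.default_rng(0)
    score = np.sqrt(grads ** 2 + lam)              # regularized |gradient|
    p = score * (sample_rate * len(grads) / score.sum())
    p = np.clip(p, 0.0, 1.0)                       # large-gradient rows kept surely
    keep = rng.random(len(grads)) < p
    weights = np.where(keep, 1.0 / np.maximum(p, 1e-12), 0.0)
    return keep, weights

grads = np.random.default_rng(42).normal(size=1000)
keep, w = sample_by_gradient(grads)
print(f"kept {keep.sum()} of {len(grads)} examples")
```

The inverse-probability weights are the key trick: they let a small, biased-toward-informative sample stand in for the full data when scoring splits.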
  • Estimating the Pan Evaporation in Northwest China by Coupling CatBoost with Bat Algorithm
Water (journal), Article. Estimating the Pan Evaporation in Northwest China by Coupling CatBoost with Bat Algorithm. Liming Dong, Wenzhi Zeng, Guoqing Lei (State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan, China); Lifeng Wu (Nanchang Institute of Technology, Nanchang, China); Haorui Chen (State Key Laboratory of Simulation and Regulation of Water Cycle in River Basin, China Institute of Water Resources and Hydropower Research, Beijing, China); Amit Kumar Srivastava and Thomas Gaiser (Crop Science Group, Institute of Crop Science and Resource Conservation (INRES), University of Bonn, Germany). Correspondence: Wenzhi Zeng, Lifeng Wu.

Abstract: Accurate estimation of pan evaporation (Ep) is vital for the development of water resources and agricultural water management, especially in arid and semi-arid regions where it is difficult to set up facilities and measure pan evaporation accurately and consistently. Moreover, using pan evaporation estimation models together with pan coefficient (kp) models is a classic method to assess the reference evapotranspiration (ET0), which is indispensable to crop growth, irrigation scheduling, and economic assessment. This study evaluated the potential of a novel hybrid machine learning model coupling the Bat algorithm (Bat) with gradient boosting with categorical features support (CatBoost) for estimating daily pan evaporation in arid and semi-arid regions of northwest China.
  • CatBoost for Big Data: an Interdisciplinary Review
Hancock and Khoshgoftaar, J Big Data (2020) 7:94. https://doi.org/10.1186/s40537-020-00369-8. Survey paper, Open Access. CatBoost for big data: an interdisciplinary review. John T. Hancock ([email protected]) and Taghi M. Khoshgoftaar. Florida Atlantic University, 777 Glades Road, Boca Raton, FL, USA.

Abstract: Gradient Boosted Decision Trees (GBDTs) are a powerful tool for classification and regression tasks in Big Data. Researchers should be familiar with the strengths and weaknesses of current implementations of GBDTs in order to use them effectively and make successful contributions. CatBoost is a member of the family of GBDT machine learning ensemble techniques. Since its debut in late 2018, researchers have successfully used CatBoost for machine learning studies involving Big Data. We take this opportunity to review recent research on CatBoost as it relates to Big Data, and learn best practices from studies that cast CatBoost in a positive light, as well as studies where CatBoost does not outshine other techniques, since we can learn lessons from both types of scenarios. Furthermore, as a decision-tree-based algorithm, CatBoost is well-suited to machine learning tasks involving categorical, heterogeneous data. Recent work across multiple disciplines illustrates CatBoost's effectiveness and shortcomings in classification and regression tasks. Another important issue we expose in the literature on CatBoost is its sensitivity to hyper-parameters and the importance of hyper-parameter tuning. One contribution we make is to take an interdisciplinary approach to cover studies related to CatBoost in a single work. This provides researchers an in-depth understanding to help clarify proper application of CatBoost in solving problems. A basic tuning sketch follows.
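Since the review stresses CatBoost's sensitivity to hyper-parameters, a minimal tuning sketch using CatBoost's built-in grid search may be useful (synthetic data; the grid values are illustrative, not recommendations drawn from the review):

```python
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=1000)

model = CatBoostRegressor(iterations=300, verbose=False)
grid = {
    "depth": [4, 6, 8],
    "learning_rate": [0.03, 0.1],
    "l2_leaf_reg": [1, 3, 9],
}
# grid_search cross-validates each combination and refits the best one.
result = model.grid_search(grid, X=X, y=y, cv=3, verbose=False)
print("best params:", result["params"])
```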
  • CatBoost: Unbiased Boosting with Categorical Features
CatBoost: unbiased boosting with categorical features. Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, Andrey Gulin. Yandex, Moscow, Russia; Moscow Institute of Physics and Technology, Dolgoprudny, Russia. {ostroumova-la, gleb57, alvor88, annaveronika, gulin}@yandex-team.ru

Abstract: This paper presents the key algorithmic techniques behind CatBoost, a new gradient boosting toolkit. Their combination leads to CatBoost outperforming other publicly available boosting implementations in terms of quality on a variety of datasets. Two critical algorithmic advances introduced in CatBoost are the implementation of ordered boosting, a permutation-driven alternative to the classic algorithm, and an innovative algorithm for processing categorical features. Both techniques were created to fight a prediction shift caused by a special kind of target leakage present in all currently existing implementations of gradient boosting algorithms. In this paper, we provide a detailed analysis of this problem and demonstrate that the proposed algorithms solve it effectively, leading to excellent empirical results.

1 Introduction: Gradient boosting is a powerful machine-learning technique that achieves state-of-the-art results in a variety of practical tasks. For many years, it has remained the primary method for learning problems with heterogeneous features, noisy data, and complex dependencies: web search, recommendation systems, weather forecasting, and many others [5, 26, 29, 32]. Gradient boosting is essentially a process of constructing an ensemble predictor by performing gradient descent in a functional space. It is backed by solid theoretical results that explain how strong predictors can be built by iteratively combining weaker models (base predictors) in a greedy manner [17]. We show in this paper that all existing implementations of gradient boosting face the following statistical issue. The permutation-driven categorical encoding is sketched below.
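The paper's permutation-driven treatment of categorical features can be illustrated with a small sketch of ordered target statistics: each example's category is encoded using only the target values of examples that precede it in a random permutation, which avoids the target leakage described above (a simplified illustration; the `prior` smoothing follows the common formula and is not necessarily CatBoost's exact internals):

```python
import numpy as np

def ordered_target_stats(categories, targets, prior=0.5, a=1.0, seed=0):
    """Encode a categorical column with ordered target statistics:
    example i is encoded using only examples before it in a random
    permutation, so y_i never leaks into its own feature value."""
    n = len(categories)
    perm = np.random.default_rng(seed).permutation(n)
    sums, counts = {}, {}
    encoded = np.empty(n)
    for i in perm:
        c = categories[i]
        s, k = sums.get(c, 0.0), counts.get(c, 0)
        encoded[i] = (s + a * prior) / (k + a)  # smoothed mean of *previous* targets
        sums[c] = s + targets[i]
        counts[c] = k + 1
    return encoded

cats = np.array(["a", "b", "a", "a", "b", "c"])
y = np.array([1, 0, 1, 0, 1, 1], dtype=float)
print(ordered_target_stats(cats, y))
```

A plain (unordered) target mean would encode each row with a statistic that includes its own label, which is exactly the prediction shift the paper analyzes.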
  • Comparison of Gradient Boosting Decision Tree Algorithms for CPU Performance
Erciyes University Journal of Institute of Science and Technology (Erciyes Üniversitesi Fen Bilimleri Enstitüsü Dergisi), Volume 37, Issue 1, 2021.

Comparison of Gradient Boosting Decision Tree Algorithms for CPU Performance. Haithm Alshari (Eskişehir Osmangazi University, Department of Mathematics and Computer Science, Eskişehir), Abdulrazak Yahya Saleh (FSKPM Faculty, University Malaysia Sarawak (UNIMAS), Kota Samarahan, Sarawak), Alper Odabaş (Eskişehir Osmangazi University, Department of Mathematics and Computer Science, Eskişehir). Received: 14.02.2021; Accepted: 15.04.2021; Published Online: 28.04.2021.

Keywords: Decision Tree, Gradient Boosting, XGBoost, LightGBM, CatBoost.

Abstract: Gradient Boosting Decision Tree (GBDT) algorithms have been proven to be among the best algorithms in machine learning. XGBoost, the most popular GBDT algorithm, has won many competitions on websites like Kaggle. However, XGBoost is not the only GBDT algorithm with state-of-the-art performance. There are other GBDT algorithms that have more advantages than XGBoost and are sometimes even more potent, like LightGBM and CatBoost. This paper aims to compare the performance of CPU implementations of the top three gradient boosting algorithms. We start by explaining how the three algorithms work and the hyper-parameter similarities between them. Then we use a variety of performance criteria to evaluate their performance, divided into four categories: accuracy, speed, reliability, and ease of use. The performance of the three algorithms has been tested on five classification and regression problems. Our findings show that the LightGBM algorithm has the best performance of the three, with a balanced combination of accuracy, speed, reliability, and ease of use, followed by XGBoost with the histogram method, while CatBoost came last, with slow and inconsistent performance. A benchmark harness in this spirit is sketched below.
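In the spirit of this comparison, a minimal CPU benchmark harness for the three libraries could look like the following (synthetic data and near-default settings; the paper's actual datasets, tuning, and criteria are broader):

```python
import time
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor

X, y = make_regression(n_samples=20_000, n_features=50, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "XGBoost": XGBRegressor(n_estimators=300, tree_method="hist"),
    "LightGBM": LGBMRegressor(n_estimators=300),
    "CatBoost": CatBoostRegressor(iterations=300, verbose=False),
}

# Time the fit and score held-out accuracy for each library.
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    rmse = np.sqrt(np.mean((y_te - model.predict(X_te)) ** 2))
    print(f"{name:8s}  fit: {elapsed:6.2f}s  RMSE: {rmse:8.2f}")
```

Wall-clock timing of a single fit is only one of the paper's four criteria; reliability and ease of use require repeated runs and qualitative judgment.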