50 Years of Data Analysis: from EDA to Predictive Modelling and Machine Learning


Gilbert Saporta
CEDRIC-CNAM, 292 rue Saint Martin, F-75003 Paris
[email protected]
http://cedric.cnam.fr/~saporta

Data analysis vs mathematical statistics
• An international movement in reaction to the abuses of formalization
• Let the data speak
• Computerized statistics

• « He (Tukey) seems to identify statistics with the grotesque phenomenon generally known as mathematical statistics and find it necessary to replace statistics by data analysis » (Anscombe, 1967) — John Wilder Tukey (1915-2000)
• « Statistics is not probability; under the name of mathematical statistics a pompous discipline was built, based on theoretical assumptions that are rarely met in practice » (Benzécri, 1972)
• « Data analysis is a tool to release from the gangue of the data the pure diamond of the true nature » — Jean-Paul Benzécri (1932- )

Related developments took place in Japan, the Netherlands, Canada, Italy, …: Chikio Hayashi (1918-2002), Jan de Leeuw (1945- ), Shizuhiko Nishisato (1935- ), Carlo Lauro (1943- ).

Meetings
• Data Analysis and Informatics, from 1977 (Edwin Diday, Ludovic Lebart)
• ASMDA, from 1981 (Jacques Janssen)

Part 1. Exploratory data analysis

« Data Analysis »: a collection of unsupervised methods for dimension reduction
• « Factor » analysis: PCA, CA, MCA
• Cluster analysis: k-means partitioning, hierarchical clustering

…and came the time of syntheses:
• All (factorial) methods are particular cases of PCA and of canonical correlation analysis (J. Douglas Carroll (1939-2011), 1976)
• Even more methods are particular cases of the maximum association principle:
  \max_X \sum_{j=1}^{p} \Phi(Y_j, X)
  M. Tenenhaus (1977), J.F. Marcotorchino (1986), G.S. (1988)

A few cases (analysis and criterion):
• PCA: \max_c \sum_{j=1}^{p} r^2(c, x_j), with the x_j numerical
• MCA: \max_c \sum_{j=1}^{p} \eta^2(c, x_j), with the x_j categorical
• GCA (Carroll): \max_c \sum_{j=1}^{p} R^2(c, X_j), with the X_j data sets
• Central partition: \max_Y \sum_{j=1}^{p} \mathrm{Rand}(Y, Y_j), with Y and the Y_j categorical
• Condorcet aggregation rule: \max_y \sum_{j=1}^{p} \tau(y, y_j), with the y_j rank orders

…and the time of clusterwise methods:
• Looking simultaneously for a partition and k local models instead of a global model (PCA, regression, etc.)
• Diday, 1974; Charles, 1977; Späth, 1979; DeSarbo & Cron, 1988; Preda & S., 2005; Bougeard, Niang & S., 2017 (adapted from Hennig, 2000)

…followed by the time of extensions to new types of data
• PCA and MCA of functional data (Jean-Claude Deville, Jim Ramsay)
  (Figures: numerical trajectories x_i(t) observed on [0, T], and categorical state trajectories over time.)
• Symbolic data: intervals, histograms, distributions, etc. (Bock, Diday, Billard, …)
• Textual data

Towards non-linear data analysis
• Semi-linear PCA (Dauxois & Pousse 1976, Gifi 1990): \arg\max V\left(\sum_{j=1}^{p} \Phi_j(x_j)\right) over transformations \Phi_j, instead of \arg\max V\left(\sum_{j=1}^{p} a_j x_j\right) over coefficients a_j
• Kernel PCA (Schölkopf, Smola & Müller, 1998): metric (Torgerson) MDS in the feature space, where the dot product k(x, y) is a simple function of \langle x, y \rangle
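As a concrete illustration of kernel PCA seen as metric (Torgerson) MDS in the feature space, here is a minimal sketch in Python/NumPy on synthetic data. It is not from the original slides; the RBF kernel, the function name kernel_pca and the parameters gamma and n_components are illustrative assumptions.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal kernel PCA sketch: RBF kernel, double centering, eigendecomposition."""
    # Pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)            # RBF kernel matrix = dot products in the feature space

    # Double centering of the kernel matrix, as in Torgerson's metric MDS
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J

    # Eigendecomposition of the centered kernel matrix, keep the leading components
    eigval, eigvec = np.linalg.eigh(Kc)
    idx = np.argsort(eigval)[::-1][:n_components]
    eigval, eigvec = eigval[idx], eigvec[:, idx]

    # Scores (principal coordinates in the feature space)
    return eigvec * np.sqrt(np.maximum(eigval, 0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    scores = kernel_pca(X, n_components=2, gamma=0.5)
    print(scores.shape)  # (100, 2)
```

Replacing the RBF kernel by the plain dot product K = X @ X.T recovers the ordinary PCA scores of the centered data, which is exactly the sense in which kernel PCA is metric MDS carried out in the feature space.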
The time of sparse methods
• Inspired by the Lasso
• Useful for high-dimensional data, where ordinary components suffer from lack of interpretability and unstable results
• Sparse PCA (Zou et al., 2006):
  \hat{\beta} = \arg\min \| z - X\beta \|^2 + \lambda \|\beta\|^2 + \lambda_1 \|\beta\|_1
  The algorithm alternates an SVD step (for z) and an elastic-net step (for β)
• Sparse multiple correspondence analysis (Bernard, Guinot & S., 2012)

Application on genetic data (Single Nucleotide Polymorphisms)
Data: n = 502 individuals, p = 537 SNPs (among more than 800 000 in the original data base, 15 000 genes), q = 1554 (total number of columns)
X: 502 × 537 matrix of qualitative variables
K: 502 × 1554 complete disjunctive table, K = (K1, …, K1554); 1 block = 1 SNP = 1 Kj matrix
Comparison of the loadings.

Part 2. Predictive modelling and machine learning

A continuation of Data Analysis
• « the models should follow the data, not vice versa » (JPB principle n°2 *)
• « using the computer implies abandoning all the techniques designed before the age of computing » (JPB principle n°5 *)
• Data-driven methods vs hypothesis-driven methods
• No (or few) prespecified distributional assumptions
* Translated by C. Lauro: https://www.researchgate.net/post/The_origin_of_Data_Science_the_5_principles_of_Data_Analysis_Analyse_des_donnees_by_JP_Benzecri

The two cultures (Leo Breiman, 1928-2005)
• The generative modelling culture seeks to develop stochastic models which fit the data, and then make inferences about the data-generating mechanism based on the structure of those models. Implicit (…) is the notion that there is a true model generating the data, and often a truly « best » way to analyze the data.
• The predictive modelling culture is silent about the underlying mechanism generating the data, and allows for many different predictive algorithms, preferring to discuss only the accuracy of predictions made by different algorithms on various datasets. Machine learning is identified by Breiman as the epicenter of the predictive modelling culture. (From Donoho, 2015)

• Standard conception (models for understanding): provide some comprehension of the data and their generative mechanism through a parsimonious representation. A model should be simple and its parameters interpretable for the specialist: elasticity, odds-ratio, etc.
• In « Big Data Analytics » the focus is on prediction for new observations (generalization); models are merely algorithms. (Cf. GS, Compstat 2008)

Same formula, y = f(x; θ) + ε, two readings:
• Generative modelling: underlying theory; narrow set of models; focus on parameter estimation and goodness of fit (predict the past); error = white noise
• Predictive modelling: models come from data; algorithmic models; focus on control of the generalization error (predict the future); error = minimal

Paradigms and paradoxes
• Understanding but predicting poorly: a model with a good fit may provide poor predictions at an individual level (e.g. epidemiology)
• Predicting without understanding? Good predictions may be obtained with uninterpretable models (targeting customers, or approving loans, does not need a consumer theory)
• Simplicity: « Occam's Razor, long admired, is usually interpreted to mean that simpler is better. Unfortunately in prediction, accuracy and simplicity (interpretability) are in conflict » (Breiman, 2001)

The black box model (Vladimir Vapnik, 1936- )
• Let y = f(x) + ε be an unknown generative model. One looks for a good approximation of the black box rule.
• Two very different concepts: being close to the true f, and providing good enough predictions (mimicking the black box behaviour).
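The following small simulation (not from the original slides; the data, sample sizes and the ridge penalty alpha are invented for illustration) makes the black-box point concrete: even when the true mechanism is linear, a deliberately biased, regularised model can predict future observations better than ordinary least squares, the estimator that matches the true mechanism.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, p = 150, 60                                  # modest sample, many predictors
beta = rng.normal(scale=0.3, size=p)            # true coefficients: many weak effects
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=1.0, size=n)    # true generative model: linear + white noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)        # estimator matched to the true mechanism
ridge = Ridge(alpha=10.0).fit(X_tr, y_tr)       # deliberately biased, regularised alternative

print("OLS   test MSE:", mean_squared_error(y_te, ols.predict(X_te)))
print("Ridge test MSE:", mean_squared_error(y_te, ridge.predict(X_te)))
# The biased model usually predicts better here, although OLS "reproduces" the true form.
```

This is the sense in which being close to the true f and providing good predictions are two different goals.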
• Modern statistical thinking makes a clear distinction between the statistical model and the world. The actual mechanisms underlying the data are considered unknown. The statistical models do not need to reproduce these mechanisms to emulate the observable data (Breiman, 2001).
• Better models are sometimes obtained by deliberately avoiding reproducing the true mechanisms (Vapnik, 2006).
• Statistical significance plays a minor or no role in assessing predictive performance. In fact, it is sometimes the case that removing inputs with small coefficients, even if they are statistically significant, results in improved prediction accuracy (Shmueli, 2010).

Too big?
• Estimation and tests become useless: everything is significant!
• With n = 10^6, a correlation coefficient of 0.002 is significantly different from 0, but without any interest
• Usual distributional models are rejected, since small discrepancies between model and data are significant
• Confidence intervals have zero length
• George Box: « All models are wrong, some are useful »

Meta or ensemble models
• Bagging, boosting, random forests, etc. dramatically improve simple models
• Stacking (Wolpert, Breiman) linearly combines predictions \hat{f}_1(x), \hat{f}_2(x), \ldots, \hat{f}_m(x) coming from various models (linear, trees, kNN, neural networks, etc.)
• First idea, OLS:
  \min_w \sum_{i=1}^{n} \left( y_i - \sum_{j=1}^{m} w_j \hat{f}_j(x_i) \right)^2
  This favours the most complex models, with a risk of overfitting.
• Better solution: use leave-one-out predicted values
  \min_w \sum_{i=1}^{n} \left( y_i - \sum_{j=1}^{m} w_j \hat{f}_j^{(-i)}(x_i) \right)^2
• Improvements (Noçairi, Gomes, Thomas & S., 2016): nonnegative coefficients adding to 1; regularised regression (e.g. PLS), since the predictions are usually highly correlated

Empirical validation
• Combining machine learning and statistics: a good model must give good predictions
• Bootstrap, cross-validation, etc.
• Learning and validation sets

The three samples procedure for selecting a model inside a family of models (see the sketch at the end of this section)
• Learning set: estimate the parameters of all models in competition
• Test set: choose the best model in terms of prediction
• NB: the final model is then re-estimated with all available observations
• Validation set: estimate the performance for future data (« generalization »)
• Parameter estimation ≠ performance estimation
• One split is not enough!

Elementary? Not that sure… Have a look at publications in econometrics, epidemiology, etc.: prediction is rarely checked on a hold-out sample (except in time series forecasting).
Forerunners:
• « the usefulness of a prediction procedure is not established when it is found to predict adequately on the original sample; the necessary next step must be its application to at least a second group. Only if it predicts adequately on subsequent samples can the value of the procedure be regarded as established » (Horst, 1941)
• Leave-one-out: Lachenbruch & Mickey, 1968
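Below is a minimal sketch of the three samples procedure, using scikit-learn on synthetic data. Everything in it (the data, the two competing models, the split proportions) is an illustrative assumption rather than part of the original text, and "re-estimation with all available observations" is interpreted here as refitting on learning + test so that the validation set stays untouched.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=600)

# Three samples: learning (50%), test (25%), validation (25%)
X_learn, X_rest, y_learn, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_test, X_valid, y_test, y_valid = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

models = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# 1) Learning set: estimate the parameters of every competing model
# 2) Test set: choose the best model in terms of prediction
scores = {name: mean_squared_error(y_test, m.fit(X_learn, y_learn).predict(X_test))
          for name, m in models.items()}
best_name = min(scores, key=scores.get)

# 3) Re-estimate the chosen model on the observations used so far (learning + test)
best = models[best_name].fit(np.vstack([X_learn, X_test]),
                             np.concatenate([y_learn, y_test]))

# 4) Validation set: estimate the performance expected on future data ("generalization")
print(best_name, "validation MSE:", mean_squared_error(y_valid, best.predict(X_valid)))
```

The point of the separate validation set is exactly the slide's warning that parameter estimation and performance estimation are different tasks: the test set has already been used to pick the winner, so only the untouched validation set gives an honest estimate of future performance.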