
Ensemble Predictors for Deep Neural Networks

Hector Calvo-Pardo∗   Tullio Mancini†   Jose Olmo‡

May 14, 2021

Abstract

The aim of this paper is to propose a novel prediction model based on an ensemble of deep neural networks. To do this, we adapt the extremely randomized trees method originally developed for random forests. The extra randomness introduced in the ensemble reduces the variance of the predictions and yields gains in out-of-sample accuracy. As a byproduct, we are able to compute the uncertainty about our model predictions and construct interval forecasts. An extensive Monte Carlo simulation exercise shows the good performance of this novel prediction method in terms of mean square prediction error and the accuracy of the prediction intervals in terms of out-of-sample coverage probabilities. This approach is superior to state-of-the-art methods in the literature such as the widely used Monte Carlo dropout and bootstrap procedures. The out-of-sample accuracy of the novel algorithm is further evaluated using experimental settings already adopted in the literature.

Keywords: Neural networks, ensemble methods, prediction interval, uncertainty quantification, dropout

arXiv:2010.04044v2 [stat.ML] 13 May 2021

∗ University of Southampton, Centre for Economic and Policy Research (CEPR), Centre for Population Change (CPC), and Institut Louis Bachelier (ILB).
† University of Southampton. Corresponding address: Department of Economics, University of Southampton, Highfield Campus, SO17 1BJ, Southampton. E-mail: [email protected]. Tullio Mancini acknowledges financial support from The University of Southampton Presidential Scholarship.
‡ University of Southampton and Universidad de Zaragoza. Jose Olmo acknowledges financial support from Fundación Aragonesa para la Investigación y el Desarrollo (ARAID).

1 Introduction

A popular and fruitful strategy in the prediction literature is model averaging. Steel (2020) distinguishes two main categories: Bayesian model averaging (see Leamer, 1978), where the model index is treated as unknown (and thus a prior is specified on both the model and the model parameters); and frequentist model averaging (see Wang et al., 2009; Dormann et al., 2018), where the predictions of a battery of different prediction models are ensembled. There is a well-established theoretical and empirical literature analyzing the predictive advantages of both Bayesian and frequentist approaches. Focusing on Bayesian model averaging, Min and Zellner (1993) show that the expected squared errors are minimized by Bayesian ensembles as long as the model underlying the data generating process is included in the model space. Fernandez et al. (2001) explain how Bayesian ensembles improve in terms of predictability over single models when dealing with growth data. More generally, the empirical study conducted by Raftery et al. (1997) highlights how Bayesian model averaging improves over single-model predictions. As explained in Steel (2020), one of the main advantages of Bayesian ensembling is the possibility of integrating the prior structure analytically; nonetheless, a vast model space may constitute a computational challenge due to the impossibility of a complete model enumeration, a problem often solved with Markov chain Monte Carlo (MCMC) methods.
Steel (2020) highlights that MCMC methods are not implementable for frequentist model averaging approaches (there is no estimation of model probabilities), ultimately limiting the possibility of applying frequentist approaches to a large number of models. As a result, the literature on frequentist ensembles has proposed methods aimed at reducing the model space (see, for example, Claeskens et al., 2006; Zhang et al., 2013; Zhang et al., 2016). Finally, when analyzing frequentist model averaging, it is essential to mention the research conducted by Stock and Watson (2004). After examining different weighting schemes found in the literature, these authors show that "optimally" estimated weights perform worse than equal weights in terms of out-of-sample mean squared error. Smith and Wallis (2009) attribute this "forecast combination puzzle" to the additional estimation uncertainty associated with estimating the "optimal" weights.

Similarly, model averaging in machine learning (e.g., boosting, bagging, random forests, and extremely randomized trees) aims to construct a predictive model by combining "weak" learners into a strong learner. As opposed to the aforementioned literature, model averaging in machine learning does not target the estimation of uncertainty about the parameter estimates or the identification of the model structure. Instead, it focuses mainly on point prediction/forecasting and the estimation of the associated uncertainty.

Neural networks are increasingly popular prediction models in the machine learning literature. These models are widely used in prediction tasks due to their unrivaled performance and flexibility in modeling complex unknown functions of the data. Although these methods provide accurate predictions, the development of tools to estimate the uncertainty around their predictions is still in its infancy. As explained in Hüllermeier and Waegeman (2020) and Pearce et al. (2018), out-of-sample pointwise accuracy is not enough¹. The predictions of deep neural network (DNN) models need to be supported by measures of uncertainty in order to provide satisfactory answers for prediction in high-dimensional regression models, pattern recognition, biomedical diagnosis, and other applications (see Schmidhuber (2015) and LeCun et al. (2015) for overviews of the topic).

¹ A trustworthy representation of uncertainty is pivotal when machine learning techniques are applied to medicine (Yang et al., 2009; Lambrou et al., 2011), to anomaly detection, optimal resource allocation, and budget planning (Zhu and Laptev, 2017), or to cyber-physical systems such as surgical robots, self-driving cars, and the smart grid (Varshney and Alemzadeh, 2017).

The present paper focuses on a machine learning approach to model prediction. We propose an ensemble of neural network models with the aim of improving the accuracy of existing model predictions from individual neural networks. A second main contribution of the present study is to assess the uncertainty about the predictions of these ensembles of neural network models and to construct interval forecasts. Our novel approach extends the Extra-trees algorithm (Geurts et al., 2006) to ensembles of deep neural networks using a fixed Bernoulli mask. To do this, we estimate T different sub-networks with randomized architectures (each network has different layer-specific widths) that are independently trained on the same dataset. The fixed Bernoulli mask thus introduces an additional randomization scheme into the prediction obtained from the ensemble of neural networks, ensuring independence between the components of the ensemble and thereby reducing the variance associated with the prediction and yielding accurate prediction intervals.
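To fix ideas, the following is a minimal sketch of this construction, not the implementation used in the paper: for each of the T ensemble members, a Bernoulli(p) mask is drawn once per hidden layer and fixed, so that the surviving units define a sub-network with its own randomized layer-specific widths; the members are trained independently on the same data, and prediction intervals are read off the empirical quantiles of the ensemble predictions. The PyTorch setting, the helper names (make_subnetwork, train_member), and the toy data are our own assumptions.

import torch
import torch.nn as nn

def make_subnetwork(in_dim, max_widths, p):
    """Draw a fixed Bernoulli(p) mask per hidden layer; the surviving units
    define a sub-network with its own randomized layer-specific widths."""
    layers, d = [], in_dim
    for h in max_widths:
        mask = torch.bernoulli(torch.full((h,), p))  # drawn once, then fixed
        w = max(int(mask.sum().item()), 1)           # keep at least one unit
        layers += [nn.Linear(d, w), nn.ReLU()]
        d = w
    layers.append(nn.Linear(d, 1))
    return nn.Sequential(*layers)

def train_member(net, X, y, epochs=200, lr=1e-3):
    """Train one ensemble member independently on the full dataset."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(X), y).backward()
        opt.step()
    return net

# Toy regression data (illustrative only).
torch.manual_seed(0)
X = torch.randn(500, 10)
y = X[:, :1].sin() + 0.1 * torch.randn(500, 1)

# T independently masked, independently trained sub-networks.
T, p, max_widths = 50, 0.8, [128, 128]
ensemble = [train_member(make_subnetwork(10, max_widths, p), X, y)
            for _ in range(T)]

# Point prediction and a 95% interval from the ensemble spread.
X_new = torch.randn(20, 10)
with torch.no_grad():
    preds = torch.stack([net(X_new) for net in ensemble])  # shape (T, 20, 1)
point = preds.mean(dim=0)
lower = preds.quantile(0.025, dim=0)
upper = preds.quantile(0.975, dim=0)

Because each mask is drawn once and kept fixed, randomness enters through the architectures and the weight initializations rather than through data resampling; this is precisely the contrast with the bootstrap drawn below.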
Additionally, based on the findings of Lee et al. (2015) and Lakshminarayanan et al. (2017), the novel procedure is expected to outperform bootstrap-based approaches in terms not only of estimation accuracy but also of uncertainty estimation. This is confirmed in our simulation experiments.

The competitors of our ensemble prediction model are found in the machine learning literature. In particular, we consider Monte Carlo dropout and bootstrap procedures as the benchmark models to beat in out-of-sample prediction exercises. Monte Carlo dropout approximates the predictive distribution of a target variable by fitting a deep or shallow network with dropout at both train and test time. Conversely, both the extra-neural network and bootstrap-based approaches approximate the target predictive distribution via ensemble methods. When comparing classical bootstrap approaches to the extra-neural network approach proposed in this paper, we notice that (i) both methods guarantee conditional randomness of the predicted outputs: the extra-neural network method does so through the Bernoulli random variables with probability p and random weight initialization, whereas the bootstrap does so through nonparametric data resampling and random weight initialization; (ii) by performing data resampling, the naive (nonparametric) bootstrap approach requires the assumption that observations are independent and identically distributed (i.i.d.); importantly, each single model is trained on only about 63% of the distinct observations of the original sample, because resampling with replacement includes any given observation with probability 1 - (1 - 1/n)^n ≈ 1 - e^(-1) ≈ 0.632; (iii) by randomizing the neural network structures, the extra-neural network approach increases the diversity among the individual learners (see Zhou (2012) for an analysis of diversity and ensemble methods); and (iv) the extra-neural network benefits from the generalization gains associated with dropout (one can think of the dropout approach of Srivastava et al. (2014) as an ensemble of sub-networks each trained for one gradient step).

To analyze the out-of-sample performance and the empirical coverage probabilities of the proposed methodologies, we carry out an extensive Monte Carlo exercise that evaluates Monte Carlo dropout, the bootstrap approach, and the extra-neural network for both deep and shallow neural networks given different dropout rates and data generating processes. The simulation results show that all three procedures return prediction intervals whose empirical coverage is approximately equal to the theoretical coverage.
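For reference, a minimal sketch of the Monte Carlo dropout benchmark described above follows, under the same illustrative PyTorch setting as before (the class and function names are ours): the network is trained with dropout as usual, dropout is then kept active at test time, and the predictive distribution is approximated by aggregating B stochastic forward passes.

import torch
import torch.nn as nn

class DropoutNet(nn.Module):
    """Feed-forward regressor with dropout after every hidden layer."""
    def __init__(self, in_dim, width=128, rate=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(), nn.Dropout(rate),
            nn.Linear(width, width), nn.ReLU(), nn.Dropout(rate),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.body(x)

def mc_dropout_predict(net, X_new, B=1000, alpha=0.05):
    """Keep dropout active at test time and aggregate B stochastic passes
    into a point prediction and a (1 - alpha) prediction interval."""
    net.train()  # leaves the Dropout layers switched on at prediction time
    with torch.no_grad():
        draws = torch.stack([net(X_new) for _ in range(B)])  # (B, n, 1)
    return (draws.mean(dim=0),
            draws.quantile(alpha / 2, dim=0),
            draws.quantile(1 - alpha / 2, dim=0))

After training a DropoutNet, one would call, e.g., mean, lo, hi = mc_dropout_predict(net, X_new). In contrast with both the bootstrap and the extra-neural network, no ensemble of separately trained models is involved: each forward pass of the same trained network simply draws a fresh dropout mask.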