Ecological Modelling 230 (2012) 50-62
doi:10.1016/j.ecolmodel.2012.01.013

Metrics for evaluating performance and uncertainty of Bayesian network models

Bruce G. Marcot
U.S. Forest Service, Pacific Northwest Research Station, 620 S.W. Main Street, Portland, OR, United States

Article history: Received 12 September 2011; received in revised form 10 January 2012; accepted 11 January 2012.

Keywords: Bayesian network model; Uncertainty; Model performance; Model validation; Sensitivity analysis; Error rates; Probability analysis

Abbreviations: AIC, Akaike information criterion; AUC, area under the (receiver operating) curve; BIC, Bayesian information criterion; BN, Bayesian network; CPT, conditional probability table; GCM, global circulation model; GHG, greenhouse gas; PPCI, posterior probability certainty index; PPCImax, maximum PPCI value given one or more state probability values; PPCImin, minimum PPCI value given one or more state probability values; PPD, posterior probability distribution; ROC, receiver operating characteristic (curve); SP, spherical payoff; TSS, true skill statistic; VR, variance reduction.

Abstract

This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model variables, links, node states, conditional probabilities, and node cliques); assessing prediction performance (confusion tables, covariate- and conditional probability-weighted confusion error rates, area under receiver operating characteristic curves, k-fold cross-validation, spherical payoff, Schwarz' Bayesian information criterion, true skill statistic, Cohen's kappa); and evaluating uncertainty of model posterior probability distributions (Bayesian credible interval, posterior probability certainty index, certainty envelope, Gini coefficient). Examples are presented of applying the metrics to 3 real-world models of wildlife population analysis and management. Using such metrics can vitally bolster model credibility, acceptance, and appropriate application, particularly when informing management decisions.

Published by Elsevier B.V.

1. Introduction

Bayesian networks (BNs) are models that link variables with probabilities and that use Bayes' theorem and associated Bayesian learning algorithms to calculate posterior probabilities of outcome states (Jensen and Nielsen, 2007). BN models are used in many ecological and environmental analyses (Aalders, 2008; McCann et al., 2006; Pourret et al., 2008), in part spurred by the availability of user-friendly computer modeling shells such as Hugin (www.hugin.com), Netica (www.norsys.com), and others, and use of the WinBUGS open-source modeling platform (www.mrc-bsu.cam.ac.uk/bugs). As their popularity increases, it becomes more important to ensure rigor in their application to real-world problems (Uusitalo, 2007). Two such areas addressed here are methods for evaluating performance and uncertainty of BN model results. Performance pertains to how well a BN model predicts or diagnoses some outcome, that is, the accuracy of model results. Uncertainty pertains to the dispersion of posterior probability values among different outcome states, that is, the spread of alternative predictions. Ideally, the best model would have high performance and low uncertainty, but to date their measures are either lacking or have not been well summarized.

The purpose of this paper is to present a selected set of existing and new metrics for gauging BN model performance and uncertainty, including: assessment of model sensitivity and influence of input variables; various measures of model complexity, prediction performance, error rates, model selection, and model validation; and various metrics for depicting uncertainty of model output. I demonstrate application of the metrics to published, real-world BN models and their degree of correlation and performance characteristics. I then summarize the utility and caveats of the metrics and conclude with the need for considering such metrics to bolster model credibility, acceptance, and appropriate application, particularly when informing management decisions.

2. Methods

2.1. Background on Bayesian network models

BN models can vary in their construction but most consist of variables represented as nodes with discrete, mutually exclusive states (Cain et al., 1999). Each state is represented with a probability. Types of variables ("nodes") in a BN model include: inputs (covariates, prediction variables) with states comprised of unconditional, marginal, prior probabilities; outputs (response variables) with states calculated as posterior probabilities; and, in many models, intermediate summary nodes (latent variables), with states comprised of conditional probabilities (Marcot et al., 2006). Variables also can constitute scalars and continuous equations.

Nodes are linked according to direct causal or correlative relations between variables. BN model structure - including selection and linkage of variables and their states, and their underlying probability values - can be defined by expert judgment, use of empirical data, or a combination. BN decision models include decision nodes and utility nodes. Before calculations can be made of posterior outcome probabilities, most nodes in a BN model must be "discretized," whereby continuous values are represented as discrete states or value ranges. Running a BN model typically consists of specifying input values and calculating the posterior probability distribution (PPD) of the outcome variable(s). For a given application, the set of expected or normal values of the input variables constitutes the "normative" model scenario (e.g., as used by Jay et al., 2011).
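As a concrete illustration of the preceding description, the short Python sketch below computes the PPD of a single output node in a toy two-node network by marginalizing its conditional probability table over the input node's distribution. The node names, states, and probabilities are hypothetical, and the calculation is a minimal sketch rather than the procedure of any particular modeling shell.

```python
# Minimal sketch (not from the paper): computing a posterior probability
# distribution (PPD) for a two-node BN by marginalizing over a parent node.
# Node names, states, and probabilities are hypothetical.

# Prior (marginal) probabilities for the input node "Habitat"
habitat_prior = {"good": 0.6, "poor": 0.4}

# Conditional probability table for the output node "Occupancy" given "Habitat"
occupancy_cpt = {
    "good": {"present": 0.8, "absent": 0.2},
    "poor": {"present": 0.3, "absent": 0.7},
}

def posterior(habitat_dist):
    """Return the PPD of Occupancy given a distribution over Habitat."""
    ppd = {"present": 0.0, "absent": 0.0}
    for h_state, p_h in habitat_dist.items():
        for o_state, p_o in occupancy_cpt[h_state].items():
            ppd[o_state] += p_h * p_o
    return ppd

# Normative scenario: expected (prior) values of the input variable
print(posterior(habitat_prior))               # {'present': 0.6, 'absent': 0.4}

# Entering a finding: input fixed to "poor" habitat
print(posterior({"good": 0.0, "poor": 1.0}))  # {'present': 0.3, 'absent': 0.7}
```

In this framing, entering a finding for an input variable simply replaces its prior distribution with a degenerate one, which is the sense in which specifying input values and recalculating the PPD constitutes a model run.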
2.2. Metrics of model sensitivity and influence

2.2.1. Sensitivity analysis

Sensitivity analysis in BN modeling pertains to determining the degree to which variation in PPDs is explained by other variables, and essentially depicts the underlying probability structure of a model given prior probability distributions. Model sensitivity can be calculated as variance reduction with continuous variables or entropy reduction with ordinal-scale or categorical variables.

As used in the modeling shell Netica (B. Boerlage, pers. comm.), variance reduction (VR) is calculated as the expected reduction in the variation of the expected real value of an output variable Q that has q states, due to the value of an input variable F that has f states. The calculation is

VR = V(Q) - V(Q|F),

where V(Q) = Σ_q P(q)[X_q - E(Q)]², V(Q|F) = Σ_q P(q|f)[X_q - E(Q|f)]², and E(Q) = Σ_q P(q)X_q, in which X_q is the numeric real value of state q, E(Q) is the expected real value of Q before applying new findings, E(Q|f) is the expected real value of Q after applying new findings f for variable F, and V(Q) is the variance of the real value of Q before any new findings. Entropy reduction, I, is the expected reduction in mutual information of Q from a finding for variable F, calculated as

I = H(Q) - H(Q|F) = Σ_q Σ_f P(q, f) log₂[P(q, f) / (P(q) P(f))],

where H denotes entropy.
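The following Python sketch restates the variance-reduction and entropy-reduction calculations defined above for one input variable F and one output variable Q. The joint distribution P(q, f) and the real values X_q assigned to the output states are invented for the example; this is a direct transcription of the formulas, not the Netica implementation.

```python
import math

# Illustrative joint distribution P(q, f) over output states q and input states f,
# and hypothetical numeric real values X_q for the output states.
joint = {
    ("low", "dry"): 0.30, ("low", "wet"): 0.10,
    ("high", "dry"): 0.20, ("high", "wet"): 0.40,
}
x = {"low": 10.0, "high": 50.0}

# Marginals P(q) and P(f)
p_q = {q: sum(p for (qq, f), p in joint.items() if qq == q) for q in x}
p_f = {}
for (q, f), p in joint.items():
    p_f[f] = p_f.get(f, 0.0) + p

def variance(dist):
    """Variance of the real value of Q under a distribution over its states."""
    mean = sum(dist[q] * x[q] for q in dist)
    return sum(dist[q] * (x[q] - mean) ** 2 for q in dist)

# V(Q): variance of the real value of Q before any findings
v_q = variance(p_q)

# V(Q|F): expected variance after a finding f, weighted by P(f)
v_q_given_f = 0.0
for f, pf in p_f.items():
    cond = {q: joint[(q, f)] / pf for q in x}   # P(q | f)
    v_q_given_f += pf * variance(cond)

vr = v_q - v_q_given_f                          # variance reduction

# Entropy reduction I = mutual information between Q and F, in bits
mi = sum(p * math.log2(p / (p_q[q] * p_f[f]))
         for (q, f), p in joint.items() if p > 0)

print(f"VR = {vr:.1f}, entropy reduction = {mi:.3f} bits")
```

Variance reduction would be reported for a continuous (real-valued) output node, and entropy reduction for an ordinal or categorical one, as noted above.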
2.2.2. Influence analysis

Influence analysis entails specifying values of one or more input variables as alternative scenarios and comparing the resulting PPDs with those of the normative scenario and from other scenario settings. The difference between influence analysis and sensitivity analysis is that specifying the value of an input variable forces that variable's sensitivity value (variance or entropy reduction) to zero, whereas it still may have a high influence on the PPD outcome. Conducting influence runs can help reveal the degree to which individual or sets of input variables could affect outcome probabilities. This is helpful in a decision setting, where management might prioritize activities to best effect desirable, or to avoid undesirable, outcomes.

2.3. Metrics of model complexity

Much of ecological statistical modeling strives to balance accuracy with parsimony in explanation of some outcome (Burnham and Anderson, 2010), because overly complex models can perform poorly (Adkison, 2009). The parsimony criterion refers to identifying the simplest model that still provides acceptable results, and can be depicted by several metrics of BN model complexity.

Two simple metrics of BN model complexity are number of variables (nodes) and number of links. More involved metrics include total numbers of: node states (of categorical, ordinal, and discretized continuous states of all variables), conditional probabilities (excluding marginal prior probabilities), and node cliques (subsets of fully interconnected nodes). The total number of conditional probabilities is

CP = Σ_{v=1..V} ( S × Π_{j=1..n} P_j ),

where S = number of states of the child node and P_j = number of states of the jth parent node, for n parent nodes, among all V nodes in the model.

Further, any of these metrics of model complexity could be partitioned by type of node (nature, decision, utility, or constant) involved. Overall, metrics of model complexity are not necessarily correlated. For example, a model with n nodes could be structured (linked) in many different ways, with nodes bearing few to many states. Thus, using >1 metric of model complexity can help to represent the fuller array of model architectures when addressing the parsimony criterion.
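As an illustration of these counts, the sketch below derives the number of nodes, links, node states, and conditional probabilities from a hypothetical four-node structure, applying the summation above. The example network, node names, and state counts are invented; parentless nodes are excluded from the conditional-probability count because they carry only marginal priors.

```python
# Illustrative sketch (not from the paper) of the BN complexity metrics just
# described, computed from a simple description of model structure.

# Number of states per node, and the parents of each node (hypothetical)
states = {"Habitat": 3, "Disturbance": 2, "Prey": 4, "Occupancy": 2}
parents = {"Habitat": [], "Disturbance": [], "Prey": ["Habitat"],
           "Occupancy": ["Habitat", "Disturbance", "Prey"]}

n_nodes = len(states)
n_links = sum(len(p) for p in parents.values())
n_states = sum(states.values())

def n_conditional_probabilities(states, parents):
    """Sum over nodes with parents of S times the product of parent state counts."""
    total = 0
    for node, pars in parents.items():
        if pars:
            cpt_size = states[node]      # S, states of the child node
            for p in pars:
                cpt_size *= states[p]    # times P_j for each parent
            total += cpt_size
    return total

print(n_nodes, n_links, n_states, n_conditional_probabilities(states, parents))
# 4 nodes, 4 links, 11 states, 60 conditional probabilities (12 for Prey, 48 for Occupancy)
```

The example makes the point in the text concrete: two models with the same number of nodes and links can differ widely in total conditional probabilities, so no single count captures model complexity on its own.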
