Analysis of Classification Models

The Essentials of Data Analytics and Machine Learning
[A guide for anyone who wants to learn practical machine learning using R]

Author: Dr. Mike Ashcroft
Editor: Ali Syed

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.

© 2016 Dr. Michael Ashcroft and Persontyle Limited

Module 10: Analysis of Classification Models

We introduced misclassification error in module 9, where we also discussed the use of mean squared error for probability estimates. These are the most common error scores used for fitting parameters. You should note that misclassification error is simply the complement of the accuracy statistic discussed in the next section (error rate = 1 − accuracy), and hence shares its problems.

In this module we will see a number of alternative performance measures, including precision, recall, and other confusion-matrix-based statistics, as well as AUROC. It is also possible to use the Gini index and cross-entropy as error scores. However, these are most commonly used in the generation of decision trees, so we defer introducing them until we discuss tree-based methods in module 15. Besides basic performance measures, we also discuss cost-weighted performance optimization, confidence intervals, and significance tests for comparing two classifiers.

Confusion Matrices

Confusion matrices are an excellent way of providing all information about the performance of your classifier on test data. They provide a matrix of actual values vs classified values. These classified values may include special values, such as "uncertain", if your model outputs such. The simplest versions are binary confusion matrices. When we are seeking to classify objects as being of a class or not, a number of terms are associated with the different elements:

| Actual Class \ Predicted Class | F | T |
|---|---|---|
| F | TN – true negative | FP – false positive |
| T | FN – false negative | TP – true positive |

False positives and false negatives are also called type I and type II errors respectively.

A number of statistics can be read directly off such confusion matrices. Unfortunately, the terminology differs between fields:

| Statistic | Formula |
|---|---|
| Accuracy | (TN+TP)/(TN+FP+FN+TP) |
| Error rate | (FN+FP)/(TN+FP+FN+TP) |
| Recall / Sensitivity / True Positive Rate / Hit Rate (TPR) | TP/(TP+FN) |
| Specificity / True Negative Rate (TNR) | TN/(TN+FP) |
| Precision / Positive Predictive Value | TP/(TP+FP) |
| Negative Predictive Value | TN/(TN+FN) |
| False Omission Rate | FN/(TN+FN) |
| False Discovery Rate | FP/(FP+TP) |
| Fall Out / False Positive Rate (FPR) | FP/(FP+TN) |
| False Negative Rate / Miss Rate (FNR) | FN/(FN+TP) |
| Positive Likelihood Ratio (LR+) | TPR/FPR |
| Negative Likelihood Ratio (LR–) | FNR/TNR |
| Diagnostic Odds Ratio | LR+/LR– |
| F1 Score | 2TP/(2TP+FP+FN) |
| Balanced Accuracy | (TPR+TNR)/2 |
| Informedness | TPR+TNR−1 |

The most important of these, and the terms we will use for them, are accuracy, recall, and precision. The importance of accuracy should be obvious, but in fact recall and precision are generally the most important statistics of a classifier. Imagine we are testing for a rare cancer that occurs in only 1 in 10000 individuals. A classifier that always estimates that the cancer is absent will have 99.99% accuracy on future data and yet is completely useless.
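To make these definitions concrete, here is a minimal base-R sketch (our own illustration, not code from the module; the function name confusion_stats is ours) that computes the headline statistics from the four cell counts, applied to the rare-cancer example above:

```r
# A minimal sketch (not from the module): computing the key statistics
# directly from the four cells of a binary confusion matrix.
confusion_stats <- function(TP, FP, FN, TN) {
  TPR <- TP / (TP + FN)                  # recall / sensitivity
  TNR <- TN / (TN + FP)                  # specificity
  list(
    accuracy          = (TP + TN) / (TP + FP + FN + TN),
    recall            = TPR,
    specificity       = TNR,
    precision         = TP / (TP + FP),  # positive predictive value
    f1                = 2 * TP / (2 * TP + FP + FN),
    balanced_accuracy = (TPR + TNR) / 2,
    informedness      = TPR + TNR - 1
  )
}

# The classifier that always predicts "cancer absent", applied to
# 10000 individuals of whom exactly 1 has the cancer:
confusion_stats(TP = 0, FP = 0, FN = 1, TN = 9999)
# accuracy = 0.9999, recall = 0, precision = NaN (0/0: no positive predictions)
```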
It has high accuracy, but zero recall (and its precision is undefined, since it never makes a positive prediction).

Let us imagine a second situation, where we attempt to identify aliased social media accounts (different accounts that belong to the same individual). To do this, we create a classifier that classifies whether pairs of accounts are aliases or not. We train and test the model on balanced data, where half of the pairs are aliased and half are not. Let us imagine our results are:

| Actual \ Predicted | Not Aliased | Aliased |
|---|---|---|
| Not Aliased | 490 | 10 |
| Aliased | 20 | 480 |

Here our statistics are:

Accuracy: 970/1000 = 97%
Recall: 480/500 = 96%
Precision: 480/490 = 97.96%

We also have:

True Negative Rate: 490/500 = 98%

The recall (true positive rate) and true negative rate provide us with estimates of the class accuracy of the classifier. That is to say, the accuracy at predicting unaliased accounts as unaliased accounts, and aliased accounts as aliased accounts.

Let us consider what these rates would mean when dealing with wild, unbalanced data. It is likely that very few pairs of accounts are actually aliased. Let's estimate the real number at 1 in a million. If we used our classifier on 1 billion pairs of accounts in the wild, with NA = 999,999,000 not-aliased pairs and A = 1,000 aliased pairs, we would expect:

| Actual \ Predicted | Not Aliased | Aliased |
|---|---|---|
| Not Aliased (NA = 999,999,000) | (490/500)·NA = 979,999,020 | (10/500)·NA = 19,999,980 |
| Aliased (A = 1,000) | (20/500)·A = 40 | (480/500)·A = 960 |

With statistics:

Accuracy: ≈ 98%
Recall: 96%
Precision: 960/20,000,940 ≈ 0.0048%

With the class balance found in the wild, our precision is washed away by the inevitable deluge of false positives due to the massive preponderance of the negative class. In such a situation it is clear that we should value precision far more highly than recall when evaluating our models.

Of course, we have made things difficult for ourselves by using a balanced dataset for training and testing. But this may be suitable, and certainly we will need to use data with a far more equal balance than that found in the wild, so as to have sufficient positive cases to hope to find any pattern in them and to avoid our classifier defaulting to simply classifying everything as not-aliased. A rule of thumb is that few machine learning algorithms work well with binary data where one class has fewer than 10% of the cases.

There are reverse cases where recall rather than precision is to be valued, though since we are often interested in classifying unusual or valuable events, it is often the case that our positive class is outnumbered by the negative and precision is rightly valued more highly. In any case, you should think about the balance of classes in the population as well as in the training and testing data. Depending on this balance, you should consider selecting models on the basis of precision or recall, or a weighted combination of both, rather than merely accuracy. In such cases, or when you are unsure of the population balance, you may also wish to use some of the statistics towards the end of our list that evaluate models based on both recall and precision, such as balanced accuracy or informedness.
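The projection from the balanced test results onto wild base rates takes only a few lines; here is a sketch under the module's stated assumptions (1 billion pairs, 1 aliased pair per million), with variable names of our own choosing:

```r
# Sketch: projecting the per-class rates from the balanced test set
# onto wild base rates. Assumptions as in the text: 1e9 pairs,
# 1 aliased pair per million. Variable names are ours.
TPR <- 480 / 500            # recall on the balanced test set
TNR <- 490 / 500            # true negative rate on the balanced test set

n_pairs <- 1e9
n_pos   <- n_pairs / 1e6    # expected aliased pairs: 1000
n_neg   <- n_pairs - n_pos  # not-aliased pairs: 999,999,000

TP <- TPR * n_pos           # 960
FN <- (1 - TPR) * n_pos     # 40
TN <- TNR * n_neg           # 979,999,020
FP <- (1 - TNR) * n_neg     # 19,999,980

c(accuracy  = (TP + TN) / n_pairs,  # ~0.98: still looks excellent
  recall    = TP / (TP + FN),       # 0.96: unchanged
  precision = TP / (TP + FP))       # ~4.8e-05: collapses in the wild
```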
The caret package has a confusionMatrix function that will produce a confusion matrix and a number of statistics. To demonstrate this, let us generate some possible estimates of a pretend classifier (setting the seed for reproducibility):

```r
> set.seed(0)
> y = sample(c(T, F), 100, replace = T)
> m1 = y
> m1_err = sample(1:100, sample(15:30, 1))
> m1[m1_err] = !m1[m1_err]
```

Now we can type:

```r
> caret::confusionMatrix(m1, y, positive = "TRUE")
Confusion Matrix and Statistics

          Reference
Prediction FALSE TRUE
     FALSE    33    9
     TRUE     15   43

               Accuracy : 0.76
                 95% CI : (0.6643, 0.8398)
    No Information Rate : 0.52
    P-Value [Acc > NIR] : 6.939e-07

                  Kappa : 0.5169
 Mcnemar's Test P-Value : 0.3074

            Sensitivity : 0.8269
            Specificity : 0.6875
         Pos Pred Value : 0.7414
         Neg Pred Value : 0.7857
             Prevalence : 0.5200
         Detection Rate : 0.4300
   Detection Prevalence : 0.5800
      Balanced Accuracy : 0.7572

       'Positive' Class : TRUE
```

We will see how to interpret a number of the new statistics given here in later sections.

ROC

The receiver operating characteristic (ROC) provides a means of evaluating both models and parameters of models. ROC space is given by the true and false positive rates, with both axes ranging from 0 to 1 inclusive. A point in this space specifies a model's performance, with the vertical coordinate giving the proportion of true positives vs false negatives, and the horizontal coordinate giving the proportion of false positives vs true negatives. A perfect classifier would be mapped to X = 0 (0% false positives, so 100% true negatives) and Y = 1 (100% true positives, so 0% false negatives): the top-left corner of the ROC plot.

A model that performs no better than random guessing would fall on the diagonal from (0, 0) to (1, 1), conventionally drawn in red. Its location on this line would depend on the ratio of the classes in the data being classified, also termed the positive and negative base rates. Accordingly, a model's performance can be judged by its distance from the diagonal.

Strictly speaking, this assumes that all models will be better than random guessing (and fall in the top-left half of the ROC graph). However, were a model worse than random guessing (falling in the lower-right half of the ROC graph), we could turn it into a model better than random guessing simply by negating its predictions. The distance from the diagonal of this inverted model, located in the top-left half, would equal the distance from the diagonal of the uninverted model, located in the bottom-right half. In binary classification, then, a terrible model is as valuable as a wonderful one.

Evaluation is often done visually, but it is simple to specify the distance of a point to the diagonal, d_ROC, analytically:

d_ROC = √((TPR − FPR)² / 2)

This statistic has a very nice characteristic: it is invariant to changes in the base rates of the classes.
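As a closing illustration (ours, not the module's), d_ROC can be computed for the pretend classifier m1 generated above, directly from its confusion-matrix cells:

```r
# Sketch: d_ROC for the pretend classifier m1 from the caret example.
# Reuses y and m1 from above; the cell computations are ours.
TP <- sum(m1 & y);  FP <- sum(m1 & !y)
FN <- sum(!m1 & y); TN <- sum(!m1 & !y)

TPR <- TP / (TP + FN)      # sensitivity: 0.8269 in the caret output
FPR <- FP / (FP + TN)      # 1 - specificity: 1 - 0.6875 = 0.3125
sqrt((TPR - FPR)^2 / 2)    # distance from the chance diagonal: ~0.364
```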