STAT 6200 | Introduction to Biostatistics Lecture Notes


Introduction: Statistics and Biostatistics

The field of statistics: the study and use of theory and methods for the analysis of data arising from random processes or phenomena; the study of how we make sense of data.

- The field of statistics provides some of the most fundamental tools and techniques of the scientific method: forming hypotheses, designing experiments and observational studies, gathering data, summarizing data, and drawing inferences from data (e.g., testing hypotheses).
- A "statistic" (as opposed to the field of statistics) also refers to a numerical quantity computed from sample data, e.g., the mean, the median, the maximum.

Roughly speaking, the field of statistics can be divided into:

- Mathematical statistics: the study and development of statistical theory and methods in the abstract; and
- Applied statistics: the application of statistical methods to solve real problems involving randomly generated data, and the development of new statistical methodology motivated by real problems.

(Read Ch. ... of our text.)

Biostatistics is the branch of applied statistics directed toward applications in the health sciences and biology.

- Biostatistics is sometimes distinguished from the field of biometry based upon whether applications are in the health sciences (biostatistics) or in broader biology (biometry; e.g., agriculture, ecology, wildlife biology).
- Other branches of applied statistics: psychometrics, econometrics, chemometrics, astrostatistics, environmetrics, etc.

Why biostatistics? What's the difference?

- Some statistical methods are more heavily used in health applications than elsewhere (e.g., survival analysis, longitudinal data analysis).
- Examples are drawn from the health sciences, which makes the subject more appealing to those interested in health and illustrates how to apply the methodology to similar problems encountered in real life.

We will emphasize the methods of data analysis, but some basic theory will also be necessary to enhance understanding of the methods and to allow further coursework. Mathematical notation and techniques are necessary (no apologies). We will study what to do and how to do it, but also very important is why the methods are appropriate and what concepts justify them. The latter (the "why") will get you further than the former (the "what").

Data and Data Types

Data are observations of random variables made on the elements of a population or sample.

- Data are the quantities (numbers) or qualities (attributes) measured or observed that are to be collected and/or analyzed.
- The word "data" is plural; "datum" is singular. A collection of data is often called a data set (singular).

Example: Low Birth Weight Infant Data. Appendix B of our text contains a data set called lowbwt, containing measurements and observed attributes of low birth weight infants born in two teaching hospitals in Boston, MA. (The data are reproduced at the top of the following page.) The variables measured are:

- sbp: systolic blood pressure
- sex: gender (male/female)
- tox: maternal diagnosis of toxemia (yes/no)
- grmhem: whether the infant had a germinal matrix hemorrhage (yes/no)
- gestage: gestational age (weeks)
- apgar: Apgar score (a measure of oxygen deprivation, taken minutes after birth)

(Read Ch. ... of our text.)

There are several variables here (sbp, sex, etc.) measured on the units/elements/subjects (the infants) of a random sample. An observation can refer to the value of a single variable for a particular subject, but more commonly it refers to the observed values of all variables measured on a particular subject.
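Summary statistics of the kind defined above are one-line computations in most software. The following is a minimal Python/pandas sketch; the file name is hypothetical, and it assumes the Appendix B data have been saved as a CSV file with the variable names listed above.

    import pandas as pd

    # Hypothetical file; assumes the lowbwt data were exported as CSV
    # with columns sbp, sex, tox, grmhem, gestage, apgar.
    lowbwt = pd.read_csv("lowbwt.csv")

    # Each row is one observation (one infant); each column is one variable.
    print(lowbwt.shape)

    # A "statistic" is a numerical quantity computed from sample data, e.g.:
    print(lowbwt["sbp"].mean())     # sample mean systolic blood pressure
    print(lowbwt["sbp"].median())   # sample median
    print(lowbwt["gestage"].max())  # sample maximum gestational age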
There is one observation for each infant in the sample.

Types of Variables

Variable types can be distinguished based on their scale. Typically, different statistical methods are appropriate for variables of different scales.

Scale      Characteristic question                 Examples
Nominal    Is A different than B?                  Marital status, eye color, gender, religious affiliation, race
Ordinal    Is A bigger than B?                     Stage of disease, severity of pain, level of satisfaction
Interval   By how many units do A and B differ?    Temperature, SAT score
Ratio      How many times bigger than B is A?      Distance, length, time until death, weight

Operations that make sense for variables of the different scales:

Scale      Counting   Ranking   Addition/Subtraction   Multiplication/Division
Nominal    ✓
Ordinal    ✓          ✓
Interval   ✓          ✓         ✓
Ratio      ✓          ✓         ✓                      ✓

Often the distinction between the interval and ratio scales can be ignored in statistical analyses; the distinction between these two types and the ordinal and nominal types is more important.

Another way to distinguish between types of variables is as quantitative or qualitative.

- Qualitative variables have values that are intrinsically nonnumeric (categorical). E.g., cause of death, nationality, race, gender, severity of pain (mild, moderate, severe). Qualitative variables generally have either nominal or ordinal scales. They can be reassigned numeric values (e.g., numeric codes for male/female), but they are still intrinsically qualitative.
- Quantitative variables have values that are intrinsically numeric. E.g., survival time, systolic blood pressure, number of children in a family, height, age, body mass index.

Quantitative variables can be further subdivided into discrete and continuous variables.

- Discrete variables have a set of possible values that is either finite or countably infinite. E.g., number of pregnancies, shoe size, number of missing teeth. For a discrete variable there are gaps between its possible values. Discrete variables often take integer (whole-number) values (e.g., counts), but some can take noninteger values.
- A continuous variable has a set of possible values that includes all values in an interval of the real line. E.g., duration of a seizure, body mass index, height. There are no gaps between possible values.

The distinction between discrete and continuous quantitative variables is typically clear theoretically, but can be fuzzy in practice.

- In practice, the continuity of a variable is limited by the precision of the measurement. E.g., height is measured to the nearest centimeter, or perhaps millimeter, so in practice heights measured in millimeters take only integer values. Another example: survival time is measured to the nearest day, but could theoretically be measured to any level of precision.
- On the other hand, the total annual attendance at UGA football games is a discrete, inherently integer-valued variable, but in practice it can be treated as continuous.
- In practice, all variables are discrete, but we treat some as continuous based upon whether their distribution can be well approximated by a continuous distribution.
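These scale distinctions can be made concrete in software. Below is a minimal Python/pandas sketch; all column names and values are made up for illustration. It shows one way to represent nominal, ordinal, interval, and ratio variables, and which operations from the table above make sense for each.

    import pandas as pd

    # Toy data frame; all names and values are hypothetical.
    df = pd.DataFrame({
        "eye_color": ["brown", "blue", "brown", "green"],     # qualitative, nominal
        "pain":      ["mild", "severe", "moderate", "mild"],  # qualitative, ordinal
        "sat":       [1200, 1350, 1100, 1280],                # quantitative, interval
        "weight_kg": [70.2, 55.0, 88.1, 63.4],                # quantitative, ratio, continuous
        "n_children": [2, 0, 3, 1],                           # quantitative, ratio, discrete
    })

    # Nominal scale: counting is the only meaningful operation.
    print(df["eye_color"].value_counts())

    # Ordinal scale: declare the ordering so ranking comparisons make sense.
    pain_scale = pd.CategoricalDtype(categories=["mild", "moderate", "severe"],
                                     ordered=True)
    df["pain"] = df["pain"].astype(pain_scale)
    print(df["pain"].max())  # "severe": uses the declared order, not alphabetical

    # Interval scale: differences are meaningful; ratios are not.
    print(df["sat"].max() - df["sat"].min())

    # Ratio scale: quotients are meaningful too (there is a true zero).
    print(df["weight_kg"].max() / df["weight_kg"].min())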
Data Sources

Data arise from experimental or observational studies, and it is important to distinguish the two.

In an experiment, the researcher deliberately imposes a treatment on one or more subjects or experimental units (not necessarily human). The experimenter then measures or observes the subjects' response to the treatment. The crucial element is that there is an intervention.

Example: To assess whether or not saccharine is carcinogenic, a researcher feeds mice daily doses of saccharine. After some months, a number of the mice have developed tumors. By definition this is an experiment, but not a very good one: we don't know whether the proportion of mice with tumors is high, because there is no control group against which a comparison can be made.

- Solution: Select more mice and treat them exactly the same, but give them daily doses of an inert substance (a placebo). Suppose that in the control group only one mouse develops a tumor. Is this evidence of a carcinogenic effect?
- Maybe, but there is still a problem. What if the mice in the two groups differ systematically (e.g., group 1 from genetic strain 1, group 2 from genetic strain 2)? Then we don't know whether saccharine is carcinogenic or whether one genetic strain is simply more susceptible to tumors. We say that the effects of genetic strain and saccharine are confounded (mixed up).
- Solution: Starting with relatively homogeneous (similar) mice, randomly assign some to the saccharine treatment and the rest to the control treatment.

Randomization is an extremely important aspect of experimental design; a short code sketch of such an assignment appears at the end of these notes. In the saccharine example we should start out with homogeneous mice, but of course they will differ somewhat. Randomization ensures that the two experimental groups will be probabilistically alike with respect to all nuisance variables (potential confounders). E.g., the distribution of body weights should be about the same in the two groups.

Another important concept, especially in human experimentation, is blinding. An experiment is blind if the subjects do not know which treatment they receive. E.g., suppose we randomize some migraine sufferers to an active drug and the remainder to a placebo (control) treatment. The experiment is blind if the pills in the two treatment groups look and taste identical and the subjects are not told which treatment they receive. This guards against the placebo effect.

An experiment is double-blind if the researcher who administers the treatments and measures the response also does not know which treatment is assigned. This guards against experimenter effects: the experimenter may behave differently toward the subjects in the two groups, or measure the response differently in the two groups.

Experiments are to be contrasted with observational studies, in which there is no intervention and data are collected on an existing system. Observational studies are less expensive, easier logistically, and more often ethically practical, since interventions are often not possible.

Experiments have many advantages and are strongly preferred when possible. However, experiments are rarely feasible in public health and epidemiology; in the health sciences and medicine, experiments involving humans raise additional ethical and practical constraints.
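As a concrete illustration of randomized assignment, here is a minimal Python sketch. The group sizes are assumptions (the counts in the original example were not preserved); the point is only that chance, not the experimenter, decides which unit gets which treatment.

    import random

    # Assumed design: 200 relatively homogeneous mice, split evenly at random.
    mice = list(range(200))
    random.seed(6200)        # fixed seed so the assignment is reproducible
    random.shuffle(mice)

    saccharine_group = mice[:100]   # daily doses of saccharine
    control_group = mice[100:]      # daily doses of an inert substance (placebo)

    # Because assignment is random, nuisance variables (body weight, genetic
    # strain, ...) are distributed about equally across the two groups, so
    # they are not confounded with the saccharine effect.
    print(len(saccharine_group), len(control_group))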