
The Introductory Statistics Course: A Ptolemaic Curriculum

George W. Cobb
Mount Holyoke College

1. INTRODUCTION

The founding of this journal recognizes the likelihood that our profession stands at the threshold of a fundamental reshaping of how we do what we do, how we think about what we do, and how we present what we do to students who want to learn about the science of data. For two thousand years, as far back as Archimedes, and continuing until just a few decades ago, the mathematical sciences have had but two strategies for implementing solutions to applied problems: brute force and analytical circumventions. Until recently, brute force has meant arithmetic by human calculators, making that approach so slow as to create a needle's eye that effectively blocked direct solution of most scientific problems of consequence. The greatest minds of generations past were forced to invent analytical end runs that relied on simplifying assumptions and, often, asymptotic approximations. Our generation is the first ever to have the computing power to rely on the most direct approach, leaving the hard work of implementation to obliging little chunks of silicon.

In recent years, almost all teachers of statistics who care about doing well by their students and doing well by their subject have recognized that computers are changing the teaching of our subject. Two major changes were recognized and articulated fifteen years ago by David Moore (1992): we can and should automate calculations; we can and should automate graphics. Automating calculation enables multiple re-analyses; we can use such multiple analyses to evaluate sensitivity to changes in model assumptions. Automating graphics enhances diagnostic assessment of models; we can use these diagnostics to guide and focus our re-analyses. Both flavors of automation are having major consequences for the practice of statistics, but I think there is a third, as yet largely unrecognized consequence: we can and should rethink the way we present the core ideas of inference to beginning students.

In what follows I argue that what we teach has always been shaped by what we can compute. On a surface level, this tyranny of the computable is reflected in the way introductory textbooks have moved away from their former cookbook emphasis on recipes for calculation using artificial data with simple numbers. Computers have freed us to put our emphasis on things that matter more. On a deeper level, I shall argue, the tyranny of the computable has shaped the development of the logic of inference, and forced us to teach a curriculum in which the most important ideas are often made to seem secondary. Just as computers have freed us to analyze real data sets, with more emphasis on interpretation and less on how to crunch numbers, computers have freed us to simplify our curriculum, so that we can put more emphasis on core ideas like randomized data production and the link between randomization and inference, and less emphasis on cogs in the mechanism, such as whether 30 is an adequate sample size for using the normal approximation.

To illustrate my thesis, I offer the following example from Ernst (2004).

Example: Figure 1 shows post-surgery recovery times in days, for seven patients who were randomly divided into a control group of three that received standard care, and a treatment group of four that received a new kind of care.

                          Times (days)       Mean    SD
  Control (standard):     22, 33, 40         31.67   9.07
  Treatment (new):        19, 22, 25, 26     23.00   3.16

[Figure 1: Results of a randomized controlled experiment. Post-operative times (days) plotted by group (control vs. treatment), with the average difference of 8.67 days marked.]

How would your best students, and mine, analyze this data set? A two-sample t-test? Assuming unequal SDs, and so using the Welch-adjusted t-statistic? Let's examine that answer.
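For concreteness, that default analysis amounts to something like the short sketch below. The language (Python) and the use of scipy's unequal-variance option are my choices for illustration only; nothing in the example depends on them.

    # A sketch of the "default" analysis: a Welch (unequal-variance)
    # two-sample t test on the recovery times from Figure 1.
    from scipy import stats

    control = [22, 33, 40]        # standard care
    treatment = [19, 22, 25, 26]  # new care

    # equal_var=False requests the Welch adjustment to the degrees of freedom.
    t_stat, p_value = stats.ttest_ind(control, treatment, equal_var=False)
    print(f"Welch t = {t_stat:.2f}, two-sided p = {p_value:.3f}")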
Every statistical method is derived from a model, and one of our goals in teaching statistical thinking should be to encourage our students to think explicitly about the fit between model and reality. Here, that fit is not good.

Model: Data constitute two independent simple random samples from normal populations.

Reality: Data constitute a single convenience sample, randomly divided into two groups.

How many of us devote much time in our class to the difference between model and reality here? One of the most important things our students should take away from an introductory course is the habit of always asking, "Where was the randomization, and what inferences does it support?" How many of us ask our students, with each new data set, to distinguish between random sampling and random assignment, and between the corresponding differences in the kinds of inferences supported: from sample to population if samples have been randomized, and from association to causation if assignment to treatment or control has been randomized, as illustrated in Figure 2 (Ramsey and Schafer, 2002, p. 9)?

[Figure 2: A chart from Ramsey and Schafer (2002).]

An alternative to the t-test for this data set is the permutation test: Start with the null hypothesis that the treatment has no effect. Then the seven numerical values have nothing to do with treatment or control. The only thing that determines which values get assigned to the treatment group is the randomization. The beauty of the randomization is that we can repeat it, over and over again, to see what sort of results are typical, and what should be considered unusual. This allows us to ask, "Is the observed mean difference of 8.67 too extreme to have occurred just by chance?"

To answer this question, put each of the seven observed values on a card. Shuffle the seven cards, and deal out three at random to represent a simulated control group, leaving the other four as the simulated treatment group. Compute the difference of the means, and record whether it is at least 8.67. Reshuffle, redeal, and recompute. Do this over and over, and find the proportion of the times that you get a difference of 8.67 or more. This turns out to be 8.6% of the time: unusual, but not quite unusual enough to call significant.

Notice that because the set of observed values is taken as given, there is no need for any assumption about the distribution that generated them. Normality doesn't matter. We don't need to worry about whether SDs are equal either; in fact, we don't need SDs at all. Nor do we need a t-distribution, with or without artificial degrees of freedom. The model simply specifies that the observed values were randomly divided into two groups. Thus there is a very close match between the model and what actually happened to produce the data. It is easy for students to follow the sequence of links, from data production, to model, to inference.
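The card-shuffling procedure translates directly into a few lines of code. The sketch below (again in Python; the language and the choice of 10,000 shuffles are mine, not part of the example) repeats the randomization and reports the proportion of shuffles whose difference of means is at least as large as the observed 8.67; an exact version could instead enumerate all 35 possible ways of choosing the three-patient control group.

    # A sketch of the card-shuffling simulation described above: randomly
    # re-divide the seven observed recovery times into groups of 3 and 4,
    # and see how often the difference of means is at least the observed value.
    import random

    control = [22, 33, 40]        # standard care
    treatment = [19, 22, 25, 26]  # new care
    observed_diff = sum(control) / 3 - sum(treatment) / 4   # about 8.67

    cards = control + treatment   # the seven values, written on "cards"
    n_shuffles = 10_000           # an arbitrary number of repetitions
    at_least_as_extreme = 0

    for _ in range(n_shuffles):
        random.shuffle(cards)                                # shuffle the deck
        sim_control, sim_treatment = cards[:3], cards[3:]    # deal 3 and 4
        diff = sum(sim_control) / 3 - sum(sim_treatment) / 4
        if diff >= observed_diff:
            at_least_as_extreme += 1

    print(f"observed difference of means: {observed_diff:.2f}")
    print(f"proportion at least that large: {at_least_as_extreme / n_shuffles:.3f}")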
Question: Why, then, is the t-test the centerpiece of the introductory statistics curriculum?
Answer: The t-test is what scientists and social scientists use most often.

Question: Why does everyone use the t-test?
Answer: Because it's the centerpiece of the introductory statistics curriculum.

So why does our curriculum teach students to do a t-test? What are the costs of doing things this way? What could we be doing instead?

My thesis is that both the content and the structure of our introductory curriculum are shaped by old history. What we teach was developed a little at a time, for reasons that had a lot to do with the need to use available theory to handle problems that were essentially computational. Almost one hundred years after Student published his 1908 paper on the t-test, we are still using 19th century analytical methods to solve what is essentially a technical problem: computing a p-value or a 95% margin of error. Intellectually, we are asking our students to do the equivalent of working with one of those old 30-pound Burroughs electric calculators with the rows of little wheels that clicked and spun as they churned out sums of squares. Now that we have computers, I think a large chunk of what is in the introductory statistics course should go.

In what follows, I'll first review the content of a typical introductory course, offering Ptolemy's geocentric view of the universe as a parallel. Ptolemy's cosmology was needlessly complicated, because he put the earth at the center of his system, instead of putting the sun at the center. Our curriculum is needlessly complicated because we put the normal distribution, as an approximate sampling distribution for the mean, at the center of our curriculum, instead of putting the core logic of inference at the center. After reviewing our consensus curriculum, I'll give three reasons why this curriculum hurts our students and our profession: it's confusing, it takes up time that could be better spent on other things, and it even carries a whiff of the fraudulent. If it's all that bad, you may ask, how did we ever paint ourselves into the corner of teaching it? Until recently, I suggest, we had little choice, because our options were limited by what we could compute. I'll offer a handful of historically based speculations regarding the tyranny of the computable. After that, I'll sketch an alternative approach to the beginning curriculum, and conclude with an unabashed sales pitch. Here, then, is the plan:

1. What we teach: our Ptolemaic curriculum
2. What is wrong with what we teach: three reasons
3. Why we teach it anyway: the tyranny of the computable
4.