March Madness Model Meta-Analysis

What determines success in a March Madness model?

Ross Steinberg
Zane Latif

Senior Paper in Economics
Professor Gary Smith
Spring 2018

Abstract

We conducted a meta-analysis of various systems that predict the outcome of games in the NCAA March Madness tournament. Specifically, we examined the past four years of data from the NCAA seeding system, Nate Silver's FiveThirtyEight March Madness predictions, Jeff Sagarin's College Basketball Ratings, Sonny Moore's Computer Power Ratings, and Ken Pomeroy's College Basketball Ratings. Each system assigns a strength rating of some variety to each team (seeding in the case of the NCAA and an overall team strength rating for the other four systems). Thus, every system should, theoretically, be able to predict the result of each individual game with some accuracy. This paper examines which ranking system best predicted the winner of each game based on the respective strength ratings of the two teams facing each other. We then used our findings to build a model that predicted this year's March Madness tournament.

Introduction

March Madness is an annual tournament put on by the National Collegiate Athletic Association (NCAA) to determine the best Division I college basketball team in the United States. There is one March Madness tournament for men and one for women. We focused on the men's tournament given the greater amount of analyses and model data available for it. The tournament consists of 68 teams that face each other in single-elimination games until only one team remains. Of these teams, 32 receive automatic bids based on their regular-season performance, and an NCAA selection committee chooses the 36 remaining teams. The committee seeds the teams from 1 to 68 based on perceived strength, and eight teams play each other in what are known as "play-in" games before the main tournament.
The four winners of the play-in games join the 60 other teams in the main March Madness tournament. Based on geographic region, these 64 teams are divided into four groups of 16: Midwest, West, East, and South. Within each region, the schedule is arranged so that the highest-seeded teams play the lowest-seeded teams (e.g., seed 1 vs. seed 16 and seed 8 vs. seed 9). Interregional play begins once only four teams remain, a stage known as "The Final Four."

To measure which system best predicts the results of March Madness, we first collected pre-tournament ranking data for each of the five ranking systems, along with the round-by-round results of the past four tournaments. We collected these data for the 68 tournament teams in each of the past four years. We then noted how many games each model correctly predicted in each year and aggregated each model's correct predictions across all four years. Next, we ran a regression on the specific metrics included in the models to see which ones had the most predictive value in determining game outcomes. Using the results of this regression, we created our own models, which we tested on this year's March Madness tournament.

More than just a way to determine the best Division I college team, March Madness has become an important cultural touchstone. Each year, billions of dollars are bet on games throughout the tournament, and millions of Americans fill out March Madness brackets. "Bracketology" has entered the common lexicon, and President Barack Obama filled out brackets in multiple years during his tenure. A March Madness win can boost a college's application numbers and help high school athletes decide which college to attend. In sum, there are many reasons people find it useful to have access to accurate predictions of March Madness tournament games.
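The within-region pairing rule described above (seed s plays seed 17 − s in the first round) can be sketched as follows. This is an illustrative snippet, not part of the paper's original analysis; the function name is our own.

```python
def first_round_matchups(num_seeds=16):
    """Generate first-round pairings for one 16-team region.

    Within a region, seed s is scheduled against seed (num_seeds + 1 - s),
    so seed 1 plays seed 16, seed 2 plays seed 15, and so on.
    """
    return [(s, num_seeds + 1 - s)
            for s in range(1, num_seeds + 1)
            if s < num_seeds + 1 - s]

matchups = first_round_matchups()
# yields (1, 16), (2, 15), ..., (8, 9) -- eight games per region
```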
This is why we analyzed the predictive value of both the overall models and the individual metrics that go into them.

Review of Literature

The literature review examines two papers and how they relate to our meta-analysis. The first (McCrea 2009) retroactively looked at how individuals predicted games in the NCAA March Madness tournament; it focused on the factors people considered when picking upsets and then evaluated the resulting trends. The second (Toutkoushian 2011) discussed various prediction strategies and gave background on them, which provided context for the models we evaluated. That paper names common variables that many prediction systems use; however, it is backward-looking and focused solely on the creation of a model. Our paper in many ways serves as an update of that work, which was published in 2011, especially because it did not reference some of the more popular prediction systems prevalent today.

The first paper looked at the ways in which individuals predict March Madness games. It found that people tend to predict upsets at a strategically suboptimal rate: people predicted roughly as many upsets as had occurred in past tournaments. However, because individuals were unable to match their upset predictions to the situations where upsets actually occurred, they predicted fewer games correctly than they would have by picking the favored team more frequently (McCrea 2009, p. 27). Further, the study found that even sports bettors, who have adequate remunerative incentives to predict games correctly, were unable to beat bookmakers, and that at an individual level expert predictions were no better than non-expert predictions (McCrea 2009, p. 27). Despite the amount of luck involved in the tournament, this suggests that there is much progress to be made in predicting March Madness more accurately.
The second paper shared a goal with ours: develop a model that accurately predicts March Madness tournament game results. It used seed, data from the regular season (such as win percentage), and the average years in school of each team's players to create its model. It also examined historical team data but found that they were not statistically significant in predicting wins and losses (Toutkoushian 2011, pp. 29-32). The paper found that its model was more retrodictively accurate than predictively accurate in determining game outcomes when applied to the 2011 tournament (Toutkoushian 2011, pp. 29-32). Our paper draws upon this work by also focusing on data from the current regular season for our primary model, and it expands upon it by creating ancillary models that incorporate other predictive systems as well.

Meta-Analysis

As mentioned in the introduction, we analyzed five different NCAA basketball ranking systems over the past four years to determine which has the most predictive power. Over this period, some of these systems have changed the way they assign ratings or predict the winners of games; however, they have largely remained the same, and the changes are minimal. In this meta-analysis, we provide a year-by-year breakdown of each model and offer potential reasons why certain models may have done better or worse. We also provide brief overviews of each of the years we analyzed in order to contextualize the results; in other words, 40 wins one year may not be equivalent to 40 wins the next. Many of these models use similar factors as inputs and are often linear combinations of one another. Note that in every year there are 63 games played, not including play-in games. Thus, a prediction system could theoretically predict all 63 games correctly, but there is no record of anyone ever creating a perfect bracket.
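The scoring rule used throughout this meta-analysis is simple: a system is credited with a correct prediction whenever the team it rates more highly wins the game. A minimal sketch of that tally, with made-up team names and ratings purely for illustration (for NCAA seeding the comparison would flip, since a lower seed number indicates a stronger team):

```python
def correct_predictions(ratings, games):
    """Count games where the higher-rated team won.

    ratings: {team_name: strength_rating} for one prediction system.
    games:   list of (team_a, team_b, actual_winner) tuples.
    """
    correct = 0
    for team_a, team_b, winner in games:
        # The system's implicit pick is the team with the higher rating.
        predicted = team_a if ratings[team_a] >= ratings[team_b] else team_b
        if predicted == winner:
            correct += 1
    return correct

# Toy example (hypothetical ratings and results):
ratings = {"Villanova": 95.1, "Kansas": 93.4, "Loyola": 84.0}
games = [("Villanova", "Kansas", "Villanova"),   # favorite wins: counted
         ("Kansas", "Loyola", "Loyola")]         # upset: not counted
correct_predictions(ratings, games)  # -> 1
```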
This section gives a brief overview of each prediction system, how it has changed (if at all), and provides year-by-year and total breakdowns.[1]

[1] The descriptions of the ranking systems and our methods for analyzing them are in the Appendix.

NCAA Tournament Overview

2014 March Madness Tournament

In the 2014 March Madness Tournament, the higher seed won 40 of the 63 possible games. This represents the second-most upsets in the four-year span we analyzed. In keeping with the upsets, every computer-based system had its worst year in 2014.

2015 March Madness Tournament

In the 2015 March Madness Tournament, the higher-seeded team won 50 of the 63 games. This tournament had the fewest upsets of the four years, and every prediction system we analyzed had its best year in 2015.

2016 March Madness Tournament

The 2016 March Madness Tournament had the most upsets, but only by a margin of one game. This was the worst year for NCAA seeding, which finished seven games below the next-worst prediction system. In fact, most of the gap between the NCAA and the other prediction systems over the four-year span is a result of this year.

2017 March Madness Tournament

In the 2017 March Madness Tournament, the higher seed won 46 of a possible 63 games. This is the second-highest total of the four-year span for NCAA seeding.

Meta-Analysis Results and Conclusions

In this section, we compare the ranking systems on a year-by-year and aggregate basis. The numbers under each prediction system are the number of games predicted correctly in the given year.

YoY Results

Below is a table displaying the year-by-year results:

Year    NCAA Seeding   Sonny Moore   FiveThirtyEight   KenPom   Jeff Sagarin
2014          40            44             42             39          46
2015          50            51             49.5           52          50
2016          39            47             46.5           50          48
2017          46            51             45             45          48
Total        175           193            183            186         192

A strong performance in the 2015 and 2017 March Madness Tournaments greatly assisted Sonny Moore's total.
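The column totals above can be reproduced directly from the year-by-year figures. The following snippet encodes the table as data and recomputes each system's four-year aggregate (the half-wins in FiveThirtyEight's column reflect years in which its published ratings left a game as an effective toss-up):

```python
# Year-by-year correct predictions per system, transcribed from the table.
wins = {
    "NCAA Seeding":    {2014: 40, 2015: 50,   2016: 39,   2017: 46},
    "Sonny Moore":     {2014: 44, 2015: 51,   2016: 47,   2017: 51},
    "FiveThirtyEight": {2014: 42, 2015: 49.5, 2016: 46.5, 2017: 45},
    "KenPom":          {2014: 39, 2015: 52,   2016: 50,   2017: 45},
    "Jeff Sagarin":    {2014: 46, 2015: 50,   2016: 48,   2017: 48},
}

# Aggregate each system's wins across all four tournaments.
totals = {system: sum(by_year.values()) for system, by_year in wins.items()}
# totals["Sonny Moore"] -> 193, the highest of the five systems
```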