
An Index of Oscar-Worthiness: Predicting the Academy Award for Best Picture∗

Andrew B. Bernard Tuck School of Business February 21, 2005

∗Profuse thanks are due to Conor Dougherty and the staff at the Weekend Journal of the Wall Street Journal for asking whether this might work and providing all the data. As of the writing of this paper I had not seen any of the nominated films for 2004. Unfortunately, all errors remain mine alone.

1 Introduction

The awarding of the Oscar for Best Picture probably has more to do with art than with science. However, once the five nominees have been selected, history and statistics provide a strong guide as to which film will take home the small golden statue. I use data from the last twenty years to identify which movies had the greatest chance at winning the Best Picture award and to make a prediction about the probability of winning for this year's nominees. According to my model, The Aviator is a heavy favorite to take home the 2004 Best Picture award.

In addition, I create a complete historical ranking for all the nominated films over the last twenty years, an index of so-called 'Oscar-Worthiness'. This index helps us identify several types of movies: a) 'Sure Things', films that would have won in almost any year; b) 'Lucky Dogs', movies that won by virtue of having weak competition; and c) 'Doomed Gems', movies that would have won in almost any other year but had the misfortune to compete against a 'Sure Thing'. Even by historical standards, The Aviator is a strong Oscar contender, with an Oscar-Worthiness Index among the top 15 nominated movies since 1984.

The paper is laid out as follows: Section 2 presents a list of movie characteristics and their association with winning the Best Picture award; Section 3 presents the empirical model used to predict the Best Picture award and gives the results of the prediction exercise for 2004; Section 4 introduces the Index of Oscar-Worthiness and presents 'Sure Things', 'Lucky Dogs', and 'Doomed Gems' over the last twenty years. Section 5 provides the requisite happy ending.

2 Potential Predictors

To begin, I identify a list of characteristics of each film nominated for an Oscar over the last twenty years.1 These potential explanatory variables include both performance measures, e.g. the number of Golden Globe awards, and attributes of the film, e.g. whether the lead rode a horse, etc. The complete list of variables is given below:

Performance Measures

— Number of Oscar Nominations
— Number of Golden Globe Wins
— Golden Globe for Best Picture

Movie Characteristics

— Based on Novel or Play
— Comedy2
— Set at Least 20 Years before Release Date
— Deals with Real Incident or Person
— Lead Character Meets Untimely Death
— Five Hanky Tearjerker
— Unlikely Love Story
— Lead Actor/Actress Comes From a Commonwealth Country
— Has a Sports Theme
— Lead Character Gets on a Horse
— Happy but Poor
— Lead is Disabled
— Lead Character is a Genius
— Film Includes War Plot Line
— Includes Action Outside of North America or Europe

1 Ideas for the variables were contributed by the WSJ staff.

The list of potential explanatory variables is not intended to be exhaustive and surely misses important elements in the Oscar selection process. Table 1 reports the differences for these characteristics across winners and losers for the last twenty years. Quite a few of the variables show statistical differences across the two groups. Perhaps not surprisingly, winners have more total Oscar nominations, 10.5, than losers, 6.7. In addition, Best Picture winners are more likely to have won Golden Globe awards in that same year: on average, winners take home 2.9 Golden Globes while losers end up with just

2 The designation of a movie as a comedy was based on its primary category on Netflix®. For example, Shakespeare in Love is classified as a romance.

Table 1: Differences Between Best Picture Winners and Losers, 1983-2003

                                                     Losers   Winners   Significant
Number of Oscar Nominations                            6.7      10.5         ✓
Number of Golden Globe Wins                            0.9       2.9         ✓
Golden Globe for Best Picture                          20%       85%         ✓
Based on Novel or Play                                 41%       45%
Set at Least 20 Years                                  55%       70%
Deals with Real Incident or Person                     30%       30%
Lead Meets Untimely Death                              33%       30%
Five Hanky Tearjerker                                  20%       10%
Unlikely Love Story                                    28%       35%
Lead Comes From A Commonwealth Country                 35%       60%         ✓
Has a Sports Theme                                      4%        5%
Lead Gets on a Horse                                   10%       30%         ✓
Happy but Poor                                         11%       15%
Lead is Disabled                                       10%       10%
Lead is a Genius                                        5%       20%         ✓
Includes War Plot Line                                 18%       20%
Includes Action Outside of North America or Europe     19%       35%
Comedy                                                 15%        0%         ✓

The first two columns give the mean (or percentage) of the variable for losers and winners. A ✓ in the third column indicates that the difference in the variable between winners and losers is statistically significant at the 5% level.

under one Golden Globe apiece. Similarly, movies that go on to win the Academy Award for Best Picture are substantially more likely to have won a Golden Globe for Best Picture.3

Turning to the characteristics of the movies themselves, I find fewer statistically significant variables. However, there are some surprises. Movies whose lead comes from a Commonwealth country, i.e. an English-speaking country such as England or New Zealand, are almost twice as likely to end up winning. Having the lead character get on a horse is also associated with Oscar success; 30 percent of the Best Picture winners saw the lead get on a horse while only 10 percent of the losers had mounted leads. It also appears to help if the leading character has above-average intelligence; 20 percent of winners had 'geniuses' for a lead character while only 5 percent

3 There are two Golden Globe awards for Best Picture, one for Drama and one for Musical or Comedy.

of the losers did.4 Finally, the ultimate predictor of failure among nominated pictures is the designation as a comedy. None of the ten nominated comedies has won the Best Picture Oscar over the last twenty years.5

3 A Model for Best Picture

Based on the results in Table 1, I assemble a short list of likely candidates for Best Picture predictors. I then proceed to assess the predictive power of combinations of the variables. The basic empirical technique employed in the paper is commonly referred to as a probit model. The simplest probit model attempts to estimate an unobserved variable, in this case the probability of winning the Best Picture award, by relating two observed phenomena: whether the picture won or lost and a characteristic of the movie.6

Pr(Winning) = f(Variable).

Results from these simple probits (the log-likelihoods) can be examined to determine which variables are most strongly correlated with winning over the last twenty years. Using these results, I then run a probit on larger groups of variables, i.e. a probit with several explanatory factors.
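To make the screening step concrete, a single-variable probit can be estimated by maximizing its likelihood directly. The sketch below uses synthetic data; the nomination counts, coefficients, and sample size are invented stand-ins, not the paper's actual dataset.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(beta, X, y):
    """Negative log-likelihood of the probit model Pr(y = 1) = Phi(X @ beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)  # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
n = 200
nominations = rng.integers(2, 13, size=n).astype(float)  # synthetic nomination counts
X = np.column_stack([np.ones(n), nominations])           # intercept + one predictor
won = (rng.random(n) < norm.cdf(X @ [-3.0, 0.4])).astype(int)  # synthetic outcome

res = minimize(neg_loglik, x0=np.zeros(2), args=(X, won), method="BFGS")
print("coefficients:", res.x)        # slope should come out positive here
print("log-likelihood:", -res.fun)   # compared across candidate variables
```

Running one such fit per candidate characteristic and comparing the resulting log-likelihoods is the variable-screening exercise described above.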

3.1 The Simplest Model That Works Well

From the earlier results, I narrow down the list of potential predictors to the group of seven significant characteristics, i.e. those marked with a ✓ in Table 1. As expected, the number of overall Oscar nominations does a reasonable job in predicting Oscar winners. By itself, total nominations correctly identifies a clear winner in 14 of the last twenty years.7 Similarly, the number of Golden Globes correctly predicts the winner in 13 of 20 years.8

4 However, these results do not suggest that any film featuring an English genius who rides a horse is a shoo-in for Best Picture.
5 Annie Hall won the Best Picture in 1977; however, that film is not designated as a comedy by Netflix®.
6 Formally, probits are a form of regression described in every introductory econometrics text.
7 In four more years, the eventual winner is tied for the most nominations.
8 The number of Golden Globes incorrectly identifies the winner 6 times and in one year there is a tie.

Table 2: The Effects of Nominations and Golden Globes on Winning, 1984-2003

                              Marginal Effect   Standard Error   Z-Stat
Number of Oscar Nominations        0.045             0.019        3.18
Number of Golden Globe Wins        0.102             0.040        3.73

Number of Correct Best Picture Predictions: 18 out of 20 (90%)

The first column gives the marginal increase in the probability of winning the Best Picture Oscar for an extra Oscar nomination or Golden Globe for the average movie. The second column reports the standard error and the third column gives the z-score. The probit was run on the sample of non-comedy nominated movies, as Comedy perfectly predicts losing in sample. Both variables are significant at the 1% level.
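The marginal effects reported in Table 2 are the standard probit quantity dPr/dx_j = phi(x'beta) * beta_j, evaluated at the average movie. A sketch of that computation follows; the coefficients and averages below are hypothetical, since the paper reports only the marginal effects themselves.

```python
import numpy as np
from scipy.stats import norm

beta = np.array([-2.5, 0.15, 0.35])  # hypothetical: intercept, nominations, Golden Globes
x_avg = np.array([1.0, 7.5, 1.3])    # hypothetical average nominee (1 = intercept term)

# dPr/dx_j = phi(x'beta) * beta_j for each non-intercept predictor
marginal = norm.pdf(x_avg @ beta) * beta[1:]
print(marginal)  # per-unit increase in win probability for each predictor
```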

However, using groups of variables I am able to substantially improve the predictive power of the simple model. Among these seven variables, one group of three far outperforms any other combination in predicting the Best Picture winner over the last twenty years. These three predictors are: Total Oscar Nominations, Golden Globes Won, and Comedy. No other variable on the list improves the explanatory power of the model once these three predictors are included. The resulting estimating equation is

Pr(Winning) = f(α·Nominations + β·Golden Globes + θ·Comedy + ε).

To estimate the marginal effects of each variable on the probability of winning, I run a probit on Nominations and Golden Globes excluding the sample of comedies. Since no comedy has won Best Picture in the last twenty years, the Comedy variable is perfectly correlated with losing. The results of the probit are summarized in Table 2.

For each movie, the model gives an overall probability of being a winner across the twenty years. To turn this into a prediction for each year, I examine the probability of winning for each of the five nominees in every year. The movie with the highest predicted probability is designated the predicted winner for that year. This simple framework, using only three variables, does an excellent job in separating winners from losers. Over the last twenty years, it accurately predicts eighteen Oscar winners, i.e. it is

correct 90 percent of the time.9 The model also gives the relative importance of each of the component predictors. For the average nominated film, an additional Oscar nomination increases the probability of winning by 4.5 percent, while each additional Golden Globe raises the probability of winning by 10.2 percent.

It should not be surprising that these two variables are the best predictors of Oscar success. Both of these variables capture some body of opinion on the quality of the movie, which is precisely the underlying characteristic that is being considered by the members of the Academy. The number of overall Oscar nominations is a good predictor because of the nominating process itself. Members of the Academy, regardless of their area - cinematography, editing, costume, etc. - may nominate films in their area of specialty as well as in the Best Picture category. Thus the total number of nominations captures the breadth of support for the film among the Academy members.

Table 3: The Probability of Winning in 2004

       Predicted   Oscar-Worthiness   Probability
Year   Winner      Index              This Year    Title
2004                  0.0                0.0%      Sideways
2004                  0.4                0.4%      Finding Neverland
2004                  1.3                1.4%      Ray
2004                 12.9               13.2%      Million Dollar Baby
2004      ✓          83.0               85.0%      The Aviator
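The per-year prediction rule is simply an argmax within each year's five nominees. A minimal sketch, using the 2004 fitted probabilities from Table 3 as illustrative inputs (in the paper these come from the estimated probit):

```python
from collections import defaultdict

fitted = [  # (year, title, fitted probability of winning)
    (2004, "Sideways", 0.000),
    (2004, "Finding Neverland", 0.004),
    (2004, "Ray", 0.014),
    (2004, "Million Dollar Baby", 0.132),
    (2004, "The Aviator", 0.850),
]

# Group nominees by year, then pick the film with the highest probability
by_year = defaultdict(list)
for year, title, p in fitted:
    by_year[year].append((p, title))

predicted = {year: max(films)[1] for year, films in by_year.items()}
print(predicted)  # -> {2004: 'The Aviator'}
```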

3.2 And the Oscar Goes to ...

Using the estimated model for 1984-2003 and the characteristics of the nominated movies for 2004, I estimate the probability of success for each film. The results for 2004 are given in column 4 of Table 3. The Aviator, with its eleven overall Oscar nominations and three Golden Globes, is a clear favorite with an 85.0 percent chance of winning. Million Dollar Baby is a distant second according to the model with only a 13.2 percent chance.

9 The model incorrectly predicts that Born on the Fourth of July should have narrowly beaten Driving Miss Daisy in 1989. The biggest 'surprise' is the success of The Silence of the Lambs in 1991, which the model has tied for last place in that year.

4 The Index of Oscar-Worthiness

One advantage of the prediction model is that I am able to go beyond just predicting the winner in any particular year and create an overall Index of Oscar-Worthiness. The model actually yields a measure of the probability that a movie would win a Best Picture Oscar, not just in the year of its nomination, but against all of the nominated films of the last twenty years.

4.1 The Greatest Movies of Our Time

For 1984-2003, Tables 4 and 5 report both overall Oscar-Worthiness in column 3 and the probability of winning in the year of nomination in column 4. The average for the Oscar-Worthiness Index is 20.0, i.e. the average probability of winning; however, there are big differences across movies. The median of the Oscar-Worthiness Index across all movies is 3.0, while 75 percent of the movies have an Index rating below 27. Titanic has the highest Oscar-Worthiness Index at 99.7, closely followed by Amadeus and The Lord of the Rings: The Return of the King, while, among the non-comedies, A Soldier's Story, Field of Dreams, and Awakenings score the lowest at 0.004.

The 'Oscar-Worthiness Index' for this year's movies is given in the third column of Table 3. Even relative to the movies of the last twenty years, The Aviator is a strong competitor with an Oscar-Worthiness Index of 83.0, ranking in the top 15 movies of the past two decades.

Different years vary substantially in the quality of the nominated films. Adding up the index for all the movies in a year, I find that the highest overall level of Oscar-Worthiness occurred in 1984. In that year, there were two highly rated movies, Amadeus and A Passage to India, each with an Oscar-Worthiness Index above 80. 1988 was a particularly lean year; in that year Rain Man led a relatively weak field with an Oscar-Worthiness Index of only 21.2. This year's five nominees are close to an average group: their combined Oscar-Worthiness Index is 97.6, just under the annual average of 100.

4.2 Sure Things, Doomed Gems, and Lucky Dogs

Using the Index of Oscar-Worthiness reported in Tables 4 and 5, I identify three types of movies in the Best Picture competition: 'Sure Things' or films that would have won in almost any year, 'Lucky Dogs', movies that won by

Table 4: The Oscar-Worthiness Index, 1984-1993

                Oscar-Worthiness   Probability
Year   Winner   Index              That Year    Title
1984              0.0                 0.0%      A Soldier's Story
1984              3.0                 1.6%      The Killing Fields
1984              3.0                 1.6%      Places in the Heart
1984             83.0                45.0%      A Passage to India
1984     ✓       95.6                51.8%      Amadeus

1985              0.0                 0.0%      Prizzi's Honor
1985              0.0                 0.0%      Kiss of the Spider Woman
1985              1.1                 0.9%      Witness
1985             29.1                25.7%      The Color Purple
1985     ✓       83.0                73.3%      Out of Africa

1986              0.0                 0.0%      Hannah and Her Sisters
1986              0.5                 0.8%      Children of a Lesser God
1986              6.1                 9.0%      A Room with a View
1986             12.9                19.1%      The Mission
1986     ✓       48.1                71.2%      Platoon

1987              0.0                 0.0%      Broadcast News
1987              0.0                 0.0%      Hope and Glory
1987              0.1                 0.2%      Moonstruck
1987              7.1                 7.7%      Fatal Attraction
1987     ✓       85.0                92.1%      The Last Emperor

1988              0.0                 0.0%      Working Girl
1988              0.0                 0.1%      The Accidental Tourist
1988              0.4                 1.9%      Mississippi Burning
1988              0.4                 1.9%      Dangerous Liaisons
1988     ✓       21.2                96.1%      Rain Man

1989              0.0                 0.0%      Field of Dreams
1989              0.0                 0.0%      Dead Poets Society
1989              0.0                 0.0%      My Left Foot
1989     ✓       61.2                44.6%      Driving Miss Daisy
1989             75.9                55.3%      Born on the Fourth of July

1990              0.0                 0.0%      Awakenings
1990              0.1                 0.2%      Goodfellas
1990              0.4                 0.5%      The Godfather Part III
1990              0.5                 0.6%      Ghost
1990     ✓       90.1                98.8%      Dances With Wolves

1991              3.0                 5.5%      The Prince of Tides
1991     ✓        3.0                 5.5%      The Silence of the Lambs
1991              6.1                11.1%      JFK
1991             18.8                34.5%      Bugsy
1991             23.7                43.5%      Beauty and the Beast

1992              0.0                 0.0%      A Few Good Men
1992              0.1                 0.3%      The Crying Game
1992              8.4                16.2%      Scent of a Woman
1992             11.2                21.6%      Howards End
1992     ✓       32.1                61.9%      Unforgiven

1993              0.4                 0.4%      In the Name of the Father
1993              1.1                 1.1%      The Remains of the Day
1993              3.0                 3.0%      The Fugitive
1993              6.1                 6.0%      The Piano
1993     ✓       90.1                89.5%      Schindler's List

Table 5: The Oscar-Worthiness Index, 1994-2003

                Oscar-Worthiness   Probability
Year   Winner   Index              That Year    Title
1994              0.0                 0.0%      Four Weddings and a Funeral
1994              0.0                 0.0%      Quiz Show
1994              0.4                 0.4%      The Shawshank Redemption
1994              3.0                 3.0%      Pulp Fiction
1994     ✓       94.7                96.5%      Forrest Gump

1995              0.0                 0.0%      The Postman (Il Postino)
1995              0.0                 0.0%      Babe
1995              2.5                 7.2%      Apollo 13
1995             12.9                37.7%      Sense and Sensibility
1995     ✓       18.8                55.1%      Braveheart

1996              0.4                 0.6%      Fargo
1996              0.5                 0.7%      Jerry Maguire
1996              0.5                 0.7%      Secrets & Lies
1996              3.0                 4.0%      Shine
1996     ✓       70.3                94.0%      The English Patient

1997              0.0                 0.0%      The Full Monty
1997              0.0                 0.0%      As Good as it Gets
1997             11.2                 9.2%      Good Will Hunting
1997             11.2                 9.2%      L.A. Confidential
1997     ✓       99.7                81.7%      Titanic

1998              0.4                 0.3%      The Thin Red Line
1998              0.4                 0.3%      Life Is Beautiful
1998              3.0                 1.9%      Elizabeth
1998             58.0                37.0%      Saving Private Ryan
1998     ✓       94.7                60.5%      Shakespeare in Love

1999              0.0                 0.0%      The Green Mile
1999              0.1                 0.3%      The Sixth Sense
1999              0.4                 0.9%      The Cider House Rules
1999              0.4                 0.9%      The Insider
1999     ✓       48.1                98.0%      American Beauty

2000              0.0                 0.0%      Chocolat
2000              0.5                 0.5%      Erin Brockovich
2000              3.6                 3.0%      Traffic
2000             44.7                37.5%      Crouching Tiger, Hidden Dragon
2000     ✓       70.3                59.0%      Gladiator

2001              0.5                 0.3%      In the Bedroom
2001              3.0                 1.9%      Gosford Park
2001             26.3                17.1%      The Lord of the Rings: The Fellowship of the Ring
2001             48.1                31.3%      Moulin Rouge
2001     ✓       75.9                49.4%      A Beautiful Mind

2002              0.1                 0.1%      The Lord of the Rings: The Two Towers
2002              0.4                 0.2%      The Pianist
2002             32.1                18.6%      The Hours
2002             44.7                26.0%      Gangs of New York
2002     ✓       94.7                55.0%      Chicago

2003              0.4                 0.4%      Seabiscuit
2003              5.1                 4.4%      Master and Commander: The Far Side of the World
2003              7.1                 6.1%      Mystic River
2003              8.4                 7.2%      Lost in Translation
2003     ✓       95.6                82.0%      The Lord of the Rings: The Return of the King

Table 6: Sure Things, Doomed Gems, and Lucky Dogs

The Sure Things
                       Oscar-Worthiness
Overall Rank   Year    Index              Title
 1             1997     99.7              Titanic
 2             1984     95.6              Amadeus
 3             2003     95.6              The Lord of the Rings: The Return of the King
 4             1994     94.7              Forrest Gump
 5             1998     94.7              Shakespeare in Love
 6             2002     94.7              Chicago
 7             1990     90.1              Dances With Wolves
 8             1993     90.1              Schindler's List
 9             1987     85.0              The Last Emperor
11             1985     83.0              Out of Africa
13             2001     75.9              A Beautiful Mind

The Doomed Gems
                       Oscar-Worthiness
Overall Rank   Year    Index              Title
10             1984     83.0              A Passage to India
12             1989     75.9              Born on the Fourth of July
17             1998     58.0              Saving Private Ryan

The Lucky Dogs
                       Oscar-Worthiness
Overall Rank   Year    Index              Title
28             1988     21.2              Rain Man
30             1995     18.8              Braveheart
47             1991      3.0              The Silence of the Lambs

Note: A 'Sure Thing' is a movie with a predicted probability of 75% or higher that won the Best Picture Oscar. A 'Doomed Gem' is a movie with a probability of winning greater than 50% that lost, i.e. a movie that would have won had it not faced a 'Sure Thing'. A 'Lucky Dog' is a Best Picture winner whose probability of winning was less than 25%.

virtue of having weak competition, and 'Doomed Gems', movies that would have won in almost any other year but had the misfortune to compete against a 'Sure Thing'.

More formally, I use the following criteria to group movies into these categories. 'Sure Things' win the Best Picture Oscar and have an Oscar-Worthiness Index of 75 or higher. 'Doomed Gems' have an Oscar-Worthiness Index above 50 but fail to win the Oscar because of the presence of a 'Sure Thing' or because of a surprise win. 'Lucky Dogs' have an Oscar-Worthiness Index of less than 25 but win the Best Picture nonetheless. Table 6 reports on all three categories over the last twenty years.

The ultimate 'Lucky Dog' is The Silence of the Lambs with an Oscar-Worthiness Index of only 3.0. The most Oscar-Worthy picture not to win is A Passage to India, which had the misfortune to be up against Amadeus, a veritable maestro of Oscar-Worthiness. Similarly, Saving Private Ryan, more Oscar-Worthy than 92% of all nominated movies over the last twenty years, was knocked off by Shakespeare in Love, a true 'Sure Thing'.

This year, we find that The Aviator ranks well on the Oscar-Worthiness Index and will make it onto the list as either a 'Sure Thing' or a 'Doomed Gem', depending on the voting. If any other movie wins in 2004, it would certainly have to qualify as a 'Lucky Dog'.
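The three criteria above translate directly into a classification rule. A minimal sketch, with the thresholds (75, 50, 25) taken from the definitions and the example index values taken from Table 6:

```python
def classify(index, won):
    """Label a nominee by its Oscar-Worthiness index (0-100) and outcome."""
    if won and index >= 75:
        return "Sure Thing"
    if won and index < 25:
        return "Lucky Dog"
    if not won and index > 50:
        return "Doomed Gem"
    return None  # an ordinary nominee

print(classify(99.7, won=True))    # Titanic -> 'Sure Thing'
print(classify(3.0, won=True))     # The Silence of the Lambs -> 'Lucky Dog'
print(classify(58.0, won=False))   # Saving Private Ryan -> 'Doomed Gem'
```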

5 The Happy Ending

In this paper, I create a simple model that does well in predicting past winners of the Academy Award for Best Picture. The key variables, the number of overall nominations, the number of Golden Globe awards, and a designator for Comedy, correctly rank movies in 18 of the last 20 years. Using these criteria, I give The Aviator an 85 percent probability of winning the 2004 Best Picture Oscar. I also create an Oscar-Worthiness Index which allows me to compare movies across years and identify 'Sure Things', 'Doomed Gems', and 'Lucky Dogs'.

The ultimate test for any model is how well it does in predicting events that have not yet occurred. This simple model will face its first test on Sunday, February 27, 2005. Remember: it is an honor just to be nominated.
