
1) New Paths to New Science
2) How an Unruly Mob Almost Stole the Grand Prize at the Last Moment

Jeff Howbert
University of Washington
February 4, 2014

Viewing Recommendations

Recommender Systems

DOMAIN: some field of activity where users buy, view, consume, or otherwise experience items

PROCESS:
1. Users provide ratings on items they have experienced.
2. Take all <user, item, rating> data and build a predictive model.
3. For a user who hasn't experienced a particular item, use the model to predict how well they will like it (i.e., predict the rating).

Roles of Recommender Systems
- Help users deal with the paradox of choice
- Allow online sites to:
  - Increase likelihood of sales
  - Retain customers by providing a positive search experience
- Considered essential in the operation of:
  - Online retailing, e.g. Amazon, Netflix, etc.
  - Social networking sites

Amazon.com Product Recommendations

Social Network Recommendations
- Recommendations on essentially every category of interest known to mankind:
  - Friends
  - Groups
  - Activities
  - Media (TV shows, movies, music, books)
  - News stories
  - Ad placements
- All based on connections in the underlying social network graph, and the expressed 'likes' and 'dislikes' of yourself and your connections

Types of Recommender Systems

Base predictions on either:
- Content-based approach: explicit characteristics of users and items
- Collaborative filtering approach: implicit characteristics based on similarity of users' preferences to those of other users

The Netflix Prize Contest
- GOAL: use training data to build a predictive model which, when applied to the qualifying data, improves the error rate by 10% relative to Netflix's existing system
- PRIZE: first team to 10% wins $1,000,000
- Annual Progress Prizes of $50,000 also possible
- CONDITIONS:
  - Open to the public
  - Compete as an individual or a group
  - Submit predictions no more than once a day
  - Prize winners must publish results and license code to Netflix (non-exclusive)
- SCHEDULE:
  - Started Oct. 2, 2006
  - To end after 5 years
- PARTICIPATION:
  - 51,051 contestants on 41,305 teams from 186 different countries
  - 44,014 valid submissions from 5,169 different teams

The Netflix Prize Data
- Netflix released three datasets
- 480,189 users (anonymous); 17,770 movies; ratings on an integer scale of 1 to 5
- Training set: 99,072,112 <user, movie> pairs with ratings
- Probe set: 1,408,395 <user, movie> pairs with ratings
- Qualifying set: 2,817,131 <user, movie> pairs with no ratings

Model Building and Submission Process

[Diagram: model building and submission workflow]
- Training set (99,072,112 known ratings): build the model
- Probe set (1,408,395 known ratings): tune and validate the model
- Qualifying set (ratings unknown): submit predictions; secretly split into
  - quiz set (1,408,342 pairs) – RMSE made public on the leaderboard
  - test set (1,408,789 pairs) – RMSE kept secret, used for final scoring
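Submissions were scored by root-mean-square error (RMSE) between predicted and true ratings. A minimal sketch of the metric and the 10%-improvement criterion; the ratings and the baseline value below are made-up stand-ins, not Netflix's actual Cinematch numbers:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error, the contest's scoring metric."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

actual    = [4, 3, 5, 2, 1]            # true ratings (toy data)
predicted = [3.8, 3.4, 4.1, 2.9, 1.3]  # model predictions (toy data)
score = rmse(predicted, actual)

# The prize criterion: improve RMSE by 10% relative to a baseline system.
baseline = 0.95  # hypothetical baseline RMSE
improvement = (baseline - score) / baseline
print(score, improvement)
```

With these toy numbers the relative improvement works out to about 35%; in the contest, grinding out the last fraction of a percent toward 10% took years.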

Why the Netflix Prize Was Hard

[Figure: the 480,189-user x 17,770-movie ratings matrix, mostly empty]
- Massive dataset
- Very sparse – matrix only 1.2% occupied
- Extreme variation in number of ratings per user
- Statistical properties of qualifying and probe sets different from training set

Dealing with Size of the Data
- MEMORY:
  - 2 GB bare minimum for common algorithms
  - 4+ GB required for some algorithms
  - Need a 64-bit machine with 4+ GB RAM if serious
- SPEED:
  - Program in languages that compile to fast machine code
  - 64-bit processor
  - Exploit low-level parallelism in code (SIMD on Intel x86/x64)

Common Types of Algorithms
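To make the memory constraints concrete: storing ~100 million ratings as compact typed triplets with a per-user index (CSR-style) is one common layout that fits the whole dataset in a few hundred megabytes. A toy sketch, with invented IDs and ratings and NumPy assumed available:

```python
import numpy as np

# (user_id, movie_id, rating) triplets -- toy stand-in for the real data
users  = np.array([0, 0, 1, 2, 2], dtype=np.int32)
movies = np.array([0, 2, 1, 0, 2], dtype=np.int32)
r      = np.array([5, 3, 4, 2, 1], dtype=np.int8)  # 1 byte per rating

# Sort by user, then build a CSR-style index:
# row_ptr[u] .. row_ptr[u+1] delimit user u's ratings.
order = np.lexsort((movies, users))
users, movies, r = users[order], movies[order], r[order]
row_ptr = np.searchsorted(users, np.arange(users.max() + 2))

def ratings_of(u):
    """All (movie, rating) pairs for user u, via the index."""
    lo, hi = row_ptr[u], row_ptr[u + 1]
    return list(zip(movies[lo:hi].tolist(), r[lo:hi].tolist()))

print(ratings_of(2))  # [(0, 2), (2, 1)]
```

At Netflix scale this layout costs roughly 9 bytes per rating plus a small index, versus tens of gigabytes for a dense float matrix.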

- Global effects
- Nearest neighbors
- Matrix factorization
- Restricted Boltzmann machine
- Clustering
- Etc.

Nearest Neighbors in Action

[Figure: the target user's row of the ratings matrix compared against other users' rows]
- Identical preferences – strong weight
- Similar preferences – moderate weight

Matrix Factorization in Action
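The neighbor-weighting idea just illustrated can be sketched as user-based nearest neighbors: predict a rating as a similarity-weighted average of other users' ratings for that movie. Toy data and plain cosine similarity here; contest-grade variants used far more elaborate similarity and shrinkage schemes:

```python
import math

# user -> {movie: rating}, invented toy data
ratings = {
    "u1": {"m1": 5, "m2": 3, "m3": 4},
    "u2": {"m1": 5, "m2": 3},            # identical overlap with u1
    "u3": {"m1": 4, "m2": 2, "m3": 1},
}

def similarity(a, b):
    """Cosine similarity over commonly rated movies."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][m] * ratings[b][m] for m in common)
    na = math.sqrt(sum(ratings[a][m] ** 2 for m in common))
    nb = math.sqrt(sum(ratings[b][m] ** 2 for m in common))
    return dot / (na * nb)

def predict(user, movie):
    """Similarity-weighted average of neighbors' ratings for the movie."""
    neighbors = [(similarity(user, v), ratings[v][movie])
                 for v in ratings if v != user and movie in ratings[v]]
    wsum = sum(w * r for w, r in neighbors)
    wtotal = sum(w for w, _ in neighbors)
    return wsum / wtotal if wtotal else None

print(predict("u2", "m3"))
```

Here u1 (identical overlap with u2) gets full weight and u3 slightly less, so the prediction falls between their ratings of m3, pulled toward u1's.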

[Figure: the 480,189-user x 17,770-movie ratings matrix is approximated by the product of two much smaller matrices – a 5-factor x 17,770-movie matrix (a bunch of numbers) and a 480,189-user x 5-factor matrix (a bunch of numbers) – a reduced-rank singular value decomposition (sort of)]
- To predict a desired <user, movie> rating: multiply and add the corresponding factor vectors (dot product)

The Power of Blending
- Error function (RMSE) is convex, so linear combinations of models should have lower error
- Find blending coefficients with a simple least-squares fit of model predictions to the true values of the probe set
- Example from my experience:
  - Blended 89 diverse models
  - RMSE range = 0.8859 – 0.9959
  - Blended model had RMSE = 0.8736
  - Improvement of 0.0123 over best single model
  - 13% of the progress needed to win

Algorithms: Other Things That Mattered
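The least-squares blending described above can be sketched as follows, using synthetic stand-ins for the probe-set truth and for a handful of imperfect models; because each single model is itself a linear combination, the fitted blend can never do worse than the best individual model on the fitting data:

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.uniform(1, 5, size=200)  # stand-in for probe-set ratings

# Three imperfect "models": the truth plus different amounts of noise.
preds = np.stack([true + rng.normal(0, s, 200) for s in (0.6, 0.8, 1.0)])

# Blending coefficients: solve min_w || preds.T @ w - true ||^2
w, *_ = np.linalg.lstsq(preds.T, true, rcond=None)
blend = preds.T @ w

def rmse(p):
    return float(np.sqrt(np.mean((p - true) ** 2)))

print([round(rmse(p), 3) for p in preds], "->", round(rmse(blend), 3))
```

Diversity matters: blending many weakly correlated models buys more than blending near-duplicates, which is why 89 diverse models beat the single best by 0.0123 RMSE.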

- Overfitting
  - Models typically had millions or even billions of parameters
  - Control with aggressive regularization
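A minimal sketch of the combination just described: matrix factorization fit by stochastic gradient descent, with an L2 penalty shrinking the factor vectors at every update. Toy data throughout; the factor count, learning rate, and regularization strength are arbitrary illustrative choices, not values contestants used:

```python
import numpy as np

# (user, movie, rating) triplets -- invented toy data
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0),
           (2, 1, 4.0), (2, 2, 2.0)]
n_users, n_movies, k = 3, 3, 2
lr, lam = 0.05, 0.02   # learning rate, L2 regularization strength

rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((n_users, k))   # user factor vectors
Q = 0.1 * rng.standard_normal((n_movies, k))  # movie factor vectors

for epoch in range(500):
    for u, m, r in ratings:
        err = r - P[u] @ Q[m]                    # prediction error
        P[u] += lr * (err * Q[m] - lam * P[u])   # gradient step + shrinkage
        Q[m] += lr * (err * P[u] - lam * Q[m])

pred = float(P[0] @ Q[0])  # prediction for an observed (user 0, movie 0) pair
```

The `- lam * P[u]` term is the regularization: without it, this many parameters would simply memorize the training ratings and generalize poorly.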

- Time-related effects
  - Netflix data included the date of movie release and the dates of ratings
  - Most of the progress in the final two years of the contest came from incorporating temporal information

The Netflix Prize: Social Phenomena
- Competition was intense, but sharing and collaboration were equally so
- Lots of publications and presentations at meetings while the contest was still active
- Lots of sharing of ideas and implementation details on the contest forums
- Vast majority of teams:
  - Not machine learning professionals
  - Not competing to win (until the very end)
  - Mostly used algorithms published by others

One Algorithm from Winning Team (time-dependent matrix factorization)
Yehuda Koren, Comm. ACM 53, 89 (2010)

Netflix Prize Progress: Major Milestones

[Chart: RMS error on the quiz set vs. date, axis from 1.05 down to 0.85; "me, starting June, 2008" marked on the curve]
- Trivial algorithm and Cinematch shown as baselines
- 2007 Progress Prize (Oct. 2007): 8.43% improvement – winner BellKor
- 2008 Progress Prize (Oct. 2008): 9.44% improvement – winner BellKor in BigChaos
- Grand Prize (July 2009): 10.09% achieved, 10.00% required – winner ???

June 25, 2009, 20:28 GMT / June 26, 18:42 GMT – BPC Team Breaks 10% (by blending)
- 30-day last call period begins

Genesis of The Ensemble

[Diagram of team mergers:]
- Dinosaur Planet (3 individuals), Gravity (4 individuals), Vandelay Industries (5 individuals, including me), Opera Solutions (5 individuals), 9 other individuals, and 7 other individuals
- merging into Opera and Vandelay United (14 individuals) and Grand Prize Team (19 individuals)
- which united as The Ensemble (33 individuals)
www.the-ensemble.com

June 30, 16:44 GMT; July 8, 14:22 GMT; July 17, 16:01 GMT – successive merger milestones

July 25, 18:32 GMT – The Ensemble First Appears!
- 24 hours, 10 min before contest ends

- #1 and #2 teams each have one more submission!

July 26, 18:18 GMT – BPC Makes Their Final Submission
- 24 minutes before contest ends

- The Ensemble can make one more submission – their window opens 10 minutes before contest ends

July 26, 18:43 GMT – Contest Over!

Final Test Scores
[Table of final test scores]

Netflix Prize: What Did We Learn?
- Significantly advanced the science of recommender systems
- Properly tuned and regularized matrix factorization is a powerful approach to collaborative filtering
- Ensemble methods (blending) can markedly enhance the predictive power of recommender systems
- Running it as a contest can unleash amazing amounts of sustained effort and creativity
  - Netflix made out like a bandit
  - But a contest probably would not be successful for most problems

Netflix Prize: What Did I Learn?
- Several new machine learning algorithms
- A lot about optimizing predictive models
  - Stochastic gradient descent
  - Regularization
- A lot about optimizing code for speed and memory usage
- Some linear algebra and a little PDQ
- Enough to come up with one original approach that actually worked
- Money and fame make people crazy, in both good ways and bad

COST: about 1000 hours of my free time over 13 months