Debiasing Online Reviews with Machine Learning

DEBIASING ONLINE REVIEWS WITH MACHINE LEARNING

A Thesis Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Master of Science

by Shivank Goel
May 2020

© 2020 Shivank Goel
ALL RIGHTS RESERVED

ABSTRACT

Online review aggregation platforms suffer from biases that can harm both the platforms and their consumers, because the crowd-sourced average rating on these platforms does not reflect the true perceived quality of the product or service. We study the problem of polarization bias on Yelp and evaluate an estimation model for determining the unbiased average rating. We explore how Yelp's Elite membership program helps reduce this bias. Our results suggest that the biased average rating listed on platforms is correlated with the true unbiased rating, and that by including more information from online reviews as input features of our model we can obtain a reasonably accurate estimate of the average unbiased rating. We propose and compare several predictive models for estimating unbiased average ratings and show that textual data can play a critical role in enhancing predictive power. Our results can help review aggregation platforms design mechanisms to reduce bias on their platforms, and can give businesses and consumers access to a fairer metric of service quality.

BIOGRAPHICAL SKETCH

Shivank Goel is a second-year MS candidate in the Operations Technology & Information Management department at Cornell University's SC Johnson College of Business. Shivank's research interests lie in examining the impact of policy and economic factors using data-driven analytical methods at the intersection of econometrics and machine learning.
Shivank graduated with a Bachelor's degree in Computer Science from the Indian Institute of Technology (IIT) Delhi in 2018. In 2017, he served as a software engineering intern with the big data team at Microsoft in Bangalore, and at IIT he took part in several data science projects. After joining Cornell's prestigious business school, Shivank's enthusiasm for data science and its transformational power in tackling complex business problems was further reinforced. After graduating from Cornell, Shivank will join Amazon's headquarters in Seattle in Fall 2020 as a full-time software engineer. In his leisure time, Shivank runs a YouTube channel, where he regularly posts video tutorials on algorithmic coding, statistics, and data science.

This is dedicated to all Cornell graduate students.

ACKNOWLEDGEMENTS

I would first like to express my sincere thanks to my thesis advisors, Prof. Li Chen and Prof. Shawn Mankad of the SC Johnson College of Business at Cornell University, for their continuous guidance and support. The door to Prof. Chen's and Prof. Mankad's offices was always open whenever I got stuck or had a question about my research or writing. They were always willing to adjust the project scope to my interests and expertise so that I could contribute more to my thesis. They always allowed this paper to be my own work, but steered me in the right direction whenever I needed it. I would also like to thank all the faculty members of the Operations Technology and Information Management (OTIM) department who helped and supported me on this journey. Special thanks to Prof. Andrew Davis, Prof. Vishal Gaur, Prof. Chris Forman, and Prof. Karan Girotra, who stayed in close touch with me and provided valuable advice throughout my graduate studies. I would also like to especially thank the director of graduate studies at Johnson, Prof. Vrinda Kadiyali, for her guidance and support during my time at Johnson.
I would also like to acknowledge my friends and colleagues at Cornell University who served as second readers of this thesis, and I am grateful to them for their valuable comments. Finally, I would like to express my gratitude to my parents for providing me with unfailing support and unceasing encouragement throughout my years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without them. Thank you.

Shivank Goel

TABLE OF CONTENTS

Biographical Sketch
Dedication
Acknowledgements
Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Introduction
  1.2 Literature Review
    1.2.1 Economic Impact of Online Reviews
    1.2.2 Acquisition and Under-Reporting Bias
    1.2.3 Textual Data Analytics on Online Reviews
  1.3 Dataset
    1.3.1 Performance Metrics
2 Preliminary Observations
  2.1 Acquisition Bias
  2.2 Under-Reporting Bias
3 Results
  3.1 Relation Between Average Elite Ratings and Non-Elite Ratings
  3.2 Adding Non-Elite Ratings Distribution to Regression
  3.3 Adding Text Information: Tf-Idf
  3.4 Pushing Boundaries with Machine Learning
    3.4.1 Bagging with Linear Regression
    3.4.2 Random Forest Regression
    3.4.3 Support Vector Regression
    3.4.4 Gradient Boosted Decision Trees
    3.4.5 Neural Networks
  3.5 Trade-Off: Heterogeneity and Amount of Data
4 Conclusion
A Yelp's Elite Program
B ML Algorithms
  B.1 Decision Trees
  B.2 Bagging
  B.3 Boosting
  B.4 Principal Component Analysis
  B.5 Support Vector Regression and Kernels
  B.6 Neural Networks

LIST OF TABLES

1.1 Summary for different categories in the Yelp dataset
2.1 Number of distinct businesses reviewed: elites vs. non-elites
2.2 Comparison of distribution of average business ratings
2.3 Linear Probability Model results
3.1 Distribution of non-elites for businesses with higher elite rating vs. lower elite rating
3.2 Performance of regression and identity on test dataset
3.3 Using non-elite distribution as regression inputs
3.4 Using the buckets in the ridge regression
3.5 Adding text information to regression
3.6 Comparison of bagging with linear regression
3.7 Random forest comparison with linear regression without text
3.8 Random forest comparison with linear regression with text
3.9 Relative feature importance
3.10 SVR comparison with linear regression
3.11 Gradient boosted trees comparison with linear regression
3.12 Neural networks comparison with linear regression
3.13 Comparison between all machine learning models

LIST OF FIGURES

1.1 Top five business categories
2.1 Non-elite ratings distribution
2.2 Elite ratings distribution
3.1 Comparison of average ratings by non-elites and elites
3.2 Businesses with higher average elite rating than average non-elite
3.3 Businesses with lower average elite rating than average non-elite
3.4 Debiasing effect of elites assuming all non-elite ratings are the same
3.5 Debiasing effect of elites using non-elite buckets
3.6 Comparison of average ratings by non-elites and elites
3.7 Comparison of average ratings by non-elites and elites
3.8 Comparison of average ratings by non-elites and elites
B.1 Entropy variation with probability of first label
B.2 Working of random forests
B.3 Support vector regression

CHAPTER 1
INTRODUCTION

1.1 Introduction

Online reviews are a powerful resource for consumers to determine the quality of a service. In today's tech-savvy world, 90% of consumers look up a business online and read its online reviews before visiting.[1] Star ratings provide a quick way for search engines and websites to share consumers' experience with a particular product or service.
These ratings significantly impact revenues: Luca (2016) shows that a one-star increase in a Yelp rating contributes to a 5 to 9% increase in the business's revenue. Given this economic impact of reviews, it is vital for both businesses and consumers that platforms provide an unbiased estimate of the star ratings of the products and services offered. Specifically, a biased rating for a product or service may have three negative effects. First, it may directly reduce sales and revenue for the business. Second, it can lead to a poor customer experience, which hinders long-term customer commitment and thereby indirectly harms the business in the long run. Third, a poor customer experience also leads to a reputation loss for the platform.

Most users report ratings on online review platforms voluntarily. This inevitably distorts the ratings. For example, Hu et al. (2017) identify two self-selection biases prevalent on review platforms: (i) acquisition bias and (ii) under-reporting bias (also known as polarization bias). Acquisition bias occurs because consumers self-select to acquire a service, i.e., they opt in only if they already perceive a priori that the quality of service will be high. Under-reporting bias, on the other hand, occurs because online platforms tend to aggregate reviews from users with the most polarized opinions (extremely positive or negative). These reviews therefore give a misleading representation of product or service quality.

In this work, we are interested in developing models to estimate the unbiased star rating of businesses. We propose a method that leverages a subset of potentially unbiased reviews to formulate a de-biasing model for the (biased) average rating across all reviewers.

[1] https://www.brightlocal.com/research/local-consumer-review-survey
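To make the under-reporting mechanism concrete, the following small simulation (a hypothetical illustration with made-up parameters, not the thesis model or the Yelp data) shows how the observed average drifts away from the true mean quality when customers with polarized opinions review more often than moderate ones:

```python
import random
from statistics import mean

random.seed(0)

# True experienced quality for 10,000 customers on a 1-5 star scale
# (hypothetical numbers chosen only for illustration).
true_ratings = [min(5, max(1, round(random.gauss(3.6, 0.9)))) for _ in range(10_000)]

# Under-reporting bias: customers with polarized opinions (1 or 5 stars)
# are assumed to review far more often than moderate customers.
def reports(rating):
    return random.random() < (0.8 if rating in (1, 5) else 0.2)

reported = [r for r in true_ratings if reports(r)]

print(f"true mean:     {mean(true_ratings):.2f}")
print(f"reported mean: {mean(reported):.2f}")  # pulled toward the poles
```

Because this hypothetical population contains more delighted customers than outraged ones, oversampling the poles inflates the displayed average well above the true mean, which is exactly the distortion the de-biasing model targets.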
Furthermore, our broader purpose is to demonstrate how non-monetary incentives can be an efficient and economical way for online review aggregation platforms to curate such a pool of unbiased users or reviews.
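The core de-biasing idea can be caricatured in a few lines (a minimal sketch on simulated data; the numbers and the single-predictor form are assumptions, whereas the thesis models use richer inputs such as rating distributions and text features): treat the average rating from a trusted reviewer pool, such as Yelp Elite members, as a proxy for the unbiased rating, and learn a mapping from the biased site-wide average to it.

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical training data: for each of 200 businesses, a biased
# site-wide average (inflated by ~0.6 stars) and a nearly unbiased
# average from a trusted (elite) reviewer pool.
n = 200
true_quality = [random.uniform(2.0, 4.5) for _ in range(n)]
biased_avg = [min(5.0, q + 0.6 + random.gauss(0, 0.2)) for q in true_quality]
elite_avg = [q + random.gauss(0, 0.1) for q in true_quality]

# Ordinary least squares with one predictor: elite ~ a + b * biased.
xbar, ybar = mean(biased_avg), mean(elite_avg)
b = sum((x - xbar) * (y - ybar) for x, y in zip(biased_avg, elite_avg)) / \
    sum((x - xbar) ** 2 for x in biased_avg)
a = ybar - b * xbar

# De-bias a new business whose listed average is 4.2 stars.
print(f"estimated unbiased rating: {a + b * 4.2:.2f}")
```

On this simulated data the fitted line essentially subtracts the inflation, so the estimate lands below the listed 4.2 stars; the thesis chapters replace this toy regression with ridge regression, tree ensembles, SVR, and neural networks over much richer feature sets.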
