Group Decision Making with Partial Preferences

Group Decision Making with Partial Preferences

by

Tyler Lu

A thesis submitted in conformity with the requirements
for the degree of Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto

© Copyright 2015 by Tyler Lu

Abstract

Group Decision Making with Partial Preferences
Tyler Lu
Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto
2015

Group decision making is of fundamental importance in all aspects of modern society. Many commonly studied decision procedures require that agents provide full preference information. This requirement imposes significant cognitive and time burdens on agents, increases communication overhead, and infringes on agent privacy. As a result, the need to specify full preferences is one of the factors limiting the real-world adoption of some commonly studied voting rules.

In this dissertation, we introduce a framework consisting of new concepts, algorithms, and theoretical results that provides a sound foundation for making group decisions with only partial preference information. In particular, we focus on single- and multi-winner voting. We introduce minimax regret (MMR), a group decision criterion for partial preferences, which quantifies the worst-case loss in social welfare of the chosen alternative(s) compared to the unknown, but true, winning alternative(s). We develop polynomial-time algorithms for computing MMR under a number of common voting rules, and prove intractability results for other rules. We then address preference elicitation, the second part of our framework, which concerns extracting only those agent preferences that are relevant to reducing MMR. We develop several elicitation strategies, based on common ideas, for different voting rules and query types.
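To make the minimax-regret criterion concrete, here is a small brute-force sketch. It is illustrative only and not taken from the thesis: the function names are mine, Borda scoring stands in for an arbitrary positional rule, and the thesis develops polynomial-time algorithms rather than enumerating completions as done here.

```python
from itertools import permutations, product

def completions(alternatives, known_prefs):
    """All rankings consistent with known pairwise preferences (x, y): x over y."""
    result = []
    for r in permutations(alternatives):
        pos = {a: i for i, a in enumerate(r)}
        if all(pos[x] < pos[y] for x, y in known_prefs):
            result.append(r)
    return result

def borda(ranking, a):
    """Borda score of alternative a under a single ranking."""
    return len(ranking) - 1 - ranking.index(a)

def minimax_regret(alternatives, partial_profile):
    """Return (alternative, regret) minimizing the worst-case Borda-score loss
    over all joint completions of the agents' partial preferences."""
    per_agent = [completions(alternatives, p) for p in partial_profile]
    best = None
    for a in alternatives:
        max_regret = 0
        for profile in product(*per_agent):
            score_a = sum(borda(r, a) for r in profile)
            score_best = max(sum(borda(r, w) for r in profile) for w in alternatives)
            max_regret = max(max_regret, score_best - score_a)
        if best is None or max_regret < best[1]:
            best = (a, max_regret)
    return best
```

With fully specified preferences the regret is zero and the MMR-optimal alternative coincides with the Borda winner; with missing comparisons, the regret value quantifies the worst-case welfare loss of stopping elicitation now and choosing immediately.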
While MMR can be applied in a distribution-free setting, in many practical environments decision makers have access to historical datasets of, and probabilistic knowledge about, agent preferences. To leverage such information, we first address the problem of learning probabilistic models of preferences from pairwise comparisons (the building block of many preference structures), which previous techniques cannot handle. We then extend our framework to a multi-round elicitation process that leverages probabilistic models to guide and analyze elicitation strategies.

We empirically validate our framework and algorithms on real datasets. Experiments show that our elicitation algorithms query only a fraction of full preferences to obtain alternative(s) with small MMR. Experiments also show that our learning algorithms can learn accurate mixture models of preference types, which we then use to guide the design of one-round top-k elicitation protocols.

Dedication

I dedicate this thesis to my family.

Acknowledgements

First and foremost, I would like to thank my advisor Craig Boutilier for assiduously guiding me during my PhD. Craig is a leading researcher in the AI community who was always enthusiastic about sharing his new ideas, helping me push the envelope in my research, advising me throughout the research process, and engaging me in the broader research community.

I would also like to thank the members of my PhD committee: Alan Borodin, Martin Osborne and Rich Zemel. They were always available to talk and provided advice and guidance throughout my PhD. They also sat in on my practice talks, listened carefully, took notes, and offered excellent questions, astute observations, and some great all-around advice.

I had the pleasure of working with, and co-authoring papers with, some of the best minds in computer science. They include Ariel Procaccia, Ioannis Caragiannis, Moshe Tennenholtz, Or Sheffet, Pingzhong Tang, Reshef Meir and Simi Haber.
I am grateful to my colleagues in the research community who have provided plenty of feedback, ideas and support during my PhD. They include my co-authors Jerome Lang, Lihi Dery, Lirong Xia, Meir Kalech, Tuomas Sandholm, Vincent Conitzer and many others with whom I have had productive and enjoyable conversations. I would also like to thank the AI faculty, including Geoff Hinton, Rich Zemel and Sheila McIlraith, who have taken an interest in my career and my research, and who have encouraged and cheered me on. It is this kind of collegial and caring environment that has boosted my self-confidence, my ambitions and my drive to think big and do great things.

I want to thank my fellow students, who I can, and do, talk to about anything. There is no doubt that the people and culture here are the reason the department is ranked so highly and has cultivated some of the leading researchers over the years. In fact, I would have to admit that my practice talks sometimes attracted a more intense and accomplished audience than my actual conference talks. Luckily, they were all on my side.

Being part of the Department of Computer Science during the past five years has been a great blessing. Great research has been produced here, great people have passed through, and great projects have impacted society at large, sometimes through the founding of companies. There has never been a more exciting time to be here.

Contents

1 Introduction
  1.1 Overall Contributions
  1.2 Organization of Dissertation
2 Overview of Social Choice
  2.1 Preliminaries
    2.1.1 Preference Relations
    2.1.2 Partial Preferences
    2.1.3 Distances over Preference Rankings
  2.2 Single-Choice Problems
  2.3 Multi-Choice Problems
  2.4 Rank Aggregation
  2.5 Social Choice with Partial Preferences
    2.5.1 Possible and Necessary Winners
    2.5.2 Elicitation
3 Robust Optimization and Elicitation for Single-Choice Problems
  3.1 Robust Winner Determination
    3.1.1 Minimax Regret
    3.1.2 Relationship to Possible and Necessary Winners
  3.2 Computing Single-Winner MMR
    3.2.1 Exploiting Pairwise Max Regret
    3.2.2 Positional Scoring Rules
    3.2.3 Maximin Voting
    3.2.4 Bucklin Voting
    3.2.5 Egalitarian Voting
  3.3 Preference Elicitation
  3.4 Empirical Evaluation
  3.5 Related Work
  3.6 Conclusion
4 Robust Optimization and Elicitation for Multiple-Choice Problems
  4.1 Preliminaries
  4.2 Minimax Regret for Slate Optimization
    4.2.1 Computing MMR-Optimal Slates
    4.2.2 A Greedy Algorithm for Robust Slate Optimization
  4.3 Preference Elicitation
  4.4 Empirical Evaluation
  4.5 Conclusion
5 Learning Rankings with Pairwise Preferences
  5.1 Motivation
  5.2 Preliminaries
    5.2.1 Ordinal Preferences
    5.2.2 Mallows Models and Sampling Procedures
    5.2.3 A Mallows Mixture Model for Incomplete Preferences
  5.3 Related Work
  5.4 Generalized Repeated Insertion Model
    5.4.1 Sampling from Arbitrary Ranking Distributions
    5.4.2 Sampling from Mallows Posteriors
    5.4.3 Sampling Mallows Mixture Posteriors
  5.5 EM Learning Algorithm for Mallows Mixtures
    5.5.1 Evaluating Log-Likelihood
    5.5.2 The EM Algorithm
    5.5.3 Monte Carlo EM for Mallows Mixtures
    5.5.4 Complexity of EM Steps
  5.6 Empirical Evaluation
    5.6.1 Sampling Quality
    5.6.2 Evaluating Log-Likelihood
    5.6.3 EM Mixture Learning
    5.6.4 Predicting Missing Pairwise Preferences
  5.7 Applications to Non-Parametric Estimators
  5.8 Conclusion
6 Elicitation with Probabilistic Preference Distributions
  6.1 Motivation
  6.2 A Model of Multi-round Probabilistic Elicitation
  6.3 Probably Approximately Correct One-round Protocols
  6.4 Empirical Evaluation
  6.5 Conclusion
7 Summary and Conclusions
  7.1 Chapter Summary of Main Results
  7.2 Contributions
  7.3 Future Work
Bibliography

List of Tables

4.1 Avg. Greedy (slate) runtimes on Mallows data
5.1 Example of GRIM sampling on a conditional Mallows
5.2 Learned mixture model for sushi data (K = 6)
5.3 Learned mixture model for MovieLens (K = 5)
5.4 Number of missing pairwise comparisons at fixed distances
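As the contents indicate, Chapter 5 builds on the Mallows ranking model. For reference, a minimal sketch of the model's standard textbook definition with dispersion parameter phi and the closed-form Kendall-tau normalizer; this is illustrative only (function names are mine, and it is not the thesis's GRIM sampler or EM procedure):

```python
from itertools import permutations  # used below to enumerate all rankings

def kendall_tau(r, sigma):
    """Number of item pairs ordered differently by rankings r and sigma."""
    pos = {a: i for i, a in enumerate(sigma)}
    return sum(
        1
        for i in range(len(r))
        for j in range(i + 1, len(r))
        if pos[r[i]] > pos[r[j]]
    )

def mallows_prob(r, sigma, phi):
    """P(r | sigma, phi) = phi^d(r, sigma) / Z, where d is Kendall-tau distance
    and Z = prod_{j=1..m} (1 + phi + ... + phi^(j-1)) is the normalizer."""
    m = len(sigma)
    z = 1.0
    for j in range(1, m + 1):
        z *= sum(phi ** k for k in range(j))
    return phi ** kendall_tau(r, sigma) / z
```

Probabilities decay geometrically with distance from the reference ranking sigma; a mixture of such components, one per "preference type", is what the EM algorithm of Chapter 5 fits from pairwise comparison data.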
