Nash Q-Learning for General-Sum Stochastic Games

Journal of Machine Learning Research 4 (2003) 1039-1069. Submitted 11/01; Revised 10/02; Published 11/03.

Junling Hu ([email protected])
Talkai Research, 843 Roble Ave., Menlo Park, CA 94025, USA

Michael P. Wellman ([email protected])
Artificial Intelligence Laboratory, University of Michigan, Ann Arbor, MI 48109-2110, USA

Editor: Craig Boutilier

Abstract

We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method. When at least one agent adopts Nash Q-learning, the performance of both agents is better than with single-agent Q-learning. We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance.

Keywords: Reinforcement Learning, Q-learning, Multiagent Learning

1. Introduction

Researchers investigating learning in the context of multiagent systems have been particularly attracted to reinforcement learning techniques (Kaelbling et al., 1996, Sutton and Barto, 1998), perhaps because they do not require environment models and they allow agents to take actions while they learn. In typical multiagent systems, agents lack full information about their counterparts, and thus the multiagent environment constantly changes as agents learn about each other and adapt their behaviors accordingly. Among reinforcement techniques, Q-learning (Watkins, 1989, Watkins and Dayan, 1992) has been especially well studied, and possesses a firm foundation in the theory of Markov decision processes. It is also quite easy to use, and has seen wide application, for example to such areas as (to arbitrarily choose four diverse instances) cellular telephone channel allocation (Singh and Bertsekas, 1996), spoken dialog systems (Walker, 2000), robotic control (Hagen, 2001), and computer vision (Bandera et al., 1996). Although the single-agent properties do not transfer directly to the multiagent context, its ease of application does, and the method has been employed in such multiagent domains as robotic soccer (Stone and Sutton, 2001, Balch, 1997), predator-and-prey pursuit games (Tan, 1993, De Jong, 1997, Ono and Fukumoto, 1996), and Internet pricebots (Kephart and Tesauro, 2000).

Whereas it is possible to apply Q-learning in a straightforward fashion to each agent in a multiagent system, doing so (as recognized in several of the studies cited above) neglects two issues specific to the multiagent context. First, the environment consists of other agents who are similarly adapting; thus the environment is no longer stationary, and the familiar theoretical guarantees no longer apply. Second, the nonstationarity of the environment is not generated by an arbitrary stochastic process, but rather by other agents, who might be presumed rational or at least regular in some important way. We might expect that accounting for this explicitly would be advantageous to the learner. Indeed, several studies (Littman, 1994, Claus and Boutilier, 1998, Hu and Wellman, 2000) have demonstrated situations where an agent who considers the effect of joint actions outperforms a corresponding agent who learns only in terms of its own actions. Boutilier (1999) studies the possibility of reasoning explicitly about coordination mechanisms, including learning processes, as a way of improving joint performance in coordination games.

In extending Q-learning to multiagent environments, we adopt the framework of general-sum stochastic games. In a stochastic game, each agent's reward depends on the joint action of all agents and the current state, and state transitions obey the Markov property. The stochastic game model includes Markov decision processes as a special case where there is only one agent in the system. General-sum games allow the agents' rewards to be arbitrarily related. As special cases, zero-sum games are instances where agents' rewards are always negatively related, and in coordination games rewards are always positively related. In other cases, agents may have both compatible and conflicting interests. For example, in a market system, the buyer and seller have compatible interests in reaching a deal, but have conflicting interests in the direction of price.
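For reference, the model just described can be written down compactly. The tuple notation below is a reconstruction from the surrounding prose (the paper gives its formal definitions in Section 2); the symbols are standard:

```latex
% An n-player general-sum stochastic game (notation reconstructed from the
% prose above; the paper's Section 2 gives the formal definitions).
\[
  \Gamma \;=\; \langle S,\; A^1, \ldots, A^n,\; r^1, \ldots, r^n,\; p \rangle
\]
% S: state space.  A^k: action space of agent k.
% r^k : S \times A^1 \times \cdots \times A^n \to \mathbb{R}: agent k's reward,
%   a function of the current state and the joint action.
% p(s' | s, a^1, ..., a^n): Markov transition probability.
% An MDP is the special case n = 1; a two-player zero-sum game imposes
% r^1 + r^2 = 0, and a coordination game aligns the rewards.
```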
The baseline solution concept for general-sum games is the Nash equilibrium (Nash, 1951). In a Nash equilibrium, each player effectively holds a correct expectation about the other players' behaviors, and acts rationally with respect to this expectation. Acting rationally means the agent's strategy is a best response to the others' strategies; any deviation would make that agent worse off. Despite its limitations, such as non-uniqueness, the Nash equilibrium serves as the fundamental solution concept for general-sum games.

In single-agent systems, the concept of optimal Q-value can be naturally defined in terms of an agent maximizing its own expected payoffs with respect to a stochastic environment. In multiagent systems, Q-values are contingent on other agents' strategies. In the framework of general-sum stochastic games, we define optimal Q-values as the Q-values received in a Nash equilibrium, and refer to them as Nash Q-values. The goal of learning is to find Nash Q-values through repeated play. Based on the learned Q-values, our agent can then derive the Nash equilibrium and choose its actions accordingly.

In our algorithm, called Nash Q-learning (NashQ), the agent attempts to learn its equilibrium Q-values, starting from an arbitrary guess. Toward this end, the Nash Q-learning agent maintains a model of other agents' Q-values and uses that information to update its own Q-values. The updating rule is based on the expectation that agents would take their equilibrium actions in each state. Our goal is to find the best strategy for our agent, relative to how other agents play the game. In order to do this, our agents have to learn about other agents' strategies, and construct a best response.
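Spelled out, the update this describes takes the following shape for agent i. This is a reconstruction from the prose above (the paper states the rule formally in Section 3), with the learning rate and discount factor written in the usual way:

```latex
% NashQ update for agent i (reconstructed from the description above;
% alpha_t is the learning rate, beta the discount factor).
\[
  Q^i_{t+1}(s, a^1, \ldots, a^n) \;=\;
    (1 - \alpha_t)\, Q^i_t(s, a^1, \ldots, a^n)
    \;+\; \alpha_t \bigl[\, r^i_t + \beta \,\mathrm{NashQ}^i_t(s') \,\bigr]
\]
% Here NashQ^i_t(s') is agent i's expected payoff when every agent plays its
% part of a Nash equilibrium (pi^1(s'), ..., pi^n(s')) of the "stage game"
% whose payoff matrices are the current Q-values Q^1_t(s'), ..., Q^n_t(s').
```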
Learning about other agents' strategies involves forming conjectures about other agents' behavior (Wellman and Hu, 1998). One can adopt a purely behavioral approach, inferring agents' policies directly from observed action patterns. Alternatively, one can base conjectures on a model that presumes the agents are rational in some sense, and attempt to learn their underlying preferences. If rewards are observable, it is possible to construct such a model. The Nash equilibrium approach can then be considered one way to form conjectures, based on game-theoretic conceptions of rationality.

We have proved the convergence of Nash Q-learning, albeit under highly restrictive technical conditions. Specifically, the learning process converges to Nash Q-values if every stage game (defined by interim Q-values) that arises during learning has a global optimal point, and the agents update according to values at this point. Alternatively, it will also converge if every stage game has a saddle point, and agents update in terms of these. In general, properties of stage games during learning are difficult to ensure. Nonetheless, establishing sufficient convergence conditions for this learning process may provide a useful starting point for analysis of extended methods or interesting special cases.

We have constructed two grid-world games to test our learning algorithm. Our experiments confirm that the technical conditions for our theorem can easily be violated during the learning process. Nevertheless, they also indicate that learning reliably converges in Grid Game 1, which has three equally valued global optimal points in equilibrium, but does not always converge in Grid Game 2, which has neither a global optimal point nor a saddle point, but three sets of other types of Nash Q-values in equilibrium.

For both games, we evaluate the offline learning performance of several variants of multiagent Q-learning. When agents learn in an environment where the other agent acts randomly, we find agents are more likely to reach an optimal joint path with Nash Q-learning than with separate single-agent Q-learning. When at least one agent adopts the Nash Q-learning method, both agents are better off. We have also implemented an online version of the Nash Q-learning method that balances exploration with exploitation, yielding improved performance.

Experimentation with the algorithm is complicated by the fact that there may be multiple Nash equilibrium points for a stage game during learning. In our experiments, we choose one Nash equilibrium either based on the expected reward it brings, or based on its rank in the solution list. Such an ordering is determined solely by the action sequence, and has little to do with the properties of the Nash equilibria themselves.

In the next section, we introduce the basic concepts and background for our learning method. In Section 3, we define the Nash Q-learning algorithm and analyze its computational complexity. We prove that the algorithm converges under specified conditions in Section 4. Section 5 presents our experimental results, followed by sections summarizing related literature and discussing the contributions of this work.
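To make the procedure concrete, here is a minimal sketch of one NashQ backup in the two-player case. It is our illustration rather than the authors' code: the table layout, the learning-rate and discount values, and in particular the exhaustive pure-strategy equilibrium search (the paper computes mixed-strategy equilibria of each stage game, and must select among multiple equilibria as discussed above) are all simplifying assumptions.

```python
import numpy as np

def pure_stage_nash(Q1, Q2):
    """Find a pure-strategy Nash equilibrium of the bimatrix stage game
    (Q1, Q2), if one exists. The paper computes mixed-strategy equilibria;
    this exhaustive pure-strategy search is a simplification."""
    n1, n2 = Q1.shape
    for a1 in range(n1):
        for a2 in range(n2):
            # a1 is a best response to a2, and a2 is a best response to a1
            if Q1[a1, a2] >= Q1[:, a2].max() and Q2[a1, a2] >= Q2[a1, :].max():
                return a1, a2
    return None

def nashq_update(Q1, Q2, s, a1, a2, r1, r2, s_next, alpha=0.1, beta=0.99):
    """One NashQ backup. Q1 is agent 1's Q-table; Q2 is its model of agent
    2's Q-table, maintained because both rewards (r1, r2) are assumed
    observable. Tables are indexed [state, a1, a2]."""
    eq = pure_stage_nash(Q1[s_next], Q2[s_next])
    if eq is None:
        # Fallback for stage games with no pure equilibrium -- our choice,
        # not the paper's: each agent's maximin action.
        eq = (Q1[s_next].min(axis=1).argmax(), Q2[s_next].min(axis=0).argmax())
    nash1 = Q1[s_next][eq]  # agent 1's payoff at the stage-game equilibrium
    nash2 = Q2[s_next][eq]  # agent 2's payoff at the same equilibrium
    Q1[s, a1, a2] = (1 - alpha) * Q1[s, a1, a2] + alpha * (r1 + beta * nash1)
    Q2[s, a1, a2] = (1 - alpha) * Q2[s, a1, a2] + alpha * (r2 + beta * nash2)

# Toy usage: 2 states, 2 actions per agent, one observed transition.
Q1 = np.zeros((2, 2, 2))
Q2 = np.zeros((2, 2, 2))
nashq_update(Q1, Q2, s=0, a1=0, a2=1, r1=1.0, r2=-0.5, s_next=1)
```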