Towards Playing Full MOBA Games with Deep Reinforcement Learning

Deheng Ye¹, Guibin Chen¹, Wen Zhang¹, Sheng Chen¹, Bo Yuan¹, Bo Liu¹, Jia Chen¹, Zhao Liu¹, Fuhao Qiu¹, Hongsheng Yu¹, Yinyuting Yin¹, Bei Shi¹, Liang Wang¹, Tengfei Shi¹, Qiang Fu¹, Wei Yang¹, Lanxiao Huang², Wei Liu¹

¹ Tencent AI Lab, Shenzhen, China
² Tencent TiMi L1 Studio, Chengdu, China

{dericye,beanchen,zivenwzhang,victchen,jerryyuan,leobliu,jaylahchen,ricardoliu,frankfhqiu,yannickyu,mailyyin,beishi,enginewang,francisshi,leonfu,willyang,jackiehuang}@tencent.com; [email protected]

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.

Abstract

MOBA games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand challenges to AI systems, such as multi-agent coordination, an enormous state-action space, and complex action control. Developing AI for playing MOBA games has accordingly attracted much attention. However, existing work falls short in handling the raw game complexity caused by the explosion of agent combinations, i.e., lineups, when expanding the hero pool; OpenAI's Dota AI, for example, limits play to a pool of only 17 heroes. As a result, full MOBA games without restrictions are far from being mastered by any existing AI system. In this paper, we propose a MOBA AI learning paradigm that methodologically enables playing full MOBA games with deep reinforcement learning. Specifically, we develop a combination of novel and existing learning techniques, including curriculum self-play learning, policy distillation, off-policy adaptation, multi-head value estimation, and Monte-Carlo tree search, to train and play a large pool of heroes while addressing the scalability issue. Tested on Honor of Kings, a popular MOBA game, we show how to build superhuman AI agents that can defeat top esports players. The superiority of our AI is demonstrated by the first large-scale performance test of MOBA AI agents in the literature.

1 Introduction

Artificial Intelligence for games, a.k.a. Game AI, has been actively studied for decades. We have witnessed the success of AI agents in many game types, including board games like Go [30], Atari series [21], first-person shooting (FPS) games like Capture the Flag [15], video games like Super Smash Bros [6], card games like Poker [3], etc. Nowadays, sophisticated strategy video games attract attention as they capture the nature of the real world [2]; e.g., in 2019, AlphaStar achieved the grandmaster level in playing the real-time strategy (RTS) game StarCraft 2 [33].

As a sub-genre of RTS games, Multi-player Online Battle Arena (MOBA) has also attracted much attention recently [38, 36, 2]. Due to its playing mechanics, which involve multi-agent competition and cooperation, imperfect information, complex action control, and an enormous state-action space, MOBA is considered a preferable testbed for AI research [29, 25]. Typical MOBA games include Honor of Kings, Dota, and League of Legends. In terms of complexity, a MOBA game such as Honor of Kings, even with significant discretization, could have a state and action space of magnitude 10^20000 [36], while that of a conventional Game AI testbed, such as Go, is at most 10^360 [30]. MOBA games are further complicated by the real-time strategies of multiple heroes (each hero is uniquely designed to have diverse playing mechanics), particularly in the 5 versus 5 (5v5) mode, where two teams (each with 5 heroes selected from the hero pool) compete against each other.¹

In spite of its suitability for AI research, mastering MOBA play remains a grand challenge for current AI systems. The state-of-the-art work for the MOBA 5v5 game is OpenAI Five for playing Dota 2 [2], which trains with self-play reinforcement learning (RL). However, OpenAI Five plays with one major limitation²: only 17 heroes were supported, despite the fact that the hero-varying and team-varying playing mechanism is the soul of MOBA [38, 29]. As the most fundamental step towards playing full MOBA games, scaling up the hero pool is challenging for self-play reinforcement learning, because the number of agent combinations, i.e., lineups, grows polynomially with the hero pool size. The agent combinations number 4,900,896 (C(17,5) × C(12,5)) for 17 heroes, while exploding to 213,610,453,056 (C(40,5) × C(35,5)) for 40 heroes. Considering that each MOBA hero is unique and has a learning curve even for experienced human players, existing methods that randomly present these disordered hero combinations to a learning system can lead to "learning collapse" [1], which has been observed in both OpenAI Five [2] and our experiments. For instance, OpenAI attempted to expand the hero pool to 25 heroes, resulting in unacceptably slow training and degraded AI performance, even with thousands of GPUs (see the section "More heroes" in [24] for details). Therefore, we need MOBA AI learning methods that deal with the scalability issues caused by expanding the hero pool.
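The lineup counts above follow from one team drafting 5 heroes out of the pool and the other team drafting 5 out of the remaining heroes, with no hero shared between sides. A minimal sketch that reproduces the quoted figures (pick order and any ban phase are ignored):

```python
from math import comb

def lineup_count(pool_size: int, team_size: int = 5) -> int:
    # One team drafts `team_size` heroes from the pool; the opposing team
    # drafts `team_size` heroes from those that remain (no duplicates).
    return comb(pool_size, team_size) * comb(pool_size - team_size, team_size)

print(lineup_count(17))  # 4,900,896
print(lineup_count(40))  # 213,610,453,056
```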
In this paper, we propose a learning paradigm for supporting full MOBA game-playing with deep reinforcement learning. Under the actor-learner pattern [12], we first build a distributed RL infrastructure that generates training data in an off-policy manner. We then develop a unified actor-critic [20] network architecture to capture the playing mechanics and actions of different heroes. To deal with policy deviations caused by the diversity of game episodes, we apply off-policy adaptation, following [38]. To manage the uncertain value of state-actions in game, we introduce multi-head value estimation into MOBA by grouping reward items. Inspired by the idea of curriculum learning [1] for neural networks, we design a curriculum for multi-agent training in MOBA, in which we "start small" and gradually increase the difficulty of learning. In particular, we start with fixed lineups to obtain teacher models, from which we distill policies [26], and finally we perform merged training. We leverage student-driven policy distillation [9] to transfer knowledge from easy tasks to difficult ones. Lastly, an emerging problem with an expanded hero pool is drafting, a.k.a. hero selection, at the beginning of a MOBA game. The Minimax algorithm [18] used for drafting in existing work with a small hero pool [2] is no longer computationally feasible. To handle this, we develop an efficient and effective drafting agent based on Monte-Carlo tree search (MCTS) [7], as sketched below.
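To make the drafting component concrete, below is a minimal sketch of how an MCTS-based drafting agent could be structured; it is an illustration under simplifying assumptions, not the paper's implementation. The 40-hero pool, the plain alternating pick order, and the predict_win_rate placeholder (standing in for a learned win-rate predictor over completed lineups) are all hypothetical.

```python
import math
import random

HERO_POOL = list(range(40))   # hypothetical hero IDs
TEAM_SIZE = 5

def predict_win_rate(team_a, team_b):
    """Placeholder for a learned win-rate predictor over full lineups.
    A real system would query a trained value model here."""
    seed = hash((tuple(sorted(team_a)), tuple(sorted(team_b))))
    return random.Random(seed).random()

class DraftState:
    """Simplified draft: the two teams alternate single picks, no bans."""
    def __init__(self, team_a=(), team_b=()):
        self.team_a, self.team_b = tuple(team_a), tuple(team_b)

    def side_to_move(self):
        return 0 if len(self.team_a) <= len(self.team_b) else 1

    def legal_picks(self):
        taken = set(self.team_a) | set(self.team_b)
        return [h for h in HERO_POOL if h not in taken]

    def apply(self, hero):
        if self.side_to_move() == 0:
            return DraftState(self.team_a + (hero,), self.team_b)
        return DraftState(self.team_a, self.team_b + (hero,))

    def is_terminal(self):
        return len(self.team_a) == TEAM_SIZE and len(self.team_b) == TEAM_SIZE

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}                      # hero -> child Node
        self.visits, self.value_sum = 0, 0.0    # value from team A's view

    def best_child(self, c=1.4):
        # UCT score: mean value for the side picking here, plus exploration.
        def score(child):
            mean = child.value_sum / child.visits
            if self.state.side_to_move() == 1:  # team B wants team A's value low
                mean = 1.0 - mean
            return mean + c * math.sqrt(math.log(self.visits) / child.visits)
        return max(self.children.values(), key=score)

def mcts_pick(root_state, simulations=2000):
    root = Node(root_state)
    for _ in range(simulations):
        node = root
        # 1. Selection: descend through fully expanded nodes by UCT.
        while node.children and len(node.children) == len(node.state.legal_picks()):
            node = node.best_child()
        # 2. Expansion: try one new pick from this node, if possible.
        if not node.state.is_terminal():
            untried = [h for h in node.state.legal_picks() if h not in node.children]
            hero = random.choice(untried)
            child = Node(node.state.apply(hero), parent=node)
            node.children[hero] = child
            node = child
        # 3. Rollout: finish the draft randomly, then score the full lineup.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_picks()))
        value = predict_win_rate(state.team_a, state.team_b)
        # 4. Backpropagation.
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node = node.parent
    # Recommend the most-visited pick at the root.
    return max(root.children, key=lambda h: root.children[h].visits)

print("suggested first pick:", mcts_pick(DraftState()))
```

In the full system, one would expect the random rollout and placeholder scorer to be replaced by the trained lineup win-rate model, and the pick order to follow the game's actual draft rules.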
Note that the literature still lacks a large-scale performance test of Game AI, due to the expensive nature of evaluating AI agents in real games, particularly for sophisticated video games. For example, AlphaStar Final [33] and OpenAI Five [2] were tested: 1) against professionals for 11 matches and 8 matches, respectively; and 2) against the public for 90 matches and 7,257 matches, respectively (players of all levels could participate without an entry condition). To provide more statistically significant evaluations, we conduct a large-scale MOBA AI test. Specifically, we test using Honor of Kings, a popular and typical MOBA game that has been widely used as a testbed for recent AI advances [36, 38, 37]. Our AI achieved a 95.2% win rate over 42 matches against professionals, and a 97.7% win rate against players of High King level³ over 642,047 matches.

To sum up, our contributions are:

• We propose a novel MOBA AI learning paradigm towards playing full MOBA games with deep reinforcement learning.
• We conduct the first large-scale performance test of MOBA AI agents. Extensive experiments show that our AI can defeat top esports players.

¹ In this paper, MOBA refers to the standard MOBA 5v5 game, unless otherwise stated.
² OpenAI Five has two limitations relative to the regular game: 1) the major one is the limited hero pool, i.e., only a subset of 17 heroes is supported; 2) some game rules were simplified, e.g., certain items were not allowed to be bought.
³ In Honor of Kings, a player's rank can be: No-rank, Bronze, Silver, Gold, Platinum, Diamond, Heavenly, King (a.k.a. Champion), and High King (the highest), in ascending order.

2 Related Work

Our work belongs to system-level AI development for strategy video game playing, so we mainly discuss representative work along this line, covering RTS and MOBA games.

General RTS games. StarCraft has been used as the testbed for Game AI research in RTS for many years. Methods adopted by existing studies include rule-based, supervised learning, reinforcement learning, and their combinations [23, 34]. For rule-based methods, a representative is SAIDA, the champion of the StarCraft AI Competition 2018 (see https://github.com/TeamSAIDA/SAIDA). For learning-based methods, AlphaStar recently combined supervised learning and multi-agent reinforcement learning and achieved the grandmaster level in playing StarCraft 2 [33]. Our value estimation (Section 3.2) is similar to AlphaStar's in its use of the invisible opponent's information.

MOBA games. Recently, a macro-strategy model, named Tencent HMS, was proposed for MOBA Game AI [36]. Specifically, HMS is a functional component for guiding where to go on the map during the game, without considering the action execution of agents, i.e., micro control or micro-management in esports, and is thus not a complete AI solution. The most relevant works are Tencent Solo [38] and OpenAI Five [2]. Ye et al. [38] performed a thorough and systematic study of the playing mechanics of different MOBA heroes. They developed an RL system that masters the micro control of agents in MOBA combats. However, only 1v1 solo games were studied, not the much more sophisticated multi-agent 5v5 games. On the other hand, the similarities between this work and Ye et al.