Reinforcement Learning


Reinforcement Learning: AlphaGo 的左右互搏 (AlphaGo sparring with itself)
Tsung-Hsi Tsai, August 8, 2018, 統計研習營 (Statistics Workshop)

絕藝如君天下少，閒人似我世間無。別後竹窗風雪夜，一燈明暗覆吳圖。
"A supreme art like yours is rare under heaven; an idle man like me is without equal in this world. After we part, by the bamboo window on nights of wind and snow, under a lamp flickering bright and dim, I will replay your games of Go."
—— 杜牧 (Du Mu), 《重送絕句》 (Another Farewell Quatrain)

Overview
I. Story of AlphaGo
II. Algorithm of AlphaGo Zero
III. Experiments with a simple AlphaZero

Story of AlphaGo

What it takes to develop an AlphaGo
• Data: human expert games (optional)
• Approach: reinforcement learning + deep neural networks
• Manpower: programming skill
• Computing power: CPUs + GPUs or TPUs

AlphaGo = 夢想 + 努力 + 時運 (dream + hard work + good timing), and storytelling

Three key persons
• Demis Hassabis (direction)
• David Silver (method)
• Aja Huang (implementation)

Demis Hassabis (b. 1976)
• At age 13, a chess master, world No. 2 for his age
• In 1993, designed the classic game Theme Park; in 1998, founded the games developer Elixir Studios
• 2009: PhD in cognitive neuroscience
• In 2010, founded DeepMind
• In 2014, started the AlphaGo project

David Silver
• Demis Hassabis' partner in game development in 1998
• Deep Q-Network (a breakthrough toward AGI, artificial general intelligence), demonstrated on Atari games
• The idea of the value network (the breakthrough for Go programs)

Aja Huang
• AlphaGo 的人肉手臂 (AlphaGo's "human arm": he placed the stones in its matches)
• Amateur 6-dan Go player
• In 2010, the Go program "Erica", developed by Aja Huang, won a tournament championship
• Graduate Institute of Computer Science, National Taiwan Normal University (臺師大資工所): master's thesis 《電腦圍棋打劫的策略》 (Ko-Fight Strategies in Computer Go), 2003; PhD thesis 《應用於電腦圍棋之蒙地卡羅樹搜尋法的新啟發式演算法》 (New Heuristics for Monte Carlo Tree Search Applied to Computer Go), 2011
• Joined DeepMind in 2012

The birth of AlphaGo
• The dream started right after DeepMind joined Google in 2014.
• Research direction: deep learning and reinforcement learning.
• First achievement: a high-quality policy network for playing Go, trained on a large corpus of human expert games.
• It beat the No. 1 Go program CrazyStone about 70% of the time.

A cool idea
• The most challenging part of a Go program is evaluating the board position.
• David Silver's new idea: self-play using the policy network to produce a large corpus of games. Training data: board positions labeled with the game result, {0, 1}.
• This produced a high-quality evaluation of board positions; the AlphaGo team called it the value network.

Defeating a professional player
• AlphaGo's strength improved quickly after the value network was introduced.
• In October 2015, it beat the European Go champion Fan Hui (樊麾) 5:0, "a feat previously thought to be at least a decade away".
• On January 27, 2016, Nature published the AlphaGo paper, along with the news of the win over Fan Hui, a Chinese-born professional 2-dan.

Next challenge
• Lee Sedol (李世乭, age 33), this century's dominant world Go champion
• Challenge Lee Sedol
• Compare Fan Hui & Lee Sedol

號外 (Extra!): a notable breakthrough
• AlphaGo beat Lee 4:1 in March 2016.

AlphaGo 的代表作 (AlphaGo's signature move): move 37 of game 2
(game diagram)

李世乭第四局的反擊 (Lee Sedol's counterattack in game 4; AlphaGo playing Black)
(game diagram)

AlphaGo upgrade
• Fixes for the bug exposed in game 4.
• AlphaGo Master: from late 2016, it played on online Go platforms at a pace of 10 games per day, defeating the very top Chinese, Korean, and Japanese professionals, including Ke Jie (柯潔), Park Junghwan (朴廷桓), Iyama Yuta (井山裕太), and others.
• Retirement match: in May 2017 in Wuzhen, Zhejiang, China, it played 3 even games against Ke Jie (the world No. 1), with a prize of 1.5 million USD.
• DeepMind released the records of 50 games AlphaGo played against itself.

DeepMind makes another breakthrough: AlphaGo Zero

AlphaGo Zero
• The algorithm is based solely on reinforcement learning, without human data.
• It starts from completely random behavior and continues without human intervention.
• AlphaGo Zero is "simple", elegant, and seemingly universal for board games.

Progress in Elo ratings
(chart: Elo rating over training, compared with the 2015 and 2005 state of the art)

AlphaZero
• A general approach applied to other board games such as chess and shogi (日本將棋, Japanese chess).
• David Silver presented the results at NIPS 2017.

Algorithm of AlphaGo Zero

Chess, …: the MiniMax algorithm
• Players alternate moves; the game tree alternates max, min, max, min levels.

Evaluation of the position
(figure: an example position evaluation)

Monte Carlo tree search (4 steps in one round)
• The four steps: selection, expansion, simulation, backpropagation.
• Each node stores a ratio: # wins / # playouts.

Balancing exploitation and exploration (UCT, introduced in 2006)
• Recommendation: choose the move with the highest value of
  w_i / n_i + c * sqrt(ln t / n_i)
  where w_i is the # wins after the i-th move, n_i the # simulations after the i-th move, c an exploration parameter, and t the total # simulations for the parent node.

Monte Carlo tree search in AlphaGo
• At each time step t of each simulation, an action (move) a_t is selected from state s_t by
  a_t = argmax_a [ Q(s_t, a) + u(s_t, a) ]
  where Q(s, a) is the action value and u(s, a) is an exploration bonus (proportional to P(s, a) / (1 + N(s, a)) in the AlphaGo paper); both are determined with the help of the policy network and the value network.
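The two selection rules above can be sketched in a few lines of Python. This is an illustrative sketch, not DeepMind's code; the function names and the default exploration constant c are assumptions.

```python
import math

def uct_score(wins, visits, parent_visits, c=1.4):
    """UCB1 applied to trees (UCT): exploitation (win rate) plus an
    exploration term that grows for rarely tried moves and shrinks as
    a move accumulates simulations."""
    if visits == 0:
        return float("inf")  # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_move(children):
    """children: list of (move, wins, visits) triples for one node.
    t, the parent's total simulation count, is the sum of child visits."""
    t = sum(v for _, _, v in children)
    return max(children, key=lambda ch: uct_score(ch[1], ch[2], t))[0]
```

With c around 1.4, a move with a worse observed win rate but few simulations can still be selected, which is exactly the exploration behavior the formula is designed to provide.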
Determining the next move from MCTS
• The distribution over recommended moves is
  π(a) ∝ N(s, a)^(1/τ)
  where s is the current state (board position), a is an action (move), N(s, a) is the visit count of the edge (s, a) in the tree, and τ is a temperature parameter (taken toward zero for deterministic play).

Self-play to generate data
• Use the policy and value networks to perform Monte Carlo tree search.
• AlphaGo Zero performs 1,600 MCTS simulations to select each move.
• Training data are triples (s, π, z), where s is a position, π is the distribution over recommended next moves, and z ∈ {−1, +1} is the game outcome.
• Loss function: l = (z − v)^2 − π^T log p + c‖θ‖^2, where (p, v) are the network's policy and value outputs, θ the network weights, and c a regularization constant.

Policy network and value network
• The policy network predicts the best next move; the value network predicts the game result.

A new network
• AlphaGo Zero combines the policy network and the value network into a single two-headed network (two outputs).

Network architecture
• A single convolutional block followed by either 19 or 39 residual convolutional blocks built from the following modules:
  • a convolution of 256 filters of kernel size 3 × 3 with stride 1,
  • batch normalization,
  • a rectifier nonlinearity (ReLU).
• The output of the "residual tower" is passed into two separate "heads" (fully connected layers) that compute the policy and the value.

Training with 20 blocks
• AlphaGo Zero won 100:0 against AlphaGo Lee.

Training with 40 blocks
• AlphaGo Zero won 89:11 against AlphaGo Master.
• (link: AlphaGo Zero 对局研究, game-record studies)

Simple experiments

Question
• Can a simple reinforcement learning approach, without MCTS, master Go?
• Implement it on simple variants of Go.

Atari Go, or the Capture Game
• Win the game by being the first to capture any of the opponent's stones.

More examples
(figure: further example positions)

Advanced strategy
• The strategy is not only to capture the opponent's stones but also to build one's own territory.
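The move-selection rule π(a) ∝ N(s, a)^(1/τ) from the slides above can be sketched as follows. This is an illustrative sketch; the function names and the τ cutoff for the deterministic limit are assumptions.

```python
import random

def move_distribution(visit_counts, tau=1.0):
    """pi(a) proportional to N(s, a)**(1/tau).
    tau = 1 plays proportionally to visit counts; as tau -> 0 the
    distribution collapses onto the most-visited move."""
    if tau <= 1e-3:
        # limit case tau -> 0: all probability mass on the argmax
        best = max(visit_counts, key=visit_counts.get)
        return {a: (1.0 if a == best else 0.0) for a in visit_counts}
    powered = {a: n ** (1.0 / tau) for a, n in visit_counts.items()}
    total = sum(powered.values())
    return {a: p / total for a, p in powered.items()}

def sample_move(visit_counts, tau=1.0, rng=random):
    """Draw one move according to the temperature-adjusted distribution."""
    dist = move_distribution(visit_counts, tau)
    moves, probs = zip(*dist.items())
    return rng.choices(moves, weights=probs, k=1)[0]
```

In AlphaGo Zero the same π also becomes the policy target in the training triple (s, π, z), so a sharper τ during evaluation and a softer τ early in self-play trade off strength against exploration.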
Training pipeline
• Self-play using the current network fills a game pool.
• Pick data from the pool to train the network, producing a better network.
• The better network then drives the next round of self-play.

Playing a game with the network
• Apply the network's prediction. Input: w × w × 3, where w is the board size.
• Example: a board with Black to move next.

Choosing a move from the network output
• The network output is w × w, with each value in (0, 1).
• Exclude invalid points.
• Compute the distribution over recommended moves.
• Choose a move randomly according to that distribution.

Network (Keras code)
1. Convolution (3 × 3) layers: number of filters, denoted F.
2. Number of residual blocks, denoted K.
3. Batch normalization & ReLU activation.
4. Flatten before the fully connected output layer.

Training the network
1. Label the data (assign the incentive): each datum is w × w × 3 (positions of the winning side only); the point of the actual next move is labeled 1, every other point 0.
2. Pick training data randomly from the game pool.
3. Compile with the loss function 'categorical_crossentropy'.

Program flowchart & factors
• Initially: random play.
• Data set (game-pool size: n_game), with insert & delete.
• Pick N data for each training step.
• Self-play n_1 games per round.
• K = # blocks, F = # filters.
• Renew the network each round.

Examining the progress
1. Values of the loss function.
2. Winning rate against random play.
3. Winning rate against the previous network.
4. Other statistics, such as Black's winning rate.
5. Print some games and check them by hand.
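The pipeline and flowchart above can be sketched as a single loop. Everything here is an illustrative assumption rather than the author's actual code: the stub signatures for self_play and train, and the use of a bounded deque for the game pool's "insert & delete".

```python
import random
from collections import deque

def training_pipeline(self_play, train, n_iters=3, n_game=1000, n_1=50, N=256):
    """Skeleton of the slides' loop: self-play fills a bounded game pool,
    random samples from the pool train the network, and the renewed
    network drives the next round of self-play.

    self_play(net, n) -> list of (state, label) training pairs
    train(net, batch) -> improved network
    net is None at first, i.e. the initial games are random play."""
    pool = deque(maxlen=n_game)  # oldest games are deleted automatically
    net = None                   # initially: random play, no network yet
    for _ in range(n_iters):
        pool.extend(self_play(net, n_1))                  # insert new games
        batch = random.sample(list(pool), min(N, len(pool)))  # pick N data
        net = train(net, batch)                           # renew network
    return net
```

The network builder itself (F filters, K residual blocks, batch normalization and ReLU, a flattened fully connected output compiled with 'categorical_crossentropy') would be plugged in through the train stub; it is omitted here to keep the sketch framework-independent.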