Game Playing with Deep Q-Learning Using OpenAI Gym

Total Pages: 16

File Type: PDF, Size: 1020 KB

Game Playing with Deep Q-Learning using OpenAI Gym

Robert Chuchro, Deepak Gupta
[email protected], [email protected]

Abstract

Historically, designing game players requires domain-specific knowledge of the particular game to be integrated into the model for the game playing program. This leads to a program that can only learn to play a single particular game successfully. In this project, we explore the use of general AI techniques, namely reinforcement learning and neural networks, in order to architect a model which can be trained on more than one game. Our goal is to progress from a simple convolutional neural network with a couple of fully connected layers to experimenting with more complex additions, such as deeper layers or recurrent neural networks.

1. Introduction

Game playing has recently emerged as a popular playground for exploring the application of artificial intelligence theory. With the addition of convolutional neural networks, the performance of game playing programs has seen improvement that can go beyond the ability of even expert-level human players. This demonstrates the powerful learning capabilities of computers in a variety of challenging environments. Designing a model which is agnostic to its environment allows us to investigate a core problem of artificial intelligence, which is the concept of general intelligence.

The benefit of interfacing with OpenAI Gym is that it is an actively developed interface which keeps adding more environments and features useful for training. One of the core challenges with computer vision is obtaining enough data to properly train a neural network, and OpenAI Gym provides a clean interface with dozens of different environments.

1.1. Learning Environment

In this project, we will be exploring reinforcement learning on a variety of OpenAI Gym environments (G. Brockman et al., 2016). OpenAI Gym is an interface which provides various environments that simulate reinforcement learning problems. Specifically, each environment has an observation state space, an action space used to interact with the environment and transition between states, and a reward associated with performing a particular action in a given state. This information is fundamental to any reinforcement learning problem.

The input to our model will be a sequence of pixel images as arrays (Width x Height x 3) generated by a particular OpenAI Gym environment. We then use a Deep Q-Network to output an action from the action space of the game. The output that the model will learn is the action from the environment's action space that maximizes future reward from a given state. In this paper, we explore using a neural network with multiple convolutional layers as our model.

2. Related Work

The best-known success story of classical reinforcement learning is TD-Gammon, a backgammon playing program which learned entirely by reinforcement learning [6]. TD-Gammon used a model-free reinforcement learning algorithm similar to Q-learning. Attempts to build on the success of TD-Gammon, such as applying the same techniques to Go and checkers, were less successful; it was later concluded that the stochasticity of the dice rolls helps with exploring the state space and makes the value function particularly smooth [7].

The core of the problem is applying non-linear function approximation, such as a neural network, to the reinforcement learning problem. Traditionally, reinforcement learning problems tend to become unstable or diverge when a non-linear function is used to represent the action value (the Q value) [1]. Learning to control agents directly from high-dimensional sensory inputs like vision is one of the long-standing challenges of reinforcement learning [2]. Most of the recent algorithms and work that use deep learning models to approximate the Q function for reinforcement learning have been made possible by recent breakthroughs in computer vision [3, 4, 5]. The first deep learning model to tackle the reinforcement learning problem was introduced by DeepMind Technologies [2], which successfully learned control policies directly from image input using reinforcement learning. They used a convolutional neural network to train on seven Atari 2600 games from the Arcade Learning Environment with no adjustment of the architecture or learning algorithm, and were able to achieve state-of-the-art results in six of the seven games tested, with no per-game tuning of the architecture or hyperparameters.

Following the success of DQN, the Google DeepMind team experimented with applying a deep convolutional network to the Double Q-learning algorithm [8, 9], the motivation being that standard deep Q-learning algorithms were known to overestimate values under certain conditions. If the overestimations are not uniform, they can negatively impact the quality of the resulting policy [10]. Double DQN builds on the idea of Double Q-learning, which aims to reduce overestimations by decomposing the max operation in the target into action selection and action evaluation [8]. The Google DeepMind team concluded that DDQN clearly improves over DQN; noteworthy examples include Road Runner (from 233% to 617%), Asterix (from 70% to 180%), Zaxxon (from 54% to 111%), and Double Dunk (from 17% to 397%). DDQN was also estimated to show lower variance in results than the corresponding DQN model.

There has also been experimentation with different network architectures within the convolutional network domain, such as Dueling Networks, which contain two separate estimators: one for the state value function and one for the state-dependent action advantage function. This helps in generalizing learning across actions without imposing any change to the underlying reinforcement learning algorithm [11]. The Google DeepMind team experimented with applying dueling networks to DDQN with prioritized experience replay [12] and with uniform networks without any prioritized experience replay buffer. They evaluated the dueling network architecture on 57 Atari games and found that the prioritized dueling agent performs significantly better than both the prioritized baseline agent and the dueling agent alone.

Recent work by Google DeepMind involves an asynchronous variant of the actor-critic model which surpasses the current state of the art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU [13]. This asynchronous actor-critic variant draws inspiration from the General Reinforcement Learning Architecture (Gorila) and from earlier work which studied the convergence properties of Q-learning in the asynchronous optimization setting [15].
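For reference, one standard way to write the targets that distinguish DQN from Double DQN, together with the dueling decomposition mentioned above, is the following (our notation, in the spirit of [8] and [11]; the excerpt itself does not state these equations). Here theta denotes the online network parameters, theta^- the target network parameters, r the reward, and gamma the discount factor:

\[
y^{\mathrm{DQN}} = r + \gamma \max_{a'} Q(s', a'; \theta^-),
\qquad
y^{\mathrm{DoubleDQN}} = r + \gamma \, Q\big(s', \operatorname*{arg\,max}_{a'} Q(s', a'; \theta); \theta^-\big)
\]

\[
Q(s, a; \theta) = V(s; \theta) + \Big( A(s, a; \theta) - \tfrac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta) \Big)
\]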
3. Data Format

3.1. Interacting with the Environment

All of the data used for training our model comes from interacting with the OpenAI Gym interface (CITE). Interacting with the Gym interface has three main steps: registering the desired game with Gym, resetting the environment to get the initial state, then applying a step on the environment to generate a successor state.

The input required to step in the environment is an action value. More specifically, it is an integer in the range [0, num_actions). The information that we receive back is the successor state, represented as an image array of the form (Width x Height x 3), where the third dimension holds the RGB values at each pixel. In addition, we receive a reward value from the transition as well as an indicator that tells us whether the particular episode has terminated.

In the case of Flappy Bird, the observation space is a matrix of (288x512x3) representing the pixel values of the image of a particular frame in the game. Upon passing between a set of pipes, a reward of 1 is received. If Flappy Bird collides with a pipe, the reward is -5 and the episode ends. A reward of 0 is given in all other states. The action space that we will learn our policy on consists of just two actions, 0 or 1. A sample image of a frame generated by the OpenAI Gym environment is shown in figure [1].

Figure 1. Sample frame of Flappy Bird. Action of 1 causes the bird to go up, while 0 allows gravity to drag it down.
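A minimal sketch of this register/reset/step loop is shown below. The environment id 'FlappyBird-v0' and the gym_ple import are assumptions about how the game is registered with Gym (the excerpt does not specify them), and the random policy is only a placeholder for the learned agent.

```python
import gym
import gym_ple  # assumed binding that registers PLE games such as 'FlappyBird-v0'

# Register/create the desired game with Gym (environment id is an assumption).
env = gym.make('FlappyBird-v0')

# Reset the environment to obtain the initial state: an RGB image array
# of shape (Width x Height x 3) as described in Section 3.1.
state = env.reset()

done = False
episode_return = 0.0
while not done:
    # Step with an integer action in [0, num_actions); here a random placeholder.
    action = env.action_space.sample()
    next_state, reward, done, info = env.step(action)  # classic Gym step API
    episode_return += reward  # +1 for passing pipes, -5 on collision, 0 otherwise
    state = next_state

print('episode finished, return =', episode_return)
```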
3.2. Input Preprocessing

Once we receive the image from the Gym environment, we apply a couple of steps to format the image before finally using it as input to the model. First, we convert the image to grayscale, which should reduce discrepancies between episodes with different background settings (night/day, pipe/bird color). It also reduces the size of the third dimension from 3 (RGB) to 1. Next, we de-noise the image using adaptive thresholding. In Flappy Bird, the background contains some unnecessary pixels that can create noise for the neural network, such as stars in the night background. Applying adaptive Gaussian thresholding (CITE) incorporates each pixel's neighboring values to determine whether its value is noise or not. This same algorithm generalizes well to formatting other games, as shown in figure [2].

Afterwards, we normalize the values to be between 0 and 1 and reduce the image to 80x80 pixels. This allows for consistent input sizes across games as well as decreasing the dimensionality of each layer of our network, allowing for faster passes and fewer parameters per layer. Finally, we stack the last 4 frames (CITE) to form the 80x80x4 input to the network. For the reward values, we clip all incoming rewards such that positive rewards become 1, negative rewards become -1, and 0 rewards remain 0. This

Figure 2. Image preprocessing for Flappy Bird (top) and Pixel

Network Architecture
Type               Classes / Filters   Filter Size   Stride   Activation
Conv-1             32                  8x8           4        ReLU
Conv-2             64                  4x4           2        ReLU
Conv-3             64                  3x3           1        ReLU
Fully Connected-1  512                 -             -        ReLU
Fully Connected-2  # of actions        -             -        Linear

Figure 3. Details of layers of our current network, based off the DeepMind DQN model (V.
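To make the preprocessing pipeline of Section 3.2 concrete, here is a minimal sketch using OpenCV and NumPy. The adaptive-threshold block size and constant (11 and 2) are illustrative assumptions, not values reported in the paper.

```python
from collections import deque

import cv2
import numpy as np


def preprocess_frame(frame):
    """Convert one RGB Gym frame into a normalized 80x80 image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)  # drop the color channels
    # Adaptive Gaussian thresholding to suppress background noise (e.g. stars).
    # Block size 11 and constant 2 are illustrative choices, not from the paper.
    denoised = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, 11, 2)
    small = cv2.resize(denoised, (80, 80))          # consistent input size
    return small.astype(np.float32) / 255.0         # normalize to [0, 1]


def clip_reward(reward):
    """Clip rewards to {-1, 0, +1}."""
    return float(np.sign(reward))


# Stack of the last 4 preprocessed frames, forming the 80x80x4 network input.
frame_stack = deque(maxlen=4)

def stacked_input(frame, new_episode=False):
    processed = preprocess_frame(frame)
    if new_episode or not frame_stack:
        frame_stack.clear()
        frame_stack.extend([processed] * 4)  # repeat the first frame of an episode
    else:
        frame_stack.append(processed)
    return np.stack(frame_stack, axis=-1)    # shape (80, 80, 4)
```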
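For concreteness, a network matching the layer table above (Figure 3) could look like the following Keras sketch; the framework choice and any unlisted details (padding, initialization, optimizer) are our assumptions, since the excerpt only specifies the layer sizes.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_q_network(num_actions, input_shape=(80, 80, 4)):
    """Q-network following the layer table in Figure 3 (other details assumed)."""
    return models.Sequential([
        layers.Conv2D(32, kernel_size=8, strides=4, activation='relu',
                      input_shape=input_shape),                    # Conv-1
        layers.Conv2D(64, kernel_size=4, strides=2, activation='relu'),  # Conv-2
        layers.Conv2D(64, kernel_size=3, strides=1, activation='relu'),  # Conv-3
        layers.Flatten(),
        layers.Dense(512, activation='relu'),                      # Fully Connected-1
        layers.Dense(num_actions, activation='linear'),            # Fully Connected-2: one Q-value per action
    ])


q_net = build_q_network(num_actions=2)  # e.g. Flappy Bird has two actions
q_net.summary()
```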