Learn to Interpret Atari Agents


Zhao Yang, Song Bai, Li Zhang, Philip H.S. Torr
University of Oxford, Oxford, UK
arXiv:1812.11276v2 [cs.LG] 24 Jan 2019. Preliminary work.

Abstract

Deep Reinforcement Learning (DeepRL) agents surpass human-level performance in a multitude of tasks. However, the direct mapping from states to actions makes it hard to interpret the rationale behind the decision making of agents. In contrast to previous a-posteriori methods for visualizing DeepRL policies, we propose an end-to-end trainable framework based on Rainbow, a representative Deep Q-Network (DQN) agent. Our method automatically learns important regions in the input domain, which enables characterization of the decision making and interpretation of non-intuitive behaviors. Hence we name it Region Sensitive Rainbow (RS-Rainbow). RS-Rainbow uses a simple yet effective mechanism to incorporate visualization ability into the learning model, not only improving model interpretability but also leading to improved performance. Extensive experiments on the challenging Atari 2600 platform demonstrate the superiority of RS-Rainbow. In particular, our agent achieves state-of-the-art performance with just 25% of the training frames. Demonstrations and code are available at https://github.com/yz93/Learn-to-Interpret-Atari-Agents.

1. Introduction

Understanding deep neural networks (DNNs) has been a long-standing goal of the machine learning community. Many efforts exploit the class-discriminative nature of CNN-based classification models (Krizhevsky et al., 2012) to produce human-interpretable visual explanations (Simonyan et al., 2014; Zeiler & Fergus, 2014; Springenberg et al., 2015; Shrikumar et al., 2017; Fong & Vedaldi, 2017; Dabkowski & Gal, 2017).

With the advent of Deep Reinforcement Learning (DeepRL) (Mnih et al., 2013; 2015), there is increasing interest in understanding DeepRL models. Combining deep learning techniques with reinforcement learning algorithms, DeepRL leverages the strong representation capacity and approximation power of DNNs for return estimation and policy optimization (Sutton & Barto, 1998). In modern applications where a state is defined by high-dimensional input data, e.g., Atari 2600 (Bellemare et al., 2013), the task of DeepRL divides into two essential sub-tasks: generating (low-dimensional) representations of states, and subsequent policy learning using those representations.

As DeepRL does not optimize for class-discriminative objectives, previous interpretation methods developed for classification models are not readily applicable to DeepRL models. The approximation of the optimal state value or action distribution not only operates in a black-box manner but also incorporates temporal information and environment dynamics. This black-box, sequential nature makes DeepRL models inherently difficult to understand.

Although interpreting DeepRL models is challenging, some efforts have been devoted in recent years to studying the behaviors of these complex models. Most existing interpretation methods (Mnih et al., 2015; Wang et al., 2016; Zahavy et al., 2016; Greydanus et al., 2018) are a-posteriori, explaining a model after it has been trained. For instance, t-SNE-based methods (Mnih et al., 2015; Zahavy et al., 2016) rely on game-specific human intuitions and expert knowledge in RL. Other vision-inspired methods (Wang et al., 2016) adopt traditional saliency techniques. The representative work of Greydanus et al. (2018) takes a data-driven approach, illustrating policy responses to a fixed input masking function at the cost of hundreds of forward passes per frame.
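To make that cost concrete, a perturbation-based saliency map in the spirit of Greydanus et al. (2018) can be sketched as below. This is an illustrative approximation only: the patch size, the mean-value occlusion, and the `policy` interface are assumptions made for this sketch, not details of that method.

```python
import numpy as np

def perturbation_saliency(policy, frame, patch=8, stride=8):
    """Score each patch of `frame` by how much occluding it changes the policy output.

    `policy` is any callable mapping an (H, W) frame to a vector of action scores.
    Every patch needs its own forward pass, so an 84x84 frame at stride 8 already
    costs over a hundred passes per frame, which is the per-frame overhead that a
    learned, built-in importance module avoids.
    """
    base = policy(frame)
    h, w = frame.shape
    ys, xs = range(0, h, stride), range(0, w, stride)
    saliency = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            masked = frame.copy()
            region = masked[y:y + patch, x:x + patch]
            region[...] = region.mean()  # occlude the patch with its mean value
            saliency[i, j] = np.abs(policy(masked) - base).sum()
    return saliency
```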
As a common limitation, these a-posteriori methods cannot use the deduced knowledge to improve training. In this work, we approach the problem from a learning perspective and propose Region Sensitive Rainbow (RS-Rainbow) to improve both the interpretability and the performance of a DeepRL model. To this end, RS-Rainbow leverages a region-sensitive module to estimate the importance of different sub-regions of the screen, which is then used to guide policy learning in end-to-end training. Specifically, a sub-region containing a distinctive pattern or objects useful for policy learning is assigned high importance. A combination of important sub-regions replaces the original unweighted screen as the representation of a state. Throughout an episode, the focus points of a pattern detector change as a result of game dynamics and lead to policy variations. Therefore, each pattern detector illustrates a distinct line of reasoning by the agent. With the region-sensitive module, we produce intuitive visualizations (see Fig. 1) in a single backward pass, without human intervention or repetitive, costly passes through the network.

Figure 1. Visualizing Atari games (a) beam rider, (b) enduro, (c) frostbite, (d) ms pacman, (e) pong, and (f) space invaders. The left frame is the original game frame. The middle and right frames each show a gaze (defined in Sec. 4) of RS-Rainbow during inference. The agent learns multiple salient regions containing functional objects, annotated with red circles for clarity.

The primary contribution of this work is to provide, to the best of our knowledge, the first learning-based approach for automatically interpreting DeepRL models. It requires no extra supervision and is end-to-end trainable. Moreover, it possesses three advantages:

1) In contrast to previous methods (Zahavy et al., 2016; Greydanus et al., 2018), RS-Rainbow illustrates the actual rationale used in inference for decision making, in an intuitive manner and without human intervention.

2) Besides supporting innate interpretation, quantitative experiments on the Atari 2600 platform (Bellemare et al., 2013) demonstrate that RS-Rainbow effectively improves policy learning. In comparison, previous a-posteriori methods are unable to bring performance improvements.

3) The region-sensitive module, the core component of RS-Rainbow, is a simple and efficient plug-in. It can potentially be applied to many DQN-based models for performance gains and a built-in visualization capability.

The rest of the paper is organized as follows. We provide a brief overview of background knowledge in Sec. 2 and present the details of the proposed RS-Rainbow in Sec. 3. Sec. 4 demonstrates the interpretability of RS-Rainbow, and Sec. 5 gives the quantitative evaluation of RS-Rainbow on Atari games. Conclusions are given in Sec. 6.

2. Background

2.1. DQN and Rainbow

As an RL algorithm, DQN seeks a policy that maximizes the long-term return of an agent acting in an environment, with a convergence guarantee provided by the Bellman equation. DQN combines deep learning with the traditional off-policy, value-based Q-learning algorithm by employing a DNN as the value approximation function and mean-squared error minimization as an alternative to temporal-difference updating (Sutton, 1988; Tesauro, 1995). A target network and experience replay are two key engineering feats that stabilize training. In DQN, the Q value is the expected discounted return for executing a particular action in a given state and following the current policy thereafter. Given optimal Q values, the optimal policy follows by taking the action with the highest Q value.
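For reference, the update described above can be written as a minimal sketch of the standard DQN loss with a frozen target network. This is generic DQN code rather than code from the RS-Rainbow implementation; the network objects and the replay-batch format are assumptions.

```python
import torch
import torch.nn.functional as F

def dqn_loss(online_net, target_net, batch, gamma=0.99):
    """Mean-squared TD error of Q(s, a) against r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states, dones = batch  # sampled from experience replay
    # Q value of the action actually taken in each sampled transition.
    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from the periodically updated target network;
        # terminal transitions (dones == 1) contribute only the reward.
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    return F.mse_loss(q, target)
```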
Rainbow (Hessel et al., 2018) incorporates many extensions over the original DQN (Mnih et al., 2013; 2015), each of which enhances a different aspect of the model. The extensions are double DQN (van Hasselt et al., 2016), dueling DQN (Wang et al., 2016), prioritized experience replay (Schaul et al., 2016), multi-step learning (Sutton, 1988), distributional RL (Bellemare et al., 2017), and noisy nets (Fortunato et al., 2018). Double DQN addresses the over-estimation of Q in the target function. Dueling DQN decomposes the estimation of Q into separate estimates of a state value and an action advantage. Prioritized experience replay samples training data with higher learning potential at higher frequency. Multi-step learning looks multiple steps ahead by replacing one-step rewards and states with their multi-step counterparts. Noisy nets inject adaptable noise into linear-layer outputs to introduce state-dependent exploration. In distributional RL, Q is modeled as a random variable whose distribution is learned over a fixed support set of discrete values. The resulting Kullback-Leibler divergence loss enjoys a convergence guarantee because the return distributions satisfy a Bellman equation.

2.2. Understanding DeepRL

Interpreting RL systems traditionally involves language generation via first-order logic (Dodson et al., 2011; Elizalde et al., 2008; Khan et al., 2009; Hayes & Shah, 2017). These approaches rely on small state spaces and high-level state variables with interpretable semantics.

Figure 2. The architecture of the proposed RS-Rainbow: input images, an image encoder, the region-sensitive module, and policy layers with separate value and advantage streams.

3. Proposed Approach

In this section, we introduce our motivation in Sec. 3.1, then describe the architecture of RS-Rainbow in Sec. 3.2, and finally present its capability for visualization in Sec. 3.3.

3.1. Motivation

There are three main considerations in our motivation for
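The excerpt stops before the architectural details of Sec. 3.2, so the exact design of the region-sensitive module is not reproduced here. As a rough illustration of the idea conveyed by the introduction and Figure 2 (scoring spatial sub-regions of the encoder features and reweighting them before the value and advantage streams), a minimal sketch might look as follows. The layer shapes, the single 1x1 scoring convolution, and the softmax normalization are assumptions for this sketch, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionSensitiveModule(nn.Module):
    """Weight each spatial location of the encoder feature map by a learned importance
    score (illustrative sketch of a region-sensitive plug-in, not the published design)."""

    def __init__(self, channels=64):
        super().__init__()
        # A 1x1 convolution produces one importance logit per spatial sub-region.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, features):                     # features: (B, C, H, W)
        b, c, h, w = features.shape
        logits = self.score(features).view(b, -1)    # (B, H*W)
        weights = F.softmax(logits, dim=1).view(b, 1, h, w)
        # The reweighted features replace the unweighted screen representation;
        # `weights`, upsampled to frame size, can be rendered as the agent's "gaze".
        return features * weights, weights

# Illustrative placement in a DQN-style forward pass (all module names assumed):
#   feats = encoder(frames)                  # e.g. (B, 64, 7, 7)
#   feats, gaze = region_module(feats)
#   q = value_stream(feats) + advantage_stream(feats) \
#       - advantage_stream(feats).mean(dim=1, keepdim=True)
```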