Implementing a Chess Coach
Registration: 4861469
Supervisor: Dr Barry Theobald

Abstract

The core component of every chess engine is its AI (Artificial Intelligence). AI has been an area of constant interest and research since the dawn of modern computing, and is important not only to computing but also to a number of other fields, including philosophy, psychology and neurology. In this project, a chess program capable of fielding a game between two players was designed and implemented. Prominent methods of achieving both this and a chess engine capable of providing a challenging game experience to a player were discussed, and the methods used in this instance were justified. The program underwent user testing to ascertain its performance, and the results were discussed.

Acknowledgements

I would like to extend my sincere thanks to my supervisor, Dr Barry J. Theobald, for providing me with invaluable guidance and support throughout this challenging project. I would also like to express my gratitude to all of the users that provided me with helpful and constructive feedback.

Contents

1 Introduction
  1.1 Background and History of Computer Chess Engines
  1.2 Aims
2 Representing a Chess Board
  2.1 Arrays
  2.2 Bitboards
3 Tree Search Algorithms
  3.1 Best-First Search
  3.2 Breadth-First Search
  3.3 Depth-First Search
  3.4 Machine Learning Approaches
4 Evaluating a Chessboard
  4.1 Heuristics
  4.2 Piece-Square Tables
5 Weaknesses of Current Generation Chess Engines
6 Design and Implementation
  6.1 Program Structure
  6.2 Board Representation
  6.3 Artificial Intelligence
7 Testing
  7.1 Framework
  7.2 Profiling
  7.3 Summary of Test Results
8 Conclusions and Future Work
  8.1 Future Work

List of Figures

1.1 Estimated likelihood of winning a game of chess based on the difference in Elo rank between the two players. Given a difference of approximately 450 Elo between top humans and top computers, this shows that a top human player would have only approximately a 5% chance of defeating a top computer. Sourced from (Moser, 2010).
2.1 An example bitboard representing white pawns at the start of a chess game, formatted into the shape of a chessboard for ease of understanding. Each ‘1’ represents a white pawn.
2.2 An illustration of the process of determining whether any of the positions a white knight can attack contain a black piece.
2.3 An example attack bitboard representing the possible lines of attack for a knight on square C3. ‘x’ represents the knight’s position and ‘1’ represents a square that can be attacked by the knight.
3.1 A demonstration of the order of node expansion when traversing a tree structure using breadth-first search.
3.2 A demonstration of the order of node expansion when traversing a tree structure using depth-first search.
3.3 An example minimax tree, searching 4 ply deep. The nodes highlighted in grey demonstrate the path through the tree that the algorithm will determine to be the best available move after completion.
3.4 An example alpha-beta tree, searching 5 ply deep and building upon the minimax algorithm by eliminating subtrees that cannot provide a better score than another subtree that has already been evaluated. The eliminated subtrees are highlighted in grey.
4.1 Example piece-square tables for white pawns, left, and white knights, right.
6.1 Illustration of the cycle of the model-view-controller design pattern. This approach is used to help make large and complex programs modular and maintainable.
6.2 The hex values used to initialise the board at the start of a game.
6.3 Flowchart demonstrating the process of a human white player attempting to make a move.
6.4 Flowchart demonstrating the validation process that is applied to any attempted move with a pawn.
6.5 An illustration of the process of determining whether the position under test contains a white pawn.
6.6 Demonstration of the use of the text-based input/output system used for early testing purposes. A text file showing the numerical values of each board position was implemented to aid usability.
6.7 Demonstration of the capabilities of the final GUI implementation. Selected friendly squares are highlighted in green, unoccupied attackable squares are highlighted in light blue, and attackable squares occupied by opposing pieces are highlighted in red. Chessboard texture sourced from http://assets.freeprintable.com/images/item/original/blank-chess-board.gif, chesspiece textures sourced from http://www.wpclipart.com/recreation/games/chess/chess_set_symbols.jpg.
6.8 Illustration of the recursive minimax search function traversing the game tree to determine the best possible move.
7.1 Results from profiling the memory usage of the program. The greatest source of memory usage is shown to be the allocation of Chessboard objects. This result is unexpected, as Chessboard objects individually use very little memory and should not be created in substantial enough numbers to use the indicated amount of memory.
7.2 Results from profiling the CPU usage of the program. The greatest source of CPU usage is shown to be Thread.sleep(), which is not used while other operations are ongoing. All logic-heavy operations, such as move validation, use very little CPU time, and so can be considered efficient.

List of Tables

6.1 Classes associated with the program’s framework

1 Introduction

1.1 Background and History of Computer Chess Engines

The development of computer chess engines first began as an effort to improve AI techniques, led by the belief that the only way for a chess engine to attain any noteworthy level of ‘skill’ would be to strengthen the AI driving the engine. Many of the people involved believed that the successful creation of such an AI would irrefutably prove that human thinking can be artificially modelled, since chess was and still is widely regarded as the ultimate game of wits and cunning (Hsu et al., 1990).

The original belief that only improved AI could sufficiently strengthen a chess engine for it to be a formidable opponent has proven false over the years, with the main point of progress being the constant and significant increase in the raw speed of the hardware running chess engines. The most notable example of this is Deep Blue, the computer purpose-built by IBM for playing chess that in 1997 famously defeated the then world chess champion Garry Kasparov. The computer was capable of evaluating 200 million positions per second and regularly searched over 20 ply deep, a feat impossible for any human player (Campbell et al., 2002).
This degree of brute force easily compensated for its inferior tactical and strategic abilities, resulting in the first ever instance of a computer defeating a chess world champion. Contrast this computational power with the first fully-fledged chess computer, written by Alex Bernstein in 1957, which took 3 hours to search 4 ply deep.

A second area in which chess programs have grown significantly more advanced over time is the optimisation of their search algorithms. As these algorithms have improved, playing strength has become less dependent upon raw hardware speed. In the last 10 years, chess programs have become increasingly viable on affordable, commercially available hardware. Modern mobile devices are now capable of playing chess well enough to easily overcome casual players, and where once supercomputers were required to defeat grandmaster chess players, it is likely that even mobile devices will eventually be capable of defeating them.

In 1988, Kasparov was asked if he thought a computer would be able to defeat a chess grandmaster before the year 2000, to which he confidently replied ‘No way’ (Hsu et al., 1990). It is testament to the astonishingly rapid progress of computer chess engines that an expert in the field like Kasparov could be proven so dramatically wrong, with grandmaster Bent Larsen losing to Deep Thought just 10 months later (Simon and Schaeffer, 1990).

Chess is a game of perfect information, a state in game theory where players observe all previous moves and can therefore determine all possible future game-states (Von Neumann and Morgenstern, 2007). The possession of perfect information allows the development of optimal strategies. However, there are far too many possible game-states in chess to compute a solution in a reasonable amount of time, if at all.

The most powerful modern chess engines are all but untouchable to even the strongest human players. The Elo rating (a ranking system used to measure the relative strengths of players in two-player games (Elo, 1978)) required to qualify as a grandmaster is 2400, with the highest ever human Elo rating being 2851, belonging to Kasparov. Comparing this to the rating of the current strongest computer chess engine in the world, RYBKA 4, which has an unconfirmed but estimated 3300 Elo, the difference in playing strength between top humans and top computers becomes apparent. This fact is illustrated in Figure 1.1.

Figure 1.1: Estimated likelihood of winning a game of chess based on the difference in Elo rank between the two players.
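As a point of reference, the standard Elo expected-score formula (Elo, 1978) can be used to translate a rating difference into an expected result. The formula and the worked 450-point example below are a minimal illustration added here; they are not reproduced from Figure 1.1 or from (Moser, 2010), whose curve may be derived differently.

% Standard Elo expected-score formula for a player A facing a player B.
% The 450-point worked example is illustrative only.
\[
  E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}
\]
% For a rating gap of R_B - R_A = 450 (top computer versus top human):
\[
  E_A = \frac{1}{1 + 10^{450/400}} \approx \frac{1}{1 + 13.3} \approx 0.07
\]
% E_A counts draws as half a point, so an expected score of roughly 0.07
% is consistent with the approximately 5% chance of an outright win
% quoted in the caption of Figure 1.1.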