CHINOOK: The World Man-Machine Checkers Champion
AI Magazine Volume 17 Number 1 (1996) (© AAAI)

Jonathan Schaeffer, Robert Lake, Paul Lu, and Martin Bryant

■ In 1992, the seemingly unbeatable World Checker Champion Marion Tinsley defended his title against the computer program CHINOOK. After an intense, tightly contested match, Tinsley fought back from behind to win the match by scoring four wins to CHINOOK's two, with 33 draws. This match was the first time in history that a human world champion defended his title against a computer. This article reports on the progress of the checkers (8 × 8 draughts) program CHINOOK since 1992. Two years of research and development on the program culminated in a rematch with Tinsley in August 1994. In this match, after six games (all draws), Tinsley withdrew from the match and relinquished the world championship title to CHINOOK, citing health concerns. CHINOOK has since defended its title in two subsequent matches. It is the first time in history that a computer has won a human world championship.

Copyright © 1996, American Association for Artificial Intelligence. All rights reserved. 0738-4602-1996 / $2.00

From 1955 to 1965, the most well-known checker-playing program was Samuel's (1967, 1959). This work remains a milestone in AI research. Samuel's program reportedly beat a master and "solved" the game of checkers. Both journalistic claims were false, but they created the impression that there was nothing of scientific interest left in the game (Samuel himself made no such claims). Consequently, most subsequent game-related research turned to chess. Other than a program from Duke University in the 1970s (Truscott 1979), little attention was paid to achieving a world championship–caliber checker program.

The CHINOOK Project started in 1989 as an attempt to better understand heuristic search by using an experimental domain that is simpler than chess. Early success in exhibition games against master and grand-master opposition convinced us that we could build a world-class checker player. After petitioning the American Checker Federation (ACF), CHINOOK was allowed to play in the 1990 U.S. championship. This biennial event attracts the best players in the world, with the winner earning the right to play a match for the world championship.

CHINOOK came in an undefeated second in the tournament behind the world champion, Marion Tinsley. The four individual games between CHINOOK and Tinsley were drawn. This placing earned CHINOOK the right to challenge Tinsley for the world championship, the first time a computer had earned such a right in any game.

Tinsley emerged as a dominant player in the late 1940s. He was touted as the next challenger for the world championship when, in 1950, an elementary blunder cost him the chance to play for the title. After quietly retrenching, Tinsley reemerged as an unbeatable force, winning the world championship in 1955. From 1950 to 1992, Tinsley lost an incredible total of only five games! He played eight matches for the world championship, usually winning each by a large margin. He was as close to perfection as is possible in a human. No one else has dominated any game of skill for such a prolonged period of time.

In December 1990, Tinsley visited Edmonton, Alberta, Canada, to play an exhibition match against CHINOOK. The final score was 1 win for Tinsley, 0 for CHINOOK, and 13 draws. It was the closest anyone had come to defeating Tinsley in a match in over 40 years.

The ACF and the English Draughts Association (EDA) refused to sanction the CHINOOK-Tinsley world championship match, stating they did not want a computer playing for a human title. Tinsley confided in us that he was bored playing humans because they were no opposition to him. He relished the opportunity to play the computer because of its aggressive style and willingness to take chances.
When the ACF and the EDA refused to budge on this issue, Tinsley resigned his world championship title and then immediately signed an agreement to play the CHINOOK match. The ACF and the EDA hastily rethought their position and created a new man-machine world championship: the best human against the best computer. Because Tinsley was the undisputed human champion and CHINOOK had properly earned the right to challenge, the new man-machine title was a convenient way to keep the checker federations happy.

In August 1992, CHINOOK and Tinsley finally did battle (Schaeffer, Treloar, et al. 1993). The match was held in London, England, and was sponsored by Silicon Graphics, Inc. After initially falling behind, Tinsley rebounded and emerged victorious, scoring 4 wins to CHINOOK's 2, with 33 draws. The two CHINOOK wins represented only the sixth and seventh times that Tinsley had lost in the previous 42 years.

Although Tinsley had a close call in London, he was eager to play the computer again. A rematch was set for August 1994.

CHINOOK 1994

Within a few weeks of the end of the 1992 CHINOOK-Tinsley match, preparations began for the rematch. CHINOOK's performance consists of four aspects (Schaeffer, Lu, et al. 1993; Schaeffer et al. 1992): (1) search, algorithms for traversing through the O(10^20) search space; (2) evaluation function, decisions about how good a position is; (3) end-game databases, perfect information on which end-game positions are won, lost, and drawn; and (4) opening book, a database of opening lines to start a game. Compared to the program that played in 1992, important advances had been made to the program in each of these areas.

Search

CHINOOK uses a parallel alpha-beta search algorithm with all the latest enhancements (Lu 1993). CHINOOK 1992 had an average minimum search depth of 19 ply, excluding search extensions (one ply equals one move by one player). CHINOOK 1994 ran on a 16-processor Silicon Graphics CHALLENGE computer (150-megahertz processors) with 1 gigabyte of random-access memory (RAM). The machine had twice as many processors, each five times faster than in 1992, and four times as much RAM. The improved hardware allowed the program to search a minimum of an additional two ply deeper.
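CHINOOK's parallel searcher is far more elaborate than anything shown here, but the underlying alpha-beta framework can be sketched in a few lines. The sketch below is illustrative only: the `children` and `evaluate` callbacks and the toy game tree are hypothetical stand-ins, not CHINOOK's move generator or evaluation function.

```python
# Minimal fixed-depth alpha-beta search (illustrative sketch only).
# children(node) -> successor positions; evaluate(node) -> heuristic score
# from the maximizing player's point of view.

def alpha_beta(node, depth, alpha, beta, children, evaluate, maximizing):
    kids = children(node)
    if depth == 0 or not kids:          # horizon or terminal position
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta,
                                        children, evaluate, False))
            alpha = max(alpha, best)
            if alpha >= beta:           # cutoff: opponent avoids this line
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alpha_beta(child, depth - 1, alpha, beta,
                                        children, evaluate, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Toy two-ply game tree; leaf scores are arbitrary illustrative values.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": -1, "b2": 9}

value = alpha_beta("root", 2, float("-inf"), float("inf"),
                   lambda n: tree.get(n, []), lambda n: scores.get(n, 0),
                   True)
print(value)  # 3
```

Note that leaf b2 is never evaluated: once b1 refutes branch "b", the cutoff fires. That pruning is what lets a 19-ply minimum search be feasible at all.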
As has been seen with chess programs, deeper search in checkers translates into stronger play. Some studies in chess have indicated that there might be a linear relationship between search depth and performance (Thompson 1982). Experiments in CHINOOK show that there comes a point where increased search depth provides diminishing returns (Schaeffer, Lu, et al. 1993). The data suggest that CHINOOK has already reached the point where little is to be gained by deeper searching; improved performance must come through knowledge enhancements rather than machine speed.

Extensive work was done to improve the quality of the search tree built by CHINOOK. We have seen positions that Tinsley can correctly analyze, even though the solution requires over 60 ply of search! Fixed-depth alpha-beta search is woefully inadequate here. As in chess, the alpha-beta framework has been modified to selectively extend interesting lines of play (search extensions) and to curtail effort on clearly inferior lines of play (forward pruning). A variety of static (heuristic) and dynamic (based on properties of the search tree) techniques were used to create a more intelligent searcher.

One of the major problems in chess and checker programs is deciding when it is appropriate to stop analyzing a line of play and apply the evaluation function. Ideally, the evaluated position should be quiescent; in other words, it should be relatively stable, with no surprises waiting around the corner. Instrumenting the search algorithm allowed us to identify positions where a heuristic evaluation was likely to make a gross misassessment as a result of hidden tactics. A program was created to generalize the features of these positions, to search for any commonalities. The results were surprising: One type of tactical combination accounted for roughly 50 percent of the errors.
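The quiescence idea in the preceding paragraph can be sketched as a capture-resolution search: do not trust the static evaluation while forced jumps are pending. This is our sketch, not CHINOOK's code; `captures(pos)` and `evaluate(pos)` are hypothetical helpers standing in for jump generation and the heuristic evaluation.

```python
# Quiescence sketch at the search horizon (illustrative only).
# captures(pos) -> successor positions reachable by forced jumps;
# evaluate(pos) -> static heuristic score from the root side's view.

def quiesce(pos, captures, evaluate, maximizing):
    jumps = captures(pos)
    if not jumps:                       # quiet: heuristic is trustworthy
        return evaluate(pos)
    results = [quiesce(nxt, captures, evaluate, not maximizing)
               for nxt in jumps]        # resolve every pending jump first
    return max(results) if maximizing else min(results)

# Toy line: statically p0 looks level, but a forced exchange ends a net +1.
jump_map = {"p0": ["p1"], "p1": ["p2"], "p2": []}
static = {"p0": 0, "p1": -1, "p2": 1}
resolved = quiesce("p0", lambda p: jump_map[p], lambda p: static[p], True)
print(resolved)  # 1
```

A static evaluation of p0 would report 0; resolving the pending jumps reveals the net gain. Positions like p0 are exactly the hidden-tactic misassessments the instrumentation was built to find.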
A further two types accounted for an additional 25 percent of the cases. These latter two features were easily accounted for by adding simple patterns to the evaluation function.

The dominant source of error involved a so-called two-for-one combination. Here, the key move would result in a checker being captured, but a double jump ensued for a net gain of one checker. Hard coding the knowledge required to capture one instance of this pattern was easy. However, generalizing this knowledge to cover all the possible scenarios and exceptions proved to be extremely difficult and error prone. To properly identify all the conditions necessary for the combination involved consideration of 28 of the 32 squares on the board. The alternative to using a set of static rules was to perform dynamic search. Given that the majority of evaluated positions do not have the two-for-one pattern

The addition of the tactical table and the knowledge components were thoroughly tested for utility and completeness. To our surprise, this testing uncovered a number of (glaring) errors in the code and some important gaps in the program's knowledge. Some of these errors could have been fatal in a game. One of the best-worst properties of the alpha-beta algorithm is that its maximum-minimum behavior hides evaluation-function errors.
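In material terms, the two-for-one combination described above trades one checker for two. The toy accounting below (ours, not from the article) shows the arithmetic that a short dynamic capture search settles automatically, in contrast to static rules spanning 28 squares.

```python
# Toy material accounting for a forced-jump exchange (illustrative only).
# Each entry is (side_that_jumps, checkers_captured); the swing is summed
# from our point of view.

def net_gain(exchange):
    swing = 0
    for side, taken in exchange:
        swing += taken if side == "us" else -taken
    return swing

# Two-for-one: the opponent jumps one of our checkers, then our double
# jump answers by taking two.
two_for_one = [("them", 1), ("us", 2)]
print(net_gain(two_for_one))  # 1 -> a net gain of one checker
```

Because jumps are forced in checkers, resolving such an exchange dynamically needs only a few ply of capture search, and it is skipped entirely in the majority of positions where no capture is pending.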