AI as Sport

In 1965 the Russian mathematician Alexander Kronrod said, "Chess is the Drosophila of AI." However, computer chess has developed as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies.

- John McCarthy

Case Study: Chess, Deep Blue's Victory


How Intelligent is Deep Blue?

Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings.

- Drew McDermott

On Game 2

(Game 2: Deep Blue took an early lead. Kasparov resigned, but it turned out he could have forced a draw by perpetual check.)

This was real chess. This was a game any human grandmaster would have been proud of.

- Joel Benjamin, grandmaster, member of the Deep Blue team


Kasparov on Deep Blue

1996: Kasparov Beats Deep Blue

"I could feel --- I could smell --- a new kind of intelligence across the table."

1997: Deep Blue Beats Kasparov

"Deep Blue hasn't proven anything."

Combinatorics of Chess

Opening book
Endgame
• database of all 5 piece endgames exists
• database of all 6 piece endgames being built
Middle game
• branching factor of 30 to 40
• roughly 1,000^(d/2) positions at search depth d (in plies)
  – 1 move by each player = 1,000
  – 2 moves by each player = 1,000,000
  – 3 moves by each player = 1,000,000,000
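The middle-game numbers can be checked with a few lines of arithmetic. This is an illustrative sketch: the branching factor of 32 is an assumed mid-range value, chosen because 32^2 is about 1,000.

```python
# Growth of the chess game tree, assuming a branching factor of
# roughly 30-40 legal moves per position (32 is an assumed value).
branching_factor = 32

for full_moves in range(1, 4):
    plies = 2 * full_moves                 # one move by each player = 2 plies
    positions = branching_factor ** plies  # ~1,000 per full move when b = 32
    print(f"{full_moves} move(s) by each player: ~{positions:,} positions")
```

With b = 32 this prints counts close to the slide's 1,000 / 1,000,000 / 1,000,000,000.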


Positions with Alpha-Beta Pruning

Search Depth   Positions
 2             60
 4             2,000
 6             60,000
 8             2,000,000
10             60,000,000          (<1 second for Deep Blue)
12             2,000,000,000
14             60,000,000,000      (5 minutes for Deep Blue)
16             2,000,000,000,000

History of Search Innovations

Shannon, Turing   Minimax search         1950
Kotok/McCarthy    Alpha-beta pruning     1966
MacHack           Transposition tables   1967
Chess 3.0+        Iterative-deepening    1975
Belle             Special hardware       1978
Cray Blitz        Parallel search        1983
Hitech            Parallel evaluation    1985
Deep Blue         ALL OF THE ABOVE       1987
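Alpha-beta pruning is what makes those position counts reachable: branches that cannot affect the minimax value are skipped. A minimal sketch on an abstract game tree (nested lists whose leaves are evaluation scores); this is an illustration of the algorithm, not Deep Blue's actual implementation.

```python
# Minimax search with alpha-beta pruning on an abstract game tree.
# alpha = best score the maximizer can guarantee so far,
# beta  = best score the minimizer can guarantee so far.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:               # alpha cutoff
                break
        return value

# Classic textbook tree: the minimax value is 3, and pruning skips
# the remaining children of the second min node after seeing 2.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # -> 3
```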


Minimax Game Tree Search

• Independently invented by Shannon (1950) and Turing (1951)
• Evaluation function combines material & position
• Shannon type A: all branches to same depth
• Shannon type B: different branches to different depths
  – Pruning "bad" nodes: doesn't work in practice
  – Extending "unstable" nodes (e.g. after captures): works well in practice

A Note on Minimax

Minimax is obviously correct -- but
• Nau (1982) discovered pathological game trees
• Games where
  – the evaluation function grows more accurate as it nears the leaves
  – but performance is worse the deeper you search!
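The "extend unstable nodes" idea is usually called quiescence search: rather than cutting off at a fixed depth, keep searching while forcing moves (captures) are pending. A sketch in negamax style; the position/move interface (`evaluate`, `capture_moves`, `make_move`) is hypothetical, passed in as functions.

```python
# Quiescence search sketch: extend the search at unstable positions
# by considering only capture moves until the position quiets down.
# Scores follow the negamax convention (each side maximizes its own
# score, so a child's score is negated).

def quiesce(position, alpha, beta, evaluate, capture_moves, make_move):
    stand_pat = evaluate(position)          # static score if we stop here
    if stand_pat >= beta:
        return beta                         # already too good: opponent avoids this
    alpha = max(alpha, stand_pat)
    for move in capture_moves(position):    # only the "unstable" continuations
        score = -quiesce(make_move(position, move), -beta, -alpha,
                         evaluate, capture_moves, make_move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Toy check: with no captures available, quiescence just returns the
# static evaluation (clamped to the alpha-beta window).
score = quiesce(0, -100, 100, evaluate=lambda p: 7,
                capture_moves=lambda p: [], make_move=None)
print(score)  # -> 7
```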


Clustering

Monte Carlo simulations showed clustering is important
• if winning or losing terminal leaves tend to be clustered, pathologies do not occur
• in chess: a position is "strong" or "weak", rarely completely ambiguous!
But still no completely satisfactory theoretical understanding of why minimax is good!

Evaluation Functions

Primary way knowledge of chess is encoded
• material
• position
  – doubled pawns
  – how constrained the position is
Must execute quickly - constant time
• parallel evaluation: allows more complex functions
  – tactics: patterns to recognize weak positions
  – arbitrarily complicated domain knowledge
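A toy evaluation function in the spirit of the slide: material plus a simple positional term (doubled pawns). The board representation (piece lists and pawn counts per file) and the 0.5 positional weight are made up for illustration.

```python
# Toy chess evaluation: material balance plus a doubled-pawn term,
# scored from White's point of view. Representation is illustrative:
# pieces as letter lists, pawn placement as {file: pawn count}.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # king omitted

def doubled_pawn_penalty(pawns_by_file):
    # one penalty point for each extra pawn stacked on a file
    return sum(count - 1 for count in pawns_by_file.values() if count > 1)

def evaluate(white_pieces, black_pieces, white_pawn_files, black_pawn_files):
    material = (sum(PIECE_VALUES[p] for p in white_pieces)
                - sum(PIECE_VALUES[p] for p in black_pieces))
    position = (doubled_pawn_penalty(black_pawn_files)
                - doubled_pawn_penalty(white_pawn_files))
    return material + 0.5 * position   # assumed weight on the positional term

# White is up queen-for-rook but has doubled e-pawns:
print(evaluate(["Q", "P", "P"], ["R", "P", "P"],
               {"e": 2}, {"a": 1, "b": 1}))  # -> 3.5
```

Real programs use hundreds of such terms; the point is only that each must be cheap enough to run in constant time per position.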


Learning better evaluation functions

• Deep Blue: learn by tuning weights on subterms of the evaluation function

  f(p) = w1*f1(p) + w2*f2(p) + ... + wn*fn(p)

• Tune weights to find the best least-squares fit with respect to moves actually chosen by grandmasters in 1000+ games

Transposition Tables

Introduced by Greenblat's Mac Hack (1966)
Basic idea: caching
• once a position is evaluated, save it in a hash table, avoid re-evaluating
• called "transposition" tables, because different orderings (transpositions) of the same set of moves can lead to the same position
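The weight-tuning step can be sketched as an ordinary least-squares fit: given feature values fi(p) for a set of positions and target scores derived from grandmaster play, solve for the weights wi. The two-feature solver and the data below are synthetic, for illustration only.

```python
# Least-squares weight tuning sketch for f(p) = w1*f1(p) + w2*f2(p).
# Solves the normal equations (A^T A) w = A^T y by hand for the
# two-feature case, to keep the example dependency-free.

def lstsq_2(features, targets):
    a11 = sum(f[0] * f[0] for f in features)
    a12 = sum(f[0] * f[1] for f in features)
    a22 = sum(f[1] * f[1] for f in features)
    b1 = sum(f[0] * y for f, y in zip(features, targets))
    b2 = sum(f[1] * y for f, y in zip(features, targets))
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2

# Each row is (f1(p), f2(p)) for one position; targets are the scores
# the tuned function should reproduce.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
targets = [2.0, 3.0, 5.0]
print(lstsq_2(features, targets))  # -> (2.0, 3.0)
```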

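The transposition-table caching idea can be sketched on an abstract game tree: positions reached by different move orders share a key, so each (position, depth) pair is evaluated only once. The toy tree below (integer positions, children n+1 and n+2) is made up to force a transposition.

```python
# Transposition-table sketch: a dict caches search results keyed by
# (position, remaining depth). Leaf evaluations are counted to show
# the saving when a transposition is hit.

table = {}          # (position, depth) -> cached search value
evaluations = 0     # number of leaf evaluations performed

def search(position, depth, children, evaluate):
    global evaluations
    key = (position, depth)
    if key in table:
        return table[key]                  # transposition hit: reuse old result
    if depth == 0 or not children(position):
        evaluations += 1
        value = evaluate(position)
    else:
        value = max(-search(c, depth - 1, children, evaluate)
                    for c in children(position))   # negamax step
    table[key] = value
    return value

# Position 3 is reachable at depth 2 via 0->1->3 and via 0->2->3;
# the second path hits the cache instead of re-evaluating.
result = search(0, 4, children=lambda n: [n + 1, n + 2] if n < 3 else [],
                evaluate=lambda n: n)
print(result, evaluations)
```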

Transposition Tables as Learning

Can be viewed as a simple kind of learning
• positions generalize sequences of moves
• learning on-the-fly
• don't repeat blunders: can't beat the computer twice in a row using the same moves!
Deep Blue: huge transposition tables (100,000,000+ entries), must be carefully managed

Time vs Space

Iterative Deepening
• a good idea in chess, as well as almost everywhere else!
• Chess 4.x, first to play at Master's level
• trades a little time for a huge reduction in space
  – lets you do breadth-first search with (more space efficient) depth-first search
• anytime: good for response-time critical applications
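Iterative deepening can be sketched as repeated depth-limited depth-first searches with a growing limit: memory stays proportional to the depth, but nodes are found shallowest-first, as in breadth-first search. The toy tree below is an assumed adjacency dict.

```python
# Iterative deepening search on an abstract tree. Each pass is a
# complete depth-first search to the current limit, so the procedure
# is "anytime": it can be stopped after any pass with a valid answer.

TREE = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "d": [], "e": []}

def depth_limited(node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                     # cutoff: don't descend further
    for child in TREE[node]:
        path = depth_limited(child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(start, goal, max_depth=10):
    for limit in range(max_depth + 1):  # re-search with a deeper limit each pass
        path = depth_limited(start, goal, limit)
        if path is not None:
            return path
    return None

print(iterative_deepening("a", "e"))  # -> ['a', 'c', 'e']
```

Re-searching shallow levels looks wasteful, but because the tree grows exponentially, the final pass dominates the total cost: that is the "little time for a huge reduction in space" trade.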


Special-Purpose and Parallel Hardware

Belle (Thompson 1978)
Cray Blitz (1993)
Hitech (1985)
Deep Blue (1987-1996)
• Parallel evaluation: allows more complicated evaluation functions
• Hardest part: coordinating parallel search

Deep Blue

Hardware
• 32 general processors
• 220 VLSI chess chips
Overall: 200,000,000 positions per second
• 5 minutes = depth 14
Selective extensions - search deeper at unstable positions
• down to depth 25!


Evolution of Deep Blue

From 1987 to 1996
• faster chess processors
• port to IBM base machine from Sun
  – Deep Blue's non-chess hardware is actually quite slow, in integer performance!
• bigger opening and endgame books
• 1996 differed little from 1997 - fixed bugs and tuned evaluation function!
  – After its loss in 1996, people underestimated its strength!

Power Comparison

[[ Fig. 5.12 R&N ]]


Tactics into Strategy

As Deep Blue goes deeper and deeper into a position, it displays elements of strategic understanding. Somewhere out there mere tactics translate into strategy. This is the closest thing I've ever seen to computer intelligence. It's a very weird form of intelligence, but you can feel it. You can smell it... It feels like thinking.

- Frederick Friedel (grandmaster), Newsday, May 9, 1997

Studying General AI in the Context of Games

The questions behind the compute intensive hypothesis:
• When can/must you use search in place of knowledge?
  – the compute intensive approach
• When can/must you use knowledge in place of search?
  – the knowledge intensive (expert systems) approach


The Role of Knowledge in Compute Intensive Programs

Deep Blue's strength lies in brute force
But - the improved evaluation function from 1996 to 1997 pushed it from impressing the world champion to beating the champion
• Traditional expert systems: a little search on top of a lot of knowledge
• Compute intensive programs: a little knowledge on top of a lot of search

Things that Don't Work

The applicability and limits of many AI techniques can be studied by understanding why they do NOT work for chess:
• Forward pruning
• Pattern recognition (Michalski & Negri 1977)
• Analogical reasoning (de Groot 1965, Levinson 1989)
• Partitioning
Is it time to revisit these techniques in the context of game playing?

