
Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, pp. 511–518. DOI: 10.15439/2014F33. ACSIS, Vol. 2.

A Comparison between Different Chess Rating Systems for Ranking Evolutionary Algorithms

Niki Veček∗, Matej Črepinšek∗, Marjan Mernik∗ and Dejan Hrnčič†
∗Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia
Email: {niki.vecek, marjan.mernik, matej.crepinsek}@uni-mb.si
†Adacta d.o.o., Maribor, Slovenia
Email: [email protected]

Abstract—The Chess Rating System for Evolutionary Algorithms (CRS4EAs) is a novel method for comparing evolutionary algorithms which evaluates and ranks algorithms using the formulae of the Glicko-2 chess rating system. It was empirically shown that CRS4EAs is comparable to the standard method for comparing algorithms - null hypothesis significance testing. The following paper examines the application of chess rating systems beyond Glicko-2. The results of 15 evolutionary algorithms on 20 minimisation problems obtained using the Glicko-2 system were empirically compared to the Elo rating system, the Chessmetrics rating system, and the German Evaluation Number (DWZ). The results of the experiment showed that Glicko-2 is the most appropriate choice for evaluating and ranking evolutionary algorithms. Whilst the main benefit of the other three systems is their simple formulae, the ratings in Glicko-2 proved to be more reliable, the detected significant differences are supported by confidence intervals, the inflation or deflation of ratings is easily detected, and the weight of individual results is set dynamically.

Index Terms—chess rating system, ranking, evolutionary algorithms comparison, Glicko-2, Elo, Chessmetrics

I. INTRODUCTION

A METHOD for comparing algorithms is needed for determining whether one algorithm performs better than another. As numerous effective evolutionary algorithms are appearing, a comparison with only one algorithm is no longer sufficient. This leads to the need for determining which of multiple algorithms is better than the others. Which of them is the best and which the worst? A well-established method for comparing the experimental results of multiple evolutionary algorithms is Null Hypothesis Significance Testing (NHST) [22]. Whilst there are many variants of NHST, there are still some pitfalls regarding statistics and its application [2], [7], [13], [21], which imply that this field still needs attention. A novel method, the Chess Rating System for Evolutionary Algorithms (CRS4EAs) [30], suggests using a chess rating system for evaluating the results and ranking the algorithms. CRS4EAs treats (i) evolutionary algorithms as chess players, (ii) one comparison between two algorithms as one game, and (iii) the execution and evaluation of pairwise comparisons between all algorithms participating in the experiment as a tournament. Just like the standard comparison of two algorithms, one game in CRS4EAs can have three outcomes: the first algorithm is better (and therefore wins), the second algorithm is better (and therefore wins), or they perform equally with regard to a predefined accuracy ε (they play a draw). It has been empirically shown that CRS4EAs is comparable to NHST, and can be used as a comparative method for evolutionary algorithms [30].
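As an illustration of how the outcome of a single CRS4EAs "game" described above might be decided, the following is a minimal Java sketch (Java being the language of the EARS framework). The class, method, and parameter names are hypothetical and do not correspond to the actual EARS API; the sketch simply compares the best fitness values reached by two algorithms on one minimisation problem and declares a draw when they differ by less than the accuracy ε.

```java
// Illustrative sketch only: decides the outcome of one CRS4EAs "game" between two
// algorithms on a single minimisation problem. The names are hypothetical, not EARS API.
public final class GameOutcome {

    public enum Result { WIN, LOSS, DRAW }

    /**
     * Compares the best fitness values reached by two algorithms on the same problem.
     * For minimisation a smaller fitness is better; results closer than the predefined
     * accuracy epsilon count as a draw.
     */
    public static Result play(double fitnessA, double fitnessB, double epsilon) {
        if (Math.abs(fitnessA - fitnessB) < epsilon) {
            return Result.DRAW;                              // algorithms perform equally within epsilon
        }
        return fitnessA < fitnessB ? Result.WIN : Result.LOSS;  // outcome for algorithm A
    }

    public static void main(String[] args) {
        // Example: A reaches 1e-8, B reaches 3e-8, accuracy epsilon = 1e-7 -> draw.
        System.out.println(play(1e-8, 3e-8, 1e-7));
    }
}
```

In CRS4EAs such pairwise outcomes over all problems and all participating algorithms are then aggregated into a tournament, as described above.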
The CRS4EAs method is used within the open-source framework Evolutionary Algorithms Rating System (EARS) [8], [9]. CRS4EAs and EARS were developed to provide fairer and easier to understand comparisons between evolutionary algorithms. All the experiments in EARS are executed on the same number of optimisation problems, and the algorithms are written in the same programming language (Java), have the same termination criteria, are initialised with the same random seed, and are executed under the same hardware configuration. Hence, some factors that could affect the final results of the experiment were excluded [30]. CRS4EAs uses the Glicko-2 chess rating system [18], since it is one of the newest and has many properties that look promising. In this paper the Glicko-2 rating system is compared to three other better-known and well-established rating systems: Elo, Chessmetrics, and the German Evaluation Number (DWZ). In order to compare these four rating systems, an experiment was conducted with 15 evolutionary algorithms on 20 minimisation problems. The analysis showed that for comparing evolutionary algorithms Glicko-2 was the most appropriate choice. One downside of Glicko-2 is its complicated formulae, the understanding of which requires mathematical and statistical knowledge. The differences amongst players are more straightforward in the other three systems; however, they are not supported by any reliable measurements - they are arbitrary. Otherwise, Glicko-2 was shown to be more reliable: the detected significant differences are supported by a confidence interval (a straightforward measurement of rating reliability), the control of conservativeness/liberality is more correct, the weights of individual results are set dynamically, improvement over time is considered in the final results, the inflation or deflation of ratings is easily detected, and selective pairing is not an issue. This paper presents the reasons why the first choice of rating system for CRS4EAs was Glicko-2.

The paper is structured as follows. Section II summarises four of the more popular chess rating systems; the formulae used in these systems are adapted for EARS and used during the experiment. The CRS4EAs method and the experiment are introduced in Section III, followed by a detailed analysis of the obtained results. Section IV concludes the paper.

II. BACKGROUND

Chess is a strategic game between two players with three possible outcomes: the first player can win, the first player can lose, or the players can play a draw. Usually, the absolute strength of a chess player is described by a number called a 'rating'. A player's rating is updated after each tournament the player participates in, and each chess organisation has its own rating system with formulae that evaluate its players. In this section the more common chess rating systems are introduced. All the players are represented on a leaderboard, from best to worst, and although there are different formulae behind updating the players' ratings, all of them have two things in common: a player's rating is always a positive integer, and the player with the highest rating value is expected to be the best. A player joins a tournament with $k$ opponents, in which the $i$th player has a rating $R_i$ and plays $m$ games.

A. Elo

The best-known chess rating system is the Elo rating system [10], where the expected score of the $i$th player against the $j$th player is calculated using the formula in Eq. 1.

$$E(R_i, R_j) = \frac{1}{1 + 10^{(R_j - R_i)/400}} \qquad (1)$$

The expected score of the $i$th player against the $j$th player is the probability of $i$ defeating $j$. Hence, the sum of the expected scores of the $i$th and $j$th players (against each other) equals 1. The score the $i$th player gained against the $j$th player is denoted by $S_{i,j}$ and equals 1 if the $i$th player won, 0 if the $i$th player lost, or 0.5 for a draw. All the ratings are updated at the end of a tournament using the formula from Eq. 2, where the new rating of the $i$th player is denoted by $R'_i$.

$$R'_i = R_i + K \sum_{j=1}^{m} \left( S_{i,j} - E(R_i, R_j) \right) \qquad (2)$$

The K-factor is a constant that affects how strongly the difference between the actual score and the expected score is weighted. The USCF (United States Chess Federation) rating system implements the K-factor by dividing 800 by the sum of the effective number of games a player's rating is based on ($N_e$) and the number of games the player completed during a tournament ($m$) [17]. Even though the Elo system is famous for its simplicity and wide usage, it has a few drawbacks, such as properly setting the K-factor and an inaccurate distribution […]
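To make Eqs. 1 and 2 concrete, the following is a minimal Java sketch of the Elo expected score and the end-of-tournament update. The names are hypothetical and not taken from the EARS implementation; the K-factor in the example follows the USCF-style choice K = 800/(N_e + m) mentioned above purely as an illustration.

```java
// Minimal illustrative sketch of the Elo formulas in Eqs. 1 and 2; the class and method
// names are hypothetical and do not correspond to the EARS implementation.
public final class Elo {

    /** Expected score of player i (rating ri) against player j (rating rj), Eq. 1. */
    public static double expectedScore(double ri, double rj) {
        return 1.0 / (1.0 + Math.pow(10.0, (rj - ri) / 400.0));
    }

    /**
     * End-of-tournament update (Eq. 2): scores[j] is the result against the jth opponent
     * (1 win, 0.5 draw, 0 loss) and opponentRatings[j] is that opponent's rating.
     */
    public static double updatedRating(double ri, double k,
                                       double[] scores, double[] opponentRatings) {
        double sum = 0.0;
        for (int j = 0; j < scores.length; j++) {
            sum += scores[j] - expectedScore(ri, opponentRatings[j]);
        }
        return ri + k * sum;
    }

    public static void main(String[] args) {
        // USCF-style K-factor: 800 divided by (effective number of games Ne + games m).
        double ne = 20, m = 3, k = 800.0 / (ne + m);
        double[] scores = {1.0, 0.5, 0.0};
        double[] opponents = {1400, 1500, 1700};
        System.out.println(updatedRating(1500, k, scores, opponents));
    }
}
```

The same expected-score function reappears in DWZ (Section II-C), which shares Eq. 1 and differs only in how the update step is scaled.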
B. Chessmetrics

[…] $R_{per}$ is the performance rating, calculated as

$$R_{per} = \frac{1}{k}\sum_{j=1}^{k} R_j + \left( \frac{1}{m}\sum_{j=1}^{m} S_{i,j} - 0.5 \right) \cdot 850$$

with the meaning that each 10% increase in percentage score corresponds to an 85-point advantage in the ratings [27].

C. German Evaluation Number (DWZ)

The simplest and one of the first rating systems was the Ingo rating system [20] by Anton Hoesslinger (1948), which has influenced many other rating systems, including the Deutsche Wertungszahl (DWZ) [6]. DWZ is similar to the Elo rating system of FIDE (the World Chess Federation) but has improved in its own way since 1990, when it was first introduced. The expected score in DWZ is calculated using the same formula as the expected score in the Elo system (Eq. 1), whilst the rating is updated using the formula in Eq. 4.

$$R'_i = R_i + \frac{800}{D + m} \sum_{j=1}^{m} \left( S_{i,j} - E(R_i, R_j) \right) \qquad (4)$$

$D$ is the development coefficient (Eq. 5), dependent on the fundamental value $D_0$ (Eq. 6), the acceleration factor $a$ (Eq. 7), and the breaking value $b$ (Eq. 8).

$$D = a D_0 + b, \qquad 5 \le D \le \begin{cases} \min(30, 5i) & \text{if } b = 0 \\ 150 & \text{if } b > 0 \end{cases} \qquad (5)$$

$$D_0 = \left( \frac{R_i}{1000} \right)^4 + J \qquad (6)$$

The coefficient $J$ differs according to the age of the player - the older the player, the bigger the $J$. The acceleration factor $a$ (Eq. 7) cannot be higher than 1 or lower than 0.5, and is calculated only if a player younger than 20 years achieved more points than expected; otherwise $a$ equals 1. The breaking value $b$ (Eq. 8) is computed only if a player with a rating under 1300 achieved fewer points than expected; otherwise $b$ equals 0.

$$a = \frac{R_i}{2000} \qquad (7)$$

$$b = e^{(1300 - R_i)/150} - 1 \qquad (8)$$
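As a worked illustration of the DWZ update in Eqs. 4-8, the sketch below computes the development coefficient and the new rating, reusing the Elo expected score from Eq. 1. The names are hypothetical and simplified rather than the EARS implementation; in particular, the age coefficient J and the conditions under which a and b apply are passed in as given values, since their exact tables and rules are not reproduced here.

```java
// Illustrative sketch of the DWZ update (Eqs. 4-8); names are hypothetical and simplified.
public final class Dwz {

    /** Expected score, shared with the Elo system (Eq. 1). */
    static double expectedScore(double ri, double rj) {
        return 1.0 / (1.0 + Math.pow(10.0, (rj - ri) / 400.0));
    }

    /** Fundamental value D0 (Eq. 6); the age coefficient J is supplied by the caller. */
    static double fundamentalValue(double ri, double j) {
        return Math.pow(ri / 1000.0, 4) + j;
    }

    /** Acceleration factor a (Eq. 7), clamped to [0.5, 1]; applies only to young over-performers. */
    static double acceleration(double ri, boolean youngOverperformer) {
        return youngOverperformer ? Math.max(0.5, Math.min(1.0, ri / 2000.0)) : 1.0;
    }

    /** Breaking value b (Eq. 8); applies only to under-performers rated below 1300. */
    static double breakingValue(double ri, boolean lowRatedUnderperformer) {
        return lowRatedUnderperformer ? Math.exp((1300.0 - ri) / 150.0) - 1.0 : 0.0;
    }

    /** Development coefficient D (Eq. 5), bounded as in the text; i as it appears in Eq. 5. */
    static double developmentCoefficient(double a, double d0, double b, int i) {
        double d = a * d0 + b;
        double upper = (b == 0.0) ? Math.min(30.0, 5.0 * i) : 150.0;
        return Math.max(5.0, Math.min(upper, d));
    }

    /** New rating R'_i (Eq. 4) after m games with the given scores and opponent ratings. */
    static double updatedRating(double ri, double d, double[] scores, double[] opponents) {
        double sum = 0.0;
        for (int g = 0; g < scores.length; g++) {
            sum += scores[g] - expectedScore(ri, opponents[g]);
        }
        return ri + 800.0 / (d + scores.length) * sum;
    }

    public static void main(String[] args) {
        double ri = 1500, j = 10;                          // J = 10 is an assumed example value
        double d0 = fundamentalValue(ri, j);
        double d = developmentCoefficient(acceleration(ri, false), d0, breakingValue(ri, false), 1);
        double[] scores = {1.0, 0.5, 0.0};
        double[] opponents = {1400, 1500, 1700};
        System.out.println(updatedRating(ri, d, scores, opponents));
    }
}
```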
D. Glicko-2

One of the main concerns about the Elo system is the possibility of a player winning a game and losing rating points, or losing a game and gaining rating points. […]