

Article

Identification of Players Ranking in E-Sport

Karol Urbaniak 1, Jarosław Wątróbski 2,* and Wojciech Sałabun 1,*

1 Research Team on Intelligent Decision Support Systems, Department of Artificial Intelligence Methods and Applied Mathematics, Faculty of Computer Science and Information Technology, West Pomeranian University of Technology in Szczecin, ul. Żołnierska 49, 71-210 Szczecin, Poland; [email protected]
2 Department of Information Systems Engineering, Faculty of Economics, Finance and Management, University of Szczecin, Mickiewicza 64, 71-101 Szczecin, Poland
* Correspondence: [email protected] (J.W.); [email protected] (W.S.); Tel.: +48-91-449-5580 (W.S.)

Received: 20 August 2020; Accepted: 22 September 2020; Published: 27 September 2020

Abstract: Human activity is moving steadily to virtual reality. More and more people from all over the world share a growing fascination with e-sport. In practice, e-sport is a type of sport in which players compete using computer games. The competitions in games such as FIFA, Dota 2, and Counter-Strike are prestigious tournaments with a global reach and budgets of millions of dollars. At the same time, reliable player ranking is a critical issue in both classic sport and e-sport. For example, the “Golden Ball” is the most valuable individual prize in the history of football, and the entire football world wants to know who the best player is. The position of each player in a ranking depends on the assessment of his skills and predispositions. In this paper, we study the identification of a player evaluation and ranking model obtained using a multiple-criteria decision-making method called the Characteristic Objects METhod (COMET), on the example of the popular game Counter-Strike: Global Offensive (CS: GO). We present a range of advantages of the player evaluation model created using the COMET method and thereby demonstrate the practicality of using multi-criteria decision analysis (MCDA) methods to build multi-criteria assessment models in emerging areas of e-sport. Thus, we provide a methodical and practical background for building a decision support system engine for the evaluation of players in several eSports.

Keywords: e-sport; ranking; COMET method

Appl. Sci. 2020, 10, 6768; doi:10.3390/app10196768; www.mdpi.com/journal/applsci

1. Introduction

Sport has always played an essential role in every culture and still does today. Everybody knows conventional sports, such as football, volleyball, and basketball, but new sports keep appearing and expanding in popularity. One of them is Electronic Sports, also known as eSports or e-sports [1]. The history of e-sport began at the beginning of the 1990s. During that decade, it became more and more popular, and the number of players increased significantly [2–5]. E-sport is a type of sport in which players compete in computer games [6,7]. The players’ activities are constrained only by the virtual environment in which they take place [3]. E-sport is exciting entertainment for many fans, but it is also a source of income for professional players and entire e-sport organizations. Professional players usually belong to e-sport organizations and represent their teams in a variety of tournaments, events, and international championships [2–4,8]. Competition takes place online or over so-called local area networks (LAN). Most encounters take place at LAN events, where computers are connected within one building, which allows for lower in-game latency between gamers [2,6,8–10]. In e-sports, viewership is crucial. The gameplay should be designed to attract and emotionally engage as many observers as possible.

E-sport is a lifestyle for computer gamers. It has become a real career path in which one can start, develop, and build a future. Many people still view e-sport very conservatively, regarding it as something trivial and frivolous. Yet, while some people do not take it seriously, spectator count records, as well as prize pool records, are regularly broken during major tournaments, with millions watching Counter-Strike: Global Offensive (CS: GO) [11]. E-sport is full of opportunities, awards, and travel, but it also requires great sacrifice, and it is incredibly demanding to reach a world-class level [1]. In practice, it looks like a full-time job: players usually train 8 hours a day or more, using the computer as a tool to achieve success in a new field. To become a professional, a player has to work hard without excuses. A player is considered professional when he is hired by an organization that pays for his work representing that entity by appearing at events, mostly official tournaments at a national or international level [8]. E-sport has become an area that requires so much precision that even milliseconds determine whether one wins or loses. Specialized skills, such as hand-eye coordination, muscle memory, or reaction time, as well as strategic and tactical in-game knowledge, increase the chance of success in this area [12]. Hand-eye coordination is the ability of the vision system to coordinate the information received through the eyes to control, guide, and direct the hands in the accomplishment of a given task, such as handwriting or catching a ball [13]. The aim of e-sports is to defeat other players.
It could be done by neutralizing them or, just like in sports games, by racing as fast as possible to cross the finish line before your opponents. In addition, the win may be achieved by scoring the most points [2,3]. One of the most popular genres of eSports games is the First-Person Shooter (FPS) [2,6,8,14]. The virtual environment of the game is approached from the perspective of the avatar; the only parts of the avatar visible on the screen are the hands and the weapon they handle [2]. Counter-Strike is an FPS multiplayer game created and released by Valve Corporation and Hidden Path Entertainment [5,6]. There were many other versions of the game, which did not achieve much success. Valve realized how popular e-sport had become and created the new Counter-Strike game we play today, wholly tailored for competition, known as CS: GO. The rules of CS: GO are uncomplicated. There are two teams in the game: terrorists (T) and counter-terrorists (CT). Each team aims to eliminate the opposing team or to perform a specific task. The first team’s target is to plant the bomb and let it explode, while the second’s is to prevent the bomb from being planted and/or exploding. Additionally, the game consists of 30 rounds, where each lasts about 2 min. After 15 rounds, players switch sides. The team that first wins 16 rounds is the winner. When the game does not end within 30 rounds, it goes to overtime, which consists of a best of six rounds, three on each side. The team that gets to 4 overtime rounds wins. If there is another draw, the same rule applies until a winner is found [4,8]. The team’s economy is concerned with the amount of money that everybody on the team has pooled cooperatively in order to buy new weapons and equipment. Winning a round by eliminating the entire enemy team provides the winners with USD 3250 per player, plus USD 300 if the bomb is planted by a terrorist.
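These round-reward rules, together with the loss-side rules described next, can be sketched as a small function. This is an illustrative sketch, not the game's actual implementation; in particular, the default loss-streak value passed in is an assumed example figure, since the text quotes only the USD 800 plant bonus paid on top of it.

```python
MONEY_CAP = 16_000  # per-player money limit in competitive matches [15]

def round_reward(side, outcome, bomb_planted=False, loss_streak_value=1400):
    """Per-player round reward (USD) following the rules described here.

    side: "T" or "CT"; outcome: "win_elimination", "win_time" (CT only),
    "win_objective" (defusal for CT, detonation for T), or "loss".
    The default loss_streak_value is an assumed example figure.
    """
    if outcome == "win_elimination":
        # USD 3250, plus USD 300 if the bomb was planted by a terrorist
        return 3250 + (300 if side == "T" and bomb_planted else 0)
    if outcome == "win_time":
        return 3250          # CT-side win by time expiry
    if outcome == "win_objective":
        return 3500          # defusal (CT) or detonation (T)
    # losses
    if side == "T" and not bomb_planted:
        return 0             # T ran out of time without a plant: no prize
    if side == "T" and bomb_planted:
        return loss_streak_value + 800   # plant bonus on a lost round
    return loss_streak_value             # assumed CT loss payout

def add_money(balance, reward):
    """Apply a reward, respecting the USD 16,000 per-player cap."""
    return min(balance + reward, MONEY_CAP)
```

For instance, a terrorist team that wins by elimination after planting receives 3250 + 300 = 3550 per player, while a player already holding USD 15,000 who earns 3500 is capped at 16,000.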
Winning by time on the counter-terrorist side rewards players with USD 3250, and winning the round with a defusal (CT) or detonation of the bomb (T) rewards USD 3500. However, if the terrorists run out of time before killing all the opponents or planting the bomb, they do not receive any cash prize. If a round is lost on the T-side but the team still manages to plant the bomb, it is awarded USD 800 in addition to the current round-loss streak value. The money limit for each individual player in competitive matches is equal to USD 16,000 [15]. For gamers, the foundation of e-sports is the glory of winning, the ability to evoke excitement in people, and the privilege of being perceived as one of the best players in the world [2,8]. In the past, players had to bring their own equipment to LAN events, having fun in a hermetically sealed community, where they could eventually win small cash prizes or gadgets. Now, these players are winning prize pools of over USD 500 thousand, performing on big stages full of cameras and audiences [1]. The increase in the popularity of e-sports was not only impressive but also led many business people, large corporations, and television companies to become interested in this dynamically developing market [8]. E-sport teams are often headed by traditional sports organizations and operated by traditional sports media. Tournaments are organized by conventional sports leagues, highlighting the growing connections between classical sport and e-sport [16]. In recent years, e-sport has become one of the fastest-growing forms of new media, driven by the development of game-broadcasting technologies [7,17]. E-sport and computer gaming have entered the mainstream, transforming into a convenient form of entertainment. In 2019, 453.8 million people watched e-sport worldwide, an increase of about 15% compared to 2018, consisting of 201 million regular and 252 million occasional viewers.
Between 2019 and 2023, total e-sport viewership is expected to increase by 9% per year, from 454 million in 2019 to 646 million in 2023. In six years, the number of watchers will thus almost double compared to the 335 million recorded in 2017. In the current economic situation, global revenues from e-sport may reach USD 1.8 billion by 2022, or even an optimistic USD 3.2 billion. Hamari, in Reference [3], claims that with the development of e-sport, sport is becoming a computer-mediated form of media and information technology. Therefore, e-sport is a fascinating subject of research in the field of information technology. Accurate player ranking is a crucial issue in both classic sport [18] and e-sport [19,20]. The result of a classification, calculated based on wins and losses in a competitive game, is often considered an indicator of a player’s skills [20]. Each player’s position in the ranking is strictly determined by their abilities, predispositions, and talent in the represented discipline [16]. However, there is more than just statistics to prove a player’s value and ability. Many professional players play a supporting role in their teams, for whom winning even a single round is a priority. What matters first and foremost is the team’s victory, not the ambitions of individual players. The team members have to work collectively, like one organism, and everyone has to cooperate to achieve the team’s success and the best possible results [21]. That is why the creation of an accurate player ranking is a problematic issue. In this paper, we identify a model to generate a ranking of players in the popular e-sport game Counter-Strike: Global Offensive (CS: GO) using the Characteristic Objects METhod (COMET). The obtained ranking is compared to Rating 2.0, which is the most popular ranking for the CS: GO game [22,23]. This case study facilitates the application of COMET in a new field.
The COMET is a novel method of identifying a multi-criteria expert decision model to solve decision problems based on a rule set, using elements of the theory of fuzzy sets [23,24]. Unlike most available multi-criteria decision analysis (MCDA) methods, COMET is completely free of the rank reversal problem. The advantages of this technique are both an intuitive dialogue with the decision-maker and the identification of a complete model of the modeled area, which is a vital element in the application of the proposed approach as a methodological and algorithmic engine in the area of computer games and, more specifically, e-sport. The most important methodological contribution is the analysis of the significance of individual inputs and outputs, which enables the analysis of the dependence of the results on individual input data. Similarly to the Analytic Hierarchy Process (AHP) method, this serves as a means of extended decision analysis, explaining what influence particular aspects had on the final result. The Spearman correlation coefficient is used to measure the input-output dependencies, which extends the COMET technique with new interpretative possibilities. This is significant because the COMET method itself does not use any significance weights; the proposed approach makes it possible to estimate such weights.
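As a reminder of how the Spearman rank correlation behaves, a minimal pure-Python sketch (with average ranks for ties, and assuming non-constant inputs) might look as follows. This is illustrative only, not the authors' implementation; in practice a library routine such as scipy.stats.spearmanr would typically be used.

```python
def rank(values):
    """Return 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current tie group
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks matter, any monotone relationship between an input and the model output yields a coefficient of 1 (or −1), which is exactly what makes the coefficient suitable for measuring input-output dependencies.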

The justification of the undertaken research has both theoretical and practical dimensions. MCDA methods themselves have proved to be powerful tools for solving different practical problems [25,26]. In particular, the construction of assessment models and rankings using MCDA methods is extensively discussed in the literature [27–30]. Examples of decision-making problems successfully solved with the use of different multi-criteria methods include the assessment of the environmental effects of marine transportation [31], innovation [32,33], sustainability [34,35], the evaluation of renewable energy sources (RES) investments [36,37], broad environmental investment assessment [38], industrial [39] and personnel assessment [40], assessment of preventive health effects [41], or even the evaluation of medical therapy effects [42,43]. It is also worth noticing, as additional motivation for this research, that MCDA methods have already shown their utility in building assessment models in traditional sports. For instance, an MCDA-based evaluation of soccer players was conducted in Reference [44], while the Choquet method was used to evaluate the performance of sailboats [45]. A Preference Ranking Organization METHod for Enrichment Evaluation II (PROMETHEE II)-based evaluation model of football clubs was proposed in Reference [46], while the application of an AHP/SWOT model in sport marketing was presented in Reference [47]. An MCDA-based, multi-stakeholder perspective was adopted in the evaluation of national-level sport organizations in Reference [48]. Both the examples provided and the state of the art presented in Reference [49] clearly show the critical role of MCDA methods in building assessment models and rankings in the field of sport.
When we analyze the area of e-sport, in addition to the dominant trends, including economic [50], sociological [3,51], psychological [52], or conversion-oriented and user experience (UX) research [53], we observe attempts to use quantitative methods in the search for algorithmic engines of digital products and games. For example, ex-post surveys and a statistics-based approach were used to manage the health of eSport athletes [54]. The personal branding of eSports athletes was evaluated in Reference [55]. In Reference [56], streaming-based performance indicators were developed, and players’ behavior and performance were assessed. Research focused on win/loss prediction in multi-player games was conducted in Reference [57]. A study aimed at identifying the biometric features contributing to the classification of particular skills of players was presented in Reference [58]. So far, only one example of MCDA-based method usage in e-sport player selection and evaluation has been proposed [58]. Its authors indicate the appropriateness of fuzzy MCDA in the domain of e-sport player selection and assessment. The above literature studies show a distinct research gap, namely the limited application of MCDA in the e-sport domain. Besides, the paper addresses the following essential theoretical and practical research gaps:

• extension of the COMET method with a stage of analyzing the significance of individual input data and decision-making sub-models for the final form of a ranking of decision-making options;
• transferring the methodological experience of using MCDA methods to the important and promising ground of building decision support systems in the area of eSports;
• identifying a proper domain-specific modeling domain (e-sport player evaluation), the form of which (both within the family of evaluation criteria and alternatives) is significantly different from that of classical sports; and
• analysis and study of the adaptation and usage of MCDA methods as an algorithmic and methodological engine of a decision support system (potentially providing additional functionality to a range of available digital products and games).

The rest of the paper is organized as follows: MCDA foundations and a simple comparison of MCDA techniques are presented in Section 2. Section 3 contains preliminaries of fuzzy set theory. The explanation of the definitions and algorithms of the multi-criteria decision-making method named COMET is given in Section 4. Section 5 introduces the results of the study, and Section 6 discusses the differences between both rankings. In Section 7, we present the conclusions and future directions.

2. MCDA Foundations

Multi-criteria decision support aims to achieve a solution that is most satisfactory to the decision-maker while meeting a sufficient number of often conflicting goals [59]. The search for such solutions requires the consideration of many alternatives and their evaluation against many criteria, as well as the transfer of the subjectivity of the evaluation (e.g., the relevance of the criteria to the decision-maker) into a target model. Multi-criteria decision analysis (MCDA) methods are dedicated to solving this class of decision problems. Over many years of research, two schools of MCDA methods have developed. The American MCDA school is based on the assumption that the decision-maker’s preferences are expressed using two basic binary relations: when comparing decision-making options, indifference and preference relations may occur. In the European MCDA school, this set has been significantly extended by introducing the so-called superiority (outranking) relation. The superiority relation, apart from the two basic relations mentioned above, introduces the relation of weak preference of one variant over another and the relation of incomparability of decision options. In the American school methods, the result of the comparison of variants is determined for each criterion separately, and the effect of the aggregation of the grades is a single, synthesized criterion, with the order of variants being complete. The methods of the American school of decision support for the most part use value or utility functions. The best-known methods of the American school are Multi-Attribute Utility Theory (MAUT), AHP, Analytic Network Process (ANP), Simple Multi-Attribute Rating Technique (SMART), UTA, Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS).
In contrast to the American school (a contrast emphasized by “European school”-oriented researchers), the algorithms of the European school methods are strongly oriented towards a faithful reflection of the decision-maker’s preferences (including their imprecision). The aggregation of the assessment results is done with the use of the superiority relation, and the effect of aggregation in the vast majority of methods is a partial order of variants (the effect of using the incomparability relation). The primary methods of the European school are ELimination Et Choix Traduisant la REalité (ELECTRE) and PROMETHEE [60]. Importantly, among them only the PROMETHEE II method provides, as a result of the aggregation of assessments, a complete order of decision options. Other methods belonging to the European MCDA school are, for example, ORESTE, REGIME, ARGUS, TACTIC, MELCHIOR, and PAMSSEM. An important additional difference between the indicated schools is the fact that there is a substitution (compensation) of criteria in the methods using synthesis to one criterion, whereas the methods of the European school are considered non-compensatory [61]. The third group of MCDA methods is based on decision rules. The formal basis of these methods is fuzzy set theory and rough set theory. Algorithms of this group of methods consist in building decision rules and their consequences; using these rules, variants are compared and evaluated, and the final ranking is generated. Examples of rule-based MCDA methods are the Dominance-based Rough Set Approach (DRSA) and the Characteristic Objects Method (COMET) [24]. The COMET uses triangular fuzzy numbers to build criteria functions. A set of characteristic objects is created using the core values of particular fuzzy numbers, so it is a method based on fuzzy logic mechanisms. Additionally, it can also support problems with uncertain data. Table 1 shows the comparison of the COMET method with other MCDA methods.
Most importantly, the COMET technique works without knowing the criteria weights. The decision-maker’s task is to compare pairs of characteristic objects. Based on these comparisons, a model ranking is generated. The model variants are the basis for building a fuzzy rule database. When the considered alternatives are given to the decision-making system, the appropriate rules are activated, and the aggregated evaluation of a variant is determined as the sum of the products of the degrees to which the variant activates the individual rules [62].

Table 1. Comparison of the Characteristic Objects Method (COMET) with other multi-criteria decision analysis (MCDA) methods.

Method Name  | Weights Usage | Weights Type | Perf. of the Variants Measurement | Uncertainty Handling | Type of Uncertainty
AHP          | Yes | relative     | relative     | No  | -
COMET        | No  | -            | quantitative | Yes | input data
ELECTRE I    | Yes | quantitative | qualitative  | No  | -
ELECTRE IS   | Yes | quantitative | quantitative | Yes | DM preferences
ELECTRE TRI  | Yes | quantitative | quantitative | Yes | DM preferences
Fuzzy AHP    | Yes | relative     | relative     | Yes | input data
Fuzzy TOPSIS | Yes | quantitative | quantitative | Yes | input data
Fuzzy VIKOR  | Yes | quantitative | quantitative | Yes | input data
IDRA         | Yes | quantitative | quantitative | No  | -
PROMETHEE I  | Yes | quantitative | quantitative | Yes | DM preferences
PROMETHEE II | Yes | quantitative | quantitative | Yes | DM preferences
TOPSIS       | Yes | quantitative | quantitative | No  | -
VIKOR        | Yes | quantitative | quantitative | No  | -

Additionally, the literature also indicates groups of so-called basic methods (e.g., the lexicographic method, the maximin method, or the additive weighting method) and mixed methods, e.g., EVAMIX [63] or QUALIFLEX, as well as the Pairwise Criterion Comparison Approach (PCCA). Examples of the latter are MAPPAC, PRAGMA, PACMAN, and IDRA [64].

3. Fuzzy Set Theory: Preliminaries

Fuzzy set theory is a valuable modeling and control strategy in several scientific fields. Modeling with fuzzy sets has proven to be an effective way of formulating multi-criteria decision problems. The necessary concepts of fuzzy set theory can be presented using the following eight definitions [13]:

Definition 1. The fuzzy set and the membership function—the characteristic function µA of a crisp set A ⊆ X assigns a value of either 0 or 1 to each member of X, and crisp sets only allow a full membership (µA(x) = 1) or no membership at all (µA(x) = 0). This function can be generalized to a function µÃ so that the value assigned to an element of the universal set X falls within a specified range, i.e., µÃ : X → [0, 1]. The assigned value indicates the degree of membership of the element in the set Ã. The function µÃ is called a membership function, and the set Ã = {(x, µÃ(x)) : x ∈ X}, defined by µÃ(x) for each x ∈ X, is called a fuzzy set.

Definition 2. Triangular fuzzy number (TFN)—a fuzzy set Ã, defined on the universal set of real numbers ℜ, is said to be a TFN Ã(a, m, b) if its membership function has the following form:

µÃ(x, a, m, b) =
  0,               x ≤ a
  (x − a)/(m − a), a ≤ x ≤ m
  1,               x = m          (1)
  (b − x)/(b − m), m ≤ x ≤ b
  0,               x ≥ b.

Definition 3. The support of a TFN Ã—this is the crisp subset of the set Ã in which all elements have non-zero membership values in the set Ã:

S(Ã) = {x : µÃ(x) > 0} = [a, b].    (2)

Definition 4. The core of a TFN Ã—this is the singleton (one-element fuzzy set) with the membership value equal to one:

C(Ã) = {x : µÃ(x) = 1} = m.    (3)
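Definitions 2–4 translate directly into code. The following sketch (assuming a non-degenerate triangle, a &lt; m &lt; b) is illustrative rather than part of the paper's implementation:

```python
def tfn_membership(x, a, m, b):
    """Membership value of x in the TFN A(a, m, b), per Equation (1).
    Assumes a < m < b (non-degenerate triangle)."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)   # rising edge
    return (b - x) / (b - m)       # falling edge

def support(a, m, b):
    """Support of the TFN, per Equation (2): the interval [a, b]."""
    return (a, b)

def core(a, m, b):
    """Core of the TFN, per Equation (3): the single point m."""
    return m
```

For the TFN Ã(0, 5, 10), the membership rises linearly from 0 at x = 0 to 1 at x = 5 and falls back to 0 at x = 10, with support [0, 10] and core {5}.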

Definition 5. The fuzzy rule—a single fuzzy rule can be based on the modus ponens tautology. The reasoning process uses the logical connectives IF-THEN, OR, and AND.

Definition 6. The rule base—the rule base consists of logical rules determining the causal relationships existing in the system between the fuzzy sets of its inputs and outputs.

Definition 7. The T-norm operator—the T-norm operator (intersection) is a function T modeling the AND intersection operation of two or more fuzzy numbers, e.g., Ã and B̃:

µÃ(x) AND µB̃(y) = µÃ(x) · µB̃(y).    (4)

Definition 8. The S-norm operator—the S-norm operator (union), or T-conorm, is a function S modeling the OR union operation of two or more fuzzy numbers, e.g., Ã and B̃:

µÃ(x) OR µB̃(y) = (µÃ(x) + µB̃(y)) ∧ 1.    (5)
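Equations (4) and (5) use the product T-norm and the bounded-sum S-norm, respectively. A minimal sketch of both operators on membership degrees:

```python
def t_norm(mu_a, mu_b):
    """Product T-norm of Equation (4): AND of two membership degrees."""
    return mu_a * mu_b

def s_norm(mu_a, mu_b):
    """Bounded-sum S-norm of Equation (5): OR of two membership degrees,
    clipped at 1 (the '∧ 1' operation)."""
    return min(mu_a + mu_b, 1.0)
```

Note that both operators stay within [0, 1] whenever their arguments do, which is what makes them valid connectives for fuzzy rules.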

4. The Characteristic Objects Method

COMET (Characteristic Objects Method) is a very simple approach, most commonly used in the fields of sustainable transport [34,35,62], interactive marketing [65,66], sport [67], medicine [68], handling uncertain data in decision-making [69,70], and banking [71]. Carnero, in Reference [72], suggests using the COMET method in future work to improve her waste segregation model. The COMET is an innovative method of identifying a multi-criteria expert decision model to solve decision problems based on a rule set, using elements of the theory of fuzzy sets [24,68]. The COMET method distinguishes itself from other multiple-criteria decision-making methods by its resistance to the rank reversal paradox [73]. Contrary to other methods, the assessed alternatives are not compared with each other; the result of the assessment is obtained based only on the model [24]. The whole decision-making process using the COMET method is presented in Figure 1. The formal notation of this method can be presented in the following five steps [34]:

Figure 1. The procedure of the Characteristic Objects Method (COMET) to identify decision-making model.

Step 1. Define the space of the problem—an expert determines the dimensionality of the problem by selecting the number r of criteria, C1, C2, ..., Cr. Subsequently, the set of fuzzy numbers for each criterion Ci is selected, i.e., C̃i1, C̃i2, ..., C̃ici. In this way, the following result is obtained:

C1 = {C̃11, C̃12, ..., C̃1c1}
C2 = {C̃21, C̃22, ..., C̃2c2}    (6)
......
Cr = {C̃r1, C̃r2, ..., C̃rcr},

where c1, c2, ..., cr are the numbers of fuzzy numbers for all criteria.

Step 2. Generate the characteristic objects—the characteristic objects (CO) are obtained by using the Cartesian product of the fuzzy numbers’ cores for each criterion as follows:

CO = C(C1) × C(C2) × ... × C(Cr). (7)

As a result, the ordered set of all CO is obtained:

CO1 = {C(C̃11), C(C̃21), ..., C(C̃r1)}
CO2 = {C(C̃11), C(C̃21), ..., C(C̃r2)}    (8)
......
COt = {C(C̃1c1), C(C̃2c2), ..., C(C̃rcr)},

where t is the number of COs:

t = ∏_{i=1}^{r} c_i.    (9)

Step 3. Rank the characteristic objects—the expert determines the Matrix of Expert Judgment (MEJ). It is the result of a pairwise comparison of the characteristic objects based on the expert’s knowledge. The MEJ structure is as follows:

        α11 α12 ... α1t
MEJ =   α21 α22 ... α2t     (10)
        ... ... ... ...
        αt1 αt2 ... αtt,

where αij is the result of comparing COi and COj by the expert. The more preferred characteristic object gets one point and the less preferred object gets zero points. If the preferences are balanced, both objects get half a point. The comparison depends solely on the knowledge of the expert and can be presented as:

      0.0, fexp(COi) < fexp(COj)
αij = 0.5, fexp(COi) = fexp(COj)     (11)
      1.0, fexp(COi) > fexp(COj),

where fexp is the expert’s mental judgment function. Afterwards, the vertical vector of the Summed Judgments (SJ) is obtained as follows:

SJi = ∑_{j=1}^{t} αij.    (12)

The last step assigns to each characteristic object an approximate value of preference. As a result, the vector P is obtained, where the i-th row contains the approximate value of preference for COi.

Step 4. The rule base—each characteristic object and its value of preference is converted to a fuzzy rule of the following detailed form:

IF C(C̃1i) AND C(C̃2i) AND ... THEN Pi.    (13)

In this way, the complete fuzzy rule base is obtained, which approximates the expert’s mental judgment function fexp(COi).

Step 5. Inference and final ranking—each alternative is a set of crisp numbers corresponding to the criteria C1, C2, ..., Cr. It can be presented as follows:

Ai = {a1i, a2i, ..., ari}.    (14)
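The five steps can be illustrated with a minimal end-to-end sketch. The two criteria, their characteristic values, the expert function (here simply the sum of both criteria, so that higher values are preferred), and the rescaling of SJ into P are all assumptions made for illustration, not the model identified in this paper; in the real method, the pairwise comparisons of Equation (11) come from the expert, not from a formula.

```python
from itertools import product

# Step 1: characteristic values -- cores of assumed triangular fuzzy
# numbers -- for two hypothetical criteria on a [0, 1] scale
C1 = [0.0, 0.5, 1.0]
C2 = [0.0, 1.0]

# Step 2: characteristic objects as the Cartesian product of the cores
COs = list(product(C1, C2))              # t = 3 * 2 = 6 objects, Eq. (9)

# Step 3: MEJ via an assumed expert function (sum of both criteria)
f_exp = lambda co: co[0] + co[1]
MEJ = [[0.5 if f_exp(a) == f_exp(b) else float(f_exp(a) > f_exp(b))
        for b in COs] for a in COs]      # Eq. (11)
SJ = [sum(row) for row in MEJ]           # Eq. (12)

# Step 4: preference values P -- here SJ rescaled to [0, 1], one simple
# choice for turning summed judgments into rule consequents
lo, hi = min(SJ), max(SJ)
P = [(s - lo) / (hi - lo) for s in SJ]

# Step 5: inference -- each CO acts as a rule (Eq. (13)) whose activation
# degree is the product T-norm of triangular memberships built on the
# characteristic values of each criterion
def tri(x, grid, j):
    """Membership of x in the j-th triangular number built on `grid`."""
    a = grid[j - 1] if j > 0 else grid[j]
    m = grid[j]
    b = grid[j + 1] if j < len(grid) - 1 else grid[j]
    if x < a or x > b:
        return 0.0
    if x == m:
        return 1.0
    return (x - a) / (m - a) if x < m else (b - x) / (b - m)

def evaluate(alt):
    """Aggregated preference of alternative alt = (value_C1, value_C2):
    the sum of rule activation degrees times rule preferences."""
    score = 0.0
    for (v1, v2), p in zip(COs, P):
        mu = tri(alt[0], C1, C1.index(v1)) * tri(alt[1], C2, C2.index(v2))
        score += mu * p
    return score
```

Under this assumed expert function, evaluate((1.0, 1.0)) returns 1.0 and evaluate((0.0, 0.0)) returns 0.0, with intermediate alternatives interpolated between the preferences of the neighboring characteristic objects.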

5. Results

The detailed steps of the research to identify the ranking of players, carried out according to the methodical framework, are presented in Figure 2. It is worth mentioning, once again, that the COMET method provides the algorithmic background and methodical approach.

Figure 2. Research procedure.

The identified model creates a ranking, which is compared with Rating 2.0, proposed by Half-Life Television (HLTV), a news website that covers professional CS: GO news, tournaments, statistics, and rankings [23]. The obtained ranking is more natural to interpret, and each player assessment has three additional parameters. Many parameters influence a player’s performance, including the evaluation of his skills and predispositions. For instance, with a player’s age, the drop-off in reaction time makes it harder for him to compete and to aim at the head of a moving target. A high percentage of headshots reflects shooting skills and is a kind of prestige [74]. There are plenty of criteria that could be used to create an evaluation model, for instance, the damage per round given by grenades, the total number of rounds played by a player (which could inform us about the player’s experience), or the high percentage of headshots mentioned earlier. The chosen criteria were selected because of their greater impact on the assessment of the individual skills of each player; especially important are the C1 and C4 criteria [74]. Therefore, the following six criteria have been selected [22,23]:

• C1—Average kills per round, the average number of kills scored by the player during one round;
• C2—Average damage per round, the mean damage inflicted by a player during one round;
• C3—Total kills, the total number of kills gained by the player;
• C4—K/D Ratio, the number of kills divided by the number of deaths;
• C5—Average assists per round, the mean number of assists gained by the player during one round; and

• C6—Average deaths per round, the average number of deaths of a player during one round.

This set of six criteria was chosen because of its greater impact on the assessment of the individual skills of each player. The collected data for all applied criteria and the Rating 2.0 assessment are derived from the official HLTV website and dated June 2019. Especially important are the C1 and C4 criteria: they indicate whether the chance of the player being eliminated is smaller than the chance that he will eliminate his enemies. The economy of a player depends on how much he has spent on weapons and armor, the kill awards received per elimination (based on weapon type), the status of bomb planting or defusing, and, finally, who won the round [15]. Average kills per round (C1) is always an important criterion because, by fragging (killing an enemy), you first of all eliminate the threat from your opponent. For each elimination, you get, depending on the weapon used, the money needed to buy ammunition, equipment, grenades, and other utilities at a later stage of the game.
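The six criteria above can be computed directly from raw match statistics. A sketch follows; the field names are illustrative, and no assumption is made about HLTV's actual data schema:

```python
def player_criteria(kills, deaths, assists, damage, rounds):
    """Compute the six criteria C1-C6 from raw totals for one player.

    kills/deaths/assists are career totals, damage is total damage dealt,
    and rounds is the total number of rounds played (assumed > 0,
    as is deaths for the K/D ratio).
    """
    return {
        "C1": kills / rounds,    # average kills per round
        "C2": damage / rounds,   # average damage per round
        "C3": kills,             # total kills
        "C4": kills / deaths,    # K/D ratio
        "C5": assists / rounds,  # average assists per round
        "C6": deaths / rounds,   # average deaths per round
    }
```

For example, a hypothetical player with 2500 kills and 2000 deaths over 3200 rounds has C4 = 1.25 and C1 ≈ 0.78, matching the magnitudes reported for the top-40 professionals.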

For instance, elimination with a sniper rifle (AWP) is the least economically profitable and gives the player only USD 100, while almost any pistol gives a USD 300 reward, and shotguns, which are the most cost-effective, give up to USD 900 in cash prize. Additionally, killed enemies lose the weapons they acquired, thus losing all equipment, such as Kevlar with a helmet or a defuse kit (CT). Criterion C1 is a profit type criterion, where a value increase means a preference increase. Based on the information about player statistics from the HLTV database for the best 40 professional players, for C1 the lowest obtained value is 0.72, the highest 0.88, and the average value is equal to 0.78. As the Average damage per round increases, the probability of killing an enemy increases as well. Moreover, a player is more valuable and useful for a team when he deprives the enemy team of precious health points and makes gaining frags much easier for his teammates. There was a situation during the PGL Major Kraków 2017 event when the professional player Mikhail "Dosia" Stolyarov, during the grand final against the Immortals team, performed an unbelievable action. His team (on the CT side) was going to lose the round because there was not enough time to defuse the bomb against three opponents. Dosia knew it was impossible to win, but he came up with an idea and threw a grenade to deal some extra damage to players who were saving their weapons. It was a few seconds before the detonation of the bomb, which takes many health points (HP) from players located in the area of the explosion. In doing so, he contributed to the death of two players, who lost precious weapons and equipment, forcing them to spend extra money in the next round. That was an example of the validity of this criterion on the professional field of CS: GO. Criterion C2 is characterized by a positive correlation to player value.
For criterion C2, the lowest result is 75.60, the highest 88.20, and the mean value is equal to 82.70. Criterion C3 determines the total number of kills scored by the player, which could signify that the player plays a lot and has a long background in Counter-Strike, like the legendary players Christopher “GeT RiGhT” Alesund from Sweden or Filip “Neo” Kubski from Poland. When the C3 value increases, the player's evaluation also improves. As the total number of kills increases, the player's skill level and overall experience develop as well, as he later plays against much better enemies. For criterion C3, the lowest result is 1516.00, the highest 4151.00, and the mean value is equal to 2514.90. Frankly, it is not the most critical parameter because players with far fewer frags could play as well or even better; it depends on the individual predispositions and innate potential of the gamer. Criterion C4 is probably the most prominent indicator of players' abilities in CS: GO. It is a profit type criterion, like the previous three criteria. It indicates that the chance of the player being eliminated is smaller than the chance that he kills his enemies. If the total number of kills is greater than the overall number of deaths, the player's skill level is superior, and the gamer improves every time he plays. For professional gamers, criterion C4 obtained the lowest result equal to 1.15, the highest 1.51, and the mean value equal to 1.25. Even the worst K/D Ratio value in this set of players is a great result. Obtaining assists in team games is proof of successful and productive team play. In CS: GO, an assist is evident proof that the player was close to making an elimination on the opponent; however, something went wrong, and in the end he only deprived the opponent of most of his health points without gaining a single frag.
In such a case, he gives his teammates the opportunity for an easy kill, but he only gets an assist instead of a full frag on his account. Often, players in a supporting role get a significant number of assists because they contribute to eliminations of the rival by, for example, blinding him with a flashbang and thus helping their colleagues. For criterion C5, the lowest result is 0.09, the highest 0.18, and the average value is equal to 0.13. As is known, in FPS games the most important thing is to eliminate your opponents instead of being killed. By analyzing the Average deaths per round, we can conclude which player loses the most shooting duels and has to watch the actions of his teammates only as a spectator. It can show us the weaknesses of the player and the skill shortages that allow the best ones to be distinguished. It is a cost-type criterion, which means a value increase indicates a preference decrease. For criterion C6, the lowest result is 0.52, the highest 0.68, and the mean value is equal to 0.63.

The values of the selected criteria C1–C6 and the positions and names of the alternatives are presented in Table 2. In this study case, the considered problem is simplified to a structure, which is presented in Figure 3.

Table 2. The performance table of the alternatives and selected criteria.

Pos. Name C1 C2 C3 C4 C5 C6
1 s1mple 0.88 86.6 1958 1.50 0.09 0.59
2 ZywOo 0.83 85.3 4151 1.40 0.12 0.59
3 Jame 0.78 79.3 3505 1.51 0.09 0.52
4 Jamppi 0.83 83.1 2851 1.30 0.10 0.64
5 huNter 0.80 88.2 4100 1.22 0.15 0.66
6 vsm 0.80 86.6 2420 1.22 0.13 0.65
7 meyern 0.82 83.8 1728 1.28 0.12 0.64
8 Kaze 0.78 80.7 1750 1.32 0.10 0.60
9 Hatz 0.76 81.8 2017 1.28 0.15 0.60
10 Sico 0.76 78.4 1876 1.36 0.13 0.56

[Figure 3 diagram: C1 (Kills per round) and C2 (Damage per round) feed the Effectiveness per round assessment model (P1); C3 (Total kills) and C4 (K/D Ratio) feed the Frag gaining assessment model (P2); C5 (Assists per round) and C6 (Deaths per round) feed the Failures per round assessment model (P3); P1–P3 feed the final CS:GO Players assessment model (P).]

Figure 3. The hierarchical structure of the players ranking assessment problem.

In that way, we have to identify three related models, where each one requires a much smaller number of queries to the expert. The final decision model consists of the three following models, where each one needs nine characteristic objects and 36 pairwise comparisons:

• P1—Effectiveness per round assessment model with two inputs; • P2—Frag gaining assessment model with two inputs; • P3—Failures per round assessment model with two inputs.
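To see why the hierarchical structure pays off, we can count the pairwise comparisons the expert would face. A single flat COMET model with three triangular fuzzy numbers per criterion (as in the submodels) would need 3^6 = 729 characteristic objects, while the hierarchy needs only 3 × 36 comparisons for the submodels plus C(8, 2) = 28 for the three-input final model with two fuzzy numbers per input. A sketch of this arithmetic (the flat-model figure is our extrapolation, not taken from the paper):

```python
from math import comb

# Flat model: 6 criteria, 3 triangular fuzzy numbers each
flat_objects = 3 ** 6                 # 729 characteristic objects
flat_pairs = comb(flat_objects, 2)    # pairwise comparisons needed

# Hierarchical model: three 2-input submodels (3 fuzzy numbers per input)
# plus one 3-input final model (2 fuzzy numbers per input)
sub_pairs = 3 * comb(3 ** 2, 2)       # 3 * 36 = 108
final_pairs = comb(2 ** 3, 2)         # 28
hier_pairs = sub_pairs + final_pairs  # 136

print(flat_pairs, hier_pairs)
```

The expert thus answers 136 questions instead of over a quarter of a million, which is what makes the decomposition practical.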

In the Effectiveness per round assessment model (P1), we aggregate two essential criteria, Average kills per round (C1) and Average damage per round (C2), as input values. The output value is our player evaluation for model P1; the lowest result is 0.23, the highest 0.88, and the mean value is equal to 0.45 for the top 40 professional players in CS: GO. The input values of the Frag gaining assessment model (P2) are two significant criteria, Total kills (C3) and K/D Ratio (C4). The outcome value is our player assessment for model P2; the lowest result is 0.00, the highest 0.84, and the mean value is equal to 0.45. In the Failures per round assessment model (P3), we connect two crucial criteria, Average assists per round (C5) and Average deaths per round (C6). The output value is our player evaluation for model P3; the lowest result is 0.25, the highest 0.78, and the mean value is equal to 0.44. The model will be validated based on the results obtained from the official HLTV website for the top 10 professional CS: GO players for June 2019, which are presented in Table 2. To identify the final model for players assessment, we have to determine the three following assessment models, i.e., Effectiveness per round, Frag gaining, and Failures per round.

5.1. Effectiveness per Round Assessment Model

This model evaluates the efficiency in eliminating and injuring enemies, which is one of the essential elements of CS: GO. The expert identified two significant criteria for the Effectiveness per round assessment model: Average kills per round, which is the mean number of frags scored by the player during one round, and Average damage per round, which is the mean damage delivered by a player during one round. Both of them are profit type criteria, where a value increase means a preference increase. In such complex problems, the relationship is rarely linear. Table 3 presents the values of the criteria C1 and C2 and the P1 assessment model. Based on the presented data, it can be determined that the best value of criterion C1 was achieved by 's1mple', equal to 0.88, while the worst result was obtained by 'dexter' with a value equal to 0.72. In the case of the second criterion, the best score was given to 'huNter' with 88.2, and the lowest score was received by 'xsepower' with a value equal to 75.6. Analyzing the results of the Effectiveness per round assessment model (P1), we can conclude that the highest score P1 was obtained by 's1mple', and is equal to 0.8825. The triangular fuzzy numbers of criterion C1 are presented in Figure 4, while C2 is presented in Figure 5.


Figure 4. Visualization of Average kills per round (C1) and triangular fuzzy numbers 0.70 (C11), 0.80 (C12), and 0.90 (C13).


Figure 5. Visualization of Average damage per round (C2) and triangular fuzzy numbers 70 (C21), 80 (C22), and 90 (C23).

In the considered set of parameters, there were players with: Average kills per round (C1) with the values of the support of the triangular fuzzy number from 0.7 (C11) to 0.9 (C13) and the core valued 0.8 (C12); Average damage per round (C2) with the values of the support of the triangular fuzzy number from 70 (C21) to 90 (C23) and the core valued 80 (C22) health points. Based on the data presented in Table 4, it turned out that the output P1 takes values from 0.1 to 0.9. Therefore, the variable P1 will take two values. Both of them will also be determined as triangular fuzzy numbers; they are displayed in Figure 6. The 36 pairwise comparisons of the 9 characteristic objects were executed.

Consequently, the Matrix of Expert Judgment (MEJ) was defined as (15), where each αij value was calculated using Equation (11).

 0.5 0 0 0 0 0 0 0 0   1 0.5 0 0 0 0 0 0 0       1 1 0.5 0 0 0 0 0 0     1 1 1 0.5 0 0 0 0 0    MEJ =  1 1 1 1 0.5 0 0 0 0 . (15)      1 1 1 1 1 0.5 0 0 0     1 1 1 1 1 1 0.5 0 0     1 1 1 1 1 1 1 0.5 0  1 1 1 1 1 1 1 1 0.5

As a result, the vector of the Summed Judgements (SJ) was calculated using Equation (12), and it was employed to determine the values of preference (P1), which are presented in Table 3. The characteristic objects CO1–CO9 presented in Table 3 are generated using the Cartesian product of the fuzzy numbers' cores of criteria C1 and C2. The highest value of preference P1 was received by CO9, with a triangular fuzzy number of criterion C1 valued 0.9 (C13) and a triangular fuzzy number of criterion C2 valued 90 (C23). The lowest value of preference P1 fell to CO1, with a triangular fuzzy number of criterion C1 valued 0.7 (C11) and a triangular fuzzy number of criterion C2 valued 70 (C21). With an increase in the value of criterion C1, the preference increases more significantly than with an increase in the value of criterion C2. It means that C1 has a greater impact on the assessment of the P1 model than C2.
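The step from the MEJ to the characteristic-object preferences can be sketched as follows. This is a minimal reading of Equations (11)–(12): SJ is the vector of row sums of the MEJ, and the k distinct SJ values are mapped linearly onto 0, 1/(k−1), …, 1. Applied to the lower-triangular matrix (15), it reproduces the P1 column of Table 3 (function and variable names are ours):

```python
def comet_preferences(mej):
    """Derive characteristic-object preferences from a Matrix of
    Expert Judgment: row sums give the Summed Judgements (SJ), and
    the k distinct SJ values are spread evenly over [0, 1]."""
    sj = [sum(row) for row in mej]
    distinct = sorted(set(sj))
    k = len(distinct)
    return [distinct.index(s) / (k - 1) for s in sj]

# MEJ (15): CO_i is preferred over CO_j exactly when i > j
mej15 = [[0.5 if i == j else float(i > j) for j in range(9)]
         for i in range(9)]
p_values = comet_preferences(mej15)
print(p_values)  # 0.0, 0.125, 0.25, ..., 1.0, as in Table 3
```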


Figure 6. Visualization of triangular fuzzy numbers for Effectiveness per round assessment model (P1).

Table 3. Overview of characteristic objects CO, vector P values for the Effectiveness per round assessment model.

COi C1 C2 P1

CO1 0.7 70 0.0000
CO2 0.7 80 0.1250
CO3 0.7 90 0.2500
CO4 0.8 70 0.3750
CO5 0.8 80 0.5000
CO6 0.8 90 0.6250
CO7 0.9 70 0.7500
CO8 0.9 80 0.8750
CO9 0.9 90 1.0000

Table 4. The performance table of the selected criteria C1, C2 and assessment model P1.

Pos. Name C1 C2 P1
1 s1mple 0.88 86.6 0.8825
2 ZywOo 0.83 85.3 0.6788
3 Jame 0.78 79.3 0.4163
4 Jamppi 0.83 83.1 0.6513
5 huNter 0.80 88.2 0.6025
6 vsm 0.80 86.6 0.5825
7 meyern 0.82 83.8 0.6225
8 Kaze 0.78 80.7 0.4338
9 Hatz 0.76 81.8 0.3725
10 Sico 0.76 78.4 0.3300
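The P1 values in Table 4 follow from the standard COMET inference step: each player's C1 and C2 values are matched against the triangular fuzzy numbers of Figures 4 and 5, and the products of the memberships weight the characteristic-object preferences from Table 3. A sketch under those assumptions (function names are ours), which reproduces 's1mple''s score of 0.8825:

```python
def tri(x, a, m, b):
    """Membership of x in a triangular fuzzy number with
    support [a, b] and core m."""
    if x < a or x > b:
        return 0.0
    if x == m:
        return 1.0
    return (x - a) / (m - a) if x < m else (b - x) / (b - m)

C1_CORES, C2_CORES = [0.7, 0.8, 0.9], [70.0, 80.0, 90.0]
# Characteristic-object preferences in the row order of Table 3
P1_TABLE = [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0]

def support(cores, i):
    """Edge fuzzy numbers keep their core as the support bound."""
    return cores[max(i - 1, 0)], cores[i], cores[min(i + 1, len(cores) - 1)]

def p1(c1, c2):
    total = 0.0
    for i in range(3):
        for j in range(3):
            mu = tri(c1, *support(C1_CORES, i)) * tri(c2, *support(C2_CORES, j))
            total += mu * P1_TABLE[3 * i + j]
    return total

print(round(p1(0.88, 86.6), 4))  # prints 0.8825, s1mple's P1 in Table 4
```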

For a better demonstration of the relevance of the criteria to the P1 assessment model, Spearman's rank correlation coefficient ρ was calculated. Between the criteria C1, C2 and the reference ranking obtained by the P1 assessment model for the top 10 players it is equal to 0.9273 and 0.2970, respectively. The first correlation is strong, while the second one is weak. The visualization of the relation diagram of Average kills per round (C1) and the P1 assessment model, as well as the relation diagram of Average damage per round (C2) and the P1 assessment model, is presented in Figure 7.
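A rank correlation of this kind is commonly computed with average ranks for ties, i.e., Pearson's formula applied to the rank vectors; the exact figure depends on the tie handling and on how the reference ranking is defined, so the sketch below (our function names) illustrates the procedure on the Table 4 data rather than reproducing the reported values exactly:

```python
def avg_ranks(values):
    """Rank values (1 = best), giving tied entries their average rank."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho as Pearson correlation of the rank vectors."""
    rx, ry = avg_ranks(x), avg_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

c1 = [0.88, 0.83, 0.78, 0.83, 0.80, 0.80, 0.82, 0.78, 0.76, 0.76]
c2 = [86.6, 85.3, 79.3, 83.1, 88.2, 86.6, 83.8, 80.7, 81.8, 78.4]
p1 = [0.8825, 0.6788, 0.4163, 0.6513, 0.6025,
      0.5825, 0.6225, 0.4338, 0.3725, 0.3300]
# C1 tracks the P1 ranking far more closely than C2 does
print(spearman(c1, p1), spearman(c2, p1))
```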


Figure 7. The relation diagram of Average kills per round (C1) for assessment P1 (left side) and Average damage per round (C2) for assessment P1 (right side).

5.2. Frag Gaining Assessment Model

The model verifies the probability of a player getting an elimination, based on the number of kills he has obtained in official CS: GO matches and a specific factor which shows that the player is superior. The expert identified two significant criteria for the Frag gaining assessment model: Total kills, which is the total number of frags delivered by the player, and K/D Ratio, which is the number of frags divided by the number of deaths. Both of them are profit type criteria, where, as was mentioned earlier, preference increases as the values increase. Table 5 shows the values of the criteria C3 and C4 and the P2 assessment model. Based on the presented data, it can be determined that the best value of criterion C3 was achieved by 'ZywOo', equal to 1.000 (after normalization), while the worst result was obtained by 'BnTeT' with a value equal to 0. In the case of the second criterion, the best score was given to 'Jame' with 1.51, and the lowest score was received by 'Texta' with a value equal to 1.15. Analyzing the results of the Frag gaining assessment model (P2), we can conclude that the highest score was obtained by 'Jame', and is equal to 0.8423. The triangular fuzzy numbers of criterion C3 are presented in Figure 8 and C4 in Figure 9.
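The C3 values fed into this model are evidently min–max normalized over the 40-player pool, whose Total kills range from 1516 ('BnTeT') to 4151 ('ZywOo'), as reported earlier. A sketch of this rescaling (our function names), which reproduces the normalized C3 column shown later in Table 6:

```python
def min_max(value, lo, hi):
    """Rescale value from [lo, hi] onto [0, 1]."""
    return (value - lo) / (hi - lo)

C3_MIN, C3_MAX = 1516.0, 4151.0   # worst (BnTeT) and best (ZywOo) Total kills

total_kills = {"s1mple": 1958, "ZywOo": 4151, "Jame": 3505, "huNter": 4100}
normalized = {name: round(min_max(v, C3_MIN, C3_MAX), 3)
              for name, v in total_kills.items()}
print(normalized)  # matches the C3 column of Table 6
```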


Figure 8. Visualization of Total kills (C3) and triangular fuzzy numbers 0.0 (C31), 0.5 (C32), and 1.0 (C33).


Figure 9. Visualization of the number of kills divided by number of deaths (K/D) Ratio (C4) and triangular fuzzy numbers 1.00 (C41), 1.25 (C42), and 1.60 (C43).

In the considered set of parameters, there were players with: Total kills (C3) with the values of the support of the triangular fuzzy number from 0 (C31) to 1 (C33) and the core valued 0.5 (C32); K/D Ratio (C4) with the values of the support of the triangular fuzzy number from 1 (C41) to 1.6 (C43) and the core valued 1.25 (C42). Based on the data presented in Table 6, it turned out that the output P2 takes values from 0.2 to 0.9. Therefore, the variable P2 will take two values. Both of them will also be determined as triangular fuzzy numbers; they are displayed in Figure 10. The 36 pairwise comparisons of the 9 characteristic objects were executed. Consequently, the Matrix of Expert Judgment (MEJ) was defined as (16), where each αij value was calculated using Equation (11).

 0.5 0 0 0 0 0 0 0 0   1 0.5 0 1 0 0 1 0 0       1 1 0.5 1 1 0 1 1 0     1 0 0 0.5 0 0 0 0 0    MEJ =  1 1 0 1 0.5 0 1 0 0 . (16)      1 1 1 1 1 0.5 1 1 0     1 0 0 1 0 0 0.5 0 0     1 1 0 1 1 0 1 0.5 0  1 1 1 1 1 1 1 1 0.5

As a result, the vector of the Summed Judgements (SJ) was calculated using Equation (12), and it was used to determine the values of preference (P2), which are presented in Table 5. The characteristic objects CO1–CO9 presented in Table 5 are generated using the Cartesian product of the fuzzy numbers' cores of criteria C3 and C4. The highest value of preference P2 was received by CO9, with a triangular fuzzy number of criterion C3 valued 1 (C33) and a triangular fuzzy number of criterion C4 valued 1.6 (C43). The lowest value of preference P2 fell to CO1, with a triangular fuzzy number of criterion C3 valued 0 (C31) and a triangular fuzzy number of criterion C4 valued 1 (C41). With an increase in the value of criterion C4, the preference increases more significantly than with an increase in the value of criterion C3. It means that C4 has a greater impact on the assessment of the P2 model than C3.


Figure 10. Visualization of triangular fuzzy numbers for Frag gaining assessment model (P2).

Table 5. Overview of characteristic objects CO, vector P values for the Frag gaining assessment model.

COi C3 C4 P2

CO1 0.0 1.00 0.0000
CO2 0.0 1.25 0.3750
CO3 0.0 1.60 0.7500
CO4 0.5 1.00 0.1250
CO5 0.5 1.25 0.5000
CO6 0.5 1.60 0.8750
CO7 1.0 1.00 0.2500
CO8 1.0 1.25 0.6250
CO9 1.0 1.60 1.0000

Table 6. The performance table of the selected criteria C3, C4 and assessment model P2.

Pos. Name C3 C4 P2
1 s1mple 0.168 1.50 0.6849
2 ZywOo 1.000 1.40 0.7857
3 Jame 0.755 1.51 0.8423
4 Jamppi 0.507 1.30 0.5553
5 huNter 0.981 1.22 0.5753
6 vsm 0.343 1.22 0.4158
7 meyern 0.080 1.28 0.4271
8 Kaze 0.089 1.32 0.4723
9 Hatz 0.190 1.28 0.4546
10 Sico 0.137 1.36 0.5271

For a better demonstration of the relevance of the criteria to the P2 assessment model, Spearman's rank correlation coefficient ρ was calculated. Between the criteria C3, C4 and the reference ranking obtained by the P2 assessment model it is equal to 0.5636 and 0.4910, respectively. The correlation of the first one with the reference ranking is moderately strong, while that of the second one is weak. The visualization of the relation diagram of Total kills (C3) and the P2 assessment model, as well as the relation diagram of K/D Ratio (C4) and the P2 assessment model, is shown in Figure 11.


Figure 11. The relation diagram of Total kills (C3) for assessment P2 (left side) and K/D Ratio (C4) for assessment P2 (right side).

5.3. Failures per Round Assessment Model

This model evaluates the weaker side of the player by showing how often he has a decline in form and skill deficiencies, which are vital to maintaining himself at the top of the global e-sport scene. The expert identified two crucial criteria for the Failures per round assessment model: Average assists per round, which is the average number of assists scored by the player during one round, and Average deaths per round, which is the average number of deaths of a player during one round. The first one is a profit type criterion, which means that a value increase indicates a preference increase; however, the second one is a cost-type criterion, which means a value increase indicates a preference decrease. Table 7 shows the values of the criteria C5 and C6 and the P3 assessment model. Based on the presented data, it can be determined that the best value of criterion C5 was achieved by 'INS', equal to 0.18, while the worst result was obtained by 'kNgV-' with a value equal to 0.09. In the case of the second criterion, the best score was given to 'Jame' with 0.52, and the worst score was received by 'roeJ' with a value equal to 0.68. Analyzing the results of the Failures per round assessment model (P3), we can conclude that the highest score was obtained by 'Jame' and is equal to 0.7750. The triangular fuzzy numbers of criterion C5 are presented in Figure 12 and C6 in Figure 13.


Figure 12. Visualization of Assists per round (C5) and triangular fuzzy numbers 0.05 (C51), 0.10 (C52), and 0.20 (C53).


Figure 13. Visualization of Average deaths per round (C6) and triangular fuzzy numbers 0.5 (C61), 0.6 (C62), and 0.7 (C63).

In the considered set of parameters, there were players with: Average assists per round (C5) with the values of the support of the triangular fuzzy number from 0.05 (C51) to 0.2 (C53) and the core valued 0.1 (C52); Average deaths per round (C6) with the values of the support of the triangular fuzzy number from 0.5 (C61) to 0.7 (C63) and the core valued 0.6 (C62). Based on the data presented in Table 8, it turned out that the output P3 takes values from 0.2 to 0.8. Therefore, the variable P3 will take two values. Both of them will also be determined as triangular fuzzy numbers; they are displayed in Figure 14. The 36 pairwise comparisons of the 9 characteristic objects were executed. Consequently, the Matrix of Expert Judgment (MEJ) was defined as (17), where each αij value was calculated using Equation (11).

 0.5 0 0 1 1 1 1 1 1   1 0.5 0 1 1 1 1 1 1       1 1 0.5 1 1 1 1 1 1     0 0 0 0.5 0 0 1 1 1    MEJ =  0 0 0 1 0.5 0 1 1 1 . (17)      0 0 0 1 1 0.5 1 1 1     0 0 0 0 0 0 0.5 0 0     0 0 0 0 0 0 1 0.5 0  0 0 0 0 0 1 1 0 0.5

As a result, the vector of the Summed Judgements (SJ) was calculated using Equation (12), and it was employed to determine the values of preference (P3), which are presented in Table 7. The characteristic objects CO1–CO9 presented in Table 7 are generated using the Cartesian product of the fuzzy numbers' cores of criteria C5 and C6. The highest value of preference P3 was received by CO3, with a triangular fuzzy number of criterion C5 valued 0.2 (C53) and a triangular fuzzy number of criterion C6 valued 0.5 (C61). The lowest value of preference P3 fell to CO7, with a triangular fuzzy number of criterion C5 valued 0.05 (C51) and a triangular fuzzy number of criterion C6 valued 0.7 (C63). With a decrease in the value of criterion C6, the preference increases more significantly than with an increase in the value of criterion C5. It means that C6 has a greater impact on the assessment of the P3 model than C5.


Figure 14. Visualization of triangular fuzzy numbers for Failures per round assessment model (P3).

Table 7. Overview of characteristic objects CO, vector P values for the Failures per round assessment model.

COi C5 C6 P3

CO1 0.05 0.5 0.7500
CO2 0.10 0.5 0.8750
CO3 0.20 0.5 1.0000
CO4 0.05 0.6 0.3750
CO5 0.10 0.6 0.5000
CO6 0.20 0.6 0.6250
CO7 0.05 0.7 0.0000
CO8 0.10 0.7 0.1250
CO9 0.20 0.7 0.2500

Table 8. The performance table of the selected criteria C5, C6 and assessment model P3.

Pos. Name C5 C6 P3
1 s1mple 0.09 0.59 0.5125
2 ZywOo 0.12 0.59 0.5625
3 Jame 0.09 0.52 0.7750
4 Jamppi 0.10 0.64 0.3500
5 huNter 0.15 0.66 0.3375
6 vsm 0.13 0.65 0.3500
7 meyern 0.12 0.64 0.3750
8 Kaze 0.10 0.60 0.5000
9 Hatz 0.15 0.60 0.5625
10 Sico 0.13 0.56 0.6875
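Note that the cost nature of C6 does not change the COMET inference mechanics at all; it is encoded entirely in the Table 7 preferences, which decrease as C6 grows. A product-sum sketch over those preferences, assuming the fuzzy numbers of Figures 12 and 13 (function names are ours), reproduces 'Jame''s P3 of 0.7750:

```python
def tri(x, a, m, b):
    """Membership of x in a triangular fuzzy number with support [a, b] and core m."""
    if x < a or x > b:
        return 0.0
    if x == m:
        return 1.0
    return (x - a) / (m - a) if x < m else (b - x) / (b - m)

C5_CORES, C6_CORES = [0.05, 0.1, 0.2], [0.5, 0.6, 0.7]
# Characteristic-object preferences, row order as in Table 7
P3_TABLE = [0.75, 0.875, 1.0, 0.375, 0.5, 0.625, 0.0, 0.125, 0.25]

def support(cores, i):
    return cores[max(i - 1, 0)], cores[i], cores[min(i + 1, 2)]

def p3(c5, c6):
    total = 0.0
    for j in range(3):          # C6 index (blocks of Table 7 rows)
        for i in range(3):      # C5 index
            mu = tri(c5, *support(C5_CORES, i)) * tri(c6, *support(C6_CORES, j))
            total += mu * P3_TABLE[3 * j + i]
    return total

print(round(p3(0.09, 0.52), 4))  # Jame's P3 in Table 8
```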

For a better demonstration of the relevance of the criteria to the P3 assessment model, Spearman's rank correlation coefficient ρ was calculated. Between the criteria C5, C6 and the reference ranking obtained by the P3 assessment model it is equal to 0.5273 and 0.1636, respectively. The correlation of the first one with the reference ranking is moderately strong, while that of the second one is weak. The visualization of the relation diagram of Average assists per round (C5) and the P3 assessment model, as well as the relation diagram of Average deaths per round (C6) and the P3 assessment model, is shown in Figure 15.


Figure 15. The relation diagram of Average assists per round (C5) for assessment P3 (left side) and Average deaths per round (C6) for assessment P3 (right side).

5.4. Final Model

The CS: GO Players assessment model finally determines the uniqueness of the Counter-Strike: Global Offensive player by placing him in the final ranking, based on the previous partial assessments. The final model for the players' assessment has three aggregated input variables: the output variables from the Effectiveness per round assessment, the Frag gaining assessment, and the Failures per round assessment. The aggregated variables P1 and P2 are both profit type, whereas P3 is cost type. The triangular fuzzy numbers of parameter P1 are presented in Figure 16, P2 in Figure 17, and P3 in Figure 18.


Figure 16. Visualization of Effectiveness per round assessment model (P1) and triangular fuzzy numbers 0.1 (P11) and 0.9 (P12).


Figure 17. Visualization of Frag gaining assessment model (P2) and triangular fuzzy numbers 0.2 (P21) and 0.9 (P22).


Figure 18. Visualization of Failures per round assessment model (P3) and triangular fuzzy numbers 0.2 (P31) and 0.8 (P32).

In the considered set of parameters, there were players with: Effectiveness per round (P1) with the values of the support of the triangular fuzzy number from 0.1 (P11) to 0.9 (P12); Frag gaining (P2) with the values of the support of the triangular fuzzy number from 0.2 (P21) to 0.9 (P22); and Failures per round (P3) with the values of the support of the triangular fuzzy number from 0.2 (P31) to 0.8 (P32). The 28 pairwise comparisons of the 8 characteristic objects were executed. Consequently, the Matrix of Expert Judgment (MEJ) was defined as (18), where each αij value was calculated using Equation (11).

MEJ =
[ 0.5  0    0    0    1    0    0    0   ]
[ 1    0.5  1    0    1    1    1    0   ]
[ 1    0    0.5  0    1    0    1    0   ]
[ 1    1    1    0.5  1    1    1    1   ]    (18)
[ 0    0    0    0    0.5  0    0    0   ]
[ 1    0    1    0    1    0.5  1    0   ]
[ 1    0    0    0    1    0    0.5  0   ]
[ 1    1    1    0    1    1    1    0.5 ]

As a result, the vector of the Summed Judgements (SJ) was calculated using Equation (12), and it was employed to determine the final values of preference (P), which are presented in Table 9. The characteristic objects CO1–CO8 presented in Table 9 are generated using the Cartesian product of the fuzzy numbers' cores of the related models P1, P2, and P3. The highest value of preference P was received by CO4, with a triangular fuzzy number of parameter P1 valued 0.9 (P12), a triangular fuzzy number of parameter P2 valued 0.9 (P22), and a triangular fuzzy number of parameter P3 valued 0.2 (P31). The lowest value of preference P fell to CO5, with a triangular fuzzy number of parameter P1 valued 0.1 (P11), a triangular fuzzy number of parameter P2 valued 0.2 (P21), and a triangular fuzzy number of parameter P3 valued 0.8 (P32). With an increase in the value of parameter P2, the preference increases more significantly than with an increase in the values of parameters P1 and P3. It means that P2 has the greatest impact on the assessment of the P model compared to the other two parameters.

Table 9. Overview of characteristic objects CO, vector P values for the Counter-Strike: Global Offensive (CS: GO) Players assessment model.

COi P1 P2 P3 P

CO1 0.1 0.2 0.2 0.1429
CO2 0.1 0.9 0.2 0.7143
CO3 0.9 0.2 0.2 0.4286
CO4 0.9 0.9 0.2 1.0000
CO5 0.1 0.2 0.8 0.0000
CO6 0.1 0.9 0.8 0.5714
CO7 0.9 0.2 0.8 0.2857
CO8 0.9 0.9 0.8 0.8571
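A player's final preference P is then obtained by COMET inference over Table 9; with only two fuzzy numbers per input, each membership reduces to a linear interpolation between the ends of the support. A sketch under those assumptions (function names are ours), which reproduces 's1mple''s final score of 0.7437 from his submodel scores listed later in Table 10:

```python
def two_point_mu(x, lo, hi):
    """Memberships of x in the two fuzzy numbers with cores lo and hi."""
    hi_mu = min(max((x - lo) / (hi - lo), 0.0), 1.0)
    return 1.0 - hi_mu, hi_mu

# Characteristic-object preferences from Table 9, keyed by (P1, P2, P3) cores
P_TABLE = {(0.1, 0.2, 0.2): 0.1429, (0.1, 0.9, 0.2): 0.7143,
           (0.9, 0.2, 0.2): 0.4286, (0.9, 0.9, 0.2): 1.0000,
           (0.1, 0.2, 0.8): 0.0000, (0.1, 0.9, 0.8): 0.5714,
           (0.9, 0.2, 0.8): 0.2857, (0.9, 0.9, 0.8): 0.8571}

def final_p(p1, p2, p3):
    mu1 = dict(zip((0.1, 0.9), two_point_mu(p1, 0.1, 0.9)))
    mu2 = dict(zip((0.2, 0.9), two_point_mu(p2, 0.2, 0.9)))
    mu3 = dict(zip((0.2, 0.8), two_point_mu(p3, 0.2, 0.8)))
    return sum(mu1[a] * mu2[b] * mu3[c] * pref
               for (a, b, c), pref in P_TABLE.items())

print(round(final_p(0.8825, 0.6849, 0.5125), 4))  # s1mple's P in Table 10
```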

For a better demonstration of the relevance of the obtained parameters to the final assessment model P, Spearman's rank correlation coefficient ρ was calculated. Between the P1, P2, and P3 models and the final ranking P it is equal, respectively, to 0.5122, 0.6679, and 0.3182. The correlation for the first two models is moderately strong, while for the third model it is weak. The visualization of the relation diagram of Effectiveness per round (P1) and the final assessment P is shown in Figure 19, the relation diagram of Frag gaining (P2) and the final assessment P is presented in Figure 20, and the relation diagram of Failures per round (P3) and the final assessment P is presented in Figure 21.


Figure 19. The relation diagram of the Effectiveness per round assessment (P1) for final assessment P.


Figure 20. The relation diagram of the Frag gaining assessment (P2) for final assessment P.

The sample data for the top 10 players are shown in Table 10. The final decision assessment model identified 's1mple' as the best player overall, while the worst rating was given to 'Hatz'. Analyzing the results of the three related models, we can conclude that the highest score in the first model (P1) was obtained by 's1mple' again, and is equal to 0.8825. In the P2 and P3 models, the best outcome was acquired by 'Jame', with the value 0.8423 as P2 and 0.7750 as P3. An interesting fact is that 'ZywOo', who took second position, was still better than 'Jame' even though he did not have the best score in any of the three models: 'ZywOo' received a much better result in the first model and had a score comparable to 'Jame' in the second model. Furthermore, 'huNter', with the fourth result, was close to beating 'Jame' and taking over his position. In comparison with 'Jame', 'huNter' had a much higher assessment in P1, getting average results in the rest of the models. It follows from this that the most critical models are P1 and P2.


Figure 21. The relation diagram of the Failures per round assessment (P3) for final assessment P.

Table 10. The performance table of the assessment models P1–P3, final assessment P with their rankings.

Pos. Name P1 P2 P3 P Rank P1 Rank P2 Rank P3 Rank P
1 s1mple 0.8825 0.6849 0.5125 0.7437 1 3 3 1
2 ZywOo 0.6788 0.7857 0.5625 0.7414 2 2 10 2
3 Jame 0.4163 0.8423 0.7750 0.6432 4 1 2 3
4 Jamppi 0.6513 0.5553 0.3500 0.5941 7 5 9 5
5 huNter 0.6025 0.5753 0.3375 0.5959 5 4 1 4
6 vsm 0.5825 0.4158 0.3500 0.4556 6 10 8 7
7 meyern 0.6225 0.4271 0.3750 0.4732 8 8 7 6
8 Kaze 0.4338 0.4723 0.5000 0.4129 3 9 4 8
9 Hatz 0.3725 0.4546 0.5625 0.3617 9 7 6 10
10 Sico 0.3300 0.5271 0.6875 0.3759 10 6 5 9

Spearman's rank correlation coefficient ρ between the P1, P2, and P3 models and the reference ranking is equal, respectively, to 0.7818, 0.7091, and 0.0061. The correlation for the first two models is moderately strong, while for the third model there is practically no correlation. However, Spearman's coefficient between the final model and the reference ranking is equal to 0.9636, which means that both rankings are strongly correlated and that the proposed structure of the assessment model defines the investigated relationships well.
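Because the final scores in Table 10 contain no ties, the reported 0.9636 can be checked with the classical formula ρ = 1 − 6Σd²/(n(n² − 1)): the final ranking differs from the reference order only by three adjacent swaps (positions 4/5, 6/7, and 9/10), giving Σd² = 6. A quick check:

```python
# Final P scores from Table 10, listed in reference-ranking order
final_scores = [0.7437, 0.7414, 0.6432, 0.5941, 0.5959,
                0.4556, 0.4732, 0.4129, 0.3617, 0.3759]

# Rank of each player by final P (1 = best); no ties, so plain ordering works
order = sorted(range(10), key=lambda i: -final_scores[i])
rank = [0] * 10
for pos, i in enumerate(order, start=1):
    rank[i] = pos

d2 = sum((rank[i] - (i + 1)) ** 2 for i in range(10))
rho = 1 - 6 * d2 / (10 * (10 ** 2 - 1))
print(d2, round(rho, 4))  # prints 6 0.9636
```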

6. Practical Exploitation of the Identified Model

This section proposes and applies our own players' assessment model, using a hierarchical structure with the application of the COMET method. It describes every related assessment model and shows the final summary and the obtained results for the top 40 professional players in the CS: GO game. The performance table is presented in Table 11.

Table 11. The performance table of the alternatives, selected criteria, and reference ranking.

Pos.  Name          C1    C2    C3    C4    C5    C6    Rating 2.0
1     s1mple        0.88  86.6  1958  1.50  0.09  0.59  1.34
2     ZywOo         0.83  85.3  4151  1.40  0.12  0.59  1.32
3     Jame          0.78  79.3  3505  1.51  0.09  0.52  1.29
4     Jamppi        0.83  83.1  2851  1.30  0.10  0.64  1.25
5     huNter        0.80  88.2  4100  1.22  0.15  0.66  1.24
6     vsm           0.80  86.6  2420  1.22  0.13  0.65  1.24
7     meyern        0.82  83.8  1728  1.28  0.12  0.64  1.24
8     Kaze          0.78  80.7  1750  1.32  0.10  0.60  1.24
9     Hatz          0.76  81.8  2017  1.28  0.15  0.60  1.23
10    Sico          0.76  78.4  1876  1.36  0.13  0.56  1.23
11    yuurih        0.79  87.1  3833  1.24  0.15  0.63  1.22
12    aliStair      0.77  78.7  2028  1.32  0.12  0.58  1.22
13    TenZ          0.80  85.5  2406  1.22  0.13  0.65  1.22
14    xsepower      0.76  75.6  3856  1.31  0.09  0.58  1.21
15    roeJ          0.80  87.7  2338  1.18  0.14  0.68  1.21
16    floppy        0.79  84.3  3251  1.32  0.14  0.65  1.21
17    Brehze        0.80  83.3  2492  1.22  0.12  0.65  1.21
18    KSCERATO      0.75  79.4  3637  1.35  0.12  0.56  1.21
19    electronic    0.76  83.9  1631  1.21  0.14  0.62  1.21
20    EliGE         0.77  84.8  2900  1.19  0.14  0.65  1.20
21    woxic         0.77  80.5  1699  1.25  0.11  0.61  1.20
22    kNgV-         0.78  79.8  1964  1.27  0.09  0.62  1.20
23    INS           0.74  83.8  1961  1.19  0.18  0.62  1.19
24    BnTeT         0.73  81.2  1516  1.23  0.15  0.59  1.19
25    erkaSt        0.79  83.3  2076  1.22  0.14  0.65  1.19
26    somedieyoung  0.78  84.6  2398  1.19  0.14  0.65  1.19
27    NAF           0.72  82.2  2717  1.18  0.16  0.61  1.19
28    NiKo          0.78  83.9  2066  1.18  0.12  0.67  1.18
29    dexter        0.76  82.2  2047  1.18  0.14  0.65  1.18
30    kennyS        0.77  78.6  2934  1.24  0.10  0.62  1.18
31    jks           0.76  82.7  1741  1.19  0.12  0.64  1.18
32    blameF        0.75  83.3  2474  1.18  0.16  0.64  1.18
33    shz           0.80  83.0  2009  1.24  0.13  0.65  1.18
34    nexa          0.75  80.3  3819  1.24  0.13  0.60  1.18
35    Bubzkji       0.76  85.2  3186  1.16  0.13  0.66  1.18
36    Texta         0.76  83.1  2097  1.15  0.16  0.66  1.18
37    coldzera      0.78  82.2  1767  1.22  0.11  0.64  1.18
38    frozen        0.75  81.2  2789  1.19  0.14  0.63  1.17
39    MarKE         0.74  80.9  1851  1.17  0.13  0.63  1.17
40    mantuu        0.77  82.0  2755  1.16  0.12  0.67  1.17

The performance table of the selected criteria C1, C2 and the assessment model P1 is presented in Table 12. To better demonstrate the relevance of the criteria to the P1 assessment model, Spearman's rank correlation coefficient ρ was calculated. The coefficient between the criteria C1, C2 and the reference ranking obtained by the P1 assessment model is equal to 0.6670 and 0.2420, respectively. The correlation for the first criterion is moderately strong, while for the second it is weak. The visualization of the relation diagram of Average kills per round (C1) and the P1 assessment model, as well as the relation diagram of Average damage per round (C2) and the P1 assessment model, is presented in Figure 22. The P2 and P3 models were analyzed as well, and their results are presented in an analogous way; the whole process can be found in Appendix A.

Table 12. The performance table of the selected criteria C1, C2 and assessment model P1.

Pos.  Name          C1    C2    P1
1     s1mple        0.88  86.6  0.8825
2     ZywOo         0.83  85.3  0.6788
3     Jame          0.78  79.3  0.4163
4     Jamppi        0.83  83.1  0.6513
5     huNter        0.80  88.2  0.6025
6     vsm           0.80  86.6  0.5825
7     meyern        0.82  83.8  0.6225
8     Kaze          0.78  80.7  0.4338
9     Hatz          0.76  81.8  0.3725
10    Sico          0.76  78.4  0.3300
11    yuurih        0.79  87.1  0.5513
12    aliStair      0.77  78.7  0.3712
13    TenZ          0.80  85.5  0.5688
14    xsepower      0.76  75.6  0.2950
15    roeJ          0.80  87.7  0.5963
16    floppy        0.79  84.3  0.5163
17    Brehze        0.80  83.3  0.5413
18    KSCERATO      0.75  79.4  0.3050
19    electronic    0.76  83.9  0.3988
20    EliGE         0.77  84.8  0.4475
21    woxic         0.77  80.5  0.3937
22    kNgV-         0.78  79.8  0.4225
23    INS           0.74  83.8  0.3225
24    BnTeT         0.73  81.2  0.2525
25    erkaSt        0.79  83.3  0.5038
26    somedieyoung  0.78  84.6  0.4825
27    NAF           0.72  82.2  0.2275
28    NiKo          0.78  83.9  0.4738
29    dexter        0.76  82.2  0.3775
30    kennyS        0.77  78.6  0.3700
31    jks           0.76  82.7  0.3838
32    blameF        0.75  83.3  0.3538
33    shz           0.80  83.0  0.5375
34    nexa          0.75  80.3  0.3163
35    Bubzkji       0.76  85.2  0.4150
36    Texta         0.76  83.1  0.3887
37    coldzera      0.78  82.2  0.4525
38    frozen        0.75  81.2  0.3275
39    MarKE         0.74  80.9  0.2863
40    mantuu        0.77  82.0  0.4125

The sample data for the top 40 players is shown in Table 13. The final decision assessment model identified 's1mple' as the best player overall, with an excellent value of 0.7437, while the worst rating was given to 'BnTeT', who received a value of 0.1021. Analyzing the results of the three related models, we can conclude that the highest score in the first model P1 was again obtained by 's1mple' and is equal to 0.8825, while the lowest score, received by 'NAF', was only 0.2275. In the P2 and P3 models, the best outcome was achieved by 'Jame', with values of 0.8423 for P2 and 0.7750 for P3. The worst assessment in P2 was given to 'BnTeT' with the value 0, and in the P3 model the lowest evaluation, 0.2500, was given to 'roeJ'. Interestingly, 'ZywOo', who took second position, was still better than 'Jame' even though he did not have the best score in any of the three models: 'ZywOo' received a much better result in the first model and a score comparable to 'Jame' in the second. Furthermore, 'huNter', with the fourth result, was close to beating 'Jame' and taking over his position. In comparison with 'Jame', 'huNter' had a much higher assessment in P1, obtaining average results in the remaining models. It follows that the most influential models are P1 and P2.

Table 13. The performance table of the related assessment models and final ranking.

Pos.  Name          P1      P2      P3      P       Ranking P
1     s1mple        0.8825  0.6849  0.5125  0.7437  1
2     ZywOo         0.6788  0.7857  0.5625  0.7414  2
3     Jame          0.4163  0.8423  0.7750  0.6432  3
4     Jamppi        0.6513  0.5553  0.3500  0.5941  5
5     huNter        0.6025  0.5753  0.3375  0.5959  4
6     vsm           0.5825  0.4158  0.3500  0.4556  16
7     meyern        0.6225  0.4271  0.3750  0.4732  11
8     Kaze          0.4338  0.4723  0.5000  0.4129  14
9     Hatz          0.3725  0.4546  0.5625  0.3617  18
10    Sico          0.3300  0.5271  0.6875  0.3759  7
11    yuurih        0.5513  0.5798  0.4500  0.5545  6
12    aliStair      0.3712  0.4985  0.6000  0.3882  13
13    TenZ          0.5688  0.4145  0.3500  0.4496  17
14    xsepower      0.2950  0.6613  0.5500  0.5057  34
15    roeJ          0.5963  0.3480  0.2500  0.4290  33
16    floppy        0.5163  0.6145  0.3625  0.5912  15
17    Brehze        0.5413  0.4225  0.3375  0.4494  30
18    KSCERATO      0.3050  0.6834  0.6750  0.4975  8
19    electronic    0.3988  0.3260  0.4750  0.2869  22
20    EliGE         0.4475  0.4162  0.3625  0.4048  20
21    woxic         0.3937  0.3923  0.4750  0.3392  25
22    kNgV-         0.4225  0.4389  0.4000  0.4055  35
23    INS           0.3225  0.3273  0.5250  0.2488  12
24    BnTeT         0.2525  0.0000  0.6000  0.1021  26
25    erkaSt        0.5038  0.3833  0.3625  0.3980  10
26    somedieyoung  0.4825  0.3688  0.3625  0.3786  40
27    NAF           0.2275  0.3840  0.5375  0.2583  9
28    NiKo          0.4738  0.3223  0.2625  0.3613  28
29    dexter        0.3775  0.3205  0.3625  0.3016  37
30    kennyS        0.3700  0.4945  0.4250  0.4261  21
31    jks           0.3838  0.3063  0.3750  0.2893  38
32    blameF        0.3538  0.3610  0.4250  0.3114  32
33    shz           0.5375  0.4067  0.3500  0.4322  29
34    nexa          0.3163  0.5785  0.5375  0.4487  31
35    Bubzkji       0.4150  0.3985  0.3125  0.3906  19
36    Texta         0.3887  0.2800  0.3500  0.2756  36
37    coldzera      0.4525  0.3538  0.3625  0.3556  27
38    frozen        0.3275  0.4058  0.4375  0.3355  23
39    MarKE         0.2863  0.2868  0.4250  0.2266  39
40    mantuu        0.4125  0.3575  0.2625  0.3682  24

To show the relation of the obtained sub-assessments to the final assessment model P, Spearman's rank correlation coefficient ρ was calculated. The coefficient between the P1, P2, and P3 models and the final ranking P is equal to 0.5122, 0.6679, and 0.3182, respectively. The correlation for the first two models is moderately strong, while for the third model it is weak. The visualization of the relation diagram of Effectiveness per round (P1) and final assessment P is shown in Figure 23, the relation diagram of Frag gaining (P2) and final assessment P is presented in Figure 24, and the relation diagram of Failures per round (P3) and final assessment P is presented in Figure 25.


Figure 22. The relation diagram of Average kills per round (C1) for assessment P1 (left side) and Average damage per round (C2) for assessment P1 (right side).


Figure 23. The relation diagram of the Effectiveness per round assessment (P1) for final assessment P.


Figure 24. The relation diagram of the Frag gaining assessment (P2) for final assessment P.


Figure 25. The relation diagram of the Failures per round assessment (P3) for final assessment P.

ρ Spearman’s coefficient between the final model and reference ranking is equal to 0.5304, which means that both rankings are moderately correlated. The visualization of the relation diagram of final ranking P and Rating 2.0 is presented in Figure 26.


Figure 26. The relation diagram of the final ranking (P) for reference ranking Rating 2.0.

7. Conclusions

The objective of this work was to identify a model for ranking players in the popular e-sport game Counter-Strike: Global Offensive (CS: GO) using an appropriate multi-criteria decision-making method. For verification purposes, the obtained ranking of players was compared to the existing ranking created by HLTV, called Rating 2.0, which is the most prestigious ranking for this game. It was decided to build a ranking first for the top 10 and later for the top 40 players. An additional purpose of this paper is to familiarize readers with the notion of e-sport and to show that it is a worthwhile and future-proof field.

The main contribution of this work is a proposal of a CS: GO player assessment model composed of three related evaluation models. It was necessary to choose the right method, build the associated models for the players, and then calculate the players' assessments of their performances. COMET's resistance to the rank reversal paradox is a significant feature: it does not matter which set of players is evaluated, because each player will always receive the same value. Comparing characteristic objects is also easier than comparing players directly, because the distances between characteristic objects are larger than those between the compared players. The identification of the CS: GO player assessment model additionally allows evaluating any set of players in the considered numerical space without involving the expert in the evaluation process again, because the model is defined for a certain set of player characteristics. Another original feature of the COMET method is that it does not rely on global criterion weights, which determine only the average significance of a criterion for the final assessment; linear weighting of non-linear problems reduces the accuracy of the results. That is why the calculation procedure of this method resigns from arbitrary weights for specific criteria.

Therefore, the COMET method was chosen as the best approach to identify the players' assessment model. The results demonstrate that the model can be utilized to evaluate players, generate the ranking, and select the best CS: GO player. The positions of incorrectly classified players are quite close to each other. Spearman's ρ coefficient between the final model and the reference ranking is equal to 0.5304, which means that the two rankings are moderately correlated. Despite this average result, the ranking may be considered sufficient, because the top positions of the classification fit the reference ranking more closely. The proposed structure of the assessment model satisfactorily defines the investigated relationships. Future work should concentrate on improving model effectiveness. Adding more input criteria, and thus increasing the number of related assessment models, could make the final ranking more reliable and better able to reflect players' real talent. Moreover, future empirical investigation should focus not only on the CS: GO game but also on other e-sport games.
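The rank-reversal resistance noted above can be made concrete: because an identified COMET model scores every alternative against a fixed rule base, a player's value depends only on his own characteristics, so adding or removing players cannot change anyone's score. A minimal demonstration (the scoring function below is an arbitrary fixed stand-in, not the identified model; the (P1, P2, P3) triples are taken from Table 10):

```python
def score(p1, p2, p3):
    """Arbitrary fixed stand-in for an identified assessment model: the value
    depends only on the player's own sub-assessments, never on other players."""
    return (p1 * p2) ** 0.5 * (0.7 + 0.3 * p3)

# (P1, P2, P3) sub-assessments for three players, from Table 10.
players = {
    "s1mple": (0.8825, 0.6849, 0.5125),
    "ZywOo":  (0.6788, 0.7857, 0.5625),
    "Jame":   (0.4163, 0.8423, 0.7750),
}

full = {name: score(*v) for name, v in players.items()}
# Re-evaluating a reduced set of alternatives yields identical values,
# so the relative order of the remaining players cannot reverse.
subset = {name: score(*players[name]) for name in ("s1mple", "Jame")}
assert all(subset[n] == full[n] for n in subset)
```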

Author Contributions: Conceptualization, K.U. and W.S.; methodology, K.U. and W.S.; software, K.U.; validation, J.W. and W.S.; formal analysis, J.W. and W.S.; investigation, K.U.; resources, K.U.; data curation, K.U.; writing—original draft preparation, K.U.; writing—review and editing, W.S. and J.W.; visualization, W.S.; supervision, W.S. and J.W.; project administration, W.S.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding: The work was supported by the National Science Centre, Decision number UMO-2018/29/B/HS4/02725 (W.S.), and by the project financed within the framework of the program of the Minister of Science and Higher Education under the name "Regional Excellence Initiative" in the years 2019–2022, Project Number 001/RID/2018/19; the amount of financing: PLN 10.684.000,00 (J.W.).

Acknowledgments: The authors would like to thank the editor and the anonymous reviewers, whose insightful comments and constructive suggestions helped us to significantly improve the quality of this paper.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

CS: GO           Counter-Strike: Global Offensive
COMET            The Characteristic Objects Method
MEJ              Matrix of Expert Judgment
SJ               vector of the Summed Judgments
e-sport/eSports  electronic sports

Appendix A

Spearman's rank correlation coefficient ρ between the criteria C3, C4 and the reference ranking obtained by the P2 assessment model is equal to 0.1283 and 0.7555, respectively. There is no correlation for the first criterion, while the second one is moderately correlated with the reference ranking. The visualization of the relation diagram of Total kills (C3) and the P2 assessment model, as well as the relation diagram of K/D ratio (C4) and the P2 assessment model, is shown in Figure A1.

Table A1. The performance table of the selected criteria C3, C4 and assessment model P2.

Pos.  Name          C3     C4    P2
1     s1mple        0.168  1.50  0.6849
2     ZywOo         1.000  1.40  0.7857
3     Jame          0.755  1.51  0.8423
4     Jamppi        0.507  1.30  0.5553
5     huNter        0.981  1.22  0.5753
6     vsm           0.343  1.22  0.4158
7     meyern        0.080  1.28  0.4271
8     Kaze          0.089  1.32  0.4723
9     Hatz          0.190  1.28  0.4546
10    Sico          0.137  1.36  0.5271
11    yuurih        0.879  1.24  0.5798
12    aliStair      0.194  1.32  0.4985
13    TenZ          0.338  1.22  0.4145
14    xsepower      0.888  1.31  0.6613
15    roeJ          0.312  1.18  0.3480
16    floppy        0.658  1.32  0.6145
17    Brehze        0.370  1.22  0.4225
18    KSCERATO      0.805  1.35  0.6834
19    electronic    0.044  1.21  0.3260
20    EliGE         0.525  1.19  0.4162
21    woxic         0.069  1.25  0.3923
22    kNgV-         0.170  1.27  0.4389
23    INS           0.169  1.19  0.3273
24    BnTeT         0.000  1.23  0.0000
25    erkaSt        0.213  1.22  0.3833
26    somedieyoung  0.335  1.19  0.3688
27    NAF           0.456  1.18  0.3840
28    NiKo          0.209  1.18  0.3223
29    dexter        0.202  1.18  0.3205
30    kennyS        0.538  1.24  0.4945
31    jks           0.085  1.19  0.3063
32    blameF        0.364  1.18  0.3610
33    shz           0.187  1.24  0.4067
34    nexa          0.874  1.24  0.5785
35    Bubzkji       0.634  1.16  0.3985
36    Texta         0.220  1.15  0.2800
37    coldzera      0.095  1.22  0.3538
38    frozen        0.483  1.19  0.4058
39    MarKE         0.127  1.17  0.2868
40    mantuu        0.470  1.16  0.3575


Figure A1. The relation diagram of Total kills (C3) for assessment P2 (left side) and K/D Ratio (C4) for assessment P2 (right side).

ρ Spearman’s rank correlation coefficient between the criteria C5, C6, and reference ranking obtained by P3 assessment model is equal to −0.1225 and −0.2722. The correlation between those two criteria and reference ranking is negatively weak. The visualization of the relation diagram of Average assists per round (C5) and P3 assessment model, as well as the relation diagram of Average deaths per round (C6) and P3 assessment model, is shown in Figure A2.

Table A2. The performance table of the selected criteria C5, C6 and assessment model P3.

Pos.  Name          C5    C6    P3
1     s1mple        0.09  0.59  0.5125
2     ZywOo         0.12  0.59  0.5625
3     Jame          0.09  0.52  0.7750
4     Jamppi        0.10  0.64  0.3500
5     huNter        0.15  0.66  0.3375
6     vsm           0.13  0.65  0.3500
7     meyern        0.12  0.64  0.3750
8     Kaze          0.10  0.60  0.5000
9     Hatz          0.15  0.60  0.5625
10    Sico          0.13  0.56  0.6875
11    yuurih        0.15  0.63  0.4500
12    aliStair      0.12  0.58  0.6000
13    TenZ          0.13  0.65  0.3500
14    xsepower      0.09  0.58  0.5500
15    roeJ          0.14  0.68  0.2500
16    floppy        0.14  0.65  0.3625
17    Brehze        0.12  0.65  0.3375
18    KSCERATO      0.12  0.56  0.6750
19    electronic    0.14  0.62  0.4750
20    EliGE         0.14  0.65  0.3625
21    woxic         0.11  0.61  0.4750
22    kNgV-         0.09  0.62  0.4000
23    INS           0.18  0.62  0.5250
24    BnTeT         0.15  0.59  0.6000
25    erkaSt        0.14  0.65  0.3625
26    somedieyoung  0.14  0.65  0.3625
27    NAF           0.16  0.61  0.5375
28    NiKo          0.12  0.67  0.2625
29    dexter        0.14  0.65  0.3625
30    kennyS        0.10  0.62  0.4250
31    jks           0.12  0.64  0.3750
32    blameF        0.16  0.64  0.4250
33    shz           0.13  0.65  0.3500
34    nexa          0.13  0.60  0.5375
35    Bubzkji       0.13  0.66  0.3125
36    Texta         0.16  0.66  0.3500
37    coldzera      0.11  0.64  0.3625
38    frozen        0.14  0.63  0.4375
39    MarKE         0.13  0.63  0.4250
40    mantuu        0.12  0.67  0.2625


Figure A2. The relation diagram of Average assists per round (C5) for assessment P3 (left side) and Average deaths per round (C6) for assessment P3 (right side).


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).