SHS Web of Conferences 69, 00150 (2019) https://doi.org/10.1051/shsconf/20196900150
CILDIAH-2019

Machine Learning Ethics in the Context of Justice Intuition

Natalia Mamedova1,*, Arkadiy Urintsov1, Nina Komleva1, Olga Staroverova1 and Boris Fedorov1
1Plekhanov Russian University of Economics, 36, Stremyanny lane, 117997, Moscow, Russia

© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).

Abstract. The article considers the ethics of machine learning in connection with such categories of social philosophy as justice, conviction and value. The ethics of machine learning is presented as a special case of a mathematical model of a dilemma: whether the “learning” algorithm corresponds to the intuition of justice or not. It has been established that the use of machine learning for decision making has a prospect only within the field of the intuition of justice, based on fair algorithms. It is proposed to determine the effectiveness of a decision by taking into account the ethical component and the given ethical restrictions. The cyclical nature of the relationship between the algorithmization subprocesses in machine learning and the stages of data mining projects conducted under the CRISP-DM methodology has been established. The value of ethical constraints for each of the algorithmization subprocesses has been determined. The provisions of the Theory of Constraints are applied to find a way to measure the effect of ethical restrictions on the “learning” algorithm.

1 Introduction

The intuition of justice is regarded as a sincere, emotionally saturated belief in the justice of some position. This belief is a prism through which the subject refracts any incoming information and makes intuitively obvious decisions. What everyone considers fair is decided individually for oneself, but the process of developing the intuition of justice is entirely determined by the environment in which the individual develops. The individual's own nature and the range of external influences ultimately determine whether each position is taken on faith or rethought, accepted or denied. Thus, the pivot points against which any concepts of justice generated by society should be compared fall into the field of the intuition of justice of an individual [1].

Studies in the history of philosophy show that concepts of justice whose cornerstone is value or benefit (individual or collective) are readily recognized by an individual as intuitively valid, as proven and reliable [2]. Support for a concept of justice by a multitude of individuals gives it an egregorial character, creating an even more stable foundation for conviction. As a result, a conditionally permanent conviction complex (reliable, intuitively obvious, just) is formed, localized in the field of the intuition of justice.

Thus, a local conviction that has value within a society is recognized as fair [3]. And yet freedom of thought is the highest iteration of the intuition of justice: it is capable of leveling the individual's egregorial dependence and of going beyond a fair local judgment. We owe to freedom of thought all the results of human activity, both the best and the worst.

Accepting freedom of thought is an authentic intuition of justice of one individual in relation to himself and others. But for artificial intelligence (hereinafter AI) freedom of thought is not presupposed, neither for a strong AI nor for a weak AI. First, the prospect of free thinking for AI is seen as a frightening way of developing the future [4]. Second, the level of technology development is insufficient to create a strong (true, general) AI - a machine capable of thinking, learning new things and being aware of itself. As for weak (narrow) AI, its freedom of thinking is limited by the framework of the machine learning algorithm, according to which the machine solves certain human problems. This happens during machine learning, without explicit programming, by learning from precedents that form a training set. Such a “learning” algorithm does exactly what it can do, and it can do only what the mathematical model provides for. Further, we consider AI only in the aspect of machine learning (weak AI), excluding the field of futurology.

The mathematical model of the “learning” algorithm is a product of human thinking, and in the process of its development ethical deviations can be made. Using the logical method of reduction to absurdity (reductio ad absurdum), we define the maximum ethical deviation as the complete absence of ethical restrictions in the “learning” algorithm. Other ethical deviations can also be placed on this scale, such as “pollution” or “poisoning” of the initial data of the training sample, manipulation of feedback loops and false correlations, or the unethical nature of the task itself.

In most cases it is not difficult to identify such deviations, since the intuition of justice works flawlessly; but, as we have already mentioned, the limits of justice are determined individually. Therefore, the parameters of the acceptable level of ethical deviations are variable, and they are difficult to formalize in machine learning. Nevertheless, this must be done so that the “learning” algorithm itself and the algorithmization process fit into the logic of the egregorial concept of justice accepted in society.

In other words, the ethical component of the “learning” algorithm must meet the intuition of justice of individuals and must have value within society. Therefore, the ethics of machine learning, namely the ethical limitations of the “learning” algorithm, comes to the forefront. Two approaches are applicable to machine learning ethics. According to the first approach, it is aimed at expanding the knowledge of human ethics with the help of “learning” algorithms. In accordance with the second approach, the ethical component is used in the development of machine learning algorithms. In this study, the second approach is applied.

2 Materials and methods

Machine learning is a branch (subset) of AI. On the one hand, it examines the algorithms and statistical models used by computer systems to effectively perform a specific task [5]. A special feature is that, instead of explicit instructions, the computer system relies on patterns (dependencies) identified in data. On the other hand, this branch of AI studies methods for constructing algorithms capable of learning [6]. This involves developing computer programs that can access and use data for training.
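To make the contrast between explicit instructions and identified patterns concrete, the following minimal Python sketch is offered purely as an illustration; the loan-approval scenario, the income figures and the helper learn_threshold are hypothetical and are not part of the study. It places a hand-written decision rule next to the same kind of rule whose boundary is recovered from labelled precedents.

```python
# Illustrative sketch: an explicitly programmed rule versus the same kind of rule
# recovered from precedents (labelled examples). All numbers are invented.

def approve_by_instruction(income: float) -> bool:
    # Explicit instruction: the threshold is fixed in advance by the programmer.
    return income > 50_000

def learn_threshold(precedents: list[tuple[float, bool]]) -> float:
    # "Learning": place the decision boundary midway between the approved and
    # rejected incomes observed in the training data.
    approved = [income for income, label in precedents if label]
    rejected = [income for income, label in precedents if not label]
    return (min(approved) + max(rejected)) / 2

precedents = [(30_000, False), (42_000, False), (55_000, True), (70_000, True)]
threshold = learn_threshold(precedents)
print(threshold)               # 48500.0 - a dependency extracted from the data
print(55_000 > threshold)      # True: the decision follows the learned pattern
```

The only point of the contrast is that in the second case the dependency is extracted from the precedents themselves rather than prescribed by the programmer.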
The learning process begins with observations or baseline data; however, the search for patterns and decision making based on precedents are carried out using predetermined algorithms. Considering that the ethical component of machine learning is formed directly by algorithmization, we will focus on the concept of the algorithm in more detail.

It is generally accepted to use the concept of an algorithm as a computational procedure which, according to a training sample, sets up a model [7]. The result of the procedure is a function that establishes […]

[The search] for patterns in the data carries the potential for making the best decision in the future based on the examples (precedents) set today. However, the best solution should take into account ethical restrictions; otherwise the solution proposed by the machine may be contrary to the intuition of justice. From here we can conclude that an ethical restriction is formulated by imposing a categorical ban on the commission of an operation (action, choice).

This characteristic of the ethical constraint allows us to go further and determine that the ethics of machine learning is a special case of a mathematical model of a dilemma, since it involves only two answers - yes or no. Two questions arise: at what stage or stages of the algorithmization process is it necessary to establish ethical restrictions, and what focal points need to be determined in order to measure ethical restrictions.
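For illustration only, the dilemma form of an ethical restriction can be sketched as a predicate that answers nothing but yes or no for a candidate decision, with a decision admissible only if no restriction imposes its ban. The Python sketch below is a simplified assumption, not a formal model from this study: the particular restrictions (a ban on using protected attributes, a hypothetical harm threshold), the score field and the helper best_admissible are invented for the example.

```python
# Illustrative sketch: each ethical restriction answers only yes/no for a candidate
# decision (the dilemma form), and a decision is admissible only if no restriction
# imposes its categorical ban. The restrictions and fields are hypothetical.
from typing import Callable, Iterable

Decision = dict                                  # e.g. {"action": "B", "score": 0.84, ...}
Restriction = Callable[[Decision], bool]         # True = permitted, False = categorical ban

def no_protected_attributes(decision: Decision) -> bool:
    return not decision.get("uses_protected_attr", False)

def no_serious_harm(decision: Decision) -> bool:
    return decision.get("harm", 0.0) < 0.5       # hypothetical harm estimate in [0, 1]

RESTRICTIONS: list[Restriction] = [no_protected_attributes, no_serious_harm]

def best_admissible(candidates: Iterable[Decision]) -> Decision | None:
    """Return the highest-scoring decision to which every restriction answers 'yes'."""
    admissible = [d for d in candidates if all(r(d) for r in RESTRICTIONS)]
    return max(admissible, key=lambda d: d["score"]) if admissible else None

candidates = [
    {"action": "A", "score": 0.91, "uses_protected_attr": True,  "harm": 0.1},
    {"action": "B", "score": 0.84, "uses_protected_attr": False, "harm": 0.2},
    {"action": "C", "score": 0.88, "uses_protected_attr": False, "harm": 0.7},
]
print(best_admissible(candidates))   # decision "B": the best candidate that no ban vetoes
```

Where such bans should enter the algorithmization process is precisely the first of the two questions posed above.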
Answering the first question, it is necessary first to present the general process of machine learning and then to focus on the process of algorithmization in machine learning. Projecting the ethical component onto the algorithmization process will help answer the question of at what stage or stages ethical restrictions are necessary or appropriate.

Objectively, the core of machine learning is the construction of a model of the general dependence (patterns, relationships) in data. Unlike formalized expert knowledge, which is transformed by deductive learning [10], the model of general dependence is formed by learning from precedents - inductive learning. Learning from precedents is synonymous with an earlier notion - recovering dependencies from empirical data [11]. This concept is related to computational learning theory (COLT), which studies mathematical dependencies and quantitative restrictions on the parameters of the maximum complexity of a model and the reliability of data.

The process begins with the selection of precedents. A precedent is considered relevant if it corresponds to particular data that describe it. A set of descriptions of precedents constitutes a training sample. Next, a learning algorithm is formed that reveals a general dependence (pattern, relationship) of the data on all […]
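The chain just outlined - precedent descriptions forming a training sample, a learning algorithm applied to it, and a resulting function expressing a general dependence - can be sketched minimally in Python. The sketch is an assumption for illustration: a nearest-precedent rule stands in for the learning algorithm, and the two-feature descriptions and labels are invented.

```python
# Illustrative sketch of learning from precedents: descriptions of precedents form
# a training sample, a (deliberately trivial) learning algorithm is applied to it,
# and the result is a function expressing a general dependence.
from math import dist

# Training sample: each precedent is (description, outcome).
training_sample = [
    ((1.0, 0.2), "approve"),
    ((0.9, 0.4), "approve"),
    ((0.2, 0.8), "reject"),
    ((0.1, 0.9), "reject"),
]

def learn(sample):
    """A minimal learning algorithm: remember the precedents and decide new cases
    by the nearest precedent (1-nearest-neighbour)."""
    def general_dependence(description):
        nearest = min(sample, key=lambda precedent: dist(precedent[0], description))
        return nearest[1]
    return general_dependence

model = learn(training_sample)     # the "learning" algorithm sets up a model
print(model((0.95, 0.3)))          # approve - decided from the nearest precedent
print(model((0.15, 0.85)))         # reject
```

Here the inner function general_dependence plays the role of the model set up by the “learning” algorithm from the training sample; an ethical restriction of the kind sketched earlier would have to be imposed on the decisions this function returns.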