
Genetic Algorithm as Machine Learning for Profiles Recognition

Yann Carbonne (1) and Christelle Jacob (2)
(1) University of Technology of Troyes, 12 rue Marie Curie, Troyes, France
(2) Altran Technologies, 4 avenue Didier Daurat, Blagnac, France

Keywords: Genetic Algorithm, Machine Learning, Natural Language Processing, Profiles Recognition, Clustering.

Abstract: Persons are often asked to provide information about themselves. These data are very heterogeneous and result in as many "profiles" as contexts. Sorting a large amount of profiles from different contexts and assigning them back to a specific individual is quite a difficult problem. Semantic processing and machine learning are key tools to achieve this goal. This paper describes a framework that addresses this issue by means of concepts and algorithms selected from different Artificial Intelligence fields. A Vector Space Model is first customized to transpose semantic information into a mathematical model. This model then goes through a Genetic Algorithm (GA), which is used as a supervised learning algorithm for training a computer to determine how similar two profiles are. Within GAs, this study introduces a new reproduction method (Best Together) and compares it to some usual ones (Wheel, Binary Tournament). This paper also evaluates the accuracy of the GA's predictions for profile clustering through the computation of a similarity score, as well as its ability to classify two profiles as similar or non-similar. We believe that the overall methodology can be used for any kind of source using profiles and, more generally, for similar data recognition.

1 INTRODUCTION

For several years, we have witnessed the exponential growth of data worldwide. According to experts, 90% of the world's data has been generated over the last two years. Humans cannot handle this large amount of data, hence machines come to the foreground for processing it and extracting meaningful information from it.

In this paper, we will focus on a special kind of data: data concerning people. These data can be very heterogeneous due to the diversity of their origins. Data come from several sources: public (social media, forums, etc.) or private (employee databases, customer databases, etc.).

Despite their diversity, collected data are processed the same way: each user (a real person) is matched with one or several profiles. A profile may contain global information (city, gender, …) or specific information (work history, …). The volume of information may also be dense or sparse.

This paper differs from existing studies about profile recognition in social networks (Rawashdeh and Ralescu, 2014) because it does not focus on similarity between profiles within a single social network but between different social networks. Even if existing solutions, such as the use of Vector Space Models (VSM) for information retrieval (Salton, 1968), inspired our study, they are not directly related.

The problem is to identify the same real person across different profiles from different sources. To do so, the objective is to teach a computer to automatically answer the question: "Are these two profiles about the same real person?". Just as it would be for a human, the teaching is split into two phases. During the first phase, the computer uses a human-made data set to train. Within this training set, the question above has been answered for each possible combination of profiles. The training should be done with various profiles from different sources to be relevant. After the training phase, the computer is able to predict a similarity score between two profiles. The performance will be determined through the analysis of predefined criteria for the predictions.
In this study, we investigate how to determine a person's profile using a combination of natural language processing, genetic algorithms and machine learning. In addition, we propose a new reproduction mechanism, named here Best Together (BT). The new reproduction method is compared with other methods such as Wheel and Binary Tournament. Results indicate that BT is a promising strategy for profile recognition.

This paper is organized as follows: Section 2 introduces how to convert a profile (a set of semantic information) into a mathematical model understandable by a computer. Section 3 describes the overview of the genetic algorithm used to train the computer for profile recognition. Section 4 analyses the results of the predictions made by our model and is followed by our conclusions.

2 MATHEMATICS DESIGN FOR PROFILES

2.1 Representation of a Profile

First of all, let us introduce the mathematical model used for profile representation. A profile will be considered as a set of labelled semantic information. For example:

Table 1: Profile example.

    Information   Label
    Foo           firstname
    Bar           lastname
    Paris         city
    beer          like

In order to be able to compare several profiles within a same frame of reference, we will use Vector Space Models (VSM) of semantics. Indeed, computers have difficulty understanding semantic information, but VSM provides a solution to this problem. VSMs have been widely used in different fields, as a recent survey highlights (Turney and Pantel, 2010). In particular, they have been successfully used in the field of Machine Learning for classification (Dasarathy, 1991) and also for clustering (A. K. Jain, 1999).

Within a VSM, each profile Px will be transposed into a vector Vx with Nx dimensions, which are all the pieces of information in the profile Px. For example, the profile above will be a vector with the dimensions "Foo", "Bar", "Paris" and "beer". The value of Vx in a specific dimension δ will be a weighting derived from the label matched with the information δ.

The VSM consists in creating a new vector space of M dimensions. For two vectors Vx and Vy, the dimension M is set as:

    M = Nx ∪ Ny    (1)

Whenever a vector is transposed into this VSM, it takes the value 0 on its non-existing dimensions.

Let us illustrate the use of the VSM with an example, considering two profiles P1 and P2:

Table 2: Contents for profiles P1 and P2.

    P1                      P2
    Foo     firstname       Foo     firstname
    Bar     lastname        Bar     lastname
    Google  organisation    Horses  like
    Paris   city            Google  like

For the purpose of this example, the weightings for each label are:

Table 3: Weighting example.

    Label          Weight
    firstname      0.7
    lastname       0.8
    organisation   0.4
    city           0.5
    like           0.1

The associated VSM, with the vector V1 for P1 and V2 for P2, will be:

Table 4: VSM for V1 and V2.

    Dimension   V1    V2
    Foo         0.7   0.7
    Bar         0.8   0.8
    Google      0.4   0.1
    Paris       0.5   0
    Horses      0     0.1

The advantage of this representation is that it keeps the semantic information in the forefront of the mathematical analysis. In the example above, both profiles contain the information "Google", but for one it is labelled as "organisation" and for the other as "like". Even with these different labels, the model will consider the information important, but at a different scale. By intuition, we would like to set the label "organisation" at a higher value than the label "like", because the former is usually more relevant than the latter for distinguishing two profiles. We acquired this intuition through experience, and we would like the computer to get the same "intuition" for any kind of label.
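As an illustration of this construction, the following Python sketch (ours, not part of the original paper; function and variable names are our own) builds the vectors V1 and V2 of Table 4 from the profiles of Table 2 and the label weights of Table 3.

# Illustrative sketch: building the VSM of Section 2.1.
# A profile is a list of (information, label) pairs; the weighting maps
# each label to its relevance in [0, 1].

def build_vsm(profile_x, profile_y, weighting):
    """Transpose two labelled profiles into vectors over the union of their
    dimensions (Eq. 1); a missing dimension takes the value 0."""
    dims = []
    for info, _label in profile_x + profile_y:
        if info not in dims:
            dims.append(info)            # M = Nx U Ny, keeping insertion order
    def vector(profile):
        values = {info: weighting[label] for info, label in profile}
        return [values.get(dim, 0.0) for dim in dims]
    return dims, vector(profile_x), vector(profile_y)

# Example of Tables 2 and 3
p1 = [("Foo", "firstname"), ("Bar", "lastname"),
      ("Google", "organisation"), ("Paris", "city")]
p2 = [("Foo", "firstname"), ("Bar", "lastname"),
      ("Horses", "like"), ("Google", "like")]
weights = {"firstname": 0.7, "lastname": 0.8,
           "organisation": 0.4, "city": 0.5, "like": 0.1}

dims, v1, v2 = build_vsm(p1, p2, weights)
print(dims)  # ['Foo', 'Bar', 'Google', 'Paris', 'Horses']
print(v1)    # [0.7, 0.8, 0.4, 0.5, 0.0]  -> column V1 of Table 4
print(v2)    # [0.7, 0.8, 0.1, 0.0, 0.1]  -> column V2 of Table 4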
2.2 Similarity Function

Once the VSM for two vectors Vx and Vy is created, the usual method is to compute their similarity with the cosine (Turney and Pantel, 2010):

    similarity(Vx, Vy) = cos(α)    (2)

where α is the angle between Vx and Vy. This similarity rate gives a good clue of how close two vectors are within their vector space, and therefore how similar two profiles are.

However, a human intuition used in profile recognition is missing from this computation. Sometimes, two profiles do not contain enough relevant information to evaluate their similarity. For example, if two profiles like the same singer and only contain this piece of information, it is not enough to determine that they both concern the same person. Through experimentation, we noticed that the norm of a vector is a good metric to evaluate the relevance of a profile. Therefore, the similarity rate is smoothed with the average norm of the two vectors:

    similarity(Vx, Vy) = cos(α) * (‖Vx‖ + ‖Vy‖) / 2    (3)

where ‖V‖ is the Euclidean norm of the vector V.

3 GENETIC ALGORITHM

A GA can also be used as a Machine Learning (ML) algorithm and has been shown to be efficient for this purpose (Goldberg, 1989). The idea behind this is that nature-like algorithms can demonstrate, in some cases in the ML field, a higher efficiency than human-designed algorithms (Kluwer Academic Publishers, 2001). Indeed, actual evolutionary processes have succeeded in solving highly complex problems, as argued through probabilistic arguments (Moorhead and Kaplan, 1967).

In our case, the GA will be used to determine an adequate set of weightings for the labels present in a training set. Our training set is composed of similarities between profiles: two profiles are either similar (output = 1) or not similar (output = 0).

3.1 Genetic Representation

The genotype of each chromosome of the population will be the group of all labels in the training set. Each label is defined as a gene, and the weighting for a specific label is the allele of the linked gene. The weighting is a value in [0, 1]; it can be interpreted as the relevance of a label and it reaches its best at value 1.
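To make the link between the similarity function and the genetic representation concrete, here is a minimal Python sketch (ours, not from the paper): the smoothed similarity of Eqs. (2) and (3), and the random initialisation of one chromosome as a label-to-weight mapping in [0, 1] as described in Section 3.1. The guard against zero-norm vectors and all names are our own assumptions.

import math
import random

def smoothed_similarity(v1, v2):
    """Cosine of the angle between v1 and v2 (Eq. 2), smoothed by the
    average Euclidean norm of the two vectors (Eq. 3)."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    if n1 == 0.0 or n2 == 0.0:           # empty profile: no similarity evidence (our guard)
        return 0.0
    cosine = dot / (n1 * n2)             # Eq. (2)
    return cosine * (n1 + n2) / 2.0      # Eq. (3)

def random_chromosome(labels):
    """One chromosome: a weighting (allele) in [0, 1] for every label (gene)."""
    return {label: random.random() for label in labels}

# Scoring the example pair of Table 4 (weights of Table 3 already applied)
v1 = [0.7, 0.8, 0.4, 0.5, 0.0]
v2 = [0.7, 0.8, 0.1, 0.0, 0.1]
print(smoothed_similarity(v1, v2))

# A GA individual over the same labels, to be evaluated against the training set
chromosome = random_chromosome(["firstname", "lastname",
                                "organisation", "city", "like"])
print(chromosome)

Presumably, the weights carried by a chromosome would play the role of Table 3 when the training pairs are scored; the fitness and reproduction operators themselves are described in the remainder of the paper.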