
Using Latent Semantic Analysis to Identify Similarities in Source Code to Support Program Understanding

Jonathan I. Maletic, Andrian Marcus
Division of Computer Science, The Department of Mathematical Sciences
The University of Memphis, Campus Box 526429, Memphis TN 38152
[email protected], [email protected]

Abstract

The paper describes the results of applying Latent Semantic Analysis (LSA), an advanced information retrieval method, to program source code and associated documentation. Latent Semantic Analysis is a corpus-based statistical method for inducing and representing aspects of the meanings of words and passages (of natural language) reflected in their usage. This methodology is assessed for application to the domain of software components (i.e., source code and its accompanying documentation). Here LSA is used as the basis to cluster software components. This clustering is used to assist in the understanding of a nontrivial software system, namely a version of Mosaic. Applying Latent Semantic Analysis to the domain of source code and internal documentation for the support of program understanding is a new application of this method and a departure from its normal application domain of natural language.

1. Introduction

The tasks of maintenance and reengineering of an existing software system require a great deal of effort to be spent on understanding the source code to determine the behavior, organization, and architecture of the software not reflected in the documentation. The software engineer must examine both the structural aspect of the source code (e.g., programming language syntax) and the nature of the problem domain (e.g., comments, documentation, and variable names) to extract the information needed to fully understand any part of the system [3, 8, 14, 18, 19].
In the research presented here, the latter aspect is being examined, and tools to help automate this part of the understanding process are being investigated. Experiments using an advanced information retrieval technique, Latent Semantic Analysis (LSA), to identify similarities between pieces of source code are being conducted. The objective of this research is to determine how well such a method can be used to support aspects of program understanding, comprehension, and reengineering of software systems.

Latent Semantic Analysis (LSA) [1, 12] is a corpus-based statistical method for inducing and representing aspects of the meanings of words and passages (of natural language) reflected in their usage. The method generates a real-valued vector description for documents of text. This representation can be used to compare and index documents using a variety of similarity measures. By applying LSA to source code and its associated internal documentation (i.e., comments), candidate components can be compared with respect to these similarity measures. A number of metrics are defined based on these similarity measures to help support program understanding.

Results have shown [1, 12] that LSA captures significant portions of the meaning not only of individual words but also of whole passages such as sentences, paragraphs, and short essays. Basically, the central concept of LSA is that the information about the word contexts in which a particular word appears or does not appear provides a set of mutual constraints that determines the similarity of meaning of sets of words to each other.

This research is attempting to address some of the following issues: Does LSA apply equally well to the domain of source code and internal documentation? What is the best granularity of the software documents/components for use with LSA? Can LSA be utilized to help support reverse engineering and program understanding? The following section gives a brief overview of different approaches to information retrieval. A detailed description of LSA is then given, along with some of the reasoning behind choosing LSA. The results of applying this method to a reasonably sized software system (Mosaic) are then presented. The measures derived by LSA are used to cluster the source code (at a function level) into semantically similar groups. Examples of how this clustering helps support the program understanding process are also given.

2. The LSA Model

There are a variety of information retrieval methods, including traditional [9] approaches such as signature files, inversion, and clustering. Other methods that try to capture more information about documents to achieve better performance include those using parsing, syntactic information, and natural language processing techniques; methods using neural networks; and Latent Semantic Analysis (also referred to as Latent Semantic Indexing).

LSA relies on a Singular Value Decomposition (SVD) [17, 20] of a matrix (word × context) derived from a corpus of natural text that pertains to knowledge in the particular domain of interest. SVD is a form of factor analysis and acts as a method for reducing the dimensionality of a feature space without serious loss of specificity. Typically, the word-by-context matrix is very large and (quite often) sparse. SVD reduces the number of dimensions without great loss of descriptiveness. Singular value decomposition is the underlying operation in a number of applications, including statistical principal component analysis [10], text retrieval [2, 6], pattern recognition and dimensionality reduction [5], and natural language understanding [11, 12].

Latent Semantic Analysis is comprised of four steps [4, 12]; a short illustrative sketch in code follows at the end of this section:

1. A large body of text is represented as an occurrence matrix (i × j) in which rows stand for individual word types and columns for meaning-bearing passages such as sentences or paragraphs (granularity is based on the problem or data), that is, (word × context). Each cell then contains the frequency with which a word occurs in a passage.

2. Cell entries freq_{i,j} are transformed to

   \[
   \frac{\log(\mathrm{freq}_{i,j} + 1)}
        {-\sum_{j} \frac{\mathrm{freq}_{i,j}}{\sum_{j} \mathrm{freq}_{i,j}}
          \log \frac{\mathrm{freq}_{i,j}}{\sum_{j} \mathrm{freq}_{i,j}}}
   \]

   a measure of the first-order association of a word and its context.

3. The matrix is then subject to Singular Value Decomposition (SVD) [6, 10, 17, 20]: [ij] = [ik] [kk] [jk]', where [ij] is the occurrence matrix, [ik] and [jk] have orthonormal columns, and [kk] is a diagonal matrix of singular values, with k ≤ min(i, j). In SVD, a rectangular matrix is decomposed into the product of three other matrices. One component matrix describes the original row entities as vectors of derived orthogonal factor values; another describes the original column entities in the same way. The third is a diagonal matrix containing scaling values such that when the three components are matrix multiplied, the original matrix is reconstructed.

4. Finally, all but the d largest singular values are set to zero. Multiplying the reduced component matrices back together produces a least-squares best approximation to the original matrix, given the number of dimensions, d, that are retained. The SVD with dimension reduction constitutes a constraint satisfaction induction process in that it predicts the original observations on the basis of linear relations among the abstracted representations of the data that it has retained.

The result is that each word is represented as a vector of length d. Performance depends strongly on the choice of the number of dimensions. The optimal number is typically between 200 and 300 and may vary from corpus to corpus and domain to domain. The similarity of any two words, any two text passages, or any word and any text passage is computed by measures on their vectors. Often the cosine of the contained angle between the vectors in d-space is used as the degree of qualitative similarity of meaning. The length of the vectors is also useful as a measure.
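To make the four steps concrete, the following is a minimal sketch in Python with numpy, not the tooling used in this work; the toy occurrence matrix, the function names, and the choice of d = 2 are illustrative assumptions only (a real corpus would use the 200-300 dimensions noted above).

```python
import numpy as np

def log_entropy_weight(freq):
    """Step 2: log-entropy transform of raw counts.

    freq: (words x contexts) occurrence matrix from step 1.
    Returns log(freq + 1) divided, row-wise, by each word's
    entropy across contexts.
    """
    freq = np.asarray(freq, dtype=float)
    row_totals = freq.sum(axis=1, keepdims=True)          # sum_j freq[i, j]
    p = np.divide(freq, row_totals,
                  out=np.zeros_like(freq), where=row_totals > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)       # treat 0*log 0 as 0
    entropy = -plogp.sum(axis=1, keepdims=True)
    entropy[entropy == 0] = 1.0          # guard: word seen in a single context
    return np.log(freq + 1) / entropy

def lsa(freq, d):
    """Steps 3-4: SVD of the weighted matrix, keeping the d largest
    singular values. Returns length-d vectors for words and contexts."""
    X = log_entropy_weight(freq)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    word_vecs = U[:, :d] * S[:d]         # rows of [ik][kk], truncated
    context_vecs = Vt[:d, :].T * S[:d]   # rows of [jk][kk], truncated
    return word_vecs, context_vecs

def cosine(a, b):
    """Similarity measure: cosine of the angle between two d-vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy occurrence matrix: 5 word types x 4 contexts.
freq = [[2, 0, 1, 0],
        [1, 1, 0, 0],
        [0, 3, 0, 1],
        [0, 0, 2, 2],
        [1, 0, 0, 3]]
word_vecs, context_vecs = lsa(freq, d=2)
print(cosine(context_vecs[0], context_vecs[1]))   # similarity of two contexts
```

Folding the singular values into both the word and context vectors mirrors common LSA practice; the text above does not prescribe a particular placement, so that detail is a design choice of this sketch.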
3. Advantages of Using LSA

A fundamental deficiency of many other IR methods is that they fail to deal properly with two major issues: synonymy and polysemy. Synonymy is used in a very general sense to describe the fact that there are many ways to refer to the same object. People in different contexts, or with different knowledge or linguistic habits, will describe the same information using different terms. Polysemy refers to the general fact that most words have more than one distinct meaning. In different contexts, or when used by different people, the same term takes on varying referential significance [4]. Although software developers tend to use standard terms for the concepts they are working on, a flexible technique capable of dealing with variability is needed.

Also, LSA does not utilize a grammar or a predefined vocabulary. This makes automation much simpler and supports programmer-defined variable names that have implied meanings (e.g., avg) yet are not in the English language vocabulary. The meanings are derived from usage rather than from a predefined dictionary. This is a stated advantage over using a traditional natural language approach, such as in [7, 8], where a (subset) grammar for the English language must be developed.

4. Previous Experiments with LSA

Experiments into how domain knowledge is embodied within software are being investigated in an empirical manner. The work presented here focuses on using the vector representations to compare components (at a specific level of granularity) and classify them into clusters of semantically similar concepts; a minimal clustering sketch follows at the end of this section.

Two available software systems were used as data for the experiments: LEDA [13] (Library for Efficient Data structures and Algorithms) and MINIX [21] (an operating system). LEDA is a library of data types and algorithms for combinatorial computing and provides a sizable collection of data types and algorithms in a form that allows them to be used by non-experts. LEDA is composed of over 140 C++ classes. MINIX is a simple version of the UNIX operating system and is widely used in university-level computer science OS courses. It is written in C and consists of approximately 28,000 lines of code. Given that LEDA is written in C++ using an object-oriented methodology, the granularity chosen is that of the class. LEDA has 144 source code documents.
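The classification of components into clusters of semantically similar concepts could, for example, be realized with a single-link pass over pairwise cosine similarities of the LSA vectors. The text does not specify the clustering algorithm actually used, so the threshold, the union-find grouping, and the function name below are assumptions of this sketch, which builds on the LSA example above.

```python
import numpy as np

def cluster_by_similarity(vecs, threshold=0.7):
    """Hypothetical single-link clustering: components i and j end up in
    the same cluster if a chain of pairwise cosine similarities at or
    above `threshold` connects them. `vecs` holds one LSA-derived
    vector per component (e.g., per C++ class or C function)."""
    vecs = np.asarray(vecs, dtype=float)
    n = len(vecs)
    norms = np.linalg.norm(vecs, axis=1)
    norms[norms == 0] = 1.0                 # guard all-zero vectors
    parent = list(range(n))                 # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            sim = vecs[i] @ vecs[j] / (norms[i] * norms[j])
            if sim >= threshold:
                parent[find(i)] = find(j)   # merge the two clusters

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# For example, grouping the context vectors from the previous sketch:
# groups = cluster_by_similarity(context_vecs, threshold=0.8)
```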