Enhanced Latent Semantic Indexing Using Cosine Similarity Measures for Medical Application

The International Arab Journal of Information Technology, Vol. 17, No. 5, September 2020

Fawaz Al-Anzi (1) and Dia AbuZeina (2)
(1) Department of Computer Engineering, Kuwait University, Kuwait
(2) Computer Science Department, Palestine Polytechnic University, Palestine

Abstract: The Vector Space Model (VSM) is widely used in data mining and Information Retrieval (IR) systems as a common document representation model. However, the technique faces challenges such as the high dimensionality of the space and the semantic looseness of the representation. Consequently, Latent Semantic Indexing (LSI) was suggested to reduce the feature dimensions and to generate semantically rich features that can represent conceptual term-document associations. LSI has been effectively employed in search engines and many other Natural Language Processing (NLP) applications, and researchers continue to seek better performance. In this paper, we propose a method that can be used in search engines to find better-matched content among the retrieved documents. The proposed method introduces a new extension of the LSI technique based on cosine similarity measures. The performance evaluation was carried out using an Arabic-language data collection that contains 800 medical-related documents with more than 47,222 unique words. The proposed method was assessed using a small testing set of five medical keywords. The results show that the performance of the proposed method is superior to that of standard LSI.

Keywords: Arabic text, latent semantic indexing, search engine, dimensionality reduction, text classification.

Received December 25, 2018; accepted January 28, 2020
https://doi.org/10.34028/iajit/17/5/7

1. Introduction
As a result of the intense growth of online data, search engines play a vital role in Information Retrieval (IR) and web data mining applications. The web is a primary source of a plethora of open data. Such a massive data resource certainly requires effective algorithms to retrieve and filter textual and other kinds of data. Therefore, search engines need to be developed intelligently in order to obtain the desired content. Textual data can be broadly represented using the Vector Space Model (VSM), wherein each document is represented by a vector of attributes, many of which may be nil. However, VSM faces challenges such as extremely long feature vectors and a tendency toward vague semantic representation. Hence, the Latent Semantic Indexing (LSI) method was suggested to ease these difficulties and to enhance the representation. LSI aims at converting the original textual vectors into conceptual vectors characterized by two properties: reduced dimensions and semantically rich features.

The distinguishing feature of LSI is its semantic property: it returns semantically close documents without the restriction of matching the exact search keywords. LSI is based on a technique from linear algebra named Singular Value Decomposition (SVD). SVD transforms the textual data, represented as a large term-by-document matrix, into a smaller semantic space characterized as three matrices. The product of the resulting matrices must be equivalent to the original term-by-document matrix. Therefore, the primary step of LSI is to decompose the term-by-document matrix A as follows: A = USV^T, where U is a matrix that gives the weights of terms, S provides the singular values for each principal component direction, and V^T is a matrix that gives the weights of documents. The V^T matrix consists of the document feature vectors that are normally used in IR and text mining. Figure 1 shows the decomposition procedure that truncates a term-by-document matrix A into the three matrices.

Figure 1. SVD decomposition process (A = USV^T; V^T holds the document feature vectors).
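The decomposition and truncation step described above can be sketched with NumPy. The matrix values and the choice of k below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Toy term-by-document matrix A (rows = terms, columns = documents).
# Values are illustrative term counts only.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 1, 1],
    [0, 0, 1, 2],
], dtype=float)

# Full SVD: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Truncate to k latent dimensions (the LSI step).
k = 2
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Columns of Vt_k, scaled by the singular values, give the
# k-dimensional document feature vectors used for retrieval.
doc_vectors = (np.diag(s_k) @ Vt_k).T  # shape: (n_docs, k)

# The untruncated product reconstructs A exactly; the rank-k
# product U_k @ diag(s_k) @ Vt_k is its best rank-k approximation.
print(doc_vectors.shape)                    # (4, 2)
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True
```

The truncation discards the smallest singular values, which is what collapses near-synonymous terms into shared latent dimensions.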
Accordingly, the standard process of creating LSI features starts with a term-by-document matrix, from which the feature vectors required for classification are generated. A term-by-document matrix is generally formed using values such as Boolean flags, counts, or weights, which track term occurrences in the documents. For classification, the produced LSI feature vectors are compared using distance measures such as the Euclidean, Mahalanobis, Manhattan, and cosine similarity measures; the cosine similarity measure is a popular distance measure in pattern recognition.

In this research, we propose an extension of the LSI implementation that uses cosine measures as a replacement for the standard word co-occurrence values generally used in text classification. Hence, the proposed method creates a new matrix using the cosine measures between documents before engaging the SVD process.
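A minimal sketch of this idea, under our own reading of the description: compute the pairwise cosine similarities between the document columns, then feed that similarity matrix to SVD instead of the raw counts. The matrix values and helper name are illustrative assumptions:

```python
import numpy as np

def cosine_sim_matrix(A: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the columns (documents) of A."""
    norms = np.linalg.norm(A, axis=0)
    norms[norms == 0] = 1.0            # guard against all-zero documents
    normalized = A / norms             # unit-length document columns
    return normalized.T @ normalized   # (n_docs, n_docs) similarity matrix

# Toy term-by-document count matrix (illustrative values only).
A = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 2.0, 1.0],
])

S = cosine_sim_matrix(A)
# Each diagonal entry is 1: every document is identical to itself.
print(np.allclose(np.diag(S), 1.0))  # True

# The proposed extension then decomposes this similarity matrix
# with SVD instead of decomposing the raw count matrix.
U, s, Vt = np.linalg.svd(S)
```

The resulting matrix is symmetric with a unit diagonal, so the subsequent SVD operates on document-to-document associations rather than raw term occurrences.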
Section 2 presents the motivation, followed by the literature review in section 3. In section 4, we present the proposed method, followed by the experimental results in section 5. We conclude in section 6.

2. Motivations

Text mining systems are intuitively in need of efficient algorithms that intelligently understand the search engine's documents as well as the query's keywords. Moreover, when considering huge online data, it is necessary to bear in mind the semantic relationships between both the documents and the words (also called co-occurrences). Unlike trivial searching methods that are based on traditional text matching, LSI is characterized by semantically rich values, enabling the system to return useful results without an exact match between the document and the query's keywords. For example, if we search for the word "coffee," it is expected that the system will return many documents related to this word; however, it might also return related documents that do not contain the word "coffee" but are semantically related to it. Specifically, it is possible to obtain documents that belong to the same topic, such as stimulant effects, caffeine, etc. Figure 2 shows an example of an article that contains the word "coffee." It also indicates that the document contains other words, such as "nervous system." Therefore, a search for the word "coffee" might return documents related to the topic "nervous system" that do not contain the word "coffee." This is the strength of the LSI method and one reason for its popularity.

Enhancing the search process over overwhelming digital data requires an endless research effort to satisfy users' requests. In fact, text mining is a challenging task, since documents usually have mixed contents that make it difficult to digitally determine a document's category. As an illustration, Figure 2 shows a document that contains diverse words, such as "headaches": "صداع", "addiction": "ادمان", "drugs": "مخدرات", "tension": "التوتر", "frenzy": "الهيجان", and "crimes": "الجرائم". Such diverse words make it difficult for IR algorithms to accurately find the required data. By nature, medical documents require precise algorithms that can adequately find the proper results for the user.

LSI has proven to be a valuable tool for revealing the semantic relationships between data objects (i.e., the words in this research). Based on the underlying semantic distinctions it detects, LSI is able to bring out relevant documents that may not contain the searched keyword. Figure 3 shows a medical article related to "التنفس": "breathing." If a user searches for the word "اﻻوكسجين": "oxygen," semantics-free methods (i.e., plain keyword search) will fail, since there is no exact match between the search word and the document's words. However, LSI does support such semantic search, and this might be the goal of a user looking for a particular topic.

Figure 2. An example of word co-occurrences of "coffee" and "nervous system".

Figure 3. An example of "breathing" in a medical document.

3. Literature Review

In the literature, there are many studies that discuss the LSI technique. LSI is used for text mining tasks such as text classification, text summarization, text clustering, search engines, etc. LSI was initially presented by Deerwester [12] as a standard dimensionality reduction technique in IR. Osinski and Weiss [23] present an algorithm to enhance the results of search engines; the algorithm combines common phrase discovery and LSI techniques to separate search results into meaningful groups. Letsche and Berry [20] present a new implementation of the standard LSI, aiming to provide an efficient, extensible, portable, and maintainable LSI. Kontostathis and Pottenger [18] present a theoretical model for understanding the performance of LSI in retrieval applications. Dumais et al. [14] present an LSI-based method for fully automated cross-language document retrieval in which no query translation is required. Bellegarda et al. [9] describe a word clustering approach that is based on LSI. Dhillon and Modha [13] demonstrate that the cosine similarity is easy to interpret and simple to compute for sparse vectors, which explains why it is widely used in text mining and IR. Sobh et al. [24] used the cosine similarity measure for the Arabic …
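The semantic matching discussed in the motivation (e.g., returning "nervous system" documents for a "coffee" query) is typically realized in LSI retrieval by folding a query vector into the truncated SVD space as q_k = S_k^{-1} U_k^T q and ranking documents by cosine similarity there. A minimal sketch under these standard conventions; the toy vocabulary and counts are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Toy term-by-document matrix (rows follow `vocab`); illustrative counts.
vocab = ["coffee", "caffeine", "nervous", "oxygen"]
A = np.array([
    [2.0, 0.0, 0.0],
    [1.0, 2.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 3.0],
])

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
doc_vecs = Vt_k.T * s_k      # document coordinates in the latent space

def fold_in(query_terms):
    """Project a bag-of-words query into the k-dimensional LSI space."""
    q = np.array([1.0 if t in query_terms else 0.0 for t in vocab])
    return (q @ U_k) / s_k   # q_k = S_k^{-1} U_k^T q

def rank(query_terms):
    q_k = fold_in(query_terms)
    sims = doc_vecs @ q_k / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_k))
    return np.argsort(-sims)  # document indices, best match first

# Document 1 contains "caffeine" and "nervous" but never "coffee",
# yet it still ranks above the unrelated "oxygen" document (index 2)
# for a coffee query: the latent space captures the co-occurrence.
order = rank({"coffee"})
```

This is the behaviour plain keyword matching cannot reproduce: the similarity is computed in the reduced space, so documents sharing only co-occurring terms with the query still score highly.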
