Vec2GC - A Graph Based Clustering Method for Text Representations

Rajesh N Rao, Manojit Chakraborty
Robert Bosch Research and Technology Center, Bangalore, India

ABSTRACT
NLP pipelines with limited or no labeled data rely on unsupervised methods for document processing. Unsupervised approaches typically depend on clustering of terms or documents. In this paper, we introduce a novel clustering algorithm, Vec2GC (Vector to Graph Communities), an end-to-end pipeline to cluster terms or documents for any given text corpus. Our method uses community detection on a weighted graph of the terms or documents, created using text representation learning. Vec2GC is a density based clustering algorithm that supports hierarchical clustering as well.

KEYWORDS
text clustering, embeddings, document clustering, graph clustering

1 INTRODUCTION
Dealing with a large corpus of unlabeled domain specific documents is a common challenge in industrial NLP pipelines. Unsupervised algorithms like clustering are the first step in processing an unlabeled corpus to get an overview of the data distribution. Visual exploration based on clustering and dimensionality reduction algorithms provides a good overview of the data distribution; in this context, clustering and dimensionality reduction are important steps. Dimensionality reduction techniques like PCA [5], t-SNE [18] or UMAP [12] map the documents in the embedding space to a 2-dimensional space, as shown in Figure 1. A clustering algorithm groups semantically similar documents or terms together.

[Figure 1: UMAP 2d plot of 20 Newsgroup dataset]
Traditional clustering algorithms like k-Means [9], k-medoids [16], DBSCAN [4] or HDBSCAN [11], with a distance metric derived from Cosine Similarity [10], do not do a very good job on this. We propose Vec2GC (Vector To Graph Community), a clustering algorithm that converts the terms or documents in the vector embedding space [13][7] to a graph and generates clusters based on a Graph Community Detection algorithm.

arXiv:2104.09439v1 [cs.IR] 15 Apr 2021

2 LITERATURE SURVEY
Hossain and Angryk [6] represented text documents as hierarchical document-graphs to extract frequent subgraphs for generating sense-based document clusters. Wang et al. [19] used vector representations of documents and ran k-means clustering on them to understand the general representation power of various embedding generation models. Angelov [1] proposed Top2Vec, which uses joint document and word semantic embedding to find topic vectors, using HDBSCAN as the clustering method to find dense regions in the embedding space. Saiyad et al. [16] presented a survey covering major significant works on semantic document clustering based on latent semantic indexing, graph representations, ontology and lexical chains.

3 ALGORITHM DETAILS
Vec2GC converts the vector space embeddings of terms or documents to a graph and performs clustering on the constructed graph. The algorithm consists of two steps: construction of the graph and generation of clusters using a Graph Community detection algorithm.

3.1 Graph Construction
For the graph construction, we consider each term or document embedding as a node. A node can be represented by a and its embedding by E_a. To construct the graph, we measure the cosine similarity of the embeddings, equation (1). An edge is drawn between two nodes if their cosine similarity is greater than a specific threshold θ, which is a tunable parameter in our algorithm.

    cs(a, b) = (E_a · E_b) / (||E_a|| ||E_b||)        (1)
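The construction step can be sketched in plain Python; the toy vectors and the default θ below are illustrative choices of ours, not values from the paper:

```python
import math

def cosine_similarity(a, b):
    """cs(a, b) = (E_a . E_b) / (||E_a|| ||E_b||), equation (1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def build_graph(embeddings, theta=0.8):
    """Connect node pairs whose cosine similarity exceeds theta.

    Returns an adjacency list {node: {neighbor: similarity}}.
    """
    graph = {i: {} for i in range(len(embeddings))}
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            cs = cosine_similarity(embeddings[i], embeddings[j])
            if cs > theta:  # pairs below the threshold are ignored
                graph[i][j] = cs
                graph[j][i] = cs
    return graph
```

For instance, two near-parallel vectors end up connected while an orthogonal one stays isolated.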
The edge weight is determined by the cosine similarity value and is given by equation (2):

    E(a, b) = { 0                      if cs(a, b) < θ
              { 1 / (1 − cs(a, b))     if cs(a, b) ≥ θ        (2)

Equation (2) maps the cosine similarity to the edge weight as shown below:

    (θ, 1) → (1 / (1 − θ), ∞)        (3)

As cosine similarity tends to 1, the edge weight tends to infinity. Note that in a graph, a higher edge weight corresponds to stronger connectivity. Also, the weights are non-linearly mapped from cosine similarity to edge weight. This increases separability between two node pairs that have similar cosine similarity: for example, a pair of nodes with cs(a, b) = 0.9 and another pair with cs(x, y) = 0.95 would have edge weights of 10 and 20 respectively. A stronger separation is created for cosine similarities closer to 1; thus, higher weight is given to embeddings that are very similar to each other.
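The mapping in equation (2) can be written out directly (the default threshold of 0.85 is an arbitrary illustrative choice; the paper leaves θ tunable):

```python
def edge_weight(cs, theta=0.85):
    """Equation (2): weight 0 below the threshold, else 1 / (1 - cs)."""
    if cs < theta:
        return 0.0
    return 1.0 / (1.0 - cs)

# The non-linear mapping separates highly similar pairs:
# cs = 0.90 -> weight 10, cs = 0.95 -> weight 20.
```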
Algorithm 1: Recursive Graph Community Detection

    def GetCommunity(g, c_node, tree, mod_thresh, max_size):
        mod_index, c_list = community_detection_algo(g)
        if mod_index < mod_thresh:
            tree.add_node(c_node)
            return
        foreach comm in c_list:
            if len(comm) > max_size:
                s_g = get_community_subgraph(comm)
                n_node = Node()
                tree.add_node(n_node)
                GetCommunity(s_g, n_node, tree, mod_thresh, max_size)
            else:
                new_node = Node()
                tree.add_node(new_node)

3.2 Graph Community Detection
We construct the graph with words or documents as nodes and edges between nodes whose cosine similarity is greater than θ. The Graph Community Detection algorithm considers only the local neighborhood in community detection. If we consider documents in a corpus, the cosine similarity is a strong indicator of similarity between two documents: when cosine similarity is high (> θ), it strongly indicates that the two documents are semantically similar. However, when cosine similarity is low (< θ), it indicates a dis-similarity between the two documents, and the strength of the dis-similarity is not indicated by the value of the cosine similarity: a cosine similarity of 0.2 does not indicate a higher dis-similarity than a cosine similarity of 0.4. Thus we eliminate the notion of dis-similarity by only connecting nodes that have a high degree of similarity; all pairs of nodes with cosine similarity below the given threshold are ignored.

Though we discuss the idea of similarity and dis-similarity in the context of documents, the arguments extend equally well to terms represented by embeddings.

We apply a standard Graph Community Detection algorithm, the Parallel Louvain Method [2], to determine the communities in the graph. We calculate the modularity index [14], given by equation (4), for each execution of the PLM algorithm:

    Q = (1 / 2m) Σ_{a,b} [ E_ab − (k_a k_b) / (2m) ] δ(c_a, c_b)        (4)

where m is the total edge weight of the graph, E_ab is the weight of the edge between nodes a and b, k_a is the degree of node a, and δ(c_a, c_b) is 1 when a and b belong to the same community and 0 otherwise.

We execute the Graph Community Detection algorithm recursively; the pseudo code of the recursive algorithm is shown in Algorithm 1.
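Equation (4) can be checked on a toy graph. The helper below and the two-triangle example are ours, not from the paper, and treat the graph as unweighted for simplicity:

```python
def modularity(adj, communities):
    """Q = (1/2m) * sum_{a,b} [E_ab - k_a*k_b/(2m)] * delta(c_a, c_b).

    adj: symmetric adjacency dict {node: set of neighbors}.
    communities: dict mapping node -> community id.
    """
    two_m = sum(len(nbrs) for nbrs in adj.values())  # 2m = sum of degrees
    q = 0.0
    for a in adj:
        for b in adj:
            if communities[a] != communities[b]:
                continue  # delta(c_a, c_b) = 0
            e_ab = 1.0 if b in adj[a] else 0.0
            q += e_ab - len(adj[a]) * len(adj[b]) / two_m
    return q / two_m

# Two triangles joined by a single bridge edge (2-3):
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
parts = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
# Splitting at the bridge yields Q = 5/14, roughly 0.357;
# putting every node in one community yields Q = 0.
```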
3.3 Non Community Nodes
Note that not all nodes will be members of a community; there will be nodes that do not belong to any community. Nodes that are not connected, or not well connected, fail to be members of a community. We define such nodes as Non Community nodes. If we consider Vec2GC for term embeddings, we believe there are two reasons for a term to become a Non Community node: either it appears in multiple contexts and does not have a strong similarity with any specific context, or it is not close enough to a community to be included as a member.

4 EXPERIMENT
We perform an extensive set of experiments and comparisons to show the advantage of Vec2GC as a clustering algorithm for documents or words in a text corpus. We consider 5 different text document datasets along with class information. The dataset details are as follows:

4.1 Datasets

4.1.1 20 Newsgroups. The 20 Newsgroups data set comprises approximately 20,000 newsgroup documents, evenly distributed across 20 different newsgroups, each corresponding to a different topic.^1

4.1.2 AG News. AG is a collection of more than 1 million news articles gathered from more than 2000 news sources by ComeToMyHead,^2 an academic news search engine. The AG's news topic classification dataset was developed by Xiang Zhang^3 from the above news articles collection, consisting of 127,600 documents. It was first used as a text classification benchmark in the following paper [20].

4.1.3 BBC Articles. This dataset is a public dataset from the BBC, comprised of 2225 articles, each labeled under one of 5 categories: Business, Entertainment, Politics, Sport or Tech.^4

4.1.4 Stackoverflow QA. This is a dataset of 16,000 questions and answers from the Stackoverflow website,^5 labeled under 4 different categories of coding language: CSharp, JavaScript, Java, Python.^6

4.1.5 DBpedia. DBpedia is a project aiming to extract structured content from the information created in Wikipedia.^7 This dataset is extracted from the original DBpedia data that provides taxonomic, hierarchical categories or classes for 342,782 articles. There are 3 levels of classes, with 9, 70 and 219 classes respectively.^8

^1 http://qwone.com/~jason/20Newsgroups/
^2 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
^3 [email protected]
^4 https://www.kaggle.com/c/learn-ai-bbc/data
^5 www.stackoverflow.com
^6 http://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz

We use two different document embedding algorithms to generate document embeddings for all text datasets. The first algorithm is Doc2Vec, which creates document embeddings using the distributed memory and distributed bag of words models from [7]. We also create document embeddings using Sentence-BERT [15]. It computes dense vector representations for documents, such that similar document embeddings are close in vector space, using pretrained language models on transformer networks like BERT [3], RoBERTa [8], DistilBERT [17] etc.

Table 1: Comparison using Doc2Vec Embeddings

    Dataset        k      Fraction of clusters @ k% purity
                          KMedoids    HDBSCAN    Vec2GC
    20Newsgroup    50%    .53         .76        .89
                   70%    .38         .56        .69
                   90%    .07         .20        .39
    AG News        50%    .98         .98        .99
                   70%    .74         .90        .94
                   90%    .20         .63        .80
    BBC            50%    1.0         .99        .99
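One way to read Table 1's metric: a cluster's purity is the share of its most frequent true label, and "fraction of clusters @ k% purity" is the proportion of clusters reaching at least that purity. A minimal sketch under that interpretation (the helper names and toy labels below are ours, not from the paper):

```python
from collections import Counter

def cluster_purity(labels):
    """Purity of one cluster: share of its most frequent true label."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

def fraction_at_purity(clusters, k):
    """Fraction of clusters whose purity is at least k (0 < k <= 1)."""
    pure = sum(1 for c in clusters if cluster_purity(c) >= k)
    return pure / len(clusters)

# Three toy clusters with purities 0.75, 1.0 and 0.8:
clusters = [["a", "a", "a", "b"], ["b", "b"], ["a", "c", "c", "c", "c"]]
```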
