Data Ingestion, Analysis and Annotation Version 2.0


Enhancing and Re-Purposing TV Content for Trans-Vector Engagement (ReTV)
H2020 Research and Innovation Action - Grant Agreement No. 780656

Deliverable 1.2 (M20): Data Ingestion, Analysis and Annotation Version 2.0

This document was produced in the context of the ReTV project, supported by the European Commission under the H2020-ICT-2016-2017 Information & Communication Technologies Call, Grant Agreement No 780656.

DOCUMENT INFORMATION

Delivery Type:        Report
Deliverable Number:   1.2
Deliverable Title:    Data Ingestion, Analysis and Annotation
Due Date:             M20
Submission Date:      August 31, 2019
Work Package:         WP1
Partners:             CERTH, MODUL Technology, webLyzard
Author(s):            Konstantinos Apostolidis, Nikolaos Gkalelis, Evlampios Apostolidis, Alexandros Pournaras, Vasileios Mezaris (CERTH); Lyndon Nixon, Adrian M.P. Braşoveanu (MODUL Technology); Katinka Boehm (webLyzard)
Reviewer(s):          Basil Philipp (Genistat)
Keywords:             Data Ingestion, TV Program Annotation, TV Program Analysis, Social Media Retrieval, Web Retrieval, Video Analysis, Concept Detection, Brand Detection
Dissemination Level:  PU
Project Coordinator:  MODUL Technology GmbH, Am Kahlenberg 1, 1190 Vienna, Austria
Contact Details:      Coordinator: Dr Lyndon Nixon ([email protected]); R&D Manager: Prof Dr Arno Scharl ([email protected]); Innovation Manager: Bea Knecht ([email protected])

REVISIONS

Version  Date     Author                          Changes
0.1      30/6/19  V. Mezaris, K. Apostolidis      Created template and ToC
0.2      18/7/19  L. Nixon                        Initial input (Section 2)
0.3      26/7/19  L. Nixon                        Completed inputs (Sections 2 and 3)
0.4      27/7/19  K. Apostolidis, E. Apostolidis  Added parts for Sections 4.1, 5 and 6
0.5      27/7/19  A. Pournaras                    Added parts for Section 6
0.6      29/7/19  A. Brasoveanu                   Started NER/NEL text for Section 3
0.7      30/7/19  N. Gkalelis                     Input for Section 4.2
0.8      31/7/19  K. Apostolidis                  Revision of Sections 4 and 5
0.85     1/8/19   K. Boehm                        Checked and completed SKB descriptions
0.95     14/8/19  L. Nixon                        Revised sections and checked all MODUL input
1.0      19/8/19  A. Brasoveanu                   Finished NER/NEL text with evaluations
1.5      22/8/19  B. Philipp                      QA review
2.0      27/8/19  K. Apostolidis, L. Nixon        Post-QA updates

STATEMENT OF ORIGINALITY

This deliverable contains original unpublished work except where clearly indicated otherwise. Acknowledgement of previously published material and of the work of others has been made through appropriate citation, quotation or both. This deliverable reflects only the authors' views and the European Union is not liable for any use that might be made of information contained therein.

CONTENTS

1 Introduction
2 Content Collection Across Vectors
  2.1 Status
  2.2 Updates from Previous Report
  2.3 Evaluation of Data Quality
  2.4 Outlook
3 Annotation and Knowledge Graph Alignment
  3.1 Status
  3.2 Updates from Previous Report
    3.2.1 NLP
    3.2.2 SKB
    3.2.3 Recognyze NER/NEL
  3.3 Evaluation of Annotation Quality
    3.3.1 Recognyze General Evaluation
    3.3.2 Recognyze Media Annotations
    3.3.3 Discussion
  3.4 Outlook
4 Concept-Based Video Abstractions
  4.1 Video Fragmentation
    4.1.1 Updated Problem Statement and State of the Art
    4.1.2 Fast DCNN-based Video Fragmentation
    4.1.3 Implementation Details and Use
    4.1.4 Results
  4.2 Concept-based Annotation
    4.2.1 Updated Problem Statement and State of the Art
    4.2.2 Extended Concept Pools for Video Annotation
    4.2.3 Improved Deep Learning Architecture
    4.2.4 Implementation Details and Use
    4.2.5 Results
5 Brand Detection
  5.1 Updated Problem Statement and State of the Art
  5.2 Extended Brand Pools for Video Annotation
  5.3 ReTV Method for Ad Detection
  5.4 Implementation Details and Use
  5.5 Results
6 Updated Video Analysis Component, Workflow and Extended API
  6.1 Video Analysis Component Functionalities and Outputs
  6.2 Component API and Usage Instructions
  6.3 Component Workflow and Integration
  6.4 Component Testing and Software Quality Assessment
7 Conclusion and Outlook

EXECUTIVE SUMMARY

This deliverable is an update of D1.1. It covers the topics of content collection across vectors, annotation and knowledge graph alignment, concept-based video abstractions, and brand detection. Specifically, on content collection across vectors it reports on the current status of the collection process, including quality management. Concerning annotation and knowledge graph alignment, it updates on the implementation of an accurate NLP & NEL pipeline for annotating the collected data with respect to keywords and Named Entities, and for aligning annotations to our Semantic Knowledge Base (SKB) as well as to external knowledge sources. With respect to concept-based video abstractions, it presents a fast learning-based method for video fragmentation to shots and scenes, a new deep learning architecture for improved concept detection that is suitable for learning from large-scale annotated datasets such as YouTube8M, and an extended set of concept pools that are supported by the ReTV Video Analysis service. Concerning brand detection, this deliverable discusses the newly-adopted method for brand (logo) detection in ReTV, the extended pools of brands that were specified, and a method that was developed for advertisement detection. Finally, this document also presents in detail the ReTV WP1 Video Analysis REST service, which implements the methods for concept-based video abstraction and brand detection in a complete, integration-ready software component.

ABBREVIATIONS LIST

Abbreviation   Description
API            Application Programming Interface: a set of functions and procedures that allow the creation of applications which access the features or data of another application or service.
DCNN           Deep Convolutional Neural Network: a type of artificial neural network.
EPG            Electronic Program Guide: a menu-based system that provides television users with continuously updated menus displaying broadcast programming or scheduling information for current and upcoming programming.
HTTP POST/GET  Types of method in the Hypertext Transfer Protocol (HTTP). The HTTP POST method is used to send data to a server to create or update a resource. The HTTP GET method is used to request data from a specified resource.
IPTV           Internet Protocol Television: the delivery of television content over Internet Protocol (IP) networks.
JSON           JavaScript Object Notation: a data-interchange format.
LSTM           Long Short-Term Memory network: a type of recurrent neural network.
MTL            Multi-task learning: a field of machine learning in which multiple learning tasks are solved at the same time, exploiting commonalities and differences across tasks.
NEL            Named Entity Linking
NER            Named Entity Recognition
NLP            Natural Language Processing: a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human (natural) languages.
OTT            Over The Top: content providers that distribute streaming media as a standalone product directly to viewers over the Internet, bypassing the telecommunications companies that traditionally act as distributors of such content.
RDF            Resource Description Framework: a method for conceptual description or modeling of information that is implemented in web resources.
REST           Representational State Transfer: an architectural style that defines a set of constraints to be used for creating web services.
RNN            Recurrent Neural Network: a type of artificial neural network.
SKB            Semantic Knowledge Base: an RDF-based triple store for a knowledge representation of keywords and entities used in the annotation of documents.
TVoD           Transactional Video on Demand: a distribution method by which customers pay for each individual piece of video-on-demand content.
URL            Uniform Resource Locator: a reference to a web resource that specifies its location on a computer network and a mechanism for retrieving it.

1. Introduction

This deliverable reports on the work done from month M11 to M20. In this reporting period all tasks (i.e., T1.1: Content Collection Across Vectors, T1.2: Concept-Based Video Abstractions, T1.3: Brand Detection in Video, and T1.4: Annotation and Knowledge Graph Alignment) were active.

First, content collection was started by MODUL for an initial set of TV broadcasters and TV series, using the annotation model described in D1.1 of ReTV [60]. Specifically, we both crawl the Web for news and TV-related content and collect social media content via TV-related channels and a list of terms for public posts by users (Section 2). All collected data is annotated with keywords and Named Entities, supported by our own Knowledge Graph (Section 3). We report on the current status of the data collection and annotation pipeline, including quality management for online data sources and an evaluation of our Named Entity annotation accuracy.

In the direction of video fragmentation and annotation, CERTH developed new technologies for learning-based video fragmentation to shots and scenes, and a new deep learning architecture for improved concept detection that is suitable for learning from large-scale annotated datasets such as YouTube8M (Section 4). Alongside this, the set of concept pools that are supported for concept detection was expanded. State-of-the-art object detection frameworks were also tested for brand recognition; we report (i) the selection of the framework that best suits the project's requirements, (ii) the extended pools of brands whose detection is supported, and (iii) a method that was developed for advertisement detection (Section 5).
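To make the integration picture concrete, the listing below sketches how a client might invoke such a video-analysis REST service over HTTP. This is a minimal illustration only: the base URL, endpoint paths, parameter names, response fields, and the asynchronous submit-poll-fetch pattern are all hypothetical assumptions, not the actual ReTV API, which is specified in Section 6.2.

# Illustrative sketch only: service URL, paths and JSON fields below are
# hypothetical placeholders; the real ReTV API is documented in Section 6.2.
import time
import requests

SERVICE = "http://retv-video-analysis.example.org/api"  # hypothetical base URL

def analyse_video(video_url: str) -> dict:
    """Submit a video for fragmentation/concept/brand analysis, wait for results."""
    # 1. Submit the video; the service is assumed to return a job identifier.
    resp = requests.post(f"{SERVICE}/analyse", json={"video_url": video_url})
    resp.raise_for_status()
    job_id = resp.json()["job_id"]

    # 2. Poll until the (assumed asynchronous) analysis has finished.
    while True:
        status = requests.get(f"{SERVICE}/status/{job_id}").json()
        if status["state"] == "done":
            break
        time.sleep(10)

    # 3. Retrieve the JSON results (shot/scene boundaries, concepts, brands).
    return requests.get(f"{SERVICE}/results/{job_id}").json()

if __name__ == "__main__":
    results = analyse_video("http://example.org/videos/sample.mp4")
    for shot in results.get("shots", []):
        print(shot["start"], shot["end"], shot.get("concepts", []))

In the actual component, the returned analysis results cover the outputs described in Sections 4 to 6 (shot and scene boundaries, concept-detection scores, and detected brands); the exact request and response formats may differ from the placeholders used above.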