Improved Word Representation Learning with Sememes

Improved Word Representation Learning with Sememes

Yilin Niu1∗, Ruobing Xie1∗, Zhiyuan Liu1,2†, Maosong Sun1,2
1 Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, Beijing, China
2 Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou 221009, China
∗ Indicates equal contribution. † Corresponding author: Z. Liu ([email protected])

Abstract

Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed by several sememes. Since sememes are not explicit for each word, people manually annotate word sememes and form linguistic common-sense knowledge bases. In this paper, we present that word sememe information can improve word representation learning (WRL), which maps words into a low-dimensional semantic space and serves as a fundamental step for many NLP tasks. The key idea is to utilize word sememes to capture the exact meanings of a word within specific contexts accurately. More specifically, we follow the framework of Skip-gram and present three sememe-encoded models to learn representations of sememes, senses and words, where we apply the attention scheme to detect word senses in various contexts. We conduct experiments on two tasks including word similarity and word analogy, and our models significantly outperform baselines. The results indicate that WRL can benefit from sememes via the attention scheme, and also confirm that our models are capable of correctly modeling sememe information.

1 Introduction

Sememes are defined as minimum semantic units of word meanings, and there exists a limited closed set of sememes to compose the semantic meanings of an open set of concepts (i.e. word senses). However, sememes are not explicit for each word. Hence, people manually annotate word sememes and build linguistic common-sense knowledge bases.

HowNet (Dong and Dong, 2003) is one such knowledge base, which annotates each concept in Chinese with one or more relevant sememes. Different from WordNet (Miller, 1995), the philosophy of HowNet emphasizes the significance of part and attribute represented by sememes. HowNet has been widely utilized in word similarity computation (Liu and Li, 2002) and sentiment analysis (Xianghua et al., 2013), and in Section 3.2 we give a detailed introduction to sememes, senses and words in HowNet.

In this paper, we aim to incorporate word sememes into word representation learning (WRL) and learn improved word embeddings in a low-dimensional semantic space. WRL is a fundamental and critical step in many NLP tasks such as language modeling (Bengio et al., 2003) and neural machine translation (Sutskever et al., 2014).

There has been much research on learning word representations, among which word2vec (Mikolov et al., 2013) achieves a nice balance between effectiveness and efficiency. In word2vec, each word corresponds to one single embedding, ignoring the polysemy of most words. To address this issue, (Huang et al., 2012) introduces a multi-prototype model for WRL, conducting unsupervised word sense induction and learning embeddings according to context clusters. (Chen et al., 2014) further utilizes the synset information in WordNet to instruct word sense representation learning.

From these previous studies, we conclude that word sense disambiguation is critical for WRL, and we believe that the sememe annotation of word senses in HowNet can provide necessary semantic regularization for both tasks. To explore its feasibility, we propose a novel Sememe-Encoded Word Representation Learning (SE-WRL) model, which detects word senses and learns representations simultaneously. More specifically, this framework regards each word sense as a combination of its sememes, and iteratively performs word sense disambiguation according to contexts and learns representations of sememes, senses and words by extending Skip-gram in word2vec (Mikolov et al., 2013). In this framework, an attention-based method is proposed to select appropriate word senses according to contexts automatically. To take full advantage of sememes, we propose three different learning and attention strategies for SE-WRL.

In experiments, we evaluate our framework on two tasks including word similarity and word analogy, and further conduct case studies on sememe, sense and word representations. The evaluation results show that our models outperform other baselines significantly, especially on word analogy. This indicates that our models can build better knowledge representations with the help of sememe information, and also implies the potential of our models for word sense disambiguation.

The key contributions of this work are concluded as follows: (1) To the best of our knowledge, this is the first work to utilize sememes in HowNet to improve word representation learning. (2) We successfully apply the attention scheme to detect word senses and learn representations according to contexts with the favor of the sememe annotation in HowNet. (3) We conduct extensive experiments and verify the effectiveness of incorporating word sememes for improved WRL.

2 Related Work

2.1 Word Representation

Recent years have witnessed great advances in word representation learning. It is simple and straightforward to represent words using one-hot representations, but this usually struggles with the data sparsity issue and neglects semantic relations between words. To address these issues, (Rumelhart et al., 1988) proposes the idea of distributed representation, which projects all words into a continuous low-dimensional semantic space, considering each word as a vector. Distributed word representations are powerful and have been widely utilized in many NLP tasks, including neural language models (Bengio et al., 2003; Mikolov et al., 2010), machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), parsing (Chen and Manning, 2014) and text classification (Zhang et al., 2015). They are capable of encoding semantic meanings in vector space, serving as the fundamental and essential inputs of many NLP tasks.

Much effort has been devoted to learning better word representations. With the exponential growth of text corpora, model efficiency becomes an important issue. (Mikolov et al., 2013) proposes two models, CBOW and Skip-gram, achieving a good balance between effectiveness and efficiency. These models assume that the meanings of words can be well reflected by their contexts, and learn word representations by maximizing the predictive probabilities between words and their contexts. (Pennington et al., 2014) further utilizes matrix factorization on a word affinity matrix to learn word representations. However, these models arrange only one vector for each word, regardless of the fact that many words have multiple senses. (Huang et al., 2012; Tian et al., 2014) utilize multi-prototype vector models to learn word representations and build distinct vectors for each word sense. (Neelakantan et al., 2015) presents an extension of the Skip-gram model for learning non-parametric multiple embeddings per word. (Rothe and Schütze, 2015) also utilizes an autoencoder to jointly learn word, sense and synset representations in the same semantic space.

This paper, for the first time, jointly learns representations of sememes, senses and words. The sememe annotation in HowNet provides useful semantic regularization for WRL. Moreover, the unified representations incorporated with sememes also provide more explicit explanations of both word and sense embeddings.

2.2 Word Sense Disambiguation and Representation Learning

Word sense disambiguation (WSD) aims to identify word senses or meanings in a certain context computationally. There are mainly two approaches to WSD, namely supervised methods and knowledge-based methods. Supervised methods usually take the surrounding words or senses as features and use classifiers like SVM for word sense disambiguation (Lee et al., 2004); they are heavily limited by the time-consuming human annotation of training data.

On the contrary, knowledge-based methods utilize large external knowledge resources such as knowledge bases or dictionaries to suggest possible senses for a word. (Banerjee and Pedersen, 2002) exploits the rich hierarchy of semantic relations in WordNet (Miller, 1995) for an adapted dictionary-based WSD algorithm. (Bordes et al., 2011) introduces synset information in WordNet to WRL. (Chen et al., 2014) considers synsets in WordNet as different word senses, and jointly conducts word sense disambiguation and word/sense representation learning. (Guo et al., 2014) considers bilingual datasets to learn sense-specific word representations. Moreover, (Jauhar et al., 2015) proposes two approaches to learn sense-specific word representations that are grounded to ontologies. (Pilehvar and Collier, 2016) utilizes personalized PageRank to learn de-conflated semantic representations of words.

In this paper, we follow the knowledge-based approach and automatically detect word senses according to the contexts with the favor of sememe information in HowNet.

[Figure 1: sememe hierarchy of the word 苹果 ("apple") in HowNet.]

… the word "apple". The word "apple" actually has two main senses shown on the second layer: one is a sort of juicy fruit (apple), and the other is a famous computer brand (Apple brand). The third and following layers are the sememes explaining each sense. For instance, the first sense Apple brand indicates a computer brand, and thus has sememes computer, bring and SpeBrand. From Fig. 1 we can find that the sememes of many senses in HowNet are annotated with various relations, such as define and modifier, and form complicated hierarchical structures. In this paper, for simplicity, we only consider all annotated sememes of each sense as a sememe set without considering their internal structure. HowNet assumes the limited annotated sememes can well represent senses and words in the real-world scenario, and thus sememes are expected to be useful for both WSD and WRL.
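The attention-based sense selection that the introduction describes, where a sense is built from its sememes and a context softly picks among senses, can be sketched in a few lines. This is a minimal illustration of the idea only, not the authors' SE-WRL implementation: the 2-d vectors, the toy sense inventory for "apple", and the use of a plain dot-product softmax are all assumptions for demonstration.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sense_embedding(sememe_ids, sememe_vecs):
    """Represent a sense as the average of its sememe embeddings."""
    dim = len(next(iter(sememe_vecs.values())))
    out = [0.0] * dim
    for s in sememe_ids:
        for i, x in enumerate(sememe_vecs[s]):
            out[i] += x / len(sememe_ids)
    return out

def attend_word_embedding(senses, sememe_vecs, context_vec):
    """Soft word sense disambiguation: the word embedding is the
    attention-weighted sum of its sense embeddings, where attention
    comes from each sense's compatibility with the context."""
    sense_vecs = [sense_embedding(s, sememe_vecs) for s in senses]
    weights = softmax([dot(v, context_vec) for v in sense_vecs])
    dim = len(context_vec)
    word_vec = [sum(w * v[i] for w, v in zip(weights, sense_vecs))
                for i in range(dim)]
    return word_vec, weights

# Toy 2-d sememe space (hypothetical values).
sememe_vecs = {
    "fruit":    [1.0, 0.0],
    "computer": [0.0, 1.0],
    "brand":    [0.1, 0.8],
}
# "apple" has two senses, as in the paper's Fig. 1: a fruit, and a computer brand.
apple_senses = [["fruit"], ["computer", "brand"]]

# A fruit-like context pushes attention toward the first sense.
vec, weights = attend_word_embedding(apple_senses, sememe_vecs, [2.0, 0.1])
```

In the actual model the sense, sememe and word vectors are trained jointly by extending Skip-gram; here they are fixed toy values so the attention step can be inspected in isolation.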
Recommended publications
  • Words and Alternative Basic Units for Linguistic Analysis
Words and alternative basic units for linguistic analysis. Jens Allwood, SCCIIL Interdisciplinary Center, University of Gothenburg; A. P. Hendrikse, Department of Linguistics, University of South Africa, Pretoria; Elisabeth Ahlsén, SCCIIL Interdisciplinary Center, University of Gothenburg. Abstract: The paper deals with words and possible alternatives to words as basic units in linguistic theory, especially in interlinguistic comparison and corpus linguistics. A number of ways of defining the word are discussed and related to the analysis of linguistic corpora and to interlinguistic comparisons between corpora of spoken interaction. Problems associated with words as the basic units, and alternatives to the traditional notion of word as a basis for corpus analysis and linguistic comparisons, are presented and discussed. 1. What is a word? To some extent, there is an unclear view of what counts as a linguistic word, generally and in different language types. This paper is an attempt to examine various construals of the concept "word", in order to see how "words" might best be made use of as units of linguistic comparison. Using intuition, we might say that a word is a basic linguistic unit constituted by a combination of content (meaning) and expression, where the expression can be phonetic, orthographic or gestural (deaf sign language). On closer examination, however, it turns out that the notion "word" can be analyzed and specified in several different ways. Below we consider the following three main ways of trying to analyze and define what a word is: (i) analysis and definitions building on observation and supposed easy discovery; (ii) analysis and definitions building on manipulability; (iii) analysis and definitions building on abstraction. 2.
  • ON SOME CATEGORIES for DESCRIBING the SEMOLEXEMIC STRUCTURE by Yoshihiko Ikegami
ON SOME CATEGORIES FOR DESCRIBING THE SEMOLEXEMIC STRUCTURE, by Yoshihiko Ikegami. 1. A lexeme is the minimum unit that carries meaning. Thus a lexeme can be a "word" as well as an affix (i.e., something smaller than a word) or an idiom (i.e., something larger than a word). 2. A sememe is a unit of meaning that can be realized as a single lexeme. It is defined as a structure constituted by those features having distinctive functions (i.e., serving to distinguish the sememe in question from other sememes that contrast with it). A question that arises at this point is whether or not one lexeme always corresponds to just one sememe and no more. Three theoretical positions are foreseeable: (1) one which holds that one lexeme always corresponds to just one sememe and no more, (2) one which holds that one lexeme corresponds to an indefinitely large number of sememes, and (3) one which holds that one lexeme corresponds to a certain limited number of sememes. These three positions will be referred to as (1) the "Grundbedeutung" theory, (2) the "use" theory, and (3) the "polysemy" theory, respectively. The Grundbedeutung theory, however attractive in itself, is to be rejected as unrealistic. Suppose a preliminary analysis has revealed that a lexeme seems to be used sometimes in an "abstract" sense and sometimes in a "concrete" sense. In order to posit a Grundbedeutung under such circumstances, it is to be assumed that there is a still higher level at which "abstract" and "concrete" are neutralized. This is certainly a theoretical possibility, but it seems highly unlikely and unrealistic from a psychological point of view.
  • Methods of Identifying Members of Synonymic Row
Advances in Social Science, Education and Humanities Research (ASSEHR), volume 312, International Conference "Topical Problems of Philology and Didactics: Interdisciplinary Approach in Humanities and Social Sciences" (TPHD 2018). Methods of Identifying Members of Synonymic Row. Juliya A. Litvinova and Elena A. Maklakova, Chair of Foreign Languages, Federal State Budget Educational Institution of Higher Education Voronezh State University of Forestry and Technologies named after G.F. Morozov, Voronezh, Russia. Abstract: This article is devoted to identifying the criteria of synonymity of lexical items. The existence of different definitions of synonymy and different selection criteria for items in a synonymic row indicates the insufficient study and incoherence of this phenomenon in linguistics. The study of the semantics of lexical items allows explaining most accurately and authentically the integration and differentiation of lexical items close in meaning. The description of the meaning structure (sememe) is possible through the description of its seme composition. The methods of seme semasiology (lexicographic, psycholinguistic, contextual) allow revealing various components in the sememe structure; the study employs component analysis, the method of field modeling, the method of semantic interpretation, the method of generalization of dictionary definitions, and methods of quantitative, lexicographical, contextual and psycholinguistic analysis. Data for the study were two lexical items in the Russian language (gorodok, gorodishko) obtained from Russian explanatory dictionaries (V. I. Dahl, D. N. Ushakov, S. I. Ozhegov, A. P. Evgenieva, S. A. Kuznetsov, T. F. Efremova) and the Russian National Corpus (ruscorpora.ru).
  • Verbs of 'Preparing Something for Eating by Heating It in a Particular Way': a Lexicological Analysis
DEPARTAMENTO DE FILOLOGÍA INGLESA Y ALEMANA. Verbs of 'preparing something for eating by heating it in a particular way': a lexicological analysis. Grado en Estudios Ingleses. Fabián García Díaz; supervisor: Mª del Carmen Fumero Pérez. San Cristóbal de La Laguna, Tenerife, 8 September 2015. INDEX: 1. Abstract; 2. Introduction; 3. Theoretical perspective; 4. Analysis: verbs of 'to prepare something for eating by heating it in a particular way': cook, fry and roast; 4.1. Corpus selection; 4.2. Verb selection; 5. Paradigmatic relations; 5.1. Semantic components and lexematic analysis; 5.2. Lexical relations.
  • Towards a Deeper Understanding of Meaning in Language
Valentina Gavranović (www.singidunum.ac.rs). TOWARDS A DEEPER UNDERSTANDING OF MEANING IN LANGUAGE: A COURSEBOOK IN ENGLISH SEMANTICS. This coursebook has been written for undergraduate students majoring in English, with the aim to introduce them to the main concepts found in the domain of semantics, and help them understand the complexities and different aspects of meaning in language. Although the book is primarily intended for students who have no previous theoretical knowledge in semantics, it can also be used by those who want to consolidate and expand on their existing learning experience in linguistics and semantics. The scope of concepts and topics covered in this book has been framed within a rounded context which offers the basics of semantics, and provides references and incentives for students' further autonomous investigation and research. The author endeavoured to present complex linguistic issues in such a way that any language learner with no previous knowledge in this area can use the book. However, anyone who uses this book needs to read it very carefully so that they can understand the theory and apply it while tackling the examples illustrating the concepts provided in this book. Furthermore, the very nature of linguistics demands thorough and careful reading, rereading and questioning. The book is composed of six chapters, and, within them, there are several sections and subsections. Each chapter contains theoretical explanations followed by a set of questions and exercises whose aim is to help the learners revise the theoretical concepts and apply them in practice.
  • Towards Building a Multilingual Sememe Knowledge Base
The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). Towards Building a Multilingual Sememe Knowledge Base: Predicting Sememes for BabelNet Synsets. Fanchao Qi1∗, Liang Chang2∗†, Maosong Sun1,3‡, Sicong Ouyang2†, Zhiyuan Liu1. 1 Department of Computer Science and Technology, Tsinghua University; Institute for Artificial Intelligence, Tsinghua University; Beijing National Research Center for Information Science and Technology. 2 Beijing University of Posts and Telecommunications. 3 Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University. Abstract: A sememe is defined as the minimum semantic unit of human languages. Sememe knowledge bases (KBs), which contain words annotated with sememes, have been successfully applied to many NLP tasks. However, existing sememe KBs are built on only a few languages, which hinders their widespread utilization. To address the issue, we propose to build a unified sememe KB for multiple languages based on BabelNet, a multilingual encyclopedic dictionary. We first build a dataset serving as the seed of the multilingual sememe KB. It manually annotates sememes for over 15 thousand synsets (the entries of BabelNet). Then, we present a novel task of automatic sememe prediction for synsets, aiming to expand the seed dataset into a usable KB. We also propose two simple and effective models, which exploit different information of synsets. Finally, we conduct quantitative and qualitative analyses to explore important factors and difficulties in the task. (Figure 1 shows the sememe annotation of the word "husband" in HowNet: the sense "married man" carries the sememes human, family, male and spouse; the sense "carefully use" carries the sememe economize.)
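The sememe-prediction task this abstract describes can be illustrated with a deliberately simplified baseline: give an unannotated entry the sememes voted for by its nearest annotated neighbors in an embedding space. The toy vectors, the word list, and the k-nearest-neighbor voting scheme below are all assumptions for demonstration; the paper's actual models exploit richer synset information.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def predict_sememes(query_vec, annotated, k=2):
    """Rank candidate sememes for an unannotated entry by
    cosine-weighted votes from its k nearest annotated words."""
    neighbors = sorted(annotated.items(),
                       key=lambda kv: cosine(query_vec, kv[1][0]),
                       reverse=True)[:k]
    scores = {}
    for _, (vec, sememes) in neighbors:
        w = cosine(query_vec, vec)
        for s in sememes:
            scores[s] = scores.get(s, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)

# Toy annotated KB: word -> (embedding, sememe set). The sememe sets for
# "husband" follow the paper's Figure 1; the vectors are hypothetical.
annotated = {
    "husband": ([0.9, 0.1, 0.2], {"human", "family", "male", "spouse"}),
    "wife":    ([0.85, 0.15, 0.25], {"human", "family", "female", "spouse"}),
    "save":    ([0.1, 0.9, 0.0], {"economize"}),
}

# An unannotated entry close to "husband"/"wife" inherits family sememes.
ranking = predict_sememes([0.88, 0.12, 0.22], annotated)
```

With k=2 the neighbors are the two kinship words, so the highest-scoring sememes are the ones they share; the unrelated sememe economize never enters the ranking.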
  • 10. Semic Analysis Summary
    Louis Hébert, Tools for Text and Image Analysis: An Introduction to Applied Semiotics 10. SEMIC ANALYSIS SUMMARY Semic analysis was developed within the field of semantics (the study of meaning in linguistic units). This chapter presents Rastier's interpretive semantics, and semic analysis as formulated therein. Semic analysis is performed on a semiotic act – a text, for example – by identifying the semes, that is, the elements of meaning (parts of the signified), defining clusters of semes (isotopies and molecules) and determining the relations between the clusters (relations of presupposition, comparison, etc., between the isotopies). The repetition of a seme creates an isotopy. For example, in "There was a fine ship, carved from solid gold / With azure reaching masts, on seas unknown" (Émile Nelligan, "The Golden Ship"), the words "ship", "masts" and "seas" all contain the seme /navigation/ (as well as others). The repetition of this seme creates the isotopy /navigation/. A semic molecule is a cluster of at least two semes appearing together more than once in a single semantic unit (a single word, for instance). For instance, in the poem just quoted, there is a semic molecule formed by the semes /precious/ + /dispersion/. It appears in the "streaming hair" ("cheveux épars") of the beautiful Goddess of Love (Venus) who was "spread-eagled" ("s'étalait") at the ship's prow, and also in the sinking of the ship's treasure, which the sailors "disputed amongst themselves" ("entre eux ont disputés"). 1. THEORY Semic analysis is performed on a semiotic act – such as a text – by identifying the semes (the elements of meaning), finding clusters of semes (isotopies and molecules) and determining the relations between the clusters (relations of presupposition, comparison, etc., between the isotopies).
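The procedure Hébert summarizes, where the repetition of a seme across textual units creates an isotopy, can be mechanized as a simple count over a word-to-seme lexicon. The tiny lexicon below, keyed to the Nelligan example, is a hypothetical stand-in for a real semic dictionary.

```python
def find_isotopies(words, seme_lexicon, min_count=2):
    """Return the semes repeated across at least `min_count` of the given
    words: isotopies in Rastier's sense, detected by seme recurrence."""
    counts = {}
    for w in words:
        for seme in seme_lexicon.get(w, ()):
            counts[seme] = counts.get(seme, 0) + 1
    return {s: c for s, c in counts.items() if c >= min_count}

# Hypothetical mini-lexicon for the "Golden Ship" example: each word is
# mapped to (some of) its semes.
seme_lexicon = {
    "ship":  {"navigation", "vehicle"},
    "masts": {"navigation", "vertical"},
    "seas":  {"navigation", "water"},
    "gold":  {"precious"},
}

# "ship", "masts" and "seas" all contain /navigation/, so its repetition
# surfaces as an isotopy; the one-off semes do not.
isotopies = find_isotopies(["ship", "masts", "seas", "gold"], seme_lexicon)
```

Detecting semic molecules (co-occurring pairs of semes recurring within single units) would extend this by counting seme pairs per word rather than single semes per text.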
  • International English Usage Loreto Todd & Ian Hancock
INTERNATIONAL ENGLISH USAGE. Loreto Todd & Ian Hancock. London: first published 1986 by Croom Helm. This edition published in the Taylor & Francis e-Library, 2005. "To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk." First published in paperback 1990 by Routledge, 11 New Fetter Lane, London EC4P 4EE. © 1986 Loreto Todd and Ian Hancock. All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. British Library Cataloguing in Publication Data: Todd, Loreto. International English usage. 1. English language—Usage. I. Title. II. Hancock, Ian. 428 PE1460. Library of Congress Cataloging in Publication Data: Todd, Loreto. International English usage. Includes index. 1. English language—Usage—Handbooks, manuals, etc. I. Hancock, Ian F. II. Title. PE1460.T64 1987 428 86–28426. ISBN 0-203-97763-7 (Master e-book); ISBN 0-415-05102-9 (print edition); ISBN 0-709-94314-8 (hb). Contents: Introduction; Contributors; List of Symbols; Pronunciation Guide; International English Usage; Index. Introduction: In the four centuries since the time of Shakespeare, English has changed from a relatively unimportant European language with perhaps four million speakers into an international language used in every continent by approximately eight hundred million people. It is spoken natively by large sections of the population in Australia, Canada, the Caribbean, Ireland, New Zealand, the Philippines, Southern Africa, the United Kingdom and the United States of America; it is widely spoken as a second language throughout Africa and Asia; and it is the most frequently used language of international affairs.
  • The Seme ‘Strong’ in Lexicological Definitions
UDC 811.111'37. Boris Hlebec, University of Belgrade, Faculty of Philology, English Department, Belgrade, Serbia. THE SEME 'STRONG' IN LEXICOLOGICAL DEFINITIONS. Abstract: This is an in-depth analysis of a selection of English lexemes containing the seme 'strong', performed by means of the collocational method as devised and elaborated by the author in his previous articles. This kind of approach shows the way language really works, and that there is no clear borderline between langue and parole, or between lexis and syntax. Key words: semantic definition, collocation, seme, sememe, classeme, cryptotype. 1. Introduction. Defining lexemes in a scientific way is a rigorous task, which requires insight into the whole lexical system, or at least into a large part of it. When applying our collocational method to this aim (Hlebec 2007, 2008a, 2008b, 2008c, 2010, 2011a, 2011b), the present author has come across a defining seme 'strong' as a recurrent element in quite a lot of lexemes. This topic merits a whole volume, and the article reveals only a part of its extensive use. 'Strong' is to be understood in its abstract meaning 'of great intensity' rather than in its concrete meaning 'of great bodily strength'. The crucial step in the application of the collocational method is to ascertain the common content of the directive. (Sources of collocations have been various: the British National Corpus, the Corpus of Contemporary American English, the Oxford Collocational Dictionary for Students of English, The Cassell Dictionary of Appropriate Adjectives and the Dictionary of Selected Collocations among the most frequently consulted. Slang, literary style and specialized terms have not been taken into account.)
  • Hewson's Semantic Theory and Implicit Semantics
HEWSON'S SEMANTIC THEORY AND IMPLICIT SEMANTICS. Valeri Vassiliev, Chuvash Pedagogical Institute, Cheboksary, Russia. ABSTRACT: The problem of implicit semantics is approached in this paper with a broad understanding of presuppositions, implications, entailments, etc., as derivable information created by three different kinds of contexts in the four stages of meaning generation. Liens are viewed as meanings implied by the speaker and/or inferred by the hearer in a real act of communication. As meaning generation is assumed to be an activity, liens are consequently treated as dynamic processes involving psychological associations, rational deductions, and imagination. Each stage of semiogenesis provides a potential possibility for a variety of liens. Most of them are dismissed and cancelled at the subsequent stages of meaning generation, so that the final perceptive product, an understood utterance, appears to have only several liens which remain relevant for a given set of linguistic, referential and situational contexts and for the majority of the speakers. 1. John Hewson's ideas on semantics, which permeate his works, appear to have a considerable explanatory potential applicable to a number of semantic issues. The following is a brief outline of his semantic teaching and an attempt to demonstrate how it can be employed in the treatment of problems pertaining to implicit semantics. 1.1 Hewson's theory of meaning is based on Saussure's distinction of langue and parole, or tongue and discourse, and elaborates the view that the use of language is an activity 'whereby the underlying elements of a tongue are processed, albeit subconsciously, for use in discourse' (Hewson 1990: 27).
  • End to End Chinese Lexical Fusion Recognition with Sememe Knowledge
End to End Chinese Lexical Fusion Recognition with Sememe Knowledge. Yijiang Liu1, Meishan Zhang2 and Donghong Ji1∗. 1 Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China. 2 School of New Media and Communication, Tianjin University, China. Abstract: In this paper, we present Chinese lexical fusion recognition, a new task which could be regarded as one kind of coreference recognition. First, we introduce the task in detail, showing its relationship with coreference recognition and its differences from existing tasks. Second, we propose an end-to-end model for the task, handling mentions as well as the coreference relationship jointly. The model exploits state-of-the-art contextualized BERT representations as the encoder, and is further enhanced with sememe knowledge from HowNet by graph attention networks. We manually annotate a benchmark dataset for the task and then conduct experiments on it. Results demonstrate that our final model is effective and competitive for the task. Detailed analysis is offered for comprehensively understanding the new task and our proposed model. 1 Introduction. Coreference is one important topic in linguistics (Gordon and Hendrick, 1998; Pinillos, 2011), and coreference recognition has been extensively researched in the natural language processing (NLP) community (Ng and Cardie, 2002; Lee et al., 2017; Qi et al., 2012; Fei et al., 2019). There are different kinds of coreference, such as pronoun anaphora and abbreviation (Mitkov, 1998; Mitkov, 1999; Muñoz and Montoyo, 2002; Choubey and Huang, 2018).
Here we examine the phenomenon of Chinese lexical fusion, also called separable coreference (Chen and Ren, 2020), where two closely related words in a paragraph are united by a fusion form of the same meaning, and the fusion word can be seen as the coreference of the two separation words.