Proceedings of the Workshop on Grammar and Lexicon: Interactions and Interfaces, Pages 1–6, Osaka, Japan, December 11 2016
Recommended publications
Definiteness Determined by Syntax: A Case Study in Tagalog
James N. Collins

Abstract: Using Tagalog as a case study, this paper provides an analysis of a cross-linguistically well attested phenomenon, namely, cases in which a bare NP's syntactic position is linked to its interpretation as definite or indefinite. Previous approaches to this phenomenon, including analyses of Tagalog, appeal to specialized interpretational rules, such as Diesing's Mapping Hypothesis. I argue that the patterns fall out of general compositional principles so long as type-shifting operators are available to the grammatical system. I begin by weighing in on a long-standing issue for the semantic analysis of Tagalog: the interpretational distinction between genitive and nominative transitive patients. I show that bare NP patients are interpreted as definites if marked with nominative case and as narrow-scope indefinites if marked with genitive case. Bare NPs are understood as basically predicative; their quantificational force is determined by their syntactic position. If they are syntactically local to the selecting verb, they are existentially quantified by the verb itself. If they occupy a derived position, such as the subject position, they must type-shift in order to avoid a type mismatch, generating a definite interpretation. The paper thus develops a theory of how the position of an NP is linked to its interpretation, as well as providing a compositional treatment of NP interpretation in a language which lacks definite articles but demonstrates other morphosyntactic strategies for signaling (in)definiteness.

1 Introduction

Not every language signals definiteness via articles. Several languages (such as Russian, Kazakh, and Korean) lack articles altogether.
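The two composition routes the abstract describes can be made concrete with a standard type-shifting sketch. This is a hedged reconstruction using Partee-style operators and invented predicates (fish, eat), not Collins's own rules:

```latex
% Bare NP denotes a predicate, type <e,t>:
\llbracket \text{NP} \rrbracket = \lambda y.\, \mathit{fish}(y)

% Verb-local (genitive) patient: the verb existentially closes its
% internal argument, yielding an obligatorily narrow-scope indefinite:
\llbracket \text{V} + \text{NP}_{gen} \rrbracket
  = \lambda x.\, \exists y\, [\mathit{fish}(y) \wedge \mathit{eat}(x, y)]

% A derived (nominative) position expects an individual (type e);
% the iota shift repairs the type mismatch and yields a definite:
\llbracket \text{NP}_{nom} \rrbracket
  \;\xrightarrow{\;\iota\;}\; \iota y.\, \mathit{fish}(y)
```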
Operational Semantics
Class notes for a lecture given by Mooly Sagiv, Tel Aviv University, 24/5/2007. By Roy Ganor and Uri Juhasz.

Reference: Semantics with Applications, H. Nielson and F. Nielson, Chapter 2. http://www.daimi.au.dk/~bra8130/Wiley_book/wiley.html

Introduction. Semantics is the study of the meaning of languages. Formal semantics for programming languages is the study of the formalization of the practice of computer programming. A computer language consists of a (formal) syntax, describing the actual structure of programs, and a semantics, which describes the meaning of programs. Many tools exist for programming languages (compilers, interpreters, verification tools, ...); the basis on which these tools can cooperate is the formal semantics of the language.

Formal Semantics. When designing a programming language, formal semantics gives a (hopefully) unambiguous definition of what a program written in the language should do. This definition has several uses:
• People learning the language can understand the subtleties of its use
• The model over which the semantics is defined (the semantic domain) can indicate what the requirements are for implementing the language (as a compiler/interpreter/...)
• Global properties of any program written in the language, and any state occurring in such a program, can be understood from the formal semantics
• Implementers of tools for the language (parsers, compilers, interpreters, debuggers, etc.) have a formal reference for their tool and a formal definition of its correctness/completeness
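As a rough illustration of the operational view (my own minimal sketch, not from the notes; the AST encoding is an assumption), here is a small-step interpreter for arithmetic expressions that defines meaning purely by how terms are rewritten:

```python
# Small-step operational semantics for arithmetic expressions:
# repeatedly rewrite the leftmost reducible subexpression until
# only a value (an int) remains.

def step(e):
    """Perform one reduction step on expression e.
    Expressions are ints (values) or tuples ('+', e1, e2)."""
    op, e1, e2 = e
    if isinstance(e1, int) and isinstance(e2, int):
        return e1 + e2                # rule: both operands are values
    if not isinstance(e1, int):
        return (op, step(e1), e2)     # rule: reduce the left operand first
    return (op, e1, step(e2))         # rule: then reduce the right operand

def evaluate(e):
    """Iterate steps to a final value: the evaluation relation e ->* v."""
    while not isinstance(e, int):
        e = step(e)
    return e

# ('+', ('+', 1, 2), 3)  ->  ('+', 3, 3)  ->  6
assert evaluate(('+', ('+', 1, 2), 3)) == 6
```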
The Generative Lexicon
The Generative Lexicon James Pustejovsky" Computer Science Department Brandeis University In this paper, I will discuss four major topics relating to current research in lexical seman- tics: methodology, descriptive coverage, adequacy of the representation, and the computational usefulness of representations. In addressing these issues, I will discuss what I think are some of the central problems facing the lexical semantics community, and suggest ways of best ap- proaching these issues. Then, I will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of cocompositionality and type coercion, as well as several levels of semantic description, where the semantic load is spread more evenly throughout the lexicon. I argue that lexical decomposition is possible if it is per- formed generatively. Rather than assuming a fixed set of primitives, I will assume a fixed number of generative devices that can be seen as constructing semantic expressions. I develop a theory of Qualia Structure, a representation language for lexical items, which renders much lexical ambiguity in the lexicon unnecessary, while still explaining the systematic polysemy that words carry. Finally, I discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritance. This provides us with the necessary principles of global organization for the lexicon, enabling us to fully integrate our natural language lexicon into a conceptual whole. 1. Introduction I believe we have reached an interesting turning point in research, where linguistic studies can be informed by computational tools for lexicology as well as an appre- ciation of the computational complexity of large lexical databases. -
Automated Method of Hyper-Hyponymic Verbal Pairs Extraction from Dictionary Definitions Based on Syntax Analysis and Word Embeddings
Rhema. Рема. 2020. No. 2 (Linguistics). DOI: 10.31862/2500-2953-2020-2-60-75

O. Antropova, E. Ogorodnikova, Ural Federal University named after the first President of Russia B.N. Yeltsin, Yekaterinburg, 620002, Russian Federation

This study is dedicated to the development of methods for computer-aided extraction of hyper-hyponymic verbal pairs. The method rests on the observation that, in a dictionary article, the defined verb is commonly a hyponym and its definition is formulated through a hypernym in the infinitive. The first method extracted all infinitives having dependents. The second method additionally required an extracted infinitive to be the root of the syntax tree. The first method allowed the extraction of almost twice as many hypernyms, whereas the second method provided higher precision. The results were post-processed with word embeddings, which improved precision without a crucial drop in the number of extracted hypernyms.

Keywords: semantics, verbal hyponymy, syntax models, word embeddings, RusVectōrēs

Acknowledgments: The reported study was funded by RFBR according to the research project No. 18-302-00129.

FOR CITATION: Antropova O., Ogorodnikova E. Automated method of hyper-hyponymic verbal pairs extraction from dictionary definitions based on syntax analysis and word embeddings. Rhema. 2020. No. 2. Pp. 60–75. DOI: 10.31862/2500-2953-2020-2-60-75

© Antropova O., Ogorodnikova E., 2020. The content is licensed under a Creative Commons Attribution 4.0 International License.
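The two extraction conditions can be sketched with any UD-style dependency parser. The sketch below uses spaCy; the model name and the idea of lemmatizing the hit are my own illustrative assumptions, and the authors' actual pipeline may differ:

```python
# Sketch of the two extraction methods described above.
import spacy

nlp = spacy.load("ru_core_news_sm")  # assumed Russian pipeline

def candidate_hypernyms(definition_text, require_root=False):
    """Return infinitive lemmas that (method 1) have dependents,
    or (method 2, require_root=True) are also the parse root."""
    doc = nlp(definition_text)
    hits = []
    for tok in doc:
        is_infinitive = tok.pos_ == "VERB" and "Inf" in tok.morph.get("VerbForm")
        has_dependents = any(True for _ in tok.children)
        is_root = tok.dep_ == "ROOT"
        if is_infinitive and has_dependents and (is_root or not require_root):
            hits.append(tok.lemma_)
    return hits

# Method 1: more hypernyms, lower precision.
# Method 2 (require_root=True): fewer hypernyms, higher precision.
```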
Denotational Semantics
CS 6520, Spring 2006

1 Denotations

So far in class, we have studied operational semantics in depth. In operational semantics, we define a language by describing the way that it behaves. In a sense, no attempt is made to attach a "meaning" to terms, outside the way that they are evaluated. For example, the symbol 'elephant doesn't mean anything in particular within the language; it's up to a programmer to mentally associate meaning to the symbol while defining a program useful for zookeepers. Similarly, the definition of a sort function has no inherent meaning in the operational view outside of a particular program. Nevertheless, the programmer knows that sort manipulates lists in a useful way: it makes animals in a list easier for a zookeeper to find.

In denotational semantics, we define a language by assigning a mathematical meaning to functions; i.e., we say that each expression denotes a particular mathematical object. We might say, for example, that a sort implementation denotes the mathematical sort function, which has certain properties independent of the programming language used to implement it. In other words, operational semantics defines evaluation by

    sourceExpression1 --> sourceExpression2

whereas denotational semantics defines evaluation by

    sourceExpression1 --means--> mathematicalEntity1 = mathematicalEntity2 <--means-- sourceExpression2

One advantage of the denotational approach is that we can exploit existing theories by mapping source expressions to mathematical objects in the theory. The denotation of expressions in a language is typically defined using a structurally-recursive definition over expressions. By convention, if e is a source expression, then [[e]] means "the denotation of e", or "the mathematical object represented by e".
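A structurally-recursive denotation function can be sketched directly. This toy version is my own illustration in the spirit of the [[e]] convention above (the tuple AST encoding is an assumption); it maps expression syntax to ordinary numbers and functions:

```python
# Toy denotational semantics: [[e]] maps each expression form to a
# mathematical object, by structural recursion on the syntax.

def denote(e, env):
    """[[e]] in environment env (a dict from names to values)."""
    kind = e[0]
    if kind == "num":                 # [[n]] = the number n
        return e[1]
    if kind == "var":                 # [[x]] = env(x)
        return env[e[1]]
    if kind == "add":                 # [[e1 + e2]] = [[e1]] + [[e2]]
        return denote(e[1], env) + denote(e[2], env)
    if kind == "lam":                 # [[\x.e]] = the function v -> [[e]] with x:=v
        return lambda v: denote(e[2], {**env, e[1]: v})
    if kind == "app":                 # [[e1 e2]] = [[e1]] applied to [[e2]]
        return denote(e[1], env)(denote(e[2], env))
    raise ValueError(f"unknown form {kind}")

# ((\x. x + 1) 41) denotes the number 42
prog = ("app", ("lam", "x", ("add", ("var", "x"), ("num", 1))), ("num", 41))
assert denote(prog, {}) == 42
```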
Quantum Weakest Preconditions
Ellie D'Hondt* and Prakash Panangaden

Abstract: We develop a notion of predicate transformer and, in particular, the weakest precondition, appropriate for quantum computation. We show that there is a Stone-type duality between the usual state-transformer semantics and the weakest precondition semantics. Rather than trying to reduce quantum computation to probabilistic programming we develop a notion that is directly taken from concepts used in quantum computation. The proof that weakest preconditions exist for completely positive maps follows from the Kraus representation theorem. As an example we give the semantics of Selinger's language in terms of our weakest preconditions.

1 Introduction

Quantum computation is a field of research that is rapidly acquiring a place as a significant topic in computer science. To be sure, there are still essential technological and conceptual problems to overcome in building functional quantum computers. Nevertheless there are fundamental new insights into quantum computability [Deu85, DJ92], quantum algorithms [Gro96, Gro97, Sho94, Sho97] and into the nature of quantum mechanics itself [Per95], particularly with the emergence of quantum information theory [NC00]. These developments inspire one to consider the problems of programming full-fledged general purpose quantum computers. Much of the theoretical research is aimed at using the new tools available (superposition, entanglement and linearity) for algorithmic efficiency. However, the fact that quantum algorithms are currently done at a very low level only, comparable to classical computing 60 years ago, is a much less explored aspect of the field. The present paper is

* Centrum Leo Apostel, Vrije Universiteit Brussel, Brussels, Belgium, [email protected]
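The Kraus-representation construction the abstract alludes to can be checked numerically: for a completely positive map E(rho) = sum_k E_k rho E_k†, the observable sum_k E_k† M E_k acts as a weakest precondition of M, dual to the state transformer under the trace pairing. A minimal sketch (my own illustration, not the paper's code; the bit-flip channel is an assumed example):

```python
import numpy as np

def apply_channel(kraus, rho):
    """State-transformer view: E(rho) = sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus)

def weakest_precondition(kraus, M):
    """Predicate-transformer view: wp(M) = sum_k E_k^dagger M E_k."""
    return sum(E.conj().T @ M @ E for E in kraus)

# Example channel: a bit flip with probability p.
p = 0.3
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * X]

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # state |0><0|
M = np.array([[1, 0], [0, 0]], dtype=complex)    # observable |0><0|

# The duality: tr(E(rho) M) == tr(rho wp(M))
lhs = np.trace(apply_channel(kraus, rho) @ M)
rhs = np.trace(rho @ weakest_precondition(kraus, M))
assert np.isclose(lhs, rhs)
```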
The Possible, the Plausible, and the Desirable: Event-Based Modality Detection for Language Processing (arXiv:2106.08037v1 [cs.CL], 15 Jun 2021)

Valentina Pyatkin* and Shoval Sadde* (Bar Ilan University), Aynat Rubinstein (Hebrew University of Jerusalem), Paul Portner (Georgetown University), Reut Tsarfaty (Bar Ilan University)

Abstract: Modality is the linguistic ability to describe events with added information such as how desirable, plausible, or feasible they are. Modality is important for many NLP downstream tasks such as the detection of hedging, uncertainty, speculation, and more. Previous studies that address modality detection in NLP often restrict modal expressions to a closed syntactic class, and the modal sense labels are vastly different across different studies, lacking an accepted standard. Furthermore, these senses are often analyzed independently of the events that they modify. This work builds on the theoretical foundations of the Georgetown Gradable Modal Expressions (GME) work by Rubinstein

(1) a. We presented a paper at ACL'19.
    b. We did not present a paper at ACL'20.

The propositional content p = "present a paper at ACL'X" can be easily verified for sentences (1a)–(1b) by looking up the proceedings of the conference to (dis)prove the existence of the relevant publication. The same proposition p is still referred to in sentences (2a)–(2d), but now in each one, p is described from a different perspective:

(2) a. We aim to present a paper at ACL'21.
    b. We want to present a paper at ACL'21.
    c.
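The contrast between (1) and (2) is easy to operationalize at a toy level: a plain event can be verified against the world, while a modalized event is wrapped in a perspective introduced by a lexical trigger. The sketch below is my own illustrative assumption (the trigger lexicon and sense labels are invented, not the paper's annotation scheme):

```python
# Toy modal-trigger tagging in the spirit of examples (1)-(2).
import re

MODAL_TRIGGERS = {
    "aim": "intention",      # assumed sense labels, for illustration only
    "want": "desire",
    "should": "obligation",
    "might": "possibility",
}

def tag_modality(sentence):
    """Return (trigger, sense) pairs found in the sentence."""
    tokens = re.findall(r"[A-Za-z']+", sentence.lower())
    return [(t, MODAL_TRIGGERS[t]) for t in tokens if t in MODAL_TRIGGERS]

print(tag_modality("We aim to present a paper at ACL'21."))
# [('aim', 'intention')]
print(tag_modality("We presented a paper at ACL'19."))
# [] -- no modal perspective on the event
```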
Methods in Lexicography and Dictionary Research*

Stefan J. Schierholz, Department Germanistik und Komparatistik, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany ([email protected])

Abstract: Methods are used in every stage of dictionary-making and in every scientific analysis which is carried out in the field of dictionary research. This article presents some general considerations on methods in the philosophy of science, gives an overview of many methods used in linguistics, in lexicography, and in dictionary research, as well as of the areas these methods are applied in.

Keywords: SCIENTIFIC METHODS, LEXICOGRAPHICAL METHODS, THEORY, METALEXICOGRAPHY, DICTIONARY RESEARCH, PRACTICAL LEXICOGRAPHY, LEXICOGRAPHICAL PROCESS, SYSTEMATIC DICTIONARY RESEARCH, CRITICAL DICTIONARY RESEARCH, HISTORICAL DICTIONARY RESEARCH, RESEARCH ON DICTIONARY USE

1. Introduction

In dictionary production and in scientific work which is carried out in the field of dictionary research, methods are used to reach certain results. Currently there is no comprehensive and up-to-date documentation of these particular methods in English. The article of Mann and Schierholz published in Lexicographica

* This article is based on the article from Mann and Schierholz published in Lexicographica 30 (2014).
Parrot Time: The Thinking of Speaking. Issue #27, May/June 2017

Cognates, Telling Real from Fake: More about cognates than you ever wanted to know
A Peek into Pinyin: The Romanization of Mandarin Chinese
Inspirational Language Art: Maximilien Urfer's piece speaks to one of our writers
The Learning Mindset: Language acquisition requires more than study
An Art Exhibition That Spoke To Me: Look beyond what you know

Parrot Time is your connection to languages, linguistics and culture from the Parleremo community. Expand your understanding. Never miss an issue.

Contents

Parrot Time is a magazine covering language, linguistics and culture of the world around us. It is published by Scriveremo Publishing, a division of Parleremo, the language learning community. Join Parleremo today. Learn a language, make friends, have fun.

Features

8: More About Cognates Than You Ever Wanted to Know. Languages interact with each other, sharing aspects of grammar, writing, and vocabulary. However, coincidences also create words which only look related. John C. Rigdon takes a look at these true and false cognates, and more.

16: A Peek into Pinyin. Languages with non-Latin alphabets are often a major concern for language learners. The process of converting a non-Latin alphabet into something familiar is called "Romanization", and Tarja Jolma looks at how this was done for Mandarin Chinese.

24: An Art Exhibition That Spoke To Me. Inspiration is all around us, often crossing mediums. Olivier Elzingre reveals how a performance piece affected his thinking of languages.

Editor: Erik Zidowecki. Email: [email protected]
A COMPARISON ANALYSIS OF AMERICAN AND BRITISH IDIOMS

By: Nanik Fatmawati, NIM: 206026004290. English Letters Department, Letters and Humanities Faculty, State Islamic University "Syarif Hidayatullah" Jakarta, 2011

Abstract: Nanik Fatmawati, A Comparison Analysis of American Idioms and British Idioms. A Thesis: English Letters Department, Adab and Humanities Faculty, Syarif Hidayatullah State Islamic University Jakarta, 2011.

In this paper, the writer uses a qualitative method with a descriptive analysis, comparing and analyzing material from a dictionary and a short story. The dictionary analyzed by the writer is English and American Idioms by Richard A. Spears, and the short story is "You Were Perfectly Fine" by John Millington Ward. Through this method, the writer tries to find the differences in meaning between American idioms and British idioms. The collected data are analyzed qualitatively using the approach of deconstruction theory. English is a language particularly rich in idioms, those modes of expression peculiar to a language (or dialect) which frequently defy logical and grammatical rules. Without idioms English would lose much of its variety and humor both in speech and writing. The results of this thesis explain the differences in meaning between the American and British idioms found in the dictionary and the short story.
Calculus of Possibilities As a Technique in Linguistic Typology
Igor Mel'čuk

1. The problem stated: A unified conceptual system in linguistics

A central problem in the relationship between typology and the writing of individual grammars is that of developing a cross-linguistically viable conceptual system and a corresponding terminological framework. I will deal with this problem in three consecutive steps: First, I state the problem and sketch a conceptual system that I have put forward for typological explorations in morphology (Sections 1 and 2). Second, I propose a detailed illustration of this system: a calculus of grammatical voices in natural languages (Section 3). And third, I apply this calculus (that is, the corresponding concepts) in two particular case studies: an inflectional category known as antipassive and the grammatical voice in French (Sections 4 and 5). In the latter case, the investigation shows that even for a language as well described as French a rigorously standardized typological framework can force us to answer questions that previous descriptions have failed to resolve.

I start with the following three assumptions:

1) One of the most pressing tasks of today's linguistics is the description of particular languages, the essential core of this work being the writing of grammars and lexicons. A linguist sets out to describe a language as precisely and exhaustively as possible; this includes its semantics, syntax, morphology and phonology plus (within the limits of time and funds available) its lexicon.

2) Such a description is necessarily carried out in terms of some predefined concepts, such as lexical unit, semantic actant, syntactic role, voice, case, phoneme, etc.
The Use of Ostensive Definition as an Explanatory Technique in the Presentation of Semantic Information in Dictionary-Making

John Lubinda, The University of Zambia, School of Humanities and Social Sciences. Email: [email protected]

Abstract: Lexicography, the theory and practice of dictionary-making, is a multifarious activity involving a number of ordered steps. It essentially involves the management of a language's word stock. Hartmann (1983: vii) summarises the main aspects of the lexicographic management of vocabulary into three main activities, viz. recording, description and presentation. The stages involved in dictionary-making may be outlined as follows (paraphrasing Gelb's (1958) five-step model):

(a) identification, selection and listing of lexical items
(b) sequential arrangement of lexical units
(c) parsing and excerpting of entries
(d) semantic specification of each unit
(e) dictionary compilation from processed data and publication of the final product

The present article focuses specifically on one aspect of the intermediate stages of this rather complex process of dictionary-making: the stage that lexicographers generally consider to be the principal stage in the entire process and one that is at the very heart of the raison d'être of every monolingual linguistic dictionary, namely semantic specification. The main thesis being developed here is that semantic specification can be greatly enhanced by the judicious use of pictorial illustrations (as a complement to a lexicographic definition) in monolingual dictionaries, and as an essential lexicographic device. This feature is discussed under the broad concept of ostensive definition as an explanatory technique in dictionary-making.
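Gelb's five steps read naturally as a processing pipeline, with the ostensive (pictorial) material slotting into step (d). The sketch below is an illustrative assumption of mine, with invented entry fields and sample data, not anything from the article:

```python
# A toy dictionary-making pipeline following steps (a)-(e) above.

def identify(corpus):                 # (a) identify, select and list lexical items
    return sorted(set(corpus.split()))

def arrange(items):                   # (b) sequential (alphabetical) arrangement
    return sorted(items)

def excerpt(items):                   # (c) parse/excerpt a skeletal entry per item
    return [{"headword": w} for w in items]

def specify(entries, pictures):       # (d) semantic specification, incl. ostensive part
    for e in entries:
        e["definition"] = f"(definition of {e['headword']})"
        e["illustration"] = pictures.get(e["headword"])  # pictorial complement
    return entries

def compile_dictionary(entries):      # (e) compile the processed data
    return {e["headword"]: e for e in entries}

pictures = {"zebra": "zebra.png"}     # illustration available for some headwords
dictionary = compile_dictionary(
    specify(excerpt(arrange(identify("zebra ant zebra"))), pictures))
print(dictionary["zebra"]["illustration"])   # -> zebra.png
```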