A Distributional and Orthographic Aggregation Model for English Derivational Morphology

Daniel Deutsch*, John Hewitt* and Dan Roth
Department of Computer and Information Science
University of Pennsylvania
{ddeutsch,johnhew,[email protected]

Abstract

Modeling derivational morphology to generate words with particular semantics is useful in many text generation tasks, such as machine translation or abstractive question answering. In this work, we tackle the task of derived word generation. That is, given the word "run," we attempt to generate the word "runner" for "someone who runs." We identify two key problems in generating derived words from root words and transformations: suffix ambiguity and orthographic irregularity. We contribute a novel aggregation model of derived word generation that learns derivational transformations both as orthographic functions using sequence-to-sequence models and as functions in distributional word embedding space. Our best open-vocabulary model, which can generate novel words, and our best closed-vocabulary model, show 22% and 37% relative error reductions over current state-of-the-art systems on the same dataset.

Figure 1: Diagram depicting the flow of our aggregation model. Two models generate a hypothesis according to orthogonal information; then one is chosen as the final model generation. Here, the hypothesis from the distributional model is chosen.

1 Introduction

The explicit modeling of morphology has been shown to improve a number of tasks (Seeker and Çetinoğlu, 2015; Luong et al., 2013). In a large number of the world's languages, many words are composed through morphological operations on subword units. Some languages are rich in inflectional morphology, characterized by syntactic transformations like pluralization. Similarly, languages like English are rich in derivational morphology, where the semantics of words are composed from smaller parts. The AGENT derivational transformation, for example, answers the question, what is the word for "someone who runs"?, with the answer, a runner.[1] Here, AGENT is spelled out as suffixing -ner onto the root verb run.

We tackle the task of derived word generation. In this task, a root word x and a derivational transformation t are given to the learner. The learner's job is to produce the result of the transformation on the root word, called the derived word y. Table 1 gives examples of these transformations.

Previous approaches to derived word generation model the task as a character-level sequence-to-sequence (seq2seq) problem (Cotterell et al., 2017b). The letters from the root word and some encoding of the transformation are given as input to a neural encoder, and the decoder is trained to produce the derived word, one letter at a time. We identify the following problems with these approaches:

First, because these models are unconstrained, they can generate sequences of characters that do not form actual words. We argue that requiring the model to generate a known word is a reasonable constraint in the special case of English derivational morphology, and doing so avoids a large number of common errors.

Second, sequence-based models can only generalize string manipulations (such as "add -ment") if they appear frequently in the training data. Because of this, they are unable to generate derived words that do not follow typical patterns, such as generating truth as the nominative derivation of true. We propose to learn a function for each transformation in a low dimensional vector space that corresponds to mapping from representations of the root word to the derived word. This eliminates the reliance on orthographic information, unlike related approaches to distributional semantics, which operate at the suffix level (Gupta et al., 2017).

We contribute an aggregation model of derived word generation that produces hypotheses independently from two separate learned models: one from a seq2seq model with only orthographic information, and one from a feed-forward network using only distributional semantic information in the form of pretrained word vectors. The model learns to choose between the hypotheses according to the relative confidence of each. This system can be interpreted as learning to decide between positing an orthographically regular form or a semantically salient word. See Figure 1 for a diagram of our model.

We show that this model helps with two open problems with current state-of-the-art seq2seq derived word generation systems, suffix ambiguity and orthographic irregularity (Section 2). We also improve the accuracy of seq2seq-only derived word systems by adding external information through constrained decoding and hypothesis rescoring. These methods provide orthogonal gains to our main contribution.

We evaluate models in two categories: open-vocabulary models, which can generate novel words unattested in a preset vocabulary, and closed-vocabulary models, which cannot. Our best open-vocabulary and closed-vocabulary models demonstrate 22% and 37% relative error reductions over the current state of the art.

* These authors contributed equally; listed alphabetically.
[1] We use the verb run as a demonstrative example; the transformation can be applied to most verbs.

Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1938-1947, Melbourne, Australia, July 15-20, 2018. (c) 2018 Association for Computational Linguistics

    x            t          y
    wise         ADVERB  →  wisely
    simulate     RESULT  →  simulation
    approve      RESULT  →  approval
    overstate    RESULT  →  overstatement
    yodel        AGENT   →  yodeler
    survive      AGENT   →  survivor
    intense      NOMINAL →  intensity
    effective    NOMINAL →  effectiveness
    pessimistic  NOMINAL →  pessimism

Table 1: The goal of derived word generation is to produce the derived word, y, given both the root word, x, and the transformation, t, as demonstrated here with examples from the dataset.

2 Background: Derivational Morphology

Derivational transformations generate novel words that are semantically composed from the root word and the transformation. We identify two unsolved problems in derived word transformation, each of which we address in Sections 3 and 4.

First, many plausible choices of suffix exist for a single pair of root word and transformation. For example, for the verb ground, the RESULT transformation could plausibly take any of the following forms:[2]

    (ground, RESULT) → grounding
    (ground, RESULT) → *groundation
    (ground, RESULT) → *groundment
    (ground, RESULT) → *groundal

However, only one is correct, even though each suffix appears often in the RESULT transformation of other words. We will refer to this problem as "suffix ambiguity."

Second, many derived words seem to lack a generalizable orthographic relationship to their root words. For example, the RESULT of the verb speak is speech. It is unlikely, given an orthographically similar verb creak, that the RESULT be creech instead of, say, creaking. Seq2seq models must grapple with the problem of derived words that are the result of unlikely or potentially unseen string transformations. We refer to this problem as "orthographic irregularity."

[2] The * indicates a non-word.

3 Sequence Models and Corpus Knowledge

In this section, we introduce the prior state-of-the-art model, which serves as our baseline system. Then we build on top of this system by incorporating a dictionary constraint and rescoring the model's hypotheses with token frequency information to address the suffix ambiguity problem.

3.1 Baseline Architecture

We begin by formalizing the problem and defining some notation. For source word x = x_1, x_2, ..., x_m, a derivational transformation t, and target word y = y_1, y_2, ..., y_n, our goal is to learn some function from the pair (x, t) to y. Here, x_i and y_j are the ith and jth characters of the input strings x and y. We will sometimes use x_{1:i} to denote x_1, x_2, ..., x_i, and similarly for y_{1:j}.

The current state-of-the-art model for derived-form generation approaches this problem by learning a character-level encoder-decoder neural network with an attention mechanism (Cotterell et al., 2017b; Bahdanau et al., 2014). The input to the bidirectional LSTM encoder (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005) is the sequence #, x_1, x_2, ..., x_m, #, t, where # is a special symbol to denote the start and end of a word, and the encoding of the derivational transformation t is concatenated to the input characters. The model is trained to minimize the cross entropy of the training data. We refer to our reimplementation of this model as SEQ. For a more detailed treatment of neural sequence-to-sequence models with attention, we direct the reader to Luong et al. (2015).

The goal of decoding is to find the most probable structure ŷ conditioned on some observation x and transformation t. That is, the problem is to solve

    ŷ = argmax_{y ∈ Y} p(y | x, t)          (1)
      = argmin_{y ∈ Y} −log p(y | x, t)     (2)

where Y is the set of valid structures. Sequential models have a natural ordering y = y_1, y_2, ..., y_n over which −log p(y | x, t) can be decomposed:

    −log p(y | x, t) = −∑_{i=1}^{n} log p(y_i | y_{1:i−1}, x, t)    (3)

Solving Equation 2 can be viewed as solving a shortest path problem from a special starting state to a special ending state via some path which uniquely represents y. Each vertex in the graph represents some sequence y_{1:i}, and the weight of the edge from y_{1:i} to y_{1:i+1} is given by

    −log p(y_{i+1} | y_{1:i}, x, t)    (4)

The weight of the path from the start state to the end state via the unique path that describes y is exactly equal to Equation 3. When the vocabulary size is too large, the exact shortest path is intractable, and approximate search methods, such as beam search, are used instead.
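The decoding view in Section 3.1 — search for the lowest-cost character sequence, approximated by beam search, with a dictionary constraint restricting hypotheses to known words — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `beam_search`, `toy_model`, the miniature dictionary, and the hand-built next-character distribution (standing in for a trained decoder) are all invented for the example.

```python
import math

def beam_search(score_next, end="#", beam_width=4, max_len=20,
                legal_prefixes=None):
    """Approximate the argmin of -log p(y | x, t) over character strings.

    score_next(prefix) -> {next_char: p(next_char | prefix)} stands in for
    the decoder's conditional distribution. legal_prefixes, if given, is a
    set of word prefixes implementing a dictionary constraint: partial
    hypotheses that cannot be extended to a known word are pruned.
    """
    beam = [(0.0, "#")]          # (cumulative -log p, hypothesis so far)
    finished = []
    for _ in range(max_len):
        candidates = []
        for cost, prefix in beam:
            for ch, p in score_next(prefix).items():
                new_cost = cost - math.log(p)
                if ch == end:
                    finished.append((new_cost, prefix.strip("#")))
                    continue
                word_so_far = (prefix + ch).strip("#")
                if legal_prefixes is not None and word_so_far not in legal_prefixes:
                    continue     # dictionary constraint prunes this branch
                candidates.append((new_cost, prefix + ch))
        beam = sorted(candidates)[:beam_width]   # keep lowest-cost hypotheses
        if not beam:
            break
    return min(finished)[1] if finished else None

# Toy decoder that prefers to spell "runner" but leaks mass onto junk "x".
TARGET = "#runner#"
def toy_model(prefix):
    if TARGET.startswith(prefix) and len(prefix) < len(TARGET):
        return {TARGET[len(prefix)]: 0.7, "x": 0.3}
    return {"#": 1.0}

# Dictionary constraint: every prefix of every word in a tiny dictionary.
dictionary = {"runner", "running"}
legal = {w[:i] for w in dictionary for i in range(len(w) + 1)}

print(beam_search(toy_model, legal_prefixes=legal))  # -> runner
```

Run with the constraint, the search recovers runner; without it (`beam_search(toy_model)`), the same toy decoder returns the non-word x, mirroring the unconstrained-generation failure discussed in the introduction.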
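The distributional half of the aggregation model described in the introduction maps a root word's embedding to a point near its derived word's embedding and selects the nearest vocabulary entry. The sketch below replaces the paper's feed-forward network with a simple mean-offset map over synthetic vectors; all words, dimensions, and training pairs here are invented for illustration.

```python
import math
import random

random.seed(0)
DIM = 8

def randvec():
    return [random.gauss(0.0, 1.0) for _ in range(DIM)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy stand-ins for pretrained word vectors; a real system would load
# high-dimensional embeddings trained on a large corpus.
emb = {w: randvec() for w in ["run", "teach", "paint", "sing", "wise", "approve"]}

# Construct derived-word vectors so that AGENT behaves like a roughly
# consistent direction in embedding space (plus a little noise).
agent_dir = randvec()
pairs = [("run", "runner"), ("teach", "teacher"),
         ("paint", "painter"), ("sing", "singer")]
for root, derived in pairs:
    noise = [0.01 * x for x in randvec()]
    emb[derived] = add(add(emb[root], agent_dir), noise)

# "Learn" the AGENT transformation as the mean offset over training pairs;
# this additive map is a stand-in for the paper's feed-forward network.
train = pairs[:3]  # hold out (sing, singer)
offset = [sum(emb[d][i] - emb[r][i] for r, d in train) / len(train)
          for i in range(DIM)]

def derive_agent(root):
    """Apply the learned map, then return the nearest vocabulary word."""
    v = add(emb[root], offset)
    return max((w for w in emb if w != root), key=lambda w: cosine(v, emb[w]))

print(derive_agent("sing"))  # held-out pair; nearest neighbor is "singer"
```

Because the map never inspects spellings, a setup like this can in principle recover orthographically irregular pairs (speak → speech) that defeat purely character-level models, provided the derived word appears in the embedding vocabulary.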