A Note on Huave Morpheme Ordering: Local Dislocation or Generalized U20?
Total pages: 16
File type: PDF; size: 1,020 KB
Recommended publications
-
When Linearity Prevails Over Hierarchy in Syntax
Jana Willer Gold, Boban Arsenijević, Mia Batinić, Michael Becker, Nermina Čordalija, Marijana Kresić, Nedžad Leko, Franc Lanko Marušič, Tanja Milićev, Nataša Milićević, Ivana Mitić, Anita Peti-Stantić, Branimir Stanković, Tina Šuligoj, Jelena Tušek, and Andrew Nevins. Edited by Barbara H. Partee and approved November 27, 2017 (received for review July 21, 2017). Abstract: Hierarchical structure has been cherished as a grammatical universal. We use experimental methods to show where linear order is also a relevant syntactic relation. An identical methodology and design were used across six research sites on South Slavic languages. The opening discussion points to known cases of agreement based on linear order, called "attraction," with the plural complement of noun phrases (e.g., the key to the cabinets are missing), a set of findings later replicated in comprehension and across a variety of other languages.
-
Root and Semi-Phrasal Compounds: A Syntactic Approach
Dimitrios Ntelitheos & Katya Pertsova. Proceedings of the Linguistic Society of America 4, 30:1-14 (2019). https://doi.org/10.3765/plsa.v4i1.4530. Abstract: We present a syntactic account of the derivation of two types of attributive nominal compounds in Spanish, Russian and Greek. These include right-headed "root" compounds, which exhibit more "word"-like properties and single stress domains, and left-headed "semi-phrasal" compounds with more phrasal properties and independent stress domains for the two compound members. We propose that both compound structures are formed on a small clause predicate phrase, with their different properties derived from the merger of the predicate member of the small clause as a root or as a larger nominal unit with additional functional projections. The proposed structures provide an explanation of observed lexical integrity effects, as well as specific predictions of patterns of compound formation crosslinguistically. Keywords: morpho-syntax; compounds; Distributed Morphology; lexical integrity; predicate inversion; small clauses.
-
Pronouns and Prosody in Irish
Ryan Bennett (Yale University), Emily Elfner (University of British Columbia), and James McCloskey (University of California, Santa Cruz). From the background section: one of the stranger properties of human language is the way in which it creates a bridge between two worlds that ought not be linked, and which seem not to be linked in any other species, a bridge connecting the world of concepts, ideas, and propositions with the world of muscular gestures whose outputs are perceivable. Because this link is made in us, we can externalize our internal and subjective mental states in ways that expose them to scrutiny by others and by ourselves. The existence of this bridge depends on systems that can take the complex structures used in cognition (hierarchical and recursive) and translate them step by step into the kinds of representations that our motor system knows how to deal with. The goal of the research reported here is to better understand those systems, in particular the processes of serialization and flattening that make it possible to span the divide between the two worlds; establishing sequential order is a key part of serialization.
-
Cohesion, Coherence and Temporal Reference from an Experimental Corpus Pragmatics Perspective
Cristina Grisot (Department of Linguistics, University of Geneva). Published in the Yearbook of Corpus Linguistics and Pragmatics series (Editor-in-Chief: Jesús Romero-Trillo), Springer, with the support of the Swiss National Science Foundation. ISSN 2213-6819 (print), 2213-6827 (electronic); ISBN 978-3-319-96751-6, eBook ISBN 978-3-319-96752-3; https://doi.org/10.1007/978-3-319-96752-3. More information about the series: http://www.springer.com/series/11559.
-
Graph-Based Syntactic Word Embeddings
Ragheb Al-Ghezi and Mikko Kurimo (Aalto University). Abstract: We propose a simple and efficient framework to learn syntactic embeddings based on information derived from constituency parse trees. Using biased random walk methods, our embeddings not only encode syntactic information about words, but they also capture contextual information. We also propose a method to train the embeddings on multiple constituency parse trees to ensure the encoding of global syntactic representation. Quantitative evaluation of the embeddings shows competitive performance on the POS tagging task when compared to other types of embeddings, and qualitative evaluation reveals interesting facts about the syntactic typology learned by these embeddings. The introduction situates the approach relative to distributional methods such as Word2vec, GloVe, and FastText, whose underlying premise is that "a word can be defined by its company": in "I eat an apple every day" and "I eat an orange every day", the words "apple" and "orange" count as similar because they share similar contexts. Recent work extends distributional embeddings with syntactic information by leveraging dependency parsing, with reported benefits for tasks such as question type classification, semantic role labeling, POS tagging, biomedical event trigger identification, and predicting brain activation patterns. A short illustrative sketch of the random-walk idea follows this entry.
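The sketch below is a minimal illustration of the general idea described in the abstract above, not the authors' implementation: the toy parse tree, the node names, and the leaf-biasing scheme are all invented. It turns a small constituency tree into a graph, samples random walks that over-weight word (leaf) nodes, and emits the walks as token sequences that any skip-gram-style trainer could consume.

```python
import random

# Toy constituency tree for "I eat an apple", as a parent -> children map.
# Internal nodes are phrase labels; leaves are the words themselves.
TREE = {
    "S": ["NP_subj", "VP"],
    "NP_subj": ["I"],
    "VP": ["V", "NP_obj"],
    "V": ["eat"],
    "NP_obj": ["DET", "N"],
    "DET": ["an"],
    "N": ["apple"],
}

def to_graph(tree):
    """Turn the parent->children map into an undirected adjacency list."""
    graph = {}
    for parent, children in tree.items():
        for child in children:
            graph.setdefault(parent, []).append(child)
            graph.setdefault(child, []).append(parent)
    return graph

def biased_walk(graph, start, length, leaf_bias=2.0):
    """Random walk that over-weights word (leaf) nodes by `leaf_bias`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        weights = [leaf_bias if len(graph[n]) == 1 else 1.0 for n in neighbors]
        walk.append(random.choices(neighbors, weights=weights, k=1)[0])
    return walk

graph = to_graph(TREE)
# Each walk is a "sentence" of node labels; feeding many such walks from many
# parse trees to a skip-gram model yields vectors that mix syntactic and
# contextual neighborhoods.
walks = [biased_walk(graph, node, length=8) for node in graph for _ in range(5)]
print(walks[0])
```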
-
Pronouns in Nanosyntax: A Typological Study of the Containment Structure of Pronominals
Research MA thesis in Linguistics, Universiteit van Amsterdam, 24-06-2015; the author's name and student number are anonymized in the source. The excerpted front matter contains the acknowledgements, which thank the author's teachers and fellow students, in particular Eva van Lier for introducing the author to typology and Jan Don for supervising the thesis, followed by the beginning of a list of glossing abbreviations.
-
The Non-Hierarchical Nature of the Chomsky Hierarchy-Driven Artificial-Grammar Learning
Shiro Ojima & Kazuo Okanoya. Abstract: Recent artificial-grammar learning (AGL) paradigms driven by the Chomsky hierarchy paved the way for direct comparisons between humans and animals in the learning of center embedding ([A[AB]B]). The AnBn grammars used by the first generation of such research lacked a crucial property of center embedding, namely that the pairs of elements are explicitly matched ([A1 [A2 B2] B1]). This type of indexing is implemented in the second-generation AnBn grammars. This paper reviews recent studies using such grammars. Against the premises of these studies, we argue that even those newer AnBn grammars cannot test the learning of syntactic hierarchy. These studies nonetheless provide detailed information about the conditions under which human adults can learn an AnBn grammar with indexing. This knowledge serves to interpret recent animal studies, which make surprising claims about animals' ability to handle center embedding. Keywords: language evolution; animal cognition; syntactic hierarchy; artificial grammar; center embedding. A toy generator contrasting indexed and unindexed strings follows this entry.
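To make the contrast drawn in the abstract concrete, here is a small generator, using invented vocabularies rather than the stimuli of the reviewed studies, that produces first-generation unindexed AnBn strings alongside second-generation strings whose A and B elements are explicitly matched in the center-embedded pattern [A1 [A2 B2] B1].

```python
import random

# Hypothetical paired vocabularies: A[i] is matched only by B[i].
A = ["a1", "a2", "a3", "a4"]
B = ["b1", "b2", "b3", "b4"]

def first_generation(n):
    """Unindexed AnBn: any n A-elements followed by any n B-elements."""
    return random.choices(A, k=n) + random.choices(B, k=n)

def second_generation(n):
    """Indexed center embedding [A1 [A2 B2] B1]: B's mirror the A's.

    Requires n <= len(A), since each index is used at most once.
    """
    idx = random.sample(range(len(A)), n)
    return [A[i] for i in idx] + [B[i] for i in reversed(idx)]

print(first_generation(3))   # e.g. ['a2', 'a2', 'a4', 'b1', 'b3', 'b1']
print(second_generation(3))  # e.g. ['a3', 'a1', 'a4', 'b4', 'b1', 'b3']
```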
-
Functional Categories and Syntactic Theory
Luigi Rizzi and Guglielmo Cinque. Annual Review of Linguistics 2 (2016): 139-163. https://doi.org/10.1146/annurev-linguistics-011415-040827. Keywords: functional heads, Universal Grammar, syntactic variation, cartography, lexicon. Abstract: The distinction between lexical and functional elements plays a major role in current research in syntax and neighboring aspects of the study of language. In this article, we review the motivations of a progressive shift of emphasis from lexical to functional elements in syntactic research: the identification of the functional lexicon as the locus of the triggering of syntactic actions and of syntactic variation, and the description and analysis of the complexity of functional structures in cartographic studies. The latter point leads us to illustrate current cartographic research and to present the maps created in the study of clauses and phrases. The maps of CP, IP, and other phrasal categories all involve a richly articulated functional sequence.
-
Some Observations on the Hebrew Desiderative Construction – A Dependency-Based Account in Terms of Catenae
Thomas Groß. Abstract: The Modern Hebrew (MH) desiderative construction must obey four conditions: (1) a subordinate clause headed by the clitic še= 'that' must be present; (2) the verb in the subordinate clause must be marked with future tense; (3) the grammatical properties genus, number, and person tend to be specified, i.e. if the future tense affix is underspecified, material tends to appear that aids specification when contextual recovery is unavailable; and (4) the units of form that make up the constructional meaning of the desiderative must qualify as a catena. A catena is a dependency-based unit of form, the parts of which are immediately continuous in the vertical dimension. The description of the individual parts of the desiderative must address trans-, pre-, and suffixes, and cliticization. Catena-based morphology is representational, monostratal, dependency-, construction-, and piece-based. A small sketch of the catena condition follows this entry.
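The catena condition in the abstract can be stated procedurally: a set of word forms is a catena when it is continuous in the vertical (dominance) dimension of a dependency tree, i.e. when all but one member have their head inside the set. The sketch below is a minimal illustration over an invented toy dependency tree, not the paper's Hebrew data.

```python
# Toy dependency tree as a word -> head map (None marks the root).
HEADS = {
    "I": "want",
    "want": None,
    "that": "leave",
    "you": "leave",
    "leave": "want",
}

def is_catena(words, heads=HEADS):
    """True if `words` is vertically continuous: exactly one member lacks
    its head inside the set (that member is the top of the catena)."""
    tops = [w for w in words if heads[w] not in words]
    return len(tops) == 1

print(is_catena({"want", "that", "leave"}))  # True: want -> leave -> that
print(is_catena({"want", "that"}))           # False: 'leave' intervenes
```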
-
Nanosyntax: The Basics (Chapter 1 preprint)
Lena Baunaz and Eric Lander. Accepted version of chapter 1 of Baunaz, De Clercq, Haegeman & Lander (eds.), Exploring Nanosyntax (New York: Oxford University Press, 2018), pp. 3-56; archived in the Zurich Open Repository and Archive (ZORA), https://doi.org/10.5167/uzh-167406; https://doi.org/10.1093/oso/9780190876746.003.0001. Abstract: This chapter offers a thorough introduction to nanosyntactic theory, a development of the cartographic program in generative grammar. It discusses the foundations on which nanosyntax was conceived, such as the "one feature–one head" maxim and the universal functional sequence (fseq). It also provides a brief comparison of theoretical and terminological issues in nanosyntax vs. the competing framework of Distributed Morphology. It is seen that the syntactic component according to nanosyntax unifies aspects of (what are traditionally called) syntax, morphology, and formal semantics. This is reflected in the tools used to probe linguistic structure in the nanosyntactic approach, such as morphological decomposition, syncretism, and containment. The chapter also discusses the technical details of the syntax-lexicon relation, detailing the matching or spellout process and Starke's view of spellout-driven movement. Nanosyntax (Caha 2009; Starke 2009, 2011ab) is a generative approach to the study of language in line with the major tenets of the Principles and Parameters framework of Chomsky (1981, 1986).
-
Cognitive Connections Between Linguistic and Musical Syntax: An Optimality Theoretic Approach
Laura Brewer. MA thesis, University of Georgia, 2014, under the direction of Keith Langston. Abstract: Language and music share empirically validated syntactic similarities that suggest these modalities may have important cognitive connections. These findings challenge the current view of the human language faculty as encapsulated from other cognitive domains. Patel (2008) emphasizes the distinction between the representational knowledge base of a domain (competence) and the syntactic processes that are employed during access and integration of this knowledge, proposing that the processing resources are shared between the two domains while the knowledge bases remain separate. I propose that the shared processing resources for linguistic and musical syntax are characterized predominantly by the constraint evaluation and optimization mechanisms of Optimality Theory. The goal of this endeavor will be to explore the OT character of musical syntax in support of the claim that a unified theory can reflect cognitive overlap in the language and music modalities and to encourage future refinement of this theoretical framework. Index words: Optimality Theory (OT), syntax, Western tonal music, modularity, weighted constraint, Generative Theory of Tonal Music (GTTM), competence, performance, processing. A generic weighted-constraint evaluation sketch follows this entry.
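Since the index words include "weighted constraint", the optimization the abstract appeals to can be illustrated with a harmonic-grammar-style evaluation: candidates are scored as weighted sums of constraint violations and the least costly candidate wins. The candidates, constraints, weights, and violation counts below are all invented for illustration and are not taken from the thesis.

```python
# Invented candidates and constraints, purely to show the evaluation mechanics
# of weighted-constraint (harmonic-grammar-style) optimization.
candidates = ["cand_a", "cand_b", "cand_c"]

# constraint name -> (weight, violation counts per candidate)
constraints = {
    "FAITHFULNESS": (3.0, {"cand_a": 0, "cand_b": 1, "cand_c": 2}),
    "MARKEDNESS":   (2.0, {"cand_a": 2, "cand_b": 1, "cand_c": 0}),
}

def harmony_cost(candidate):
    """Weighted sum of violations; the optimal candidate minimizes this."""
    return sum(weight * viols[candidate] for weight, viols in constraints.values())

winner = min(candidates, key=harmony_cost)
print(winner, {c: harmony_cost(c) for c in candidates})
# -> cand_a {'cand_a': 4.0, 'cand_b': 5.0, 'cand_c': 6.0}
```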
-
Semantic Integration Processes at Different Levels of Syntactic Hierarchy During Sentence Comprehension: An ERP Study
Xiaolin Zhou, Xiaoming Jiang, Zheng Ye, Yaxu Zhang, Kaiyang Lou, and Weidong Zhan. Neuropsychologia 48 (2010): 1551-1562. Abstract: An event-related potential (ERP) study was conducted to investigate the temporal neural dynamics of semantic integration processes at different levels of syntactic hierarchy during Chinese sentence reading. In a hierarchical structure, subject noun + verb + numeral + classifier + object noun, the object noun is constrained by selectional restrictions of the classifier at the lower level and of the verb at the higher level, and the classifier is also constrained by the verb at the higher level. Semantic congruencies between verb, classifier, and noun were manipulated, resulting in five types of sentences: correct sentences, sentences with the single classifier-noun mismatch, sentences with the single verb-noun mismatch, sentences with the double mismatch in classifier-noun and verb-noun, and sentences with the triple mismatch in classifier-noun, verb-noun, and verb-classifier.