1 Summary
2 Phrase Structure Rules
3 Constituency Tests
Recommended publications
-
Ditransitive Verbs in Language Use
Chapter 3 Aspects of description: ditransitive verbs in language use This chapter provides a corpus-based description of individual ditransitive verbs in actual language use. First, the two verbs that are typical of ditransitivity in ICE-GB will be analysed: give and tell (see section 3.1). Second, the four habitual ditransitive verbs in ICE-GB (i.e. ask, show, send and offer) will be scrutinised (see section 3.2). Particular emphasis in all the analyses will be placed on the different kinds of routines that are involved in the use of ditransitive verbs. The description of peripheral ditransitive verbs, on the other hand, will centre on the concepts of grammatical institutionalisation and conventionalisation (see section 3.3). At the end of this chapter, the two aspects will be discussed in a wider setting in the assessment of the role of linguistic routine and creativity in the use of ditransitive verbs (see section 3.4). 3.1 Typical ditransitive verbs in ICE-GB In the present study, typical ditransitive verbs are verbs which are frequently attested in ICE-GB in general (i.e. > 700 occurrences) and which are associated with an explicit ditransitive syntax in some 50% of all occurrences or more (cf. Figure 2.4, p. 84). These standards are met by give (see section 3.1.1) and tell (see section 3.1.2). 3.1.1 GIVE In light of recent psycholinguistic and cognitive-linguistic evidence, it is not surprising that the most frequent ditransitive verb in ICE-GB is GIVE. Experimental data have led Ninio (1999), for example, to put forward the hypothesis that children initially acquire constructions through one (or very few) 'pathbreaking verb(s)'. -
Chapter II: The Types of Phrasal Verbs in the Movie Snow White and the Huntsman by Rupert Sanders
CHAPTER II THE TYPES OF PHRASAL VERBS IN THE MOVIE SNOW WHITE AND THE HUNTSMAN BY RUPERT SANDERS In this chapter, the researcher analyses the types of phrasal verbs, addressing the first research question. The researcher categorised the types and forms of phrasal verbs, which combine a verb with an adverb, and sometimes a preposition. 2.1. Types of Phrasal Verbs in the movie Snow White and the Huntsman Heaton (1985:103) considers phrasal verbs to be compound verbs that result from combining a verb with an adverb or a preposition, the resulting compound verb being idiomatic. The phrasal verb is an important part of grammar found throughout the English language. According to Andrea Rosalia in her book "A Holistic Approach to Phrasal Verbs", phrasal verbs are considered a very important and frequently occurring feature of the English language. First, they are common in everyday conversation, and non-native speakers who wish to sound natural when speaking this language need to learn their grammar in order to know how to produce them correctly. Second, the habit of inventing phrasal verbs has been a source of great enrichment of the language (Andrea Rosalia, 2012:16). A grammarian such as Eduard Vlad (1998:93) describes phrasal verbs as "combinations of a lexical verb and adverbial particle". This means that a phrasal verb is always a verb followed by a particle, either one particle or two; verbs with two particles are called multi-word verbs. -
Ling 130 Notes: Predicate Logic and English Syntax
Ling 130 Notes: Predicate Logic and English Syntax Sophia A. Malamud February 15, 2011

1 Characteristic functions

The characteristic function of a set:

(1) Let A be a set. Then charA, the characteristic function of A, is the function F such that, for any x ∈ A, F(x) = 1, and for any x ∉ A, F(x) = 0.

Schönfinkelization:

(2) U = {a, b, c}
(3) The relation "fond of": Rfond-of = {<a,b>, <b,c>, <c,c>}
(4) The characteristic function of Rfond-of (takes all combinations of two elements, gives 0/1):

CharRfond-of:
<a,a> → 0
<a,b> → 1
<a,c> → 0
<b,a> → 0
<b,b> → 0
<b,c> → 1
<c,a> → 0
<c,b> → 0
<c,c> → 1

(5) Turning n-ary functions into multiple embedded 1-ary functions: schönfinkelization. Left-to-right schönfinkelization:

a → [a → 0, b → 1, c → 0]
b → [a → 0, b → 0, c → 1]
c → [a → 0, b → 0, c → 1]

EXERCISE 1: Given the following Universe, spell out the characteristic function of the relation in (7) and schönfinkelize it left-to-right.

(6) U = {d, e}
(7) R = {<d,d,d>, <d,e,d>, <e,d,d>, <e,e,e>, <e,e,d>}

2 Syntactic composition

A formal language is a set of strings - finite sequences of minimal units (words/morphemes, for natural languages) - with meaning. The "machine" that generates those strings and their corresponding meanings is its grammar. A grammar must specify the following three components:
• a lexicon, which contains every minimal unit with meaning (= every word, for this course) and its grammatical category;
• a syntax, that is, a set of rules that tells you how the minimal units combine to form longer units, how these longer units combine to form yet longer units, and so forth until we form full complex sentences; and
• a semantics, which determines what semantic operation or function corresponds to each syntactic rule and combines the "atomic" word meanings to build the meaning of the complete sentence. -
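The characteristic function and its left-to-right schönfinkelization (currying) can be sketched in a few lines of Python. This is an illustration added here, not part of the original notes; names such as `char_function` and `schonfinkelize` are invented:

```python
# The universe and the "fond of" relation from the notes.
U = {"a", "b", "c"}
R_fond_of = {("a", "b"), ("b", "c"), ("c", "c")}

def char_function(relation):
    """Characteristic function: maps a pair to 1 if it is in the
    relation, and to 0 otherwise."""
    return lambda pair: 1 if pair in relation else 0

def schonfinkelize(relation, universe):
    """Turn the binary relation's characteristic function into nested
    unary functions: x -> (y -> 0/1), read left to right."""
    return {x: {y: 1 if (x, y) in relation else 0 for y in sorted(universe)}
            for x in sorted(universe)}

char = char_function(R_fond_of)      # char(("a", "b")) is 1
curried = schonfinkelize(R_fond_of, U)  # curried["a"]["b"] is 1
```

Representing the curried function as nested dictionaries mirrors the arrow tables in (5): the outer key is the first argument, the inner mapping is the 1-ary function it returns.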
Serial Verb Constructions Revisited: a Case Study from Koro
Serial Verb Constructions Revisited: A Case Study from Koro

By Jessica Cleary-Kemp

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Linguistics in the Graduate Division of the University of California, Berkeley. Committee in charge: Associate Professor Lev D. Michael, Chair; Assistant Professor Peter S. Jenks; Professor William F. Hanks. Summer 2015.

Abstract

In this dissertation a methodology for identifying and analyzing serial verb constructions (SVCs) is developed, and its application is exemplified through an analysis of SVCs in Koro, an Oceanic language of Papua New Guinea. SVCs involve two main verbs that form a single predicate and share at least one of their arguments. In addition, they have shared values for tense, aspect, and mood, and they denote a single event. The unique syntactic and semantic properties of SVCs present a number of theoretical challenges, and thus they have invited great interest from syntacticians and typologists alike. But characterizing the nature of SVCs and making generalizations about the typology of serializing languages has proven difficult. There is still debate about both the surface properties of SVCs and their underlying syntactic structure. The current work addresses some of these issues by approaching serialization from two angles: the typological and the language-specific. On the typological front, it refines the definition of 'SVC' and develops a principled set of cross-linguistically applicable diagnostics. -
Language Structure: Phrases
"Productivity", a property of language
• Definition: Language is an open system. We can produce potentially an infinite number of different messages by combining elements differently.
• Example: Words into phrases.

An Example of Productivity
• Human language is a communication system that bears some similarities to other animal communication systems, but is also characterized by certain unique features. (24 words)
• I think that human language is a communication system that bears some similarities to other animal communication systems, but is also characterized by certain unique features, which are fascinating in and of themselves. (33 words)
• I have always thought, and I have spent many years verifying, that human language is a communication system that bears some similarities to other animal communication systems, but is also characterized by certain unique features, which are fascinating in and of themselves. (42 words)
• Although some people might not agree with me, I have always thought…

Creating Infinite Messages
• Discrete elements: Words, Phrases
• Selection: Ease, Meaning, Identity
• Combination: Rules of organization

Models of Word (re)Combination
1. Word chains (Markov model): Phrase-level meaning is derived from understanding each word as it is presented in the context of immediately adjacent words.
2. Hierarchical model: There are long-distance dependencies between words in a phrase, and these inform the meaning of the entire phrase.

Markov Model
Rule: Select and concatenate (according to meaning and what types of words should occur next to each other).
[Figure: a word-chain diagram linking words such as "man", "bites", "jumps", "over", and "house".]
• Assumption: Only adjacent words are meaningfully (and lawfully) related. -
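A toy word-chain generator makes the Markov model concrete: it records only which words may immediately follow which, so every generated string is locally well-formed even when it is globally odd. The sketch and its miniature corpus are invented for illustration and are not from the original slides:

```python
import random

# Miniature training "corpus" (invented).
corpus = "the man bites the dog the dog jumps over the house".split()

# Build the word chain: each word maps to the words observed after it.
chain = {}
for w, nxt in zip(corpus, corpus[1:]):
    chain.setdefault(w, []).append(nxt)

def generate(start, length, seed=0):
    """Select-and-concatenate: repeatedly pick a lawful next word.

    Stops early if the current word was never seen with a successor."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = chain.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

sentence = generate("the", 6)
```

Because the model only knows adjacent-word transitions, it cannot capture the long-distance dependencies that the hierarchical model is designed for, which is exactly the contrast the slide draws.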
Flexible Valency in Chintang.∗
Flexible valency in Chintang.∗ Robert Schikowski (University of Zürich), Netra Paudyal (University of Leipzig), and Balthasar Bickel (University of Zürich) 1 The Chintang language Chintang [ˈts̻ʰiɳʈaŋ] (ISO639.3: ctn) is a Sino-Tibetan language of Nepal. It is named after the village where it is mainly spoken. The village lies in the hills of Eastern Nepal, the bigger cities within a day's reach being Dhankuṭā and Dharān. There are no official data on the number of speakers, but we estimate there to be around 4,000 - 5,000 speakers. Most speakers are bi- or trilingual, with Nepali (the Indo-Aryan lingua franca of Nepal) as one and Bantawa (a related Sino-Tibetan language) as the other additional language. Monolingual speakers are still to be found mainly among elderly women, whereas a considerable portion of the younger generation is rapidly shifting to Nepali. Genealogically, Chintang belongs to the Kiranti group. The Kiranti languages are generally accepted to belong to the large Sino-Tibetan (or Tibeto-Burman) family, although their position within this family is controversial (cf. e.g. Thurgood 2003, Ebert 2003). Based on phonological evidence, Chintang belongs to the Eastern subgroup of Kiranti (Bickel et al. 2010). There are two major dialects (Mulgaũ and Sambugaũ) named after the areas where they are spoken. The differences between them concern morphology and the lexicon but, as far as we know, not syntax, and so we will not distinguish between dialects in this chapter. For all examples the source has been marked behind the translation. Wherever possible, we take data from the Chintang Language Corpus (Bickel et al. -
Chapter 30 HPSG and Lexical Functional Grammar Stephen Wechsler the University of Texas Ash Asudeh University of Rochester & Carleton University
Chapter 30 HPSG and Lexical Functional Grammar Stephen Wechsler The University of Texas Ash Asudeh University of Rochester & Carleton University This chapter compares two closely related grammatical frameworks, Head-Driven Phrase Structure Grammar (HPSG) and Lexical Functional Grammar (LFG). Among the similarities: both frameworks draw a lexicalist distinction between morphology and syntax, both associate certain words with lexical argument structures, both employ semantic theories based on underspecification, and both are fully explicit and computationally implemented. The two frameworks make available many of the same representational resources. Typical differences between the analyses proffered under the two frameworks can often be traced to concomitant differences of emphasis in the design orientations of their founding formulations: while HPSG's origins emphasized the formal representation of syntactic locality conditions, those of LFG emphasized the formal representation of functional equivalence classes across grammatical structures. Our comparison of the two theories includes a point by point syntactic comparison, after which we turn to an exposition of Glue Semantics, a theory of semantic composition closely associated with LFG. 1 Introduction Head-Driven Phrase Structure Grammar is similar in many respects to its sister framework, Lexical Functional Grammar or LFG (Bresnan et al. 2016; Dalrymple et al. 2019). Both HPSG and LFG are lexicalist frameworks in the sense that they distinguish between the morphological system that creates words and the syntax proper that combines those fully inflected words into phrases and sentences. Stephen Wechsler & Ash Asudeh. 2021. HPSG and Lexical Functional Grammar. In Stefan Müller, Anne Abeillé, Robert D. Borsley & Jean-Pierre Koenig (eds.), Head-Driven Phrase Structure Grammar: The handbook. -
Common and Distinct Neural Substrates for Pragmatic, Semantic, and Syntactic Processing of Spoken Sentences: an Fmri Study
Common and Distinct Neural Substrates for Pragmatic, Semantic, and Syntactic Processing of Spoken Sentences: An fMRI Study G. R. Kuperberg and P. K. McGuire, Institute of Psychiatry, London; E. T. Bullmore, Institute of Psychiatry, London and the University of Cambridge; M. J. Brammer, S. Rabe-Hesketh, I. C. Wright, D. J. Lythgoe, S. C. R. Williams, and A. S. David, Institute of Psychiatry, London Abstract & Extracting meaning from speech requires the use of pragmatic, semantic, and syntactic information. A central question is: Does the processing of these different types of linguistic information have common or distinct neuroanatomical substrates? We addressed this issue using functional magnetic resonance imaging (fMRI) to measure neural activity when subjects listened to spoken normal sentences contrasted with sentences that had either (A) pragmatic, (B) semantic (selection restriction), or (C) syntactic (subcategorical) violations. All three contrasts revealed robust activation of the left-inferior-temporal/fusiform gyrus. The results showed (1) a greater difference between conditions in activation of the left-superior-temporal gyrus for the pragmatic experiment than the semantic/syntactic experiments; (2) a greater difference between conditions in activation of the right-superior and middle-temporal gyrus in the semantic experiment than in the syntactic experiment; and (3) no regions activated to a greater degree in the syntactic experiment than in the semantic experiment. These data show that, while left- and right-superior-temporal regions may be differentially involved in processing pragmatic and lexico-semantic information within sentences, the left-
Modeling Subcategorization Through Co-occurrence
Modeling Subcategorization Through Co-occurrence: LexIt, A Computational Lexical Resource for Italian Verbs
Gabriella Lapesa (University of Osnabrück, Institute of Cognitive Science) and Alessandro Lenci (University of Pisa, Department of Linguistics)
Explorations in Syntactic Government and Subcategorisation, University of Cambridge, 2nd September 2011

Outline
1 Introducing LexIt (The Project; Distributional Profiles)
2 Building Distributional Profiles (Pre-processing; Subcategorization Frames; Lexical sets; Selectional preferences)
3 Ongoing Work
4 Conclusions

Computational approaches to argument structure
The automatic acquisition of lexical information from corpora is a longstanding research avenue in computational linguistics:
• subcategorization frames (Korhonen 2002, Schulte im Walde 2009, etc.)
• selectional preferences (Resnik 1993, Light & Greiff 2002, Erk et al. 2010, etc.)
• verb classes (Merlo & Stevenson 2001, Schulte im Walde 2006, Kipper-Schuler et al. 2008, etc.)
Corpus-based information has been used to build lexical resources aiming at characterizing the valence properties of predicates fully on distributional grounds.

LexIt: a computational lexical resource for Italian
LexIt is a computational framework for the automatic acquisition and exploration of corpus-based distributional profiles of Italian verbs, nouns and adjectives. LexIt is publicly available through a web interface: http://sesia.humnet.unipi.it/lexit/ LexIt is the first large-scale resource of such type for Italian, cf. VALEX for English (Korhonen et al. 2006), LexSchem for French (Messiant et al. -
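As a rough illustration of what a distributional profile is, one can count the subcategorization frames a verb occurs with and normalise the counts to relative frequencies. This is a hypothetical sketch, not LexIt's actual pipeline; the (verb, frame) observations and the frame labels are invented:

```python
from collections import Counter, defaultdict

# Invented (verb, frame) observations, as might come from a parsed corpus.
observations = [
    ("dare", "subj#obj#iobj"), ("dare", "subj#obj"),
    ("dare", "subj#obj#iobj"), ("parlare", "subj#comp-di"),
    ("parlare", "subj"),
]

# Count how often each verb occurs with each frame.
profiles = defaultdict(Counter)
for verb, frame in observations:
    profiles[verb][frame] += 1

def profile(verb):
    """Return the verb's distributional profile as relative frequencies."""
    total = sum(profiles[verb].values())
    return {frame: n / total for frame, n in profiles[verb].items()}
```

On real data the same idea scales up: the relative frequency of each frame in a verb's profile is the kind of distributional evidence a resource like this exposes for exploration.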
Lexical-Functional Grammar and Order-Free Semantic Composition
COLING 82, J. Horecký (ed.), North-Holland Publishing Company © Academia, 1982 Lexical-Functional Grammar and Order-Free Semantic Composition Per-Kristian Halvorsen Norwegian Research Council's Computing Center for the Humanities and Center for Cognitive Science, MIT This paper summarizes the extension of the theory of lexical-functional grammar to include a formal, model-theoretic, semantics. The algorithmic specification of the semantic interpretation procedures is order-free, which distinguishes the system from other theories providing model-theoretic interpretation for natural language. Attention is focused on the computational advantages of a semantic interpretation system that takes as its input functional structures as opposed to syntactic surface-structures. A pressing problem for computational linguistics is the development of linguistic theories which are supported by strong independent linguistic argumentation, and which can, simultaneously, serve as a basis for efficient implementations in language processing systems. Linguistic theories with these properties make it possible for computational implementations to build directly on the work of linguists both in the area of grammar-writing, and in the area of theory development (cf. universal conditions on anaphoric binding, filler-gap dependencies etc.). Lexical-functional grammar (LFG) is a linguistic theory which has been developed with equal attention being paid to theoretical linguistic and computational processing considerations (Kaplan & Bresnan 1981). The linguistic theory has ample and broad motivation (vide the papers in Bresnan 1982), and it is transparently implementable as a syntactic parsing system (Kaplan & Halvorsen forthcoming). LFG takes grammatical relations to be of primary importance (as opposed to the transformational theory where grammatical functions play a subsidiary role). -
Verbs in Relation to Verb Complementation 11-69
Complementation of verbs and adjectives

They may be either copular (clause pattern SVC), or complex transitive (clause pattern SVOC):

SVC: break even, plead guilty, lie low
SVOC: cut N short, work N loose, rub N dry

Sometimes the idiom contains additional elements, such as an infinitive (play hard to get) or a preposition (ride roughshod over ...). (The 'N' above indicates a direct object in the case of transitive examples.)

(b) VERB-VERB COMBINATIONS
In these idiomatic constructions (cf 3.49-51, 16.52), the second verb is nonfinite, and may be either an infinitive:

make do with, make (N) do, let (N) go, let (N) be

or a participle, with or without a following preposition:

put paid to, get rid of, have done with
leave N standing, send N packing, knock N flying, get going

For monotransitive verbs with a noun phrase object, we can give only a sample of common verbs. In any case, it should be borne in mind that the list of verbs conforming to a given pattern is difficult to specify exactly: there are many differences between one variety of English and another in respect of individual verbs, and many cases of marginal acceptability.

Note: The term 'valency' (or 'valence') is sometimes used, instead of complementation, for the way in which a verb determines the kinds and number of elements that can accompany it in the clause. Valency, however, includes the subject of the clause, which is excluded (unless extraposed) from complementation.

Verbs in intransitive function
16.19 Where no complementation occurs, the verb is said to have an INTRANSITIVE use. Three types of verb may be mentioned in this category:
(1) 'PURE' INTRANSITIVE VERBS, which do not take an object at all (or at least do so only very rarely): John has arrived. -
The Representation of Third Person and Its Consequences for Person-Case Effects
Citation: Nevins, Andrew. 2007. The representation of third person and its consequences for person-case effects. Natural Language and Linguistic Theory 25: 273-313. http://dx.doi.org/10.1007/s11049-006-9017-2. Received: 13 June 2005 / Accepted: 3 July 2006 / Published online: 28 April 2007.

Abstract In modeling the effects of the Person-Case Constraint (PCC), a common claim is that 3rd person "is not a person". However, while this claim does work in the syntax, it creates problems in the morphology. For example, characterizing the well-known "spurious se effect" in Spanish simply cannot be done without reference to 3rd person. Inspired by alternatives to underspecification that have emerged in phonology (e.g., Calabrese, 1995), a revised featural system is proposed, whereby syntactic agreement may be relativized to certain values of a feature, in particular, the contrastive and marked values.
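To make the featural idea concrete, here is a toy Python encoding of person as binary features together with a strong-PCC check on a ditransitive clitic cluster. The feature inventory ([±participant], [±author]) follows common practice in the literature, but the check itself is an invented simplification for illustration, not the paper's formal proposal:

```python
# Person values decomposed into binary features (a common inventory):
# 1st = [+participant, +author], 2nd = [+participant, -author],
# 3rd = [-participant, -author].
PERSON = {
    1: {"participant": True, "author": True},
    2: {"participant": True, "author": False},
    3: {"participant": False, "author": False},
}

def strong_pcc_ok(indirect_obj, direct_obj):
    """Strong PCC (simplified): in a clitic cluster, the direct object
    must be 3rd person, i.e. [-participant]."""
    return not PERSON[direct_obj]["participant"]
```

Stating the constraint over a feature value ([participant]) rather than over the person labels themselves is what lets 3rd person do real work in the morphology while remaining "not a person" in the relevant syntactic sense.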