Compositionality Without Word Boundaries


Acknowledgments

We would like to express our gratitude to the following colleagues for their valuable assistance in reviewing abstracts for Semantics and Linguistic Theory 22: Barbara Abbott, Ted Fernald, Andrew Koontz-Garboden, Mats Rooth, Dorit Abusch, Tim Fernando, Marcus Kracht, Daniel Rothschild, Maria Aloni, Marcelo Ferreira, Angelika Kratzer, Susan Rothstein, Luis Alonso-Ovalle, Hana Filip, Manfred Krifka, Hotze Rullmann, Peter Alrenga, Kai von Fintel, Kiyomi Kusumoto, Jerry Sadock, Pranav Anand, Danny Fox, Bill Ladusaw, Uli Sauerland, Ana Arregui, Itamar Francez, Utpal Lahiri, Philippe Schlenker, Ron Artstein, Lyn Frazier, Fred Landman, Bernhard Schwarz, Nicholas Asher, Jon Gajewski, Peter Lasersohn, Florian Schwarz, Rebekah Baglini, Mark Gawron, Daniel Lassiter, Roger Schwarzschild, Alan Bale, Anastasia Giannakidou, Chungmin Lee, Yael Sharvit, Chris Barker, Brendan Gillon, Beth Levin, Junko Shimoyama, David Beaver, Delia Graff Fara, Jeff Lidz, Mandy Simons, John Beavers, Aidan Gray, Jo-Wang Lin, Benjamin Spector, Sigrid Beck, Jeroen Groenendijk, Sophia Malamud, Jason Stanley, Rajesh Bhatt, Patrick Grosz, Luisa Marti, Penka Stateva, Maria Bittner, Elena Guerzoni, Lisa Matthewson, Markus Steinbach, M. Ryan Bochnak, Christine Gunlogson, Ora Matushansky, Tamina Stephenson, Andrea Bonomi, Martin Hackl, William McClure, Isidora Stojanovic, Adrian Brasoveanu, Valentine Hacquard, Eric McCready, Anna Szabolcsi, Seth Cable, Chung-Hye Han, Louise McNally, Shoichi Takahashi, Ivano Caponigro, Nancy Hedberg, Paula Menendez-Benito, Yukinori Takubo, Greg Carlson, Irene Heim, Jason Merchant, Christopher Tancredi, Elena Castroviejo-Miro, Daphna Heller, Friederike Moltmann, Alice ter Meulen, Emmanuel Chemla, Elena Herburger, Marcin Morzycki, Satoshi Tomioka, Anna Chernilovskaya, Klaus von Heusinger, Sarah Murray, Judith Tonhauser, Gennaro Chierchia, James Higginbotham, Kimiko Nakanishi, Galit W. Sassoon, Ariel Cohen, Jack Hoeksema, Rick Nouwen, Arnim von Stechow, Cleo Condoravdi, Angeliek van Hout, Toshiyuki Ogihara, Michael Wagner, Bridget Copley, Sabine Iatridou, Roumyana Pancheva, Gregory Ward, Stephen Crain, Tania Ionin, Barbara Partee, Tom Werner, Henriette de Swart, Michela Ippolito, Jeffrey Pelletier, Malte Willer, Amy Rose Deal, Pauline Jacobson, Doris Penka, Yoad Winter, Paul Dekker, Gerhard Jaeger, Orin Percus, Ming Xiang, Ashwini Deo, Hans Kamp, Christopher Pinon, Masahiro Yamada, Jenny Doetjes, Stefan Kaufmann, Paul Portner, Suwon Yoon, Edit Doron, Magdalena Kaufmann, Chris Potts, Ede Zimmermann, Veena Dwivedi, Edward Keenan, Josep Quer, Malte Zimmermann, Regine Eckardt, Chris Kennedy, Jessica Rett, Alessandro Zucchi, Paul Elbourne, Ezra Keshet, Craige Roberts, Martina Faller, Peter Klecha, Maribel Romero, Donka Farkas, Nathan Klinedinst, and Robert van Rooij.

Finally, we wish to express our sincere appreciation to the following units at the University of Chicago for providing financial and organizational support for the conference: the Division of Humanities, the Department of Linguistics, the Office of the Provost, the Department of Philosophy, the Franke Institute for the Humanities, and the Chicago Linguistic Society.

Chris Barker
How to sprout: Sluicing without deletion?

Case matching provides a well-known argument that sluicing may involve deletion of silent syntactic structure (e.g., Merchant 2001):

(1) a. Someone_nom spoke to John, but I don't know who_nom/*whom_acc <spoke to John>.
    b. John spoke to someone_acc, but I don't know whom_acc <John spoke to> __.

If the sluice gap contains a copy of the antecedent clause, the case of the wh-word is determined just as in any non-sluiced embedded question. Jäger (2001, 2005) explains how to guarantee case matching without resorting to silent syntactic structure. However, Jäger's (2005:228) analysis does not generalize to sprouting (Chung, Ladusaw & McCloskey 1995):

(2) John left at some time, but I don't know when <he left> __.
Jäger's analysis requires the antecedent clause to contain an indefinite, but the essence of sprouting is that there is no overt indefinite or any other overt sluicing trigger. I will provide a primarily semantic, anaphoric analysis that generalizes smoothly to sprouting cases. Although I will share Jäger's starting point (type logical grammar), my solution will differ from his not only in its empirical coverage, but conceptually and technically as well.

Fragment. Let us reason about syntactic and semantic composition. If two constituents, A and Γ, can combine to form a complex expression in category B (i.e., if A · Γ ⊢ B, where '·' indicates normal syntactic composition), and if we remove A, then what remains is an expression that clearly can combine with an A to its left to form a B: we conclude that Γ ⊢ A\B. This is simple categorial grammar. Likewise (but less familiarly), if Γ[A] is a syntactic structure containing a specific occurrence of A inside of it, and this composite structure is in category B (i.e., if Γ[A] ⊢ B), then removing A from Γ produces an expression containing an A gap: λx.Γ[x] ⊢ A⇾B, where 'A⇾B' is the category of a B missing an A somewhere inside of it. I will write A ∘ λx.Γ[x] ⊢ B, where '∘' is the syntactic operation of plugging A into the gap left in Γ, and where 'λx...x...' keeps track of the syntactic position from which A has been removed.

For instance, if the syntactic structure John · ((spoke · (to · someone)) · yesterday) has category S, and someone has category DP_acc, then by the reasoning given above we are able to infer that λx.(John · ((spoke · (to · x)) · yesterday)) ⊢ DP_acc⇾S. Since this is the expression that the sluice gap is anaphoric to (and that supplies its semantic content), we correctly predict that the sluice gap will combine only with an accusative wh-word, and not with a nominative one.
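The slash and gap reasoning above can be sketched in a few lines of code. This is a minimal, hypothetical encoding for illustration only: the tuple representation of categories and the function names are illustrative assumptions, not part of the fragment.

```python
# Hypothetical encoding of the two composition modes (illustrative only):
# categories are atoms ('dp_nom', 's'), ('\\', A, B) for A\B, ('/', B, A)
# for B/A, and ('=>', A, B) for A=>B, "a B missing an A somewhere inside".

def combine(left, right):
    """Ordinary composition '.': backward and forward application."""
    if isinstance(right, tuple) and right[0] == '\\' and right[1] == left:
        return right[2]                # A . (A\B)  yields  B
    if isinstance(left, tuple) and left[0] == '/' and left[2] == right:
        return left[1]                 # (B/A) . A  yields  B
    return None

def plug(filler, gapped):
    """The 'o' mode: plug a filler into the gap of an A=>B expression."""
    if isinstance(gapped, tuple) and gapped[0] == '=>' and gapped[1] == filler:
        return gapped[2]               # A o (A=>B)  yields  B
    return None

DP_NOM, DP_ACC, S = 'dp_nom', 'dp_acc', 's'
assert combine(DP_NOM, ('\\', DP_NOM, S)) == S   # subject . VP = S

# The antecedent of (1b) minus "someone" is an accusative-DP-gapped S:
remnant = ('=>', DP_ACC, S)
assert plug(DP_ACC, remnant) == S      # an accusative filler fits the gap
assert plug(DP_NOM, remnant) is None   # a nominative filler is rejected
```

Since the sluice gap is anaphoric to the remnant, only a wh-word of the matching case can plug it, which is how case matching falls out without silent syntactic structure.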
We can implement the reasoning developed above in the form of a practical (i.e., decidable) fragment using Gentzen sequent inference rules. Although the talk will not presuppose any previous familiarity with type logical grammar, the notation is as in Moortgat 1997:

  Γ ⊢ A   Δ[B] ⊢ Z           A · Γ ⊢ B          Γ ⊢ A   Δ[B] ⊢ Z           Γ · A ⊢ B
  ----------------- \L       ---------- \R      ----------------- /L       ---------- /R
  Δ[Γ · A\B] ⊢ Z             Γ ⊢ A\B            Δ[B/A · Γ] ⊢ Z             Γ ⊢ B/A

  Γ ⊢ A   Δ[B] ⊢ Z           A ∘ Γ ⊢ B          Γ ⊢ A   Δ[B] ⊢ Z           Γ ∘ A ⊢ B
  ----------------- ⇾L       ---------- ⇾R      ----------------- ⇽L       ---------- ⇽R
  Δ[Γ ∘ A⇾B] ⊢ Z             Γ ⊢ A⇾B            Δ[B⇽A ∘ Γ] ⊢ Z             Γ ⊢ B⇽A

The \R and ⇾R rules have already been discussed; the other inferences can easily be justified. We need one additional rule to allow for in-situ scope-taking:

  Γ[A]  ⇌  A ∘ λx.Γ[x]

This rule says that A ∘ λx.Γ[x] is an equivalent way of writing the result of plugging A into the gap in λx.Γ[x], and that the two forms can be freely intersubstituted.

Example. Then we have the following derivation for a simple sluicing example, Someone left, but I don't know who (bidk is an amalgam representing but-I-don't-know). Reading from the bottom up, the key lines of the derivation are:

  λx.(x · left) ∘ (sluicegap ∘ λz.λy.((someone · y) · (bidk · (who · z)))) ⊢ s
  λx.(x · left) ∘ λy.((someone · y) · (bidk · (who · sluicegap))) ⊢ s
  (someone ∘ λx.(x · left)) · (bidk · (who · sluicegap)) ⊢ s
  (someone · left) · (bidk · (who · sluicegap)) ⊢ s

Starting from the bottom: someone "raises" to take scope over the antecedent clause; the remnant of someone, i.e., λx.(x · left), raises to take scope over the entire sluice; then the sluice gap raises and tucks in beneath the raised remnant.
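The scope rule Γ[A] ⇌ A ∘ λx.Γ[x] can be simulated concretely: extracting one occurrence of an item from a nested structure leaves behind a context function, and plugging the item back into that context restores the original structure. A minimal sketch, under the illustrative assumption that syntactic structures are nested pairs (the function name raise_out is hypothetical, not part of the fragment):

```python
# Hypothetical simulation of the scope rule G[A] <=> A o \x.G[x]:
# abstracting one occurrence of `target` out of a nested-pair structure
# returns the context \x.G[x] as a Python function.

def raise_out(struct, target):
    """Return a context c with c(target) == struct, or None if absent."""
    if struct == target:
        return lambda x: x             # the trivial context
    if isinstance(struct, tuple):
        left, right = struct
        c = raise_out(left, target)
        if c is not None:
            return lambda x, c=c: (c(x), right)
        c = raise_out(right, target)
        if c is not None:
            return lambda x, c=c: (left, c(x))
    return None

antecedent = ('someone', 'left')
ctx = raise_out(antecedent, 'someone')   # \x.(x . left), the remnant
assert ctx('someone') == antecedent      # plugging back restores G[A]
assert ctx('who') == ('who', 'left')     # the same context, new filler
```

The context function plays the role of the remnant λx.(x · left): it records the syntactic position the raised item came from, so the two forms can be freely intersubstituted.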
The final ingredient is to assign to the sluice gap the duplicator meaning λx.κxx. Then the standard (Curry-Howard) semantics for multi-modal type logical grammars (for now, see, e.g., Moortgat 1997 for details) delivers the denotation of the remnant as the content of the sluice gap.

Sprouting. The fragment already generates sprouting examples. In the derivation above, the wh-word who needs an S containing a DP gap in order to form an embedded question, and so has category q/(dp⇾s). The wh-word when in (2), then, requires a clause with an adverbial gap, i.e., q/(adv⇾s), where adv = (dp\s)\(dp\s). But assuming that the empty structure ( ) is a right identity for '·', i.e., that Γ · ( ) ≡ Γ, we have this simple proof:

  dp\s ⊢ dp\s
  ------------------- \R
  ( ) ⊢ (dp\s)\(dp\s)

In other words, starting with the tautology dp\s ⊢ dp\s, if we remove the leftmost constituent from the structure on the left of the turnstile, the remaining (empty) structure must have category adv (with the identity function as its Curry-Howard semantic value). This means that we can freely add a silent, identity-function-denoting adverb to the antecedent clause. There is no need to posit silent lexical content such as at some time, since
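Both semantic ingredients, the duplicator λx.κxx and the identity-function adverb, can be written down directly as lambda terms. A minimal sketch in code; the Python encodings and names here are illustrative assumptions, not the fragment's official semantics.

```python
# Hypothetical lambda-term encodings of the two semantic ingredients.

# The sluice gap's duplicator meaning \x.kxx: it hands the continuation k
# two copies of the remnant's content x, so the remnant's denotation
# serves as the content of the sluice gap as well.
def duplicator(k):
    return lambda x: k(x)(x)

pair = lambda a: lambda b: (a, b)                # a curried test continuation
assert duplicator(pair)('LEFT') == ('LEFT', 'LEFT')

# The silent adverb licensed by the proof ( ) |- (dp\s)\(dp\s): its
# Curry-Howard value is just the identity function on VP meanings.
silent_adv = lambda vp: vp
vp = lambda x: ('left', x)                       # a toy VP denotation
assert silent_adv(vp)('john') == vp('john')      # the adverb adds nothing
```

Because the silent adverb denotes the identity function, adding it to the antecedent clause changes nothing semantically, which is exactly why no silent lexical content like at some time is needed for sprouting.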