Using Sentence Type Information for Syntactic Category Acquisition

Stella Frank ([email protected]), Sharon Goldwater ([email protected]), Frank Keller ([email protected])
School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK

Proceedings of the 2010 Workshop on Cognitive Modeling and Computational Linguistics, ACL 2010, pages 1–8, Uppsala, Sweden, 15 July 2010. © 2010 Association for Computational Linguistics

Abstract

In this paper we investigate a new source of information for syntactic category acquisition: sentence type (question, declarative, imperative). Sentence type correlates strongly with intonation patterns in most languages; we hypothesize that these intonation patterns are a valuable signal to a language learner, indicating different syntactic patterns. To test this hypothesis, we train a Bayesian Hidden Markov Model (and variants) on child-directed speech. We first show that simply training a separate model for each sentence type decreases performance due to sparse data. As an alternative, we propose two new models based on the BHMM in which sentence type is an observed variable which influences either emission or transition probabilities. Both models outperform a standard BHMM on data from English, Cantonese, and Dutch. This suggests that sentence type information available from intonational cues may be helpful for syntactic acquisition cross-linguistically.

1 Introduction

Children acquiring the syntax of their native language have access to a large amount of contextual information. Acquisition happens on the basis of speech, and the acoustic signal carries rich prosodic and intonational information that children can exploit. A key task is to separate the acoustic properties of a word from the underlying sentence intonation.
Infants become attuned to the pragmatic and discourse functions of utterances as signalled by intonation extremely early; in this they are helped by the fact that intonation contours of child- and infant-directed speech are especially well differentiated between sentence types (Stern et al., 1982; Fernald, 1989). Children learn to use appropriate intonational melodies to communicate their own intentions at the one-word stage, before overt syntax develops (Snow and Balog, 2002).

It follows that sentence type information (whether a sentence is declarative, imperative, or a question), as signaled by intonation, is readily available to children by the time they start to acquire syntactic categories. Sentence type also has an effect on sentence structure in many languages (most notably on word order), so we hypothesise that sentence type is a useful cue for syntactic category learning. We test this hypothesis by incorporating sentence type information into an unsupervised model of part of speech tagging.

We are unaware of previous work investigating the usefulness of this kind of information for syntactic category acquisition. In other domains, intonation has been used to identify sentence types as a means of improving speech recognition language models. Specifically, Taylor et al. (1998) found that using intonation to recognize dialogue acts (which to a significant extent correspond to sentence types) and then using a specialized language model for each type of dialogue act led to a significant decrease in word error rate.

In the remainder of this paper, we first present the Bayesian Hidden Markov Model (BHMM; Goldwater and Griffiths, 2007) that is used as the baseline model of category acquisition, as well as our extensions to the model, which incorporate sentence type information. We then discuss the distinctions in sentence type that we used and our evaluation measures, and finally our experimental results. We perform experiments on corpora in four different languages: English, Spanish, Cantonese, and Dutch. Our results on Spanish show no difference between the baseline and the models incorporating sentence type, possibly due to the small size of the Spanish corpus. Results on all other corpora show a small improvement in performance when sentence type is included as a cue to the learner. These cross-linguistic results suggest that sentence type may be a useful source of information to children acquiring syntactic categories.

2 BHMM Models

2.1 Standard BHMM

We use a Bayesian HMM (Goldwater and Griffiths, 2007) as our baseline model. Like a standard trigram HMM, the BHMM assumes that the probability of tag $t_i$ depends only on the previous two tags, and the probability of word $w_i$ depends only on $t_i$. This can be written as

$t_i \mid t_{i-1}=t, t_{i-2}=t', \tau^{(t,t')} \sim \mathrm{Mult}(\tau^{(t,t')})$  (1)

$w_i \mid t_i=t, \omega^{(t)} \sim \mathrm{Mult}(\omega^{(t)})$  (2)

where $\tau^{(t,t')}$ are the parameters of the multinomial distribution over following tags given previous tags $(t,t')$ and $\omega^{(t)}$ are the parameters of the distribution over outputs given tag $t$. The BHMM assumes that these parameters are in turn drawn from symmetric Dirichlet priors with parameters $\alpha$ and $\beta$, respectively:

$\tau^{(t,t')} \mid \alpha \sim \mathrm{Dirichlet}(\alpha)$  (3)

$\omega^{(t)} \mid \beta \sim \mathrm{Dirichlet}(\beta)$  (4)

Using these Dirichlet priors allows the multinomial distributions to be integrated out, leading to the following predictive distributions:

$P(t_i \mid \mathbf{t}_{-i}, \alpha) = \dfrac{C(t_{i-2},t_{i-1},t_i) + \alpha}{C(t_{i-2},t_{i-1}) + T\alpha}$  (5)

$P(w_i \mid t_i, \mathbf{t}_{-i}, \mathbf{w}_{-i}, \beta) = \dfrac{C(t_i,w_i) + \beta}{C(t_i) + W_{t_i}\beta}$  (6)

where $\mathbf{t}_{-i} = t_1 \ldots t_{i-1}$, $\mathbf{w}_{-i} = w_1 \ldots w_{i-1}$, $C(t_{i-2},t_{i-1},t_i)$ and $C(t_i,w_i)$ are the counts of the trigram $(t_{i-2},t_{i-1},t_i)$ and the tag-word pair $(t_i,w_i)$ in $\mathbf{t}_{-i}$ and $\mathbf{w}_{-i}$, $T$ is the size of the tagset, and $W_{t_i}$ is the number of word types emitted by $t_i$.

Based on these predictive distributions, Goldwater and Griffiths (2007) develop a Gibbs sampler for the model, which samples from the posterior distribution over tag sequences $\mathbf{t}$ given word sequences $\mathbf{w}$, i.e., $P(\mathbf{t} \mid \mathbf{w}, \alpha, \beta) \propto P(\mathbf{w} \mid \mathbf{t}, \beta) P(\mathbf{t} \mid \alpha)$. This is done by using Equations 5 and 6 to iteratively resample each tag $t_i$ given the current values of all other tags.[1] The results show that the BHMM with Gibbs sampling performs better than the standard HMM using expectation-maximization. In particular, the Dirichlet priors in the BHMM constrain the model towards sparse solutions, i.e., solutions in which each tag emits a relatively small number of words, and in which a tag transitions to few following tags. This type of model constraint allows the model to find solutions which correspond to true syntactic parts of speech (which follow such a sparse, Zipfian distribution), unlike the uniformly sized clusters found by standard maximum likelihood estimation using EM.

In the experiments reported below, we use the Gibbs sampler described by Goldwater and Griffiths (2007) for the BHMM, and modify it as necessary for our own extended models. We also follow Goldwater and Griffiths (2007) in using Metropolis-Hastings sampling for the hyperparameters, which are inferred automatically in all experiments. A separate $\beta$ parameter is inferred for each tag.

[1] Slight corrections need to be made to Equation 5 to account for sampling tags from the middle of the sequence rather than from the end; these are given in Goldwater and Griffiths (2007) and are followed in our own samplers.
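To make the collapsed predictive distributions concrete, the following is a minimal illustrative sketch, not the authors' implementation: the class and function names (BHMMCounts, resample_tag) are invented for this example, the count tables are assumed to have been populated elsewhere, and the correction for tags in the middle of the sequence (footnote 1) is omitted.

```python
import random
from collections import defaultdict

class BHMMCounts:
    """Collapsed count tables for the baseline BHMM (illustrative sketch only)."""

    def __init__(self, num_tags, alpha, beta):
        self.T = num_tags                    # size of the tagset
        self.alpha = alpha                   # transition hyperparameter
        self.beta = beta                     # emission hyperparameter
        self.trigram = defaultdict(int)      # C(t_{i-2}, t_{i-1}, t_i)
        self.bigram = defaultdict(int)       # C(t_{i-2}, t_{i-1})
        self.emit = defaultdict(int)         # C(t_i, w_i)
        self.tag_total = defaultdict(int)    # C(t_i)
        self.word_types = defaultdict(set)   # word types emitted by each tag

    def p_transition(self, t_prev2, t_prev1, t):
        # Equation 5: (C(t_{i-2},t_{i-1},t_i) + alpha) / (C(t_{i-2},t_{i-1}) + T*alpha)
        return ((self.trigram[(t_prev2, t_prev1, t)] + self.alpha) /
                (self.bigram[(t_prev2, t_prev1)] + self.T * self.alpha))

    def p_emission(self, t, w):
        # Equation 6: (C(t_i,w_i) + beta) / (C(t_i) + W_{t_i}*beta)
        W_t = len(self.word_types[t]) or 1   # avoid 0/0 before any counts exist
        return ((self.emit[(t, w)] + self.beta) /
                (self.tag_total[t] + W_t * self.beta))


def resample_tag(counts, t_prev2, t_prev1, w):
    """One simplified Gibbs step: draw t_i given its left context and its word.

    The full sampler also conditions on the two following tags (the correction
    noted in footnote 1); that factor is omitted in this sketch.
    """
    weights = [counts.p_transition(t_prev2, t_prev1, t) * counts.p_emission(t, w)
               for t in range(counts.T)]
    return random.choices(range(counts.T), weights=weights, k=1)[0]
```

In a full collapsed sampler, the counts involving the current tag $t_i$ would be decremented before this step and re-incremented with the newly sampled value afterwards, and the procedure would sweep repeatedly over the corpus.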
2.2 BHMM with Sentence Types

We wish to add a sentence type feature to each time-step in the HMM, signalling the current sentence type. We treat sentence type ($s$) as an observed variable, on the assumption that it is observed via intonation or punctuation features (not part of our model), and that these features are informative enough to reliably distinguish sentence types (as speech recognition tasks have found to be the case, see Section 1).

In the BHMM, there are two obvious ways that sentence type could be incorporated into the generative model: either by affecting the transition probabilities or by affecting the emission probabilities. The first case can be modeled by adding $s_i$ as a conditioning variable when choosing $t_i$, replacing line 1 from the BHMM definition with the following:

$t_i \mid s_i=s, t_{i-1}=t, t_{i-2}=t', \tau^{(s,t,t')} \sim \mathrm{Mult}(\tau^{(s,t,t')})$  (7)

We will refer to this model, illustrated graphically in Figure 1, as the BHMM-T. It assumes that the distribution over $t_i$ depends not only on the previous two tags, but also on the sentence type, i.e., that different sentence types tend to have different sequences of tags.

Figure 1: Graphical model representation of the BHMM-T, which includes sentence type as an observed variable on tag transitions (but not emissions).

In contrast, we can add $s_i$ as a conditioning variable for $w_i$ by replacing line 2 from the BHMM with

$w_i \mid s_i=s, t_i=t, \omega^{(s,t)} \sim \mathrm{Mult}(\omega^{(s,t)})$  (8)

This model, the BHMM-E, assumes that different sentence types tend to have different words emitted from the same tag.

The predictive distributions for these models are given in Equations 9 (BHMM-T) and 10 (BHMM-E):

$P(t_i \mid \mathbf{t}_{-i}, s_i, \alpha) = \dfrac{C(t_{i-2},t_{i-1},t_i,s_i) + \alpha}{C(t_{i-2},t_{i-1},s_i) + T\alpha}$  (9)

$P(w_i \mid t_i, s_i, \beta) = \dfrac{C(t_i,w_i,s_i) + \beta}{C(t_i,s_i) + W_{t_i}\beta}$  (10)

Of course, we can also create a third new model, the BHMM-B, in which sentence type is used to condition both transition and emission probabilities. This model is equivalent to training a separate BHMM on each type of sentence (with shared hyperparameters).

Note that introducing the extra conditioning variable in these models has the consequence of splitting the counts for transitions, emissions, or both. The split distributions will therefore be estimated using less data, which could actually degrade performance if sentence
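To make the count-splitting effect concrete, here is an illustrative sketch, again not the authors' implementation and with an invented class name (SentenceTypeCounts), of the predictive distributions in Equations 9 and 10: the observed sentence type $s$ simply becomes part of the keys used to index the count tables.

```python
from collections import defaultdict

class SentenceTypeCounts:
    """Count tables with sentence type s as an extra conditioning variable
    (illustrative sketch: BHMM-T splits the transition counts, BHMM-E splits
    the emission counts, and BHMM-B would split both)."""

    def __init__(self, num_tags, alpha, beta):
        self.T = num_tags
        self.alpha = alpha
        self.beta = beta
        self.trans = defaultdict(int)        # C(t_{i-2}, t_{i-1}, t_i, s_i)
        self.trans_ctx = defaultdict(int)    # C(t_{i-2}, t_{i-1}, s_i)
        self.emit = defaultdict(int)         # C(t_i, w_i, s_i)
        self.emit_ctx = defaultdict(int)     # C(t_i, s_i)
        self.word_types = defaultdict(set)   # word types emitted by tag t (as in Eq. 6)

    def p_transition_T(self, t_prev2, t_prev1, t, s):
        # Equation 9 (BHMM-T): transition counts are kept separately per sentence type.
        return ((self.trans[(t_prev2, t_prev1, t, s)] + self.alpha) /
                (self.trans_ctx[(t_prev2, t_prev1, s)] + self.T * self.alpha))

    def p_emission_E(self, t, w, s):
        # Equation 10 (BHMM-E): emission counts are kept separately per sentence type.
        W_t = len(self.word_types[t]) or 1   # avoid 0/0 before any counts exist
        return ((self.emit[(t, w, s)] + self.beta) /
                (self.emit_ctx[(t, s)] + W_t * self.beta))
```

Because each sentence type indexes its own table, every observation now updates only one of several smaller tables; this is the count-splitting discussed above, and it is why conditioning on sentence type can hurt when the data are sparse.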