Head-Modifier Relation Based Non-Lexical Reordering Model for Phrase-Based Translation


Shui Liu1, Sheng Li1, Tiejun Zhao1, Min Zhang2, Pengyuan Liu3
1School of Computer Science and Technology, Harbin Institute of Technology {liushui,lisheng,tjzhao}@mtlab.hit.edu.cn
2Institute for Infocomm Research [email protected]
3Institute of Computational Linguistics, Peking University [email protected]

Coling 2010: Poster Volume, pages 748–756, Beijing, August 2010

Abstract

Phrase-based statistical MT (SMT) is a milestone in MT. However, the translation model in phrase-based SMT is structure-free, which greatly limits its reordering capacity. To address this issue, we propose a non-lexical, head-modifier based reordering model at the word level that utilizes the constituent-based parse tree of the source side. Our experimental results on the NIST Chinese-English benchmarking data show that, with a very small model, our method significantly outperforms the baseline by 1.48% BLEU score.

1 Introduction

Syntax has been successfully applied to SMT to improve translation performance. Research in applying syntax information to SMT has been carried out in two directions. On the one hand, syntactic knowledge is employed by directly integrating syntactic structure into the translation rules, i.e. syntactic translation rules. From this perspective, the word order of the target translation is modeled explicitly by the syntactic structure. Chiang (2005), Wu (1997) and Xiong (2006) learn syntax rules using formal grammars, while more research is conducted to learn syntax rules with the help of linguistic analysis (Yamada and Knight, 2001; Graehl and Knight, 2004). However, these models face some challenges. Firstly, linguistic analysis is far from perfect. Most of these methods require an off-the-shelf parser to generate the syntactic structure, which makes the translation results sensitive to parsing errors to some extent. To tackle this problem, n-best parse trees and parsing forests (Mi and Huang, 2008; Zhang, 2009) have been proposed to relieve the error propagation brought by linguistic analysis. Secondly, some phrases that violate the boundaries of linguistic analysis are also useful in these models (DeNeefe et al., 2007; Cowan et al., 2006). Thus, a tradeoff needs to be found between linguistic sense and formal sense.

On the other hand, instead of using syntactic translation rules, some previous work attempts to learn the syntactic knowledge separately and then integrate that knowledge into the original framework as a constraint. Marton and Resnik (2008) utilize linguistic analysis derived from the parse tree to constrain the translation in a soft way. By doing so, this approach addresses the challenges brought by linguistic analysis through the log-linear model.

Starting from the state-of-the-art phrase-based model Moses (Koehn et al., 2007), we propose a head-modifier relation based reordering model and use it as a soft syntax constraint in the phrase-based translation framework. Compared with most previous soft-constraint models, we study how to utilize the constituent-based parse tree structure by mapping the parse tree to sets of head-modifier relations for phrase reordering. In this way, we build a word-level reordering model instead of a phrasal/constituent-level one. In our model, with the help of the word alignment and the head-modifier dependency relationships on the source side, the reordering type of each target word aligned to the source side is identified as one of several pre-defined reordering types. With these reordering types, the reordering of phrases in translation is estimated at the word level.
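To make the soft-constraint integration described above concrete, the minimal sketch below (not from the paper) treats the proposed head-modifier reordering score as one more feature in a standard log-linear phrase-based scorer. All feature names, weights and the wiring into the decoder are illustrative assumptions; the paper only states that the model enters the system as an additional soft constraint.

```python
import math

def loglinear_score(features, weights):
    """Generic log-linear combination used by phrase-based decoders:
    score(e, f) = sum_k w_k * log h_k(e, f)."""
    return sum(weights[name] * math.log(value)
               for name, value in features.items())

# Hypothetical feature values for one translation hypothesis.
features = {
    "phrase_translation": 0.02,   # p(e|f) from the phrase table
    "language_model": 0.001,      # p_LM(e)
    "word_penalty": 0.9,          # omega^length(e)
    "hm_reordering": 0.05,        # proposed head-modifier reordering model P(O|S,T,D,A)
}
weights = {
    "phrase_translation": 1.0,
    "language_model": 0.8,
    "word_penalty": -0.3,
    "hm_reordering": 0.5,         # tuned like any other feature weight (e.g. by MERT)
}

print(loglinear_score(features, weights))
```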
Fig. 1. A Constituent-based Parse Tree

2 Baseline

Moses, a state-of-the-art phrase-based SMT system, is used as our baseline system. In Moses, given the source language f and target language e, the decoder is to find:

e_best = argmax_e p(e|f) · p_LM(e) · ω^length(e)    (1)

where p(e|f) is computed using the phrase translation model, the distortion model and the lexical reordering model, p_LM(e) is computed using the language model, and ω^length(e) is the word penalty model. Among the above models, there are three reordering-related components: the language model, the lexical reordering model and the distortion model. The language model can reorder the local target words within a fixed window in an implied way. The lexical reordering model and the distortion model tackle the reordering problem between adjacent phrases on the lexical level and the alignment level, respectively. Besides these reordering models, the decoder imposes distortion pruning constraints to encourage the decoder to translate the leftmost uncovered source word first and to limit reordering to a certain range.

3 Model

In this paper, we utilize the constituent parse tree of the source language to enhance the reordering capacity of the translation model. Instead of directly employing parse tree fragments (Bod, 1992; Johnson, 1998) in reordering rules (Huang and Knight, 2006; Liu, 2006; Zhang and Jiang, 2008), we map the trees to sets of head-modifier dependency relations (Collins, 1996), which can be obtained from the constituent-based parse tree with the help of head rules (Bikel, 2004).

3.1 Head-modifier Relation

According to Klein and Manning (2003) and Collins (1999), there are two shortcomings in an n-ary Treebank grammar. Firstly, the grammar is too coarse for parsing: the same rules in different contexts always have different distributions. Secondly, the rules learned from the training corpus cannot cover the rules in the testing set. Currently, the state-of-the-art parsing algorithms (Klein and Manning, 2003; Collins, 1999) decompose the n-ary Treebank grammar into sets of head-modifier relationships, so that the parsing rules are constructed in the form of finer-grained binary head-modifier dependency relationships. Fig. 2 presents an example of a head-modifier based dependency tree mapped from the constituent parse tree in Fig. 1.

Fig. 2. Head-modifier Relationships with Aligned Translation
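The mapping from a constituent tree to head-modifier dependencies can be sketched as follows. This is a toy illustration under assumed, simplified head rules: the paper relies on the head rules of Bikel (2004), whose actual table is not reproduced in this excerpt, and the Node/extract_dependencies names are invented for the example.

```python
# Toy stand-in for real head-percolation rules (Bikel, 2004): for each constituent
# label, the preferred labels of its head child, tried in order.
HEAD_RULES = {"IP": ["VP", "VV"], "VP": ["VV", "VP"], "NP": ["NN", "NP"], "CP": ["IP", "DEC"]}

class Node:
    def __init__(self, label, children=None, index=None):
        self.label = label          # constituent label or POS tag
        self.children = children or []
        self.index = index          # word position for leaves (1-based)

def head_child(node):
    """Pick the head child of a constituent using the (toy) head rules."""
    for wanted in HEAD_RULES.get(node.label, []):
        for child in node.children:
            if child.label == wanted:
                return child
    return node.children[-1]        # fallback: rightmost child

def extract_dependencies(node, deps):
    """Collect (modifier_index, head_index, parent_label) triples; return the
    index of the lexical head word of this subtree."""
    if not node.children:
        return node.index
    head = head_child(node)
    child_heads = [(child, extract_dependencies(child, deps)) for child in node.children]
    head_index = next(idx for child, idx in child_heads if child is head)
    for child, idx in child_heads:
        if child is not head:
            deps.append((idx, head_index, node.label))   # modifier -> head under this constituent
    return head_index

# Example: a VP "VV (NP NN)" -- the noun modifies the verb under VP, cf. r(5) in Table 1.
tree = Node("VP", [Node("VV", index=2), Node("NP", [Node("NN", index=3)])])
deps = []
extract_dependencies(tree, deps)
print(deps)   # [(3, 2, 'VP')] : word 3 (NN) is a modifier of word 2 (VV) under VP
```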
Moreover, there are several reasons why we adopt the head-modifier structured tree as the main frame of our reordering model. Firstly, the dependency relationships can reflect some underlying binary long-distance dependency relations on the source side; thus, a binary dependency structure suffers less from the long-distance reordering constraint. Secondly, with head-modifier relations we can utilize not only the context of the dependency relation in the reordering model, but also some well-known and proven helpful context (Johnson, 1998) of the constituent-based parse tree. Finally, the head-modifier relationship is a mature and widely adopted device in full parsing.

Relationship: in this paper, we not only use the constituent label as Collins (1996) does, but also use some context well known in parsing to define the head-modifier relationship r(.), including the POS of the modifier m, the POS of the head h, the dependency direction d, the parent label of the dependency l, the grandparent label of the dependency p, and the POS of the adjacent sibling of the modifier s. Thus, the head-modifier relationship can be represented as a 6-tuple <m, h, d, l, p, s>.

r(.)   relationship
r(1)   <VV, -, -, -, -, ->
r(2)   <NN, NN, right, NP, IP, ->
r(3)   <NN, VV, right, IP, CP, ->
r(4)   <VV, DEC, right, CP, NP, ->
r(5)   <NN, VV, left, VP, CP, ->
r(6)   <DEC, NP, right, NP, VP, ->
r(7)   <NN, VV, left, VP, TOP, ->
Table 1. Relations Extracted from Fig. 2

In Table 1, there are 7 relationships extracted from the source head-modifier based dependency tree shown in Fig. 2. Please note that, in this paper, each source word has a corresponding relation.

3.2 Head-modifier Relation Based Reordering Model

Before elaborating on the model, we define some notation for ease of understanding. S = <f1, f2, ..., fn> is the source sentence; T = <e1, e2, ..., em> is the target sentence; AS = {aS(i) | 1 ≤ aS(i) ≤ n}, where aS(i) indicates that the ith word in the source sentence is aligned to the aS(i)th word in the target sentence; AT = {aT(i) | 1 ≤ aT(i) ≤ n}, where aT(i) indicates that the ith word in the target sentence is aligned to the aT(i)th word in the source sentence; D = {(d(i), r(i)) | 0 ≤ d(i) ≤ n} is the head-modifier relation set of the words in S, where d(i) indicates that the ith word in the source sentence is the modifier of the d(i)th word in the source sentence under relationship r(i); O = <o1, o2, ..., om> is the sequence of reordering types of the words in the target language. The reordering model probability is P(O | S, T, D, A).

Reordering type: there are 4 reordering types for target words with a linked word on the source side in our model: R = {rm1, rm2, rm3, rm4}. The reordering type of the target word aS(i) is defined as follows:

rm1: if the position of the ith word's head is less than i (d(i) < i) in the source language, while the position of the word aligned to i is less than aS(d(i)) (aS(i) < aS(d(i))) in the target language;

rm2: if the position of the ith word's head is less than i (d(i) < i) in the source language, while the position of the word aligned to i is larger than aS(d(i)) (aS(i) > aS(d(i))) in the target language.

... we classify the reordering type into rm2; otherwise, we classify the reordering type into rm4.

Probability estimation: we adopt maximum likelihood (ML) based estimation in this paper. In ML estimation, in order to avoid the data sparseness brought by lexicalization, we discard the lexical information in the source and target language:
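A hedged sketch of the word-level classification and the non-lexical ML estimation might look as follows. The rm1/rm2 branches implement the definitions above; the rm3/rm4 branch and the relative-frequency form of the estimator are assumptions, since the excerpt breaks off before those definitions and before the estimation formula.

```python
from collections import defaultdict

def reordering_type(i, d, a_s):
    """Classify the reordering type of the target word aligned to source word i.
    d[i]   : position of source word i's head (head-modifier dependency)
    a_s[i] : target position that source word i is aligned to
    rm1/rm2 follow the definitions in Section 3.2; rm3/rm4 are assumed here to be
    the mirror cases with the head to the right of the modifier (d[i] > i)."""
    if d[i] < i:
        return "rm1" if a_s[i] < a_s[d[i]] else "rm2"
    return "rm3" if a_s[i] < a_s[d[i]] else "rm4"   # assumption, not stated in the excerpt

# Relative-frequency (ML) estimate of P(o | r) over non-lexical relation tuples r.
counts = defaultdict(lambda: defaultdict(int))

def observe(r, o):
    counts[r][o] += 1

def prob(o, r):
    total = sum(counts[r].values())
    return counts[r][o] / total if total else 0.0

# Example: suppose relation r5 = <NN, VV, left, VP, CP, -> is observed three times.
r5 = ("NN", "VV", "left", "VP", "CP", "-")
for o in ["rm1", "rm1", "rm2"]:
    observe(r5, o)
print(prob("rm1", r5))   # 2/3
```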