Phrase-Based Compressive Cross-Language Summarization


Jin-ge Yao, Xiaojun Wan, Jianguo Xiao
Institute of Computer Science and Technology, Peking University, Beijing 100871, China
Key Laboratory of Computational Linguistics (Peking University), MOE, China
{yaojinge, wanxiaojun, xiaojianguo}@pku.edu.cn

Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 118-127, Lisbon, Portugal, 17-21 September 2015. (c) 2015 Association for Computational Linguistics.

Abstract

The task of cross-language document summarization is to create a summary in a target language from documents in a different source language. Previous methods only involve direct extraction of automatically translated sentences from the original documents. Inspired by phrase-based machine translation, we propose a phrase-based model to simultaneously perform sentence scoring, extraction and compression. We design a greedy algorithm to approximately optimize the score function. Experimental results show that our methods outperform the state-of-the-art extractive systems while maintaining similar grammatical quality.

1 Introduction

The task of cross-language summarization is to produce a summary in a target language from documents written in a different source language. This task is particularly useful for readers who want to quickly get the main idea of documents written in a source language they are not familiar with. Following Wan (2011), we focus on English-to-Chinese summarization in this work.

The simplest and most straightforward way to perform cross-language summarization is to pipeline general summarization and machine translation. Such systems either translate all the documents before running generic summarization algorithms on the translated documents, or summarize from the original documents and then only translate the produced summary into the target language. Wan (2011) shows that such pipelining approaches are inferior to methods that utilize information from both sides. In that work, the author proposes graph-based models and achieves a fair amount of improvement. However, to the best of our knowledge, no previous work on this task tries to focus on summarization beyond pure sentence extraction.

On the other hand, cross-language summarization can be seen as a special kind of machine translation: translating the original documents into a brief summary in a different language. Inspired by phrase-based machine translation models (Koehn et al., 2003), we propose a phrase-based scoring scheme for cross-language summarization in this work.

Since our framework is based on phrases, we are not limited to producing extractive summaries. We can use the scoring scheme to perform joint sentence selection and compression. Unlike typical sentence compression methods, our proposed algorithm does not require additional syntactic preprocessing such as part-of-speech tagging or syntactic parsing; we only utilize information from translated texts with phrase alignments. The scoring function consists of a submodular term over compressed sentences and a bounded distortion penalty term. We design a greedy procedure to efficiently obtain approximate solutions.

For experimental evaluation, we use the DUC 2001 dataset with manually translated reference Chinese summaries. Results based on the ROUGE metrics show the effectiveness of our proposed methods. We also conduct manual evaluation, and the results suggest that the linguistic quality of the produced summaries is not decreased by much compared with extractive counterparts. In some cases, grammatical smoothness can even be improved by compression.

The contributions of this paper include:

• Utilizing the phrase alignment information, we design a scoring scheme for the cross-language document summarization task.

• We design an efficient greedy algorithm to generate summaries. The greedy algorithm is partially submodular and has a provable constant approximation factor to the optimal solution up to a small constant.

• We achieve state-of-the-art results using the extractive counterpart of our compressive summarization framework. Performance in terms of ROUGE metrics can be significantly improved when simultaneously performing extraction and compression.

2 Background

Document summarization can be treated as a special kind of translation process: translating from a set of related source documents to a short target summary. This analogy also holds for cross-language document summarization, with the only difference being that the languages of the source documents and the target summary are different. Our design of the sentence scoring function for cross-language document summarization is inspired by phrase-based machine translation models. Here we briefly describe the general idea of phrase-based translation; one may refer to Koehn (2009) for a more detailed description.

2.1 Phrase-based Machine Translation

Phrase-based machine translation models currently give state-of-the-art translations for many language pairs and dominate modern statistical machine translation. Classical word-based IBM models cannot capture local contextual information and local reordering very well. Phrase-based translation models instead operate on lexical entries with more than one word on the source and target sides. The allowance of multi-word expressions is believed to be the main reason for the improvements that phrase-based models give. Note that these multi-word expressions, typically called phrases in the machine translation literature, are essentially continuous n-grams and need not be linguistically integral, meaningful constituents.

Define y as a phrase-based derivation, or more precisely a finite sequence of phrases p_1, p_2, ..., p_L. For any derivation y we use e(y) to refer to the target-side translation text defined by y. This translation is derived by concatenating the strings e(p_1), e(p_2), ..., e(p_L). The scoring scheme for a phrase-based derivation y from the source sentence to the target sentence e(y) is:

    f(y) = sum_{k=1}^{L} g(p_k) + LM(e(y)) + eta * sum_{k=1}^{L-1} |start(p_{k+1}) - 1 - end(p_k)|

where LM(.) is the target-side language model score, g(.) is the score function of phrases, and eta < 0 is the distortion parameter penalizing the distance between neighboring phrases in the derivation. Note that the phrases addressed here are typically continuous n-grams and need not be grammatical linguistic phrasal units. Later we will directly use phrases provided by modern machine translation systems.

Searching for the best translation under this score definition is difficult in general, so approximate decoding algorithms such as beam search should be applied. Meanwhile, several constraints should be satisfied during the decoding process. The most important one is to set a constant limit on the distortion term, |start(p_{k+1}) - 1 - end(p_k)| <= delta, to inhibit derivations with distant phrase translations.

3 Phrase-based Cross-Language Summarization

Inspired by the general idea of phrase-based machine translation, we describe our proposed phrase-based model for cross-language summarization in this section.

3.1 Phrase-based Sentence Scoring

In the context of cross-language summarization, we assume that we also have phrases in both the source and target languages, along with phrase alignments between the two sides. For summarization purposes, we may wish to select sentences containing more important phrases. It is then plausible to measure the scores of these aligned phrases via importance weighting. Inspired by phrase-based translation models, we can assign phrase-based scores to sentences from the translated documents. We define our scoring function for each sentence s as:

    F(s) = sum_{p in s} d_0 * g(p) + bg(s) + eta * dist(y(s))

Here, in the first term, g(.) is the score of phrase p, which can simply be set to its document frequency. The phrase score is penalized with a constant damping factor d_0 to decay the scores of repeated phrases. The second term bg(s) is the bigram score of sentence s; it is used here to simulate the effect of language models in phrase-based translation models. Denoting by y(s) the phrase-based derivation (as described in the previous section) of sentence s, the last distortion term dist(y(s)) = sum_{k=1}^{L-1} |start(p_{k+1}) - 1 - end(p_k)| is exactly the distortion penalty term of phrase-based translation models and can be seen as a reflection of the complexity of the translation. All of the above terms can be derived from bilingual sentence pairs with phrase alignments. Meanwhile, we may also wish to exclude unimportant phrases and badly translated phrases: our definition can also be used to guide sentence compression by trying to remove redundant phrases.

Based on the definition over sentences, we define our summary scoring measure over a summary S:

    F(S) = sum_p sum_{i=1}^{count(p,S)} d_0^{i-1} * g(p) + sum_{s in S} bg(s) + eta * sum_{s in S} dist(y(s))

Algorithm 1: A greedy algorithm for phrase-based summarization
 1: S <- {}
 2: i <- 1
 3: single_best <- argmax_{s in U, C({s}) <= B} F({s})
 4: while U != {} do
 5:   s_i <- argmax_{s in U} [F(S_{i-1} ∪ {s}) - F(S_{i-1})] / C({s})^r
 6:   if C(S_{i-1} ∪ {s_i}) <= B then
 7:     S_i <- S_{i-1} ∪ {s_i}
 8:     i <- i + 1
 9:   end if
10:   U <- U \ {s_i}
11: end while
12: return S* = argmax_{S in {single_best, S_i}} F(S)

The space U denotes the set of all possible compressed sentences. In each iteration, the algorithm tries to find the compressed sentence with the maximum gain-cost ratio (Line 5, where we follow previous work in setting r = 1) and merges it into the summary set at the current iteration (denoted as S_i). How to find the compression with the maximum gain-cost ratio will be discussed in the next section. Note that the algorithm is also naturally applicable to extractive summarization.
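To make the scoring and selection procedure concrete, the following is a minimal Python sketch of the summary score with damped credit for repeated phrases, together with the greedy budgeted selection of Algorithm 1. All names (`summary_score`, `greedy_summarize`, the toy phrase weights) are illustrative: sentences are reduced to tuples of phrase identifiers, and the bigram and distortion terms bg(s) and eta*dist(y(s)) are collapsed into a single per-sentence `bonus` stub purely for brevity. This is a sketch of the selection logic, not the authors' implementation.

```python
from collections import Counter

def summary_score(sentences, g, d0, bonus):
    """F(S): damped phrase scores plus per-sentence terms.

    g maps a phrase to its weight (e.g. document frequency); the i-th
    occurrence of a phrase contributes d0^(i-1) * g(p). The bonus dict
    stands in for bg(s) + eta * dist(y(s)) from the paper.
    """
    counts = Counter()
    score = 0.0
    for s in sentences:
        for p in s:
            score += (d0 ** counts[p]) * g.get(p, 0.0)
            counts[p] += 1
        score += bonus.get(s, 0.0)
    return score

def greedy_summarize(candidates, g, cost, budget, d0=0.5, bonus=None, r=1.0):
    """Greedy selection by gain-cost ratio with a single-best fallback."""
    bonus = bonus or {}
    F = lambda S: summary_score(S, g, d0, bonus)
    # Line 3: best single sentence that fits the budget on its own.
    within = [s for s in candidates if cost[s] <= budget]
    single_best = max(within, key=lambda s: F([s]), default=None)
    U, S, total_cost = set(candidates), [], 0.0
    while U:
        # Line 5: maximize marginal gain divided by cost^r.
        s_i = max(U, key=lambda s: (F(S + [s]) - F(S)) / (cost[s] ** r))
        if total_cost + cost[s_i] <= budget:  # Line 6: budget check
            S.append(s_i)
            total_cost += cost[s_i]
        U.remove(s_i)  # Line 10: discard s_i either way
    # Line 12: return the better of the greedy set and the single best.
    if single_best is not None and F([single_best]) > F(S):
        return [single_best]
    return S
```

With r = 1 and the final comparison against the best single sentence, this mirrors the standard budgeted greedy scheme for (partially) submodular objectives that the paper's approximation guarantee builds on.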