Judging Grammaticality with Tree Substitution Grammar Derivations

Matt Post
Human Language Technology Center of Excellence
Johns Hopkins University
Baltimore, MD 21211

Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Short Papers, pages 217–222, Portland, Oregon, June 19–24, 2011. © 2011 Association for Computational Linguistics.

Abstract

In this paper, we show that local features computed from the derivations of tree substitution grammars — such as the identity of particular fragments, and a count of large and small fragments — are useful in binary grammatical classification tasks. Such features outperform n-gram features and various model scores by a wide margin. Although they fall short of the performance of the hand-crafted feature set of Charniak and Johnson (2005) developed for parse tree reranking, they do so with an order of magnitude fewer features. Furthermore, since the TSGs employed are learned in a Bayesian setting, the use of their derivations can be viewed as the automatic discovery of tree patterns useful for classification. On the BLLIP dataset, we achieve an accuracy of 89.9% in discriminating between grammatical text and samples from an n-gram language model.

1 Introduction

The task of a language model is to provide a measure of the grammaticality of a sentence. Language models are useful in a variety of settings, for both human and machine output; for example, in the automatic grading of essays, or in guiding search in a machine translation system. Language modeling has proved to be quite difficult. The simplest models, n-grams, are self-evidently poor models of language, unable to (easily) capture or enforce long-distance linguistic phenomena. However, they are easy to train, are long-studied and well understood, and can be efficiently incorporated into search procedures, such as for machine translation. As a result, the output of such text generation systems is often very poor grammatically, even if it is understandable.

Since grammaticality judgments are a matter of the syntax of a language, the obvious approach for modeling grammaticality is to start with the extensive work produced over the past two decades in the field of parsing. This paper demonstrates the utility of local features derived from the fragments of tree substitution grammar derivations. Following Cherry and Quirk (2008), we conduct experiments in a classification setting, where the task is to distinguish between real text and “pseudo-negative” text obtained by sampling from a trigram language model (Okanohara and Tsujii, 2007). Our primary points of comparison are the latent SVM training of Cherry and Quirk (2008), mentioned above, and the extensive set of local and nonlocal feature templates developed by Charniak and Johnson (2005) for parse tree reranking. In contrast to this latter set of features, the feature sets from TSG derivations require no engineering; instead, they are obtained directly from the identity of the fragments used in the derivation, plus simple statistics computed over them. Since these fragments are in turn learned automatically from a Treebank with a Bayesian model, their usefulness here suggests a greater potential for adapting to other languages and datasets.

2 Tree substitution grammars

Tree substitution grammars (Joshi and Schabes, 1997) generalize context-free grammars by allowing nonterminals to rewrite as tree fragments of arbitrary size, instead of as only a sequence of one or more children. Evaluated by parsing accuracy, these grammars are well below state of the art. However, they are appealing in a number of ways. Larger fragments better match linguists’ intuitions about what the basic units of grammar should be, capturing, for example, the predicate-argument structure of a verb (Figure 1). The grammars are context-free and thus retain cubic-time inference procedures, yet they reduce the independence assumptions in the model’s generative story by virtue of using fewer fragments (compared to a standard CFG) to generate a tree.

Figure 1: A Tree Substitution Grammar fragment: (S NP (VP (VBD said) NP SBAR)), with the two NPs and the SBAR left as open substitution sites.
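To make the substitution operation concrete, here is a minimal Python sketch (illustrative only, not the paper's code: the nested-tuple tree encoding, the "@" marker for open substitution sites, and the function names are all assumptions of this sketch). It encodes the Figure 1 fragment and substitutes an NP fragment at its first open site.

```python
# Trees are nested tuples: (label, child, ...); leaves are strings.
# A trailing "@" marks an open substitution site (a frontier nonterminal).
FRONTIER = "@"

# The Figure 1 fragment: S -> NP (VP (VBD said) NP SBAR).
fig1 = ("S", "NP@", ("VP", ("VBD", "said"), "NP@", "SBAR@"))

def frontier(node):
    """Return the fragment's frontier: open sites plus lexical leaves."""
    if isinstance(node, str):
        return [node]
    _, *children = node
    return [leaf for child in children for leaf in frontier(child)]

def substitute(node, subtree):
    """Rewrite the leftmost open site whose label matches subtree's root.

    Returns (new_tree, done); the flag stops the recursion after one rewrite.
    """
    if isinstance(node, str):
        if node == subtree[0] + FRONTIER:
            return subtree, True
        return node, False
    label, *children = node
    rewritten, done = [], False
    for child in children:
        if not done:
            child, done = substitute(child, subtree)
        rewritten.append(child)
    return (label, *rewritten), done

print(frontier(fig1))  # ['NP@', 'said', 'NP@', 'SBAR@']
tree, _ = substitute(fig1, ("NP", ("NNP", "Smith")))
print(tree)
# ('S', ('NP', ('NNP', 'Smith')), ('VP', ('VBD', 'said'), 'NP@', 'SBAR@'))
```

A derivation of a complete parse tree is a sequence of such substitutions; the classifier features described in Section 4 are computed from the set of fragments a derivation uses.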
3 A spectrum of grammaticality

The use of large fragments in TSG derivations provides reason to believe that such grammars might do a better job at language modeling tasks. Consider an extreme case, in which a grammar consists entirely of complete parse trees. In this case, ungrammaticality is synonymous with parser failure. Such a classifier would have perfect precision but very low recall, since it could not generalize at all. On the other extreme, a context-free grammar containing only depth-one rules can basically produce an analysis over any sequence of words. However, such grammars are notoriously leaky, and the existence of an analysis does not correlate with grammaticality. Context-free grammars are too poor models of language for the linguistic definition of grammaticality (a sequence of words in the language of the grammar) to apply.

TSGs permit us to posit a spectrum of grammaticality in between these two extremes. If we have a grammar comprising small and large fragments, we might consider that larger fragments should be less likely to fit into ungrammatical situations, whereas small fragments could be employed almost anywhere as a sort of ungrammatical glue. Thus, on average, grammatical sentences will license derivations with larger fragments, whereas ungrammatical sentences will be forced to resort to small fragments. This is the central idea explored in this paper.

This raises the question of what exactly the larger fragments are. A fundamental problem with TSGs is that they are hard to learn, since there is no annotated corpus of TSG derivations and the number of possible derivations is exponential in the size of a tree. The most popular TSG approach has been Data-Oriented Parsing (Scha, 1990; Bod, 1993), which takes all fragments in the training data. The large size of such grammars (exponential in the size of the training data) forces either implicit representations (Goodman, 1996; Bansal and Klein, 2010) — which do not permit arbitrary probability distributions over the grammar fragments — or explicit approximations to all fragments (Bod, 2001). A number of researchers have presented ways to address the learning problems for explicitly represented TSGs (Zollmann and Sima’an, 2005; Zuidema, 2007; Cohn et al., 2009; Post and Gildea, 2009a). Of these approaches, work in Bayesian learning of TSGs produces intuitive grammars in a principled way, and has demonstrated potential in language modeling tasks (Post and Gildea, 2009b; Post, 2010). Our experiments make use of Bayesian-learned TSGs.

4 Experiments

We experiment with a binary classification task, defined as follows: given a sequence of words, determine whether it is grammatical or not. We use two datasets: the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993), and the BLLIP ’99 dataset (LDC Catalog No. LDC2000T43), a collection of automatically-parsed sentences from three years of articles from the Wall Street Journal.

For both datasets, positive examples are obtained from the leaves of the parse trees, retaining their tokenization. Negative examples were produced from a trigram language model by randomly generating sentences of length no more than 100 so as to match the size of the positive data. The language model was built with SRILM (Stolcke, 2002) using interpolated Kneser-Ney smoothing. The average sentence lengths for the positive and negative data were 23.9 and 24.7, respectively, for the Treebank data, and 25.6 and 26.2 for the BLLIP data.
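The pseudo-negative sampling step is simple enough to sketch. The fragment below illustrates the idea only and is not the paper's pipeline: it draws from raw, unsmoothed trigram counts, whereas the paper sampled from an SRILM model with interpolated Kneser-Ney smoothing, and every name in it is invented for the sketch.

```python
import random
from collections import defaultdict

BOS, EOS = "<s>", "</s>"

def train_trigrams(corpus):
    """Collect trigram counts from tokenized sentences (no smoothing)."""
    counts = defaultdict(lambda: defaultdict(int))
    for sent in corpus:
        toks = [BOS, BOS] + sent + [EOS]
        for a, b, c in zip(toks, toks[1:], toks[2:]):
            counts[(a, b)][c] += 1
    return counts

def sample_sentence(counts, max_len=100):
    """Generate one pseudo-negative sentence, capped at max_len words."""
    hist, out = (BOS, BOS), []
    while len(out) < max_len:
        options = counts[hist]
        if not options:  # unseen history; cannot happen with BOS padding
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        if word == EOS:
            break
        out.append(word)
        hist = (hist[1], word)
    return out

corpus = [["the", "markets", "fell", "."],
          ["the", "traders", "said", "markets", "fell", "."]]
print(" ".join(sample_sentence(train_trigrams(corpus))))
```

Samples drawn this way are locally fluent but often globally ungrammatical, which is exactly what makes them useful negative examples for a classifier that must look beyond n-gram statistics.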
Each dataset is divided into training, development, and test sets. For the Treebank, we trained the n-gram language model on sections 2–21. The classifier then used sections 0, 24, and 22 for training, development, and testing, respectively. For the BLLIP dataset, we followed Cherry and Quirk (2008): we randomly selected 450K sentences to train the n-gram language model, and 50K, 3K, and 3K sentences for classifier training, development, and testing, respectively. All sentences have 100 or fewer words. Table 1 contains statistics of the datasets used in our experiments.

dataset        training     devel.       test
Treebank          3,836      2,690      3,398
                 91,954     65,474     79,998
BLLIP           100,000      6,000      6,000
              2,596,508    155,247    156,353

Table 1: The number of sentences (first line) and words (second line) used for training, development, and testing of the classifier. Each set of sentences is evenly split between positive and negative examples.

The base models include the n-gram language models, a Treebank PCFG, the Bayesian-learned TSG, and the Charniak parser (Charniak, 2000), run in language modeling mode. The parsing models for both datasets were built from sections 2–21 of the WSJ portion of the Treebank. These models were used to score or parse the training, development, and test data for the classifier. From the output, we extract the following feature sets used in the classifier:

• Sentence length (l).

• Model scores (S). Model log probabilities.

• Rule features (R). These are counter features based on the atomic unit of the analysis, i.e., individual n-grams for the n-gram models, PCFG rules, and TSG fragments.

• Reranking features (C&J). From the Charniak parser output we extract the complete set of reranking features of Charniak and Johnson (2005), and just the local ones (C&J local).

• Frontier size (F_n, F_n^l). Instances of this feature class count the number of TSG fragments having frontier size n, for 1 ≤ n ≤ 9. Instances of F_n^l count only lexical items, for 0 ≤ n ≤ 5.

To build the classifier, we used liblinear (Fan et al., 2008). A bias of 1 was added to each feature vector. We varied a cost or regularization parameter between 1e-5 and 100 in orders of magnitude; at each step, we built a model and evaluated it on the development set.

4.2 Results

Table 2 contains the classification results.
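The frontier-size counters F_n and F_n^l are straightforward to compute once a derivation is represented as a list of fragments. Here is a minimal Python sketch (illustrative only: it reuses the nested-tuple encoding and the "@" site marker from the earlier sketch, and the feature-name strings are invented).

```python
from collections import Counter

def leaves(node):
    """All frontier items of a fragment: open '@' sites and lexical leaves."""
    if isinstance(node, str):
        return [node]
    _, *children = node
    return [leaf for child in children for leaf in leaves(child)]

def frontier_features(derivation):
    """Count fragments by frontier size (F_n) and lexical-leaf count (Fl_n)."""
    feats = Counter()
    for fragment in derivation:
        front = leaves(fragment)
        lexical = [item for item in front if not item.endswith("@")]
        if 1 <= len(front) <= 9:
            feats[f"F_{len(front)}"] += 1
        if len(lexical) <= 5:
            feats[f"Fl_{len(lexical)}"] += 1
    return feats

# A toy derivation: the Figure 1 fragment plus one small NP fragment.
derivation = [
    ("S", "NP@", ("VP", ("VBD", "said"), "NP@", "SBAR@")),
    ("NP", ("NNP", "Smith")),
]
print(frontier_features(derivation))
# Counter({'Fl_1': 2, 'F_4': 1, 'F_1': 1})
```

Feature vectors like these, concatenated with the rule and score features above, are what the liblinear classifier consumes.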