Steps for Creating a Specialized Corpus and Developing an Annotated Frequency-Based Vocabulary List

Marie-Claude Toriida

This article provides introductory, step-by-step explanations of how to make a specialized corpus and an annotated frequency-based vocabulary list. One of my objectives is to help teachers, instructors, program administrators, and graduate students with little experience in this field be able to do so using free resources. Instructions are first given on how to create a specialized corpus. The steps involved in developing an annotated frequency-based vocabulary list focusing on the specific word usage in that corpus will then be explained. The examples are drawn from a project developed in an English for Academic Purposes Nursing Foundations Program at a university in the Middle East. Finally, a brief description of how these vocabulary lists were used in the classroom is given. It is hoped that the explanations provided will serve to open the door to the field of corpus linguistics.

Cet article présente des explications, étape par étape, visant la création d'un corpus spécialisé et d'un lexique annoté et basé sur la fréquence. Un de mes objectifs consiste à aider les enseignants, les administrateurs de programme et les étudiants aux études supérieures avec peu d'expérience dans ce domaine à réussir ce projet en utilisant des ressources gratuites. D'abord, des directives expliquent la création d'un corpus spécialisé. Ensuite, sont présentées les étapes du développement d'un lexique visant le corpus, annoté et basé sur la fréquence. Les exemples sont tirés d'un projet développé dans une université du Moyen-Orient pour un cours d'anglais académique dans un programme de fondements de la pratique infirmière. En dernier lieu, je présente une courte description de l'emploi en classe de ces listes de vocabulaire. J'espère que ces explications ouvriront la porte au domaine de la linguistique de corpus.
keywords: corpus development, specialized corpus, nursing corpus, spaced repetition, AntConc

TESL CANADA JOURNAL/REVUE TESL DU CANADA, VOLUME 34, ISSUE 1, 2016, PP. 87–105. http://dx.doi.org/10.18806/tesl.v34i1.1255

A corpus has been defined as "a collection of sampled texts, written or spoken, in machine readable form which may be annotated with various forms of linguistic information" (McEnery, Xiao, & Tono, 2006, p. 6). One area of research in corpus linguistics has focused on the frequency of the words used in real-world contexts. Teachers have used such information for the purpose of increasing language learner success. For example, the seminal General Service List (GSL; West, 1953), a list of approximately 2,200 words, was long said to represent the most common headwords of English, as they comprise, or cover, approximately 75–80% of all written texts (Nation & Waring, 1997) and up to 95% of spoken English (Adolphs & Schmitt, 2003, 2004). Similarly, the Academic Word List (AWL; Coxhead, 2000) is a 570-word list of high-frequency word families, excluding GSL words, found in a variety of academic texts. It has been shown to cover approximately 10% of a variety of textbooks taken from different fields (Coxhead, 2011). Thus, the lexical coverage of the GSL and AWL combined is between 85% and 90% of academic texts (Neufeld & Billuroğlu, 2005).

More recent versions of these classic lists include the New General Service List (new-GSL; Brezina & Gablasova, 2015), the New General Service List (NGSL; Browne, Culligan, & Phillips, 2013b), the New Academic Word List (NAWL; Browne, Culligan, & Phillips, 2013a), and the Academic Vocabulary List (AVL; Gardner & Davies, 2014). Large corpora of English also exist, such as the recently updated Corpus of Contemporary American English (Davies, 2008–) and the British National Corpus (2007). These corpora are based on large amounts of authentic texts from a variety of fields.
Hyland and Tse (2007), however, noted that many words have different meanings and uses in different fields, hence the need to learn context-specific meanings and uses. They further stated as a criticism of the AWL, "As teachers, we have to recognize that students in different fields will require different ways of using language, so we cannot depend on a list of academic vocabulary" (p. 249). As a means to address this concern, specialized corpora specific to particular fields and contexts have been developed in recent years. For examples of academic nursing corpora, see Budgell, Miyazaki, O'Brien, Perkins, and Tanaka (2007), and Yang (2015).

Nursing Corpus Project: Context and Rationale

Our institution, located in the Middle East, offers two nursing degrees: a Bachelor of Nursing degree and a Master of Nursing degree. The English for Academic Purposes (EAP) Nursing Foundations Program is a one-year, three-tiered program. Its mandate is to best prepare students for their first year in the Bachelor of Nursing program. Our students come from a variety of educational and cultural backgrounds. Some students are just out of high school, while others have been practicing nurses for many years. We felt that a corpus-based approach to targeted vocabulary learning would best serve our diverse student population, and be an efficient way to address the individual linguistic gaps hindering their ability to comprehend authentic materials used in the nursing program (Shimoda, Toriida, & Kay, 2016).

One factor that greatly affects reading comprehension is vocabulary knowledge (Bin Baki & Kameli, 2013). Reading comprehension research has shown that the more vocabulary a reader knows, the better their reading comprehension will be. For example, Schmitt, Jiang, and Grabe (2011) found a linear relationship between the two. Previous researchers also looked at this relationship in terms of a vocabulary knowledge threshold for successful reading comprehension. Laufer (1989) claimed that knowledge of 95% of the words in a text was needed for minimal comprehension in an academic setting, defined as an achievement score of 55%. In a later study, Hu and Nation (2000) suggested that 98% lexical coverage of a text was needed for adequate comprehension when reading independently, with no assistance from a gloss or dictionary. One problem raised was how to define "adequate comprehension." Laufer and Ravenhorst-Kalovski (2010) later suggested that 95% vocabulary knowledge would yield adequate comprehension if adequate comprehension was defined as "reading with some guidance and help" (p. 25). They further supported Hu and Nation's (2000) finding that 98% lexical knowledge was needed for unassisted independent reading. These findings highlight the importance of vocabulary knowledge in reducing the reading burden. This is especially critical when dealing with second language learners who are expected to read nursing textbooks high in academic and technical vocabulary.

To best facilitate the transition from the EAP to the nursing program, a corpus was thus developed from an introductory nursing textbook intensively used in the first-year nursing courses at our institution. From this corpus of 152,642 tokens (total number of words), annotated vocabulary lists based on word frequency were developed for the first 2,500 words of the corpus (25 lists of 100 words), as they constituted close to 95% of the text. The lists included, for each word, the part(s) of speech, a context-specific definition, high-frequency collocation(s), and a simplified sample sentence taken from the corpus. An individual vocabulary acquisition program using these lists was later introduced at all levels of the EAP program. The teacher participants involved in this project had no prior experience developing a corpus.
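Coverage figures of the kind reported above (the first 2,500 words covering close to 95% of the text) can be computed mechanically once a frequency list exists. The sketch below is illustrative only: the function name and the toy word counts are invented, and in practice the (word, count) pairs would come from a frequency list exported from a concordancing tool.

```python
# Sketch: cumulative lexical coverage of the n most frequent words.
# The toy counts below are invented for illustration; real counts would
# come from a frequency list generated from the corpus.

def cumulative_coverage(freq_list, n):
    """Percentage of all tokens accounted for by the n most frequent words."""
    total_tokens = sum(count for _, count in freq_list)
    top_n = sorted(freq_list, key=lambda pair: pair[1], reverse=True)[:n]
    return 100 * sum(count for _, count in top_n) / total_tokens

# Toy frequency list: (word, count) pairs.
freq = [("the", 120), ("patient", 45), ("nurse", 30), ("assess", 5)]
print(cumulative_coverage(freq, 2))  # coverage of the two most frequent words
```

Running the same calculation with n = 2,500 against a real frequency list is how a cut-off like "close to 95% of the text" can be verified.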
Compiling a corpus from a textbook was a long and extensive task, one that preferably should be done as a team. To get acquainted with the process, teachers may want to try developing a corpus and annotated frequency-based vocabulary lists from smaller, more specific sources that fit their needs, such as graded readers, novels, journal articles, or textbook chapters. One advantage of doing this is knowing the statistical frequency of the words that compose the corpus and how they are used in that specific context. This can validate intuition and facilitate the selection of key vocabulary or expressions to be taught and tested. Similarly, it can help teachers make informed decisions as to which words might be best presented in a gloss. Another advantage is being able to extract high-frequency collocations specific to the target corpus. In short, a corpus-based approach is a form of evidence-based language pedagogy that provides teachers with information to guide decisions regarding vocabulary teaching, learning, and testing. It is important to note, however, that the smaller the number of words in a corpus, the lower its stability, reliability, and generalizability (Browne, personal communication, March 12, 2013). Having said that, a smaller corpus can still be of value for your teaching and learning goals. As Nelson (2010) noted, "the purpose to which the corpus is ultimately put is a critical factor in deciding its size" (p. 54).

This article will provide a practical explanation of the steps involved in creating a specialized corpus and frequency-based vocabulary list using free resources. Suggestions will also be presented on how to annotate such a list for student use. Finally, a brief explanation of how annotated lists were used in our EAP program will be given. The following is intended as an introductory, step-by-step, practical guide for teachers interested in creating a corpus.
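The frequency information described above is normally produced with a concordancer such as AntConc, but the underlying idea can be sketched in a few lines of code. The tokenizer below is deliberately crude (lowercased alphabetic strings, with simple apostrophes allowed) and is my own assumption for illustration, not the algorithm any particular tool uses:

```python
import re
from collections import Counter

def frequency_list(text):
    """Return (word, count) pairs for lowercased word tokens, most frequent first."""
    # Crude tokenizer: runs of letters, optionally with one internal apostrophe.
    tokens = re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())
    return Counter(tokens).most_common()

sample = "The nurse assessed the patient. The patient rested."
print(frequency_list(sample))
```

A list like this, generated from a single chapter or article, is enough to start ranking candidate vocabulary items for teaching or glossing.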
Preparing a Corpus

Target Materials

The first important step in creating a corpus is thinking about your teaching context, your students' language needs, and how the corpus will be used.
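Whatever source materials are chosen, a concordancer such as AntConc typically works with plain-text files, so each source document ultimately needs to end up as a UTF-8 .txt file. The sketch below shows one way that preparation step might look; the folder names are placeholders, and the whitespace cleanup is a minimal assumption about the kind of debris copy-pasting or PDF export leaves behind:

```python
# Sketch: normalize a folder of raw .txt files into a clean corpus folder.
# "raw_texts" and "corpus" are placeholder folder names for illustration.
from pathlib import Path

def build_corpus_folder(source_dir, corpus_dir):
    """Copy each .txt file, re-encoded as UTF-8 with whitespace collapsed."""
    Path(corpus_dir).mkdir(exist_ok=True)
    for src in Path(source_dir).glob("*.txt"):
        text = src.read_text(encoding="utf-8", errors="replace")
        # Collapse stray line breaks and runs of spaces left by PDF export.
        cleaned = " ".join(text.split())
        (Path(corpus_dir) / src.name).write_text(cleaned, encoding="utf-8")
```

Keeping one file per source chapter or article also makes it easy to add or drop materials later as the corpus design is refined.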