Volume 7, Issue 1: 2014

The Journal of Writing Assessment

Linguistic microfeatures to predict L2 writing proficiency: A case study in Automated Writing Evaluation

by Scott A. Crossley, Kristopher Kyle, Laura K. Allen, Liang Guo, & Danielle S. McNamara

Abstract

This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a corpus of 480 independent essays written for the TOEFL. A stepwise regression analysis indicated that six linguistic microfeatures explained 60% of the variance in human scores for essays in a test set, providing an exact accuracy of 55% and an adjacent accuracy of 96%. To examine the limitations of the model, a post-hoc analysis investigated differences in the scoring outcomes produced by the model and the human raters for essays with score differences of two or greater (N = 20). Essays scored high by the regression model but low by human raters contained more word types and perfect tense forms than essays scored high by humans but low by the model. Essays scored high by humans but low by the regression model showed greater coherence, syntactic variety, and syntactic accuracy, along with better word choice, idiomaticity, vocabulary range, and spelling, than essays scored high by the model but low by humans. Overall, the findings provide important information about how linguistic microfeatures can predict L2 essay quality on TOEFL-type exams and about the strengths and weaknesses of automatic essay scoring models.

Introduction

An important area of development for second language (L2) students is learning how to share ideas with an audience through writing. Some researchers suggest that writing is a primary language skill that poses greater challenges for L2 learners than speaking, listening, or reading (Bell & Burnaby, 1984; Bialystok, 1978; Brown & Yule, 1983; Nunan, 1989; White, 1981). Writing skills are especially relevant for L2 learners involved with English for specific purposes (i.e., students primarily interested in using language in business, science, or the law) and for L2 learners who engage in standardized writing assessments used for admittance into, advancement within, and eventual graduation from academic programs. Given the importance of writing to L2 learners, it is no surprise that investigating how the linguistic features in a text can explain L2 writing proficiency has been an important area of research for the past 30 years. Traditionally, this research has focused on propositional information (i.e., the lexical, syntactic, and discoursal units found within a text; Crossley, 2013; Crossley & McNamara, 2012), such as lexical diversity, word repetition, text length, and word frequency (e.g., Connor, 1990; Engber, 1995; Ferris, 1994; Frase, Faletti, Grant, & Ginther, 1999; Jarvis, 2002; Jarvis, Grant, Bikowski, & Ferris, 2003; Reid, 1986, 1990; Reppen, 1994). More recent studies have begun to assess writing proficiency using linguistic features found in situational models (i.e., a text's temporality, spatiality, and causality; Crossley & McNamara, 2012) and rhetorical features more closely related to argument structure (Attali & Burstein, 2005; Attali, 2007).
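To make such propositional microfeatures concrete, the sketch below computes toy versions of three of them: text length, lexical diversity (as a simple type-token ratio), and mean word length. This is a minimal illustration only; Coh-Metrix and the Writing Assessment Tool operationalize these indices in far more sophisticated ways, and the function and variable names here are illustrative rather than drawn from either tool.

```python
import re

def simple_microfeatures(text: str) -> dict:
    """Toy versions of three propositional microfeatures.

    Assumes non-empty text. Real tools use more refined measures
    (e.g., length-corrected lexical diversity, corpus-based frequency).
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    types = set(tokens)
    return {
        "text_length": len(tokens),                    # word tokens produced
        "type_token_ratio": len(types) / len(tokens),  # crude lexical diversity
        "mean_word_length": sum(map(len, tokens)) / len(tokens),
    }

essay = ("Writing in a second language is difficult. "
         "Writers must plan, draft, and revise under time pressure.")
print(simple_microfeatures(essay))
```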
While research in this area continues to advance, a coherent understanding of the links between the linguistic features in a text and how those features influence human judgments of writing proficiency is still lacking (Jarvis et al., 2003). There are many reasons for this dearth of understanding, chief among them the sheer variety of topics, prompts, genres, and tasks used to measure writing proficiency. Another is the incongruence among the types, numbers, and sophistication of the methods used to investigate writing proficiency, which makes it difficult to compare measures and studies (Crossley & McNamara, 2012).

This study focuses specifically on how the linguistic microfeatures produced in an independent writing task can explain L2 writing proficiency. Our focus is on one specific domain common to L2 writing research: a standardized independent writing assessment as found in the Test of English as a Foreign Language Internet-Based Test (TOEFL iBT). These independent writing tasks require students to respond to a writing prompt and assignment without the use of secondary sources. Standardized independent writing assessments like those found in the TOEFL are an important element of academic writing for L2 learners and provide a reliable representation of many underlying writing abilities because they assess writing performance beyond morphological and syntactic manipulation. These assessments ask writers to produce extended written arguments that tap into textual features, discourse elements, style, topic development, and word choice, and that are built exclusively from their experience and prior knowledge (Camp, 1993; Deane, 2013; Elliot et al., 2013).

Our study builds on previous research in the area but addresses earlier shortcomings by relying on advanced computational and machine learning techniques, which provide detailed information about the linguistic features investigated and the models of writing proficiency developed here. Although general explanations of the variables and models employed in automated essay scoring (AES) systems have appeared in previous publications (e.g., Enright & Quinlan, 2010), detailed explanations of the variables and scoring algorithms are uncommon in published studies on TOEFL writing proficiency because of the need to protect proprietary information (Attali, 2004, 2007; Attali & Burstein, 2005, 2006). In one of the most detailed of these published studies, for example, Enright and Quinlan (2010) outlined the aggregated features included in e-rater (e.g., organization, development, mechanics, usage) and the relative weights for each of these aggregated features. While some of the aggregated features are fairly straightforward (e.g., lexical complexity, which comprises the microfeatures of average word length and word frequency), others are much more opaque. For instance, the aggregated features "organization" and "development," which together account for 61% of the e-rater score, are reported to consist of the "number of discourse elements" and the "length of discourse elements," respectively. However, little information is provided about what qualifies as a "discourse element" or how these elements are computationally identified (though see Burstein, Marcu, & Knight, 2003).
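The general architecture this description implies, standardized microfeatures rolled up into aggregated features that are then combined as a weighted sum, can be sketched as follows. Apart from the reported fact that organization and development jointly carry roughly 61% of the score, every weight and feature value below is hypothetical; nothing here reproduces e-rater's actual algorithm.

```python
# Hypothetical weighted-sum scorer in the spirit of published e-rater
# descriptions (Enright & Quinlan, 2010). Feature values are assumed to be
# standardized (z-scores); the individual weights are invented, except that
# organization + development sum to the reported ~61%.
AGGREGATE_WEIGHTS = {
    "organization": 0.31,        # split of the reported 61% is assumed
    "development": 0.30,
    "mechanics": 0.15,           # remaining weights are illustrative
    "usage": 0.14,
    "lexical_complexity": 0.10,
}

def lexical_complexity(mean_word_length_z: float, word_frequency_z: float) -> float:
    # Published accounts describe this aggregate as average word length plus
    # word frequency; the equal weighting here is an assumption. Lower
    # frequency (rarer words) is treated as higher complexity.
    return 0.5 * mean_word_length_z - 0.5 * word_frequency_z

def aggregate_score(features: dict) -> float:
    return sum(AGGREGATE_WEIGHTS[name] * value for name, value in features.items())

essay_features = {
    "organization": 0.8, "development": 0.5, "mechanics": -0.2,
    "usage": 0.1, "lexical_complexity": lexical_complexity(0.6, -0.4),
}
print(round(aggregate_score(essay_features), 3))
```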
Thus, our goal in the current study is to contribute to the growing body of knowledge regarding the identification of microfeatures by providing a comprehensive, rigorous, and explicit model of L2 writing quality built from microfeatures, one that can serve as a guide in writing instruction, essay scoring, and teacher training. In addition, we examined the limitations and weaknesses of statistical models of writing proficiency (cf. Deane, 2013; Haswell & Ericsson, 2006; Herrington & Moran, 2001; Huot, 1996; Perelman, 2012; Weigle, 2013a). We did this by qualitatively and quantitatively assessing mismatches between our automated scoring model and human raters (the agreement metrics involved are illustrated in the sketch at the end of this section). This analysis provides a critical examination of potential weaknesses in automated models of writing proficiency, questioning elements of model reliability while, at the same time, suggesting ways to improve such models. By noting both the strengths and weaknesses of an automatic essay scoring model, we hope to address concerns among scholars and practitioners about model reliability (Attali & Burstein, 2006; Deane, Williams, Weng, & Trapani, 2013; Perelman, 2014; Shermis, in press) and construct validity (Condon, 2013; Crusan, 2010; Deane et al., 2013; Elliot et al., 2013; Haswell, 2006; Perelman, 2012).

L2 Writing

Writing in a second language (L2) is an important component of international education and business. Research into L2 writing has investigated a wide spectrum of variables that explain writing development and proficiency, including first language background (Connor, 1996), writing purpose, writing medium (Biesenbach-Lucas & Wesenforth, 2000), cultural expectations (Matsuda, 1997), writing topic, writing audience (Jarvis et al., 2003), and the production of linguistic microfeatures (Crossley & McNamara, 2012; Ferris, 1994; Frase et al., 2000; Grant & Ginther, 2000; Jarvis, 2002; Reid, 1986, 1990, 1992; Reppen, 1994). The need to teach and assess L2 writing quickly and efficiently at a global scale increases the importance of AES systems (Weigle, 2013b). The current study operates under the simple premise that the linguistic microfeatures in a text are strongly related to human judgments of perceived writing proficiency. The production of linguistic microfeatures in written text, especially in timed writing where the writer does not have access to outside sources, reflects writers' exposure to a second language and the amount of experience and practice they have in understanding and communicating in that language.
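One way to quantify how closely model scores built from such microfeatures track human judgments is through the agreement metrics reported in the abstract: exact accuracy (model and human scores match), adjacent accuracy (scores differ by at most one point), and the two-point mismatch criterion used in the post-hoc analysis. The sketch below illustrates these computations; the scores and function names are invented for illustration, not taken from the study's data.

```python
def score_agreement(model_scores, human_scores, mismatch_threshold=2):
    """Exact/adjacent agreement between model and human essay scores.

    Exact accuracy: model score equals the human score.
    Adjacent accuracy: model score within one point of the human score.
    Mismatches: essays differing by `mismatch_threshold` or more points,
    the criterion used for the post-hoc analysis described above.
    """
    diffs = [abs(m - h) for m, h in zip(model_scores, human_scores)]
    n = len(diffs)
    return {
        "exact": sum(d == 0 for d in diffs) / n,
        "adjacent": sum(d <= 1 for d in diffs) / n,
        "mismatch_indices": [i for i, d in enumerate(diffs) if d >= mismatch_threshold],
    }

# Invented example scores on a 0-5 holistic scale.
model = [4, 3, 5, 2, 4, 3, 1, 5]
human = [4, 4, 3, 2, 4, 3, 3, 5]
print(score_agreement(model, human))
```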
