2019 Study on the Reading Level of STAAR


SCHOOLING VOLUME 10, NUMBER 1, 2019

Readability of the STAAR Test Is Still Misaligned

Susan Szabo, EdD
Professor
Texas A&M University-Commerce
Commerce, TX

Becky Barton Sinclair, PhD
Associate Professor
Texas A&M University-Commerce
Commerce, TX

Abstract

This study examined the readability of the State of Texas Assessments of Academic Readiness (STAAR) Reading passages for 2018 for grades 3-8. These results were then compared to the authors' first study on the topic, which found that the readability of the STAAR Reading passages was one to three years higher than the grade level they were written to assess (Szabo & Sinclair, 2012). This study found that some characteristics of the STAAR test had changed since 2012, but many of the reading passages were still misaligned with the targeted grade level.

The term "accountability" has a plethora of meanings. However, in the context of public education the term has come to represent ideas that were first associated with the No Child Left Behind Act of 2001 (NCLB, 2002) and later with the College and Career Readiness Standards (Conley, 2010, 2012). These legislative policies have focused on student achievement as a way to produce better student learning. Additionally, the Every Student Succeeds Act (U.S. Department of Education, 2015) limited the federal role in defining state accountability (Greene & McShane, 2018). Texas uses the Texas Essential Knowledge and Skills (TEKS) to direct student learning and the State of Texas Assessments of Academic Readiness (STAAR) test to determine whether students are learning the intended curriculum.

Purpose of the Study

In this age of accountability, assessments have gained a negative reputation. However, criterion-referenced assessments are a valuable tool to help drive instruction and to help students become successful learners (Sindelar, 2015).
Assessments can be powerful in helping teachers plan instruction and in letting students gauge whether their learning is up to the state standards. As the STAAR test is a criterion-referenced test based on the TEKS, it is important to investigate why the passing rates are not higher. In 2012, it was found that the STAAR Reading passages (grades 3-8) were written one to three grade levels above the grade for which they were intended, and that the questions were written at a higher level, as they were either think-and-search questions or on-your-own questions (Szabo & Sinclair, 2012). However, this current study focused only on the quantitative dimensions of text complexity as determined by various readability formulas. The following question guided our study: How has the readability of the STAAR Reading passages and the total overall readability average changed over the past seven years?

Readability and Readability Formulas

Readability is the ease with which a text can be read and understood (Gunning, 1952). Readability determines whether any given text is written clearly and at a comprehensible level. There are both pros and cons regarding the use of readability formulas, and both viewpoints have research to support them (Zamanian & Heydari, 2012). During the last century, many researchers (e.g., Dale & Chall, 1949; Flesch, 1948; Fry, 1968; McLaughlin, 1969) have addressed the issue of readability and how to calculate it as a way to make classrooms, the workplace, and public communications more effective.

The purpose of readability formulas is to determine the difficulty of a text so that the reader can judge whether the material can be read without frustration (Begeny & Greene, 2014). This information helps authors convey complex ideas more clearly and more effectively to their target audience.
It also gives the reader advance knowledge about the text, which may help in determining which book to check out or purchase (Zamanian & Heydari, 2012). However, readability formulas cannot tell if the target audience will understand the text, as they do not measure the context, the reader's prior knowledge or interest in the topic, or the cohesiveness of the text (Bailin & Grafstein, 2001; Bertram & Newman, 1981; Kirkwood & Wolfe, 1980; Zamanian & Heydari, 2012). Additionally, it was found that tinkering with a text to produce acceptable readability levels may make the text more difficult to understand (Rezaei, 2000). Nevertheless, various readability formulas are commonly used today to determine the readability of government documents, educational materials for students, newspapers, magazines, and popular literature (Begeny & Greene, 2014).

Readability formulas are mathematical in nature and focus on different text features. These features include the number of words in a sentence, the percentage of high-frequency words on predetermined grade-level word lists, the number of multisyllabic or "hard" words, and/or the number of letters in the text (Bailin & Grafstein, 2001; Begeny & Greene, 2014). For this reason, several formulas should be used and averaged when determining the readability of a selection of text, to account for the differences in formula design (Szabo & Sinclair, 2012).

Methodology

Procedure

In 2012, the researchers investigated the readability level of the STAAR Reading tests for grades 3-8 as well as the types of questions asked (Szabo & Sinclair, 2012). Another readability study on the STAAR Reading passages was done by Lopez and Pilgrim (2016), who found similar results.
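The use-several-formulas-and-average approach described above can be sketched in a few lines of code. This is a minimal illustration, not the online calculators the authors used; the ARI and Coleman-Liau constants are the published ones, while the sentence- and word-counting heuristics are simplifying assumptions.

```python
import re


def text_features(text):
    """Count sentences, words, and letters with simple heuristics."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    return sentences, len(words), letters


def ari(text):
    """Automated Readability Index: letters per word and words per sentence."""
    s, w, l = text_features(text)
    return 4.71 * (l / w) + 0.5 * (w / s) - 21.43


def coleman_liau(text):
    """Coleman-Liau Index: letters and sentences per 100 words."""
    s, w, l = text_features(text)
    return 0.0588 * (l / w * 100) - 0.296 * (s / w * 100) - 15.8


def average_grade(text, formulas):
    """Average several formula estimates, as the study's method prescribes."""
    return sum(f(text) for f in formulas) / len(formulas)
```

Because each formula weights text features differently, the averaged estimate always falls between the individual estimates, damping any single formula's bias.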
In 2015, HB 743 required the Texas Education Agency (TEA) to modify the STAAR Reading tests, reducing the number of passages that had to be read and the number of questions that were answered (Huberty, 2015). These changes had third-grade students reading four passages and answering 34 questions. Additionally, two questions were added at each successive grade level, so that eighth-grade students had 44 questions to answer about their reading passages. All students in grades 4-8 read six passages. Because of the previous study and the changes by TEA, this study focused only on the 2018 STAAR Reading passages to investigate what, if any, changes occurred in the readability of the assessment passages.

First, all of the released STAAR Reading passages for grades 3-8 were downloaded from the Texas Education Agency website (2007-2019). The PDF documents were converted into Word documents. All photos, graphics, directions, and item questions were removed. Line-by-line editing was done to ensure that line numbers and page numbers did not appear and that there was consistent spacing between all words. Additionally, at each grade level of the STAAR Reading tests, one poem for students to read and interpret was included. However, these poetry passages were not included in this readability study, as the variations in format prevent accurate readability calculations (Fry, 1977).

The reading passages were entered into three different online readability index calculators that linked the readability levels to grade levels. The first calculator used both the Fry and the Raygor readability formulas (ReadabilityFormulas, 2019a). The second calculator used the Flesch-Kincaid, the Coleman-Liau Index, the SMOG Index, the Automated Readability Index (ARI), and the Linsear Write (ReadabilityFormulas, 2019b).
The third calculator used the Dale-Chall Formula (ReadabilityFormulas, 2019c). Thus, eight readability formulas were used in the calculation of readability. This approach was necessary, as more than one readability formula should be used to provide a more accurate grade-level indication (Szabo & Sinclair, 2012). After determining the readability of each of the grade-level reading passages, the results were added and averaged. These scores were then compared to the Reading Consensus Scores (RCS; ReadabilityFormulas, 2019b), and the results were found to be similar.

Following are brief descriptions of the readability formulas used in this study.

Flesch-Kincaid. Rudolph Flesch's readability research (1948) made him an early authority in the field, and he inspired additional readability formula variations and applications (Kincaid, Fishburne, Rogers, & Chissom, 1975). The formula uses average sentence length and the number of syllables per 100 words to determine a grade-level reading score. This readability formula was first used by the Department of Defense to determine the difficulty level of technical manuals and today is a standard function in Microsoft Word (Zamanian & Heydari, 2012).

Coleman-Liau Index. This formula examines the number of letters per word and the number of sentences per 300 words of text. It was created for the U.S. Department of Education to calculate the readability of textbooks for schools (Coleman & Liau, 1975).

SMOG Index. The SMOG, which was created by McLaughlin (1969), was first used to evaluate healthcare material. The formula counts every word with three or more syllables within 30 sentences. This formula is appropriate for fourth-grade to college-age readers.

The Automated Readability Index (ARI). This formula was created to use computers to calculate the readability of text. The formula uses a ratio representing the number of letters per word and the number of words per sentence (Kincaid et al., 1975).
Linsear Write Formula. This formula was developed
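To illustrate how syllable-based formulas of the kind described above operate, here is a hedged sketch of the Flesch-Kincaid Grade Level and SMOG calculations. The formula constants are the published ones; the vowel-group syllable counter and the punctuation-based sentence splitter are rough simplifying assumptions, which is one reason different calculators disagree.

```python
import math
import re


def count_syllables(word):
    """Rough heuristic: count vowel groups; drop one for a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(1, n)


def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59


def smog_index(text):
    """SMOG: 1.043 * sqrt(polysyllabic words * 30 / sentences) + 3.1291."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.043 * math.sqrt(polysyllables * 30 / sentences) + 3.1291
```

Note how the two formulas diverge by design: Flesch-Kincaid responds to every extra syllable and longer sentence, while SMOG ignores everything except words of three or more syllables, so a passage dense with short sentences but technical vocabulary can score very differently on each.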