READING BEYOND THE FOLDER: CLASSROOM PORTFOLIO ASSESSMENT AS A LITERACY EVENT

A dissertation submitted to Kent State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

by

Curt M. Greve

August 2016

Dissertation written by

Curt M. Greve

B.A., Kent State University, 2006

M.A., University of Dayton, 2011

Ph.D., Kent State University, 2016

Approved by

Brian Huot, Chair, Doctoral Dissertation Committee

Pamela Takayoshi, Member, Doctoral Dissertation Committee

Derek Van Ittersum, Member, Doctoral Dissertation Committee

Sandra Murphy, Member, Doctoral Dissertation Committee

Christopher Was, Member, Doctoral Dissertation Committee

Accepted by

Robert W. Trogdon, Chair, English Department

James L. Blank, Dean, College of Arts and Sciences


Table of Contents

List of Figures ...... iv

List of Tables ...... v

Acknowledgements ...... viii

Chapter 1: Introduction and Theoretical Background ...... 1

Chapter 2: Literature Review ...... 34

Chapter 3: Methods ...... 81

Chapter 4: Results and Analysis ...... 134

Chapter 5: Conclusion ...... 212

Appendix A: Edited Collections on Portfolios ...... 230

Appendix B: Rodney’s Portfolio Fact Sheet ...... 231

Appendix C: Recruitment Script for Portfolio Survey ...... 232

Appendix D: Portfolio Survey Consent Form ...... 233

Appendix E: Classroom Portfolio Assessment Survey ...... 235

Appendix F: Recruitment Script for Interviews ...... 241

Appendix G: Field Note Consent Form ...... 243

Appendix H: Document Collection Consent Form ...... 244

Appendix I: Classroom Observation Consent Form ...... 245

References ...... 247


List of Figures

Figure 1. Portfolio Publications from 1985-2011 ...... 26

Figure 2. Chart of Codes from Edited Collections on Portfolios in Press from 1978-1999 ...... 28

Figure 3. College/University Demographics ...... 84

Figure 4. Participant Position in the Departments Demographics ...... 84

Figure 5. Participants' Degrees Held Demographics ...... 84

Figure 6. Word Cloud Generated from Initial Coding ...... 114


List of Tables

Table 1. Francis’s Portfolio Description ...... 95

Table 2. % of Rodney’s Classes Observed ...... 97

Table 3. % of Francis’s Classes Observed ...... 97

Table 4. Interview Time Table ...... 107

Table 5. Sample Coding Sheet ...... 112

Table 6. Sample of Initial Coding ...... 114

Table 7. Interview Excel Cell Counts ...... 116

Table 8. TAW-G Examples ...... 119

Table 9. TAW-S Examples ...... 120

Table 10. TAC Examples ...... 120

Table 11. TAP Examples ...... 121

Table 12. PE Examples ...... 122

Table 13. TE Examples ...... 123

Table 14. ST Examples ...... 124

Table 15. WT Examples ...... 125

Table 16. TO Examples ...... 126

Table 17. INT Examples ...... 127

Table 18. RC Examples ...... 128


Table 19. RUC Examples ...... 128

Table 20. RAD Examples ...... 129

Table 21. VR Examples ...... 130

Table 22. GT Examples ...... 130

Table 23. SA Examples ...... 131

Table 24. MINB Examples ...... 132

Table 25. TIM Examples ...... 132

Table 25. Students with Previous Portfolio Experience ...... 138

Table 26. Students Who Had Been in Graded/Not Graded Portfolio Classes ...... 141

Table 27. Coding Key with Abbreviations and Small Descriptions ...... 145

Table 28. Interview Turns Talking Totals ...... 146

Table 29. Tally of Occurrences of Codes in Participants' Interviews ...... 147

Table 30. Percentage of Nature of the Discourse Comments ...... 148

Table 31. Examples of Students' Reactions to “New” Kinds of Response in the Classes ...... 149

Table 32. % Frequency of Codes for Content of Discourse ...... 151

Table 33. Increases/Decreases in Categories from Students Who Participated in Multiple Interviews ...... 153

Table 34. Increases/Decreases in Nature of the Discourse Categories ...... 156


Table 35. Breakdown of Anna’s Increase/Decrease Categories for Content of Discourse ...... 162

Table 36. Breakdown of Chloe’s Increase/Decrease Categories for Content of Discourse ...... 167

Table 37. Breakdown of Will’s Increase/Decrease Categories for Content of Discourse ...... 175

Table 38. Breakdown of Kara’s Increase/Decrease Categories for Content of Discourse ...... 182

Table 39. Breakdown of Holden’s Increase/Decrease Categories for Content of Discourse ...... 187

Table 40. Breakdown of Ryan’s Increase/Decrease Categories for Content of Discourse ...... 194


Acknowledgements

First, I want to thank Brian Huot, who has played many important roles in my professional development, first as my teacher and mentor and later as a friend. Without your summer course in 2005, I would not be where I am today. Your summer course altered my ideas of teaching and learning. Over the years you have kindled my curiosity in assessment and learning. You inspired ideas in this dissertation; the ugly sentences are my own. To Pam, you always made me feel like a scholar who had important things to contribute to the field. You saw good in my projects and pushed me to explore those ideas. To Derek, your feedback in R&I was so in-depth that it made me feel inadequate in my own response practices. You also helped me to worry less about sharing my ideas. To Sandy, who was my portfolio hero long before I was able to fully articulate my idea of a portfolio or how powerful they were. Your input on this dissertation has gone beyond my expectations. To my friend Bill, you helped me get through the difficult times of graduate school and helped me think about my work in different ways. Meeting for our own dissertation boot camp kept me focused and motivated. My friends, the assessment nerds, Bill, Elliot, Nikki, Ashlee, Jamie, and Melody, you all helped me think about assessment in novel and intriguing ways. To Jenny Dixon, for all your help in the Program. To the students and teachers from my study, without you all, this research would not have been possible. I only hope that I do justice to your voices and perspectives. And to Ryoko, my wife and mother of my most precious little girl, you supported me, financially at times and mentally throughout. Thank you for all your sacrifices. To Emma, my daughter, you helped me stay young and stress-free in all the craziness. Playing outside with you and the dogs, Momiji, Sakura, and Hime, helped me to appreciate the beautiful things in life. To my mom, Vicky, and step-father, Roger, you helped us secure a place to live as well as maintain a home where we no longer lived. The two of you have made many sacrifices to help us get to where we are today. A few lines in my acknowledgements cannot express the gratitude that I have for you both.

This dissertation is dedicated to everyone who helped me get to this point in my life and especially my daughter Emma. Thank you all for your help, guidance, support, and wisdom for the last four years and all those to come.


Chapter 1: Introduction and Theoretical Background

Scope of the Study

As Huot and Williamson (1997) note, portfolios were once the most popular form of evaluation for the teaching and assessing of writing, but since the late 90s they have become less and less the topic of scholarly inquiry (Greve, 2004; Perry, forthcoming). In four to five years in the mid to late 90s, portfolios went through an “explosion” period (Elbow & Belanoff, 1997) that included four national conferences focusing solely on the dissemination of portfolio knowledge as well as eight edited collections of portfolio scholarship1. Morris, Greve, Knowles, and Huot’s (2015) study looks at thirty-four assessment texts written over a thirty-eight-year time span (1974-2012), before and after the year 2000, to try to gain a sense of the “emerging” field (Gallagher, 2011; Condon, 2011; Behizadeh and Engelhard, 2011) of writing assessment through its canon. The authors’ decision to exclude portfolio texts from the study of assessment books was based on portfolios’ distinct lives, apart from other assessment scholarship, and also on portfolios’ waning influence on writing assessment literature2. This decision gives credence to portfolios being their own body of work within the writing assessment scholarship.

1 See Appendix A for a list of the eight edited books on portfolios. The eight books were published from 1991-1997, what some scholars have referred to as the “explosion period.”

2 Since the 1997 “explosion” in portfolio scholarship, only two edited collections on portfolios have come out, both on e-portfolios, in 2002 and 2009.

During their prime in the 90s, portfolios were seen as a way to promote the teaching of writing (Graves & Sunstein, 1992; Yancey & Weiser, 1997; Gill, 1993), as a way of reclaiming assessment from large-scale testing (Elbow & Belanoff, 1986; Murphy, 1997; Elbow & Belanoff, 1997), and as capable of representing multiple literacies and linguistic diversity (Murphy, 1994a; Belanoff, 1994). Nevertheless, absent from any work on portfolios in the late 70s and early 80s was the scholarship from literacy studies (Scribner & Cole, 1978; Heath, 1982; Heath, 1983; Street, 1984) that debunked theories of literacy framed in terms of economic or cognitive growth (Goody & Watt, 1968; Ong, 1986; Goody, 1986). The early scholarship on literacy accorded literate practices a special status and ignored how literacy works within the purposes and contexts of the people using it. Brian Street (1984) argues that earlier notions of literacy made claims about the following aspects of cognition: “it facilitates ‘empathy,’ ‘abstract context-free thought,’ ‘rationality,’ ‘critical thought,’ ‘post-operative’ thought (in Piaget’s usage), ‘detachment,’ and the kinds of logical processes exemplified by syllogisms, formal language, elaborated code, etc.” (2). Street debunks these conceptions by looking at how literacy is used in specific contexts, offering up an “ideological” model, which focuses on the social practices of reading and writing that are culturally embedded in the ways that people use literacy.

Along with Street’s work, a growing number of other scholars have provided us with shifting notions of literacy, with an increasing trend to study literate practices in specific contexts, looking at how literacy functions in these spaces (Heath, 1982; Heath, 1984; Street, 1984; Fishman, 1987; Heath, 1989; Barton & Ivanic, 1991; Mitchell & Weiler, 1991; Street, 1994; Moss, 1994; Cushman, Kintgen, Kroll, & Rose, 2001; Smith & Wilhelm, 2002; Barton, Ivanic, Hodge, & Tusting, 2007; Heath & Street, 2008). Much of this new work in literacy studies has focused on the theoretical framework of Shirley Brice Heath’s “literacy event,” which enlarges our understanding of literacy as social activities structured around how people use and talk about text.


In this chapter I will begin to establish the framework for my study by discussing Shirley Brice Heath’s (1982) notion of “literacy events” and their connection to classroom writing assessment. This chapter also provides a synopsis of a neglected area of response in the writing classroom, the student’s perspective, from which I make the argument for embracing the literacy scholarship by creating a theory of classroom portfolio assessment/response that aligns with contemporary theories of learning and literacy. By understanding the types of “literacy events” students have with teachers’ response, we can employ classroom writing assessment that takes into account the social practice of assessment. I will also discuss the decline in portfolio scholarship and the change in focus in the scholarly literature. This chapter also provides a historical look at portfolios and some of the major literacy theories that my study will use, situating my work within those fields. In addition, I will furnish a brief discussion of formative assessment. Chapter 1 will establish a rationale for the four distinct areas (portfolios, response, formative assessment, and literacy) that Chapter 2 will explore. This framework will show that this project and its methodology (Chapter 3) are not only necessary but also will open up new areas of inquiry, joining together literacy and assessment.

The Problem

According to Heath (1982b), “literacy events” are “occasions in which written language is integral to the nature of participants’ interactions and their interpretive processes and strategies … a literacy event can then be viewed as any action sequence, involving one or more persons, in which the production and/or comprehension of print plays a role” (445). Heath’s study, which has been cited more than any other piece of scholarship addressing literacy3, is one of the most extensive accounts of literacy we have to date. Her examination of the literate practices of individuals in the Piedmont Carolinas, from her ten-year longitudinal study in the 70s and 80s, provides us with a framework for how to observe and record how literacy functions. Heath’s data-driven concept of “literacy events” grants us a theoretical lens to study literate activity in various contexts, enhancing our understanding of how literacy functions and is transmitted.

3 According to Google Scholar, as of May 2016 Ways With Words has been cited 11,371 times.

Some examples of “literacy events” that Heath (1982) includes are: a church congregation meeting where everyone must read regulations for applying for a loan or grant, a Girl Scout passing out information about selling cookies and fundraising (446), and women sitting on a porch discussing applications and documents needed for sending children to school (450). These social events, structured around how people use and talk about text, are not passive moments for participants; they require an interaction in order to engage in literate practices. Reading and interacting in some form are requirements of “literacy events.”

Strangely enough, with all the advancements and buy-in to the literacy scholarship at the college level, we have yet to turn our attention to classrooms, specifically in terms of assessment and students’ perceptions of assessment. As a field, writing studies and college composition pedagogy have failed to look at literacy, except for observing literacy development and literacy growth in young learners, neglecting the literate activity of reading and interpreting assessment from the student’s point of view. Literacy is a complex, multidimensional symbol system that people use to communicate, and all literacy is created and used in specific social contexts. To study literacy by itself is “autonomous” (Street, 1984), but to study literacy within the context in which it was created and used allows us to investigate the social practices of individuals in those situations and the ways literacy functions and is transmitted. In neglecting to gain students’ perspective on assessment, how they read and interact with it, we do not know how or if they use teachers’ response. To remedy this problem and take the students into account, we need to refer to Shirley Brice Heath’s ethnographic work and use the “literacy event” as a conceptual frame to investigate students’ reactions to written response.

Heath’s study lays the foundation for literacy as an interactive process in the same way that certain uses of portfolios push for and enable interactions. Some of portfolios’ early agreed-upon tenets were collection, selection, and reflection (Yancey, 1992), important aspects that aid students in the activity of self-assessment. By collecting various types of writing, then selecting pieces that are important for their portfolio, students are able to reflect upon their writing in meaningful ways, allowing them to interact. Many scholars agree that collection and selection are important attributes that grant students ownership of their work (Yancey, 1992; Camp, 1993; Hamp-Lyons & Condon, 2000; Cambridge, Cambridge, & Yancey, 2009), but I argue that collection and selection are secondary to the way portfolios are used. Within a sixteen-week course on college campuses, collection and selection are limited because of time constraints placed on both teachers and students. While collection and selection are invaluable for long-term portfolios, in a college classroom, where oftentimes everything is included, they have less of a role in constructing portfolios and are secondary to the ways that we engage student interaction through assessment. Another problem with collection, selection, and reflection is that students do not always know which pieces to include in portfolios because they are unable to make the decisions themselves for various reasons (age, immaturity, being unsure of their own writing). In cases like these, teachers, parents, or grades on papers aid in the selection process. In having help with the selection process, students are left out of an important self-assessment process and largely left mystified by the entire assessment process.


Just as Heath claims protean status for literacy, Sandra Murphy (1994a) points out that portfolios are protean and change depending upon the situation, location, and teachers using them. Murphy looks at the ways portfolios can take on the shape of the curriculum, noting the different theories that portfolios can enact. She explains the curriculum as being what students experience in the classroom, what is taught, and what outcomes students are held accountable for. Murphy makes the argument that because portfolios reflect the curriculum, we are able to make inferences about the underlying philosophical assumptions and beliefs about teaching and learning on which they are based. In other words, portfolios enact theories of teaching and learning. In one example of a use for portfolios, Murphy outlines how a “behaviorist’s portfolio” would utilize worksheets and offer grades and praise for correct answers, reinforcing certain types of answers and activities. This usage of portfolios would reflect not only the teacher’s views and assumptions on learning but also the views of the school and the curriculum. In other examples that Murphy provides, we are able to view different theories of assessment, not all of which are aligned with current models of teaching and learning or encourage reflection, an important component of portfolio assessment. Different curricula and different teachers can use portfolios in very different ways, so it is important that we think about portfolios as capable of enacting various competing theories of writing.

If portfolios are to be a long-lasting teaching and assessment tool, then I argue that we should utilize the literacy scholarship to make portfolios more meaningful for all parties involved. Portfolios and assessment are all capable of enacting various theories of writing, and based on Heath’s (1982b) definition of “literacy events,” various uses of portfolios are capable of creating an interaction, but only if students are able to use the feedback. If students do not read teacher commentary, or cannot read the feedback, then there is no interaction. Interactions require that participants engage with the written text and then use it for some end. For example, putting a grade on a paper does not elicit an interaction from students. A grade might elicit a reaction, but to have an interaction a reciprocal act must occur. However, some students may read written commentary on graded papers and try to interact, but the types of interactions are restricted or prohibited simply because the grade shuts off the ability to engage. Connors and Lunsford (1993), in a study looking at the largest sample of teacher response ever analyzed, found that 60% of teachers’ comments on graded papers were for grade justification while only 11% showed any evidence of a revision policy. For the most part, teachers’ commentary has been about grades, not learning. I argue that not all interactions are aligned with contemporary social theories of learning and literacy. For example, I find it difficult to imagine how a behaviorist portfolio (or others tapping into older theories of learning and writing) could create an interaction that would be meaningful for students. In thinking about the interactions that response and portfolios are capable of, it is useful to consider Brandt’s (2001) notion of “sponsorship.” In Brandt’s work we can see how different types of sponsors “enable, support, teach, and model, as well as recruit, regulate, suppress, or withhold, literacy” (19). Assessment and response are capable of these various types of “sponsorship,” so it is important that we take into consideration the types of sponsorship our assessments/response enact for students.

Response, as scholars have noted, has for the most part been one-sided (O’Neill and Fife, 1999; Murphy, 2000; Fife and O’Neill, 2001). In neglecting to focus on empirical research involving students, the response scholarship has yet to shed light on students’ involvement with text4. Heath’s work shows us that literacy is interactive and requires an interaction. In thinking about response and what we have learned, scholars like Straub (1996, 1997, 2000) and Connors and Lunsford (1988, 1993) have mostly given us decontextualized accounts of response, telling us what to do and with what frequency, but have done little to show us its reception by or interaction with students5. What is clear in the response research, though, is that comments like “AWK” or other types of stylistic/mechanical critiques do not lay out a clear path for students nor engage them in a literate activity of self-assessment6. As I mentioned above, the comments teachers provide can “sponsor” literacy in different ways, but not all are capable of creating an interaction.

4 For the most part scholars have neglected the student’s voice, with the exception of the work done by Melanie Sperling and Sarah Warshauer Freedman, scholars from outside Rhetoric and Composition whose work has largely been overlooked by the College Composition communities.

In looking at the Writing Program Administration (WPA) learning outcomes as well as the portfolio scholarship, it is hard to see how anyone has yet addressed the ways that literacy functions in assessment outside of measuring literate growth or showcasing multiple types of literacy (various samples of writing). It seems that as a field we have worried more about a portfolio’s physical attributes than about ways to make portfolios and assessment more interactive for students. We have yet to tap into theories of language and literacy to make portfolios more functional and useful for all.

Two decades ago, Graves and Sunstein (1992) argued that the portfolio movement was “one of the best opportunities for students to learn how to examine their own work and participate in the entire literacy/literate process. What we don’t know is the best way to involve students in the process of maintaining effective portfolios” (4). The reason we did not know how to involve students in portfolios then, and still do not know now, is that we have cut students out of the literate activity of assessment and have hardly thought of assessment as a literate activity at all. While scholars have started to look at the social nature of assessment (Broadfoot, 1996) and its consequences (Walker-Johnson, 2009; Murphy, 2013), we have yet to tap into theories of literacy and assessment. Specifically, when scholars talk about literacy and portfolios, literacy is usually represented in terms of showing literacy development through literacy portfolios (Matthews, 1992; Stone, 1997; Cohen & Wiener, 2003) or in terms of portfolios housing more than one type of literacy (Murphy, 1994; Gallagher, 2007; Cambridge, Cambridge, & Yancey, 2009). These connections between portfolios and literacy are invaluable for students gaining ownership but neglect to include students in the literate activity of reading and interacting with response. At their core, “literacy events” are communicative, but if we are not involving and engaging students in assessment then at the very least we are denying them access to a very important literate activity, leaving them with an impoverished literacy practice.

5 Straub’s 1997 research asking students through a survey was a step in the right direction of trying to gain an understanding of what students think about response, but the questions he was asking were decontextualized in the sense that students were taking a survey looking at a paper with teacher comments that was not their own. Straub was trying to gain a sense of what types of comments students liked and were receptive to, but what his survey research can teach us is minimal compared to what we could learn by looking at response’s effect directly and contextually through each student’s individualized teacher response.

6 In the Barton et al. (1998) study The Awkward Part of Awkward Sentences, the authors find that the “awkward” parts of writing are part of the process that young writers go through to develop their abilities. They also note that different teachers interpret and mark different sentences “AWK,” and that many teachers have difficulty identifying why sentences are awkward in the first place. Since “AWK” means different things to different teachers, it comes as no surprise that comments like “AWK” can puzzle students.

Toward a Theory of Portfolio Assessment

Currently, it is difficult to extrapolate a coherent theory of teaching and assessment from the portfolio literature, but to create a usable theory of portfolios we must look at some of the criteria that go into the completion of portfolios as well as the theoretical work that this project will be working within. Some common features that have remained constant in the usage of portfolios are “collection,” “selection,” and “reflection” (Yancey, 1992). These attributes are features that many scholars have agreed upon and use with portfolios, with the exception of the selection process for college classes, because in many college writing classrooms all pieces of writing are included. Camp (1993b) states that if we deny “students the opportunity to select from and reflect meaningfully on their writing, such a portfolio would discourage student ownership over the portfolio, it would not engage them as active learners exercising judgment over their work” (196). Student ownership is stressed in the collection process of portfolios (Murphy & Smith, 1990; Yancey, 1992; Camp, 1993; Black et al., 1994) and is important for decision making and reflection, which creates the avenue for students to think meaningfully and rhetorically about their writing and growth over the term of the course.

Because portfolios often carry a multitude of papers constructed over time and varying in genre, we can see how they foster multiple types of literacy (Elbow & Belanoff, 1997; Murphy & Underwood, 2000; Gallagher, 2007). Pat Belanoff (1994) argues for portfolios and literacy because they are contextually relevant, meaning that literacy is situated; for her, portfolios allow for various samples of writing to be included. Belanoff’s ideas about portfolios are important components of usage because they speak to the multiple types of writing that can occur in naturalistic settings over time, but they do not actually tap into theories of assessment and literacy in regard to how students read and make sense of assessment. Belanoff models portfolios after Scribner’s (1984) Literacy in Three Metaphors: “literacy as grace,” “literacy as power,” and “literacy as adaptive,” and explains literacy as a “protean concept, [that] can only be defined within the social context in which it is discussed” (13). Quoting Scribner, Belanoff explains that “ideal literacy is simultaneously adaptive, socially empowering, and self-enhancing,” and concludes that to look at portfolios in any other way causes them to lose their potency and weakens the ways that intertwined literacies function (13). Belanoff stresses the importance of the context through which portfolios are constructed as a social act, relating it to “Brandt’s definition of literacy as ‘one’s involvement with other people—rather than with texts (Literacy as Involvement, 32)’”, and conjectures that “portfolio assessment brings people together to create a literate environment” (21). This interactive process is important for encouraging ownership and “literacy as power” for students. Literacy as power can also be seen in Berlin’s (1994) ideas of subverting the portfolio, where students are asked to think rhetorically about the audience of the portfolio and therefore are able to have authority over their text and the way that it is read. Belanoff’s use of literacy gives students control and ownership of their text and portfolios by involving them in the creation of documents for the portfolio. This helps portfolio theory by providing student ownership and encouraging students to self-assess, but, like Murphy’s (2000) critique of the response literature, we do not know the students’ side on portfolios or response.

Belanoff’s ideas of literacy bring us closer to a usable theory for portfolios but fall short of involving those most affected by assessment, the students. The ideas that Belanoff (and others) speak of have also helped feed the belief that portfolios are a better assessment tool because they offer a variety of writing samples, which they do, but I argue that portfolios are about usage. If we are not employing portfolios in ways that align with theories of literacy and creating interaction with students, then it becomes harder to tap into the reformative power of portfolios that Huot (2002) talks about. Huot, in (Re)Articulating Writing Assessment for Teaching and Learning, offers us a usable theory for portfolios (pp. 71-80) with delayed grading employing “instructive evaluation,” arguing that portfolios are capable of changing the classroom dynamic when used in a way that allows us to align our teaching and assessment practices. Huot says that using portfolios to align teaching and assessment requires a pedagogical shift in which teachers are conscious of the shift in assessment theory driving portfolio assessment.

Collection, selection, and reflection offer us some criteria for how to compile a portfolio. By allowing students to collect, select, and reflect on their work, they are able to gain greater ownership and agency over their writing. When students assume some control over their work, it positions them as an authority, encouraging them to take on different identities in the classroom and allowing the “situated identity” (Gee, 2007) of writers to emerge. Through interviews and observations of students in three portfolio classes with delayed grading over one semester, I discovered that students who were able to “situate” themselves into the roles of writers were able to engage with teachers’ commentary more easily than students who more comfortably identified as students. These ideas will be discussed further in Chapter 4, where I present some of the data and analysis of students’ voices. Permitting students ownership over their work is an important aspect of portfolio assessment, but I argue that we need to start engaging students through response/assessment, allowing them greater agency. Collection, selection, and reflection act as a glue of sorts for compiling portfolios, but in order for us to fully integrate portfolios as a social practice, which for Street (1984) is synonymous with literacy (1), we need to employ theories and research on responding and grading. The missing principle that my dissertation takes into consideration is the social act of response and how that fits into theories of portfolios and learning. By observing, participating, and talking with students, my research offers the students’ perspective on assessment and furnishes a new way to think about and deliver response.

By looking at how “literacy events” take place through portfolios/response, we can then start considering teachers, their comments, and assessment as “literacy sponsors” (Brandt, 2001). According to Brandt (2001), sponsors are “any agents, local or distant, concrete or abstract, who enable, support, teach, or model, as well as recruit, regulate, suppress, or withhold literacy” (19). Response and teacher feedback in a class are how literacy is transmitted, negotiated, interpreted, and used. In an ideal system of response, modeled after formative assessment, a teacher can promote someone’s literacy, but response done another way can suppress it. Murphy’s (1994) critique of a behaviorist portfolio shows how certain uses of portfolios can repress literacy. To promote “ideological” notions of literacy (Street, 1984), it is important to look at assessment and grades and the various consequences they can have for students, from material consequences (scholarships, graduate school, jobs, placement, discounts for performance) to intangible ones (self-esteem, identity, and movement through systems). Huot and Perry (2009) assert that “classroom-writing assessment remains underresearched, under-theorized, and underutilized as a legitimate and important part of teaching students how to write” (423). Neglecting student voice prevents us from knowing students’ perspective, but by making classroom writing assessment a site for study we are able to hear students’ voices, providing a fuller narrative.

Huot (2002) lays out the theoretical and practical importance of portfolios for classroom assessment, challenging us to “exploit and recognize” the shifting notions in assessment theory driving portfolios. Portfolios can be used very badly and in some instances are just papers in a folder. Huot explains that if we are to use portfolios to connect assessment to teaching, it is important to think about the ability portfolios have to “alter the relationship between grading and evaluation in the composition classroom” (71). Putting grades on individual papers can undermine the overall purpose of a portfolio by emphasizing the written product over the work that goes into the compilation of the portfolio. The variety of writing in a portfolio “can only be understood and evaluated in context and relationship with each other” (72). This approach to assessment values the fluidity of the pieces of writing that students compose in a classroom for a portfolio, privileging their relationship and positionality with regard to each other. Portfolios allow students to work as writers throughout the semester, postponing grades until the end of the semester so students have more time to focus and reflect on their writing.


Portfolios afford us much in terms of classroom pedagogy: they can alter the power relationship between teacher and student, putting them on a level playing field and allowing for dialogue (Elbow & Belanoff, 1997; Huot, 2002); present opportunities for reflection (Murphy & Smith, 1992; Yancey, 1992; Yancey, 1996; Daiker, Sommers, & Sygall, 1996; Yancey, 1998); and create space to grow as writers (Belanoff & Dickson, 1991; Graves & Sunstein, 1992; Yancey & Weiser, 1997; Gill, 1993). Although there is no way to completely disrupt the power relationships in a classroom, portfolios create an opportunity for writing teachers to take on the role of a coach (or, as I tell my students, a big brother). Huot notes that not all stakeholders in assessment have an equal voice, but postponing grades and employing what he calls “instructive evaluation” helps create a space for conversations that grades could otherwise silence.

Huot’s notion of “instructive evaluation” is synonymous with formative assessment, and the processes that he uses to describe the teaching pedagogy that “instructive evaluation” employs mirror formative assessment/response, a catalyst for classroom “literacy events.” “Instructive evaluation” is of particular interest because it involves the student in the assessment process, something that teachers have yet to do on a regular basis, but also because Huot says “instructive evaluation” is “tied to the act of learning a specific task while participating in a particular literacy event” (69). Later, in Chapter 4, I will present six case studies and “assessment events”7 highlighting students’ interactions with teacher commentary. These “assessment events” are interactions students have with written commentary, positioning them in places that allow and encourage agency.

7 “Assessment event” is a term that I develop later in the dissertation; it falls under the umbrella of a “literacy event.” Like “literacy events,” “assessment events” are about interactions, specifically the interactions students have with teachers’ response.


Formative assessment/response is non-evaluative assessment geared toward guiding writers to where their writing stands (a map), while also allowing teachers to assess the impact of the curriculum (Sadler, 1989; Atkin, Black, and Coffey, 2001; Taras, 2005; Dargusch, 2014; Glazer, 2014). Formative assessment is contrasted with summative assessment, which places a score/grade on papers and is typically used at the end of a process, acting as a sum of the product. Black and Wiliam (2004), in their review of over 250 formative assessment studies from several countries, state that the purpose of formative assessment is “about developing interaction between teachers and learners, about evoking evidence from such interactions and using it as feedback” (36). This definitive, data-based definition of formative assessment from Black and Wiliam, like Heath’s “literacy events,” is about interactions. These interactions with formative assessment allow teachers to shift grades to the back burner, creating an atmosphere that fosters dialogue between student writers and teachers. Black and Wiliam (2004) explain that “an assessment activity can promote learning if it provides information that can be used, either by teachers or their students when assessing themselves and each other, to modify the teaching and learning activities in which they are engaged” (22). Assessment activities such as interactions with teacher commentary allow students to use assessment to strengthen their work by providing them with feedback that they can use to improve their writing.

Sadler (1989) makes the argument that summative assessment is counterproductive to learning and states that formative assessment is the most useful for instruction. Therefore, designing classroom assessment practices that are dialogic in nature, specifically literate conversations, response, feedback, and challenges on the page (Zebroski, 1989; Carini, 1994; Fife & O’Neill, 2001; Huot, 2002; Kynard, 2006), can be crucial to students’ developing their own ability to assess as part of their developing processes for writing. Self-assessment is an important skill that allows students to take more from the writing than a score (Sadler, 1989; Yancey, 1998; Yancey, 2000; Shepard, 2000; Moss, 2003; Black & Wiliam, 2004). Clearly portfolios have serious theoretical potential, but if we neglect these theoretical properties of portfolios and assessment, they can become nothing more than collections of papers in a folder and lose any power for transforming teaching, learning, and assessment.

Portfolios as a “Literacy Event”

One way to keep portfolios from becoming papers in a folder is to deepen portfolio practice by linking these practices to Shirley Brice Heath’s (1982) notion of a “literacy event”: “occasions in which written language is integral to the nature of participants’ interactions and their interpretive processes and strategies” (50). Heath’s research was a decade-long longitudinal study looking at the literacy of three Piedmont Carolina communities, where Heath was able to see how literacy was transmitted throughout the various communities, from children up to the adults living in them. Heath’s notions of literacy tie into the theories of “autonomous” and “ideological” literacy developed by Brian Street (1984). Street’s concepts of literacy at their most basic level relate to usage: the ways in which literacy is used by certain participants to accomplish goals. To put it simply, literacy does things for people in an “ideological” model, the same way that, in a perfect world, response acts in a writing classroom. Zebroski (1989), in offering up multiple ways of reading student text, tells us that “it is less important what we make of student writing than that we make something, something principled of it” (46). What my study proposes to make of classroom portfolios/response is to look at portfolios/response as a “literacy event.” “Literacy events” are times “in which the production and/or comprehension of print plays a role” (Heath 1982b, 445). These types of interactions and involvement with print bring about the opportunity for interpreting and understanding a text, creating a literate environment. What is of particular interest for my study is what students make of these literacy exchanges, because as Heath notes, “literacy events” are negotiated in both written and oral discourse, depending upon the nature of the exchange.

In a writing classroom the nature of a “literacy event” will also vary depending on various factors, but it will always revolve around the interpretation or production of a piece of text. Murphy and Grant (1996) note that when assessment is placed into the hands of the teacher and used in service of learning:

assessment helps teachers hear individual students’ voices, discover the strategies students are using, and discover the knowledge students have already constructed. Assessment becomes an opportunity for planning the next move in the conversation. Further, assessment provides a means for engaging students in self-reflection and for acknowledging their role as collaborators in the learning process. (288)

The conversation that occurs between teachers and students takes shape through response. Freedman’s (1987) study Response to Student Writing is of particular importance here, for it defines the way that response enters the class. Freedman notes that “response includes more than the written comments teachers make in the margins of their students’ finished pieces of writing” (4). Freedman defines response as “all reaction to writing, formal or informal, written or oral, from teacher or peer, to a draft or final version. Response may occur as a formal part of a classroom lesson or informally as teachers and peers have brief seemingly casual conversations” (5). In a college writing classroom the act of responding becomes the “literacy event,” but for the most part what we have learned through the response literature is that students do not know how to interpret response or what to do with it afterwards. In keeping with “ideological” notions of literacy (Street, 1984) and the “literacy event” (Heath, 1982b), portfolios and response can only work when there is an exchange of ideas and a negotiation between readers and writers, which is why it is imperative that students are able to assume “situated identities” (Gee, 2007) as writers rather than as students in a portfolio classroom. Taking on a new role as a writer in the class is important for students to be able to meaningfully engage in the literate activity of interpreting and implementing response, an idea that I will explore later in Chapter 4. Students who are not able to “situate” themselves as writers are not able to fully participate in and make meaningful interpretations of response to their writing. In a portfolio class with delayed grading where formative assessment is used, this exchange often manifests through revision memos to peers and teachers’ responses to drafted pieces. In these memos or discussions with readers of their prose, students who are able to situate themselves into new identities as writers are able to more meaningfully contribute to and understand the relationship of response to writing.

The type of response that I argue helps stimulate a measurable “literacy event” comes from Shepard’s (2006) notion of formative assessment. Shepard believes that formative assessment is “a model for learning that directly corresponds to the zone of proximal development (ZPD) and sociocultural learning theory,” theories that Vygotsky (1978) had envisioned about the “imaginary learning continuum, between what a child can do independently and what the same child can do with assistance” (628). The frame for formative assessment, according to Atkin, Black, and Coffey (2001), needs to take the shape of the following questions:

• Where are you trying to go?
• Where are you now?
• How can you get there? (qtd. in Shepard, p. 628)

Formative assessment is designed to act as a guide or a map, providing the basic structure to help make something stronger than it was in the beginning. Formative assessment is used to gauge where something stands and as a way to strengthen competency, and like Vygotsky’s notion of the ZPD, it helps to elevate a student’s writing to a place they could not have reached on their own. The act of responding to any writer’s text is a highly social and contextualized action, and when done in a manner that mirrors the tenets of formative assessment it is a highly valuable assessment tool.

The first appearance of formative assessment came in Michael Scriven’s “The Methodology of Evaluation” (1967). In the article, Scriven talks about the complexity of curriculum development and the roles that assessment plays in education systems. Although Scriven’s work and his reference to formative assessment were geared towards curriculum design, his overall purpose was to get at the methodological difficulties of evaluation. The article was also talking back to an earlier Cronbach (1963) article on evaluating curriculum, showing the possibilities formative assessment has to make teaching and learning a more meaningful activity that ultimately would produce better output. An analogy that Scriven offers that is useful in thinking about the values and possibilities of formative assessment compares curriculum design to engineering. In Scriven’s analogy he says:

To develop a new automobile engine or a rocket engine is a very, very expensive business despite the extreme constancy in the properties of physical substances. When we are dealing with a teaching instrument such as a new curriculum or classroom procedure, with its extreme dependence upon highly variable operators and recipients, we must expect considerably more expense. The social pay-off is enormously more important, and this society can, in the long run, afford the expense. (83)

The reference to the social pay-off in design is important when considering the social cost of assessment and response to student writing. In comparing education to engineering processes, Scriven highlights formative assessment’s main goal of trying to produce the best outcomes it can. Scriven explains that the purpose of formative assessment is to “discover deficiencies” (51), so regardless of machines or writing, formative assessment is a tool that can be used to close gaps in a process and create a stronger product.

Looking at formative assessment (response) in terms of a “literacy event” gives us a better connection to teaching and writing, because it allows us to look at response as more than just comments on the page. If the very acts of reading and writing are highly social practices, it follows that it is essential that we consider responding to students’ texts as an “ideological” exchange of literacy. Since assessment and teacher response can act as a “literacy sponsor,” it is important that response push students into a new way of thinking, allowing them to develop as writers. In thinking of response this way, the type of response used in class should act as a map guiding students to the production of a better text while also giving them a voice in the matter to keep the social aspect of literacy intact. The scholarship on portfolio usage for classroom writing assessment has not fully explored and adopted crucial research and scholarship from literacy studies, let alone looked at response in terms of a “literacy event” before, so the research in my dissertation will uncover ways that assessment and literacy intersect. This research is important because literacy studies inform the construct of what is being assessed in the first place. The highly social process of response is, as Huot (2002) states, “influenced by a wealth of factors, many of which are grounded in our interaction with the student herself” (121). These interactions, or “literacy events,” are an important factor to look at when thinking about how to make assessment meaningful and how to use assessment to create better learning and better teaching.


My research names and frames (Schon, 1990, 42) classroom portfolio assessment as a “literacy event” (Heath, 1982b). In using the literacy scholarship, my study aims to consider college writing classes and their participants as sites where “literacy events” occur; thus this dissertation fills an important gap in the assessment/response scholarship. By looking at the “literacy events” associated with portfolios/response, we can examine how assessment engages students in classroom literate practices. Using Heath’s data-driven definition of “literacy events” to look at the literate activity that occurs through portfolios and response, we can start to understand what types of reactions students have to assessment. Literacy is an ideologically packed and socially constructed activity that enables individuals to use literacy for various ends. Barton (1991) states that “the very act of reading or writing takes on a social meaning: It can be an act of defiance or an act of solidarity, an act of conforming or a symbol of change” (11). Literacy is always about involvement (Brandt, 1990), so it is important to explore the social meanings that arise, which vary from situation to situation; since we know very little about portfolios or response from the student side, it is vital that we look at literacy within those contexts to gain an understanding of the social meaning assessment carries.

A Brief History of Portfolios

Any study addressing a practice like portfolios, whose popularity has waned, needs to be seen as much as possible in a historical sense for a comprehensive understanding. To this end, I begin the discussion with Ford and Larkin’s (1978) germinal article, in which portfolios were presented as a more comprehensive way to evaluate writing. Portfolios were borrowed from the arts, theater, education, and visual fields to serve writing studies purposes. Portfolios supported the writing pedagogy emerging in the late 70s from the process scholarship earlier in the decade. Apart from Ford and Larkin (1978), most portfolio adoptions were used to replace other forms of writing assessment, including so-called indirect writing assessment measures like multiple choice tests on usage, grammar, and mechanics still widespread at the time8. While developing notions of composition pedagogical theory helped the initial popularity of portfolios in the early 80s, later work by Peter Elbow and Pat Belanoff kicked off the portfolio movement as a force to combat large-scale assessments. Elbow and Belanoff’s 1986 faculty exchange article “Portfolios as a Substitute for Proficiency Examinations” outlines the ways that portfolios were used to restructure proficiency measures at SUNY Stony Brook. Following Elbow and Belanoff’s early work on portfolios, an explosion of portfolio scholarship emerged on placement (Daiker, Sommers, & Sygall, 1996; Haswell, 2001; Hester, O’Neill, Neal, Edgington, & Huot, 2007), proficiency (Elbow & Belanoff, 1986; Belanoff & Dickson, 1991; Belanoff & Elbow, 1991), statewide assessment (Belanoff & Dickson, 1991; Freedman, 1993; Koretz et al., 1995; Callahan, 1997; Callahan, 2000; Murphy & Underwood, 2000; Smith, 2002), large-scale assessment (Anson & Brown, 1991; Black et al., 1994), teacher training and professional development (Bishop, 1991; Black et al., 1994; Yancey & Weiser, 1997), teaching (Graves & Sunstein, 1992; Murphy & Smith, 1992; Yancey & Weiser, 1997; Gill, 1993; Sunstein & Lovell, 2000), reflection (Wolf, 1989; Camp & Levine, 1991; Camp, 1992; Murphy & Smith, 1992; Yancey, 1992; Yancey, 1996; Daiker, Sommers, & Sygall, 1996; Yancey, 1998), and more recently e-portfolios (Hawisher & Selfe, 1997; Fischer, 1997; Blair & Takayoshi, 1997; Cambridge, Kahn, Thompkins, Yancey, 2001; Cambridge, Cambridge, Yancey, 2009).

Roberta Camp, a longtime employee of the Educational Testing Service (ETS) and early proponent of portfolio writing assessment (1985, 1990, 1992, 1993a, 1993b, 1996), makes the argument for the value that portfolios hold both in terms of their utility to teachers and students for teaching and learning and in terms of their value for testing purposes. Camp (1993b) notes that “with increased emphasis on the social consequences of assessment and on the effects of assessment on the educational system… the costs of direct assessment are no longer assumed to be unreasonable” (189). Work done by Camp helped establish portfolios specifically (Camp, 1990; Camp, 1992) but also socially-based writing assessment practices more generally (1993a, 1993b). Two decades after Camp’s contribution to writing assessment, Murphy (2013) echoes Camp’s notion of socially-based writing practices by arguing that we need to take into consideration various stakeholders in assessment and the “cost” of an assessment’s impact on their educational experience (33).

8 While multiple choice tests have disappeared from many placement procedures on college campuses, there is still widespread use of other indirect measures like the COMPASS test, used to place millions of students in first-year writing courses. Although multiple choice tests are no longer common in colleges, they are still very prevalent in many K-12 settings.

With shifting definitions of validity and reliability in the measurement community, many in both writing studies and the measurement community have come to recognize the value of portfolios for representing the teaching and learning of writing. Portfolios are thought to be an excellent way of looking at the way students write and learn because portfolios provide a better construct representation by including multiple drafts and revision letters addressing changes. Portfolios also offer insights into the writing of linguistically diverse students (Murphy, 1994b) and of shifting populations of students from different countries/cultures and varying socioeconomic classes at all levels of education (Hamp-Lyons, 1996; Murphy & Underwood, 2000; Gallagher, 2007; Cambridge, Cambridge, & Yancey, 2009). Portfolios permit us to look at multiple types of writing that take place in more natural contexts over time, and grant us the ability to take into consideration various stakeholders in assessment.

Kathleen Blake Yancey’s (1999) article in the fifteenth anniversary issue of College

Composition and Communication, describes writing assessment history as three distinct waves,

23 each with a different, and perhaps, ascending focus. Yancey argues the “waves” as a useful lens through which to understand the portfolio movement in assessment. The first of Yancey’s waves

(19501970) begins with “reliability” in which indirect measures dominated other forms of assessment. The second wave (19701980) of assessment came in the form of holistic scoring, a procedure Huot (2002) ascribes was developed by ETS scholars Godshalk9, Swineford, and

Coffman (1966) and implemented by the Bay Area Writing Project (Davis, Scriven, & Thomas

1978) as a way to score pieces of writing in a fair and “reliable” manner. Holistic scoring permitted teachers to use real writing samples as a means to assess student writing. Portfolios, the third wave (1986-1999) of assessment, emerged enabling scholars and teachers to look at and assess more than one piece of writing at a time. Yancey’s “waves” explain that portfolios (the third wave) were such a success in the classroom because they look at real writing that was written in more than one sitting in natural conditions. Huot (2002) critiques Yancey’s use of the wave metaphor, pointing out that in Yancey’s “waves” the only new element is the sample of what is being assessed. For example, the first wave consists of multiple choice tests, the second wave’s sample is a single essay, and the third wave’s sample is multiple essays. The two portfolio placement programs, Michigan and Miami, Yancey cites used holistic scoring, muddying the usefulness of the wave metaphor. Focusing only on the sample, Yancey’s “waves” infer the same sorts of theories guiding all writing assessments. Although Yancey was uninterested in looking at the theories driving assessment, it is useful to think about how she describes the movements in assessment because of their connections to other theories of writing that were being introduced to the writing studies community. Yancey’s narrative enacts certain

9 In a book still in preparation, Norbert Elliot and Richard Haswell believe that there is an earlier mention of holistic scoring developed prior to Fred Godshalk's work on holistic scoring in the 1960s. Since the book is not yet published, and I have not read the text, I can neither confirm nor deny an earlier citation for holistic scoring, but the practice of holistic scoring as developed by Godshalk can be seen in the assessment done by the Bay Area Writing Project.


The influence of portfolios on the field was kairotic because portfolios' introduction coincides with the development of social theories of writing. Starting with Emig's (1971) germinal work on process and Vygotsky's ideas about the social basis of learning (1986), theories of writing have evolved and helped to shape the ways in which composition scholars look at and think about writing. The different theoretical viewpoints emerging through the scholarship created the catalyst for portfolios to take hold. Based on what we knew or were coming to know about writing, portfolios seemed like the natural adaptation to match theory with practice; portfolios were a way for teachers to put theory into practice.

Fifteen years ago, Hamp-Lyons and Condon (2000) stated that portfolios had yet to be theorized in a way that allows us to tap into the tremendous power they have as an assessment and learning tool, and they attempted to create theory to keep portfolios from falling by the wayside. This state of affairs has not changed. If anything, the situation has deteriorated. As Perry (forthcoming) shows (see Figure 1), there has been a decline in the amount of portfolio scholarship since Yancey's (1999) third wave of writing assessment.


Figure 1: Portfolio publications from 1985-2011

Perry attributes this decline in scholarship to the field's overall tendency to jump onto new "hot topics" and to the inability of the portfolio scholarship to create a powerful enough theoretical base to guide and enact successful portfolio practices. Although there are multiple ways in which Perry's graph can be interpreted, one possible interpretation is that scholars are not writing about portfolios with the same frequency as they once were.

Recently, conversations about portfolios have revolved around "what to do" with portfolios rather than "how and why we should" use them. In addition to the published scholarship, there is a lot of portfolio chatter in venues like the WPA-L, where questions about e-portfolio software and other bureaucratic matters appear. These administrative matters oversimplify the usage of portfolios and detract from the real work of research and theory that should accompany discussion about portfolios. The field of writing assessment as a whole has been largely concerned with administrative matters and less with theories of writing assessment as an area of research (Huot, 2002; O'Neill, Schendel, & Huot, 2002). In doing this we are preventing portfolios from becoming a more powerful assessment tool that can improve both teaching and learning.

While Perry looked at the publishing trends in portfolio scholarship, my 2014 CCCC presentation examined book chapters focused on portfolios from the 1990s, when portfolios were in their "explosion" period. For my research I discovered eight edited collections published solely on portfolios by Composition scholars from 1978-1999 (Yancey's third wave). In my study I coded the eight texts and analyzed 143 chapters. In this analysis, I coded for the three main categories of scholarship in Writing Studies (Huot, 2002): "pedagogy," "research/theory," and "administration." "Pedagogy" articles represent times when scholars are writing about how to use portfolios; they often present success or failure stories with portfolios in an individual teacher's narrative of a classroom or across a school. "Research/theory" articles often take the shape of empirical studies or are pieces that build theory about the ways to use portfolios as well as why we should be using them. The last category, "administration," comprises articles typically written by a Writing Program Administrator or a person in charge of making decisions, often about accountability. This analysis of 143 chapters on portfolios shows that research/theory articles were underrepresented in the scholarship (29%), with the bulk of articles looking at administration and pedagogy (71%). In looking at the books from the 1990s we can see that scholars were more interested in using portfolios than in focusing on the theories driving them, which also seems to be the current trend. In neglecting research and theory in portfolios, scholars are limiting the ways that portfolios can offer us a more comprehensive view of assessment. In neglecting research and theory we are also limiting the types of voices that we hear about portfolios and assessment and are only looking at them from a narrow view.


Figure 2: Chart of codes from edited collections on portfolios in press from 1978-1999 (Research/Theory: 29%; Admin & Pedagogy: 71%)

In the most recent book on portfolios, by Cambridge, Cambridge, and Yancey (2009), most articles are either about how to use electronic portfolios (e-portfolios) or make administrative arguments for the use of them. This lack of theorizing portfolio practices has helped to contribute to the decline of portfolios in the scholarship. As we have seen in cases like Kentucky (Callahan, 1997; Callahan, 1999), when teachers are required to teach with portfolios without understanding the theories driving them and connecting them to classroom assessments, we see that portfolios fall by the wayside. Teachers and administrators have to make conscious decisions about when to use portfolios and when not to, and more importantly they have to know why they are using them and the purposes behind portfolios; otherwise the theories of writing reinforced through portfolio usage are neglected and portfolios become containers that both teachers and students dread compiling. On many college campuses portfolios are often used without a strong theoretical basis for teaching and learning, resulting in many teachers using portfolios in ways that enact different theories of writing assessment. In cases like these, portfolios are merely a collection of papers in a folder that have no theoretical or pedagogical value. Scholars worried early on that if the theories guiding portfolios were ignored, they would turn into a fad (Huot, 1994; Elbow, 1996; Murphy & Camp, 1996; Elbow & Belanoff, 1997; Condon, 2000; Huot, 2000) and lose all theoretical and pedagogical value.

10 In Chapter 4 I will talk briefly about how students in my study who had past experience with portfolios talked about them. It appears that students, like teachers, do not understand the theories driving assessment.

Making portfolios a site of research, as well as taking into consideration the literate activity of assessment, contributes to a more comprehensive view of the ways that portfolios can and should be used.

Summary of the Problem

In Spring of 2014, I designed a survey to capture data for my 2015 CCCC presentation looking at teachers' use of portfolios for classroom writing assessment. In Chapter 3, I will go into greater detail explaining the survey and its primary and secondary uses. The survey was designed to determine what types of portfolios teachers were using and how they were assessing them in their college writing classes. The survey was sent out on the WPA listserv as well as to a handful of colleges where I had either worked, studied, or knew someone who could help distribute it. Over a period of two semesters the survey yielded 122 participants. Although limited by the number of participants, I discovered that there is no consensus among teachers in the ways that they are using or assessing portfolios, and that there are many different and sometimes conflicting theories guiding classroom portfolio assessment11.

It appears that many teachers are using portfolios inconsistently because as a field we have neglected to think of portfolios as something other than a better way to assess.

Portfolios have been under-theorized and under-researched; even in Yancey's (1999) view of assessment, which neglects to think about the theories guiding assessment, we can see how people have come to believe that portfolios are better simply because they look at more writing and have "face validity."

11 I presented the results of the survey at the 2015 CCCC convention in Tampa, Florida, where I made the argument that classroom writing assessment, specifically portfolio assessment, was an undertheorized area in need of investigation. From my research I discovered that portfolios were being used in many ways, often with contradictory or unclear assessment procedures.

This opinion is echoed in White et al.'s (2015) Very Like a Whale, where the authors praise their e-portfolio system for having "face validity" (60). White et al. make the argument that portfolios are a valid construct because of the multitude of papers they house. The authors' stance on portfolios, as well as on validity as an evidence-based interpretation, is unhelpful to the field because it misunderstands certain key components of validity. Face validity is a hollow construct: Anastasi (1988) explains that face validity is a desirable quality for assessments but is not validity in the technical sense. She states that "face validity pertains to whether the test 'looks valid' to the examinees who take it, the administrative personnel who decide on its use, and other technically untrained observers. Fundamentally, the question of face validity concerns rapport and public relations" (117), and she warns that for an assessment to be valid we must test the apparatus itself. This is something we have yet to do with portfolios, and because portfolios provide face validity evidence, many teachers and scholars have taken to the current trend of employing e-portfolios for both programmatic and classroom assessment.

In readily employing e-portfolios without a connection to teaching and learning, this trend of jumping onto the portfolio bandwagon risks falling into what Hawisher (1989) terms "technocentrism," where teachers will just use e-portfolios as a convenient way to store and access student writing, a technical issue, rather than thinking of the deeper reasons and theories behind why and how we should be using portfolios. I would argue that e-portfolios and paper-based portfolios share many (if not all) of the same attributes and in fact are not as different as people might think. Early on, portfolios were not able to gain momentum for reasons of storage, but with the efficiency and low cost of electronic storage many of these issues have disappeared.

There is much untapped potential in portfolios and portfolio assessment, but we as scholars need to look at portfolios and assessment as sites of research, a point that Huot (2002) makes best in his book.


Until assessment becomes a site of research interest, there is much that we will not know, and people will continue to use portfolios without understanding the theories driving the assessment.

To redeem portfolios from being papers in a folder, my research attempts to gain the students' perspective to test my hypothesis of whether or not formative assessment allows students to participate in the literate activity of response. In ignoring students' perspectives, we teeter on being "autonomous" in the ways that we assess. To be clear, I am not saying that there is "one right way" to use portfolios, but I am making the argument that not every portfolio iteration is capable of aligning with social theories of learning and literacy. It is my belief that with formative assessment and Heath's "literacy events" we can start to create a more viable model of classroom portfolio assessment that can enhance both teaching and learning through interaction. In ignoring theories of literacy and learning, we run the risk of different types of assessment "sponsorship" (Brandt, 2001), which could potentially leave students in the class with an impoverished version of literacy.

Significance of the Problem

Practice is theory enacted (Zebroski, 1994), and since we have no contemporary theory of portfolio assessment, it is safe to say that there are many different types of theories feeding into portfolio assessment, which limits the potential that portfolios have to put assessment into the hands of teachers and students. The goals of any assessment are to create better teaching and better learning, but by using assessment in conflicting ways, theories of assessment are ignored, leaving us in this case with portfolios as merely papers in a folder. The practice of using an assessment tool before it has been fully theorized is not new to writing assessment as a field (Faigley, Cherry, Jolliffe, & Skinner, 1985), and given the heyday that portfolios had in the early 1990s, portfolios are in need of theorizing lest they fall by the wayside as other assessment tools have in the past. Portfolios have the potential to change the way that we assess by linking teaching and learning to assessment. Portfolios are not a panacea for all of writing assessment's woes, but I argue that they can be used to align with theories of learning and literacy that revolve around interactions. It is important for teachers to have a theoretical understanding of portfolios, but it is more important that students are able to engage in the literate activity of assessment. If we do not start to theorize and research portfolio assessment, then portfolios will be nothing more than papers in a folder.

Research Questions

This study attempts to look at numerous questions about how portfolios function in a classroom, providing a new lens that uses literacy scholarship to look at portfolios and their connections to literacy and social activity. The research questions this study is set up to address are:

(1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading?

(2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote?

(3) How does teacher response and assessment act as a sponsor of literacy?

(4) Are there students in the class who do not interact with the teacher's response, and if so, why?

Through this research we will be able to see a functional use of portfolios tied to theory and research, which will aid our understanding of the ways that portfolios can be used to create interaction. This research will shed light on a neglected area of research and help to give portfolios a new theory for teaching and learning.


Conclusion

In this chapter, I have made the argument that assessment scholarship should align with literacy scholarship as well as formative assessment. In looking at interactions between students and text, we can design assessment practices that fall in line with contemporary social theories of learning and literacy. The purpose of this dissertation is to gain the students' perspective on assessment to answer my research questions. Using ethnographic methods, I was able to gather the students' perspective to help our understanding of the ways that literacy and assessment can work together. In the next chapter, Chapter 2, I will review relevant literature from four distinct fields of study (portfolios, response, formative assessment, and literacy) to synthesize bodies of scholarship that often do not make use of each other. The literature review helps to frame the study and aids in making the argument that the various literatures should be working together. Chapter 2 also helps to set the stage for Chapter 3, where I will discuss the methods of this study. In Chapter 4, I furnish an analysis of the data relevant to answering my research questions, and in Chapter 5, I provide a discussion of the results and make suggestions for further research, as well as discussing how joining literacy and assessment can help reform classroom writing assessment.


Chapter 2: Literature Review

Introduction

In Chapter 1 I argued for aligning literacy studies with assessment scholarship to take portfolios and response into consideration as a socially situated activity. By aligning literacy studies scholarship with assessment, we can consider contemporary social theories of learning and literacy, making assessment more meaningful for teachers and students. Just as Huot (2002) argued that writing assessment outside the classroom is bifurcated, so is the literature within the writing classroom. Because of this bifurcation, my literature review acts as a synthesis of various and distinct bodies of scholarship (portfolios, response, formative assessment, and literacy studies) in addressing my research questions:

(1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading?

(2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote?

(3) How does teacher response and assessment act as a sponsor of literacy?

(4) Are there students in the class who do not interact with the teacher's response, and if so, why?

By accessing multiple bodies of scholarship, this literature review helps knit together theories and concepts that often are not in communication with one another, building a comprehensive theory of portfolios and response within a writing classroom aligned with theories of learning and literacy.

This chapter has three main purposes. The first purpose is to familiarize readers with the portfolio scholarship, which is responsible for much of the early work building a theory for portfolio usage in Rhetoric and Composition. With a plethora of scholarship available, arguably its own canon (Morris, Greve, Knowles, & Huot, 2015), this first section will focus on many early portfolio theories responsible for informing much of the field of portfolio assessment. The first section of the literature review takes a historical approach to how portfolio assessment has been used and talked about in various educational and assessment settings. After the historical look, the portfolio section focuses on issues in portfolio assessment, both for large-scale assessment and classroom writing assessment, such as literacy, agency, and power. The focus of this dissertation is on classroom writing assessment. The literature review, however, includes large-scale assessment as well because some principles can be applied to portfolios at any scale. Nevertheless, the portfolio section of the literature review is not comprehensive; with such a vast body of scholarship, it is impossible and unnecessary to discuss everything about portfolios. In Chapter 1 I offered a brief history of portfolios' life and trajectory, so repeating it here is unwarranted. Portfolios can be and have been diversely used, so the section on portfolios is merely designed to allow the reader to grasp some important pieces of scholarship that have been influential in the portfolio literature.

After reviewing germinal and influential portfolio scholarship, this chapter switches focus to the second area, response. The response section starts with the canon Rhetoric and Composition has built and later shifts its focus, tapping into the formative assessment scholarship that addresses interactions (Black & Wiliam, 2004) and delayed grading (Sadler, 1989) that provide an essential map (Shepard, 2004) for how to improve writing. The formative assessment scholarship acts as an alternative method of response to the one developed by Rhetoric and Composition scholarship. Much of Rhetoric and Composition's scholarship on response has focused on how and when to respond (Sommers, 1982; Connors & Lunsford, 1993; Straub, 1996, 1997, 2000) while also developing how to make response communicative (Zebroski, 1989; Carini, 1994; Kynard, 2006), later discussing the neglected perspective of students (Murphy, 2000; Phelps, 2000; Mathison-Fife & O'Neill, 2001).

In looking at the response scholarship, I will focus first on how response has been thought of and shaped by theories of writing; from there, I will move to the Education scholarship, looking at texts that show how formative assessment acts as a map for both teachers and students in the assessment process. Formative assessment is seldom mentioned in the Rhetoric and Composition scholarship, so this part of the literature review will show how we can borrow from the Education scholarship to strengthen and enhance our own response styles. The formative assessment scholarship will highlight how assessment in portfolios can be used to foster meaningful dialogue in writing classrooms, an often neglected area of study.

The third area of focus for this literature review will be how literacy theories have shifted to account for multiple types of literacy and literate activity (Gee, 1991a, 1991b, 1991c, 2007, 2012; New London Group, 1996). This section focuses on the social nature of learning and literacy, which influences how reading and writing occur. I will ultimately conclude this chapter by discussing how we have yet to tap into important literacy theories in the field of assessment that encourage interaction between student and text. In the assessment scholarship, specifically that on portfolios, we have only looked at portfolios as a means to measure literate development or as able to house multiple types of literacy, and we have neglected to see how theories of literacy, specifically those of Heath (1982b, 1983, 1988), Brandt (1998, 2001), and Street (1984), tie into theories of assessment/response. In a portfolio classroom with delayed grading, the feedback12 from teachers functions as a literate activity in which students and teachers have an interaction. In these interactions, "literacy events" (Heath, 1982b), the response from the teacher takes on different roles as types of "sponsors of literacy" (Brandt, 2001). By looking at Brandt's work, we can see how different types of sponsors are capable of producing different types of literate exchange, some of which promote literacy while others suppress it. To have an "ideological" (Street, 1984) exchange of literacy through response, it is imperative that we link assessment and literacy theory. Neglecting literacy theory in how and what we assess means falling into an "autonomous" (Street, 1984) model of assessment. In an "autonomous" model of assessment, students are unable to fully participate and engage in the literate activity of assessment/response, because grades largely shut down any conversation or interaction that students may have, especially if grades are given on papers with no chance of revision. Grades, a symbol system of values, are used by certain groups of people (students, employers, administrators, etc.) to portray an image of someone's ability or accomplishments, but they are largely a decontextualized view of the individual and of the classroom context in which the grades were earned. In classes without grades, formative assessment functions as a tool to gauge a student's performance, allowing interactions to occur in a contextually bound environment.

12 When I refer to feedback or response, I am in fact referring to assessment. There are some in the field of Writing Assessment who do not believe that response is assessment, but my argument is that response is very much part of assessment, giving students/writers a clear path for where they are, where they need to go, and how they can get there.


By bringing these four bodies of scholarship together for the first time (to my knowledge), this literature review grounds the research for my study, which addresses these questions: (1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading?; (2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote?; (3) How does teacher response and assessment act as a sponsor of literacy?; (4) Are there students in the class who do not interact with the teacher's response, and if so, why?

Portfolios

Portfolios have had a long life span, with numerous books, articles, and conferences dedicated strictly to portfolios (Huot & Williamson, 2007). This distinct body of work has for the most part been viewed apart from other assessment texts (Morris, Greve, Knowles, & Huot, 2015). Because of the vastness of portfolio scholarship, this literature review chooses to look at pieces of scholarship that have had a large impact on the spread and popularity of portfolios for classroom assessment as well as for large-scale and programmatic uses. The focus of this dissertation is classroom writing assessment, but I include portfolio usage for large-scale and programmatic purposes because of the ways the pieces in this literature review align with and complement each other. The pieces were also selected as the foundation of portfolio theory.

Ford and Larkin's (1978) text on portfolios is perhaps the oldest piece of portfolio scholarship in Rhetoric and Composition. Ford and Larkin's article was based on seven years of using and experimenting with portfolios at their institution, without relying on or citing a single piece of scholarship. In the article the authors lay out a basic foundation for how portfolios were used at the Hawaii campus of Brigham Young University. Ford and Larkin explained how portfolios could be used both for keeping and maintaining standards at the university and for classroom instruction. The portfolio system at the university was based on the concept of portfolios that artists had been adhering to for years. At the university, portfolios were implemented to combat grade inflation but also to address rumors of a decline in students' English abilities. To accomplish the task of keeping standards high while also combating grade inflation, the students' teachers as well as outside readers were asked to look at portfolios. Looking at portfolios this way helped bring faculty together to maintain and discuss standards. Scores of portfolios at the school were used by the writing program coordinator to monitor what was occurring in the program. The portfolio system described by Ford and Larkin was designed to maintain proficiency with the English language, and even though their focus was on maintaining standards, the portfolios still functioned to help make students better writers. In the portfolio system at the Hawaii campus, students whose portfolios did not pass were not penalized with a poor grade but rather were given an "X," which allowed them to retake the class without detrimental effects on their GPA.

Elbow and Belanoff13 (1986) look at portfolios similarly to Ford and Larkin in the sense that portfolios offer a more comprehensive way to gauge proficiency. Elbow and Belanoff explain the portfolio system at Stony Brook and make the argument that portfolios provide more evidence than the single sample of writing that had been used in the past for proficiency at the school. Their ideas come from four years of "small scale experimentation" (336) at the university, where portfolios were thought to be a more robust sample of writing that could bring together the community of teachers as well as create an atmosphere which allowed for writing as a process to occur. Elbow and Belanoff also explain how portfolios allowed teachers to take on a role other than strictly that of a judge.

13 For the literature review I have included three similar pieces of scholarship from Peter Elbow and Pat Belanoff's research at Stony Brook University because of their influence on other pieces of portfolio scholarship. Elbow and Belanoff arguably helped to kick off portfolios' widespread acceptance among writing teachers and administrators.

In another article, Belanoff and Elbow (1986a) suggest that portfolios at Stony Brook were about collaboration and community. They suggest that much teaching, and therefore grading, goes on in isolation, which creates disparities between grades in classes. The portfolio system at Stony Brook was started initially to replace proficiency measures at the school. Portfolios were read collaboratively to judge whether or not they were passing. Belanoff and Elbow state that the collaborative reading of the portfolios brought teachers together to work as colleagues. The portfolios at Stony Brook were used to create a community of standards from class to class and semester to semester. Belanoff and Elbow say the communal reading and evaluation by faculty at the university helped give teachers more certainty in their grading practices, and that this communal standard carried over from the larger group to individual teachers in their own classrooms. Along with bringing teachers together as collaborators, the portfolios also encouraged collaboration between teachers and students. Belanoff and Elbow discuss the shifting role of the teacher in the portfolio class. In the portfolio system at Stony Brook, teachers act like coaches to help bring portfolios up to standards. Since portfolios were used to determine whether students passed the class or had to repeat it, and since teachers were not solely responsible for determining if students passed or failed, the teacher's role could be that of a coach who helps students improve their writing and bring it up to an acceptable level.

Belanoff and Elbow (1986b) offer another article on portfolios, discussing the shortcomings of proficiency exams in schools. They argue that proficiency exams undermine the nature of teaching writing because "the writing is unconnected to the study of any material and cut off from connection with any ongoing conversation" (97). The portfolios were supposed to act as a quality control in replacing proficiency exams by going through a mid-semester grading where teachers decided whether the work was worthy of a C. Belanoff and Elbow also discuss the benefits of collaboratively reading/grading portfolios. They describe the portfolios as having two goals: "first, it improves trustworthiness of evaluation, since the readers can base their judgment on more than one piece in more than one mode. Second, it sends the message we want to send to the students about the richness and multiplicity of writing as a process" (105).

Roberta Camp14 (1985), a longtime ETS scholar, talks about how she was first assigned to an ETS-sponsored writing portfolio project. The project's aims were to develop a program capable of obtaining more robust information about student writing ability. Scholars at ETS were not satisfied with the SAT exam because "it measures a fairly narrow aspect of ability" and portrays disadvantaged students in a negative light (92). With portfolios, ETS was hoping to gather broader information about students' talents and abilities, to emphasize writing in the secondary-to-postsecondary transition, and to provide a more reliable source of information for admission into colleges as well as possibly placement. Camp notes her reluctance at first to take on a project such as the portfolio project because of her concern about getting involved in schools' curricula. After this hesitation, Camp used information gathered at a 1981 meeting of local teachers, WPAs, and administrators that ETS held to create a portfolio system. From the information gathered, Camp was able to focus the portfolio project on only one of ETS's three main ideas for portfolios: strengthening the transition from secondary to postsecondary education.

14 Roberta Camp's work has helped to establish portfolios within the measurement community as well as among Rhetoric and Composition scholars. Her early work on portfolios is arguably as influential as Elbow and Belanoff's in trying to establish portfolios as a socially based writing assessment practice.


From talking with local stakeholders in 1981, along with a follow-up meeting in 1982, Camp designed and implemented a portfolio program that emphasized writing. Although the portfolio project was in its early stage, Camp says that "what we are seeing right now is a change in measurement theory itself, in the beliefs about what measurement should do…I think we learned that it is no longer possible to expect tests to be separated from curriculum" (98).

Camp (1990) offers a follow-up to her earlier 1985 article, discussing portfolios' mismatch with early measurement theory relative to current instructional practices. Camp discusses some of the successes the Pittsburgh Arts PROPEL project was having, like changing how teachers taught with portfolios. Teachers' earlier assumption was that portfolios were papers in a folder, but after witnessing the changes portfolios were having on their instruction as well as on their students, they started to think more meaningfully about portfolios. Along with the change in the teachers, students were also thinking and reflecting more meaningfully on their own writing because of the portfolios. Camp credits portfolios with helping students see the value in their writing and the ways their writing can change, which allowed students to take more responsibility for decisions in writing. The Arts PROPEL project allowed time for teachers to discuss goals and outcomes of writing and the portfolios, and through these discussions teachers realized that with portfolios writing was an ongoing process. The portfolios that Camp discusses here not only helped students attain greater agency over their own writing to satisfy assessment outcomes, but also allowed teachers the opportunity to have conversations about what they valued in writing and what sorts of outcomes they envisioned from the portfolios. In Camp's example, portfolios give students more control over their writing but also act as a faculty development tool that can help to strengthen teaching and teacher training.


Dennie Palmer Wolf (1989), also involved in the Pittsburgh Arts PROPEL project, furnishes some description of the portfolios that were used in the schools. The portfolios for the PROPEL project contained multiple samples of writing that could be passed forward from year to year. In an example that Wolf provides, she talks about how a teacher in the schools would ask her students to reflect upon two or three pieces of writing that they had done. In these examples, Wolf notes that students are able to discuss changes they see in themselves as writers and thinkers. Students are able to talk about their own growth as writers by reflecting upon previous work, and results can be seen within a semester or a year when teachers offer students time to reflect upon their work. Much like Camp, Wolf found that teachers were also using the portfolios to reflect upon their own teaching practices. The portfolios in the PROPEL project were useful for both teachers and students and helped both to see the value in reflection.

Paulson, Paulson, and Meyer (1990) explore "what makes a portfolio a portfolio." In doing so they say that a portfolio is a "purposeful collection of student work that exhibits the student's efforts, progress, and achievements in one or more areas" (60). The portfolios must also include areas of students' self-reflection. Paulson, Paulson, and Meyer note that portfolios have the ability to reveal things about their creators, and they say that portfolios can be strong educational tools for engaging students to take ownership over their own learning. Portfolios combine instruction and assessment, and the authors argue that by combining the two we gain more than if they were kept separate. In the article the authors note several properties of portfolios:

1. They offer students an opportunity to learn about learning.
2. Portfolios are something students do, not something that is done to them.
3. Portfolios are different from students' cumulative folders.
4. Portfolios must show explicitly or implicitly the students' activities.
5. Portfolios may serve different purposes.
6. Portfolios may have multiple purposes.
7. Portfolios should show growth.
8. Lastly, good portfolios do not happen by themselves; they require support and examples from teachers to show students appropriate ways to compile a portfolio. (62-63)

The authors conclude by showing us that portfolios offer a broad way to look at learning and assessment. The portfolios that Paulson, Paulson, and Meyer present allow students to take responsibility for their own progress, becoming participants in rather than objects of an assessment.

Roberta Camp (1993b) discusses three different models of portfolio assessment: first, portfolios used to make decisions about students applying for advanced standing in university writing programs; second, portfolios used to create a statewide profile of student writing in fourth and eighth grade; and third, portfolios used to improve writing instruction in sixth through twelfth grade. Because of the dissatisfaction with large-scale assessment and with test scores carrying too much weight and determining educational goals, Camp makes the argument that portfolios represent a broader range of ability, focusing on writing tasks grounded in educational values and linked to educational curriculum. With emphasis on the social consequences of assessment, Camp rallies for portfolio usage because it provides more data points and evidence of student ability than large-scale testing. By reviewing a continuum of portfolio models, the Miami University Writing Portfolio Program, the Vermont Writing Assessment, and the Arts PROPEL Writing Portfolio, Camp shows multiple ways that portfolios can be used by different institutions for their own goals and aims, all of which seek to make better assessment decisions. The portfolios that Camp discusses all fall at different points along what she refers to as "continuums": large-scale or classroom based, student shaped or classroom projects, individually or collaboratively created, graded or not graded, with some models in the continuum being more traditional than others. Regardless of where the portfolios fit into the continuum, Camp argues that portfolio programs all raise important issues for measurement theory, concluding that "portfolio programs are pushing our collective understanding of assessment beyond the range of conventional acceptance" (207).

Sandra Murphy (1994a) discusses portfolios as a protean concept, able to adapt and change based upon location. Murphy argues that portfolios are reflections of curriculum, and by looking at portfolios we are able to glimpse curricular goals. Curriculum, in the way Murphy discusses it, is the classroom and what is taught, as well as the outcomes students are held accountable for. Murphy shows the diverse ways that portfolios can be used. Because portfolios reflect curriculum, we can see philosophical assumptions and beliefs about teaching and learning, much like Zebroski's (1994) idea of practice being theory enacted. Murphy's article looks at programmatic portfolio usage and curriculum, focusing on the different types of values that portfolio usage reflects. One example Murphy provides, of many, is a behaviorist portfolio. A behaviorist portfolio would utilize worksheets and other sorts of activities that offer grades or praise for certain answers; in such a portfolio, students would receive praise reinforcing certain behaviors. Murphy argues that different uses of portfolios tap into different theories of learning, not all of which align with goals and outcomes.

Pat Belanoff (1994), like Murphy, tells us that portfolios are a protean concept and offers Scribner's three metaphors in relation to her reasons for using portfolios. In regard to literacy and society, Scribner tells us that literacy acts as "adaptation," "power," and "grace." With these metaphors of literacy, Belanoff argues for why portfolios fit into Scribner's literacy frame. She sees the metaphors working to increase our understanding of literacy by:

First, "literacy as adaptation," which stresses functional aspects of the ability to read and write; second, "literacy as power," which stresses ways in which reading and writing can advance group and community status; and third, "literacy as grace," which stresses "intellectual, aesthetic, and spiritual participation in the accumulated creations and knowledge of humankind, made available through the written word" (14). Scribner concludes that "ideal literacy is simultaneously adaptive, socially empowering, and self-enhancing" (18). (13)

Belanoff argues that separating the three intertwined literacies makes them all weaker. The rationale for portfolios and literacy works well together, but the limitation of this article is that literacy is only considered as the sample of writing. Belanoff makes the argument for why we need to look at multiple types of writing in a portfolio because doing so provides us with a more diverse sample on which to make judgments, but she thinks of literacy only narrowly, as the sample. This shared belief carries over into much of the scholarship on portfolios, where portfolios are argued to be able to house a multitude of different literacies representing linguistic diversity (Murphy, 1996; Murphy & Underwood, 2000; Gallagher, 2007; Kirkpatrick, Renner, Kanae, & Goya, 2009; Cambridge, Cambridge, & Yancey, 2009).

Murphy (1994b), like Belanoff, describes portfolios as capable of housing multiple types of literacy. In a book chapter looking at portfolios in grades K-12, Murphy is interested in exploring "what portfolios have to offer in the assessment of linguistic minority students, particularly their potential for providing more equitable assessments and new approaches in the teaching of writing to linguistically diverse students" (140). In this chapter Murphy talks about the limitations of standardized multiple choice tests. She shows how standardized tests fail certain students with an example from Miriam Cohen's book First Grade Takes a Test. Murphy points out from Cohen's book a boy named George who is asked what rabbits eat and must choose from three possible answers: lettuce, dog food, or sandwiches. George knows that rabbits eat carrots, so he draws a carrot as an option on the test. Murphy says that "the point is, of course, that when answers are nonnegotiable, children don't have the opportunity to show what they know" (141). In another example Murphy talks about a student named "Elena" who had been a student of Murphy's mother. Elena was an ESL student, and when asked to identify a lamp on a standardized test she was unable to. The reason Elena could not identify the lamp was that her family's trailer had no traditional lamps like the ones on the test. What Murphy tells us is similar to Mike Rose's15 (1990) account of a student reading a poem differently than he did because the student's personal history and cultural background shaped the way the student read the poem. Murphy contends that "tests are not culture free" (142) and are geared more towards cultural beliefs than knowledge. Murphy then offers us an alternative (portfolios) to standardized tests that takes into account linguistically and culturally diverse learners. Murphy states that "using portfolios to understand how language varies with audience, purpose, and across situations is particularly appropriate for students who need to understand how English used for academic purposes differs from other kinds of writing and speaking than they may be accustomed to at home" (144). In the end Murphy makes the argument for portfolios because they "can have an enormous impact on the ways students and teachers work and learn, especially when teachers provide formative feedback and students are engaged in reflective evaluation" (156).

In an edited collection of essays on portfolios, Kathleen Blake Yancey (1992) writes the last essay, reflecting on portfolios in the writing classroom16. Yancey points out that portfolios for Rhetoric and Composition were a grassroots movement that scholars repurposed from the arts to work for writing classes. According to Yancey, all portfolios have three essential characteristics: "they are, first, longitudinal in nature; second, diverse in content; and third, almost always collaborative in ownership and composition" (102). Yancey also tells us that portfolios "include some metacognitive work, that is, some exploration by the writers of their own composing process and of their own development as writers" (104). For Yancey, the defining features of portfolios are collection, selection, and reflection, which often reflect the author's purpose for compiling the portfolio.

15 Rose's article with Glynda Hull, "This Wooden Shack Place": The Logic of an Unconventional Reading, looks at how a student in Rose's class had an unconventional reading of a wooden shack in a poem. Rose, as well as others, saw the shack and a woman hanging clothes as a sign of poverty, but the student saw the same scene as nothing out of the ordinary, as something familiar to him. The student's socio-economic status played into his reading, which differed from Rose's. 16 Yancey also has another piece in the edited collection that tries to establish a portfolio pedagogy, talking about the affordances portfolios have for the writing classroom based on the other essays in the collection, but that essay is similar to the one I review (and less developed) and therefore only appears as a footnote.

A decade later, Yancey (2001) talks about portfolios switching from paper to electronic. In this book chapter, she lays out portfolios as a unified construct, echoing her earlier work by stating three main activities that portfolios promote: "collection, selection, and reflection" (16). Yancey goes through the nuts and bolts of portfolio usage, talking about key components of successful learning and teaching with portfolios, and offers some suggestions for how to compile electronic portfolios. Nearly a decade after identifying collection, selection, and reflection as portfolios' defining features, she explains how the same tenets transfer from paper to electronic versions.

Elbow and Belanoff (1997), in an edited collection on portfolios, reflect on the growth of portfolios in the field of writing assessment. Looking at portfolios' trajectory, Elbow and Belanoff discuss how portfolios have "kicked back at testing itself—helping people rethink some central assumptions and practice" (25). The two outline how portfolios helped scholars reconsider the value of holistic scoring because portfolios, unlike holistic scoring, offer the opportunity for feedback that could enhance learning and provide more evidence to show strengths and weaknesses in writing. Elbow and Belanoff also offer us a summary of the larger theoretical points of portfolios:

• Grades undermine improvement in writing because they restrict and pervert students' naturally developing sense of audience awareness.
• Writing is its own heuristic; it doesn't have to be graded to lead to learning.
• Portfolios lead to a decentralization of responsibility which empowers everyone involved.
• Teacher authority needs to be shared if writers are to have genuine authority.
• All evaluation exists within a context of literacy defined by a multitude of factors, not all of which are products of the classroom.
• Knowledge, whether of grades or of teaching strategies or of theoretical underpinnings, is a product of discussion among community members.
• Evaluating, judging, liking, and scoring are inextricably bound up together and need to be thoughtfully examined. (30)

The two end by stating that portfolios are not a panacea for all of writing assessment's woes, and they caution against joining the bandwagon of portfolio usage because portfolios have their own set of pitfalls, but they argue that portfolios are currently the best system we have in place capable of assessing writing.

Sandra Murphy and Terry Underwood's (2000) book Portfolio Practices: Lessons from Schools, Districts and States looks at various portfolio programs at a variety of institutions, from English departments in schools to state-level departments of education. Their book has ten chapters, eight of which focus individually on the local context of portfolios within a school system or governing body. In the chapters focusing on portfolio systems, Murphy and Underwood follow a similar framework, looking at (1) the stakes attached, (2) how the portfolio systems were developed, (3) the source of authority and who controls the assessment, and (4) support for teachers in maintaining the system. Along with these four points there are also sections on portfolio practice: collection, selection, reflection, and evaluation. In each of these areas Murphy and Underwood explain the details within the schools to give us a picture of what types of portfolios and assessments are occurring at the schools or within the district. The work in this book is extremely important because it enhances our understanding of portfolios and how they function at various levels. The book provides an overview of the history and development of portfolios and leaves us with a clear message that "portfolio assessment systems cannot be discussed apart from their contexts—no more than the students put into their portfolios can be considered apart from the classrooms in which those writings were created" (15).

Huot and Williamson (1997), in talking about large-scale assessment, ask us to rethink portfolios for evaluative purposes and challenge teachers to "recognize the power struggles they and their students find themselves in as they attempt to use assessment instruments like portfolios to teach their students" (56). Huot and Williamson explore issues of power, politics, and surveillance in assessment situations. They start by reviewing scholarship on validity and measurement theory and how it relates to portfolios. Huot and Williamson bring in examples of the Vermont and Kentucky portfolio systems and explain "that the use of portfolios in high stakes assessment scenarios are predicated on political rather than educational rationale" (51). Huot and Williamson remind us that "as portfolios are continually defined in terms of both their pedagogical value and measurement properties, it is important to remember that an assessment technique itself is not always of primary importance" (53). They also point out that portfolios can be used in very different ways and hope that portfolios can overcome the obstacle of standardization versus highly contextualized assessments that are locally driven. The two acknowledge that their call for a "reassessment of the way power is located in assessment, especially in the use of writing portfolios, can be viewed, perhaps, as somewhat utopian, unrealistic, or unobtainable" (55). In addressing issues of power in assessment and portfolios, Huot and Williamson remind us of the power that assessment has and caution us to be careful when designing and using assessments like portfolios, which have the ability to make a positive impact on teaching and learning.


In broad strokes and leaps I have reviewed relevant portfolio scholarship, highlighting various ways portfolios have been used and thought of. Portfolios' exigency in the Rhetoric and Composition world was largely driven by dissatisfaction with large-scale assessments' ability to accurately assess students (Ford & Larkin, 1978; Belanoff & Elbow, 1986a, 1986b), but their widespread usage came from the revelation that portfolios could change the way that teachers taught and students learned (Wolf, 1989; Camp, 1990; Paulson, Paulson, & Meyer, 1990). Along with the benefits to teaching and learning, portfolios also represent a broader measure of writing ability by looking at more writing from natural settings (Murphy, 1994a; Belanoff, 1994) as well as creating a space for reflection (Wolf, 1989; Paulson, Paulson, & Meyer, 1990; Yancey, 1992; Camp, 1993b; Murphy & Underwood, 2000). Although this review of portfolio scholarship is but the tip of the iceberg, it is sufficient to ground us in the history, discussion, and issues involved in portfolio assessment and helps to establish an understanding of portfolios.

Response

The section on response is divided into three parts: "How and When to Respond: A Historical Look at Response and Criticism," "Response: Communicating With Students," and finally "Formative Assessment." These three parts are by no means a comprehensive view of response but rather present building blocks and germinal texts that have helped shape response scholarship. The first two parts, "How and When to Respond: A Historical Look at Response and Criticism" and "Response: Communicating With Students," are mostly written by Rhetoric and Composition scholars and represent a consensus view of the response scholarship, while the third, "Formative Assessment," draws mostly from the Educational Measurement scholarship. All the texts chosen for this literature review were picked because of their historical and/or contextual relevance to this study.

Section 1: How and When to Respond: A Historical Look at Response and Criticism

Nancy Sommers's (1982) research focused on the comments teachers make to motivate revision in student writing. For this study, Sommers examined the commenting styles of thirty-five teachers, looking at the comments they wrote on first and second drafts. The study draws on teachers' comments as well as interviews with teachers and students. Her first finding on teacher response was that "teachers' comments can take students' attention away from their own purpose in writing a particular text and focus that attention on the teachers' purpose in commenting" (149, emphasis hers). Sommers says that "comments are worded in such a way that it is difficult for students to know what is the most important problem in the text and what problems are of lesser importance" (151). Her second finding is that "most teachers' comments are not text-specific and could be interchanged, rubber-stamped, from text to text" (152, emphasis hers). Sommers finds her research disheartening because the teachers in her study were not providing students with thoughtful commentary that encouraged them to engage with their writing. Sommers believes teachers provided this type of feedback because response to student writing was rarely stressed or talked about in teacher training. Sommers states that "written comments need to be viewed not as an end in themselves—a way for teachers to satisfy themselves that they have done their jobs—but rather as a means for helping students to become more effective writers" (155). Sommers believes that unless teachers can provide students with the types of response geared at helping them revise, teacher response will be done in vain.


Lil Brannon and C. H. Knoblauch (1982) tease out the idea that readers will work harder at understanding the meaning behind a writer's text, but that same generosity does not transfer to the reading of student writing. Brannon and Knoblauch state that "denying students control of what they want to say must surely reduce incentive and also, presumably, the likelihood of improvement" (159). In the strange balance in which teachers and students hang, it is difficult for students to improve their writing when teachers do not believe in what the students have to say as authorities of a text. Brannon and Knoblauch challenge teachers to relinquish control over the classroom so that student writers feel free to write what they want. In granting students more authority over their writing, Brannon and Knoblauch believe it will create "rich ground for nurturing skills because the writer's motive for developing them lies in the realization that an intended reader is willing to take the writer's meaning seriously" (165).

Connors and Lunsford’s (1993) article presents research on the largest scale study of teachers’ response to student writing. In this study, they analyzed 3,000 essays attempting to identify the types of “global comments” that teachers put on students papers. They wanted to know “what were teachers saying in response to the content of the paper, or to the specifically rhetorical aspects of its organization, sentence structure, etc? What kind of teacher-student relationship did the comments reflect?” (205). Connors and Lunsford, along with 26 experienced writing teachers, analyzed and coded the essays for global comments. In their analysis they found that 2,297 (77%) of the papers contained global comments, 2,241 (75%) of the papers had some sort of grade or evaluative symbol, and 1,934 (64%) of papers had terminal or initial comments, 318 (16%) of which had overview-style comments with 84% of those comments placed at the end by grades. Connors and Lunsford’s study helped to determine the types of comments teachers typically leave on student paper, with an overwhelming number focusing on

53 global and rhetorical issues. A less impressive number that Connors and Lunsford discovered is that only 11% of the papers showed any evidence of revision policy. Their study suggests that many of the comments, global and rhetorical in nature, were mainly geared towards grade justification (60%), and devoid from any classroom context the study lacks to help us understand the role of response in a writing classroom.

Richard Straub (1996) looks at comments on student writing from teachers who are informed by response scholarship and are themselves contributors to that body of work. In this article Straub analyzes comments from six Rhetoric and Composition scholars: Edward White, Donald Stewart, Chris Anson, Anne Gere, Glynda Hull, and Peter Elbow. What he derives from this analysis of commenting styles is that response should be conversational, which "encourages teachers to adopt a reader's perspective and play back for students how well they are communicating their intentions to an audience" (391). Straub says that in taking a conversational tone with students, "it urges them, in addition, to create themselves as demanding, expectant readers and leads students to look for more from their writing than clear communication alone" (391). Ultimately Straub says that "comments that are shaped into a real conversation blur the lines between writing and reading, and allows teachers to actively model and encourage acts of making meaning" (394).

Following his 1996 article, Straub (1997) presents the results of a survey he conducted with 142 first-year writing students. The survey was designed to gain an understanding of students' perceptions of teacher response, asking students 40 questions about three areas of teacher response: focus, specificity, and mode. Straub found that students preferred response that focused on global matters of content and purpose, with most preferring comments that were open-ended or that provided explanation of how to revise. Although a step in the right direction of gaining student perspectives on response, Straub's survey asked questions that were decontextualized for the students: the survey contained response from 20 different teachers on a sample student paper. The responses Straub gathered from students indicate a preference, but divorced from the students' own writing and classrooms, the survey is limited and offers no perspective on how students would actually use and interpret the response. Straub recognizes response as a "social action, taking place within a proscribed relationship between teacher and student" (96), but looking at response to writing other than one's own takes the social action away from response and makes it an autonomous discourse.

Straub (2000) follows his earlier research on response in a special issue of Assessing Writing devoted to response. In this article, Straub furnishes a case study of a student, "Sarah," from a first-year writing course that he taught. Straub's research attempts to analyze "the principles and goals of the course, defines the role of teacher response within this setting, and critically examines the relationship between response theory and this teacher's responding practice" (23). Straub's work focuses on his own reflective practice as a teacher who is conscious of the ways he responds to student writing. Straub explains that "effective response is a matter of local context, to be worked out according to the demands of particular circumstances" (24). Straub's response style in the article is conversational with the student, which he points out is important since response is a communicative act. Straub lists what two decades of response scholarship have revealed and states that, when responding, teachers should:

Turn your comments into a conversation. Create a dialogue with students on the page. Do not take control over the text; instead of projecting your agenda on student writing and being directive, be facilitative and help students realize their own purposes. Limit the scope of your comments. Limit the number of comments you present. Give priority to global concerns of content, context, and organization before getting (overly) involved with style and correctness. Focus your comments according to the stage of drafting and the relative maturity of the text. Gear your comments to the individual student behind the text. Make frequent praise. Tie your responses to the larger classroom conversations. (23-24)

Straub's work in this article focuses more on being a reflective practitioner and the importance reflection plays in teaching than on understanding the effect his comments have on students. Like Straub's other work, this article provides a picture of how and when to respond but tells us little about how the response is perceived from the student's point of view.

Sandra Murphy (2000), in response to Straub's 2000 article in the same issue of Assessing Writing, asks "is there a student in the room," both in her title and throughout the article. Murphy argues that the response scholarship has paid too little attention to student writers and their perspectives on written response. Taking a sociocultural perspective on response, Murphy contends that "a teacher's response to any individual piece of writing is but one turn in an ongoing exchange with the student writer, and both teacher and student have roles in the interactive process of knowledge construction" (81). Murphy states that "like other kinds of text, teachers' comments are facilitators of intersubjective process and, as Brandt (1990, p.99) says, 'the means by which present-tense literate acts are carried out'" (81). Murphy advocates for research that takes into consideration the student's perception of written comments. By taking a sociocultural approach, Murphy opens the door for research into response as a literate activity, able to find out how and what student writers interpret from teachers' response. Murphy concludes by saying that "teachers and researchers alike need to look more often at what students say about their understandings and their learning and to consider the totality of the interactions through which knowledge is constructed. We need to remember that there is a student in the room" (89, emphasis hers).

Louise Phelps (2000), in the same special issue on response as Straub and Murphy, discusses the limitations of response theory that focuses on teachers' commentary. She says that "studying response rhetoric in isolation is tantamount to describing only the tip of Cyrano's nose: The results are bound to be misleading, fragmentary, and unsatisfying" (93), and provides three reasons why. Phelps says that "1. This approach reduces a complex, dynamic system to one of its parts…2. It's the wrong metonym… Response is most fundamentally reading, not writing… 3. In effect, in shifting attention to a different text (commentary) and a different writer (the teachers), scholars displaced student texts and student writers from the center of our gaze" (93-94, emphasis hers). Phelps credits the metonymic approach to response to a "discipline familiar with methods of text analysis and rhetorical criticism, yet inclined to see agency in terms of writing" (94). She says the missing pieces to response as a pedagogical act are:

1. Taking pedagogical action is overidentified with writing commentaries… 2. Commentaries are often analyzed as if each were an autonomous event, independent of other texts and events of the classroom… 3. The conception of response as a pedagogical transaction is over-individualized… 4. A corollary is that response is more flexible in its audience and more widely addressed than is suggested by the stereotype of commentary written to an individual student about a student's single text or perhaps a portfolio. (94-95, emphasis hers)

Phelps asks how we can judge the value of response without looking at it empirically, and states that the response scholarship has rarely examined the student's perspective. Like Murphy, Phelps questions why we have not looked at what students think. She believes that classroom ethnographies and case studies would provide information on student perception. Ultimately Phelps calls for a pedagogical shift in the ways we read students' texts and a move to "view response less autonomously from other pedagogical action" (105).

Jane Mathison-Fife and Peggy O'Neil (2001) echo Phelps's assertion that empirical research in the area of response lags behind classroom practices. Mathison-Fife and O'Neil argue that by focusing on teachers' comments we are unable to broaden our notion of response, and that most research on response has been directed at answering the problems noted by Sommers (1982) and Brannon and Knoblauch (1982). Mathison-Fife and O'Neil argue that "the prevailing assumption of the research has been that the problems of ineffective response and loss of student textual authority lies in the teacher's written comments; solving these problems, then, means improving and changing the written comments" (302). Embracing a social constructionist approach, they say that "texts are understood in context and more and more teachers recognize the importance of the whole classroom context as a framework for response and move toward including student voices in discussions about writing" (302). Mathison-Fife and O'Neil call for more empirical research into students' interpretations of response, stating that "research that relies solely on the researcher's or teacher's interpretation of a response violates what we know about reading and making meaning" (306). Like Murphy (2000) and Phelps (2000), Mathison-Fife and O'Neil show us the limitations of looking at response only from the teacher's point of view and the limits placed on pedagogical improvement and reflection when students' voices are neglected.

Edgington (2005) reports on a semester-long study of how composition teachers read and respond to student writing. Edgington's research steps outside the normal frame of response scholarship, as Mathison-Fife and O'Neil urge: it focused on eight teachers teaching their own classes in context and utilized protocol analysis, textual analysis, and interviews to discover the teachers' reading strategies. Edgington discovered that when the teachers read student papers, their responding strategies were often influenced by contextual factors based on student-teacher relationships, the classroom, past experiences in teaching writing, and nonacademic influences. Edgington asks that compositionists direct more attention to how teachers read students' papers, thinking about the factors that feed into response. Edgington states that his findings "offer empirical evidence to justify the argument that reading and responding to student writing is not just a textual act; it is a contextual act" (141). Edgington asks that teachers change the way they think about teacher response and instead consider a discussion of "teacher reading and response" (143). In moving away from the written comment alone, Edgington's research sheds light on the ways that teachers read student papers and respond.

Section 2: Response: Communicating With Students

James Zebroski (1989) takes a Bakhtinian approach to reading student writing, listening to the voices in the text and joining the dialogue. Zebroski argues that reading student texts this way is always a social act. To develop this perspective Zebroski uses Bakhtin to discuss the multiple voices (Simon Newman, John Crowe Ransom, Mina Shaughnessy, and Mikhail Bakhtin) that he hears when responding to a student's (Dave's) text. All four of the voices that Zebroski hears while reading Dave's text offer different types of readings and response (errors, structure and meaning, the logic of the writer's choices, and intertextuality). Listening to the voices in the student writing helps Zebroski to discover the hero. Although Zebroski never discloses who the hero is, and ends the piece by asking us who we believe the hero to be, I would like to believe the hero is the dialogue that emerges between teacher and student through the response. Regardless, Zebroski's article offers us multiple ways to read a text and shows the importance of hearing the "twists and turns and confusions of the text that often point to 'voice'" (36).

Carmen Kynard (2006) follows a thread similar to Zebroski's by considering her response style conversational. Zebroski offers multiple voices in his readings to get at the hero in the student's text, while Kynard offers different types of readings and response based upon the type of writing as well as her positionality within the class. Kynard notes that her weekly mantra to students as she collected their papers was "Y'all killin' me in here wit' all this writing" (362). Kynard notes the importance of putting on antics to encourage the students to write more and more. Kynard identifies her response identity as an

African American woman, a sistahgurl as my students have called me and therefore the saliency of every responder's culture/social identity as intimately connected to the registers with which we engage our students, our perceptions of students' political positions, and the inevitable discursive relationships that get constructed between teacher and student by nature of all this writing being passed back and forth among us. (364)

Understanding her own response style and positionality in the class is important for Kynard to meet students on the page and engage their ideas. She tells us that "students and teachers need a language for responding to each other" (366), so her response aids in creating a line of communication. Kynard provides an analysis and rationale for her response style by looking at some of the response she has given to students. She notes that her response style is taxing but explains that there is no short-cut to response. Through a textual relationship with her students, Kynard is able to encourage them "to engage these issues, dig deeper, unravel their social and political implications" (379).

Lester Faigley (1989) analyzes an English test for college admissions and essays from 1929 to try to understand what is valued in writing. Faigley states that research in college writing has done little to reveal what teachers value in student writing. Through statistical analysis, researchers found that the "length of essays is associated with judgments of quality" (395). Stepping outside that frame, Faigley attempts to bring attention to the student voice and the self in student writing by looking at sample essays. In analyzing some student writing, Faigley found that "each judgment of value is made from some notion of value, usually a notion that is widely shared within a culture" (395). Faigley's reading of the student texts reveals that the readers of the students' essays (teachers) were reading the texts for culture, to see whether the students had read the "right" books, rather than for anything else.

Similar to Faigley, Elbow (1993) also talks about value in writing. Elbow says that "we evaluate every time we write a comment on a paper or have a conversation about its value" (188). Elbow states that he is troubled by ranking and makes a call for more evaluation instead. Elbow sees three problems with ranking: "(1) First the unreliability. (2) Ranking or grading is woefully uncommunicative. (3) Ranking leads students to get so hung up on these oversimple quantitative verdicts that they care more about the scores than about learning—more about the grade we put on the paper than about the comment we have written on it" (188-190). Elbow challenges us to stop putting numbers on papers and to read students' texts, providing evaluative comments that help students grow as writers. Elbow uses portfolios to facilitate evaluative comments without attaching grades. Elbow says he is not trying to rid the classroom of evaluation, but rather trying to "emphasize it, enhance it" (196). By deemphasizing grades, Elbow believes, we will encourage students to read our comments and focus more on writing than on a grade.

Patricia Carini (1994) also looks at response as an act of reading to listen for voice. Carini begins by analyzing a seventy-five-year-old letter from her father to his sister Bess to construct a personal connection and framework through which to respond. Carini discusses various aspects of her father and his family as she ponders the letter, explaining that her purpose for this discussion arose from being asked to write about writing assessment. She says "the letter is my needed reminder of both the intimacy and bigness of writing. I also had to start by writing myself—writing in response to writing" (33). She contends that "writing myself is the reminder needed of the writer's leap of faith that there are other bodies out there somewhere: eyes, ears, voices and hands, ready to catch my thought, to take my meaning, to meet it with their own, to fall in with or quarrel with it, to speak (write) back to it (to me)" (33). Carini talks about writing as a personal act deserving of our attention. She argues that "a lot of talk about language and writing, and about language and writing assessment, stresses efficient, correct, useful standard communication" (44, emphasis hers), language that lends itself to measurement. Carini challenges us not to think of writing as a means to an end, stating that "instrumentality has nothing to do with the excitement, the pleasure, the passionate feeling, the rushes of insight that the written can stir in the reader—and in the writer, writing" (45). In her non-traditional reading, Carini shows us that writing is a highly personal communicative act deserving of an equally personal response.

Chris Anson (1989), in an edited collection on response, also points to response as communicative, opening with the story of Melville's "Bartleby, the Scrivener." Anson explains Bartleby's position in the story and how, eventually turning "his back on everyone around him, Bartleby becomes a human completely alone in the midst of humanity" (1). Bartleby had worked in a dead letter office, where his job was "opening, reading, and preparing to burn hundreds of undeliverable letters—'errands of life' destined to die in the flames without ever reaching their intended audiences" (1). Anson points out that because Bartleby's job had been to read letters for which there could be no response, he simply "becomes a man who himself loses the will to respond" (1). This is a particularly insightful look at Bartleby's position, and Anson reminds us through Bartleby that "the process of response… is so fundamental to human interaction that when it is short-circuited, whether by accident or design, the result can hardly be interpreted as anything but a loss of humanity" (1). Anson reminds us that response is ultimately a communicative act for both readers and writers and cautions that

Response can generate its own special anxieties: misguided expectations as their private creations struggling with the public nature of discourse; conflicts of ego as the instinct to present themselves at their best battles with the fundamental need to share their doubts and imperfections at the very moment when they are most vulnerable. (2)

Melanie Sperling and Sarah Warshauer Freedman (1987) report on an ethnographic case study investigating the "uncanny persistence in students to misunderstand the written response they receive on their papers" (344). For seven weeks, Sperling and Freedman studied one student, "Lisa," in a ninth-grade English class to understand the complexities of teachers' response. They describe the teacher, "Mr. Peterson," as a teacher who employed various techniques to get students to think about writing in sophisticated ways so as to convey and capture readers' interest. All of Mr. Peterson's activities in class became "a coherent blueprint for his students' growth as learners and as writers" (347). The student, Lisa, had been identified as a high-achieving student whose "scores on a standardized test of basic skills ranked in the ninetieth percentile range; her grades the previous semester were all A's" (347-348). From the study, Sperling and Freedman observed teacher response that referenced classroom discussions and the values Mr. Peterson held about writing, and other response that had no in-class referent. In various pieces of writing they discovered that between 14% and 27% of Mr. Peterson's comments were not in reference to ideas that surfaced in class (352). The comments that had no classroom reference were the types of comments that Sperling and Freedman believed were capable of creating the "knotty points where misunderstanding might be considerable" (353). In looking at Lisa's revision, Sperling and Freedman found that Lisa "shows herself to be a skillful follower of directions" (354), with no problems processing comments that matched Mr. Peterson's ideal text. By using her past experiences with Mr. Peterson's response, Lisa was able to produce more of the same kinds of successful prose on future drafts. Ultimately, Sperling and Freedman determine that "a good girl writes like a good girl" (357), referring to students making changes in their texts based on things the teacher "likes." Sperling and Freedman's study demonstrates how students and teachers continually misunderstand each other through written feedback, in part because of the power divide between them.

In a book on response, Sarah Warshauer Freedman (1987) uses multiple methods to investigate response in the classroom. Freedman's study had two parts: first, a national survey designed to uncover how the most successful teachers in the nation respond to their students' writing, and second, an ethnographic study of two successful ninth-grade teachers in the San Francisco Bay Area. The survey results were used to identify teachers for the study as well as to provide her with information to focus the ethnographic observations. Freedman defines response as more than just written comments on student writing. She says that "response, as defined in this study, includes other activities as well and may not differ much from what we include as the teaching of writing. Response includes all reaction to writing, formal or informal, written or oral, from teacher or peer, to a draft or final version" (5). Freedman's study had three main aims: "(1) to understand better how response can function in helping students to improve their writing, (2) to define more precisely the concept of response, in the hopes of enriching the traditional views and definitions, and (3) to understand how successful teachers accomplish response, and to learn what they do and do not know about response to student writing" (9). Although the teachers in the survey as well as in the ethnographic study held different philosophies of teaching, many of their goals and outcomes for student writing were similar. From this study Freedman found that successful response (1) promotes student ownership of writing, (2) communicates high expectations to all students, and (3) offers students a great deal of assistance during the writing process. These three effective response strategies, though certainly not exhaustive, provide a foundational understanding of effective response.

Response research has moved past describing textual aspects of response (Sommers, 1982; Connors and Lunsford, 1993; Straub, 1996; Straub, 2000), revising our theoretical understanding of response as communicative (Anson, 1989; Carini, 1994; Kynard, 2006), but the focus of study is still largely one-sided. The exceptions in the response scholarship are Freedman (1987) and Sperling and Freedman (1987), who actually look at the student's perspective. Current work in Rhetoric and Composition has mostly focused on technological aspects of response (Anson, 1997; Blythe, 2001; Batt and Wilson, 2008), still not addressing the elephant in the room: the students (Murphy, 2000).

Section 3: Formative Response

In 1967, Michael Scriven coined the terms "formative" and "summative" assessment in a book chapter in a collection on curriculum evaluation. Scriven's chapter was geared in part toward addressing Lee Cronbach's (1963) earlier article Course Improvement Through Evaluation. Scriven states that "the main focus of this paper is on curricular evaluation but almost all the points made transfer immediately to other kinds of evaluation" (39). Scriven argues that "evaluation is itself a methodological activity which is essentially similar whether we are trying to evaluate coffee machines or teaching machines, plans for a house or plans for curriculum" (40). Scriven explains that formative assessment is about closing gaps between what is taught and what is learned in the curriculum. Curriculum assessment provides important information needed in making decisions about educational programs, and formative assessment is a way to obtain information to close any gaps or lags between goals and outcomes. Although Scriven's work is rarely referenced in the scholarship on classroom assessment, his ideas on formative assessment align with a pedagogical shift to delay grades and provide feedback rather than grades.

Sadler's (1989) article looks at formative feedback and the lack of a general theory of formative assessment. Sadler attempts to create a usable theory of formative assessment relevant to a broad spectrum of learning outcomes across a variety of subjects. Sadler points out that his definition and usage of formative assessment differ from the traditional definition in the educational research scholarship because that definition is too narrow to be of much use. Discontented with inconsistencies in the formative assessment scholarship, Sadler offers ideas for how to use formative assessment to enhance teaching and learning. He tells us that formative assessment "is concerned with how judgments about the quality of student response (performances, pieces, or works) can be used to shape and improve the student's competence by short-circuiting the randomness and inefficiency of trial-and-error learning" (120). In his article, Sadler defines the difference between formative and summative assessment. Sadler says that

summative contrasts with formative assessment in that it is concerned with summing up or summarizing the achievement status of a student, and is geared towards reporting at the end of a course of study especially for purposes of certification. It is essentially passive and does not normally have immediate impact on learning, although it often influences decisions which may have profound educational and personal consequences for the student. The primary distinction between formative and summative assessment relates to purpose and effect, not to timing. (120, emphasis added)

In his distinction between formative and summative assessment, Sadler notes that summative assessment is a passive activity whose effects on learning are not immediate and whose goals and aims do not match formative purposes. Sadler argues that summative assessment may actually be damaging and counterproductive to learning because of "methods of grading which emphasize rank or comparisons among students… Assuming that sorting and stratifying learners is not the main purpose of education and training, the objective for each student is to acquire expertise in some absolute sense, not merely to surpass other students" (127). Sadler's notion of formative assessment is one that promotes student self-assessment, allowing students more authority and agency in assessment while also enabling them to hold their own work to a higher standard.

Black and Wiliam (1998) reviewed 250 articles on formative assessment from several countries. In their review of the scholarship, they found that formative assessment improves learning and is effective in almost any educational setting. Black and Wiliam's research also reveals that grades are less effective than formative feedback and can be counterproductive to teaching and learning. Black and Wiliam make the same argument as Sadler (1989) that "formative assessment does not have a tightly defined and widely accepted meaning" (1), so one of their purposes was to look for effective uses of formative assessment that could inform teaching and learning activities. Through their extensive review of formative assessment across a wide range of courses and countries, Black and Wiliam determined that formative assessment improves learning and that considerable gains in student achievement were observed where formative assessment was used.

Black and Wiliam (2004) revisit the ideas from their 1998 review of 250 articles on formative assessment to provide more detail for teachers looking to improve their formative assessment practices. Black and Wiliam reaffirm their earlier notion that formative assessment's prime purpose is to promote student learning, and that if assessment provides information that allows students or teachers to assess themselves, it becomes possible to modify teaching and learning activities. From working with teachers as well as from their review of the literature, Black and Wiliam found it important to define formative assessment precisely, since there is much disagreement about what formative assessment is. They note that some teachers and researchers thought that "it refers to any assessment conducted by teachers and, in particular, that giving a test every week and telling the students their marks constitutes formative assessment" (22). They follow that assessment can only be formative if "some learning action follows from the outcomes"; otherwise, "that practice is merely frequent summative assessment" (22). Another misconception they debunk regards portfolio assessment when it is "developed with the aim of replacing or supplementing the results produced by externally imposed tests" (22). Black and Wiliam state that formative assessment only occurs when "there is active feedback that enables students to change and improve their work as they build their portfolios" (22). Formative feedback is only formative if students are able to use the feedback for improvement. Black and Wiliam state that "feedback to learners should both assess their current achievement and indicate the next steps for their learning" (27). This notion of formative assessment providing guidance and acting as a map is echoed in Lorrie Shepard's (2004) book-length review of the formative assessment scholarship.

What is particularly interesting in Black and Wiliam's article is that for part of their project they interviewed students in three schools to investigate their reactions to how their exercise books were marked. What the students told them they wanted from their teachers was "1) not to use red pen (students felt that it ruined their work); 2) to write legibly so that the comments could be read; and 3) to write statements that could be understood" (28). At the beginning of the project, Black and Wiliam found that teachers at these schools were writing things like "good" or "well done" on papers, but after discussing formative assessment with the teachers, the teachers developed a new style of response that provided more direction for what students needed to do. They surmised that students might react negatively to not receiving scores, but through their research they found that students realized the comments would help their future work. Black and Wiliam believe that numerical scores cannot help a student improve their work, and that with grades an opportunity to enhance learning is therefore lost. In their study they also found that with formative feedback the classroom culture began to change, and that the feedback worked to open a dialogue between teacher and students. With formative feedback the students were provided the scaffolding they needed to learn how to assess themselves.

Black and Wiliam explain that formative assessment is "about developing interaction between teachers and learners, about evoking evidence from such interactions and using it as feedback, about learners' capacity to make intelligent use of such feedback in order to improve" (36). The interaction Black and Wiliam describe is based on their review of 250 published studies of formative assessment as well as empirically collected classroom data. This interaction is important for my work because it is the same type of interaction that Heath observes in "literacy events." Black and Wiliam conclude by discussing how the interactions between teachers and students enhance the learning environment and create a different type of culture than summative assessment does. To use Elbow and Belanoff's (1997) metaphor for assessment, that "teaching needs to be the dog that wags the tail of assessment rather than vice versa" (32), formative assessment is the dog that wags the tail, putting teachers and students at the forefront of assessment.

Lorrie Shepard (2004) offers an analysis of formative assessment as a way to enhance teaching and learning, looking historically at measurement and the classroom to provide a comprehensive view of formative assessment in the classroom. Stepping away from assessment for accountability, Shepard proposes a new framework for viewing validity and reliability in the classroom. Shepard contends that formative assessment "is defined as assessment carried out during the instructional process for the purpose of improving teaching and learning," as opposed to summative, which comprises "assessments carried out at the end of an instructional unit or course of study for the purpose of giving grades or otherwise certifying student proficiency" (627). Like Black and Wiliam, and Sadler, Shepard argues that "formative assessment, effectively implemented, can do as much or more to improve student achievement than any of the most powerful instructional interventions, intensive reading instruction, one-on-one tutoring, and the like" (627). While the majority of teachers have limited knowledge of formative assessment and continue using summative assessment as their primary form of assessment/grading, Shepard offers an alternative theory of formative assessment in which feedback acts as a clear guide to close the gap between learning and outcomes. Channeling a sociocultural model of learning and Vygotsky's (1978) zone of proximal development (ZPD), Shepard proposes assessment that answers three questions. Shepard quotes Atkin, Black, and Coffey (2001) in framing formative assessment's goals as answering these questions: "where are you trying to go?", "where are you now?", "how can you get there?" (628). With formative assessment's goals as the guide that helps students answer these questions in their own work, formative assessment acts as a map or scaffolding that allows feedback to enhance learning. Ultimately Shepard argues that formative assessment helps to facilitate learning by tapping into sociocultural models of learning and providing effective feedback for students.

Since the formative assessment scholarship is vast and holds no direct consensus on what formative assessment is (Sadler, 1989; Black and Wiliam, 1998), this review of the literature is enough to show its direct connections to the literacy scholarship: the data-driven definitions of formative assessment (Black and Wiliam, 1998) and of literacy (Heath, 1982) are both about interactions. The formative assessment scholarship represented in this literature review helps to build a foundation and a bridge between literacy and assessment.

Literacy

Brian Street's (1984) book, "Literacy in Theory and Practice," is the result of anthropological fieldwork he carried out in the 1970s in parts of Iran. Using ethnographic methods, Street gathered data that provide a detailed account of different literate practices in various Iranian villages. Through his research, Street explains two models of literacy: the "autonomous" model and the "ideological" model. According to Street, the "autonomous" model of literacy is based on the essay text and "generalize[s] broadly from what is in fact a narrow, cultural-specific literacy practice" (1). Street says this model "assumes a single direction in which literacy development can be traced, and associates it with 'progress', 'civilisation'[sic], individual liberty and social mobility" (2). According to Street, in looking at literacy this way "it isolates literacy as an independent variable and then claims to be able to study its consequences" (2). The consequences Street sees in the "autonomous" model are often represented as economic growth or cognitive skills. Conversely, the "ideological" model of literacy "stresses the significance of the socialization process in the construction of the meaning of literacy for participants and is therefore concerned with the general social institutions through which this process takes place and not just the explicit 'educational' ones" (2). The "ideological" model recognizes reading and writing as socially situated practices that are culturally embedded in communities. Street's ideas on "autonomous" and "ideological" models of literacy contend with earlier notions that believed literacy to be an autonomous, transportable activity. Early literacy scholars (Goody and Watt, 1968; Goody, 1986; Ong, 1986) argued that a "Great Divide" existed between societies with literacy and those without. This "Great Divide" created a dichotomy between illiterate and literate, or primitive and civilized, groups of people. Street's work debunks these early theories and shows that literacy is a socially situated activity that can only be understood within the time, place, and culture in which the literate practice is being used. Using ethnographic data, Street outlines the ways that literacy functions in various societies to accomplish numerous ends for the individuals living there. By arguing for an "ideological" model of literacy, Street helped to shed light on the consequences of "autonomous" models of literacy.

Shirley Brice Heath (1982a, 1982b, 1983) contributes much to our understanding of literacy and the ways it is transmitted and used in various communities. Heath's book "Ways with Words" is the result of a ten-year longitudinal study begun in the mid-1970s in the Piedmont Carolinas. I start the review of Heath's work with her book, which does not reference "literacy events," rather than with the two articles from 1982 that explicitly name and define "literacy events." The rationale for starting with the book is that it provides the richest understanding of what literacy was in the communities Heath lived among. In her book, Heath not only defines literacy in the communities but, more importantly, shows us how it is used. Heath takes an ethnographic approach to studying literacy in "Roadville," a white working-class community many of whose residents work in the local textile mills, and "Trackton," a black working-class community whose older generations grew up farming but whose current generation also makes its livelihood working at the mills. By living among the members of these two distinct communities, Heath is able to see how language develops and is transmitted among members. Heath also uncovers the deep cultural differences between the two communities and how they differ from the "mainstream" literacy practices of the middle-class blacks and whites who hold power in various places in the communities. In looking at the day-to-day lives of the "Roadville" and "Trackton" residents, Heath provides us with the most comprehensive look at what literacy is and how it is used.

Prior to her book "Ways with Words," Heath published the article Protean Shapes in Literacy Events: Ever-Shifting Oral and Literate Traditions (1982b), which discusses the same research on the "Roadville" and "Trackton" residents. In this article Heath develops the notion of the "literacy event." According to Heath,

the LITERACY EVENT is a conceptual tool useful in examining within particular communities of modern society the actual forms and functions of oral and literate traditions and co-existing relationships between spoken and written language. A literacy event is any occasion in which a piece of writing is integral to the nature of participants' interactions and their interpretative process. (445, capitalization hers)

This data-driven definition of literacy has been used by many later researchers as a focusing point for studying literacy in various communities. Heath's work shows us the multiple ways that literacy operates as a social act and how it functions for its users in day-to-day life. In this article Heath provides three relevant examples of "literacy events" from her observations: the first takes place at a church congregation meeting where everyone must read regulations for applying for a loan or grant; the second involves a Girl Scout passing out information about selling cookies and fundraising (446); and the third shows a group of women sitting on a porch discussing applications and documents needed for sending children to school (450). Heath's work and the "literacy event" are a focusing point for many other literacy studies because her ten-year study in the Piedmont Carolinas gives us a full account of how literacy functions and for what purposes.

Heath's (1982a) "What no bedtime story means: Narrative skills at home and school" reports on three different communities: "Roadville" and "Trackton," about which I have already provided some information, along with a third, "Maintown." "Maintown" is an area of middle-class individuals whose literacy practices represent "mainstream" literacy, which focuses on school-oriented culture. In this study Heath focuses on bedtime stories as "literacy events" to highlight the types of reading that go on at home in the three distinct communities and their relationship to what occurs at school. Through her research Heath discovers that "Maintown" parents model an initiation-reply-evaluation (IRE) sequence when reading with their children. This IRE pattern is similar to that of traditional education in schools, which grooms the "mainstream" children for schooling. "Maintown" parents also tend to extend the content or habits of "literacy events" beyond bookreading. Reading, writing, and talking in "Maintown" homes are viewed differently than in the homes of "Roadville" and "Trackton." In "Roadville," print does not have the same place in the day-to-day lives of residents that it does in "Maintown." For example, people in "Roadville" normally do not use written directions for cooking or for initiating the playing of games. Heath tells us that written instructions are seldom utilized, and when they are, they are often loosely observed. In "Roadville" there is much less discussion of how to do things than in "Maintown" homes. And in "Trackton" there are no bedtime stories and few occasions where children are read to or engaged in a literate activity. In "Trackton" homes children are engaged in different types of social interactions, most of which are oral and have their own distinct set of rules. Heath tells us that "Roadville and Trackton view children's learning of language from two radically different perspectives: in Trackton, children 'learn to talk,' in Roadville, adults 'teach them how to talk'" (57). In comparing these three communities Heath shows "the inadequacy of the prevalent dichotomy between oral and literate traditions, and points also to the inadequacy of unilineal models of child language development and dichotomies between types of cognitive styles" (49). Heath's research shows us that "literacy events" must be understood and studied in their natural context in order to see the larger sociocultural patterns that they reflect. Only through context and community will we be able to see how literacy functions for individuals.

In Deborah Brandt's (2001) book "Literacy in American Lives," Brandt uses interview data and extended case studies drawn from over 80 individuals born between 1895 and 1985 to share participants' early memories of learning how to read and write.17 In this book and other research, Brandt uses the narratives of these individuals to discuss the economics of literacy in their lives. Through this research Brandt defines literacy sponsors as "agents, local or distant, concrete or abstract, who enable, support, teach, and model, as well as recruit, regulate, suppress, or withhold, literacy—and gain advantage of it in some ways" (19). She follows with "sponsors are a tangible reminder that literacy learning throughout history has required permission, sanction, assistance, coercion, or, at minimum, contact with existing trade routes" (19). Brandt largely examines literacy in terms of the financial payout that literacy grants certain individuals and how the rapid restructuring of literacy and literacy standards affects them. In reporting and analyzing the stories of these different individuals, we are able to see how people learned to read and write and what sorts of sponsorship their literacy had.

17 Brandt published an earlier article, Sponsors of Literacy (1998), prior to the book, focusing specifically on "sponsorship," but it is not included in the literature review because much of the material from the 1998 article, along with the same data set, was used for the book. The book shows a more robust understanding of literacy and sponsorship than the earlier article, so the article was intentionally excluded from the review of literature.

Scribner and Cole's (1981) article on literacy comes from an empirical study of the literate practices of a West African people, the Vai, and the three different writing systems that the Vai use for various purposes. The information Scribner and Cole report comes from ethnographic observation along with interviews of roughly seven hundred adult men and women who were literate in the Vai script. According to Scribner and Cole, the Vai are one of the few cultures to have independently created a script, of some two hundred characters, that has remained active for a century and a half. Along with the Vai script, some members are literate in Arabic and/or English. The three types of literacy all had different roles and therefore carried different status among the Vai: English was the official script of politics and economic operations, Arabic was used for religious practice and learning, and the Vai script served personal and public needs in the village, allowing communication among people living in different areas. Literacy among the Vai was a highly social activity, with 78% of those interviewed reporting that they had never sent a written message to, or received one from, a stranger. The Vai all used literacy for their own purposes, but one thing Scribner and Cole gathered from their fieldwork was that there were no large differences between literate and illiterate Vai in reading and writing in terms of "'cognitive restructuring' that control intellectual performance in all domains" (136). What Scribner and Cole found was some change in cognition between literate and illiterate Vai, but nothing that would warrant the "great divide" that earlier anthropologists suggested. Scribner and Cole's work helped to debunk much of the autonomous model of literacy and to show that literacy is complex and cannot be traced or measured for certain tasks.

Maddox, Zumbo, Tay-Lin, and Qu (2014) use a mixed-methods approach to gain a richer sense of the ecology of testing. For this study, participants were given a literacy assessment created by The United Nations Educational, Scientific and Cultural Organization (UNESCO) Literacy Assessment and Monitoring Programme (LAMP). In this article Maddox et al. analyze how respondents to the literacy test "cope with realistic test item content that has varying degrees of relevance to their cultural context" (292). Maddox et al. analyzed "responses to three test items (Timed Parking, the Gas Gauge, and the Mongolian Camel) using ethnographic transcripts and differential item functioning (DIF) analysis" (292). Maddox et al. first report survey results (DIF analysis) from roughly 4,000 randomly selected adults. Ethnographic observation was also used with Mongolian camel herders in the Gobi desert to see how a close analysis of literacy and the people compared with the DIF analysis. The ethnographic observations occurred in people's homes, and Maddox et al. report recording twenty-five assessment events.18 Maddox et al. recorded "how the respondent[s] engaged with assessment documents and tasks" (296). Ethnographic methods proved useful for the team of researchers because they allowed them to "delve deeper into cases where the ethnographic and statistical accounts offered contradictory findings" (297). Ultimately, the team found through their research that some testing items were a "cultural misfit (not enough context)" (298) while others presented "contextual confounders (too much context)" (299), and they conclude by stating that "when there was a dissonance between textual (test item) and contextual knowledge, the test privileged the text-based information" (306).

18 Maddox et al.'s use of "assessment event" is vague and represents only the interviews that revolved around assessment, whereas the way I use "assessment events" in my dissertation links to Heath's (1982) notion of "literacy events"; my assessment events are in fact "literacy events" focused on assessment. Although on some level Maddox et al.'s definition and mine are similar, since both involve interaction, it is important that I distinguish their use of the "assessment event" from where mine comes from and the theories linked to it. The Maddox et al. assessment event is less robust because of its disconnect from the larger literacy scholarship that has built on Heath's work and ideas.

Along with the Maddox et al. article, Maddox published three other recent articles (2014, 2015a, 2015b), all on the UNESCO LAMP literacy test. All the articles report data and findings similar to the Maddox et al. article on the testing of Mongolian camel herders, which conclusively shows that assessment is not transportable and that the local context outweighs the textual. I end the literature review with Maddox's work because of its relevance to my own research and its connection to literacy. Although Maddox's work does not utilize the same literacy scholarship as I do, the findings of his research all point to and help us understand the contextual nature of assessment and participants' understandings of assessment.

Literacy Studies is a vast body of scholarship; since the purpose of my study is to integrate theories of literacy into assessment, the work I present in this review is meant to show the prevalent theories that feed into my work and analysis. Following my argument from Chapter 1, that work connecting literacy with portfolios and assessment has mostly been concerned with the sample or with the development of literacy skills and less with the ways that students read and understand assessment, the literacy scholarship I have selected for this review helps us to build a foundation from which to begin understanding teachers' response as a literate activity. My study and research focus on the types of interactions that students have with assessment, so the body of literacy scholarship I sample acts as a foundational account of theory to integrate with portfolios and response.

Conclusion

In this literature review, I have provided a brief summary of each section, and in conclusion I attempt to build the bridge between the four bodies of scholarship within which my study resides. The purpose of this chapter was to review literature that helps to answer my research questions: (1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading? (2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote? (3) How does teacher response and assessment act as a sponsor of literacy? (4) Are there students in the class who do not interact with teachers' response, and if so, why?

Portfolios can be and have been used in various ways, but in order to align with contemporary social theories of learning and literacy we must build upon the response scholarship by taking into consideration aspects of formative assessment, focusing specifically on interactions (Sadler, 1989; Black and Wiliam, 1998). Since literacy and portfolios have both been termed protean concepts (Heath, 1989; Belanoff, 1994; Murphy, 1994a), it only makes sense to make assessment a literate activity that encourages interactions with students through response. In encouraging interactions with our students, we build upon the notion of response as a socially communicative act (Anson, 1989; Carini, 1994; Kynard, 2006), so it is imperative that we locate the student in the room (Murphy, 2000) to see the types of sponsorship (Brandt, 2001) that assessment promotes. In ignoring the student's voice and interpretation of response, we are missing out on one of the most important aspects of response: its reception. To have an "ideological" exchange of literacy (Street, 1984), looking at the socialization process and the construction of the meaning of response, it is important that we gain an understanding of how and what students interpret from response. Interactions are key in both "literacy events" and formative assessment, so it is important that we look at and listen to students' voices. The dark area in much of the assessment, response, and portfolio scholarship comes from the fact that we do not look at assessment as a literate activity or engage students in the literate activity of assessment. In the next chapter, I explain the methods and design of my study, which helped to capture data that build a better understanding of how portfolios and response can be used to align with social theories of learning and literacy, by making assessment and response an area of study and by asking students for their readings of response.

Chapter 3: Methods

Overview

This study addresses the question of how students perceive and interact with assessment in portfolio classes with delayed grading, focusing on the literate activity of reading and interpreting teachers' response. With a plethora of portfolio scholarship available along with many germinal texts on response, my study investigates various forms of data to attain a broad view of how portfolio assessment and response are used, in order to establish a more coherent theory of portfolio assessment. The study answers the following research questions:

(1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading?

(2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote?

(3) How does teacher response and assessment act as a sponsor of literacy?

(4) Are there students in the class who do not interact with teachers' response, and if so, why?

Since the focus of this study is the literate activity of students reading teachers' response, my study relies heavily on literacy studies methods. Literacy studies research frequently uses multiple and mixed methods of data collection (Heath and Street, 2008; Calfee and Sperling, 2010) to triangulate data, allowing researchers to view the problem from various angles and strengthening their analysis. Like much other literacy studies research, this study utilizes multiple forms of data to investigate how assessment/response are communicated in portfolio classes. The methods used for data collection were a survey, class observations, field notes, audio recordings, student interviews, and document collection. Although some forms of data (primarily interviews) were more crucial to this study than others, all of these forms of data collection helped me understand how assessment functioned in the classes.

In line with recent calls for transparency in research, methods, and methodology (Haswell, 2005; Smagorinsky, 2008; Huot & Williamson, 2012), this chapter serves as the conceptual epicenter (Smagorinsky, 2008) of the research. Following Smagorinsky’s (2008) argument that the Methods section should be a detailed account of the research conducted, I provide a detailed account of the methods and methodology, furnishing a transparent account of the research to benefit both me as the researcher and the audience.

Survey

One of the foci of my study was to gauge whether and how formative response was used in delayed grading portfolio classes, so it was important to locate classroom sites that fit the profile. In the Spring of 2014, I designed a survey for teachers who were using portfolios for classroom assessment. The survey was initially created for a research presentation intended for the 2015 CCCC’s, looking at the types of epistemologies teachers were employing through classroom portfolio assessment. My research questions guiding this survey were:

 How/Why are teachers using portfolios?
 What types of epistemologies are teachers employing in their use of classroom portfolio assessment?

I started the survey on the Survey Monkey platform, but later moved it to Qualtrics because of various affordances that Qualtrics offered that Survey Monkey did not. However, in moving the survey, nothing changed other than the platform used for collecting data. Piloting survey research has been thought to aid in the overall delivery and success of the instrument (van Teijlingen & Hundley, 2001; van Teijlingen, Rennie, Hundley, & Graham, 2001), so by piloting the survey, I was able to test how well the tool was designed to collect the information I was interested in gathering. The initial participants of the survey were six PhD graduate students with whom I was taking a methods class, as well as two professors at the university with rhetoric and composition PhDs. Many of the graduate students in the methods class piloted their research on each other to refine their instruments as well as to give presentations at the end of the semester. van Teijlingen, Rennie, Hundley, and Graham (2001) stress the importance of discussing pilot data because it aids in the researcher’s overall understanding of the instrument, allowing them to move forward collecting data. In the Spring of 2014, my pilot research showed that, with some minor editing and the addition of some more refined demographic questions, my survey would be ready to go. After I received IRB approval from my institution in the Fall of 2014 (see Appendix C & D for forms), I sent the survey out on the WPA listserv, as well as to a handful of colleges and universities where I had either taught, studied, or knew someone who could help distribute the survey to colleagues. The survey, which ran for two semesters, yielded 122 participants from a variety of institutions (public, private, two-year, and four-year) with a variety of educational backgrounds, experiences, and positions in their departments, as can be seen in Figures 3, 4, and 5.


Figure 3: College/University Demographics
[Bar chart: % of participants by type of institution (public, private, university, community college, grants doctorates)]

Figure 4: Participant Position in the Department Demographics
[Bar chart: % using portfolios by departmental position (tenure track, non-tenure track, adjunct, teaching assistant/graduate assistant, writing program administrator or program…)]

Figure 5: Participants Degrees Held Demographics
[Bar chart: degrees held (PhD, MA, BA) by field (Rhetoric and Composition, Literature, Linguistics, In Progress, Other not listed)]

Although the survey was limited in its number of participants, I was able to gather enough data to move forward with my research. While I cannot account for the number of surveys that were sent out, I was able to gather 122 responses from a variety of different individuals. In rationalizing my decision to stop the survey at 122 participants, it is useful to think about how surveys have recently become a ubiquitous tool, with ready-made templates and websites that help in the creation of the survey. Surveys are useful because they allow researchers the ability to sample large and diverse populations (Babbie, 1990; Fink, 2003; Andrews, Nonnecke, & Preece, 2003; Baumann & Bason, 2004; Fowler, 2013; Lauer, McLeod, & Blythe, 2013), but because of this ease of use and distribution we are suffering from survey fatigue.

Regardless, from the survey results I gathered, I was able to glimpse the different ways that teachers were using portfolios for assessment purposes. The questions in the survey (see Appendix E) were geared towards extrapolating the theories guiding classroom portfolio assessment and response to writing. The results of the survey revealed that different types of theories feed into portfolio assessment and that not all portfolios and response are used in a way where delayed grading and formative response align with contemporary social theories of learning and literacy. There are perhaps multiple ways that portfolios can be used to align with contemporary social theories of learning, but from the survey I found that not all portfolio usage promoted interactions between students and the text.

In recent discussions of allowing for research transparency and clearly defined methods (Haswell, 2005; Smagorinsky, 2008; Williamson & Huot, 2012), it is fitting to note that this primary survey was initially designed to extrapolate the theories behind portfolio assessment, which it did, but its secondary purpose came in the form of helping to identify research participants and sites for my dissertation research. Many of the survey results clarified that not all assessment lays out a clear path for students, nor do teachers rely on portfolio scholarship or theory for their implementation. In order for students to make the most of assessment in an ideological model, tapping into contemporary social theories of learning, students need assessment that acts as a map or a guide, allowing an interaction with the text and the instructor.

Assessment in this sense takes the shape of formative response as described by Shepard (2006):

 Where are you trying to go?
 Where are you now?
 How can you get there? (p. 628)

Although there is no right or wrong way to use portfolios or response, formative assessment and delayed grading offer us an opportunity to engage students in the literate activity of assessment.

Huot (2002) argues that in order for assessment to lead to better writing, students first need to be able to make sense of the assessment and then engage with it. In following Shepard’s notion of formative response, we can see that when response acts as a map, it has the capacity to enhance a student’s ability to understand and use response to strengthen their writing. To test my hypothesis about formative assessment, portfolios, and “literacy events,” I needed teachers whose pedagogical practices fit the profile of an ideological version of assessment/response. Through the survey, two such potential sites emerged whose teaching practices and ideas of portfolios matched the profile of classes/teachers necessary for this study.

Constructing A Model of Potential Research Sites

Writing assessment’s history has been tied to standardized tests and psychometrics (Camp, 1993; Williamson, 1994; Huot, 2002; Elliot, 2005), whose goals and aims are not always the same as those of localized schools and teachers. Alongside writing assessment’s historical roots in standardization, there has also been a distrust of teachers (Elbow & Belanoff, 1997; Apple, 1999) and a disconnect between teachers and writing assessment that must be addressed.

The top down approach to assessment from outside forces, as well as the technical jargon of outside specialists, has kept teachers out of the loop for many assessment practices, making assessment a mystical process. Huot (2002) lays out a new theory and practice for writing assessment in which assessments need to be “site-based,” “locally-controlled,” “context-sensitive,” “rhetorically-based,” and “accessible.” In Huot’s model for writing assessment, teachers regain the power to make important decisions at their institutions. Huot’s theoretical position is written from a WPA’s perspective on programmatic assessment, but these same concepts can also be expanded into a classroom context, lining up with contemporary theories of literacy and learning. Since assessment is a literate practice, it is important that it be looked at and controlled locally, because assessment is not a transportable process (Maddox et al., 2014; Maddox, 2014; Maddox, 2015; Greve, Morris, and Huot, forthcoming).

Huot’s theory of writing assessment, although unpublished at the time, was enacted earlier in William Smith’s “expert reader” design, a system for writing placement that utilizes the expertise of local teachers to make placement decisions. Because my study was trying to answer questions about engaging students in a literate activity of assessment, I needed experienced teachers familiar and comfortable with their own practice. For this reason I decided to borrow and modify Smith’s “expert reader” model to account for “expert teachers” using portfolios and delayed grading.

Smith’s (1993, 1994) research on holistic scoring and placement, from an eight-year study at the University of Pittsburgh, helped form the basic structure and theory for constructing my own “expert teachers.” Although Smith’s “experts” were for placement, many of the principles are applicable to other aspects of assessment. Smith’s research discovered that teachers who had most recently taught a course were able to more accurately (validly) score or place students into classes than raters who had not taught that course recently. Smith’s research highlights teachers’ knowledge of both the students and the curriculum, which was useful for my project’s construction of experts, highlighting the power behind teachers’ knowledge as professionals. Smith’s raters varied based upon the interested pool of applicants, but one thing remained consistent: raters had regularly taught the composition courses they were assessing and had taken composition theory course(s) (1993, p. 168).

Research on holistic scoring also supports the importance of selecting raters with composition theory knowledge (Huot, 1993; Pula and Huot, 1993). Teachers familiar with theories of composition make more adequate (valid) decisions in scoring/ranking/placing students because of their knowledge of and training with writing. The same applied for my “expert teachers” profile; I wanted teachers who were composition teachers and who knew and understood theories of composition. On top of these criteria, I was also interested in finding teachers who were knowledgeable about portfolio and assessment theories. In piloting my initial portfolio survey extracting teaching pedagogies and epistemologies, I discovered that many of the graduate students who took the survey were inexperienced with using portfolios, and as a result many of the ways they were assessing portfolios might contradict what they knew and believed from composition theory.19 A good example of the profile of teacher(s) I was looking for can be seen in Steven Smith’s (2002) book chapter “Why Use Portfolio? One Teacher’s Response.” In this chapter Smith explains the two different types of portfolios he taught with, one for his class and the other for a state-mandated assessment.

19 After piloting my survey research, I gave a presentation in the methods class with most of the participants of the survey, and a day after the presentation one of the other graduate students came up to me and said my presentation of data had made many of them (not just her) rethink what they were doing with portfolios. Up until my survey, many of the other graduate students had not thought extensively about how and why they were using portfolio assessment. In terms of reciprocity, I am thankful to all my participants from the initial survey and hope that my findings were able to help them use portfolios in a way more conducive to their theories of writing and response.

It was important for my study to have participants who understood the various ways that portfolios are used, and to find participants whose use of portfolios coincided with the curriculum they were teaching.

Steven Smith’s example of using two different types of portfolios reflects a conscious affirmation of how portfolios can be used for various ends. In talking about reliability as an argument, Parkes (2007) notes that in assessment, “theory is most important because it is the articulation of the construct to be measured and the guide, first, to what constitutes an instantiation, a sample, of the construct, thinking traditionally done as validity,” and also that teachers do not always have a “strong theoretical understanding of the content they are teaching or of their students” (7). Sandra Murphy (1994, 2013) echoes these notions, arguing that many forms of assessment do not match current theories driving the teaching and learning of writing.

All of these concerns helped inform my decision to pick teachers with direct working knowledge of composition theory as well as assessment, reducing the chance of picking research sites where “literacy events”20 were less likely to occur.

Elbow and Belanoff (1997), in their article “Reflections on an Explosion,” lay out seven major attributes and theoretical underpinnings of portfolios. They say that:

 Grades undermine improvement in writing because they restrict and pervert students' naturally developing sense of audience awareness.
 Writing is its own heuristic; it doesn't have to be graded to lead to learning.
 Portfolios lead to a decentralization of responsibility which empowers everyone involved.
 Teacher authority needs to be shared if writers are to have genuine authority.
 All evaluation exists within a context of literacy defined by a multitude of factors, not all of which are products of the classroom.
 Knowledge, whether of grades or of teaching strategies or of theoretical underpinnings, is a product of discussion among community members.

20 In addition to the research supporting teachers’ expertise weighing on my decision to pick the teachers that I did, I also took into consideration the literacy scholarship, which often uses ethnographic methods or other research designs that require the researcher to have access to individuals and the literate activity. In order to observe “literacy events” I needed to physically be in the spaces where the events were occurring, to both witness and participate in the interaction.


 Evaluating, judging, liking, and scoring are inextricably bound up together and need to be thoughtfully examined. (30)

Although there is no consensus among scholars and the scholarship supporting Elbow and Belanoff’s notions of portfolio assessment, these are important criteria to consider for portfolio classrooms that use delayed grading and formative assessment. To create an interaction and a catalyst for “literacy events,” carefully selecting the teachers for the study was of utmost importance. There are many ways portfolios can be used, but to witness “literacy events” and interaction with assessment, the criteria listed for selecting teachers were ideal for testing my hypothesis.

Later, in Chapter 5, I address some of the issues in portfolio assessment, building a better understanding of portfolio theory and showing how portfolios can align with contemporary social theories of learning. Portfolios can and have been used in various ways, but to observe interactions, I needed sites and teachers whose pedagogies were capable of making assessment a literate activity. Although the initial survey of Classroom Portfolio Assessment was limited in its number of participants as well as in the population that responded, the results showed some interesting findings21 and helped to reinforce my decision in crafting an expert teacher model.

Ethnographic Methods

To answer my research questions about the literate activity of assessment, ethnographic methods were essential to data collection. If portfolios offer us a way to look at and legitimize more than one type of literacy (Belanoff, 1994), and my research questions focus on literacy, then literacy had to be one of the key components of the research design.

21 The findings of the survey were used for my 2015 CCCC’s presentation in Tampa, Florida. In the presentation, I talked about the various ways that portfolios were being used, the theories driving the assessment, and how often survey participants’ assessment of portfolios was contradictory to theories of portfolio assessment. There were many ways that teachers were using portfolios that seemed to contradict what scholars had written about them, and often portfolios seemed to be papers in a folder with little pedagogical value. In Chapter 4, I will talk more about the ways that students view portfolios and how their ideas of portfolios are often contradictory, matching those of the Classroom Assessment Practices Survey. By calling anything and everything a portfolio, we have confused both ourselves (teachers) and students about what a portfolio is.

There has been a growing body of literacy studies scholarship employing ethnographic methods as the dominant method (Street, 1984; Fishman, 1987; Street, 1993; Moss, 1994; Smith & Wilhelm, 2002; Heath & Street, 2008; Calfee & Sperling, 2010). Ethnographic methods provide researchers a view of everyday literate occurrences by looking at the day-to-day literate practices of individuals using literacy for their own aims. By looking at literacy in natural settings, we are able to see the various activities that literacy promotes in communities. Barton (1991) explains that “literacy is embedded in the activities of ordinary life” (2). In looking at “literacy events” in classrooms, ethnographic methods allowed me to investigate the literate activities of what I am calling “assessment events,”22 which revolved around response in the classroom.

An “assessment event” falls under the umbrella of a “literacy event,” but is centered on assessment. “Assessment events” are occasions where assessment is the topic of discussion, and the term can be used synonymously with “literacy events.” “Assessment events” are specific instances where students are trying to understand and interpret assessment, either in the form of verbal or written response to their writing. Brian Maddox’s (2014, 2015) work using ethnography to study Mongolian literacy and testing in the Gobi Desert brushes close to defining assessment events, calling them “literacy assessment events,” but only in a generic sense. Maddox’s work engages how Mongolian camel herders make sense of “transportable” standardized tests and the thought processes and social dynamics of participants when taking the tests.

22 Initially I was going to call “assessment events” an “assessment moment,” but after carefully thinking about the differences between a moment and an event I decided upon event instead. A moment is a particular time, but an event is something that can be the culmination of things over time. From the data analysis I found that many of the “assessment events” were shaped from past experiences and were not something that could be looked at as a one- time occurrence but rather were something that could only be explained by looking at how everything was interrelated so it was important to call these “assessment events” rather than moments because by calling them events allows a nuanced way to understand the ways that students think about and make sense of assessment.

Maddox found that the social situation of participants has a strong effect on how they take and make sense of assessment, and how it connects to the wider world around them. Maddox also found that local context weighs more heavily on how test takers answer and make sense of assessment than what the written text says.23 This tendency of participants to answer test questions based on their local context makes the argument that transported tests and assessment devices value the written text over participants’ local culture and literate practices.

My use of the term “assessment event” builds upon Maddox’s work looking at assessment as a literate practice, as well as Heath’s frame for “literacy events,” and takes into consideration the culture of the classroom and how students interact with response. Heath tells us that “literacy events” take on various purposes, but an “assessment event” focuses on teachers’ response and how students interact with it.

The Teachers

The teachers selected for this study were picked for multiple reasons. The first reason was access and time: this project used ethnographic methods, which are time consuming between classroom observations and individual interviews. Being a graduate student, having a family, and being a teacher are all taxing by themselves; combined, they add to the difficulty of juggling one’s time among the various tasks that each demands. For these reasons I decided to look for participants on the campus where I taught, but more importantly, the teachers who were chosen were both experienced teachers who had been teaching for some time and who had experience with portfolio assessment that employed delayed grading and formative assessment. Both teachers selected for this study identified themselves as possible participants in my initial Classroom Assessment Practices survey, and their answers to its questions fit the profile of the types of teachers and classes best able to enact the pedagogy my research questions set out to examine. Having had multiple conversations with the two professors about their assessment/response practices, I concluded that their usage of portfolios mirrored the practices that my study was designed to examine. On top of the conversations and survey results, both professors were individuals with whom I felt comfortable, and, thinking in terms of reciprocity, we all agreed that my studying their classes could be beneficial to us all.

The first professor chosen, “Rodney,” was an established writing studies scholar with 30+ years of teaching experience who had been teaching with portfolios since 1984, after his first semester in a rhetoric and linguistics program. Rodney was also a portfolio “literacy sponsor” on the campus. He gave portfolio handouts (see Appendix B) to new graduate students or teachers who were interested in teaching with portfolios for the first time, explaining the benefits and theories behind portfolios/assessment.24 The handouts were something Rodney also gave to his classes, explaining portfolio assessment and easing them into it. The semester before my study took place, Rodney was teaching a 21011 class with an educational theme, a course that he, Francis (the second teacher in my study), and I had collaborated on designing. The course was designed to allow students to explore various issues in the education system, focusing on literacy. Rodney, Francis, and I were satisfied with the design of the class from the previous semester, but only Rodney and I decided to run the class again the following semester.

24 Rodney started using the portfolio handout shortly after arriving at Kent State University because students at Kent were unfamiliar with portfolios. Previously, Rodney had worked at a university in Kentucky where statewide portfolios were used for assessment purposes, so many of his students in Kentucky were used to a portfolio culture. His decision to use the handout at Kent came from his desire to ease students’ fears and anxieties over what was potentially a new form of assessment for them.

The reason Francis did not join us in running the educational class again was that he was slotted to teach an upper-level writing course focusing on argument. The educational themed course was one that Rodney had been teaching in various iterations for over a decade and was an area of expertise and interest to him, so it seemed like an ideal site to investigate my research questions.

The second professor selected for the study, “Francis,” came from a different background than Rodney, with training in rhetoric and philosophy and nine years of teaching experience, seven of which included portfolios. The two-year lapse in Francis’s use of portfolios came from teaching ESL courses as well as working at a community college on the quarter system, whose constraints made it difficult to use portfolios. Francis was a graduate student at the end of his fourth year working on his PhD in Rhetoric and Composition, focusing his dissertation on validity arguments made in assessment. Although Francis did not use a portfolio handout for his classes, his practices mirrored Rodney’s. Having worked at other institutions where portfolios were either required or encouraged, Francis was not new to portfolio assessment and had been comfortable with portfolios and delayed grading for some time before my study. Francis, like Rodney, was able to talk meaningfully about portfolios and had an understanding of how theories of assessment worked, and our previous experience collaborating on the 21011 education- and literacy-themed class also made Francis ideal for the study. Francis and Rodney had had much different educational and assessment training experiences, but how they used portfolios and response was the same. Notably, Francis was not one of the teachers who had access to Rodney’s portfolio handout. Francis was teaching an upper-level writing course focused on rhetoric and argument. For the class, Francis designed his syllabus explaining portfolios around the five canons of rhetoric: invention, arrangement, style, memory, and delivery (Table 1). Talking to Francis about this decision, he stated that he had spent over an hour crafting his explanation of portfolios on the syllabus, a clear indicator that he firmly believed in and understood teaching with portfolios.

Table 1: Francis’s Portfolio Description

Portfolio: Portfolios represent your cumulative work over the term of the course and begins the day you first write anything for this course. It should contain "final draft" out-of-class essays, a cover letter reflecting on what you've learned about reading and writing brief and extended arguments, any revisions to "final draft" essays, article/book chapter summaries, and any other evidence of your investment in understanding reading and writing argumentative prose. The design and delivery of the portfolio is ultimately a student-based decision. On the one hand, a portfolio is a semester long project of aggregated work bound together by a student. But the other hand offers a portfolio as an argument a student writer makes. This argument is a deliberate arrangement of documents a writer invents to deliver a stylized response to one's own and others' ideas investigated during the course of a term to best represent the knowledge and practice one has committed to (writing) memory.

In addition to the two professors selected, I also chose to use my own classroom, a 21011 class focusing on the education system and literacy, like Rodney’s. My teaching experience is similar to that of Francis: at the time of my study I was in my third year of graduate school and had been teaching in various capacities for eight years. My experience with portfolios was less than that of the other two teachers, having only taught with portfolios for five years, but my interest in portfolios and assessment made me curious, and before my dissertation questions started to materialize I had been reading all the portfolio scholarship I could find. My 2014 CCCC’s presentation had been an analysis of eight edited collections of portfolio scholarship, so I was very familiar with notions of portfolios from teaching with them as well as from my scholarly inquiry into their history. My 2015 CCCC’s presentation also looked at portfolios, and it is where I presented the results of my Classroom Portfolio Assessment survey, used to help me look at the ways portfolios were being used by various teachers across the U.S. The survey helped me gain a better sense of how portfolios were being used for classroom writing assessment and also helped in designing and recruiting for my dissertation research.

Because the three classrooms were different research sites, different forms of data were gathered from each, allowing me to look at and answer my research questions from different angles. The results gathered from the three sites have many observable similarities and patterns, which strengthens the arguments I make in Chapters 4 and 5, since the data gathered are consistent across the three classroom sites.

Site 1, Rodney’s class

Rodney’s class, a sophomore-level writing class,25 had 19 enrolled students, none of whom withdrew or dropped out. At the University, 45% of 21011 classes the semester I observed reported similar data, with no students withdrawing or dropping out (Kent State Institutional Research). The class met two days a week, on Tuesdays and Thursdays, for an hour and fifteen minutes. Because of holidays, snow days, and professors attending conferences or other professional events, Rodney’s class met only 28 days, of which I attended 15 classes (54%), with the majority at the end of the semester, when students were both giving response to each other in peer groups and receiving response from teachers in writing, verbally in class, or in conferences outside of class (of which I observed 20 individual outside-of-class conferences).

25 The 21011 classes were originally designed as a sophomore-level course for students to take once they were getting into their majors, so they could focus their writing and research more around their majors, but with changes at the university the classes were opened up to anyone who had completed 11011, the first credit-bearing course at the university. I mention this information because, although the classes are usually populated with students who have completed one year of college, there was one freshman in the class, Cory, who repeatedly identified as a freshman.


Table 2: % of Rodney’s Classes Observed

Total Class Meetings    Days of Observation    % of Classes Observed
28                      15                     54%

My reason for strategically choosing classes was that the heart of my research focused on the response/feedback that students were receiving on formal assignments that were to be included in the portfolio. The classes that I observed were chosen because they were days when the teacher was talking about writing or the portfolio, days when peer response was happening, or days when papers were being handed back by the teacher.

Site 2, Francis’s class

Francis’s class was an upper-level class primarily populated by juniors and seniors, although there was one sophomore in the class, with a cap of 24 students, though only 8 registered for and completed the course. The class met three days a week, Monday, Wednesday, and Friday, for fifty minutes each class period. Along the lines of Rodney’s class, Francis also had reduced class meetings because of holidays, snow days, and the professor attending a conference. Francis’s class met 36 times, of which I attended 18 (50%).

Table 3: % of Francis’s Classes Observed

Total Class Meetings    Days of Observation    % of Classes Observed
36                      18                     50%
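The observation rates in Tables 2 and 3 are simple ratios of days observed to total class meetings, rounded to whole percentages. As a quick illustration, here is a minimal sketch in Python using the counts from the tables above (the calculation itself was done by hand for the study):

    # Observation rates for Tables 2 and 3: days observed / total class meetings.
    sites = {
        "Rodney": {"meetings": 28, "observed": 15},
        "Francis": {"meetings": 36, "observed": 18},
    }

    for teacher, counts in sites.items():
        rate = counts["observed"] / counts["meetings"]
        print(f"{teacher}: {counts['observed']}/{counts['meetings']} = {rate:.0%}")
    # Rodney: 15/28 = 54%; Francis: 18/36 = 50%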

The same reasoning behind selecting classes for Rodney’s class also applied to Francis’s. I chose to observe classes that would explicitly be dealing with response or the portfolios. On top of the classroom observations for Francis’s class, I also sat in on 7 individual conferences with Francis and his students, either after classes or during finals week, when he had scheduled appointments with students to discuss their writing.

Site 3, my class

My class resembled Rodney’s class, a sophomore-level 21011 course that was capped at 19, and like Rodney’s class, my class also had no one drop the course. With my class being the third site for my study, the nature of my interaction changed slightly, both to make my research design stronger and to protect the students in my class. There has been controversy, or rather a distrust of sorts, surrounding studies whose researchers use their own classrooms as sites of study, so to avoid any of those arguments from the opposition, I asked an outside IRB-approved researcher to perform certain aspects of the research in my class. One substantial thing that varied for my class from the other sites involved in the study was that I was not recording our classes or taking field notes. The outside researcher performed data collection for me in the form of interviews and asked students to participate in the study to avoid my knowing who chose to participate and who did not. The data collected from my classroom (audio interviews and consent forms) was kept from me until after the semester was over and grades were posted.

My reasoning for asking an outside researcher to help with certain data collection was to keep my own relationship with the students, as their teacher, from interfering with the kind of data that I would get. I wanted to get answers from participants that were as truthful and honest as possible, and I knew that being their teacher, and ultimately the one to assign them a final grade, would prevent some of the participants from answering truthfully about their feelings on response and portfolio assessment.


Data Stream

Data collected for this study included survey responses, audio recordings of class observations, field notes from class observations, documents from classes (handouts, syllabi, peer review sheets, revision memos, etc.), audio of student interviews, transcripts of interviews, audio of student-teacher conferences, and memos. These multiple sources of data helped in conceptualizing and interpreting my research questions. Student interviews are the primary means of data analysis in Chapter 4 because the interviews allow me to answer the research questions, but all other forms of data collected helped me interpret the data at various stages and also helped to refine the research questions and research design. “Being”26 in the classrooms and interacting with all the participants, as well as with my data stream, immersed me in this project, providing multiple forms of data to better understand and answer my research questions.

Survey data was collected on Survey Monkey, then later moved to Qualtrics, and ran from Fall 2014 to Spring 2015 (two semesters). The survey was sent out on the WPA-L as well as to a handful of other colleges and universities. All audio files from class observations, interviews, and conferences were collected on either a Sony ICD-PX820 recorder, an Olympus VN-722PC recorder, or the built-in recorder on my Dell Inspiron 1545 laptop.

Documents collected were scanned on an Epson WF-7510 printer/scanner with the aid of Adobe Pro, where they could be converted into PDF files and stored on my computer and in backup locations.

26 I chose to put Being in quotation marks in order to set it apart from the rest of the text, putting emphasis on my “being” in the classes. Taking an ethnographic approach to data collection requires a lot of work on the part of a researcher because of the various roles they play as a researcher, participant, and peer. I am also convinced that had I not “been” in the classes in the capacity and frequency that I was, I would not have collected and analyzed data in the same way. There is more to “being” in the class than just physical presence.

Field notes and memos were taken on my computer using Microsoft OneNote, which has a variety of tools that allowed me both to type and to draw various shapes to help describe and document the locations of participants as well as myself. Transcriptions were done with an Infinity IN-19 foot pedal and the program InqScribe and then copied to a Microsoft Excel spreadsheet for coding. All participants were assigned pseudonyms and materials were organized accordingly.
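To illustrate the transcription-to-coding step described above, the sketch below shows one way a transcript export could be turned into a coding sheet that opens in Excel. The tab-delimited input format (timecode, speaker, utterance) and the file names are hypothetical assumptions, not the study’s actual files; in the study the copying was done by hand:

    # Hypothetical sketch: convert a tab-delimited transcript
    # (timecode <TAB> speaker <TAB> utterance per line) into a CSV
    # coding sheet with an empty "code" column for analysis in Excel.
    import csv
    from pathlib import Path

    def transcript_to_coding_sheet(transcript_path, sheet_path):
        rows = []
        for line in Path(transcript_path).read_text(encoding="utf-8").splitlines():
            if not line.strip():
                continue  # skip blank lines between speaker turns
            timecode, speaker, utterance = line.split("\t", 2)
            rows.append({"timecode": timecode, "speaker": speaker,
                         "utterance": utterance, "code": ""})
        with open(sheet_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(
                f, fieldnames=["timecode", "speaker", "utterance", "code"])
            writer.writeheader()
            writer.writerows(rows)

    # e.g., transcript_to_coding_sheet("jake_interview1.txt", "jake_interview1_codes.csv")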

Classroom Observations

Ethnographic research provides insights into the daily lives of participants, allowing a researcher to observe her participants in a naturalistic setting. To study the literate activity of assessment, I had to be in classrooms where “assessment events” would be occurring. For classroom observations I first had to acquire written permission from all students in the classes stating that I was able to audio record classes, as well as permission to take ethnographic field notes (see Appendix F-I for sample forms). My IRB approval, as well as the nature of my design, demanded that I gain permission from everyone in the class to sit in and observe. On the first day of both Rodney’s and Francis’s classes, I was given the first ten minutes to sit down and talk with students in the classes, explaining who I was (a graduate student doing dissertation research) and what my research was about (trying to gain students’ perspective of assessment/response), as well as to answer any questions they might have about what my being in the classes might mean. Auspiciously, all students in both classes were willing to have me sit in and observe their classes. I was able to gain permission from Rodney’s and Francis’s classes with no difficulties.

For class observations I carried a Sony ICD-PX820 recorder capable of storing 2 gigabytes of audio files, and later towards the end of the semester I bought a second recording device, an Olympus VN-722PC, with up to 4 gigabytes of internal storage as well as an expansion slot for an SD card. Along with the Sony and Olympus recorders, I also used the built-in recorder on my Dell Inspiron 1545 laptop. I wanted to make sure that I was capturing as much audio as possible, and multiple recording devices afforded me the convenience of switching out batteries or deleting audio off a full device without worrying that I would miss parts of class discussions. With the quirks of technology, I thought it best to have multiple ways of capturing data, which ended up paying off in the long term. After each class recording, the files were copied from all devices and backed up to Dropbox, an online cloud storage site, to ensure that the data was not lost. Audio files were also skimmed through after the classes to check sound quality and then dated and labeled to keep observations organized.
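The dating and labeling routine described here was done by hand, but its logic is simple enough to sketch; the folder layout, file extension, and naming scheme below are hypothetical illustrations, not the study’s actual file organization:

    # Hypothetical sketch: copy a day's recordings from a device into a
    # dated, labeled folder (e.g., inside a local Dropbox directory) so
    # files from multiple devices stay organized by date and class site.
    import shutil
    from datetime import date
    from pathlib import Path

    def backup_recordings(device_dir, site, backup_root):
        stamp = date.today().isoformat()               # e.g., "2015-02-17"
        dest = Path(backup_root) / f"{stamp}_{site}"
        dest.mkdir(parents=True, exist_ok=True)
        for audio in Path(device_dir).glob("*.mp3"):   # the device's audio files
            labeled = f"{stamp}_{site}_{audio.name}"   # date + site + file name
            shutil.copy2(audio, dest / labeled)        # copy2 keeps timestamps

    # e.g., backup_recordings("E:/RECORDER", "rodney_21011", "Dropbox/research_audio")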

On top of the audio recordings from the classes, I also took detailed field notes with Microsoft’s program OneNote. In OneNote, I set aside a separate notebook titled “Research Journal,” where I had various tabs and sections devoted to various aspects of my observations. The field notes focused on things such as the location of the teacher, students’ seating arrangements, and topics of discussion, along with stream-of-consciousness note taking of things that were being said and discussed in class as well as general impressions of the class. In the field notes I also took note of moments where the professors specifically mentioned assessment, portfolios, writing assignments for the class, and anything else I felt would be important to help me answer my research questions. OneNote proved to be a valuable tool for keeping the research organized, with daily notes and reflections on how the research was going.

Memoing

OneNote was also the platform I used for memoing on the research and the themes that I saw emerging. Memoing is an important activity for researchers performing observations and is individualized for each researcher (Miles & Huberman, 1984; Birks and Mills, 2015), allowing them to think about the research more meaningfully and, if necessary, to refine the research questions or the data they are collecting (Birks, Chapman, and Francis, 2008). For me, memoing acted as a heuristic throughout the research and even prior to starting the study, because it created a space for me to explore my relationship to the study and research questions. The memos served as a map for this study and data analysis by allowing me to reexamine, refine, and challenge my research questions and findings (Birks and Mills, 2015). Memos also helped keep the research unified and organized for later data analysis. The memos that I took during and after classes and interviews were invaluable because they allowed me to gain a better understanding of the classes, the students, and what “being” in the classroom meant.

Sample Description

Participants for this study were drawn from the three classes at Kent State University. Kent State is located in Northeastern Ohio, close to Cleveland. Its enrollment is around 22,000 undergraduates, 59% female and 41% male. Most of the students attending Kent come from areas close to the campus, but the student body contains representatives from 50 states and 100 countries. No international students were involved in this study or enrolled in the classes selected for observation. Only 13% of students on campus identify as ethnic minorities, with the majority of students identifying as white. In terms of class size, 45% of the classes offered at Kent have fewer than 20 students (Kent State University). The participants involved in my study are relatively representative of the Kent State campus and student body, with 47% of my participants female (8) and 53% male (9); 2 of the female students were minorities (12%). All students in the study self-identified as willing and interested in assisting me with my research, and their demographics are representative of the university as a whole.
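Because the participant counts are small, the rounded percentages above are easy to verify; a trivial check of the arithmetic (not part of the study’s instruments):

    # Participant breakdown: 17 students total.
    females, males, minority_females = 8, 9, 2
    total = females + males
    print(f"female: {females/total:.0%}, male: {males/total:.0%}, "
          f"minority: {minority_females/total:.0%}")
    # female: 47%, male: 53%, minority: 12%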


Interviews

In Rodney’s and Francis’s classes, interviews of students occurred at mutually agreed upon times and locations. The interviews all took place in either my office or a common space in the English Department. Before interviews started, I always made sure to ask participants where they would feel most comfortable, either talking in my office or in one of the common spaces in the department. Students who were interested in participating in this study identified themselves to me as willing to sit down for an interview, either in class or through email. Having students self-select for this study helped to avoid my coercing them into interviews and also allowed students to be more comfortable talking with me. Periodically throughout the semester I would ask the instructors of the classes if I could have five minutes to try recruiting students for interviews.

During these recruiting times I discussed the types of questions as well as approximately how long the interviews would take. I also explained to students that if at any time during the interviews they wanted to leave, they were free to do so and there would be no repercussions.

Taking an ethnographic approach to this research and “being” in the classrooms at least 50% of the time helped in recruiting students because many were familiar with me, either from having discussions with them in groups or from just seeing me in the class with them. My “being there” allowed students to feel comfortable, granting me access to the “underlife”27 (Brooke, 1987) in the classes. I chose to use “being there” to help describe my methodological approach to data collection.

27 Brooke’s use of “underlife” is based on Goffman’s notions of identity as social interaction. Brooke’s study was a semester-long classroom study where he was a participant-observer and was privy to private conversations students had with each other, had access to the notes they wrote to themselves and to others, and saw other written documents that students were producing during class time. What Brooke found was that many of the conversations and notes were directed towards making sense of the class and materials. Brooke observed that while teachers think students are distancing themselves from class and the materials, they are actually making sense of the materials in their own complex ways. In my own study I too was privy to the “underlife” that students had in the classes, which granted me access to materials that allowed me to understand the students and classes better.

“Being there” was not limited to my presence in the class; it also involved interacting with the students as a researcher, a student, a teacher on occasion,28 and a tutor. These various roles helped me gain access to and trust from students in the classes that I otherwise would not have had. “Being” in the classes was more than just a physical presence in the rooms. My “being there” allowed me to interact in the various capacities I described, all of which aided in the data collection and analysis in this study and fall in line with ethnographic methods. On a couple of occasions after interviews, students would ask if I was coming back to class; this signifies the comfort that students had with my “being there.” The students’ acceptance permitted me to interact as if I were one of them, and it would not have been possible had I not participated in the manner that I did.

In both classes I took a feminist methodological approach to the interviews and recruiting. In addition to asking at the beginning of classes, I also periodically sent out emails to the classes asking if anyone was interested in talking with me about response and the portfolios.

Maintaining a reciprocal approach (Barton, 2000; Powell and Takayoshi, 2012), I offered to act as a peer tutor or extra reader for all students in both classes, regardless of whether they participated in the interviews. I also offered students the option of listening to any recordings or looking at any field notes that I took, to maintain a culture of trust. In the classes, I made a conscious decision to share as many aspects of my research with participants as I was able to. It was important in the class observations and in interviews that I maintain and build relationships with the participants to build trust and create an atmosphere of collaboration. Although I made the offer to act as a tutor in Rodney’s and Francis’s classes, only eight of the students took me up on my offer. Out of the eight students, six participated in interviews for my study and two did not.

28 On one occasion, I taught Rodney’s class while he was away on personal business. For the class I taught, we went over a reading the students had done and then worked on drafting their second paper, which looked at statistics.

During classes where peer review was happening, I also floated around the classroom and periodically acted as an extra reader or answered questions about writing and students’ topics. I wanted my relationship with the classes to be mutually beneficial; although I knew that I had my own reason for being in the class, collecting my dissertation data, I also wanted to be there to assist the students with questions they might have about writing and researching.

My purposes for being in the class were different from the students’, but I believe my position as a graduate student helped in soliciting participants because in their minds I was also a student. One student, Cory, on the first day in Rodney’s class while I was discussing my research project, said to me “do what you gotta do,” a feeling he frequently expressed throughout the semester, but also a feeling that many others in the class adopted and accepted without reservation. Whenever I participated in classroom activities and discussions, I always made it clear to the students that my opinions and help were in no way intended to act as those of their teacher, and that if they had specific questions about something, they should direct those questions to their professor. I wanted the students to know that even though I was a teacher, I was not their teacher, and nothing I said should be taken to replace anything their professors told them.

As far as many of the students were concerned, I was also a student in the class, just one a little more experienced than they were. The human interactions in the classroom observations, as well as my openness with participants, helped me gain access to the students, but I quickly found through the research that it was difficult for students to meet outside of class times; being full-time students with full schedules and oftentimes part-time or full-time jobs made recruiting participants difficult. For this reason, later in the semester many of the interviews took place during workshop time in the classes, while students were working on their papers and getting things ready for their final portfolios.


From Francis’s and Rodney’s classes I interviewed thirteen students, and from my own class the outside researcher interviewed four. With a total of seventeen participants, nine male and eight female, there were twenty-six interviews, ranging in length from 6:40.18 (min:sec.fractions of a second) to 1:29:19.00 (hr:min:sec), totaling roughly twelve hours and fourteen minutes’ worth of interviews, with an average time of twenty-eight minutes and fourteen seconds (Table 4). The number of interviews as well as the amount of time the interviews lasted varies from student to student based upon the nature of the interview, the content being discussed, and the fact that some participants had more to say and discuss than others.

Kvale (1996), in a book on qualitative interviewing, offers up two metaphors for interviews: one of the “miner” and the other of the “traveler.” In the miner metaphor, “knowledge is understood as buried metal and the interviewer is a miner who unearths the valuable metal,” while the traveler metaphor “understands the interviewer as a traveler on a journey that leads to a tale to be told upon returning home” (3-4). He goes on to say that the traveler is one who goes on the journey with participants and “ask[s] questions that lead the subjects to tell their own stories.” The traveler is a suitable metaphor for the types of interviews that I conducted: first, it fits in line with ethnographic methods, but also, during the interviews I was in fact going on an assessment journey with the students to learn the ways that they understood and used assessment. Kvale says that through the journey the traveler (interviewer) may also change, because the interviews and journey are a reflexive process. In doing the interviews, I can attest to this notion of reflexivity because there were many moments when discussing writing with the students that caused me to think more critically about the types of response that we use in class. Doing these interviews not only made me a better researcher but also affected the way that I teach and think about teaching.


Table 4: Interview Time Table

Teacher   Student    Gender   Interview 1   Interview 2   Interview 3
Francis   Ryan       Male     29:11.00
Francis   Tony       Male     22:26.11      6:40.18
Francis   Leigh      Female   14:46.02
Francis   John       Male     15:52.00
Rodney    Cory       Male     14:39.00      32:02.26
Rodney    Jake       Male     20:51.19      31:00.19      1:29:19.00
Rodney    Robert     Male     11:23.00      19:42.00
Rodney    Chloe      Female   14:33.21      41:37.28
Rodney    Anna       Female   17:50.00      28:19.28
Rodney    Will       Male     48:09.11
Rodney    Alice      Female   25:40.00
Rodney    Brooklyn   Female   11:25.12
Rodney    Arthur     Male     53:30.00
Curt      Holden     Male     41:42.23      18:01.23
Curt      Kara       Female   40:34.00      21:29.04
Curt      Stella     Female   30:00.08
Curt      Claire     Female   33:13.06
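The totals and average reported above can be recomputed from Table 4’s timestamps. The sketch below assumes the table’s min:sec.fraction format (with one hr:min:sec.fraction entry); it is offered as a verification aid, not as part of the study’s analysis:

    # Recompute interview totals from Table 4's 26 timestamps.
    def to_seconds(stamp):
        parts = [float(p) for p in stamp.split(":")]
        if len(parts) == 2:                  # min:sec.fraction
            return parts[0] * 60 + parts[1]
        return parts[0] * 3600 + parts[1] * 60 + parts[2]  # hr:min:sec.fraction

    durations = ["29:11.00", "22:26.11", "6:40.18", "14:46.02", "15:52.00",
                 "14:39.00", "32:02.26", "20:51.19", "31:00.19", "1:29:19.00",
                 "11:23.00", "19:42.00", "14:33.21", "41:37.28", "17:50.00",
                 "28:19.28", "48:09.11", "25:40.00", "11:25.12", "53:30.00",
                 "41:42.23", "18:01.23", "40:34.00", "21:29.04", "30:00.08",
                 "33:13.06"]

    total = sum(to_seconds(d) for d in durations)
    print(f"{len(durations)} interviews, {total/3600:.2f} hours total, "
          f"average {total/len(durations)/60:.1f} minutes")
    # 26 interviews, 12.23 hours total, average 28.2 minutes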

All of the first interviews started with the same set of basic opening questions, trying to gain some rudimentary information from the participants:

1. What types of reading/writing do you like to do? What types on a daily basis?
2. What is your understanding of the portfolio for this class?
3. What types of things are you concerned with or about in regards to the portfolio?
4. How do you feel about not receiving grades on formal assignments?
5. Are there any trends or things you see in the response you get from your teacher?

The five questions were not asked verbatim, and sometimes the order of questions shifted based upon how the conversations were going, but the types of information I was after were gathered through the initial “get to know you” process of the interview, and many of the questions acted as ice-breakers to make participants comfortable talking with me. Creswell (2015) suggests that the initial interviews be used as a way to make participants comfortable, so framing the beginning of the interviews in a relaxed manner was important to help ease any participant anxiety.


The first question, asking about the types of reading and writing the students did, was to try to gain a perspective on the types of literacy that they frequently used in their daily lives. Many of the students did not consider themselves to be readers or writers because they thought the question was only asking about school-based reading and writing, a myth I quickly dispelled through the interviews and used as a way to place value on the types of reading and writing that many of them did on a daily basis. This initial question and discussion of what counts as reading and writing was a good springboard to move through the rest of the interview. The second question was geared at finding out what sorts of conceptions or understandings students had of portfolios, as well as to discuss whether they had portfolio experience in the past. A few of the students I interviewed had been in portfolio classes in the past, so it was interesting to see the similarities and differences across their different classes, but also to listen to what they thought a portfolio was. The third question was aimed at finding out any concerns they might have about keeping and compiling a portfolio, which normally led into the fourth question, asking specifically what sorts of feelings students had about receiving only response and no grades. This question was saved towards the end of the opening interviews because, as I suspected, many of the students had not been in delayed grading classes before, and I thought it wise not to start with what could be a topic of anxiety for some. The fifth and last question in the opening interviews was to open the door to discussing the formative response that they were getting from their teacher(s), either on formal writing assignments to be compiled into the portfolio or in their informal journals. From this question and the other basic questions, I was able to gain an understanding of the students, their writing habits, and their feelings about portfolios and response.


After the initial questions, I shifted to discourse-based questions (Odell, Goswami, and Herrington, 1983) specifically about response students had received to their writing. The discourse-based questions helped elicit information from the students about the ways they were interpreting response from their teacher and what kinds of interactions they were having with it. To test the hypothesis of whether or not formative response was a catalyst for “assessment events,” I needed to look at written text, formal or informal, to see what sorts of interactions students were having with response. If the response in the class was capable of generating a “literacy event,” then there needed to be some sort of interaction between the students and the response.

For the discourse based interviews, I asked students to email me electronic versions of the papers with response if applicable, but if the teacher’s response was handwritten then I used an Android App called Genius Scan that converts a pictures into a PDF file. Once the students and I were both looking at the response, I asked them to read the teachers comments out loud and to talk about what the response meant to them and how they thought it applied to their paper.

After the students explained to me what they thought the response meant, I asked them to talk about what they thought they would do to address the teacher's comments in revision. The discourse-based questions acted like retrospective probing, which allowed me to ask clarification questions if I needed participants to elaborate on what they thought the response meant.

Currently, the response scholarship offers no data providing insight into students' perspectives on response and assessment. Interviewing students to gain their perspective provided me with rich detail about students' initial reactions to teacher response. The discourse-based interviews came almost immediately, within a class or two, after students received the response, and a few students actually read the response in front of me for the first time. Setting up the interviews shortly after students received the response proved to be a good design decision because I was able to capture initial reactions, and in one case I was able to help a distraught student make sense of response that she was perceiving to be negative.

Performing the interviews this way not only afforded me the opportunity to capture students' perceptions of response, but it also allowed me at times to participate in the "literacy events."29 In the case of the student who thought her response was negative, it allowed me to act with reciprocity to help her think about what the response might mean, providing her with a better understanding of what the response was telling her.30 In Chapter 4, I will go into greater depth with this interview and others, but it was interesting to hear how much past experience with response affected, even guided, students' readings of response. Many students were not used to response without grades, so the guided discourse-based interviews allowed me to gain a better understanding of how assessment functions and is viewed by students.

The outside researcher who collected interviews for me adopted the same procedures I did for interviewing and collecting data. The researcher used the same sets of questions when talking to the students and adopted the same stance as a "traveler" when trying to gain an understanding of their assessment experience. From talking with the researcher before the interviews and discussing the purposes of my study, as well as from listening to the interviews after the semester was over, I can safely say that our approaches to the interviews were the same.

29 The outside researcher who performed interviews for my class also had instances in the interviews where he participated in "literacy events." The questions and the nature of the interviews, involving a teacher, researcher, student, etc., made it extremely difficult not to participate in "literacy events" with participants.

30 In the interviews I was careful not to impose too much of my "interpretation" of teacher response, but at times it was important that I discuss response with participants to be able to understand why they interpreted certain response the way they did. I will go into this in more detail in Chapter 4's analysis of the data and in Chapter 5, where I discuss the implications of the research and the idea of response/assessment as a genre that we should help equip students to read.


At the end of the interviews, students were all asked if they wanted to come up with their own pseudonyms, and all of the students but one told me that I could either use their real names or make something up for them. Most students felt that the information they had provided me was not extremely personal or something that could damage their character. The one exception, the student who wanted to pick his own name, was "Will." The reason I mention Will is that at the end of the interview he also said he did not care what I called him, but upon walking out of the room he popped back in and told me that he wanted to be called by the real name of the professor. Will had a tendency in class to be "playful" with the professor, and at times it appeared that he was trying to fluster the professor through their discussions. It was clear from the class as well as from the interview that Will was an intelligent student who enjoyed having academic conversations. Although the students in the study gave me permission to use their real names, I made the decision to give them all pseudonyms; even though they were all consenting adults, I wanted to maintain their privacy.

Data Organization and Coding

After all the interviews, the recordings were transcribed using an Infinity IN-19 foot pedal with the program InqScribe and then copied to a Microsoft Excel spreadsheet for coding. The audio files were transcribed as faithfully as possible to what the participants were saying, including fillers in speech (um, ahh, etc.) as well as paralinguistic features (sighing, laughing). Transcription typically took me an hour for every fifteen minutes' worth of recording, so I spent roughly 49 hours transcribing all the data, which produced 232 single-spaced pages of transcriptions. Doing the transcriptions myself allowed me to gain a better understanding of the participants and positioned me in a better place to begin data analysis. After doing many of the transcriptions I took memos of my initial thoughts and of themes that I thought might be relevant for coding and analysis.

When I copied the transcriptions to the Excel sheet, the speech segments were broken up into individual cells, resulting in 2,988 individual cells. Every other cell represented either the researcher's or the student's utterances. Coding by turns was a conscious decision that allowed me to hear the students' voices. For example, the interviewer's initial question was in cell one, the participant's response was in cell two, the next or follow-up question from the interviewer was in cell three, and so on. In the Excel documents I was able to label who was talking in the cell to the left of the transcription and put my codes in the cell on the right (Table 5).

Table 5: Sample Coding Sheet

Speaker: Chloe
Transcription: That is a really good point, I actually don't know where I picked up that mentality that the more you are critiqued or the more that um, the more discussion that is brought up within your paper that means that you know, for lack of a better term the more shitty it is so to speak, do you know what I mean? Um, I don't know I kind of want to say that I just kind of grew up thinking that because in high school you know having all, I guess especially in high school cause now I'm looking back you know and I had turned in papers or homework and you know the more answers you got wrong you know the lower your grade and um in English you know obviously the more you know when you had those corrections of like grammar, it's kind of like this grading process that we go through you know in America is yea the more that is, the more red ink the lower the grade. That is kind of what I have picked up on intuitively. Or naturally I should say, yea. So a learned behavior, I don't know.
Coding: past experience with response, more is not better, critique, negative

Speaker: Me
Transcription: That is perfectly natural, there has been research that shows that even with red pens when you give a student papers back with red on it it raises their blood pressure by 10%.

Speaker: Chloe
Transcription: That's kinda funny
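Because each speaking turn occupied its own row, tallying the codes later was mechanical. The study's coding was done by hand in Excel, but for illustration only, a minimal Python sketch of the same tally, assuming a hypothetical file interviews.xlsx with one row per turn and columns named Speaker, Transcription, and Coding (comma-separated codes), might look like this:

    from collections import Counter

    import pandas as pd  # reading .xlsx also requires openpyxl

    # Hypothetical spreadsheet layout: one row per speaking turn, with
    # any codes for that turn in a comma-separated "Coding" cell.
    turns = pd.read_excel("interviews.xlsx")

    code_counts = Counter()
    for coding in turns["Coding"].dropna():
        # A turn may carry zero codes or as many as eight or nine.
        for code in coding.split(","):
            code_counts[code.strip()] += 1

    for code, count in code_counts.most_common():
        print(f"{code}: {count}")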

Development of the Codes

Initially, for the coding of documents, I had planned to adopt Connors and Lunsford's (1993) coding scheme for analyzing teachers' response to student writing, but I quickly realized that the types of data I collected could not be analyzed in the same ways as Connors and Lunsford's. Their data focused on teachers' response, whereas my data looked at students' reactions to teachers' response. Even though teachers' response was present in my interviews, and that is what Connors and Lunsford's research examined, their research did not account for formative assessment, a key component of my research, or for trying to gauge student understanding. Also, the purpose of my study was to hear the students' voices, not to focus on teachers' commentary. Along with Connors and Lunsford's coding, I also consulted Huot (1988), Edgington (2004), and Caswell's (2012) coding schemes from their studies of teachers' response,31 but none of these coding schemes could account for the richness (or newness) of the data I was discovering. Because my data was unlike other forms of data present in the literature, I decided to code based upon principles of Grounded Theory (GT), in which the codes emerge from the data themselves.

Using a GT approach proved advantageous because I was able to open code all the transcriptions first, allowing the codes to emerge from the data rather than trying to code with a rigidly defined coding scheme. For the initial pass of my data, I looked at the interview transcripts in vivo and coded what the participants said almost verbatim. For example, Robert in his first interview talked a lot about Rodney and the class. In this initial pass of attempting to make sense of the data, in vivo coding helped establish patterns that later allowed me to collapse codes. From in vivo coding I generated over 500 codes; the exact number is unclear because of the large amount of data I was working with and the fact that some of the in vivo codes appeared only once or twice. Since it was my first time through the data, it was less important to count all the possible codes that were emerging than to focus on what the participants were actually saying.

31 Huot, Edgington, and Caswell's studies focused on teachers' response, but all three collected protocol analyses of teachers reading student writing, whereas my research focused on interviews with students. Although the codes they used helped on the analytical level, the data that I had was much richer and deserved codes of its own.


The example of Robert's text below (Table 6) shows how in vivo coding helped to fracture and reduce the data into a much more manageable coding scheme; more importantly, it led to codes that helped me answer my research questions.

Table 6: Sample of Initial Coding

Speaker: Robert
Transcription: When I am writing, you know I try to make sure that I'm trying to incorporate everything he wants. I feel that since this is so high-stakes I have to make sure that it's what needs, what is going to appeal to him kind of so that I can make sure that I can get the best grade possible. It kind of distracts me from my creative writing kind of a sense because I am so have to focus on everything that he wants would be the way to put it.
Coding: he wants X3, high-stakes, appeal to him, grade, distracts
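The collapse from several hundred in vivo codes down to the final scheme was an analytic judgment made over repeated passes through the data, not a mechanical step, but the bookkeeping behind it can be expressed simply. A sketch, using an entirely hypothetical mapping from Robert's in vivo phrases to collapsed codes of the kind described later in this chapter:

    # Hypothetical mapping from in vivo phrases to collapsed codes; the
    # actual groupings emerged from rereading the data, not from a table.
    collapse = {
        "he wants": "TE",        # Teacher Expectations
        "appeal to him": "TE",
        "high-stakes": "ST",     # Student Talk
        "grade": "GT",           # Grade Talk
        "distracts": "ST",
    }

    in_vivo = ["he wants", "high-stakes", "appeal to him", "grade", "distracts"]
    collapsed = sorted({collapse[phrase] for phrase in in_vivo})
    print(collapsed)  # ['GT', 'ST', 'TE']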

Saldana (2013) notes that tools such as Wordle or similar programs that allow you to build a word cloud can act as an initial analytic tool for looking at codes. To help me visualize my codes and categories before fracturing the data, I used the computer program NVivo to generate a word cloud. NVivo can also be used for coding, but I chose to code the data myself: first, because of my inexperience with coding software, and second, because I wanted to get intimate with the data to understand it more fully, and using a computer-assisted program would not have allowed me to do that. The word cloud (Figure 6) acted as a heuristic for me because it permitted a view of the in vivo coding that I had completed.

Figure 6: Word Cloud generated from initial coding
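NVivo produced the cloud in Figure 6; the same heuristic view can be approximated with open-source tools. A minimal sketch using the third-party wordcloud package, assuming the in vivo codes have been exported to a hypothetical plain-text file codes.txt, one code per line:

    from wordcloud import WordCloud  # pip install wordcloud

    # Hypothetical export of the in vivo codes; how often a phrase
    # appears in the file drives its relative size in the image.
    with open("codes.txt") as f:
        text = f.read()

    cloud = WordCloud(width=800, height=400, background_color="white")
    cloud.generate(text)
    cloud.to_file("code_cloud.png")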


I sifted recursively through the data using in vivo coding until I felt I had exhausted all possible coding categories and reached "theoretical saturation" (Glaser and Strauss, 1967). From the saturation point, where I was "washed" in the data, I was able to go back through, ready to "fracture" and refine the data (Glaser and Strauss, 1967), ending with eighteen codes. Each cell on the Excel sheet could be coded for multiple codes: some cells received zero codes while others received eight or nine. Each cell represented one turn for the speaker (Table 7), and the number of turns, as well as their length, varied greatly from participant to participant. Some participants' turns were a sentence or two while others ran a page of text. Interviews can be inconsistent from participant to participant, so the researcher is never quite sure what types of responses they will receive. During coding I consciously thought of various ways to break up the data, but of all the different iterations I constructed for looking at the data, the natural form that manifested from the interviews seemed to represent the participants and their voices most faithfully.


Table 7: Interview Excel Cell Counts

                          Interview 1      Interview 2      Interview 3
Teacher   Speaker         Cells   Words    Cells   Words    Cells   Words
Francis   Ryan               63    1483
          Researcher         64    3288
Francis   Tony               39    1788       11     413
          Researcher         40    1229       12     651
Francis   Leigh              52    1064
          Researcher         53    1201
Francis   John               61    1788
          Researcher         62    1037
Rodney    Cory               42     918       36    3404
          Researcher         42    1860       37    1019
Rodney    Jake               32    1976       28    3404      125    8374
          Researcher         32     657       28     536      126    2890
Rodney    Robert             44    1831       31    2276
          Researcher         44    1055       32     770
Rodney    Chloe              39    1508       62    5806
          Researcher         40     712       63    1925
Rodney    Anna               27    2095       48    4110
          Researcher         27     503       49    1125
Rodney    Will               88    6286
          Researcher         88    2054
Rodney    Alice              53    3753
          Researcher         53     796
Rodney    Brooklyn           33    1254
          Researcher         32     673
Rodney    Arthur             92    3552
          Researcher         92    4726
Curt      Holden            100    3450       43    2035
          Researcher        101    4631       44    1090
Curt      Kara              111    3265       70    1965
          Researcher        112    1653       71    1371
Curt      Stella             84    2682
          Researcher         85    2137
Curt      Claire             71    2268
          Researcher         72    1868

             Cells     Words
Students     1,487    73,608
Researcher   1,501    39,814
Totals       2,988   113,422
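Because the speaker label sits on every row, the per-speaker totals at the bottom of Table 7 fall out of a simple grouping. A sketch under the same hypothetical spreadsheet layout as above (splitting on whitespace only approximates the word counts):

    import pandas as pd

    turns = pd.read_excel("interviews.xlsx")  # hypothetical file, as above

    # One row per turn: counting rows gives cells, and summing per-turn
    # word counts gives words for each speaker.
    turns["Words"] = turns["Transcription"].str.split().str.len()
    by_speaker = turns.groupby("Speaker").agg(
        Cells=("Transcription", "size"),
        Words=("Words", "sum"),
    )
    print(by_speaker)        # per-speaker subtotals
    print(by_speaker.sum())  # grand totals (2,988 cells in this study)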


Along with the Excel sheet I was using to code the data, I also utilized Microsoft OneNote to keep a more detailed account of the codes and themes that were emerging. In the OneNote document, I kept a detailed description of who the participants were as well as of which interviews the codes and themes could be found in. Doing this allowed me to gain a better understanding of the data and to use the same coding scheme for all the interviews. Although the types of writing and response being discussed varied from class to class and from student to student, the same set of codes applied to all participants.

Coding

Since one of the purposes of this study was to hear the students' side of assessment, it was important that the codes be as true to the students' voices as they could be. In creating the coding system, I strove to account for the richness of the data to the best of my ability, which I believe I have done. To best represent the students in the study, the data was not coded for positive/negative/neutral reactions.

Not coding for these attributes was a conscious decision to avoid dichotomizing the types of reactions students were having to response. For example, Grade Talk (GT), one of the codes I will discuss in more detail later, is something that cannot be avoided: I consciously avoided raising grades with participants, but they usually came up in our conversations. With no grades in the classes, students at times found it difficult to gauge how they were doing in their writing, so for lack of another way to talk about a paper without grades, students often turned to what they knew (grades). For another code, Past Experience (PE), it was not important to code for positive/negative/neutral because these were things that had happened in the past, and regardless of their polarity, the experiences shaped students' views on response. I realize that had I chosen to code for the polarity of comments I would have different analyses/numbers, but to avoid researcher bias32 I kept the categories ambiguous33 so that they would include all comments that fit the codes. My "being" in the room and interacting with the students and teachers acted as my compass for deciphering the codes and the reactions students had to response. Since response is more than just what is on the page (Freedman, 1987), my "being" there and the various types of data collected provided me with a robust understanding of response in the classes. Since assessment, like so many things in our lives, is about relationships, the relationships I constructed by "being" in the classrooms allow me to discuss the students and response more meaningfully. Using ethnographic methods proved advantageous for my research because it allowed me a fuller picture of the participants and their voices. The data that I present in Chapter 4 demonstrates that the codes accurately capture the students' perspective, highlighting the students in the room.34

Above I have described the process I went through in collecting and analyzing the data; now I will explain the codes and provide some examples to make them more concrete. The codes are broken down into two sections: the Nature of the Discourse (four codes) and the Content of the Discourse (fourteen codes). The Nature of the Discourse codes were used to find out what the students thought about writing, the class, and the portfolio. The Content of the Discourse codes were used to see how students specifically talked about teachers' response.

32 I use the term bias even though it typically carries a negative connotation, because I believe we lack a better term to describe our epistemological view of things. When I use bias here, it is in neither a negative nor a positive light; it simply represents my inclination to believe and make sense of things.

33 For many of the codes the polarity of the comments did not matter, so I consciously avoided polarizing any comments, instead focusing my coding and analysis on the types of things participants discussed. Many categories also would not have had polarity, so by "being" in the classes and interacting, I chose to use my "being" there as a theoretical lens rather than the polarity of comments.

34 Sandra Murphy's 2000 article on the deficit in response research asks, "is there a student in the room?" In Chapter 4 I attempt to start answering the question she asked nearly two decades ago.


Nature of the Discourse

Talk About Writing-General (TAW-G)

Table 8 shows some examples of the TAW-G code, which was used first to find out what types of reading and writing the participants were used to doing on a daily basis. The TAW-G code helped to build a better picture of the literate activity that the students engaged in. TAW-G codes represent times when a participant talked about reading/writing in a generic way or about reading/writing done in other classes or in the past.

Table 8: TAW-G Examples

Robert: Well I have been working on a um a book actually a novel, a sci-fi novel. It has been on hold a little bit due to personal issues. So I do that. I keep a personal journal, when I get the chance. I enjoy writing.

Chloe: I do, um like for right now, um I like reading informative like books, like one right now is um "The Tipping Point" by Malcolm Gladwell, and it is basically just talking about how epidemics happen. um, you know I guess out of the blue so to speak, how they just overturned to be this big major thing, so business informative reading I guess. Um, and I do enjoy reading so I don't know if that helps in this (laughing), but.

Will: I have journals. Yea I've been writing my whole life like you know since I was like 13, 12. Poems when I was like 12 when I was in my angst period and then like just going on with it so.

Talk About Writing-Specific (TAW-S)

Unlike the previous code (TAW-G), which looked at reading/writing that the students did generically, the TAW-S code, shown in Table 9, was used for times when participants talked about writing specifically for the classes: journals, blogs, papers for the class, peer review, memos, etc. These codes allowed me to focus on comments students made specifically about the response they were getting from their teacher.


Table 9: TAW-S Examples

Ryan: His feedback, um he said it was good for the most part but um, his main thing was adding some context to certain things so like when I put a quote in I didn't really say what were the circumstances for her writing that, um, so yea that is going to be some big things, from the difference between the, the first draft I handed him and this wasn't really that I changed much I kind of added more, um, so I mean like for the final draft I'm going to look at his comments a lot more because I'm going to be done adding, I'm just going to be editing stuff so.

Jake: That comment I believe wants me to take and change or remove that example because with that example I want to show that the discussion method and the discussing before you actually make the lecture helps the students form their own opinions and ideas and that was my goal with that example to say that, but he says that is isn't really a new or novel idea and that this class is structured very similarly to that and I think it all depends on the audience that you are thinking of in mind…

Talk About the Class (TAC)

Table 10 displays the TAC code, which was used when participants made reference to the class, activities they did in class, or the teacher. This set of codes represents attitudes and feelings that students expressed about the class or the teacher. Although TAC was not textually specific, meaning it was not tied directly to their writing, it helped me understand the student's role and position in the classroom. This code facilitated understanding what attitudes the participants may have had about the class or instructor, providing a more nuanced look at how they viewed response.

Table 10: TAC Examples

Cory: Well I could tell from the beginning, from the start of the semester that ahh Mr. R is a he is, he looks at the the character behind the writing and then the the grammar and the spelling and that sort of thing…

Alice: College Writing 1 um, we did a lot of I guess reviews I guess, we read a bunch of books and then we watched like a bunch of movies um and had to do like reviews on them and then here (Rodney's class) it's mainly about education which I guess is not my strong suit but I can write about it. Like the articles he gave us are interesting but I guess I'm just more fantasy like in my mind like made up stories and actual teen problems like I can relate more to.


Talking About Portfolio (TAP)

Table 11 has examples of the TAP code, which was used to gain an understanding of what students thought the portfolio was and how they felt about it. Some participants had experienced portfolios in the past and others had not. This code was used to find out what the students' perspectives on the portfolio were. The code also aligns with my previous research looking at how teachers think of portfolios, and it sheds light on the many different ways portfolios have been used.

Table 11: TAP Examples

Robert: Yes they were getting grades on them and then you would have an overall portfolio grade. Like you did a couple papers and then you got a grade on that paper and then you would slap them in the portfolio. Only difference was you didn't re-due it like he does it. He gives you kind of an informal thing and then you do it, but in high school they would give you a grade and then you would put it in the portfolio and you would you know present that at the end of the class or semester year I guess of the class.

Will: Um I see it [portfolio] as kind of the application of a lot of what Dewey is talking about in a certain way where we are entering, okay I'm going to say it's like a realm in a sense because it is kind of outside a lot in what we are doing a lot in other classes, um I see it as um, this way of giving students more freedom but while also provoking more um, critical thinking which I think is the point of education, I personally have benefited from this class cause I never really had put together a larger portfolio which I obviously don't think it is large in the context of like certain like real um, academic, academic people um like maybe with PhD's and stuff you know you have people who have thousands and thousands of works put together and that might be hyperbole but a lot of people have a lot more stuff but the point is, is that it has opened my eyes to kind of how, I don't know I think it is a better approach to teaching ultimately.

Kara: Um, no I had college writing one um with a grad student freshman year and we got grades for every paper that we turned in but then we had to re-turn them all back in and she like went over them again and then gave us our grades for the last two papers, so I don't know it wasn't really a portfolio but she called it that cause we had to turn them all back in and stuff, I can't remember if she let us revise them when we turned them back in but she might have.


Content of the Discourse

Past Experience (PE)

The PE codes in Table 12 were used to mark students drawing on past experience with response/writing to make judgments about their writing and response. PE comments were instances where students referenced classes they had taken in the past (high school, other college classes, etc.). These codes capture experiences students had previously with other classes and teachers, and they helped identify how participants viewed response and writing in general.

Table 12: PE Examples

Stella: Yea I mean I've never done anything, I mean I've done book reviews before in high school or middle school or whatever but they were completely different like you summarized the book like it wasn't like.

Claire: But like I said I have done writing like that before. Well I guess you could say that I am worried about all the research. I can sit down and write you a really solid paper and I can go back and find stuff but it's such a process and it is just so much easier to just sit down and write. You will probably get a much better paper if you do it the long complicated way. But you can always revise and go back. I like to have a large structure before I start. You know what I'm saying?

Jake: I'm fairly familiar with that style I mean in high school my teachers would write on your paper they would have red pen but I never had a professor really look into the content of the paper which is really a bad bad thing.

Teacher Expectations (TE)

The TE codes, shown in Table 13, reflect participants anticipating what the teacher wants or expects of them in their writing. Students often changed or thought about changing their writing because of what they thought the teacher would want. TE codes marked instances where it was clear participants were trying to feel out what the teacher wanted. In segments coded for TE, participants were focusing on what they thought the teachers expected of them. Sometimes these were students trying to please the teacher, other times they were writers trying to think about audience, and sometimes the boundary between the two blurred.


Table 13: TE Examples

Alice: I changed the first part of this because I said as to any argument there's two sides and he didn't want me to do that.

Arthur: I wish it didn't have to be like that, like I wish I could just write what I think is my personal best but ah college writing I had like a huge issue with the class in general because it's so subjective you know one teacher can say your writing is terrible and the next one is going to say it's good so I mean really kind of like a lot of other classes in college you're just trying to give the teacher what they want rather than giving your brain what your brain needs you know.

Anna: I mean with any teaching I feel that you have to give the teacher something of what they want or you're not going to get a better grade cause to an extent you know it has to be something that they think would be worthy if that makes sense, or maybe that is just how my brain is trained with my other classes you know.

Stella: I want to make sure that I am doing every single thing that I have to do like every blog, every reading, everything because I like want a good grade which sounds bad because then I'm still working for the grade.

Student Talk (ST)

These are times when participants acted and talked like "students," making reference to matters such as grades, page limits, timelines, or what the teacher wanted, or times when they did not know what to do with their writing. Some examples of ST can be seen in Table 14. The ST category was used to capture instances where participants clearly acted like students and were concerned with aspects of the class rather than the content of their writing. Frequently, sections of text coded for ST were also coded for Teacher Expectations (TE). The reason they are separate codes will become clearer when looking at the next code, Writer Talk (WT), and the need to keep certain codes from being collapsed.


Table 14: ST Examples

Alice: I knew what I was getting in College Writing I so I guess it wasn't as scary like I knew what I was going to get, like I could only better my grade. I couldn't like worsen my grade, but now I don't even really know what I have which is kind of frustrating. I mean I like that we can turn in a portfolio but I just wish that I knew where I was headed in the class if I was.

Arthur: Um, I'm concerned about passing this class like I really haven't been able to do all the assignments that he has given out for a number of reasons like writers block is kind of one of them and lack of motivation is another and just well I kind of signed up for too many classes this year and it kind of stressful and stuff so I haven't been able to get all the assignments completed so I'm hoping I'm going to try my best and hopefully I can pass this class…

Holden: Right and that is just, I mean that right there is like 30 minutes of work cause I have to re go in there and go back to the book and reread sections of the book to go find it cause I mean it was, I read the book on Kindle and I didn't know how big it was until I started reading it, it was like a 270 page book.

Writer Talk (WT)

These are times when participants stepped outside of Student Talk and spoke with authority about their text and the response they received. At these moments, participants frequently adopted a "take it or leave it" mentality toward some aspects of the feedback. Table 15 highlights WT moments where participants disagreed with or appreciated and agreed with teachers' feedback: talking about how it would strengthen their papers, speaking about ideas for expanding content, talking about revision as putting in the intellectual time, etc. In WT moments, participants clearly took on the role of a "writer" and spoke the way writers do when discussing decisions they made in their texts. WT clearly contrasts with ST because in WT moments participants were focused on the writing and on expanding ideas rather than on grades, page limits, or other matters that often prohibit students from moving forward with their writing. Writers knew what to do with their text, whereas students were worried about lower-level concerns.


Table 15: WT Examples

Kara: Exactly. Yea and even with like this book review we wrote the rough draft we got it peer reviewed and then fixed it and I turned it in and then I got his comments and I was like well that makes sense and I should change this or like even some of his comments I was like ehh he asked me to elaborate on a chapter and I was like, to me he didn't read the book but to me that chapter was not important so we are allowed to say, like I like that we are allowed to say I didn't elaborate on this chapter because I didn't think it was important to the person reading the book review to like know more about that chapter you know what I mean so.

Ryan: Mmm, not really worried because I can make an argument saying that I mean we are doing a thing where we are trying to socially correct things, trying to be politically correct about stuff and I think this is a big I mean the common idea of the revolutionary time period is all white men I think adding this to it even though she wasn't there she did have there are several quotes in here where she wrote to John and to the other delegates in congress so I think that is a big an important thing for today, I guess for today's society you know include minorities so.

Jake: Yea, I kind of have an argument going with the paper that the test is biased and this is supporting evidence for that but rather than just saying here's some numbers I need to develop that argument here and use the numbers in the argument rather than just explaining them and so I think I got a little carried away with just trying to explain what I saw in the graph.

Taking Ownership (TO)

The Taking Ownership code marks participants speaking with authority and knowledge about their text and topic. As Table 16 shows, this code is similar to the WT code in the sense that participants take ownership of the text and speak with authority, but it differs because both students and writers can take ownership of their writing. When coding for TO, it was possible also to code for either ST or WT; TO was not exclusive to either. For example, participants who acted more like students and exhibited ST could still speak with authority over their texts and feel that the teacher was wrong in their reading; in cases like this, TO marked a participant defending the decisions in their text. Participants who exhibited WT also spoke with authority, but where writers saw response as something to work with, students saw response as a critique. This code helped to differentiate how students and writers talk about their writing; since both can take ownership, the TO code made a finer-grained distinction.

Table 16: TO Examples

Will: No he said that he was saying that this one guy is um, more termed as the founder but if you actually look at all of Dewey's work and how they reference him online they say that he is considered a founder and also the other guy, like they are hand in hand together um, Dewey if you read about him he is considered a pragmatist so I think it's just the way that Brian is um, trying to help me weed out some of the possible critique that I would get from that so I think that if I am trying to specify and be concise and have clarity with my writing it might be good to leave out certain things because you obviously can't bring it all up in one review I mean every single factor is not going to be able to be you know what I mean.

Holden: I mean when he said here that I should explain it, I feel like this right here is a very good explanation of it this curve which he spends much time writing about in as a scale used to evaluate the complexity and level of critical thinking and cognition, and that's I feel like that is a pretty good definition for it, I mean me personally.

Interaction (INT)

Interactions like the ones in Table 17 were the hardest comments to define a code for and to code. The reason INT was so difficult was how closely it is related to other codes. INT marks occurrences where it is clear that participants are having an interaction with the text/response/ideas being presented to them. Although any reading of response can create an interaction, the occurrences coded for INT were ones where participants were more involved than in other instances and were engaging with the response/text on a level that other codes could not account for.


Table 17: INT Examples

Anna: (laughing) And I think it comes from being a perfectionist too cause I don't really care about the things I am doing right, I want to fix the things I am doing wrong, (laughing) so. Um, and then he puts "first of all think about the organization of your text do you say anything more than once did you find yourself wanting to talk about something in more than one place" and I went back in and I highlighted and I took two different colors so um, at first I was like ummm, I don't think I do, I don't do that like I think you are crazy then I went back and highlighted um similar topics and I had like a lot of pink of things that were related and I did like multiple different colors so if I had a lot of pink which was um, where I was trying to um pull up points from the book they were all extremely repeated I had a very it was annoying after reading it of how repetitive my paper was but I didn't think at first when he said it was so it opened my eyes cause yea it was but I fixed that.

Will: Okay so first he talks about um my intro the organization of it yea in my intro um introducing who Dewy was what will be talking about but I didn't um fully talk about what each chapter would be in and I personally think that was because the way that I look at it but I did get what he was saying so I developed a more stable thesis or like a thesis that's easier to draw on while for the reader um, he talks about getting a better sense of the book so I think that's what I was just talking about, about giving the reader a better sense of what is going on so that kind of goes into the same thing. The main ideas so he talks about organizing my thoughts and the ideas around, um what I'm talking about throughout the book, um he talks about the term innovative realist which is actually not what I used it was innovative liberalist (laughing) so I was just referring to him as being a liberalist of his time after I introduced him as a late 1800's figure within education and psychology and stuff so he was saying coin that term but I didn't use realist I think that was just him skimming.

Response Clear (RC)

Table 18 displays the RC category, a set of codes looking specifically at teacher response and understanding. RC comments represent occurrences where participants understand what the response means: the response is clear, they are connected to what it says, and it acts as a guide/map providing direction. RC responses make sense to participants, showing them where they are, where they need to go, and how to get there.


Table 18: RC Examples

Kara: So he is like "show some demographics about the different people in schools that might be a good place to bring some numbers in" so my revised paper I put in these charts about that, um.

Stella: Kind of, but then he said "break this data down a bit for us, what do you see in the data how is a reader supposed to read it" so I felt like when I reread it, it does like it kind of confuses you, like if state results were better than those from the national tests don't you think it would be, and you're kind of like what?

Chloe: What he is saying in this review, it does make sense like what he is asking me to do, I do have to go back however and reread my paper and try and make sense of that you know what I mean so yea I guess to really understand I do need to go back and look and kind of take his perspective for a second and I think you know it's not very complicated, I think that he just wants a little more organization or better organization um, and you know more context you know and review the book once more, hone in on more specific so not short and sweet but yeah.

Response Unclear (RUC)

As can be seen in Table 19, the RUC category is the opposite of the RC category. RUC codes were used for instances where participants were not sure what response meant, were disconnected from it, found it unclear, or were given no direction for where they needed to go with the paper because of miscommunication or misunderstanding. When response was clear, participants had no trouble talking about what the response was asking them to do, but with RUC it was evident that participants were not sure what to do next with their papers.

Table 19: RUC Examples

Holden: I think he should do that, he should definitely have a change with that because some of those comments just left me confused and he says we can respond to them and ask him about the comments but how does he want us to respond. Does he want us to email him, does he want us to just return the paper with some counter comments to it, like I don't know how he wants it you know it.

Jake: He is trying to explain to me who they are a bit more and I do see that, but I don't necessarily quite see how that is relating back to my writing other than I am writing about those two and their ideas and their thought processes and he obviously knows them more than I do but I see kind of that I should like emphasize their ideas as being very important because those are very important figures and Freire is well known and his work has influenced even the other author of the book Ira Shor and so I should look into what he says and view it in a different light I guess, I'm not quite sure what that will be but I kind of want to talk to him and see what those two bullets in his mind mean.


Response as Dialogue (RAD)

The RAD code accounts for participants describing response from their teacher as a dialogue or a conversation, as the teacher coming down to their level, as response they have never encountered before, or as response making them think critically about a subject. Table 20 shows examples of RAD moments. RAD comments almost always appear to have a positive feeling attached to them and are easily relatable and understandable for participants. This category emerged in relation to Kynard's (2006) approach to responding, or "getting down" with students on the text. RAD comments attempt to capture and highlight atypical moments with response.

Table 20: RAD Examples

Cory: Um, actually no I don't think so, I think again it sounds like we are having a conversation he is intrigued why, why wouldn't you follow up with that or if that is the case, if that is the comment that I'm thinking he's thinking of, um, but I don't think he is saying that you should put you know that you should explain this experience I think he was just curious why I didn't do it and I think ah, um, not offended or anything but I would assume that it is more personal, you know what I mean and again conversations, you know it is very, very communicative…

Kara: Yea that's like this class is blowing my mind, like every day I'm like oh my gosh like I learn something new or not even something new but like we elaborate on stuff that yea his comments definitely every time he is like think about this why did you say this, what do you think we can do about this and I'm just like shit like he's blowing my mind like I'm just like and then I try you try to elaborate but it's like eeeeh, he challenges.

Jake: With the rest of the paper here I've looked at everyone like it is something I need to change and so I really haven't looked at any of these comments in this paper as just dialogue.

Verbal Response (VR)

I used the VR code to identify instances in the data when participants mentioned things that a teacher had said to them verbally in class or in a conference. Table 21 shows examples of response that fit the VR category. VR codes specifically referenced some form of verbal response to a paper that participants were working on. The VR comments varied from class to class, with Francis's class having the highest count because of its size, only eight students, which allowed frequent mini conferences to discuss papers and ideas.

Table 21: VR Examples

Leigh: But right now I have to rely on his word that I am on the right track.

Tony: It did at first but um, knowing that I can go talk to Bill whenever I need to and get like immediate feedback like on the fly that is nice to know. So like at the beginning I was like oh crap it's a portfolio class this is not good, but now I don't feel so bad.

Cory: Yea and I think that is also because I have been talking with professor when I get the chance or wherever we are both, we are both kind of sitting around. Um, but ah yea he has kept me updated and that's obviously comforting and ah he is yea I guess that is the best way to put it.

Grade Talk (GT)

Table 22 has examples of GT. Grade Talk marks moments when participants discussed what grade they thought they should get, what grade they believed their paper or work in the class deserved, or grades in general. These were any instances when students talked about grades in some fashion.

Table 22: GT Examples

Kara: Right, yea I'm always wondering like what my grades going to be but I'm not too worried you know, life goes on.

Anna: I feel like my grades are just a representation of just how obedient I am, and that is sad.

Holden: So I do like the idea of grades because I want to get as much funding as I can and better grades equals more funding, so and right now I have like a 3.9 and I want to keep that so hopefully this portfolio thing works out for me.

Self-Assessment (SA)

Table 23 has examples of SA from the interviews. SA marks moments when participants discussed what they thought their papers needed or lacked, or where they believed the papers stood in terms of a grade. These moments usually came up when looking at specific papers/response from the class. Many students had a good sense of their papers before they turned them in, or of areas they thought the professor might comment on. Many of the participants, especially ones who exhibited a lot of Writer Talk, were fairly good at assessing themselves before the professors offered feedback, though that was not always the case. SA seemed to come from a lot of different places, but it was clear that many participants were able to think about their writing objectively and that teacher feedback helped to reinforce many of the "uneasy" feelings they had about their writing.

Table 23: SA Examples

Robert: Um, satisfied I guess I would have been, um, not that concerned about it with his response he gave, there are some places that um I knew there would be some issues with and he identified them so it was kind of just reinforcing what I had knew I guess.

Stella: Um, I'm just afraid because I know that he said some students wound up completely re-doing their papers after he gives them feedback and I'm like afraid that I'm not going to do enough because I feel like all I really need to do is like clear some things up and add some more detail and kind of like I don't know make my writing like a little more smooth and like.

Will: I mean it's not as it's not ready for all the, the scrupulous critiques that you would receive from publishers or different things it just not ready so it's like when you are training you're an athlete and you're training and you're just not ready and you know it because you can see the competition, you can see what's going on up there…

More Is Not Better (MINB)

These codes are loosely based on Lakoff and Johnson's (2003) orientational metaphors, where people think in terms of direction: up is good and down is bad. Lakoff and Johnson develop a More Is Better (MIB) metaphor in which having or getting more of something is better. A good example is McDonald's, where any size drink is the same price; people usually buy the biggest drink because, for the same price, More Is Better. The MINB category is the opposite of MIB in relation to response, because the more response you get, the more you have to fix. Examples of the MINB code can be seen in Table 24.


Table 24: MINB Examples

Anna: The way I think about it is if a paper has more feedback there is more things for you to fix and that would be and maybe it is from my high school teacher as well because the students that um got less feedback she would write great first draft not much to fix here's some small things to fix you're on your way and then students that had a lot of feedback was okay like great first draft you need to fix a lot of these things to get to where I want you to be so maybe that is the stigma I have.

Chloe: It's the amount and the in-depth, no how should I say this? It is the amount.

Holden: It's clear what he wants me to do and he gives me a suggestion on how to go about it, but it's just so much.

Time Management/Time Is Money (TIM)

This code is also loosely based on Lakoff and Johnson's (2003) metaphors for time. Table 25 has examples of the TIM code, which was used to mark instances when participants referred or alluded to time as a commodity or described something they did to save time for the sake of efficiency. Any reference to time in regard to their work was recorded in this category.

Table 25: TIM Examples

Holden: I'll pretty much be at my 10 pages for my research paper because he keeps, he mentions in here that he wants me to find more cites, find some more sources but I probably have about 5 hours of time just searching the web for studies because 90% of the studies they publish are against video games not for them so it's hard.

Arthur: Um, well I'm trying to incorporate research into my one paper, I don't really know like where to get good research I mean I can do like a Google search but you know that's not going to give you too strong of research, um I don't know like there's other ways you can get research but it's just like I don't want to go through hours of mumbo jumbo for something that really isn't that important you know.

Chloe: You could say yea I kind of have, like a heaviness, stress that would be a good or anxiety, no anxiety I feel that that wouldn't go into this category but yea I would say again just stress and um I do need you could say heaviness because of the extra time I need to dedicate now to this class in trying to balance all the other classes and all time dedicated to other homework and other um, you know activities to complete so to speak.

Interrater Reliability

After coding was complete, to ensure that the coding scheme worked and was defensible, I compiled a random sample of participant transcripts to give to another graduate student for a reliability check. The sample consisted of 38 pages out of 232 (16%) and included seven participants: two from Francis's class, two from my own class, and three from Rodney's class. The 38 pages of transcripts selected accounted for 587 of the 2,207 codes (27%) in the whole corpus. The outside coder was also a PhD candidate, working on a dissertation in assessment and familiar with my project and research. After explaining and working through the codes, the outside coder and I had 10 disagreements out of 587, giving us an overall agreement rate of 98%. The degree to which my outside rater agreed with my codes lent strength to my coding method by demonstrating that the codes could account for the data.
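The figures above are simple proportions; as a check on the arithmetic (nothing here goes beyond what the paragraph already reports), a brief sketch:

    # Simple percent agreement between the two coders, plus the sample
    # fractions reported above; all figures come from the text.
    sample_pages, total_pages = 38, 232
    sample_codes, total_codes = 587, 2207
    disagreements = 10

    print(f"{sample_pages / total_pages:.0%}")                     # 16%
    print(f"{sample_codes / total_codes:.0%}")                     # 27%
    print(f"{(sample_codes - disagreements) / sample_codes:.1%}")  # 98.3%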

Conclusion

Taking an ethnographic approach to data collection and analysis, this study focuses on the students' voices in assessment and response. By "being" in the classrooms and interacting with teachers and students, this method of data collection and analysis provides a more robust understanding of the roles response plays in a classroom. Although this research is not a full-blown ethnographic study, its methods, collection, and analysis borrow heavily from ethnography and help shed light on how literacy and assessment enter and function in a classroom. In Chapter 4, I present the findings from my study and provide an analysis of the students' point of view on response. Looking at these "assessment events," my research and analysis explore the interactions that occur in classes and show how literacy and assessment can be used together to create a more meaningful exchange between teachers and students.


Chapter 4: Results and Analysis

“I feel like my grades are just a representation of just how obedient I am, and that is sad” (Anna)

“I like it, uh it’s like a lot of people in the class are feeling anxious about not getting grade but I actually like that, I like not worrying about it, um I just do the assignments like however I want like I'm not worried about making sure my grammar is good” (Kara)

“Yea and I mean like it is a lot of work but knowing that I'm not getting like a grade right now I want to make sure that I am doing every single thing that I have to do like every blog, every reading, everything because I like want a good grade which sounds bad because then I'm still working for the grade” (Stella)

“I think it is smart, I think if a professor can actually see if you have actually like shifted your, I almost want to say like paradigm because like some people are literally going to be blown away by the things they are going to realize I think and it’s not about grades I think it’s about actually being able to critically think about you critically thinking and realize that you’re not always critically thinking especially in you are in classes where you are forced to put things out in a certain time because it’s not going to be the best work and the fact that your reviewing these three different times technically its, it, it’s good, it’s helpful you'll actually, I think you are actually learning real learning… The other classes are just the standard bullshit. I think it’s bullshit the traditional” (Will)

Introduction

I start this chapter with quotes from participants in the research, grounding the chapter in students' voices so that we may begin to understand their perceptions of assessment and response and how they engage with them. Little is known in our field about how students perceive our feedback (Murphy, 2000; Mathison-Fife and O'Neil, 2001) or what they think about portfolios and assessment in general. From a very young age we are steeped in a culture of assessment, from standardized testing in schools to later in life with popular social media like Facebook, where you can take a "test" to reveal which Disney, Star Wars, or Marvel superhero character you are; we are largely an acquiescing culture. Our cultural acceptance of testing (Hanson, 1993) has created a generation of students who have become "obedient," or unsure of where they stand in a class if a number, letter, marking, or grade is not present, as well as others, like Will, who think standard classes (classes with grades) are "bullshit" and are glad not to receive grades. This research addresses the elephant in the room (Murphy, 2000), the student's perspective on classroom writing assessment and response, shedding light on this dark area.

When assessment is linked to learning, assessment has the potential to create a better educational environment, making expectations clear for teachers and students while also increasing student performance (Sadler, 1989; Murphy and Grant, 1996; Black and Wiliam, 2004; Shepard, 2006).

This chapter focuses on the analysis of 26 interviews with 17 students in 3 classes. Using the data collected from interviews, this chapter highlights the students' voices. From the interview data, I present the students' perspective by focusing on various interactions they had with classroom writing assessment. Taking an ethnographic approach and "being" in the classes, participating and interacting with teachers and students in various capacities, I provide multiple analyses of the data collected. This chapter is organized around a case approach, focusing on students, to answer my research questions:

(1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading?

(2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote?

(3) How does teacher response and assessment act as a sponsor of literacy?

(4) Are there students in the class who do not interact with teacher response, and if so, why?

For my analysis, I use Heath's "literacy events" as a focal point to discuss the different types of interactions students had with teachers' response. The "literacy event" is a concept Heath extrapolated from her decade-long study of literacy among individuals from two different communities in the Piedmont Carolinas. "Literacy events" expand our notion of literacy and learning because they focus on how social activities structure individuals' literate practices. I extend Heath's notion of a "literacy event" to include "assessment events" (Maddox, 2014; 2015) to understand students' activities in classes that are structured around a portfolio with delayed grading.

In this study, an "assessment event" is a conceptual tool used to focus on the interactions that students have with assessment and teacher response in delayed grading portfolio classrooms. Like "literacy events," "assessment events" are not passive occurrences; they require interaction. When students receive final grades on papers, they brush over the response (Bardine, Bardine, and Deegan, 2000; Knowles, 2014; Knowles, 2015), and there is no action with the text, only a reaction to it. If students see summative feedback (grades), what we typically see is a behavioral reaction that does not include a textual interaction. When students are given summative assessment with no chance to use the feedback to improve their writing or to revise, the ability to interact with response is lost. However, if a student is given summative feedback and allowed to use that feedback to revise, then there is a possibility for an interaction. The problem with grades, or any system of assessment that emphasizes scores/rankings, is that conversations about writing are mostly one-sided, and response is viewed by students as a way to get a better grade (Bardine, Bardine, and Deegan, 2000). "Literacy events" are about reciprocal action, but in classes with summative grades reciprocal action typically does not occur because students have little to no role in reciprocation. In delayed grading classes, on the other hand, formative assessment can act as a catalyst for "assessment events" because (1) it helps students see where they want to go, (2) it lets them know where they are now, and (3) it helps them see how to get there (Shepard, 2006). Providing directions to make a piece of writing stronger, formative assessment encourages students to think about the text differently (Sadler, 1989; Black and Wiliam, 2004).

Classroom writing assessment systems that employ grades/scores/ranks, grade contracts (Elbow, 1997b; Asao, 2004), or co-constructed rubrics (Turley and Gallagher, 2008), where the focus of the assessment becomes the product rather than the process of revising from feedback, are less capable of creating the types of interactions that delayed grading and formative assessment encourage. Typically, with grade contracts or rubrics, teachers set up categories and then connect those categories to a value or a grade. Writers do not compose from rubrics; rather, they draw on rhetorical moves or obligations for each type of paper. It is possible for grading contracts and rubrics to be used to promote writing development by a talented and experienced teacher, but they are difficult to link to contemporary theories of literacy and "literacy events" because the student-teacher interaction is grade-focused. Certain uses of grade contracts or rubrics transported from other contexts tap into theories of "autonomous" literacy (Greve, Morris, and Huot, forthcoming) and are not capable of creating interactions. Assessment is not transportable (Murphy and Ruth, 1993; Maddox, 2014; 2015) and must be locally constructed and used in a way that aligns with curricular goals and other contextual influences. Delayed grading and formative assessment are capable of tapping into more powerful theories of "ideological" literacy that revolve around how literacy is used to achieve various goals.

The analysis presented in this chapter focuses on the literate activity of assessment and shows assessment and literacy interpreted from the students' perspective. Focusing on the students' voices and interactions illustrates how students engage with response from teachers. The cases discussed in this chapter highlight the types of reactions students have to response.


Students and the Portfolio Results

My study's findings support the notion that portfolios' effectiveness and potential depend upon the ways they are used. Earlier research (Greve, 2014; Greve, 2015) indicates that portfolios can mean different things to different teachers, an idea also reflected by students in the study. Table 25 shows that seven (41%) of the seventeen students who took part in interviews reported previous experience with portfolios and ten students (59%) did not. In Chapter 3, I made the argument for constructing my own definition of "expert teachers" for teachers using portfolios, because I believed that in order to observe interactions, I needed sites and teachers whose pedagogy supported and enacted delayed grading classrooms that provided an environment for writers to work.

Table 25: Students with previous portfolio experience

Teacher    Student    Experience with portfolios
Rodney     Cory       No
           Robert     Yes
           Jake       No
           Arthur     No
           Will       No
           Anna       No
           Chloe      Yes
           Brooklyn   No
           Alice      Yes
Curt       Holden     No
           Kara       Yes
           Stella     No
           Claire     No
Francis    Ryan       No
           Tony       Yes
           Leigh      Yes
           John       Yes

Totals: Yes 7 (41%), No 10 (59%)

When I approached both Rodney's and Francis's classes about my research and explained my IRB protocol, I asked the students how many had previous experience with portfolios. In both classes only a handful of students raised their hands, and many seemed completely oblivious to portfolios.[35] Rather than talking to the students further about portfolios, I decided to wait to see how the instructors would present the idea of portfolios to the students.

When Rodney came back into the room after my initial recruitment, he immediately started talking to the students about the class and syllabus. After explaining what the class was about, who he was, and the portfolio, approximately 15 minutes into class, Rodney asked why nobody in the classroom had asked about the portfolio. The students in the class looked disoriented by his question. No one asked Rodney anything about not getting grades. During their silence Rodney began to pass out his portfolio handout (Appendix B) and asked if anyone knew what a portfolio was. Alice, who had been in a portfolio class before and was a participant in my study, raised her hand and asked if a portfolio was "a second chance to get a better grade?"

Rodney immediately told her and the class that "A portfolio makes you a writer because it puts someone in the same position as a writer." He continued to talk about his own experience with writing and how he had rarely had an article accepted for publication right away. Rodney told the students, "Doing a portfolio is working as a writer, and you will never get a grade from me. You will get a lot of feedback. The research about teacher's response shows that teachers spend 60% of what they say justifying the grade. Most of what you get is the final grade. They are the teacher and you are the student and that is what they do. None of my feedback is about justifying something that doesn't exist. It is all about the writing." Continuing to explain portfolios, Rodney asked the question, "What do students do?" and then answered it: "They pay attention. They do what they are told. Do writers do that? Hell no. They get us to think about things in a different way."

[35] In my own class I made observations similar to those in Rodney's and Francis's classes in regard to students' exposure to portfolios.


I begin with a description of the first day of Rodney's class, when he introduced the portfolio, to exemplify his approach to teaching as well as to set the frame for how all the teachers in this study worked to create a space for writers' writing. Francis did not go into the same depth about what a portfolio was, but he told his students that their class was "portfolio driven" and structured much of his instruction and activities around the portfolio.

Although I do not have data about my own class or about the first day when I talked with my students about portfolios, I frequently talked about my research and interest in portfolios. In the final paper, two of my students did some portfolio research spurred by the discussions we had about portfolios and education, the theme of the course. In all three classes, the ideas about portfolios that students expressed through the interviews can be used to think about their overall reaction to response/assessment, providing us with a better picture of how students interpret and interact with comments from teachers.

Table 26 shows that of the 7 students in my study who reported past experience with portfolios, 6 (86%) had received grades with their portfolios, with only 1 student (14%) reporting an ungraded portfolio.

Tony, in Francis's class, had been in my first-semester College Writing I class, in which I used delayed grading portfolios. None of the other students in the study reported being in a delayed grading class before, so the assessment (or lack of grades) was new to many. The prevalence of graded portfolios is also confirmed by Greve's (2015) look at classroom writing assessment practices, in which many teachers used grades to some extent to assess their students.


Table 26: Students who had been in graded/not graded portfolio classes

Teacher    Student   Graded or Not Graded
Rodney     Robert    Graded
           Chloe     Graded
           Alice     Graded
Curt       Kara      Graded
Francis    Tony      Not Graded
           Leigh     Graded
           John      Graded

Totals: Graded 6 (86%), Not Graded 1 (14%)

Murphy (1994a) calls portfolios protean[36] because of their ability to take on characteristics of the school and curriculum; portfolios have been built on rhetorical, cognitive, and behaviorist frameworks, making their adaptability a plus. However, not all uses of portfolios can successfully link assessment and learning or create a space for writers.

[36] It is important to note that Heath (1983b) also refers to literacy as a protean concept.

The difference between a portfolio and papers in a folder stems from how they are used.

Alice, who reported using portfolios in her College Writing I class, said that portfolios measure progress and growth, but she did not understand the differences between her previous class, in which grades were used, and Rodney's. Alice's misunderstanding of portfolios is similar to what O'Neill and Fife (1999) discovered from students in their study on portfolios and response. O'Neill and Fife found that many students based their reading of textual comments and assessment on past experiences with other teachers. Alice said that in her College Writing I class they received grades and had the option to revise the papers if they wanted to receive a better grade. This is possibly what prompted her response to Rodney's question on the first day of class about what they thought a portfolio was. Her past experience with portfolios was one where a portfolio "was a second chance at a better grade." Kara reported a similar experience with portfolios from her College Writing I course:

We got grades for every paper that we turned in but then we had to re-turn them all back in and she like went over them again and then gave us our grades for the last two papers, so I don't know it wasn't really a portfolio but she called it that cause we had to turn them all back in and stuff, I can't remember if she let us revise them when we turned them back in but she might have. (1st interview)

Robert reported a similar experience with portfolios, saying, "I really haven't had a lot of experience with portfolio, I might have had one or two when I was in high school, but there was a lot, of you know, informal grading that went along with it, and then you had an official portfolio grade overall so you kind of had to have an idea or a sense of where you were going and how it was looking through the course" (1st interview). John could not remember if he was allowed to revise portfolios from his College Writing I and II courses, but he did confirm that they were graded and then possibly summed up at the end. Chloe was a Visual Communication and Design (VCD) major; the program requires that all VCD students compile a portfolio showing their work and improvement through their courses. At first Chloe said the VCD portfolio was the same as the one in Rodney's class, until I asked her about grades, upon which she realized that the VCD projects had all received grades. The problem with calling everything under the sun a portfolio is that it confuses teachers and students and limits the potential that portfolios have to be a truly unique learning experience (Murphy, 1994a).

All of these students thought that a portfolio was about progression and improvement, but the way they measured that progression was through their grades. Grades acted as a guide for these students or, at least, an indicator of their progress. Chloe, in her second interview, explained to me that her GPA was very important to her. I then asked if it was important to maintain a certain GPA for her identity, and she replied, "Oh yea, definitely because I always go back to knowledge is power and you know gaining more knowledge is only going to benefit me and help me comprehend and rationalize and just develop as a thinker so it is really just important I think in life."

It is evident that Chloe and many of the other students believed their grade represented the value of their knowledge rather than what they actually learned. All three classes in my study were "non-traditional" in that grades did not come into the equation until the final week of classes. The types of assessment and pedagogy Rodney, Francis, and I used were what Paulo Freire (1974) would call liberatory, disrupting the typical dynamics of a class in which students are "receptacles" waiting to be filled. In the classes I observed, students were encouraged to think for themselves, to act as writers, and to speak back to teachers' commentary if they disagreed. Rodney set the stage for his students on the first day of class by telling them about the use of portfolios in the class and that he teaches writers, not students. Francis took a similar approach, asking students on the first day what they wanted to know.

Types of Response Results

In Chapter 3 I provided a description of the classes observed for this study as well as descriptions and examples of the codes used. Below, in Table 27, is a coding key with brief descriptions of the codes, for use with the other tables and abbreviations in this chapter. Across all three classes, the types of reactions that participants had to response were consistently observed. Participants' reactions were coded either for the Nature of the Discourse (TAW-G, TAW-S, TAP, TAC) or for the Content of the Discourse (PE, TE, ST, WT, GT, SA, TIM, MINB, RC, RUC, RAD, TO, INT, VR). There are multiple examples of what students said for each code in Chapter 3, so examples are not repeated in this chapter. Throughout this chapter, the codes will be capitalized, as in Writer Talk, to distinguish the codes from other text. The four Nature of the Discourse categories were used to find out what the students thought about writing, the class, and the portfolio. The fourteen Content of the Discourse categories were used when students specifically talked about teachers' response.


Table 27: Coding Key with Abbreviations and Small Descriptions

Nature of Discourse
  Talk About Writing - General (TAW-G): Reading/writing in general, or participants' past experience with reading/writing.
  Talk About Writing - Specific (TAW-S): Reading/writing specifically for the classes: journals, blogs, papers, peer review, memos, etc.
  Talk About Portfolio (TAP): Anything pertaining to a portfolio.
  Talk About Class (TAC): The class, activities, and teacher in the classes observed for the study.

Content of Discourse
  Past Experience (PE): Judgments made on response/writing based on participants' past experience.
  Teacher Expectations (TE): Participants' anticipation of what the teacher wants/expects in their writing.
  Student Talk (ST): Talking about writing like students: focusing on grades, page limits, timelines, and what the teacher wanted participants to do, as well as moments when participants did not know what to do with their writing.
  Writer Talk (WT): Talking about writing like writers: constructive discussion of their writing, adding content, strengthening the paper/ideas, writing from a standpoint outside the typical student discourse.
  Grade Talk (GT): Anything pertaining to grades.
  Self-Assessment (SA): What their paper needed or lacked, and where they believed it stood.
  Time Management (TIM): Time as a commodity, or something they did to cut time for efficiency.
  More Is Not Better (MINB): The "amount" of response as an indicator of the quality of their papers.
  Response Clear (RC): The response being clear: participants understanding what the response meant and connecting with its content, the response acting as a guide/map that provided direction.
  Response Unclear (RUC): Participants being unsure of what the response meant and disconnected from it: the response being unclear and providing no direction for where they needed to go with the paper, due to miscommunication or misunderstanding.
  Response As Dialogue (RAD): Response as a dialogue or conversation, with the teacher at the student's level; response they had never encountered before; response making them think critically about a subject.
  Taking Ownership (TO): Participants talking with authority and knowledge about their text and topic.
  Interaction (INT): Participants clearly having an interaction with the text/response/ideas presented to them.
  Verbal Response (VR): Remarks the teacher made verbally in class or in conferences.


Table 28[37] shows information about the 26 interviews with 17 students, noting the number of turns participants took, an important metric given my focus on the voices of students. In the interviews, participants spoke 65% of the time. These findings support my choice to focus on the student perspective. Open interviewing can be a challenging method of data collection because you never know what you will get. From these data, my observations, and my experience with the students in the study, however, it was clear they were happy to voice their ideas about assessment. On the first day of classes, when I introduced myself and my study, I made sure to highlight that my research aimed to uncover the dark territory in assessment and that I was interested in student voices, experiences, and stories. Some students showed more interest than others in volunteering to participate in my research, and all students who agreed to interviews were eager and willing to share their experiences and expectations of assessment. The neglected voices of the students in the room unveiled new data to consider in terms of classroom writing assessment.

[37] In Chapter 3 there is a larger table, Table 4, that shows a more detailed breakdown of the student and researcher turns.

Table 28: Interview turns talking totals

              Turns Talking   Total Words
Students      1,487           73,608
Researcher    1,501           39,814
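Since the turn counts in Table 28 are nearly even, the 65% figure appears to reflect the share of words spoken rather than of turns taken; a quick arithmetic check from the word counts above:

\[
\frac{73{,}608}{73{,}608 + 39{,}814} = \frac{73{,}608}{113{,}422} \approx 0.649 \approx 65\%
\]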

Table 29 displays the tallies of occurrences of codes from the interviews. Four of the categories represent the Nature of the Discourse (TAW-G, TAW-S, TAP, TAC), which accounts for 818 (37%) of the 2,207 coded occurrences; these categories were used to help gain an understanding of the literate activities of students as well as what they thought about writing, the class, and portfolios. The Content of the Discourse categories (PE, TE, ST, WT, GT, SA, TIM, MINB, RC, RUC, RAD, TO, INT, VR) account for 1,389 (63%) of the 2,207 occurrences and were used to determine what sorts of "assessment events" students were having with response. It is hard not to notice the variation in the types of comments students talked about in the interviews. That variation speaks to the individualized nature of students' interactions with teachers' commentary.

Table 29: Tally of occurrences of codes in participants' interviews

Rodney
  Cory: 3 7 1 4 2 3 1
  Cory: 14 6 4 2 2 6 3 1 6 7 7 1
  Robert: 3 7 13 3 3 6 5 1 10 3 2 1 3
  Robert (2): 1 15 4 3 1 5 2 3 2 7 2 9 7 1
  Jake: 17131565411 9 2
  Jake (2): 11 3 1 2 8 4 9 9 2 2 6
  Jake (3): 2 51 3 2 22 32 3 22 28 4 3 5 2
  Arthur: 4 24 9 1 6 4 11 9 12 10 1 4
  Will: 8 25 8 4 5 5 1 16 2 20 6 15 2 6 10 12 1
  Anna: 3 6 3 5 2 3 11 2 8 2 1 1 1 1
  Anna (2): 2 22 2 3 2 5 9 13 5 17 5 4 10 6 1 3 4
  Chloe: 4 4 3 3 1 1 1 1 3 1 2 2 3
  Chloe (2): 1 17 7 2 4 8 12 8 11 13 13 6 7 8 1 11
  Brooklyn: 4 7 3 3 1 1 6 5 1 1
  Alice: 3 20 4 3 5 10 9 2 4 6 1 1 3 13 2 6
Curt
  Holden: 5 42 18 7 3 21 20 10 8 10 7 30 1 4 2
  Holden (2): 2 27 2 1 7 7 2 8 2 9 9 3 4 3 2
  Kara: 16 27 19 3 12 3 5 6 9 22 6 4 1
  Kara (2): 3 47 3 3 4 1 19 16 21 4 12 1
  Stella: 9 23 7 4 3 7 7 3 10 5 2 11 1 1 1
  Claire: 18 15 6 2 1 6 7 2 9 7 1 4 5
Francis
  Ryan: 4 23 8 18 6 13 4 13 10 13 2 4 2 4
  Tony: 4 12 11 8 2 7 3 2 3 5 9
  Tony (2): 1 1 4
  Leigh: 8 7 2 7 5 1 4 6 4 4 5 1 9
  John: 12 3 2 7 6 10 6 2 9 2

Totals: TAW-G 120, TAW-S 456, TAC 148, TAP 94, PE 81, TE 141, ST 123, WT 131, GT 132, SA 188, TIM 78, MINB 24, RC 186, RUC 125, RAD 43, TO 52, INT 50, VR 35 (2,207 total)
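The Nature/Content proportions reported above can be recovered from the totals row of Table 29. The sketch below is a minimal illustration, not the author's analysis script; the totals are taken directly from the table:

```python
# Code totals from the bottom row of Table 29 (order as in the table header)
totals = {
    "TAW-G": 120, "TAW-S": 456, "TAC": 148, "TAP": 94,
    "PE": 81, "TE": 141, "ST": 123, "WT": 131, "GT": 132, "SA": 188,
    "TIM": 78, "MINB": 24, "RC": 186, "RUC": 125, "RAD": 43,
    "TO": 52, "INT": 50, "VR": 35,
}

NATURE = {"TAW-G", "TAW-S", "TAC", "TAP"}  # Nature of the Discourse codes

nature_total = sum(n for code, n in totals.items() if code in NATURE)
content_total = sum(n for code, n in totals.items() if code not in NATURE)
grand_total = nature_total + content_total

print(f"Nature:  {nature_total} ({nature_total / grand_total:.0%})")   # 818 (37%)
print(f"Content: {content_total} ({content_total / grand_total:.0%})") # 1389 (63%)
```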

The Nature of the Discourse categories (TAW-G, TAW-S, TAP, and TAC) helped shed light on the relationships students had in the classes with the teacher and the materials, providing more information as a way to understand the students. Table 30 breaks down the percentage of time in interviews that students were making specific references to writing (general or specific), the class, or the portfolios. These scores range from 9% to 90%, with an average of 54%. As I discussed in Chapter 3, the interviews were semi-structured, allowing students to tell their stories, and my role as an interviewer was at times a "traveler" and at others a "miner" (Kvale, 1996). Structuring interviews by assuming the roles of "traveler" or "miner" allowed me to gain the students' perspective on response and assessment.

Table 30: Percentage of Nature of the Discourse comments

Instructor  Student     TAW-G  TAW-S  TAC   TAP   Total
Rodney      Cory        7%     0%     17%   2%    26%
            Cory        0%     39%    17%   0%    56%
            Robert      7%     16%    30%   7%    59%
            Robert (2)  3%     48%    13%   10%   74%
            Jake        3%     22%    3%    9%    38%
            Jake (2)    0%     39%    0%    11%   50%
            Jake (3)    2%     41%    0%    0%    42%
            Arthur      4%     26%    10%   1%    41%
            Will        9%     28%    9%    5%    51%
            Anna        11%    22%    11%   19%   63%
            Anna (2)    4%     46%    4%    6%    60%
            Chloe       10%    10%    8%    8%    36%
            Chloe (2)   2%     27%    11%   3%    44%
            Brooklyn    12%    21%    9%    9%    52%
            Alice       6%     38%    8%    6%    57%
Curt        Holden      5%     42%    18%   7%    72%
            Holden (2)  5%     63%    5%    0%    72%
            Kara        14%    24%    17%   3%    59%
            Kara (2)    4%     67%    4%    0%    76%
            Stella      11%    27%    8%    5%    51%
            Claire      25%    21%    8%    3%    58%
Francis     Ryan        6%     37%    13%   29%   84%
            Tony        10%    31%    28%   21%   90%
            Tony (2)    0%     0%     0%    9%    9%
            Leigh       15%    13%    4%    13%   46%
            John        20%    5%     3%    11%   39%

AVG: 54%

Many of the students seemed to come from similar traditional educational experiences when it came to writing instruction. The three teachers, Rodney, Francis, and I, constructed classrooms different from what many of these students were used to, especially in how and what we were assessing. As can be seen in Table 31, many participants were not sure what to do with response at first, because it was new to them. Students were often surprised by the response they were receiving, by its quality and not only its amount. The response that participants received was formative rather than summative, and directive rather than prescriptive, nothing like what many reported receiving from past experiences in school. Given some time with the teacher and the response, some students who participated in my study started to think about the response as a guide that provided feedback allowing them to think differently about their papers/ideas. Initially, some students had difficulty making the transition from grades to response, and not all students were able to understand and interact with response in the same ways.

Table 31: Examples of students' reactions to "new" kinds of response in the classes

Claire: Well at first I was like okay he's trying like I thought that I could tell that he's pushing himself to be critical or trying to be overly critical so at first I was like okay that's whatever… But now I, it's become helpful honestly so.

Kara (1st interview): Yea that's like this class is blowing my mind, like every day I'm like oh my gosh like I learn something new or not even something new but like we elaborate on stuff that yea his comments definitely every time he is like think about this why did you say this, what do you think we can do about this and I'm just like shit like, he's blowing my mind like I'm just like and then I try you try to elaborate but it's like eeeeh, he challenges.

Anna (1st interview): Um, well in like one of my journals I put that, this is like the first time that I have ever gotten like feedback like that so um, I like how interactive the class is cause all through high school my teacher would just say great work and then she would give me an A and like uhh, that sounds bad but she just didn't put the time into it. Um, I don't know I notice that I connect a lot of the things that we are in class to myself which I guess that is kind of self-centered but it's what I know. He always challenges me to like think outside the world rather than just applying it to myself. So I've with my last couple of journals and like the informal responses I have tried to like branch out and to think of things as a big picture rather than like me in Kent right now kind of thing. I don't know it was just easier to just connect to myself (laughing) than to do the challenging work of thinking of everybody else.

Cory (2nd interview): Yea, yea if you, he is not just saying I could have worked on this or this is wrong um, he is engaging with me individually and that is where you know he is putting in a lot of work behind the scenes, you know as far as, if he does this for each individual paper I don't know how he doesn't have carpal tunnel, yea just ah, that is um, yea it feels like I'm having a conversation with him sitting down.

Table 32 displays percentages for the Content of the Discourse categories, which focused on students' interactions with response. Proportionately, the highest-occurring codes are Grade Talk (12.7%), Response Clear (12.5%), Self-Assessment (11.8%), Teacher Expectations (11.5%), Student Talk (9.6%), and Writer Talk (7.8%), followed by Response Unclear (6.9%), Verbal Response (6.3%), Past Experience (6%), Time Management (5.6%), Response As Dialogue (3.2%), Taking Ownership (2.7%), Interaction (2.4%), and More Is Not Better (1.1%). Although there are no noticeable trends between participants, and each individual's percentages were slightly different, we can see that certain categories were exhibited by nearly all students. For example, all the participants exhibit Past Experience and Grade Talk in at least one of their interviews. Other categories, like Teacher Expectations, Student Talk, Writer Talk, Response Clear, and Response Unclear, show up in many of the participants' interviews. The numbers in the table below speak to the individualized nature of participants' interactions with response, but later, in the case study section, I will start to weave together threads of thought that help to make sense of the percentages and individuals' interactions.


Table 32: % Frequency of codes for Content of Discourse

Instructor  Student     PE    TE    ST    WT    GT    SA    TIM   MINB  RC    RUC   RAD   TO    INT   VR
Rodney      Cory        0.0   41.7  20.8  0.0   29.2  0.0   8.3   0.0   0.0   0.0   0.0   0.0   0.0   0.0
            Cory        10.1  5.5   5.5   0.0   15.6  0.0   7.3   2.8   15.6  17.4  17.4  0.0   0.0   2.8
            Robert      9.0   17.9  14.1  2.6   29.5  9.0   6.4   0.0   2.6   9.0   0.0   0.0   0.0   0.0
            Robert (2)  2.3   12.5  5.5   7.8   5.5   18.0  5.5   0.0   22.7  18.0  0.0   2.3   0.0   0.0
            Jake        2.8   15.0  17.8  15.0  12.1  2.8   2.8   0.0   26.2  5.6   0.0   0.0   0.0   0.0
            Jake (2)    2.6   4.6   0.0   19.0  9.2   20.9  0.0   0.0   20.9  4.6   0.0   4.6   13.7  0.0
            Jake (3)    2.0   2.0   0.0   17.8  0.0   25.7  2.0   0.0   17.8  21.8  3.0   2.0   4.0   2.0
            Arthur      10.6  7.6   19.7  0.0   15.2  21.2  16.7  0.0   1.5   7.6   0.0   0.0   0.0   0.0
            Will        5.2   5.2   0.9   15.7  1.7   20.0  6.1   0.0   14.8  1.7   6.1   9.6   12.2  0.9
            Anna        6.1   9.1   34.8  6.1   25.8  6.1   3.0   0.0   3.0   0.0   3.0   0.0   0.0   3.0
            Anna (2)    2.3   5.8   11.0  15.6  5.8   20.2  5.8   4.6   12.1  7.5   1.2   3.5   4.6   0.0
            Chloe       7.3   7.3   7.3   7.3   19.5  7.3   12.2  0.0   0.0   12.2  19.5  0.0   0.0   0.0
            Chloe (2)   4.2   7.9   11.5  7.9   10.9  12.7  12.7  6.1   6.7   7.9   0.0   0.6   10.9  0.0
            Brooklyn    6.7   0.0   0.0   6.7   40.0  33.3  6.7   0.0   6.7   0.0   0.0   0.0   0.0   0.0
            Alice       7.6   16.1  14.4  3.4   6.8   9.3   1.7   1.7   5.1   21.2  0.0   3.4   9.3   0.0
Curt        Holden      2.6   18.1  17.2  0.0   8.6   0.0   6.9   8.6   6.0   25.9  0.9   3.4   1.7   0.0
            Holden (2)  1.5   12.0  12.0  0.0   0.0   3.8   14.3  3.8   15.8  15.8  5.3   6.8   5.3   3.8
            Kara        15.5  4.2   0.0   7.0   7.0   11.3  0.0   0.0   28.2  0.0   7.0   5.6   0.0   14.1
            Kara (2)    3.5   5.2   0.9   23.5  0.0   20.0  0.0   0.0   26.1  0.0   5.2   14.8  0.9   0.0
            Stella      6.7   13.3  13.3  6.7   20.0  10.0  3.3   0.0   21.7  1.7   0.0   1.7   0.0   1.7
            Claire      1.7   15.0  0.0   16.7  5.0   21.7  0.0   0.0   16.7  1.7   10.0  11.7  0.0   0.0
Francis     Ryan        8.8   18.6  5.3   18.6  14.2  18.6  2.7   0.0   5.3   0.0   2.7   0.0   0.0   5.3
            Tony        6.3   22.5  10.0  0.0   6.3   0.0   10.0  0.0   16.3  0.0   0.0   0.0   0.0   28.8
            Tony (2)    0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0   20.0  0.0   0.0   0.0   0.0   80.0
            Leigh       13.0  2.6   10.4  0.0   15.6  10.4  10.4  0.0   13.0  0.0   2.6   0.0   0.0   22.1
            John        17.5  28.1  17.5  5.3   26.3  5.3   0.0   0.0   0.0   0.0   0.0   0.0   0.0   0.0

Table 33 shows increases and decreases in categories for the 7 students[38] who participated in at least 2 interviews. It shows decreases in categories like Past Experience, Teacher Expectations, Student Talk, and Grade Talk for most participants, and increases in categories like Writer Talk, Self-Assessment, Taking Ownership, Interaction, and Response Clear.[39] As students became more exposed to the teachers' feedback, it became easier for them to interact with the feedback and make changes to their papers. Although the categories were not intended to be read as positive or negative, from the increases/decreases it is hard not to think of some categories in terms of polarity. For example, for many of the participants we see decreases in Student Talk and increases in Writer Talk. Participants also relied less on Past Experience as they became more familiar with the class and teachers. Teacher Expectations also decreased for many participants. Grade Talk decreased for every participant who took part in multiple interviews, implying that grades were less of a concern. Self-Assessment increased for all of the students who participated in multiple interviews. The Response Clear comments went up for many of the participants, and the Taking Ownership and Interaction comments increased for all but one participant, Jake. These findings suggest that most participants were talking about writing differently, providing some evidence that a grade-free instructional environment, with feedback that allowed students to know where they are, where they need to be, and how to get there, has an influence on students' ability to interpret and interact with response.

[38] A student from Francis's class, Tony, also participated in 2 interviews, but the second interview lasted only a few minutes. During the 1st interview Tony told me that he had been recording conferences with Francis, and since the majority of Francis's response in the class was delivered verbally in conferences, I wanted to talk more about it with Tony. The interview was too brief to show any noticeable differences, so it was not included in the table comparing interview to interview.

[39] Unfortunately, because of students' schedules and when they were turning in and receiving feedback, there is not a consistent time frame between interviews; interviews could be one to two weeks apart or up to two months apart. In the classes, students were given loosely constructed schedules for when to turn in their papers. The professors had "suggested" timelines for the students to follow, but students did not always observe them. Had there been a more consistent schedule of papers being turned in and handed back, perhaps the increase/decrease categories would have changed more.


Table 33: Increases/Decreases in Content of Discourse categories from students who participated in multiple interviews

Instructor  Student     PE      TE      ST      WT      GT      SA      TIM     MINB    RC      RUC     RAD     TO      INT     VR
Rodney      Cory        0.0     41.7    20.8    0.0     29.2    0.0     8.3     0.0     0.0     0.0     0.0     0.0     0.0     0.0
            Cory        10.1 ↑  5.5 ↓   5.5 ↓   0.0     15.6 ↓  0.0     7.3 ↓   2.8 ↑   15.6 ↑  17.4 ↑  17.4 ↑  0.0     0.0     2.8 ↑
            Robert      9.0     17.9    14.1    2.6     29.5    9.0     6.4     0.0     2.6     9.0     0.0     0.0     0.0     0.0
            Robert (2)  2.3 ↓   12.5 ↓  5.5 ↓   7.8 ↑   5.5 ↓   18.0 ↑  5.5 ↓   0.0     22.7 ↑  18.0 ↑  0.0     2.3 ↑   0.0     0.0
            Jake        2.8     15.0    17.8    15.0    12.1    2.8     2.8     0.0     26.2    5.6     0.0     0.0     0.0     0.0
            Jake (2)    2.6 ↓   4.6 ↓   0.0 ↓   19.0 ↑  9.2 ↓   20.9 ↑  0.0 ↓   0.0     20.9 ↓  4.6 ↓   0.0     4.6 ↑   13.7 ↑  0.0
            Jake (3)    2.0 ↓   2.0 ↓   0.0 ↓   17.8 ↓  0.0 ↓   25.7 ↑  2.0 ↑   0.0     17.8 ↓  21.8 ↑  3.0 ↑   2.0 ↓   4.0 ↓   2.0 ↑
            Anna        6.1     9.1     34.8    6.1     25.8    6.1     3.0     0.0     3.0     0.0     3.0     0.0     0.0     3.0
            Anna (2)    2.3 ↓   5.8 ↓   11.0 ↓  15.6 ↑  5.8 ↓   20.2 ↑  5.8 ↑   4.6 ↑   12.1 ↑  7.5 ↑   1.2 ↓   3.5 ↑   4.6 ↑   0.0 ↓
            Chloe       7.3     7.3     7.3     7.3     19.5    7.3     12.2    0.0     0.0     12.2    19.5    0.0     0.0     0.0
            Chloe (2)   4.2 ↓   7.9 ↑   11.5 ↑  7.9 ↑   10.9 ↓  12.7 ↑  12.7 ↑  6.1 ↑   6.7 ↑   7.9 ↓   0.0 ↓   0.6 ↑   10.9 ↑  0.0
Curt        Holden      2.6     18.1    17.2    0.0     8.6     0.0     6.9     8.6     6.0     25.9    0.9     3.4     1.7     0.0
            Holden (2)  1.5 ↓   12.0 ↓  12.0 ↓  0.0     0.0 ↓   3.8 ↑   14.3 ↑  3.8 ↓   15.8 ↑  15.8 ↓  5.3 ↑   6.8 ↑   5.3 ↑   3.8 ↑
            Kara        15.5    4.2     0.0     7.0     7.0     11.3    0.0     0.0     28.2    0.0     7.0     5.6     0.0     14.1
            Kara (2)    3.5 ↓   5.2 ↑   0.9 ↑   23.5 ↑  0.0 ↓   20.0 ↑  0.0     0.0     26.1 ↓  0.0     5.2 ↓   14.8 ↑  0.9 ↑   0.0 ↓

Freedman's (1987) comprehensive study on response established the idea of response as "all reaction to writing, formal or informal, written or oral, from teacher or peer, to a draft or final version. Response may occur as a formal part of a classroom lesson or informally as teachers and peers have brief seemingly casual conversations" (5). My study confirms Freedman's notion that response is more than just what is on the page. From class observations as well as talking to the students, I saw that response, and how students make sense of it, goes deeper than written commentary. The codes I created for the data also helped show how students make sense of response. Similarly, O'Neill and Fife's (1999) small study on response found that students read textual comments as part of the larger classroom context. By thinking of response as a larger entity in the classroom and attending to how students interact with it, "assessment events" provide us a new perspective to consider in classroom writing assessment.

There are individual students whose discourse does not increase/decrease following the pattern most others do. For example, Jake seemed to move in the opposite direction in some of the categories. Looking at Table 33 (above), areas like Writer Talk, Response Clear, Response Unclear, Taking Ownership, and Interaction go in the opposite direction for Jake than they do for others. Most of the students followed a similar pattern of increases/decreases, but Jake's categories defied those patterns. One way to think about the increase/decrease categories is that the papers students were discussing in the different interviews were different genres, and some students had more time/emotion invested in the different forms of writing. Jake was a writer who did not like to cut things out of his papers. He was the kind of writer I often think of as being married to the text; rather than cut something, he would add new sections and pages to make what he had fit. This is possibly why the Response Clear, Writer Talk, Taking Ownership, and Interaction categories decreased for Jake while Response Unclear increased: he was not able to cut things from a text, which affected his interpretation of the response. In various places on Jake's papers, Rodney put comments like "why." In the third interview with Jake, I asked him why he did not interpret "why" as cutting paragraphs/ideas from the text; he got slightly offended at my interpretation and went on to explain how "why" meant that he just needed to explain things more. My interpretation of "why" was different from Jake's: I interpreted "why" as asking why this needed to be in the text at all. I confirmed my interpretation with Rodney after the class was over, because Jake's insistence that "why" meant expand had me puzzled, and I wanted to make sure my interpretation of the response was correct. Later, in Chapter 5, I will discuss the need to make "response/assessment" a genre to aid students in their writing, because it appears that even with formative assessment, teachers still write comments that need to be unpacked/explained for students.


As well as changes in the Content of the Discourse categories, there were observable differences in the Nature of the Discourse categories for students who participated in more than one interview, as can be seen in Table 34. The decrease in Talk About Writing-General (TAW-G) was expected, because part of the first interview with students was to gain a perspective on the types of reading and writing they were used to, and once that was accounted for there was no need to discuss it further unless participants brought it up. The Talk About Writing-Specific (TAW-S) comments increased significantly for participants who took part in multiple interviews. Part of the reason for the increase in TAW-S is that after the preliminary interview questions (see Chapter 3), the conversations turned directly to specific elements of their writing for the class. Another possible reason for the increase in TAW-S is that participants in the study were more focused on writers' writing rather than on being a student. This change of focus can be seen as similar to how teachers who use delayed grading put on various hats (coach, big brother, guide, etc.) during the class to aid students in their writing. It is possible for students to also switch hats at various times throughout the course. This symbiotic relationship between teachers and students in a delayed grading class changes the overall focus and dimension of the class. In the Talk About Class (TAC) and Talk About Portfolio (TAP) categories we see a decrease for almost all students who took part in multiple interviews. It seems that as students became more accustomed to the class and response, they focused more of their energy on actual writing, without worrying about Grade Talk, Teacher Expectations, and Student Talk, as was observed in the Content of the Discourse increases/decreases (Table 33).


Table 34: Increases/Decreases in Nature of the Discourse categories

Instructor  Student     TAW-G   TAW-S   TAC     TAP     Total
Rodney      Cory        7%      0%      17%     2%      26%
            Cory        0% ↓    39% ↑   17%     0% ↓    56%
            Robert      7%      16%     30%     7%      59%
            Robert (2)  3% ↓    48% ↑   13% ↓   10% ↑   74%
            Jake        3%      22%     3%      9%      38%
            Jake (2)    0% ↓    39% ↑   0% ↓    11% ↑   50%
            Jake (3)    2% ↑    41% ↑   0% ↓    0% ↓    42%
            Anna        11%     22%     11%     19%     63%
            Anna (2)    4% ↓    46% ↑   4% ↓    6% ↓    60%
            Chloe       10%     10%     8%      8%      36%
            Chloe (2)   2% ↓    27% ↑   11% ↑   3% ↓    44%
Curt        Holden      5%      42%     18%     7%      72%
            Holden (2)  5%      63% ↑   5% ↓    0% ↓    72%
            Kara        14%     24%     17%     3%      59%
            Kara (2)    4% ↓    67% ↑   4% ↓    0% ↓    76%

AVG: 55%

Case Selection

In Chapter 3, I explained my recruiting of participants for the study and indicated that all students in the classes agreed to participate, but that because of time constraints some students were more active in my study than others. The sample collected is representative of many students from the classes I observed. Robert, in the first interview, indicated that he was working on a sci-fi novel in his spare time. Claire had self-published an electronic poetry book. I believe there are Roberts, Claires, Wills, and Tonys in all our classes, just as there were in Emig's germinal study from her 1969 dissertation. Before going further with this analysis, I want to point out, as discussed in Chapter 3, that the codes/categories used for coding were not coded for polarity (positive/negative/neutral) because of my conscious decision to represent the students as true to their voices as I could.

From interviews, observations, and "being" in the classes, I developed relationships with participants that allowed me to interact with them in various ways. These interactions, along with the data, influenced the case selection for analysis. Gerald Bracey (2006), in his book Reading Educational Research: How to Avoid Getting Statistically Snookered, explains that statistics are merely numbers and require someone to offer an interpretation. In the same way, the numbers in my charts also need to be interpreted.

From the three classes, seventeen students volunteered to interview, and eight agreed to multiple interviews (2 or 3). The nature and length of the interviews varied from participant to participant, so rather than looking at the frequency or length of interviews, I consciously decided to focus on interviews whose content offers meaningful answers to my research questions:

(1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading?

(2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote?

(3) How does teacher response and assessment act as a sponsor of literacy?

(4) Are there students in the class who do not interact with teacher response, and if so, why?

The next section of the chapter focuses on case studies organized around students, highlighting "assessment events" and providing a description and analysis of them. The case format allows me to describe participants' reactions to response so that after the case examples I can answer the research questions meaningfully. It is also important to point out that since this research focuses on the students and their voices, my case study analysis will focus on the students rather than on the teachers' commentary. Focusing on the students' voices may limit me from making certain arguments, but the focus of this study has been on bringing the voices of the students into the conversation. We have a large body of scholarship that focuses on how and when teachers respond (see Chapter 2), but we have little in terms of students' reactions to teacher response, or, as in my study, their interactions with response.

Case Studies

Six students out of seventeen were selected for the case studies because of the ways they answer this dissertation's research questions. The cases are also representative of many of the students from the classes, so the data presented can be generalized among multiple participants.

In highlighting the cases selected, I will present some of the different types of interactions that participants had with response. Three students from Rodney's class were selected. Anna was selected because she was at first very "nervous" and extremely sensitive about grades and her GPA. Anna, like many of the other students, was concerned about grades, but she was selected to highlight the effect that response, rather than grades, had on her. I selected Chloe, who had what I call an "emotional reaction" to teacher response. In this "emotional reaction," Chloe started feeling "heavy" about seven minutes into reading the teacher's response. Chloe read the response in front of me for the first time on her iPhone, after letting it sit in her inbox for ten days. Will, the third student from Rodney's class, was another student who waited ten days before looking at the response. Will was also selected because he identified as a writer and liked to challenge or "push" teachers' buttons in class. In his interview, Will talked about reading Dewey's book Experience and Education for his book review assignment and related many of Dewey's ideas and principles to the class, portfolios, and his teacher Rodney. I also chose to highlight Will in the case studies because during our interview together, he took over my computer, completely changing the dynamic of the conversation. Because I wanted to hear the students in the room, I let Will direct the interview to allow him the opportunity to control the conversation.

From my own class, I chose Holden and Kara. Kara, an English major who planned on being a teacher, was selected because she was very familiar with text and had "print code" awareness (Hartwell, 1980). Kara's reactions to response were in many ways ideal, because she almost instinctively knew what textual comments were asking her to do. Holden was selected because his reactions to response were at times fueled more by emotion, like Chloe's, than by the written comments themselves. Like Caswell's (2012; 2014; 2016) research finding that teachers often respond emotionally to student papers, my research reveals that students, too, have emotional readings of teachers' comments.

I chose Ryan, from Francis's class, because he was the largest producer of text in the class and therefore had more written response to discuss. Ryan arguably put more work into his paper than other students did, constantly giving Francis pieces of writing to respond to and creating larger revised drafts before Francis could comment on previous versions. Ryan, like many of the students in the classes, planned on going to graduate school (law school), and his reactions to response are consistent with those of many individuals in the study. Ryan was also selected because on one of the last days of class, Francis handed out a portfolio checklist of sorts, and Ryan was the only student in the class who wrote on the checklist.

Anna

Anna was a student who was very "grade sensitive," as she put it. Anna switched College Writing II classes after the professor of the first class she registered for told her that he did not give A's. After confirming his grading practices through MyEdu,[40] Anna switched to Rodney's section. Anna was a high-achieving student who wanted to go to graduate school for psychology. In conferences at the end of the term, Anna said that from an early age she wanted to get a PhD, and she shared a story with Rodney and me about how her mother was puzzled that a young child even knew what a PhD was. In class Anna was outgoing, very talkative, active, and pleasant in discussions and groups. Anna was a writer, like Will, and a bit of a perfectionist. In conferences she admitted that in high school, if a teacher put a grade on a paper, she would go back and "fix" the things that were marked incorrect. She said that if something needed fixing, then why not fix it. Comments on papers from teachers, even graded ones, seemed to act as "sponsors" (Brandt, 2001) for Anna in various ways. In an interview Anna told me that

when I get feedback I mainly focus on the things the negative things because I don't think about strengths because you know it comes second nature so when there are strengths it's like okay, but with like weaknesses I like think of those as areas of growth so I want to focus on more on where I can improve more about where I can get gratitude from the strengths, I don't know like my boss tells me I need to stop and think about all the wonderful things I'm doing rather than just the negative so I think that is just my personality. (2nd interview)

Anna had a preconception that response could be used to make something better, and although she tended to focus on the negative aspects of her own writing, it was clear that she wanted to make her writing the best it could be. Anna was a serious student and writer who was also a resident advisor (RA) in her dorm. Anna was a highly social individual in and out of class. She reported that her reading and writing habits consisted mostly of social media (Facebook and Twitter), as well as reading online news and sending emails to professors or the supervisors at the school in charge of RAs. Anna said that she liked reading and would "read anything that is interesting," like series of books or memoirs (1st interview). Anna, like most of the students in the study, did not think of herself as a reader or writer, because the types of reading and writing she did were not valued in academic contexts. Grades were very important to Anna, but she did not let the absence of grades interfere with her ability to engage with the class and the writing.

[40] MyEdu (https://www.myedu.com/) is a company similar to RateMyProfessors.com in that it presents some data on the grades that students receive in certain classes. The site does not seem to be monitored or updated very closely, and the legitimacy of the grade breakdown is open to question, since it is a private company collecting information from students.

Twitter) as well as reading online news and sending emails to professors or supervisors at the school in charge of RA’s. Anna said that she liked reading, and would “read anything that is interesting,” like series of books or memoirs (1st interview). Anna, like most of the students in the study did not think of herself as a reader or writer because the types of reading and writing that they did were not valued in academic contexts. Grades were very important to Anna, but she did not let not getting grades interfere with her ability to engage in the class and with the writing.

To Anna the portfolio was a

collection of all our work to see how much we have improved throughout the year and then to see how much we have improved upon an individual paper so like taking a, like a not like a crappy draft, but like a first draft of things and then seeing it like transform into something that is really well written and put a lot of time into it, so it is I feel like it is a way of like reflecting as well so like with my book review I had everyone like read it and I fixed errors with it and then he'll like take my or he like read it and gave me response to it and then I'll take it to make it like be even better, kind of like a learning a morphing kind of thing. (1st interview)

Anna indicated that the portfolio made her nervous because she did not have a “benchmark of knowing where I am….because I don’t know if I am going to get the end result that I want ultimately being the A” (1st interview).

Anna was "grade sensitive," as she repeatedly exhibited in the first interview, with 25.8% of her comments falling into the Grade Talk category. She had concerns about the portfolio and her progress, but as can be seen in Table 35, there were substantial changes in the ways Anna spoke about writing from the 1st to the 2nd interview. From the 1st to the 2nd interview there were decreases in the Past Experience (-62%), Teacher Expectations (-36.2%), Student Talk (-68.4%), Grade Talk (-78%), Response As Dialogue (-60%), and Verbal Response (-100%) categories, and increases in Writer Talk (+156%), Self-Assessment (+231%), Time Management (+93.3%), More Is Not Better (+),[41] Response Clear (+303.3%), Response Unclear (+), Taking Ownership (+), and Interaction (+).

Table 35: Breakdown of Anna's increase/decrease categories for Content of the Discourse

Student    PE     TE     ST      WT      GT     SA      TIM    MINB   RC      RUC    RAD    TO     INT    VR
Anna       6.1    9.1    34.8    6.1     25.8   6.1     3.0    0.0    3.0     0.0    3.0    0.0    0.0    3.0
Anna (2)   2.3 ↓  5.8 ↓  11.0 ↓  15.6 ↑  5.8 ↓  20.2 ↑  5.8 ↑  4.6 ↑  12.1 ↑  7.5 ↑  1.2 ↓  3.5 ↑  4.6 ↑  0.0 ↓

Anna, like all students but one who participated in multiple interviews, made fewer comments coded as Past Experience, in which students referred to other writing assignments they had written in the past or in previous classes. The drop in Past Experience comments suggests that as students became more comfortable with the teacher, class, and response, they relied less on what they had done in the past and focused more on the current class and papers. The frequency of comments in the coding category Teacher Expectations also decreased for Anna and many of the other students. It appears that Past Experience comments decrease for the same reason that Teacher Expectations do. Once students start to become comfortable with an instructor, they are able to anticipate what kinds of comments/feedback they will get, similar to Sperling's (1987) study "A Good Girl Writes Like a Good Girl." The difference is that in Sperling's example the good girl tries to appease the teacher by writing "what they like," which can be taken in a negative light, whereas my data suggest that students become more "audience sensitive" because they are starting to think like writers (Writer Talk), and therefore concentrate more on audience (teacher and class) and less on doing what the teacher likes. Along with the increase in comments coded as Writer Talk (+156%) we get a decrease in comments coded as Grade Talk (-78%). From being exposed to Rodney's response, there are noticeable differences in the way Anna talked about writing from the 1st to the 2nd interview. The "assessment events" Anna was exposed to seemed to have a considerable impact on the way she thought about writing.

[41] A percentage increase cannot be shown when the first number is zero, because there is no beginning number to work from. A few categories therefore show increases that cannot be represented as percentage changes; for those categories a "+" symbol is used to represent the increase.
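The percentage changes reported above, including the "+" convention explained in footnote 41, follow from simple arithmetic on the values in Table 35. The helper below is a hypothetical sketch, not part of the study's method; the input values are Anna's percentages from Table 35:

```python
def percent_change(first: float, second: float) -> str:
    """Formatted percentage change; "+" when the baseline is zero (footnote 41)."""
    if first == 0:
        return "+" if second > 0 else "0"
    return f"{(second - first) / first * 100:+.1f}%"

# Anna's Content of the Discourse percentages (1st interview, 2nd interview)
anna = {
    "GT":   (25.8, 5.8),   # Grade Talk, reported as -78%
    "WT":   (6.1, 15.6),   # Writer Talk, reported as +156%
    "SA":   (6.1, 20.2),   # Self-Assessment, reported as +231%
    "RC":   (3.0, 12.1),   # Response Clear, reported as +303.3%
    "MINB": (0.0, 4.6),    # zero baseline, so reported simply as "+"
}

for code, (first, second) in anna.items():
    print(code, percent_change(first, second))
# GT -77.5%, WT +155.7%, SA +231.1%, RC +303.3%, MINB +
```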

Anna's Self-Assessment also increased, by 231%, a code which increased for every student who participated in multiple interviews. Along with Self-Assessment, the category of Response Clear increased 303.3%. Response Unclear also increased, from 0 to 7.5%. The reason response became less clear for Anna can possibly be accounted for by looking at her 2nd interview and a comment Rodney wrote to her: "although I feel as if you give the reader much to think about you don't really talk about the methods of the argument." The response was to Anna's book review, and she said her initial reaction and reading of this comment was "Um, well with method I didn't really think about method of the argument, I was kind of thinking what do you mean by method like how they wrote the book like so um, I don't know I think the lingo was what got me, I knew what he meant but I didn't know how to put that in my paper more or less cause I don't know." After thinking about "method" and then Googling "method," Anna was able to understand what Rodney meant, and she explained her rationale for revising her paper to make "method" more clear for her audience.

People won't care to read it because I'm just you know what I mean I don't have any like prior knowledge so I put that in there and then method of the argument of how they go through the book and how they write it for their book so in the beginning when I did the method of the argument I just kind of wrote their argument I didn't put why they did it I didn't put like any kind of evidence that they made an like a sound argument so I went back through and I saw that their method was by bringing up past examples of failures that are still trying to be implemented but they are not working and they are listing the failures of the business roundtable essentially is like their whole method so I went back through and wrote that in my chapter by chapter explanation that with this chapter they show this failure that didn't you know that is hurting students hurting teachers.

Anna's engagement with the text and response clearly indicates that she was able to have a textual interaction with Rodney's response. Because the feedback Rodney gave her was directed as a question, it caused her to think about the text in a new light. The Response Unclear moments in her interview came from places where she was not quite sure what the response was telling her, but by "Googling" things, as well as by using different colored highlighters to mark areas of the text that Rodney referred to in other comments, Anna was able to have an interaction that allowed her to see her text differently than before so that she could revise.

It is also worth mentioning the increase in the More Is Not Better comments for Anna. In the 2nd interview with Anna, the More Is Not Better category emerged as a possible clue to how and what students made of response. After getting response back from Rodney, Anna made the comment:

No after I got this back I felt much better cause um, a couple students and I were talking about like how much feedback the one girl in the class I won't say her name she got like two pages of response and not saying that is a bad thing that like that it is a lot to improve on but I felt that since mine was so short it's not that short but its only like a page and there is not a lot to it, I felt that I had a very sound like first draft afterwards I was kind of like I don't know if that makes sense it was nice to know that my paper wasn't as bad as I thought it was when I turned it into him to look at.

Anna, like many others who relied on past experience with response and grades, saw that More

Is Not Better. Connors and Lunsford (1993) found that over 59% of the comments teachers made were mostly about grade justification with only 11% focusing on revision. It appears that the more comments on a paper, grade justification, has also permeated into student’s perception of response and how they read our response. It could also be that Anna’s thinking More Is Not

Better had an effect on her ability to understand response which caused her Response Unclear to increase from 0 to 7.5%. Based on her experiences, where more feedback means more to fix, could have affected Anna’s overall ability to understand and engage with response. Anna also increased in the Taking Ownership, and Interaction, codes from the 1st to 2nd interview which suggests she was becoming more confident in her own writing abilities but also because her response was 1 page compared to 2 pages like some others in her class received. Anna felt like a more confident writer because she had “less to fix”. The amount of response she received in comparison to others is something that needs further exploring. Since response accounts for more of the class than just the textual comments (Freedman, 1987; O’Neill and Fife, 1999) it is important that we think of the relationships that students have with the teacher as well as others

in the class, since much of current classroom pedagogy involves peer work. In peer review sessions, our response can sometimes be decontextualized and interpreted in various ways.42

Chloe

Chloe was similar to Anna in caring about her grades. She indicated in our 2nd interview

“I don’t want my GPA to go down.” I asked her if her GPA was important to her and she said

“I have a scholarship but it is from high school because of high honors, um but my GPA is a 3.6,

I was a part of the national honors or I am a part of the national honor society leadership here at

Kent” (2nd interview). Grades were important to Chloe, as they were to many others in the classes, reflecting certain identities and roles the students were comfortable with. Chloe was also a reader/writer in her own right. In our 1st interview together, Chloe said she liked reading

“informative books” like The Tipping Point by Malcolm Gladwell. Chloe said she rarely wrote for fun, but frequently used social media, like Facebook and Instagram, for commenting on others’ posts and for text messaging. Like Anna, Chloe did not see social media and texting as literate activity worthy of mention43.

Chloe, a Visual Communication and Design (VCD) major, also had to compile a portfolio for her major. The VCD portfolio and the one for Rodney’s class were very different. The

VCD portfolios were all assignments that had been previously graded in numerous classes and then compiled, whereas the portfolio for Rodney’s class consisted of formative assessment and delayed grading constructed in one classroom over a semester. Chloe saw little difference between

42 Anna talked with other students in class about response, probably in peer review sessions or in small group discussions. The same occurrence was observed with Holden and Kara from my class in their interviews. Although I do not have data specifically about students talking about response with each other, I believe a study like Robert Brooke’s (1987) “Underlife and Writing Instruction” could offer another possible avenue for exploring response. 43 A larger issue, outside of my research and dissertation, is getting students to think about the writing that they do as having value. Many of the students in interviews did not think that they read or wrote much. Their idea of writing and reading seemed to be that if you were not reading scholarly or “smart” texts, you were not really a literate person.

the two types of portfolios and thought that they were both about progress. When I asked her what she meant by progress, she said:

To progress to show that that you are understanding and grasping things is by

actively participating by doing the assignments, doing the homework. Um, and by

him writing back it is kind of his way of grading I guess, um so I'm not like

nervous about like oh I pray to get the best, I think as long as I am doing the work

I should achieve what I've worked towards right? What's the saying? You get

what you put in or what is it? You know. I guess practice makes perfect, or yea I

don't know it’s, yea I can't think of it, but essentially you know whatever you do

you are going to get back in return. (1st interview)

Chloe is representative of many of the students in her remark that “him writing back it is kind of his way of grading.” Participants in the study lacked the vocabulary necessary to talk about a delayed grading class. For lack of better terms, many participants tried to use response as a grade or an indicator of where their papers stood. This is possibly the reason some students interpreted more response as a negative indicator of their progress: to them, more response means more to fix. Anna and Chloe were in the same class and sat close to each other, so it is unclear if the two talked about response, but for Chloe’s book review she received two pages of response, which played a part in how she interpreted Rodney’s response. The increases/decreases in categories from Chloe’s 1st to 2nd interview can be observed in Table 36.

Table 36: Breakdown of Chloe’s increase/decrease categories for Content of the Discourse

Student    PE    TE    ST     WT    GT     SA     TIM    MINB  RC    RUC   RAD   TO    INT    VR
Chloe      7.3   7.3   7.3    7.3   19.5   7.3    12.2   0.0   0.0   12.2  19.5  0.0   0.0    –
Chloe (2)  4.2↓  7.9↑  11.5↑  7.9↑  10.9↓  12.7↑  12.7↑  6.1↑  6.7↑  7.9↓  0.0↓  0.6↑  10.9↑  –
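A note on the figures that follow: the increase/decrease percentages reported in the analysis appear to be relative changes computed from the raw percentages in the tables, not absolute differences. Assuming that reading, Chloe’s Past Experience change, for example, works out as

\[
\frac{p_{2\text{nd}} - p_{1\text{st}}}{p_{1\text{st}}} \times 100\% = \frac{4.2 - 7.3}{7.3} \times 100\% \approx -42.5\%,
\]

which matches the figure reported below; a bare “(+)” appears to mark a category that rose from 0%, for which a relative change is undefined.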


Chloe had significant decreases in Past Experience (-42.5%), Grade Talk (-44.1%),

Response Unclear (-54.4%), and Response As Dialogue (-100%) and increases in Teacher

Expectations (+8.2%), Student Talk (+57.5%), Writer Talk (+8.2%), Self-Assessment (+74%),

Time Is Money (+3.94%), More Is Not Better (+), Response Clear (+), Taking Ownership (+), and Interaction (+). The same explanation that applied to Anna’s decrease in Past

Experience can also be applied to Chloe. The increase in Teacher Expectations can be credited to the two pages of response that Chloe received from Rodney. In my interview with Chloe, it was clear that her initial interpretation of the response was affected by the amount of response she received. With the increase in More Is Not Better, Teacher Expectations also went up, implying that Chloe was more consciously trying to assess and perform Self-

Assessment on her own papers to satisfy her audience and re-understand her work. After receiving feedback on her paper, Chloe said,

I'm not there yet, I'm not shocked, its only 4 months of writing …when I look

back at what I was writing in the beginning or how I wrote things in the beginning

its different than how I do them now, and again I would you could say I went on

rants or I would bring up things that don't necessarily need to be discussed. So

kind of prioritizing, I am better prioritizing my discussion in what I am talking

about. (2nd interview)

With the help of response, Chloe was able to identify and then discuss areas in her writing that she struggled with. Chloe’s Student Talk went up in anticipation of doing poorly: even though Rodney’s response to her book review engaged her as a writer, the anticipation moved her back toward being a student.

Even though Chloe was a reader/writer in certain ways, my data suggests that she was about half writer, half student, used to performing and assuming certain roles (Foucault, 1995).


The two pages of response that Chloe received affected her in an interesting way. During the 2nd interview Chloe read her response in front of me for the first time on her iPhone. Rodney had sent the response back to Chloe ten days prior, but due to time constraints she had let the response sit in her inbox. Reading this feedback caused Chloe to have an “emotional” reaction to response that influenced her ability to understand and interact with the text. Rodney’s first comment to Chloe was “So sorry this is later than I had mentioned but I had misplaced your paper until this morning. I think you have a good start on a strong review of an important and difficult text. You have looked at most of Dewey's important ideas and have tried to make sense of them for the reader bravo.” Chloe’s immediate reaction to this comment was “Okay thank you, but now let’s see what's wrong basically” (2nd interview). From this point on it seemed that her reading of the response was negative, affirming scholarship showing that teachers often start positive and go negative. In looking at the first comment from Rodney, I saw no indication that the response was going to go negative or that it should be viewed that way, but Chloe’s reading of the response was that of a student with genre knowledge from Past Experience, where more feedback indicates more to fix. Chloe saw the response as an imminent “critique,” a term that

Chloe (and others) quite frequently used in describing response in interviews. The more Chloe read the response, the more “heavy” the room and her mood became. At the 7:16:11 time mark in the interview, I made a field note to myself indicating this “heaviness” and shift in Chloe’s attitude. As Chloe read through her response, the heaviness could be seen in her body and face, both of which seemed to be straining. Chloe’s voice changed tone, and she read and talked about the response with an uneasiness, as if she were forcing herself to push through reading the response from Rodney. Despite the “heaviness,” Chloe maintained control of her paper and the response, even though it was clear she was under some sort of stress. The increase in the


Interaction comments from 0 to 10.9%, as well as in Taking Ownership from 0% to .6%, represents Chloe asserting agency, interacting, and talking back to the response. Chloe frequently made comments like “I feel he definitely gave um good advice, obviously I will have to re-read how much I agree but it does seem like a lot.” In a comment on Chloe’s book review

Rodney said,

You need a better sense of the book as a book, not only in your organization of

the review, but in contextualizing the book itself that was published 80 or so years

ago. I wonder if you give the book its full context if you’ll still be saying you

have issues with Dewey and want to see his ideas fleshed out – the ideas for

implementation and examples have been done by Dewey in some detail.

Chloe responded to this comment saying “I will probably put my own input in it because from the first bullet point44 he did mention you know I wonder if you gave the book its full, … so basically if I gave my own ideas on whether or not I agreed or disagreed um, on the I'm going to try to maybe put that in there after re-reading in filling in those gaps so that is probably what I will be doing” (2nd interview). Chloe interacts with Rodney’s feedback, thinking about her own paper and how Rodney’s comments fit with her reading and interpretation of the text. She exerts authority over her text even though the exigence came from teacher commentary. While

Chloe sometimes equated grades and knowledge, it was clear that she was able to engage with response. Much of our 2nd interview together demonstrates Chloe trying to make sense of the response she received, so that she could revise.

The Response Clear comments went up for Chloe from the 1st to 2nd interview, so even though she saw the response as possibly negative and started feeling “heavy,” she was able to

44 Some of the feedback Rodney gave the students was typed and had bullet points explaining aspects of their writing.

make sense of what Rodney was trying to articulate to her in the response. In the interview with

Chloe, after she talked about the response, I felt obligated to discuss response with her, or at least the way that I responded to students, to possibly help alleviate some of the “heaviness” that came from the response. At first I was worried that interjecting my opinion would skew my data, but maintaining a reciprocal approach allowed me to talk with her about response.

I explained to Chloe that when I respond to writing and a paper’s ideas are strong, I respond a lot, engaging the writer in a dialogue. For me, more response usually means I am engaged in the writing and ideas myself. I discuss this more later in the chapter when I present Holden, a student from my class. I also explained to Chloe how it is harder to leave comments on a paper when the ideas are not as strong. Chloe responded to my explanation saying:

That is a really good point, I actually don't know where I picked up that mentality

that the more you are critiqued or the more that um, the more discussion that is

brought up within your paper that means that you know, for lack of a better term

the more shitty it is so to speak, do you know what I mean? Um, I don't know I

kind of want to say that I just kind of grew up thinking that because in high school

you know having all, I guess especially in high school cause now I'm looking

back you know and I had turned in papers or homework and you know the more

answers you got wrong you know the lower your grade and um in English you

know obviously the more you know when you had those corrections of like

grammar, it's kind of like this grading process that we go through you know in

America is yea the more that is, the more red ink the lower the grade. That is kind

of what I have picked up on intuitively. Or naturally I should say, yea. So a

learned behavior, I don't know. (2nd interview)


My interaction with Chloe in this interview possibly gave her a new way to look at response, but even without my interacting with her, I believe that Chloe would have been able to make sense of the response she was getting from Rodney since it was formative in nature; like Anna, Chloe could engage with Rodney’s text, which would have eventually allowed her to make sense of the response. Her initial “emotional reaction” to response had an effect on her overall interpretation of the response because, from a student’s perspective, more response is not good. Even though Chloe was emotional about the response, the way she talked back to the feedback in our interview suggests that she knew what to do with Rodney’s feedback.

In addition to talking about the feedback in our interview, Chloe addressed Rodney’s feedback in her revision memo. The revision memos should be thought of as an extension of the textual interactions students have with teachers’ feedback because the memos allow students to connect feedback to revision. The memos provide students with textual agency since they decide what to do with the feedback. In Chloe’s memo she provided a rationale and argument for why she chose to rewrite her whole paper based on Rodney’s feedback. In his feedback to Chloe, Rodney called into question her understanding of the book and its ideas and organization. After having taken time to digest Rodney’s feedback, Chloe’s memo explained her decision to start from scratch because:

(1)I knew I could do better; (2) after doing research on Dewey’s book,

“Experience & Education,” I gained a deeper insight into his argument and

discussion of the educational system; (3) my paper was too disorganized for

me to work with and because of the limited time that I have to work with,

completely starting from scratch was the best solution into getting the best

possible grade.


Along with her three main points for revision, Chloe also addressed Rodney’s other feedback on her writing, providing an argument for her decision to start a new text and explaining how she planned to revise based on the feedback. Rodney’s comments did not ask or tell Chloe to write a whole new paper, but she decided that rewriting the entire text would produce a stronger draft more in line with the feedback. Chloe’s interaction with the feedback gave her the agency to decide how to address Rodney’s comments.

Will

Will, the third student from Rodney’s class, only participated in one interview with me, so we cannot compare his discourse across multiple interviews, but his responses are no less interesting or complex in their own right. At first, Will was aloof, not volunteering to participate in the study, and was somewhat reserved in class. Oftentimes I would notice Will walking around the class with a hooded sweatshirt covering his head. He sat in the back corner of the room, secluded from most of the others in the class. However, halfway through the semester, Will started behaving more comfortably in the class, removing his hood and talking and interacting more with Rodney and others during workshop time. In classroom observations, I noticed that Will was able to bring out a “playful side” in Rodney by having disagreements or conversations about difficult theoretical concepts. One particular day in class towards the end of the semester, Will and Rodney were having a conversation about a term,

“innovative realist,” that Will had coined himself after reading Dewey’s book Experience and

Education for his book review. Rodney was impressed with Will’s ability to tackle difficult theoretical terms, even though he challenged the use of the term in the paper and its relation to

Dewey’s ideas. The two debated the use of the term, and what it meant to be a pragmatist, for a couple of minutes while everyone else worked on their writing. After Rodney walked away chuckling

from his discussion with Will to work with another writer, I decided to talk with Will myself. In my discussion with Will, I asked about his coining of new words, talked about his paper, and asked if he had time for an interview. Will sat in the back of the class, so he was less visible than many of the others, and he was a busy student; for that reason, he told me, he had not been available for interviews, but that day he had time to talk with me after class. Will, like others in the class, was taking multiple classes and had outside commitments, so making time to talk with a researcher was difficult.

In the interview with Will, I asked him what kinds of reading and writing he did on a daily basis and he said he read “many different journals um, so like neuroscience journals, psychology journals, um educational journals, stuff like that um, I read a lot about human, like development so like neurosis um, ah” or, as Will put it, “nerdy things.” Will also said he wrote poetry and had kept a journal since he was thirteen years old. As a hyper-literate individual45, Will appreciated the value of the portfolio in education, relating many of his ideas about class and the portfolio to Dewey’s work. He said of the portfolio:

I see it as kind of the application of a lot of what Dewey is talking about in a

certain way where we are entering, okay I'm going to say it’s like a realm in a

sense because it is kind of outside a lot in what we are doing a lot in other classes,

um I see it as um, this way of giving students more freedom but while also

provoking more um, critical thinking which I think is the point of education, I

personally have benefited from this class cause I never really had put together a

45 In Amber Buck’s article “Examining Digital Literacy Practices on Social Network Sites,” Buck focuses her analysis on a hyper-literate individual, “Ronnie,” to show the intricate ways that Ronnie uses various social media differently to create and link his online identities. Will, the student in my study, also seemed to be a hyper-literate individual who used literacy in various ways in his own life to define who he was.


larger portfolio … but the point is, is that it has opened my eyes to kind of how, I

don't know I think it is a better approach to teaching ultimately.

Will seemed to have an understanding of portfolios and assessment that was slightly different from that of others in the class. Dewey’s book also had an effect on his outlook and reactions to response. Will thought that Dewey and Rodney were similar kinds of teachers and thinkers and referred to both of them as pragmatists. Will enjoyed academic discourse and intellectual stimuli. Anna, in a conference with Rodney at the end of the semester, explained to us that in a psychology class she had taken with Will, he often challenged the teacher about topics they were discussing in class. This challenging of teachers is the same thing I observed in Rodney’s class and was what prompted my talking with Will about a possible interview. I did not confirm Anna’s story about Will with him, since our interview occurred prior to Anna’s conference with Rodney, but “challenging” teachers seemed to fit his profile.

Table 37 presents the codes from Will’s interview. Although there was not a second interview with Will, many of his percentages are comparable to those of many others in the class, like Past Experience and Teacher Expectations. Will differs, though, in the fact that he entered the class as a writer, with only .9% Student Talk (Kara reports the same percentage and will be discussed later in this chapter) and 1.7% Grade Talk. Will was comfortable with and used to the role of a writer, which is reflected in his data.

Table 37: Breakdown of Will’s categories for Content of the Discourse

Student  PE   TE   ST   WT    GT   SA    TIM  MINB  RC    RUC  RAD  TO   INT   VR
Will     5.2  5.2  0.9  15.7  1.7  20.0  6.1  0.0   14.8  1.7  6.1  9.6  12.2  0.9

Astonishingly, Will, like Chloe, waited ten days before reading the response from

Rodney. I asked Will why he waited ten days and he responded:


I have to prioritize, because like I'm doing internship I work full time pretty

much, I'm moving because the city of Kent are buying our property. I um have all

the other classes and then my freaking siblings and my family are like you know

not in a good stable place and I don't live with them so it’s like I have literally a

lot of stressors and it’s like I need to work on it only when my brain is ready. Like

I can't even waste my time trying to work on stuff unless I'm ready.

The small amount of Student Talk (.9%) from Will is because he was a writer, with 15.7% of the discourse coded as Writer Talk. Will’s behavior illustrates curiosity about and openness to materials and ideas from class. For example, in our interview, Will told me that he had read reviews of the Dewey book for his book review as well as reviews of earlier books of Dewey’s. Will did this at his own discretion to understand more about where Dewey’s ideas had come from. Will explained this was how he approached many of his classes and assignments. He said that to see where someone is now, you have to see where they came from.

In the interview, after Will had sent me his paper and comments from Rodney, he quickly “hijacked” my computer. I asked Will to read and respond to what Rodney had said to him in his draft, and rather than using his own computer he used mine to jump back and forth from his response to his paper. Will jumped between documents utilizing keyboard shortcuts faster than I had seen before. With my computer, as well as his own text, Will worked with authority, taking control and guiding our discussion. From the Taking Ownership (9.6%) and Interaction (12.2%) categories, as well as Response Clear (14.8%), it was evident that Will knew what to do with response. During class, when Will had “time” to work and think about his writing, he immediately started revising and had worked through a few pages of his paper prior to my interviewing him. One of the points Rodney made about Will’s book review said “this is a large

claim – you need a cite and/or a sentence or two in support of your claim.” In his paper Will wrote that Dewey was the “co-founder of Pragmatism and is referred to as the founding father of

‘Progressive Education.’” Rodney’s comment was asking Will to support this claim about

Dewey. Many of Rodney’s comments were similar to the one mentioned above about pragmatism, asking Will to explain, support, or clarify. Another of Rodney’s comments told Will that there were many different types of pragmatism and offered an example of one. The comments Rodney gave Will were aimed at getting him to explain and expand upon his ideas to make them less vague. Will had been working on fixing the paper prior to our interview and told me “I think I changed it. So he once again talks about pragmatism in my review. I guess I was kind of vague and then he talks about bringing up who is sometimes referred to as the father but actually they are both equally credited with the, um, coining of that philosophical school.” In

Will’s comment there is evidence to suggest that he was taking charge of his writing and making changes based on Rodney’s feedback. The feedback seemed to be clear for Will, because during our interview he talked with authority about Dewey and pragmatism, with ideas for how he was going to rework his text.

Will received two pages of response from Rodney, but exhibited 0% for the More Is Not

Better category, seemingly uninfluenced by the amount of feedback. Others in the class, Anna and Chloe, saw more feedback as a bad thing, but Will seemed to immediately know what to do with Rodney’s feedback, as is evident from his starting to revise his paper in class after he read the feedback. Will’s Self-Assessment in the interview was 20%, an incredibly high percentage in comparison to other students in the class. Since Will identified as a writer, he seemed not to shy away from teacher comments and interacted with those comments to assess his own writing and ideas. Will said that he understood his writing was “not ready for all the, the scrupulous critiques

that you would receive from publishers or different things, it’s just not ready” and he understood the response to help make him an “authority” on an idea. Will understood what he needed to do in his paper to make it better, and said without fixing it that “it’s not ready.” Huot (2002) notes that writers assess themselves and their progress more frequently than student writers do, which is what I observed in Will. Most of the response made sense to Will; the category of Response

Clear (14.8%), in comparison to Response Unclear (1.7%), shows as much.

For nearly the entire interview, Will talked about the response with authority and challenged some of what Rodney was telling him. At one point, Rodney’s comment asked Will where he got the term “innovative realist” from and said “you should cite a source or give your reasoning for the label. However, you should only keep the label if you plan to use and refer to it later in the review.” Will responded to the comment saying “the term innovative realist which is not what I used, it was innovative liberalist.” After laughing about the comment and explaining what he meant in his paper, Will said, “I think that was just him skimming.” Will appreciated the response he was getting and thought the textual interactions were a way to bring him closer to where he wanted to go with his writing, with the feedback acting like a guide.

Similar to Chloe, Will used the revision memo to continue his textual interaction with

Rodney’s feedback. Rather than choosing to revise his whole paper like Chloe did, Will went through all of Rodney’s bullet points to explain or rationalize decisions in his paper. Many of Rodney’s comments were geared toward reorganizing or clarifying certain aspects of the text and the words that Will was coining for his paper. In Will’s memo, he either specifically explained what he did to change the text or wrote “Check” next to the feedback. For Chloe and Will, the memos granted them agency over their papers and created spaces where they could address or talk back

to teacher commentary. For example, as I noted earlier, much of Rodney’s feedback was trying to get Will to explain or expand upon ideas of pragmatism. In one of Rodney’s comments to Will, he talked about pragmatism and different schools of pragmatism and ended by saying “You should either try to give some of the complexities of the term or abandon it-I vote for the later, but it’s your review.” In Will’s response memo to Rodney’s comment, Will says “I will allow pragmatism to be there solely on my belief that it is important to acknowledge when considering

‘a theory of experience’. There after removing any strict affiliation with Dewey and allowing

(hopefully) for the reader to grasp the theorem in relation to cause and effect.” Will’s talking back to the teacher in the memo indicates that he was in charge of his ideas and text and could make arguments about his own writing.

Kara

Kara46 was from my own class, so my understanding of Kara is obviously influenced both by having her in class and by the interview data that I have available. Kara was an English major who wanted to become a teacher after graduation. Her step-father was a guidance counselor, so she was very familiar with school (K-12) and school-based literacy. Being an

English major, Kara liked to read. She reported reading literature, Shakespeare, Buzzfeed,

Twitter, and the New York Times. Kara also indicated that she “get[s] lost in Wikipedia,” explaining the “Hitler theory,” where in a certain number of clicks on Wikipedia you always end up on Hitler. As far as writing, Kara said she writes “Tweets galore” (1st interview).

46 In Chapter 3, I explained that students from my own class were approached by an outside IRB-approved researcher and that the data was withheld from me until after grades were entered at the end of the semester. I feel it is important to mention this because of critiques that sometimes arise about teachers who conduct research on their own students. By having an outside researcher approach and interview my students, I can say that the data that was gathered is as truthful and comparable as any of the data gathered from the three classes.


When asked what she thought the portfolio was, Kara said, “I like it, uh it’s like a lot of people in the class are feeling anxious about not getting grade but I actually like that, I like not worrying about it, um I just do the assignments like however I want like I'm not worried about making sure my grammar is good” (1st interview). When Kara was asked why she was not concerned about the grade she said, “when I write like papers I feel like I have a better chance at doing well because it's kind of like how I feel and like especially in the classes I'm in now where it's like creative writing where I don't know how you could really get a bad grade in that class so I'm not worried about that at all.” Kara did not seem to worry about grades; she said that

even the first day of class Curt made it pretty clear if you do the work and if

you're actually trying and it’s evident that you are trying then you are going to do

fine so I've just kind of been doing the best I can like trying to talk in class and

like get the most out of what we are reading and yeah I really don't worry about it

too much [grades] like people like panicked when we like I'm not going to tell

you your grades but I'll like he made it pretty clear that like if you do what you’re

supposed to do then. (1st interview)

Kara’s attitude seems to stem from the fact that she, like Will, had a strong relationship with written text and “print code” awareness (Hartwell, 1980), which can also be looked at as academic literacy. Kara was familiar with written response and how to use it. Her attitude can also be attributed to her engagement with the class and materials. In Chapter 3, I mention that the classes Rodney and I were teaching were educationally themed courses focusing on literacy and learning. For Kara, who was planning to become a teacher, the class was a blessing in disguise because she was able to grapple with and explore issues in the education system before she did her pre-service teaching. She told the interviewer in the 1st interview that


this class is blowing my mind, like every day I'm like oh my gosh like I learn

something new or not even something new but like we elaborate on stuff that yea

his comments definitely every time he is like think about this why did you say

this, what do you think we can do about this and I'm just like shit like, he's

blowing my mind like I'm just like and then I try you try to elaborate but it’s like

eeeeh, he challenges.

Although it was not my intention to “blow” anyone’s mind in class, it seems that the responses helped some students in my class understand education in new ways. Kara was available and open to learning in the class, which ultimately had an effect on her ability to understand and engage with response.

In Table 38, we can see that Kara’s 2nd interview shows decreases in Past Experience (-77.4%), Grade Talk (-100%), Response Clear (-7.45%), Response As Dialogue (-25.7%), and Verbal Response (-100%) and increases in Teacher Expectations (+23.8%), Student Talk (+), Writer Talk (+235.7%), Self-Assessment (+77%), Taking Ownership (+184.6%), and Interaction (+). What is interesting in Kara’s data is the absence of Response Unclear comments, which implies that the response Kara was receiving was understandable to her. She also made no reference to the timing (Time Is Money) or the amount (More Is Not Better) of response she was receiving. As with Will, the size of the response seemed to have little effect on her overall interpretation of the comments she was receiving. I surmise that since Kara was an English major as well as an aspiring teacher, she was already walking partway in the same “affinity group” (Gee, 2007) that I am in as a teacher, which possibly explains her reactions to response.


Table 38: Breakdown of Kara’s increase/decrease categories for Content of the Discourse

Student   PE     TE    ST    WT     GT    SA     TIM  MINB  RC     RUC  RAD   TO     INT   VR
Kara      15.5   4.2   0.0   7.0    7.0   11.3   0.0  0.0   28.2   0.0  7.0   5.6    0.0   14.1
Kara (2)  3.5↓   5.2↑  0.9↑  23.5↑  0.0↓  20.0↑  0.0  0.0   26.1↓  0.0  5.2↓  14.8↑  0.9↑  0.0↓

Kara exhibited no Student Talk in her 1st interview and very little in the 2nd (.9%). In the 2nd interview she also had increases in Writer Talk by 235.7%, Taking Ownership by 184.6%, and Interaction from 0 to .9%. These numbers suggest that response painted a clear picture for her of what to do and how to get there. Many of my comments to Kara were geared towards getting her to expand her arguments or to make her prose more reader-based instead of writer-based. For a paper she was writing using statistics, I asked her to “break this data down a bit for us, what do you see in the data? How is a reader supposed to read it?” Kara explained this comment to the interviewer saying “when I reread it, it does like it kind of confuses you” (2nd interview). Kara was able to take these comments and think differently about areas in her papers.

In interviews Kara talked about ways that she could bring in other forms of evidence, based on my feedback, to make her ideas stronger. Kara also disagreed with feedback and rationalized her decisions for not doing what the response said. In Kara’s first interview she talked about her book review and one of my comments that asked her to elaborate more on one of the chapters. Kara told the interviewer “I was like ehh, he asked me to elaborate on a chapter and I was like, to me he didn’t read the book but to me that chapter was not important…

I didn’t elaborate on this chapter because I didn’t think it was important to the person reading the book review to like know more about that chapter.” Kara took ownership of her paper and could disagree with teacher commentary based on her own judgment of the text. From conversations in class with Kara about her writing, I could tell that at times she was frustrated with her writing or possibly my comments. In class during workshop time, Kara would frequently ask me

questions about her writing and my feedback. During these conversations, Kara would regularly sigh and show signs of frustration with ideas she was thinking about or with things I had told her, but regardless of any frustration she was dealing with, Kara seemed to be able to engage with response.

In the 1st interview Kara indicated she did not want to “be like a kiss ass,” since most of her reactions to the class were positive, and told the researcher that

I'm really trying to get through to this guy you know cause I really think he is a

gen, like a really smart guy and I just really want to pick his brain but at the same

time like we just talked about my research paper and he just kept throwing stuff at

me and I was just like, like I have no response like I'm just like well he's like what

do you want to prove what do you want people to think and I'm just like I don't

know, I don't know but I think that is one of the great things about the way he

teaches, like he's not afraid to tell you what he is thinking or if you're wrong or to

challenge you and I think that that's something that a lot of teachers just have lost

you know, the spark to try to push students you know. I don't know I think he is

more concerned about learning than like his own than himself… (1st interview)

Later in the interview, Kara told the researcher that she had hoped to take College Writing II with the same professor who had taught her Shakespeare class because it was easy and she did not have to read anything. Kara was unable to take that professor’s section of College Writing II because it was full, and said signing up for my class “was like a blessing in disguise because now

I hate my intro to lit teacher and I'm like if I had him for my college I would lose my mind and like I really think that it was a blessing in disguise” (1st interview).


I chose to highlight Kara’s interactions because I believe our relationship partially influenced the way that she interpreted the response. The same kinds of successful interactions with response, built on relationships, can be seen in Will and in Cory, another student from Rodney’s class. However, Cory did not get the response in the same ways that Will or Kara did, even though all three students were intrigued, engaged, and committed to the classes, having faith in relationships with the teachers. Student-teacher relationships help students develop a “situated identity” (Gee, 2007), allowing them to become more receptive to assessment. Through student-teacher relationships, students can see themselves as learners and writers. I also highlight Kara’s interpretation of the class and response because my next case study, Holden, had a much different reading and interpretation of response than Kara.

Holden

Holden was also a student from my class, but he was less interested in school than Kara and more interested in being outdoors. He was a geology major who planned on going to graduate school to obtain an advanced degree. Holden said that he read the news, CNN's website, or scientific publications like National Geographic. As far as writing, Holden reported that he mostly wrote text messages and Facebook posts. He said that, being a student, he was too busy for other types of reading and writing, but he liked playing video games like Grand Theft Auto, first-person shooter games, and Minecraft. In the class Holden wrote all of his papers on different aspects of video games being educational, but at the same time he felt restricted by the topic of his choice, saying “I mean I've been doing stuff about video games all semester and I can only write so much about video games” (2nd interview). Holden’s “restriction” can be seen as a type of “sponsorship” (Brandt, 2001) that was limiting him and possibly holding him back from certain elements of the class. Holden transferred to the main Kent State campus after

having attended a community college. Holden was not a student who liked traditional schooling, and he had attended a trade school during high school.

When Holden was asked what he thought the portfolio was, instead of answering the question, he focused on some frustration he was having in the class with the different types of writing:

I liked the idea of it [portfolio] at first because we aren't graded on every

assignment and he gave us these journals and I guess the journals were a way for

us to get ready for the big writing assignments that we were going to do and he

kept telling me that my journals were good and my progress is showing and I was

like all right good so I'm doing good and then I wrote my book review and he

pretty much, I feel like he kind of butchered it a little bit and up until that point I

really liked the idea of the portfolio but now I'm a little worried about it … (1st

interview)

Holden’s frustration with the difference in response from journals to formal assignments seemed to have an overall effect on the way that he was able to read and interact with response. In my class (as well as Rodney’s) the students were required to write three-page journals every week responding to things we were reading and discussing in class. Every week in his journals Holden would write various things about the text and would often “talk to me” about aspects of our class that he was not happy with. The most common thing Holden would ask about in these journals was other students. Holden wanted to know why a lot of other students did not want to talk in the class; it was frustrating for him, which became more evident in the 1st interview when he told the interviewer how he thought it was “disrespectful” that others did not


“speak up.” Holden felt that I held him to a different standard than others because of his participation level and expressed some nervousness talking about his writing by saying:

I mean you should have a rough idea of what your grade is, you know if you are

turning in crap and you know if you’re not turning in crap and he, he doesn't

grade on one rubric he, I mean he evaluates peoples papers based off of that

individual, I mean he knows that I mean I like to think that I'm one of the more

um active participating people in the class and I think that on this book review I

honestly think he was a little bit harder on me than other people because we are

working on the stats paper and you know he's reading along on mine and like he

told me to change something so I did and then he came back and said all right so

now that's good but now add a graphic to it and he told me to put like a pie chart

in and stuff and I don't think he told anybody else to do that so I mean I don't

know. (1st interview)

Holden thought my response to him was different from my response to others in the class because of his position in the class, and said “he just expects me to do a little more because I've, there have only been 2 or 3 students that actually speak and say something every class and I think I speak a lot more than anybody else does and I don't know, sometimes I feel because of that he is almost holding to a little bit higher of a standard” (1st interview). Admittedly, I did think highly of Holden, and therefore I pushed him by providing more response. Much like Rodney engaged Chloe as a writer, I engaged Holden in the same way by providing him with response that a writer would receive. The difference between Chloe and Holden is that Chloe knew what the response was asking of her as a writer, but in pushing Holden to be a writer, my response had the opposite effect of what I intended. Therefore, my response acted as a negative “sponsor” for Holden that may have inhibited his development as a writer. Holden did well in the class, but I believe the transition to a delayed grading class was difficult for him because he was too used to assuming the position of the student and was unable to make the transition to a writer. Holden exhibited zero Writer Talk in the two interviews, possibly because he saw himself as a “special student” in the class, not a writer. Holden’s reliance on grades limited his ability to utilize response because he misunderstood what my comments were saying and interpreted the response as a bad grade.

In Table 39, we can see that Holden, like many others who participated in multiple interviews, had substantial increases and decreases. Holden had decreases in Past Experience (-

42.3%), Teacher Expectations (-33.7%), Student Talk (-30.2%), Grade Talk (-100%), More Is

Not Better (-55.8%), Response Unclear (-39%) and had increases in Self-Assessment (+), Time

Is Money (+107.2%), Response Clear (+163%), Response As Dialogue (+489%), Taking

Ownership (+100%), Interaction (+211.8%), and Verbal Response (+). The most puzzling aspect of Holden’s interviews is that for both interviews he exhibited zero Writer Talk.

Table 39: Breakdown of Holden’s increase/decrease categories for Content of the Discourse

Student     PE    TE     ST     WT   GT    SA    TIM    MINB  RC     RUC    RAD   TO    INT   VR
Holden      2.6   18.1   17.2   0.0  8.6   0.0   6.9    8.6   6.0    25.9   0.9   3.4   1.7   0.0
Holden (2)  1.5↓  12.0↓  12.0↓  0.0  0.0↓  3.8↑  14.3↑  3.8↓  15.8↑  15.8↓  5.3↑  6.8↑  5.3↑  3.8↑

Regardless of Holden exhibiting zero Writer Talk in the interviews, the decreases in Past Experience, Teacher Expectations, Student Talk, Grade Talk, and Response Unclear suggest that he was becoming more comfortable with not receiving grades and more focused on writing from the 1st to 2nd interview. In the interviews, Holden, like Chloe, had emotional reactions to the response. From Holden’s point of view, I was being harder on him than on others in the class because he saw himself as a contributor to class, which in his mind entitled him to being cut some slack. In the 1st interview Holden said,


Ah, the biggest trend that I've noticed is in the journals he always told me this is

good, he'd either agree with my thoughts or sometimes he would disagree and he

would, he wouldn't really challenge me but he would say I see where you are

going with this, you're good thinking on this and I can tell that you are

progressing in nearly every journal he was like I can see that you are doing better,

your progress is showing blah blah blah, and then like I said the very first book

review he just boom (loudly) I just felt like I was butchered he's like why did you

have this, what's this, and just question mark question mark question mark and

just there was just a big gap in the way that he was grading it grading my writing

and I mean.

Both Holden and Chloe had emotional reactions to teacher response: Chloe’s reaction resulted in a “heaviness,” whereas for Holden we have “boom.” This was a sentiment that he echoed later in the 1st interview when asked about the response/portfolio, saying “Oh shit yea I've still got

3 more papers to write and I have to finish my Audacity project and it just went from 0-100 very fast, very slow very casual and he was grading he was responding very lenient to everything and then boom (loudly) shit hit the fan.” It is also important to note his language in the last sentence, where he referred to my response as “grading.” Holden, in the interviews, quite often talked about response as being a “grade,” which also explains why Holden had the highest percentage of

More Is Not Better coded, and he was the only person with two interviews to have More Is Not

Better in both sets. Ill-equipped to think about writing and school in ways other than grades, Holden had difficulty thinking about and rationalizing his progress in the class. It seems that a big part of Holden’s difficulty with response came from a conflicted relationship with the teacher.


In the 2nd interview, Holden’s interpretation changed slightly, although he seemed to still harbor feelings from the response he received on his book review. The change in Holden’s attitude possibly resulted from an interaction the interviewer had with him in their first interview.

The response that Holden discussed during the 1st interview was the response he received on his book review. Holden read a book by Steven Johnson called Everything Bad Is Good For You. This was a book that I had read ten years prior, while I was an undergraduate in a very similar course and position as Holden. I was also a non-traditional student who had not gone to college right away, and I also had an affinity for video games. One of my comments to Holden in my response was:

Also not sure how we can measure intelligence from the 30's till now since

everything is so different like culture and technology. Being exposed to tech

doesn't make us smart, but the way we use it can. Not sure if that is what Johnson

says and it might be outside your book review purpose but I had to go off on a

tangent because on one hand yes tech is great and can make things easier but it all

depends on how we use it, it can be used for good or bad just like the tech and pop

culture we are being introduced to, not everything is equal.

I frequently made other comments on Holden’s review similar to the one above, just thinking about ideas in the book. The researcher read the comment above, where I say I go off on a tangent, to Holden and asked him, “So how do you interpret that? I mean he says not, well with this part here like but I had to go off on a tangent because on one hand yes, I mean how does that, how do you interpret that in light of the rest of the comments or the response?” Holden responded by saying “Um, I mean yea I guess that's kind of an obvious one, I have a sound system in my car but using it to slam music isn't going to make me any smarter, that's a good

one” (1st interview). Since Holden had not quite answered the question, the researcher followed up: “Fair enough

(laughing), I guess in the sense and in part of what I'm asking is this not sure if that is what

Johnson says and it might be outside your book review purpose but I had to go off on a tangent because, I mean does this, does that make this a response that is oh my god I have to address this like the first one right.” Holden did not see that my response was merely a suggestion trying to engage him in thinking about Johnson’s ideas more critically, and he seemed to be overloaded by the amount (More Is Not Better) of response that he received. I fear that the length was overwhelming for him, as it was for others in this study. With response in the past being directed at grade justification, it is hard for students not to see “lots” of response as a bad thing, which is possibly part of the difference between writers and students. When writers get “lots” of feedback, they may not like it at times, but they are able to engage with the feedback to address the comments and concerns of their audience to produce a stronger draft. For students, on the other hand, “lots” of feedback means more to fix, and for many that can be overwhelming.

From the researcher’s interaction with Holden, it seemed that Holden started to view the response as suggestions rather than prescriptions, allowing him to develop a “take it or leave it” mentality with some of the response, as can be seen in the increases in the Taking Ownership (+100%) and Interaction (+211.8%) categories. Even though Holden did not identify as a writer, he started to speak with more authority over his text and felt ownership over his ideas. At the end of the second interview Holden discussed his paper using statistics, and said “in this I’m actually making a statement you know and the book review you weren’t really making a statement, you were like well I read this book, it was okay, this is what it was we did this we read about this like but with this I’m actually going through it making claims about different studies and trying to put them together to support something.” His take it or leave it mentality allowed him to engage with the response, taking agency over his ideas and what he would or would not do in the paper. The

Response Clear comments also went up 163% and Response Unclear went down 39%.

Another piece of evidence that suggests Holden’s perspective on response changed can be seen in his revision memo for the book review. In the revision memo Holden extended his textual interaction with teacher commentary, explaining why he chose not to revise his book review (mostly for time) but detailing in two pages what he would have done to revise the paper. Many of my comments on Holden’s book review were me discussing ideas with Holden about the book or asking him to elaborate on ideas to make them clearer. There were also certain points in Holden’s book review where I commented on him inserting his opinion and suggested he remain objective in his review of the book. In Holden’s memo he said

I feel that throughout a majority of the paper I was somewhat vague, and this is

because I had a hard time connecting to the book when I read it. I feel a lot of my

essay was very simple. If I were to revise my essay I would talk a lot more about

the principals that he is trying to represent throughout the book. I would also talk

about the Flynn effect and I would try to connect it to his examples and writing a

little more than I did in my previous draft.

He went on to talk about remaining more neutral in the paper as well, noting that there were a couple of places in his text where he inserted his own opinions, and said “I now realize that it isn’t about my opinion of the book, it should be about what the book talks about and how it talks about it.” Although Holden did not revise the paper, the ideas he presented in his revision memo address the majority of the comments I left on his paper, with Holden talking about how he would have revised his paper if there had been more time left in the class. Holden’s revision memo presents some evidence that, given time to think about and digest the response, he was able to engage with it to think about ideas in his paper differently. The amount of effort and thought that Holden put into the revision memo clearly indicates that he was able to understand how to fix certain aspects of the paper that were not clear or that needed elaboration.

Ryan

Ryan was a student in Francis’s class, an upper-level writing course, where the majority of response students received was oral. With only eight students in the class, Francis frequently utilized mini-conferences with students to give them quick feedback. In the class, Ryan was one of the largest producers of text47. He often had new drafts of writing before Francis could respond to the previous one. Ryan was a history major who wrote long papers for his other classes. In his free time Ryan liked reading Tolkien novels, but he also reported that he read scholarly books on Abigail

Adams and Rockefeller. Ryan planned on going to law school after he finished at Kent. When I asked him about the portfolio, he said that

I just thought it was a just a basic rough collection of all the assignments, I didn't

know it was almost like an argument of what almost of what you learned kind of,

um I didn't know it was going to be so interconnected. I thought it was just going

to be all of the writing assignments that you did, put it in a folder and that’s that.

Um, I guess the big not surprise that I kind of discovered was the cover letter. I

think the cover letter is the link that connects everything together. And I think,

cause I have never done a cover letter like that before, like I have done a cover

letter for a job but this seems completely different so, so yea that's my big thing.

The day I interviewed Ryan was the day Francis handed out a portfolio checklist to the

47 Ryan submitted an 83-page portfolio, his final paper being 30 pages by itself. Most other students in the class submitted 15-69 page portfolios, with most in the 30-40 page range. Ryan had the largest final paper, more than double the length (14 pages) that was required.

students, explaining to them the different artifacts that they needed to include in their portfolios. One of the artifacts Francis asked for was a cover letter, which he explained as an argument and a map for how he should read their work. The cover letter had an impact on the way Ryan thought about the portfolio. Originally Ryan thought the portfolio was “a basic rough collection of all the assignments” that you “put it in a folder.” It is not surprising that Ryan thought the portfolio was papers in a folder, especially since assessment is not typically a topic of discussion in most classes. Although Ryan reported no past experience with portfolios, he seems to have had a preconceived notion of the portfolio despite Francis’s explanation of portfolios to the class. At one point in our interview, when talking about the final portfolio, Ryan said “I have a folder that I usually use for this kind of stuff.” If not for the cover letter, Ryan would have associated the portfolio with a folder and never thought about the portfolio as an argument.

In the interview, Ryan also talked about the different point values that he believed the parts of the portfolio would represent. He said the final paper was probably 50% of the grade and the cover letter 10%. Using the portfolio checklist Francis distributed to the class, Ryan tried to make sense of the grade and the portfolio. On the checklist, Ryan had written many comments, quickly filling up the margins with things Francis said in response to questions the class had. As can be seen in Table 40, 14.2% of Ryan’s comments were focused on Grade Talk. Ryan also had a high percentage of Teacher Expectations (18.6%), Writer Talk (18.6%), and Self-Assessment (18.6%). The high percentage of Writer Talk can be attributed to the amount of text that Ryan was used to crafting for his history classes. For Ryan, writing was “you just doing it.”

The final paper for Francis’s class was supposed to be roughly fourteen pages, with many of the students in the class expressing to me in interviews they were worried about making the fourteen page mark. Ryan did not have the same worry of making pages, but instead was worried

that his 21 pages, with more still to write, would go over the “max.” Ryan indicated that he wanted to check with Francis about the size of the paper because, he said, “some professors will dock you if you go over it cause I think that he limits 14 for the class. I'm not sure if that's right.” Ryan's concern for the portfolio and class was more about going over the page limit than about the content of what would be included. Ryan was a writer who knew how and what to write, but because of constraints other teachers had put upon papers (page limits), Ryan was not comfortable taking certain liberties and writing a longer paper if it would hurt his grade.

Table 40: Breakdown of Ryan's categories for Content of the Discourse

Student  PE    TE    ST   WT    GT    SA    TIM  MINB  RC   RUC  RAD  TO   INT  VR
Ryan     8.8   18.6  5.3  18.6  14.2  18.6  2.7  0.0   5.3  0.0  2.7  0.0  0.0  5.3

Ryan’s concern for grades also bleeds over into the Teacher Expectations comments.

When I talked to Ryan about the portfolio checklist, he said that what Francis gave them was

“kind of vague I just wanted a little more specifics of what he, exactly what he wants.”

Throughout the interview, Ryan made multiple references to what “he wants” in regard to the portfolio. I asked Ryan if it was important to give Francis what he wants, and he quickly replied,

Of what he wants? Sure he is grading it. Um and you know, I think that is why

people asked I noticed in class everyone asked what do you want for this what do

you want for that and he said well it is up to you, I think that we're not just used to

that kind of thing, you know in almost every class they like you know since they

are giving the grade it’s what they want not what, doesn't matter what you wanted

because you're not grading it.

Ryan's comments and concern with what “he wants” are the opposite of Will's and Kara's, who seemed to be less concerned with what the teacher wanted and more focused on their

writing. Even though Ryan exhibited high percentages of Writer Talk, the highest in his class and higher than most participants, he was ultimately concerned about giving the teacher what they wanted. Part of Ryan's concern to give Francis what he perceived Francis wanted could have also come from the number of classes that Ryan had missed. According to Ryan, Francis sent him an email pointing out that he had missed quite a few classes and reminding him to look at the attendance policy on the syllabus. Ryan did not want to be docked points for going over his allotted absences, but understood that he would probably lose some points. Ryan was a senior in his last semester who had missed some classes to visit law schools, but he also admitted that some of the absences were him just blowing off class. The number of missed classes seemed to have slipped by him, as had the note on the syllabus explaining that students would be docked for missing too many classes.

Although I would classify Ryan as a writer because producing text seemed fairly easy and enjoyable for him, he was highly fixated on a grade, trying to figure out what sorts of things Francis was looking for and how much everything was worth. Ryan was a confident writer who knew how to think about audience and was able to accept and interact with feedback efficiently. In a peer review session, Ryan had worked with another member of the class, and he said she made a comment on his paper asking him to focus more on the “feminism thing because women would like you know seeing that kind of issue um but yea I mean you don't really see people debating Abigail Adams in the news like how much of an impact that she had.” I asked him if he was worried about working that feedback into his paper, and he said,

I'm going to try to focus on that a little bit more, out of the three that I have I have

her influencing on women’s rights her influencing John and the continental

congress and during his presidency and I think out of those three I think I'm going


to expand the feminism one the most because that’s what she, she’s known as

that. I mean she is also known to help John but I think her big thing is her being a

women’s rights advocate so um, yea I'm going to try to focus on that to try to pick

up what Tina wrote.

Ryan seemed to have a firm grasp on what he was going to do to address his reviewer's comments. Subsequently, during an in-class mini-conference, Francis also talked to Ryan about possibly incorporating and expanding more ideas about feminism into his paper, and he brought up Spartan women. Ryan's paper was about Abigail Adams, and in his paper he wrote about

Adams making references to the ancient Greeks and Romans. Francis talked with Ryan about ideas of Spartan women, which Ryan was unfamiliar with, providing him with a deeper way to make arguments in his paper. In our interview Ryan said, “Um his idea of the Sparta, I don't know if you heard of what we were talking about but how women were in Sparta I didn't know about it so I mean that is maybe going to be another paragraph I can add into it that will help.”

Looking at Ryan's revised research paper, I found that he did incorporate ideas of Spartan women, as well as feminism, into his paper. Ryan was receptive to feedback, and whenever he received any, either from other students or from Francis, he always seemed able to think about and incorporate their ideas into his writing. Ryan was focused on accommodating his audience's needs in his writing and expressed little worry about being able to fix his writing. Ryan was a writer who knew from experience what to do and how to do it, but he was also highly motivated by grades and by giving the teacher what they wanted.

Summary of Case Studies

The students from the three classes can all be seen to exhibit similar trends in their reactions to response. Some of the students, Anna, Chloe, Holden, and Ryan, were concerned

about grades and tried to make sense of response by thinking of grades, whereas Will and Kara worried less about grades and being graded and focused more on their writing. Holden and Chloe both exhibited emotional reactions to response, where the teacher's comments were misinterpreted because of the way the comments made them feel. Both Rodney and I tried to engage Chloe and Holden as writers, but our response seemed to have the opposite effect of what was intended. Some participants also seemed to identify more as students: Holden identified only as a student, Chloe was about half student and half writer, and Anna was slightly less student and more writer than Chloe. Others, like Will and Kara, almost always identified as writers, and Ryan, whom I would also classify as a writer, nonetheless exhibited Holden, Chloe, and Anna's dependence on grades. While there are similarities in the types of interactions students had with teachers' response, it is important that we take them into consideration within the classroom context in which they were received. The three classes from my study used delayed grading portfolios and were writer- and writing-centered; grades were postponed. Grades were given in these classes as the teachers' employment required, but the pedagogy in these classes focused on teaching writers. Although the students brought different ideas with them to the interactions, it is important to note that these classes used formative assessment, allowing students the opportunity to interact with the teachers' feedback.

By presenting a case study format to describe the “assessment events” students had in the classes, I strengthen my argument for linking together theories of literacy and assessment, bringing the students' point of view to light and showing how formative assessment and “literacy events” create interactions. Gee (2008) explains that literacy and language only make sense within “Discourses,” which are “ways of behaving, interacting, valuing, thinking, believing,

speaking, and often reading and writing” (2). To Gee, language and literacy do not make sense outside of context, because language and literacy are socially situated activities. My case studies provide a contextualized account of students' engagement with language and literacy within the writing classroom to answer my research questions, and they highlight how delayed grading portfolios and formative assessment create spaces for teacher and student engagement.

Research Question 1: Is response able to act as and fulfill the conditions to be considered a

“literacy event” in a portfolio classroom that uses delayed grading?

“Literacy events” are “any action sequence, involving one or more persons, in which the production and/or comprehension of print plays a role” (Heath, 1983b, p. 386). A “literacy event” involves either the production or comprehension of text. At their most basic function,

“literacy events” are about participants' interactions with written text. From Heath's studies, we see that people use literacy for different purposes, all of which are social activities structured around the ways that people use and produce text. In my study, the use of literacy comes in the form of teachers' response. Earlier, in Chapter 1, I explain that any portfolio/assessment is capable of creating an interaction, but argue that not all interactions align with contemporary social theories of learning and literacy. In the three classes from which my data was collected, the teachers used formative assessment, which is also about interactions (Black and Wiliam, 2004).

The response students received acted as the catalyst for “literacy events,” because reading and interpreting response is embedded in a social practice. Based on the definition of a “literacy event,” response falls under the blanket of a “literacy event” when and if students read and interact with teachers' commentary. Barton (1991) explains that “literacy events” have integral roles in communicative activities, noting that even something like “leaving a note for a

milkman is a literacy event” (5). My research builds on the ideas of a “literacy event” to include

“assessment events” in college writing classrooms.

In answer to Question 1, response is able to fulfill the conditions of a “literacy event,” because some students in the classes were able to use response from their teachers to interact, allowing them to reconsider ideas in their papers. Considering response as a literate activity allows us to tap theories of literacy, specifically Street's (1984) notions of “ideological”

versus “autonomous” literacy. In an “ideological” model of literacy, the focus is on the social practices of reading and writing that participants do in a particular context and setting. In contrast, the “autonomous” model of literacy looks at literacy as an isolated trait and then claims to be able to study its consequences. “Ideological” models of literacy are culturally embedded and stress the significance of the construction and usage of literacy in the context in which it was created. The literacy scholarship, which relies heavily on Heath's “literacy events” and

Street's “ideological” model, shows that literacy functions and is transmitted in different ways.

Any act of reading and writing that allows and encourages interactions can be classified as a

“literacy event,” but not all tap into theories of “ideological” literacy or can be used to make new meaning. Bardine, Bardine, and Deegan (2000) found that students look at the grades on their papers before looking at the response, oftentimes spending little to no time reading the teacher's comments. They also found that students viewed response as a way to get a better grade, a view that Alice in my study expressed about her previous experience with portfolios.

When grades are used, the types of interactions students have with the text are limited and the focus becomes the grade rather than the writing. Written comments on a graded paper could be considered a “literacy event,” but only if students are able to interact with the

comments to achieve something. Connors and Lunsford's (1993) study of teacher commentary found that 59% of teachers' comments were directed towards grade justification, with only 11% presenting evidence encouraging revision. They also discovered that 75% of the papers they analyzed had some sort of grade attached, often with no other written commentary. Connors and

Lunsford state “the overwhelming impression our readers were left with was that grades were implicitly—or often explicitly—overwhelming impediments both for teachers and for students”

(208). Looking at Connors and Lunsford's findings, it is hard to see how grades on papers can lead to meaningful interactions, especially since only 11% of the papers had any comments geared towards revision and since most of the research on revision shows that students do not revise past surface-level errors (Faigley and Witte, 1981). Grades on papers limit or prohibit the types of interactions students can have with commentary, and they can fall into an “autonomous” model of literacy where the written word is privileged and seen as having value in and of itself.

Brian Street (1995) argues that “autonomous” models of literacy do not “lift those who learn it out of their socially embedded contexts” (79), but rather suppress students under the ideology and social control of the teacher's class. In “autonomous” models of literacy there is no interaction, so in classes where teachers grade papers, they limit and restrict the types of literate activity students can have with their writing, because grades reduce the student to a passive recipient rather than an active negotiator of meaning. Sadler (1989) states that summative assessment (grades) is a passive activity that “does not normally have immediate impact on learning, although it often influences decisions which may have profound education and personal consequences for the student” (120). Grades emphasize the written product rather than the process of writing, and although grades are a complex symbol system used by certain people, they are not a transportable system. Grades are contextually bound to the classroom. Writing

classes and instructors vary widely in their approaches and materials, but by using grades to represent students early in the semester, we are shortcutting values. In keeping with social theories of learning and literacy as well as process pedagogy, delayed grading portfolios along with formative assessment allow teachers to tap into powerful theories of literacy that help students focus on writing while taking into consideration the local classroom and the contextually bound nature of writing and response. In the classrooms I studied, the lack of grades seemed to expand the possible types of literate activities students could have with written commentary rather than limiting or prohibiting these interactions.

Research Question 2: Are students able to fully participate in the literate activity of

assessment that portfolios and delayed grading promote?

The answer to Research Question 2 is difficult because it is both yes and no. My research suggests that many students are able to participate in the literate activity of assessment that portfolios and delayed grading promote, but not all students are able to fully participate or interact with response in the same ways. The majority of students from the study were able to interact with response to think about their papers differently, but some students were resistant to the teachers, the response, and not receiving grades, or were ill equipped for being treated like writers rather than students. In discussing this question, I open up a new area of potential research into the relationships that must be established before we can fully understand the implications of this research question as well as others in my study.

What I discovered through my analysis of students' voices is that their interpretation of response is often derived from what role they see themselves playing in the class and their relationship to the teacher. For example, as part of my observations, Robert, a student in

Rodney's class who was seriously considering dropping the course, missed a class, and on the day

that he returned to class, Rodney patted Robert's shoulder and told him that they had missed him and his insights. I observed this interaction in class because, on the day it happened, I was sitting at the table next to Robert. The sentiment was also reported to me by Robert during our 2nd interview together. Robert had a negative outlook on the class prior to Rodney's praise, which seemed to dissolve after Robert realized that Rodney valued him and his insights in class.

Ultimately, Robert's ability to interact with response came from how and where he felt he fit into the class. The same can be observed from Holden, in my class, in the case study I presented earlier on him. Holden thought that since he participated in class, my response was harder on him than on others, and he regretted talking in class because he felt he was being held to a higher standard. Admittedly, I did hold Holden to a higher standard, because I thought he would know what to do with the response, but it seems that both of us had different expectations based on our relationship.

Students' ability to make the most out of their response can partially be attributed to the relationships that they had in the class and with the teacher. Literacy, a highly socially dependent activity, is affected deeply by the types of relationships that individuals have with each other. In Heath's examples of “literacy events,” we can see how literacy was transmitted and used by various individuals based on their relationships with others. The types of activities individuals took part in were often shaped by social situations and had rules and requirements for participation. For example, in Heath's book Ways With Words, we are able to see the social roles that were constructed through literacy for individuals in the two communities,

“Trackton” and “Roadville.” In these two communities, there were different roles that parents, teachers, neighbors, or friends constructed based on socially acceptable norms. School and grades have also constructed certain rules and requirements for students throughout their lives, so

it comes as no surprise that many students in the study based their interpretations of response on Past Experience, a notion also confirmed by O'Neill and Fife (1999).

I believe students are able to interact with response based on previous experiences with written comments and grades, or through the relationships they develop with their teachers. Will and Kara are good examples of this. Will used response almost immediately because he identified as a writer, but also because of the relationship he had with Rodney, in which he felt comfortable enough to challenge and question him. The same was true for Kara, a student from my class, who came from an “affinity group” (Gee, 2007) similar to mine and had a high opinion of me and the class.

My research and data suggest that students who were able to relate to the response, either through past experiences or through their relationships, seemed to make sense of response more effectively.

Many of the students presented in the case studies exhibited intellectual curiosity and determination, which provided them with the tools needed to interpret response. These students were open to the class, teacher, and materials. However, I believe the social nature of school and learning (Rose, 1989) influences students' ability to use and interpret written commentary. John and Arthur, students who were not interested in the class, spoke with uncertainty about the class and teachers in interviews, and were unable to interact with response from the teachers in the same way as others.48 My hypothesis is that John and Arthur were not able to engage with response because they were unable to socially situate themselves in the class and to do the work required of them. From my class observations, I can say that both John and Arthur missed classes frequently, which leads me to believe that their failure to do well in the classes can be attributed

48 Both John and Arthur barely participated in or attended the classes. They both submitted portfolios but seemed to have done very little with their teachers' commentary. As part of my IRB and research design, I had access to all of the students' portfolios from the classes, and in looking at what John and Arthur submitted, it is clear that they were students who were unable to utilize the response or chose not to. Although my study did not analyze students' revision, it is clear from looking through what John and Arthur submitted that their portfolios were some of the weakest in the classes, both in content and in product. Arthur and John did not revise their papers for the final portfolio, which suggests that there was no textual interaction occurring with teacher feedback.

to Rose's (1989) notions of the failure of students being social, not academic. Both John and

Arthur struck me as intelligent individuals, but they were withdrawn from the classes for different reasons and reported as much in interviews. This withdrawal from the class, instructor, and materials is not a problem that can be solved easily and is something teachers constantly face.

More research is needed to explore the role that relationships play in assessment and learning, because my data suggest that there is much more we could uncover by exploring how important relationships are to teaching and learning.

There are many other instances in my data, and in reflecting on the study, where I can see the importance of relationships between teachers and students. Alice often disagreed with feedback she received from Rodney because she valued the feedback her College Writing I teacher had given her over Rodney's. In our interview together, Alice often compared her response from

Rodney to responses she had gotten in her earlier composition course. The relationship she had in that class, and the response she received there, seemed to be geared towards a more traditional educational setting and different theories of writing. Alice reported that her previous class had been a portfolio class, but that papers were graded and could be revised for higher scores. She also made references to grammar and editing as if those were the types of comments that she was most familiar and comfortable with. Alice was not always sure what to do with response from Rodney, but trusted her earlier experiences and teachers over the current ones.

Cory, another student from the study, had a good relationship with Rodney in the class but often did not understand the feedback that Rodney was giving him. During the 2nd interview, Cory read aloud one of Rodney's comments on his book review, which said,

You provide a strong reading of some of Dewey’s most interesting and

complicated ideas. In fact I had not paid attention to Dewey's ideas about the


“gap” between adult mentors and younger learners. This was important to me

because Lev Vygotsky, a Soviet-era psycholinguist whose work only became

available to the West in the early sixties, had empirically established the concept

of the Zone of Proximal Development (ZOPD). ZOPD measures not only what

individual children could do on their own, but what individuals can do with the help

of a more experienced learner.

When reading Rodney's comment, Cory fumbled over how to say “Vygotsky” and “empirical,” a clear indicator that the ideas were ones Cory had never been exposed to. When I asked Cory what the comment meant, he said it was just like he and Rodney were sitting down having a conversation, and he did not elaborate any further. Rodney's comment was positive, but it clearly went over Cory's head. In Cory's interaction with the text, I saw how much trust he placed in

Rodney as a teacher to provide him with the necessary tools to succeed in the class as a writer.

Cory's interactions with formal response were the same as with his journal responses, which he viewed as conversational rather than as supplying him with new information to think about adding or changing ideas in his text. In similar ways, Cory, like Holden, had difficulty transitioning to the different types of response teachers make.

It seems that on one level students can participate in the literate activity of response at the textual level when the response is formative in nature, providing students directions of “where are you trying to go?”, “where are you now?”, and “how can you get there?” (Shepard, 2004, p.

628). But on another level, a student's ability to participate in the literate activity of assessment is influenced by their relationship to the class and teacher. Just as Caswell's (2012; 2014; 2016) and

Edgington’s (2005) research suggests that reading is an emotional activity for teachers, I argue that the same is true for students. Both Holden and Chloe had “emotional” reactions to

assessment, which suggests that students' interpretation of written comments can also be an emotional experience that could either inhibit or enhance their understanding of response.

Research Question 3: How does teacher response and assessment act as a sponsor of

literacy?

Throughout this chapter and its case studies, I have highlighted the students' voices and the interactions that they had with teacher response. In these “assessment events,” we can see that response is capable of eliciting different types of literate activity in which students derive various meanings from the response. The different reactions that students have to response can all be linked to Brandt's (2001) notion of “sponsorship.” Brandt states that sponsors “enable, support, teach, and model, as well as recruit, regulate, suppress, or withhold, literacy” (19). Sponsors are living or inanimate objects that have some effect on an individual's literacy development. Following Brandt's definition of sponsorship, we see that assessment is capable of enacting the various descriptions that Brandt offers. For many of the students in the classes, their reading of response was positive and enabled, supported, taught, modeled, and recruited, but for other students, their reading of response was interpreted as regulating, suppressing, or withholding. Looking at teachers' response was not part of my research design, so none of the teachers' comments were coded, but from what we know of teacher response in general, like how and when to respond (Sommers, 1982; Connors and

Lunsford, 1993; Straub, 1996), we can see how Brandt's definition could apply to different types of comments teachers leave.

O'Neill and Moore (2009) link assessment to Brandt's notion of sponsorship and talk about how certain types of testing function: “impromptu essay exams reward—and, therefore, encourage the development of—writers who are able to develop ideas and draft quickly, and

who, without response or revision, can produce first drafts that are clear, concise, organized, and relatively correct” (38). O'Neill and Moore argue that in doing this “student writers are labeled in static and one-dimensional ways—e.g., basic, developmental, standard, or honors” (38), and they then discuss how writing teachers tend to value the particular experiences of individuals.

In thinking about the ways that assessment can act as a sponsor, it is important for writing teachers to use assessment and response in ways that allow them to communicate with their students to promote their literate development.

In my study, students interacted with response in various ways, and response played different roles for different students. For example, for Kara, Will, Anna, and others, response enabled them to see their writing in different ways. The response supported, taught, and modeled ideas for how to reconceive their papers. Formative assessment acted like a map for some students, allowing them to learn from the assessment to make better arguments in their papers.

For others, like Holden, Chloe, and Alice, response was interpreted as suppressing or withholding literacy. Students who interpreted response as negative, or as

“critique,” as many called it, did not see the value that response held for them because they thought of response as a bad thing, possibly based on Past Experience and More Is Not Better.

Holden and Chloe, and on some level Alice, misinterpreted the response because of emotional reactions they had to teacher commentary, thereby suppressing or regulating certain aspects of the response. My research suggests that teachers' response is capable of fulfilling the various roles that Brandt identifies sponsors as performing. Because our comments can result in different types of “sponsorship,” it is important for teachers to be cautious when responding to students, and vital that they discuss their response practices with students. My research shows how, at times, students' and teachers' ideas of feedback vary greatly. Thinking of Holden's reaction

to my response, it is easy to see how a teacher and a student can interpret written comments differently. Literacy, a highly social and contextually bound activity, is influenced by a variety of factors. To me, my response to Holden was positive, an attempt to engage him to think about his writing differently, but to Holden, the response was suppressing. In linking theories of literacy to assessment, I argue that the promotion of a student's literacy aligns with an “ideological” model (Street, 1984) of literacy and assessment, but if students are not able to make sense of the assessment, or if the assessment prohibits their literate development, then assessment can potentially tap into an “autonomous” model (Street, 1984) of literacy and assessment, leaving the students with an impoverished “literacy event” and “sponsorship.”

Research Question 4: Are there students in the class who do not interact with teachers'

response, and if so, why?

To answer Question 4, I direct the conversation partially back to Question 2 and the roles that relationships play in the interpretative process of students. In answering Question 2,

I determined that some students are able to participate in the literate activity of response/assessment but others are not. The students who were not able to use or interact with response, like John and Arthur, were unable to because of a disconnect from the teachers and their pedagogy. In interviews with John49 and Arthur, as well as through classroom observation, it was clear that they were rejecting the teachers' pedagogy and were withdrawn from the class, which seemed to contribute to their inability to interact with response. The same difficulty in answering Research Question 2 can be extrapolated to Research Question 4, because

49 One day in Francis's class, John was supposed to lead a discussion on a reading Francis had provided him. John did a different reading than the one he was supposed to do, and in class, when Francis tried to ask John about the reading to let him know he had done the wrong one, John became very condescending and told Francis he did what he was supposed to do. Francis let the matter slide, but through the interaction it was clear that John was withdrawn from the class and intended to challenge Francis's authority.

on one level, many students in the classes had interactions with teachers' response, but not all of the students were able to interact with response in the same ways. There were varying degrees of interaction from students with response. For example, by looking at the Response Unclear and

Response Clear categories, we can see that not all students were able to understand what the formative assessment was directing them to do. The Response Unclear comments range from

0% to 25.9%, and the Response Clear comments range from 0% to 28.2%. These differences in percentages indicate that response painted different pictures for different students, and that the interactions students were having with response were individualized. The three tenets of formative assessment, “where are you trying to go?”, “where are you now?”, and “how can you get there?”

(Shepard, 2004, p. 628), should act like a guide, offering students directions for how to improve their writing. Some students in the classes were able to understand the response more clearly than others, which provides evidence that there are varying levels of interaction students are capable of having with response. Also, by thinking about Question 2, we can see the importance of the relationships that students have with the class, teacher, and materials. Students who did not interact as much with response were students who had poor relationships with the class, teacher, or materials. These students are the ones whose failure stems from the social rather than the academic (Rose, 1989).

Like Question 2, Question 4 has no clear-cut answer, because there is little scholarship looking at the social aspects of the classroom between teachers, students, materials, and assessment. In order to fully understand and answer Question 4, we need more research looking at how relationships play into the production of response (teacher side) and the reception of response

(student side) before we can fully understand the interactions students have. Response is a highly contextualized literate activity that occurs between teachers and students, but to fully understand

the way it functions, we must explore the notion of relationships in greater detail to gain a better perspective from which to formulate teaching strategies that help prevent students from falling through the cracks.

Conclusion

Students and teachers need to be able to communicate and interact through various means throughout the classroom. It is through these interactions that relationships are formed, and these relationships ultimately affect the way that students read and understand our comments. Thinking of response as an “assessment event” allows us the opportunity to engage and interact with students on the page, while also generating a new way for response and portfolios to align with social theories of learning and literacy. The students in the case studies presented in this chapter offer a fuller view of the different types of reactions students have to response. At times response is clear, and at other times it can be emotional (Caswell, 2012; 2014; 2015). Response has various functions for each individual, but as can be seen in my data, there are certain observable patterns that students display. Although it is not my purpose to create “models” of students for a one-size-fits-all approach to response, this research indicates that many of the feelings and reactions participants exhibit are similar, which allows us to generalize certain aspects of response to create new ways to interact with students through feedback.

In the cases and data I have presented in this chapter, the different types of interactions students have with teachers' response are represented. As the cases suggest, there are different experiences that shape students' perceptions of response. Understanding how different experiences and emotions help shape the way that students read response can help teachers think of ways to incorporate discussions of response into class time as well as individual conferences.

We should also think about the way we refer to students. For example, do we call students “students,” or

do we refer to them as writers? This is an important distinction to take into consideration when using portfolios with delayed grading. As Rodney indicated on his first day of class, portfolios are for writers. Many of the students in the study were new to portfolio assessment, and only one student who participated in interviews was used to not being graded. It is important for students to be able to assume different identities in the classroom, because how they see themselves, as well as how we refer to them, has a great impact on their ability to interact with and comprehend response.

In the next chapter, I provide a discussion of the research as well as possible areas of further study. Chapter 5 will address the implications of Chapter 4's analysis as a way to help create a model of classroom writing assessment and portfolios that aligns with theories of literacy. Since literacy permeates our everyday lives, it is important that we not only see assessment as a literate activity capable of creating many different types of interactions, but also that we be mindful of the ways assessment promotes and prohibits literacy.


Chapter 5: Conclusion

Introduction

This study focused on answering research questions aimed at gaining the students’ perspective of response in portfolio classrooms where delayed grading was used. The framework and theoretical lens for this dissertation relied heavily on Shirley Brice Heath’s work, using

“literacy events” as a data-driven concept to observe and record students' perceptions of assessment. Little is known by teachers and scholars about how students perceive and interact with response and assessment in writing classrooms (Murphy, 2000; O'Neill and Fife, 2001). This research has helped shed light on some of the dark areas of classroom writing assessment by highlighting the voices of students while also discussing the types of interactions that response elicits. This dissertation began with the following research questions:

(1) Is response able to act as and fulfill the conditions to be considered a “literacy

event” in a portfolio classroom that uses delayed grading?

(2) Are students able to fully participate in the literate activity of assessment that

portfolios and delayed grading promote?

(3) How does teacher response and assessment act as a sponsor of literacy?

(4) Are there students in the class who do not interact with teachers' response, and

if so, why?

To answer these four research questions, I interviewed seventeen students from three classes, with a total of twenty-six interviews. “Being” in the classes, as well as drawing from the


literacy and assessment literature, I developed a coding scheme for the interviews, which allowed me to focus on the students' voices to answer the research questions. In Chapter 4, I presented six case studies, highlighting “assessment events” students experienced with written commentary, to show different interactions students had when reading the comments. In attempting to answer my research questions, it has become clear that delayed grading portfolios and formative assessment have serious pedagogical implications because they align with the assessment and literacy scholarship. Combining these two areas, which otherwise do not make use of each other, provided a new way to think about classroom writing assessment and literacy. In the classes from my study, I observed students' interactions with response that challenged them to think more like writers and less like students. In treating students like writers and engaging them in the literate activity of assessment and response, we are able to tap into “ideological” models of literacy (Street, 1984), creating a usable theory for classroom portfolio assessment. In

Chapter 1, I discuss how portfolios are often used without rhyme or reason, which causes them to become “papers in a folder.” “Literacy events” and the literacy scholarship, however, allow us to reinforce portfolio practice and theory by including students' use of literacy and response. Earlier research on classroom portfolio assessment (Greve, 2015) shows there is no consensual view on how to use portfolios. Many teachers' uses of portfolios tap into older theories of learning and writing. Though it is not my intention to create a model of portfolio assessment to replace all others, or to say there is only one way to use portfolios, through this study and its results we can see how classroom portfolio assessment can align with theories of learning and literacy by engaging and involving students in the process of assessment.


In Chapters 2 and 3, I defined “assessment events”50 and, in Chapter 4, furnished six case studies focusing on “assessment events.” The “assessment events” exemplified in Chapter 4 showed various types of reactions that students had to response. Reading teacher response is a literate practice, and, since literacy is a highly interactive and contextually bound activity (Gee,

2008), it demands that we examine it in natural spaces. Looking at literacy in the spaces where it is created and used helps us better understand how literacy functions for individuals. Literacy is a protean concept, always in fluctuation, and therefore literacy and its usage change from location to location and among participants. Literacy and assessment are not transportable activities (Street, 1984; Maddox, 2014; 2015; Maddox et al., 2014; Greve, Morris, and Huot,

Forthcoming) and need to be connected to the contextual nature of the spaces where they are created and used.

From experiences with portfolios in my Master's program, where I was exposed to portfolios for both classroom writing assessment and programmatic assessment, I felt that portfolios had much to offer for both measurement and teaching. I also developed strong ideas about portfolios through my experiences with their diversified use, not all of which I would define as portfolio assessment. As Murphy (1994a) pointed out, not all portfolios are constructed identically; some are merely papers in a folder. From witnessing “papers in a folder” first-hand and having conducted various research (Greve, 2014; 2015), I felt that there should be a better way to use portfolios for classroom writing assessment. This “felt difficulty” (Young, 1981) helped to guide this project through systematic research, providing me with a new way to pursue classroom portfolio assessment. As mentioned earlier, the coding scheme I developed for this research allows us to look at assessment as a literate activity and to understand assessment through the eyes

50 “Assessment events” fall under the umbrella of “literacy events,” but are specifically the literate activity of reading and interacting with assessment.

of the students. We often forget that with assessment come consequences, ranging from material objects, including scholarships, graduate school, jobs, placement, and discounts for performance, to intangibles, such as self-esteem, identity, and movement through systems. When assessment is performed as a summative activity, students are mostly powerless against assessment's short- and long-lasting effects; whereas when formative assessment is performed, students are encouraged to interact with teacher commentary to help them think about writing differently. Formative assessment is geared towards strengthening students' deficiencies, so they are able to harness the power of assessment to learn and reach a higher level of achievement.

Although the data for this dissertation was collected in the Fall of 2015, the ideas are ones that I have grappled with since my introduction to portfolios. For quite some time, I have thought that portfolios could be used to disrupt the traditional classroom setting and could serve as a tool to get students more focused on learning than on grades. In this chapter, I look back on my study to reflect on the decisions, results, and analysis. I will also talk about some of the methods and methodological challenges this study presented, as well as future research plans for gathering data, incorporating areas that arose through the research and analysis. After revisiting my methods and methodology, I will turn to offering some suggestions for how to make response and portfolio assessment more useful for teachers and students, concluding with a new way to think about using portfolios and response.

Summary of the Results

The analysis of students in Chapter 4 supports, on some level, the idea that assessment is about relationships. Assessment regarded as a series of relationships and conversations complicates our job as teachers, because it is difficult to connect with all students in the class. Considering response as our “response-ability” as teachers to teach and make connections, however,

corroborates how vital it is to involve and highlight the social aspects of assessment. Holden, a student from my class with whom I felt I had a good relationship, failed to connect with my response on certain levels. Arthur and John, from the other two classes, were also two students with whom response did not connect. They withdrew from the classes, and in interviews, they exhibited high percentages of Student Talk, Grade Talk, and Past Experience and viewed assessment without grades as a trick. Because of this preconception about assessment and the class, students like Arthur and John were unable to connect with and utilize response in the same ways that other students were able to. The failures for Arthur and John were social rather than academic (Rose, 1989), preventing them from being able to connect with the class, materials, and teacher and inhibiting their reading of teachers' commentary. Robert was a student who was also disconnected from response. After he received what he perceived as positive remarks on a paper and was referred to as an asset51 in the class by Rodney, however, he started to come around and became more receptive to response. Although it is impossible for teachers to see everything about their students or to be able to connect with all of them, it is necessary to keep an open line of communication with our students and talk with them about response and assessment. Treating students like writers and showing value in their work help to situate them in the class, affecting how receptive they are to formative assessment.

By involving students in assessment practices that encourage interactions, like delayed grading portfolios and formative assessment, we can help our students tap into the power of learning, which likely leads to better writing. When assessment acts as a map letting students know

“where are you trying to go?”, “where are you now?”, and “how can you get there?” (Shepard, 2006), it

51 In an interview, Robert told me he felt like a key person in class whom the professor goes to. On the class day after Robert had missed class, Rodney told him that they had missed him. This interaction with the professor changed a lot of Robert's perception of the class and materials. Prior to Rodney's comment, Robert was considering dropping the class.

helps create better teaching and learning. In (Re)Articulating Writing Assessment for Teaching and

Learning, Brian Huot (2002) calls for a shift in the assessment theory driving portfolios; otherwise, they will “end up being just another tool for organizing student writing within the classroom, a sort of glorified checklist….” (71). Huot warns that if portfolios are just another way to score, grade, and rank writing, they will fail to live up to their potential to transform writing assessment. Portfolios can be papers in a folder, as some students from this study described, but they can also be something more. Will, a student from Rodney's class, said that grades are “standard bullshit.” I am inclined to agree with Will in this remark. Part of my interest in this project can be linked to Will's notion of grades being “bullshit.” I was not a student who always had academic success, and, for many years, I struggled with grades and what they meant.

Writing does not have to be graded to lead to learning (Elbow & Belanoff, 1997), and grades are not an adequate indicator of what students learn in a class. Sadler (1989) notes that grading is a passive activity that has no immediate impact on learning and can be disruptive to the learning process. In some ways, grades are, as Anna put it, “just a representation of just how obedient I am” (1st interview). In classes without grades, where only response is used, conversations about writing and identities of writers begin to emerge because the focus shifts from product to process.

In the classes from my study, grades were replaced with formative assessment, creating a space for students to act and think like writers. Although not all students made the shift from students to writers, the ones who did certainly talked about writing differently throughout the course. Portfolios are an assessment tool like any other, but they should be regarded more significantly as an instructive tool, a vehicle through which conversations and interactions around writing can occur, rather than simply a way to assess writing. With delayed grading portfolios, teachers and students

can have interactions around the writing in progress. By focusing on writing instead of grades, teachers and students can negotiate and create new meaning on the page, seizing the power that portfolios are capable of housing.

In Chapter 4 and throughout this dissertation, I discussed grades and certain types of assessment falling into “autonomous” models of literacy (Street, 1984). Grades are not a transportable system that should be used to talk about ability or what was learned. An A in one class is not the same as an A in another class. We have focused on the wrong thing in teaching and learning by privileging grades over what is learned. I understand that grades are a capital of sorts, which can advocate for students in the form of scholarships, graduate school, and jobs.

Nevertheless, while grades promote some students, they also have the opposite effect on others.

Portfolios are about writers, a notion Rodney instilled in his class on the first day. In Rodney's class, and the other two, grades were not what the discussions were about. Some students in the study worried about grades, and wanted to get good grades, but the comments on papers in the portfolios were formative in nature and indicated nothing about grades. Still, students at times searched the comments trying to make sense of where they stood (grade), but over time, I saw a change in the students who participated in multiple interviews. Students who participated in multiple interviews worried less about grades the more they were exposed to response, and they focused on writing the best papers they could. When grades are deferred until the end of the semester, as they were for the three classes in the study, students' and teachers' conversations around writing focus on the written text and writers' writing. To say that an A in Rodney's,

Francis's, or my class is the same as an A in another class is an underrepresentation of the work, effort, and learning that occurred in the class, because grades do not enter the class until the 16th week and students work as writers, not students.


In delayed grading classes, the focus becomes the writing, and although not all students are able to connect to the class or the ideas being presented, they are encouraged to think and act as writers. Formative assessment, like “literacy events,” is about interaction, whereas grades are a means to an end. In thinking about “ideological” models of literacy (Street, 1984) and assessment, we can see that delayed grading encourages teachers and students to interact in various ways to achieve their own goals. In delayed grading classes, teachers are able to devote more time to teaching and reading student work, shifting the focus away from grades and toward writers' writing. It does not seem likely that we will do away with grades anytime soon, but in classes with delayed grading, the emphasis is on what the student can achieve at the end of a term rather than periodically throughout the semester. Ungraded writing allows students the opportunity to revisit their writing and ideas before a summative evaluation is articulated.

In Chapter 4, the data and analysis that I provided show some evidence of students taking on the identities of writers when encouraged to think differently about their writing through response. Delayed grading portfolios have the potential to subvert certain classroom dynamics.

By providing students a place to think and write (the portfolio), we give them the opportunity to utilize formative assessment from teachers. Shifting our pedagogical focus to writing rather than grades allows teachers more time to teach and students more time to write and think like writers.

Huot (2002) explains that when teachers articulate judgments with grades, students become the objects of assessment, but with delayed grading portfolios, students have the ability to revise and resubmit papers before becoming the object of assessment (73). By redirecting the focus from grades to writing, students learn a valuable lesson: how to assess themselves. All of the students in my study who participated in multiple interviews, except for

one, Cory, had more Self-Assessment52 present from the 1st to the 2nd interviews. Without grades, students are forced to look at their writing differently, which results in more Self-Assessment.

Along with Self-Assessing their writing more, students also focused less on Teacher

Expectations, which indicates that with the feedback students were becoming more self-reliant in regard to their own writing. Although there were exceptions among participants who participated in multiple interviews, many had higher percentages of Writer Talk and lower percentages of

Student Talk in later interviews. The increases and decreases in the categories of students who participated in multiple interviews provide positive evidence of the effects that delayed grading portfolios have on students.

Implications for Researchers and Teachers

This research set out to address the elephant in the room (Murphy, 2000), aiming to gain the students' perspective on assessment. From observing classes as well as talking to students, my research confirms Freedman's (1987) and O'Neill and Fife's (1999) notion that response is more than just what is written on the page. My data, like Freedman's and O'Neill and Fife's, suggest that response intricately connects to a teacher's overall pedagogical approach to a class, and that from the students' perspective, response is deeply rooted in past experiences with grades and written comments used to justify grades. In delayed portfolio classrooms, teachers are presented with an extra obstacle, because many students come from classes where grades are the only indicator of how they are doing. For some students, making the shift to a delayed grading class is difficult, and despite our best efforts, it is one that not all are able to overcome. Teachers should make the point to students that assessment is not a transportable activity and explain to them that their past

52 Words that are capitalized in this chapter, like Self-Assessment, Teacher Expectations, and Writer Talk, are capitalized because they were codes generated from students' interviews and therefore should be looked at as data points from the actual research, results, and analysis from Chapters 3 & 4.

experiences may not always ring true from class to class, because of different teachers' response styles as well as different sorts of expectations for papers in various classes. Talking to students about this can help to make the transition into delayed grading classes easier, as it presents them with a new rhetorical situation. Doing activities with students like asking “what makes a good paper of this type” could go a long way toward making expectations clear for students so that they could rely less on past experiences and focus more on the class and on being writers.

My data and research suggest that students who are able to assume identities of writers in class are more likely to be able to interact with response. Taking on the identity of a writer is not the only way students can interact with response, but from my results it appears that students who identify as writers, or who can make the shift to assuming a writer's identity, are able to engage with response more effectively. For example, participants who exhibited high levels of

Writer Talk and identified as writers did not fall prey to the More Is Not Better category as much as participants who had higher levels of Student Talk or who identified as students rather than writers. In Chapter 4, I discussed the emergence of the More Is Not Better category: for students drawing on past experience, more marks on a paper mean more to fix, whereas writers look at the feedback as an opportunity to make their writing and ideas clearer.

My research also provides evidence that students need to be included in the literate activity of response. “Literacy events” as well as formative assessment are about interactions, so by involving students it is possible that we can help promote our students' writing ability by engaging them through textual comments to make their papers stronger. If we do not engage students and allow for interactions in the assessment practice, then at the very least we are denying them access to an important literate activity. A limitation of my study is that there is not

221 enough research or data available addressing the issues of relationships between students and the teacher, materials, and the class. Writing, like literacy, is a socially embedded activity, so more research is needed to uncover more information about the role relationships play in the creation and reception of response. Like Caswell’s (2012; 2014, 2016) research suggests, teachers response is an area of emotion based on any number of factors. Teacher’s response can be affected by their emotions or relationship with the students, as a student’s interpretation of response can also be an emotional event influenced by multiple factors.

Assessment is a highly individualized activity, which is why it is important for teachers using response to have students "write back" to them in some form. In Rodney's class and my own, students were required to write revision memos addressing all of the teacher's feedback. These memos came after students had received written feedback and before they submitted revisions to the teacher. In Francis's class, where most of the response was delivered verbally in mini-conferences, students were able to talk to Francis about their ideas, their papers, and any written comments he had given them. Creating a space for students to address our comments allows teachers to perform an assessment of their own written commentary. Reading students' memos or talking with them about written response is a formative assessment tool that can strengthen teachers' response practices as well as give students the opportunity to "write back." Shepard (2006) explains that formative assessment is a model of assessment that corresponds to Vygotsky's zone of proximal development (ZPD) and sociocultural learning theory. In this sense, formative assessment acts as a scaffold that requires teachers and students to interact and to have a shared understanding of learning; but for formative assessment to close gaps in learning, teachers need to be aware that the feedback is actually working and understandable for the students. Memos and other spaces that allow for conversation give us a medium for making sure our shared understanding really is shared.

Along with the research needed on the emotional reactions students and teachers have to response, my analysis suggests that we also need to think about teaching response as a genre. Earlier, in Chapter 4, I talked about Jake, a student from Rodney's class, who interpreted the comment "why" as meaning that he needed to expand. My interpretation of "why," later confirmed with Rodney, was that it meant "why is this section of text here?" and that the section should perhaps be cut. Comments like "why" are written for teachers and writers, not students.

When a student sees the comment "why," or other comments like "awk," it is difficult for the student to interpret and therefore to use the comment. I believe a large portion of the comments we write to students are geared more toward teachers and writers, and although I am not suggesting we alter our response practices significantly, I do believe we need to make space for discussions with students about how to read and interpret our response. Denying students access to our written comments not only neglects the rhetorical situation of audience but also leaves students with an impoverished "literacy event." Part of the stipulation of response acting as a "literacy event" hinges upon students being able to interact with written response, so with comments that do not encourage or invite interaction, "literacy events" are impossible.

Encouraging students to take on the role of writers is one step toward helping them reach their potential as writers, but it is also our responsibility to equip them with the tools needed to understand certain aspects of response. We equip students by treating response as a genre, which provides them with a foundation for accessing and using response.

Kara, a student from my class, exhibited zero Response Unclear in her interviews. Kara was an English major and aspiring teacher who was comfortable with academic literacy. Since she and I were from similar "affinity groups" (Gee, 2007), it comes as no surprise that she was comfortable with teacher commentary. Other students not versed in academic literacy reported higher percentages of Response Unclear in their interviews. Kara's understanding of the written commentary provided her with a stronger basis for judging and interacting with the teacher's comments.

Kara was not the only student to exhibit zero Response Unclear. All five of the students from Francis's class exhibited zero Response Unclear as well. One possible explanation is that most of Francis's feedback was verbal, delivered to students in mini-conferences in class or in conferences outside of class. Since Francis talked to his students about their writing, those students were able to clear up any misunderstandings they might have had with his feedback. Both Rodney and I held conferences for our students, but with less frequency than Francis, and the majority of our feedback was written. For Rodney and me, students had the ability to "write back" to our written comments; for Francis, the "writing back" was verbal conversation. Since teachers' written comments can sometimes be confusing, it is important that we find ways to allow conversations and questions to emerge.

By treating response as a genre, discussing how response works, and explaining our personal response styles, we can help more students in the class use feedback as something other than a means to a better grade. Along with thinking about response as a genre, it is important that teachers point out to students that the style of response can change according to the type, or genre, of paper they are writing. Holden, for example, did not see the difference between journal writing and formal assignments, had an emotional reaction to the response, and interpreted my changing response style as inconsistent. The inconsistency that Holden perceived in the feedback fed into his negative feelings about the response. To some degree Cory, from Rodney's class, also did not understand the difference between responses to different papers. For Cory, everything Rodney wrote on his paper was the two of them having a "conversation." Because Cory viewed all response as conversation, he was prevented from comprehending and using what the response was asking of him. My research indicates that students are more likely to situate themselves as writers when they can connect to the teacher, class, or materials, but even writers sometimes need assistance. Reading and writing are individualized experiences, meaning that we cannot all interpret and use a text in the same ways. Our reading and interpretive processes are affected by many different factors, so it is important that we make response and assessment as interactive as possible to eliminate some of the guesswork that leads to misinterpretations or misunderstandings.

Future Research

From this inquiry into the students' perspective on assessment and response, I believe I have unlocked some doors, but even with all the doors I have opened, areas of mystery remain.

One area I would like to explore in future research is how relationships and emotions play into the creation and reception of response. My study was not equipped to address the roles relationships and emotions play in assessment because, prior to this research, I had not expected them to have such a large role in how students read and interpret response. From my study and from talking with participants, we can see how relationships and emotions affect the ways that response is read. I believe that by focusing more on these two areas we will be able to build a better picture of response and how students perceive it. Most students in my study who were not able to "situate" (Gee, 2007) themselves as writers, who were withdrawn from the class, materials, or teacher, or who had emotional reactions to the response had more difficulty thinking about and discussing what response from the teachers meant. Students who could assume the role of writers in the classes were better able to make sense of teachers' feedback and to think differently about their writing. Students' failure to connect with response was a social failure rather than an academic one (Rose, 1989). From my perspective, many of these students were capable of being writers but were reluctant or resistant to assuming that role. By looking more closely at relationships and emotions, we will be able to shed light on response practices and help build a more robust notion of response to writing.

Other future research includes a more systematic approach to interviewing students. What I mean is that I would like to interview students more frequently, at timed intervals. In the classes that I observed, the students were on schedules but were also allowed the freedom to submit work when they needed to. Since I was interested in talking with students about the response they received, the times that I spoke with them were often inconsistent, either because not everyone was on the same schedule or because personal schedules limited or prohibited meeting times. With a more consistent interviewing pattern (for example, if a paper were turned in and handed back every two to three weeks), I could have set up more regular interviews to pinpoint with greater accuracy when and whether response had an effect on students' outlooks on writing. Another problem was that the classes were all front-loaded with readings, which resulted in a large amount of the writing for the class coming at the end of the semester. This schedule also limited the number of students I could interview because of their time constraints and the busyness of the end of the term. Having more interviews with students at different times might shed more light on what effect response has on students' writing.

Another thing to consider with this research is that the participants were all individuals, and even though some of their reactions to response can be generalized, I caution against using the results to create profiles of students in writing classrooms. If we remember Holden, a student from my class who had a different view of the response than I did, we can start to understand the complexities of response. We might assume we know and understand our students, but oftentimes what we know is only the tip of the iceberg, which is why I suggest we make time to talk with students about response and their papers to help acclimate them to the culture of being writers. Treating and addressing students as writers does not necessarily mean that students will feel comfortable with their newly appointed roles, and some will certainly interpret being called writers as some sort of trick. The portfolio and response can help mitigate student anxiety by allowing students to focus on the writing and the feedback, which should make the transition to becoming writers easier. In interviews, some of the students had difficulty seeing themselves as writers, but in a delayed grading portfolio class, students' seeing themselves as writers is not only encouraged but essential to making the most of response.

Conclusion

This study has answered its research questions: (1) Is response able to act as and fulfill the conditions to be considered a "literacy event" in a portfolio classroom that uses delayed grading? (2) Are students able to fully participate in the literate activity of assessment that portfolios and delayed grading promote? (3) How do teacher response and assessment act as a sponsor of literacy? (4) Are there students in the class who do not interact with teacher response, and if so, why? In answering these questions, this research suggests that delayed grading portfolios and formative assessment can fit under the umbrella of a "literacy event." By considering response as a "literacy event," we allow students the opportunity to participate in and understand the literate activity of assessment. One of the many possible implications of this study is that it helps create a usable theory for portfolio assessment and response, one which distinguishes portfolios from "papers in a folder." My contribution to the field with this research is in highlighting the student's voice and looking at assessment and response as a literate activity.

In addressing the elephant in the room, the students, my work has helped shed some light on the neglected area of assessment and response.

By emphasizing writing in the classroom and by addressing students as writers rather than as students, we can help redefine what writing classes are and where the power in them resides. Although we can never fully disrupt power dynamics in the classroom, delayed grading portfolios and formative assessment allow us the opportunity to engage students as writers, thereby taking them to a level they would not have had access to if we treated them as students. The emphasis in college writing classrooms (or any writing classroom) should be on the writing and on helping our students think about their writing and ideas differently. In using portfolios in ways that align with contemporary social theories of learning and literacy, we are able to help students become better writers. Even though we will never be able to abolish grades, by using delayed grading portfolios we can delay judgments until later in the semester, which creates space for writers to emerge. Considering students' reactions to response helps to create new conversations and possible areas of research that can develop a fuller view of response in the writing classroom.

In an edited collection on response, Chris Anson (1989) tells the story of Melville's "Bartleby, the Scrivener." Anson's reading of Bartleby focuses on how Bartleby shuts himself off from humanity, turning his back on everyone around him. Prior to his demise, Bartleby had worked in a dead letter office, where his job was "opening, reading, and preparing to burn hundreds of undeliverable letters—'errands of life' destined to die in the flames without ever reaching their intended audiences" (1). As Anson points out, these letters were ones for which there could be no response; therefore, Bartleby "becomes a man who himself loses the will to respond" (1). This is a particularly insightful look at Bartleby's position, and Anson reminds us that "the process of response… is so fundamental to human interaction that when it is short-circuited, whether by accident or design, the result can hardly be interpreted as anything but a loss of humanity" (1). Response, in other words, is a socially situated activity, rooted in specific contexts and meant to be a form of communication. In thinking about the activity of response between teachers and students, we can see the important role that response plays in the classroom. In keeping with theories of literacy and learning, it is important that we take into consideration the power that our response has to sponsor various types of literacy (Brandt, 2001). Response is an integral approach to teaching and a literate activity that hinges upon interaction, which is why it is important that we consider the student's voice in refining the types of assessment we use. By taking the student's perspective and literacy into consideration, we are able to develop a fuller narrative.


Appendix A: Edited Collections on Portfolios for Greve's 2014 Cs Presentation

Belanoff, P., & Dickson, M. (1991). Portfolios: Process and product. Portsmouth, NH: Boynton/Cook Publishers.

Black, L. (1994). New directions in portfolio assessment: Reflective practice, critical theory, and large-scale scoring. Portsmouth, NH: Boynton/Cook-Heinemann.

Calfee, R. C., & Perfumo, P. (1996). Writing portfolios in the classroom: Policy and practice, promise and peril. Mahwah, NJ: L. Erlbaum Associates.

Gill, K. (1993). Process and portfolios in writing instruction. Urbana, IL: National Council of Teachers of English.

Graves, D. H., & Sunstein, B. S. (1992). Portfolio portraits. Portsmouth, NH; Toronto, Canada: Heinemann; Irwin Pub.

Sunstein, B. S., & Lovell, J. H. (2000). The portfolio standard: How students can show us what they know and are able to do. Portsmouth, NH: Heinemann.

Yancey, K. B. (1992). Portfolios in the writing classroom: An introduction. Urbana, IL: National Council of Teachers of English.

Yancey, K. B., & Weiser, I. (1997). Situating portfolios: Four perspectives. Logan, UT: Utah State University Press.


Appendix B: Rodney’s Portfolio Facts Sheet

Portfolio Facts and Other Info

• Grading on Your Real Achievement: Gives students an opportunity to work on their writing for an entire semester before receiving a grade. No matter how much a student learns or improves over a semester, in traditional grading a teacher has to count the not-so-good paper done in January or early February. As a teacher I don’t care what you can do in January or February. I want to know what you can do in March or April, and I want your grade to reflect what you have learned in the class. It makes no sense to give you a grade until you have had the chance to learn to write better – the overall objective of the course.

• Paying Attention to Writing, not Grading: Research shows that more than 50% of teacher commentary on student papers justifies the grade given. Without a focus on grades, teacher commentary focuses on student writing and learning to write. Teacher/student communication is about writing, not grades. Good writers do not (except in school) write for grades. Portfolios help to create an environment in which writers can concentrate on writing.

• Revision: Research demonstrates that students who learn to revise (not just proofread and edit) write better and get better grades. Revision requires time for students to have their writing read and responded to and for students to revise in response to the reading(s) their writing receives. Portfolios encourage more time and rationale for learning to revise and revising one’s own writing.

• Advantages: Portfolio grades are typically higher than those students receive in traditional grading. Results indicate that students receive approximately one half grade higher when teachers use portfolios. Students have all semester to revise their work in response to teacher commentary before being graded. Imagine being able to take and re-take tests until you received the grade you wanted. The student can revise as often as she wants until she is happy with her writing.

• Disadvantages: Students miss grades and think they don’t know how they are doing unless they have received a grade on something. This disadvantage can become an advantage if students learn to chart their progress without grades. Ask your teacher if you have any questions about how you are doing. Check the syllabus and the course requirements against your progress in the course – this is how you’re doing.


Appendix C: Recruitment Script for Portfolio Survey

Initial Recruitment Script

Greetings,

As writing instructors, you are invited to participate in our survey, which seeks to reveal current attitudes and trends in classroom portfolio assessment practices. To participate, we ask that you be using portfolios for your classroom assessment in any writing class. For this study, you will be asked to complete a survey hosted on Survey Monkey where you may provide information about how and why you are using portfolios in your classes. The survey should take no more than 15 minutes, and you may skip any question, or leave the survey at any time for any reason. To participate, follow the link below to the IRB-approved (KSU IRB # here) survey. We thank you for considering participating in this survey, and do not hesitate to contact any of us should any concerns arise about this research.

Sincerely,
Brian Huot [email protected]
Curt Greve [email protected]

Reminder Script

Greetings,

As a reminder, we would like a few more participants to complete our survey on classroom portfolio practices. Given a keen interest expressed by many in our field for the kind of data we hope to aggregate, we hope that you (or your writing studies colleagues) take 15 or so minutes from your day to complete our survey. Below you'll find our original recruitment script with a link to the survey.

Regards,
Brian Huot [email protected]
Curt Greve [email protected]


Appendix D: Portfolio Survey Consent Form

Informed Consent to Participate in a Research Study Survey

Study Title: Survey of Classroom Portfolio Assessment Practices

Principal Investigator: Dr. Brian Huot

You are being invited to participate in a research study. This consent form will provide you with information on the research project, what you will need to do, and the associated risks and benefits of the research. Your participation is voluntary. Please read this form carefully. It is important that you ask questions and fully understand the research in order to make an informed decision.

Purpose: The purpose of the proposed research is to survey the attitudes of college faculty on classroom portfolio assessment practices. This research attempts to look at the ways portfolios are being used in various situations.

Procedures: This study is being carried out over two distinct phases, with participants first self-selecting to answer a survey. From that survey we will identify a smaller group of respondents who will self-select to submit further responses. The voluntary, self-selected responses may take the form of informal interviews or select document collection (syllabi, rubrics, etc.) to elicit a clearer sense of attitudes toward classroom portfolio assessment. The total time that participants can expect, if they are respondents for both phases, ought to be no greater than one hour.

Audio/Video Recording and Document Collection: If interviews occur, they will be audio recorded so that we may code the interviews into meaningful categories that compare and contrast with the survey data. The same goes for document collection and the coding process. With document collection, participants' names and affiliations will be stripped in order to keep the data anonymous. The raw interviews will be recorded as digital files on a removable data storage disk, and the documents collected will also be in electronic format. Both types of data will be stored in a locked desk drawer in Dr. Huot's office on Kent State University's main campus. When finished with the project, Dr. Huot will erase all the electronic files associated with this study.

Benefits

Individuals will not necessarily benefit directly from this study. However, scholars teaching with portfolios will all benefit from a contemporary sense of the prevalent attitudes and usage of classroom portfolio assessment practices.

Risks and Discomforts


There are no anticipated risks beyond those encountered in everyday life. However, if it is believed that a particular question is too personal, answers can be omitted.

Privacy and Confidentiality: All materials, digital audio, documents, and any other personal information gathered during this study will be kept in a locked file cabinet in my campus office. All materials that could be used to identify any individual participating in this study will be destroyed after the completion of this project. Any participants identified in any written form this project takes will be identified by pseudonym. Your study-related information will be kept confidential within the limits of the law. Any identifying information will be kept in a secure location, and only the researcher will have access to the data. Research participants will not be identified in any publication or presentation of research results; only aggregate data will be used.

Voluntary Participation: Taking part in this research study is entirely up to you. You may choose not to participate, or you may discontinue your participation at any time without penalty or loss of benefits to which you are otherwise entitled. You will be informed of any new, relevant information that may affect your health, welfare, or willingness to continue your study participation.

Contact Information: If you have any questions or concerns about this research, you may contact Dr. Brian Huot at 330-672-1749. This project has been approved by the Kent State University Institutional Review Board. If you have any questions about your rights as a research participant or complaints about the research, you may call the IRB at 330-672-2704.

Consent Statement: By opening the survey the respondent acknowledges the following: I have read this consent form and have had the opportunity to have my questions answered to my satisfaction. I voluntarily agree to participate in this study. I understand that a copy of this consent will be provided to me for future reference upon request. If you are 18 years of age or older, understand the statements above, and freely consent to participate in the study, click the survey button to begin.


Appendix E: Classroom Portfolio Assessment Survey

You are being invited to participate in a research study. This consent form provides you with information on the research project, what you will need to do, and the associated risks and benefits. Your participation is voluntary. Please read this form carefully. It is important that you ask questions and fully understand the research in order to make an informed decision.

Purpose: The purpose of the proposed research is to survey the attitudes of college faculty on classroom portfolio assessment practices. This research attempts to look at the ways portfolios are being used in various situations.

Procedures: This study is being carried out through a survey looking at classroom portfolio assessment practices. The total time for completing the survey should be approximately 10-15 minutes.

Benefits: Individuals will not necessarily benefit directly from this study. However, the field will benefit from a contemporary sense of the prevalent attitudes and usage of classroom portfolio assessment practices.

Risks and Discomforts: There are no anticipated risks beyond those encountered in everyday life. However, if it is believed that a particular question is too personal, answers can be omitted.

Privacy and Confidentiality: All materials, digital audio, documents, and any other personal information gathered during this study will be kept in a locked file cabinet in my campus office. All materials that could be used to identify any individual participating in this study will be destroyed after the completion of this project. Any participants identified in any written form this project takes will be identified by pseudonym. Your study-related information will be kept confidential within the limits of the law. Any identifying information will be kept in a secure location, and only the researcher will have access to the data. Research participants will not be identified in any publication or presentation of research results; only aggregate data will be used.

Voluntary Participation: Taking part in this research study is entirely up to you. You may choose not to participate, or you may discontinue your participation at any time without penalty or loss of benefits to which you are otherwise entitled. You will be informed of any new, relevant information that may affect your health, welfare, or willingness to continue your study participation.

Contact Information: This project has been approved by the Kent State University Institutional Review Board. If you have any questions about your rights as a research participant or complaints about the research, you may call the IRB at 330-672-2704.

Consent Statement: By opening the survey the respondent acknowledges the following: I have read this consent form and have had the opportunity to have my questions answered to my satisfaction. I voluntarily agree to participate in this study. I understand that a copy of this consent will be provided to me for future reference upon request. If you are 18 years of age or older, teach with portfolios, understand the statements above, and freely consent to participate in the study, click the survey button to begin.

☐ Agree (1)
☐ Disagree (2)

If No Is Selected, Then Skip To End of Survey

Q1 What state do you teach in?

☐ Alabama (1) ☐ Alaska (2) ☐ Arizona (3) ☐ Arkansas (4) ☐ California (5) ☐ Colorado (6) ☐ Connecticut (7) ☐ Delaware (8) ☐ Florida (9) ☐ Georgia (10) ☐ Hawaii (11) ☐ Illinois (12) ☐ Idaho (13) ☐ Indiana (14) ☐ Iowa (15) ☐ Kansas (16) ☐ Kentucky (17) ☐ Louisiana (18) ☐ Maine (19) ☐ Maryland (20) ☐ Massachusetts (21) ☐ Michigan (22) ☐ Minnesota (23) ☐ Mississippi (24) ☐ Missouri (25) ☐ Montana (26) ☐ Nebraska (27) ☐ Nevada (28) ☐ New Hampshire (29) ☐ New Jersey (30) ☐ New Mexico (31) ☐ New York (32) ☐ North Carolina (33) ☐ North Dakota (34) ☐ Ohio (35) ☐ Oklahoma (36) ☐ Oregon (37) ☐ Pennsylvania (38) ☐ Rhode Island (39) ☐ South Carolina (40)


☐ South Dakota (41) ☐ Tennessee (42) ☐ Texas (43) ☐ Utah (44) ☐ Vermont (45) ☐ Virginia (46) ☐ Washington (47) ☐ West Virginia (48) ☐ Wisconsin (49) ☐ Wyoming (50) ☐ Other not listed (51)

Q5 Check the boxes that best describe your institution.

☐ Public (1)
☐ Private (2)
☐ University (3)
☐ Community College (4)
☐ Grants Doctorates (5)
☐ For Profit Institution (6)

Q6 Please select the box(es) that best describe you.

☐ Tenure Track (1)
☐ Non-tenure Track (2)
☐ Adjunct (3)
☐ Teaching Assistant/Graduate Assistant (4)
☐ Writing Program Administrator or Program Decision Maker (5)

Q2 Please explain your educational background and all degree(s) you currently hold or are working on. If you are working on a degree, please indicate the degree you are working on and mark it as In Progress.

                                                  Rhetoric and Composition (1) | Literature (2) | Linguistics (3) | In Progress (4) | Other not listed (5)
PhD (1)                                           ☐ | ☐ | ☐ | ☐ | ☐
MA (2)                                            ☐ | ☐ | ☐ | ☐ | ☐
BA (3)                                            ☐ | ☐ | ☐ | ☐ | ☐
If degree not mentioned please explain here (4)   ☐ | ☐ | ☐ | ☐ | ☐


Q3 How long have you been teaching? ______

Q4 How long have you been teaching with portfolios? ______

Q7 Are you required to use portfolios by your institution?
☐ Yes, I am required, but would use portfolios regardless (1)
☐ Yes (2)
☐ No (3)
☐ Other (4) ______
If No Is Selected, Then Skip To: Why do you teach with portfolios?

Q8 If you are required to use portfolios, who decides the way that they are designed and graded?
☐ Yourself (1)
☐ Administration (2)
☐ Collaboratively/Committee Designed (3)
☐ Other (4) ______

Q9 Why do you teach with portfolios?

Q10 Please explain what types of classes you use portfolios for and the reasons why.

Q11 Please explain the types of classes you do not use portfolios for and the reasons why you do not.

Q12 What types of portfolios do you usually collect?
☐ Paper based portfolio (p-portfolio) (1)
☐ Electronic portfolio (e-portfolio) (2)
If Electronic portfolio (e-portfolio) Is Selected, Then Skip To: If you collect e-portfolios please explain the software that students use to compose in.

Q13 If you collect e-portfolios please explain the software that students use to compose in.

Q14 Do you use rubrics to assess the writing in portfolios?
☐ Yes (1)
☐ No (2)
☐ Other (3) ______
If Yes Is Selected, Then Skip To: If you use a rubric who creates the rubric?


Q15 If you use a rubric, who creates the rubric?
☐ Do not use a rubric (1)
☐ Teacher created (2)
☐ Department created (3)
☐ Student and Teacher together created (4)
☐ Collaborative/Committee designed (5)
☐ Other (6) ______

Q16 What types of documents do you ask students to submit with their portfolios?

Q17 Do students have a choice in what to include/exclude in their portfolios?
☐ Yes (1)
☐ No (2)
☐ Other (3) ______

Q19 What is the average weight you assign to portfolios in classes you teach? ______

Q20 How is a score or grade arrived at for portfolio assessment in your class(es)? Do you score each piece individually, or do you look at the portfolio as a whole, etc.?

Q22 What type of response do you use with the portfolio before it is collected at the end of the semester?
☐ Formative response (no letter grade attached) (1)
☐ Summative response (letter grade attached) (2)
☐ Other (3) ______

Q23 When do students receive feedback on the portfolios and who provides that feedback?

Q24 Do you allow students to revise pieces for the portfolio? If so, what type of revision policy do you have?

Q25 How would you say you mostly explain portfolios?
☐ Verbally (1)
☐ In Syllabus (2)
☐ A Handout (3)
☐ Sample Portfolios (4)
☐ Other (5) ______


Q26 How often would you say you talk about portfolios with your class?
☐ Frequently (1)
☐ Every now and then (2)
☐ Hardly ever (3)
☐ Only when someone asks (4)
☐ Other (5) ______

Q27 How satisfied are you with using portfolios in your class?
☐ Very Satisfied (1)
☐ Somewhat satisfied (2)
☐ Neither satisfied nor unsatisfied (3)
☐ Somewhat unsatisfied (4)
☐ Very unsatisfied (5)

Q28 Have your grading practices changed with the use of portfolios?
☐ Yes (1)
☐ No (2)
☐ Other (3) ______
If No Is Selected, Then Skip To: Based on your answers there is a possibility that we may wish to conduct a brief interview...

Q29 Please explain how your grading practices have changed with the use of portfolios. For example, do you assign higher grades, allow for revision, assess the work as a whole rather than as individual pieces, etc.?

Q30 Based on your answers there is a possibility that we may wish to conduct a brief interview or ask for select classroom documents (syllabi, rubrics, etc.). If you are willing to be contacted for further research please provide us with your contact information in the form of your name and email address where you can be reached.


Appendix F: Recruitment Script for Interviews

AUDIOTAPE/VIDEO CONSENT FORM

Classroom Portfolio Writing Assessment as a Literacy Event
Primary Investigator: Brian Huot
Co-Investigator: Curt Greve

You are invited to participate in this research regarding portfolio assessment. We are conducting this interview as part of dissertation research. We are interested in finding out students' perceptions of portfolio assessment and response to writing.

Your participation in this study will require a brief interview. This should take approximately 15-30 minutes of your time. It is possible that upon completion of this interview we may wish to interview you again later in the semester. Your participation will be anonymous, and you may withdraw at any time if you feel that you no longer wish to participate. You will not be paid or receive extra credit for any classes for being in this study. We believe this interview does not involve any risk to you. Although you may find it interesting to participate in this study, there will be no direct benefit to you from your participation.

Your study-related information will be kept confidential within the limits of the law. Any identifying information will be kept in a secure location, and only the researchers will have access to the data. All participants will be assigned pseudonyms to conceal their identities. Research participants will not be identified in any publication or presentation of research results; only aggregate data will be used.

You do not have to be in this study if you do not want to be. We will be happy to answer any questions you have about this study. If you have further questions about this project or if you have a research-related problem, you may contact us: Curt Greve at [email protected], or Brian Huot at (330) 672-2124 or [email protected]. If you have any questions about your rights as a research participant you may contact the Kent State University Institutional Review Board (IRB) at 330-672-2704. The IRB is a group of people who review research studies to protect the rights and welfare of research participants.

I agree to participate in this study about portfolio assessment. For this project I agree that Curt Greve and/or Brian Huot may interview me and use the interview for the purpose of data analysis.

______Signature Date ______Printed name email address


Curt Greve and Brian Huot may / may not (circle one) use the audio-tapes/video tapes made of me. The original tapes or copies may be used for:

____this research project _____publication _____presentation at professional meetings

______Signature Date

I have been told that I have the right to listen to the recording of the interview before it is used. I have decided that I: ____want to listen to the recording ____do not want to listen to the recording

Sign now below if you do not want to listen to the recording. If you want to listen to the recording, you will be asked to sign after listening to it.

______Signature Date


Appendix G: Field Note Consent Form

CLASSROOM OBSERVATION FIELD NOTE CONSENT FORM

Classroom Portfolio Writing Assessment as a Literacy Event
Primary Investigator: Brian Huot
Co-Investigator: Curt Greve

You are invited to participate in a study about portfolio assessment. We are conducting this research as part of dissertation research. We are interested in finding out students' perceptions of portfolio assessment and response to writing.

Your participation in this study will require that we take field notes of classes. Your participation will be anonymous and you may withdraw at any time if you feel that you no longer wish to participate. You will not be paid or receive extra credit for any classes for being in this study. We believe this classroom observation and taking field notes does not involve any risk to you. Although you may find it interesting to participate in this study, there will be no direct benefit to you from your participation.

Your study-related information will be kept confidential within the limits of the law. Any identifying information will be kept in a secure location, and only the researchers will have access to the data. All participants will be assigned pseudonyms to conceal their identities. Research participants will not be identified in any publication or presentation of research results; only aggregate data will be used.

You do not have to be in this study if you do not want to be. We will be happy to answer any questions you have about this study. If you have further questions about this project or if you have a research-related problem, you may contact us: Curt Greve at [email protected], or Brian Huot at (330) 672-2124 or [email protected]. If you have any questions about your rights as a research participant you may contact the Kent State University Institutional Review Board (IRB) at 330-672-2704. The IRB is a group of people who review research studies to protect the rights and welfare of research participants.

I agree to participate in a semester-long study about portfolio assessment. For this project, field notes of classes will be taken for the purposes of data analysis. I agree that Curt Greve or Brian Huot may take field notes in this class. ______ Signature Date

______Printed name email address


Appendix H: Document Collection Consent Form

DOCUMENT COLLECTION CONSENT FORM

Classroom Portfolio Writing Assessment as a Literacy Event
Primary Investigator: Brian Huot
Co-Investigator: Curt Greve

You are invited to participate in document collection regarding portfolio assessment. We are conducting this research as part of dissertation research. We are interested in finding out students' perceptions of portfolio assessment and response to writing. Your participation in this study will require that we collect your written work (journals, blogs, assignments, drafts, teacher-commented papers, peer-commented papers, the final portfolio, and any other writing that is done for the class). Your participation will be anonymous, and you may withdraw at any time if you feel that you no longer wish to participate. You will not be paid or receive extra credit for any classes for being in this study. We believe this document collection does not involve any risk to you. Although you may find it interesting to participate in this study, there will be no direct benefit to you from your participation.

Your study-related information will be kept confidential within the limits of the law. Any identifying information will be kept in a secure location, and only the researchers will have access to the data. All participants will be assigned pseudonyms to conceal their identities. Research participants will not be identified in any publication or presentation of research results; only aggregate data will be used. You do not have to be in this study if you do not want to be. We will be happy to answer any questions you have about this study. If you have further questions about this project or if you have a research-related problem, you may contact us: Curt Greve at [email protected], or Brian Huot at (330) 672-2124 or [email protected]. If you have any questions about your rights as a research participant you may contact the Kent State University Institutional Review Board (IRB) at 330-672-2704. The IRB is a group of people who review research studies to protect the rights and welfare of research participants.

By signing this form you are allowing the researchers, Curt Greve and Brian Huot, to collect written documents from the class.

______Signature Date


Appendix I: Classroom Observation Consent Form

CLASSROOM OBSERVATION AUDIOTAPE/VIDEO CONSENT FORM

Classroom Portfolio Writing Assessment as a Literacy Event
Primary Investigator: Brian Huot
Co-Investigator: Curt Greve

You are invited to participate in a study about portfolio assessment. We are conducting this research as part of dissertation research. We are interested in finding out students' perceptions of portfolio assessment and response to writing.

Your participation in this study will require that we audio/video tape classes. Your participation will be anonymous and you may withdraw at any time if you feel that you no longer wish to participate. You will not be paid or receive extra credit for any classes for being in this study. We believe this classroom observation and recording does not involve any risk to you. Although you may find it interesting to participate in this study, there will be no direct benefit to you from your participation.

Your study-related information will be kept confidential within the limits of the law. Any identifying information will be kept in a secure location, and only the researchers will have access to the data. All participants will be assigned pseudonyms to conceal their identities. Research participants will not be identified in any publication or presentation of research results; only aggregate data will be used.

You do not have to be in this study if you do not want to be. We will be happy to answer any questions you have about this study. If you have further questions about this project or if you have a research-related problem, you may contact us: Curt Greve at [email protected], or Brian Huot at (330) 672-2124 or [email protected]. If you have any questions about your rights as a research participant you may contact the Kent State University Institutional Review Board (IRB) at 330-672-2704. The IRB is a group of people who review research studies to protect the rights and welfare of research participants.

I agree to participate in a semester-long study about portfolio assessment. For this project, classes will be audio/video recorded for the purposes of data analysis. I agree that Curt Greve or Brian Huot may audio-tape/video-tape this class. ______ Signature Date ______ Printed name Email address


I have been told that I have the right to listen to the recording before it is used. I have decided that I: ____want to listen to the recording ____do not want to listen to the recording

Sign now below if you do not want to listen to the recording. If you want to listen to the recording, you will be asked to sign after listening to it.

Curt Greve may / may not (circle one) use the audio-tapes/video tapes made of me. The original tapes or copies may be used for:

____this research project _____publication _____presentation at professional meetings

______Signature Date


References

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Pearson.

Andrews, D., Nonnecke, B., & Preece, J. (2003). Electronic survey methodology: A case study in reaching hard-to-involve Internet users. International Journal of Human-Computer Interaction, 16(2), 185–210.

Anson, C. M. (Ed.). (1989). Writing and response: Theory, practice and research. Urbana, IL: NCTE.

Anson, C. M. (1997). In our own voices: Using recorded commentary to respond to writing. New Directions for Teaching and Learning, (69), 105–113.

Anson, C. M. (2000). Response and the social construction of error. Assessing Writing, 7(1), 5–21.

Anson, C., & Brown, R. L. (1991). Large-scale portfolio assessment: Ideological sensitivity and institutional change. In P. Belanoff & M. Dickson (Eds.), Portfolios: Process and product (pp. 248–269). Portsmouth, NH: Boynton/Cook Publishers.

Apple, M. (1999). Official knowledge: Democratic education in a conservative age (2nd ed.). Routledge.

Bardine, B. A., Bardine, M. S., & Deegan, E. F. (2000). Beyond the red pen: Clarifying our role in the response process. The English Journal, (1), 94.


Bardine, B. A. (1996). Using pre-instruction questionnaires to improve the small group writing class [Microform]. Washington, DC: U.S. Dept. of Education, Office of Educational Research and Improvement, Educational Resources Information Center.

Bardine, B. A. (1999). Students' perceptions of written teacher comments: What do they say about how we respond to them? High School Journal, 82(4), 239–247.

Bardine, B. A. (2004). Teacher and student attitudes about response to writing: An examination of two high school classrooms (Doctoral dissertation). Kent State University, Kent, OH.

Bardine, B. (2006). Reconstructing how we respond to our students' writing: An exploratory study. Ohio Journal of English Language Arts, 46(2), 27–37.

Barton, D. (1991). The social nature of writing. In D. Barton & R. Ivanic (Eds.), Writing in the community (pp. 1–13). Newbury Park; London: SAGE Publications, Inc.

Barton, D., & Ivanic, R. (Eds.). (1991). Writing in the community. Newbury Park; London: SAGE Publications, Inc.

Barton, D., Ivanic, R., Appleby, Y., Hodge, R., & Tusting, K. (2007). Literacy, lives and learning (1st ed.). London; New York: Routledge.

Barton, E. (2000). More methodological matters: Against negative argumentation. College Composition and Communication, 51(3), 399–416.

Barton, E., Halter, E., McGee, N., & McNeilly, L. (1998). The awkward problem of awkward sentences. Written Communication, 15(1), 69–98.


Baumann, J. F., & Bason, J. J. (2004). Survey research. In N. K. Duke & M. H. Mallette (Eds.), Literacy research methodologies (pp. 287–309). New York: Guilford.

Behizadeh, N., & Engelhard, G. (2011). Historical view of the influences of measurement and writing theories on the practice of writing assessment in the United States. Assessing Writing, 16(3), 189–211.

Belanoff, P. (1991). The myths of assessment. Journal of Basic Writing, 10(1), 54–66.

Belanoff, P. (1994). Portfolios and literacy: Why? In L. Black et al. (Eds.), New directions in portfolio assessment: Reflective practice, critical theory, and large-scale scoring (pp. 13–24). Portsmouth, NH: Boynton/Cook-Heinemann.

Belanoff, P., & Dickson, M. (1991). Portfolios: Process and product. Portsmouth, NH: Boynton/Cook Publishers.

Belanoff, P., & Elbow, P. (1986a). Using portfolios to increase collaboration and community in a writing program. WPA: Writing Program Administration, 9(3), 27–40.

Belanoff, P., & Elbow, P. (1986b). State University of New York at Stony Brook portfolio-based evaluation program. In P. Connolly & T. Vilardi (Eds.), New methods in college writing programs: Theories in practice (pp. 95–105). New York: The Modern Language Association of America.

Berlin, J. (1994). The subversion of the portfolio. In L. Black et al. (Eds.), New directions in portfolio assessment: Reflective practice, critical theory, and large-scale scoring (pp. 56–68). Portsmouth, NH: Boynton/Cook-Heinemann.

Birks, M., Chapman, Y., & Francis, K. (2008). Memoing in qualitative research: Probing data and processes. Journal of Research in Nursing, 13(1), 68–75.


Birks, M., & Mills, J. (2015). Grounded theory: A practical guide (2nd ed.). Los Angeles: SAGE Publications Ltd.

Bishop, W. (1991). Going up the creek without a canoe: Using portfolios to train new teachers of college writing. In P. Belanoff & M. Dickson (Eds.), Portfolios: Process and product (pp. 215–228). Portsmouth, NH: Boynton/Cook.

Black, L. (1994). New directions in portfolio assessment: Reflective practice, critical theory, and large-scale scoring. Portsmouth, NH: Boynton/Cook-Heinemann.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–73.

Black, P., & Wiliam, D. (2004). The formative purpose: Assessment must first promote learning. Yearbook of the National Society for the Study of Education, 103(2), 20–50.

Blair, K. L., & Takayoshi, P. (1997). Reflections in reading and evaluating electronic portfolios. In K. B. Yancey & I. Weiser (Eds.), Situating portfolios: Four perspectives (pp. 357–369). Logan, UT: Utah State University Press.

Blakeslee, A. M., & Fleischer, C. (2007). Becoming a writing researcher (1st ed.). Mahwah, NJ: Routledge.

Blythe, S. (2001). Meeting the paradox of computer-mediated communication in writing instruction. In P. Takayoshi & B. Huot (Eds.), Teaching writing with computers (pp. 118–127). Boston, MA: Houghton Mifflin Company.

Bracey, G. W. (2006). Reading educational research: How to avoid getting statistically snookered. Portsmouth, NH: Heinemann.

Brandt, D. (1990). Literacy as involvement: The acts of writers, readers, and texts. Carbondale: Southern Illinois University Press.


Brandt, D. (1992). The cognitive as the social: An ethnomethodological approach to writing process research. Written Communication, 9(3), 315–355.

Brandt, D. (1998). Sponsors of literacy. College Composition and Communication, 49(2), 165–185.

Brandt, D. (2001). Literacy in American lives. Cambridge University Press.

Brannon, L., & Knoblauch, C. H. (1982). On students' rights to their own texts: A model of teacher response. College Composition and Communication, 33(2), 157–166.

Broadfoot, P. (1996). Education, assessment and society. Buckingham; Philadelphia: Open University Press.

Brooke, R. (1987). Underlife and writing instruction. College Composition and Communication, 38(2), 141–153.

Buck, A. (2012). Examining digital literacy practices on social network sites. Research in the Teaching of English, 47(1), 9–38.

Calfee, R. C., & Perfumo, P. (1996). Writing portfolios in the classroom: Policy and practice, promise and peril. Mahwah, NJ: L. Erlbaum Associates.

Calfee, R., & Sperling, M. (2010). On mixed methods: Approaches to language and literacy research. New York: Teachers College Press.

Callahan, S. (1997). Tests worth taking? Using portfolios for accountability in Kentucky. Research in the Teaching of English, 31(3), 295–336.

Callahan, S. (1999). All done with the best of intentions: One Kentucky high school after six years of state portfolio tests. Assessing Writing, 6(1), 5–40.

Callahan, S. (2000). Responding to the invisible student. Assessing Writing, 7, 57–77.


Cambridge, B., Kahn, S., Thompkins, D. P., & Yancey, K. B. (Eds.). (2001). Electronic portfolios: Emerging practices in student, faculty, and institutional learning (1st ed.). Washington, DC: Stylus Publishing.

Cambridge, D., Cambridge, B., & Yancey, K. B. (2009). Electronic portfolios 2.0: Emergent research on implementation and impact. Sterling, VA: Stylus Publishing.

Camp, R. (1985). The writing folder in post-secondary assessment. In P. J. A. Evans (Ed.), Directions and misdirections in English evaluation (pp. 91–99). Canada: Canadian Council of Teachers of English.

Camp, R. (1990). Thinking together about portfolios. Quarterly of the National Writing Project and the Center for the Study of Writing and Literacy, 12(2), 8.

Camp, R. (1992). Portfolio reflections in middle and secondary school classrooms. In K. B. Yancey (Ed.), Portfolios in the writing classroom (pp. 61–79). Urbana, IL: National Council of Teachers of English.

Camp, R. (1993a). Changing the model for the direct assessment of writing. In M. M. Williamson & B. A. Huot (Eds.), Validating holistic scoring for writing assessment (pp. 45–78). Cresskill, NJ: Hampton Press, Inc.

Camp, R. (1993b). The place of portfolios in our changing views of writing assessment. In R. E. Bennett & W. C. Ward (Eds.), Construction versus choice in cognitive measurement: Issues in constructed response, performance testing, and portfolio assessment (pp. 183–212). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.


Camp, R. (1996). New views of measurement and new models for writing assessment. In E. White, W. D. Lutz, & S. Kamusikiri (Eds.), Assessment of writing: Politics, policies, practices (pp. 135–147). New York, NY: Modern Language Association of America.

Carini, P. F. (1994). Dear Sister Bess: An essay on standards, judgment and writing. Assessing Writing, 1, 29–65.

Caswell, N. I. (2012). Reconsider emotion: Understanding the relationship between teachers' emotions and teachers' response practices (Doctoral dissertation). Kent State University, Kent, OH.

Caswell, N. (2014). Dynamic patterns: Emotional episodes within teachers' response practices. The Journal of Writing Assessment, 7(1), 1–10.

Caswell, N. (2016). Emotionally exhausting: Investigating the role of emotion in teacher-response practices. In P. Thomas & P. Takayoshi (Eds.), Literacy in practice: Writing in working, private, and public lives (pp. 148–160). New York: Routledge.

Cohen, J. H., & Wiener, R. B. (2002). Literacy portfolios: Improving assessment, teaching and learning (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Condon, W. (2011). Reinventing writing assessment: How the conversation is shifting. WPA: Writing Program Administration, 35(2), 162–182.

Creswell, J. W. (2015). 30 essential skills for the qualitative researcher. Thousand Oaks, CA: SAGE Publications, Inc.

Cushman, E., Kintgen, E. R., Kroll, B. M., & Rose, M. (Eds.). (2001). Literacy: A critical sourcebook (1st ed.). Boston: Bedford/St. Martin's.


Connors, R. J., & Lunsford, A. A. (1993). Teachers' rhetorical comments on student papers. College Composition and Communication, 44, 200–223.

Costa, A. L., & Kallick, B. (2000). Discovering and exploring habits of mind. Alexandria, VA: Association for Supervision & Curriculum Development.

Daiker, D. A., Sommers, J., & Stygall, G. (1996). The pedagogical implications of a college placement portfolio. In E. White (Ed.), Assessment of writing: Politics, policies, practices (pp. 257–270). Modern Language Association of America.

Dargusch, J. (2014). Teachers as mediators: Formative practices with assessment criteria and standards. Australian Journal of Language & Literacy, 37(3), 192–204.

Davis, B. G., Scriven, M., & Thomas, S. (1981). Evaluation of composition instruction. Edgepress.

DiVall, M. V., Alston, G. L., Bird, E., Buring, S. M., Kelley, K. A., Murphy, N. L., … Szilagyi, J. E. (2014). A faculty toolkit for formative assessment in pharmacy education. American Journal of Pharmaceutical Education, 78(9), 1–9.

Dunn, J. S., Luke, C., & Nassar, D. (2013). Valuing the resources of infrastructure: Beyond from-scratch and off-the-shelf technology options for electronic portfolio assessment in first-year writing. Computers and Composition, 30(1), 61–73.

Edgington, A. (2004). Think before you ink: Reflecting on teacher response (Doctoral dissertation). Available from Dissertations and Theses database. (UMI No. 3144721)


Edgington, A. (2005). What Are You Thinking” Understanding Teacher Reading and

Response Through a Protocol Analysis Study. Journal of Writing Assessment, 2(2), 125–

148.

Elbow, P. (1997a). High stakes and low stakes in assigning and responding to writing.

New Directions for Teaching and Learning, 1997(69), 5–13.

Elbow, P. (1997b). Grading Student Writing: Making It Simpler, Fairer, Clearer. New

Directions for Teaching and Learning, 1997(69), 127–140.

Elbow, P., & Belanoff, P. (1986). Portfolios as a Substitute for Proficiency Examinations. College Composition and Communication, 37, 336–339.

Elbow, P., & Belanoff, P. (1997). Reflections on an Explosion: Portfolios in the '90s and

Beyond. In K. B. Yancey & I. Weiser (Eds.), Situating Portfolios: Four

Perspectives (pp. 21–33). Logan, Utah: Utah State University Press.

Elliot, N. (2005). On a Scale: A Social History of Writing Assessment in America. New York: Peter Lang.

Emig, J. A. (1969). Components of the Composing Process among Twelfth-Grade

Writers. (Doctoral dissertation). Harvard University, Cambridge, MA.

Emig, J. (1983). The Composing Processes of Twelfth Graders: A Reassessment. College

Composition and Communication, 34(3), 278–283.

Faigley, L., Cherry, R., Jolliffe, D., & Skinner, A. (1985). Assessing Writers' Knowledge and Processes of Composing. Norwood, NJ: Ablex Publishing Corporation.

Faigley, L., & Witte, S. (1981). Analyzing Revision. College Composition and Communication,

32(4), 400–414.


Fife, J. M., & O'Neill, P. (2001). Moving beyond the Written Comment: Narrowing the Gap between Response Practice and Research. College Composition and Communication, 53(2), 300–321.

Fischer, K. M. (1997). Down the Yellow Chip Road: Hypertext Portfolios in Oz. In K. B.

Yancey & I. Weiser (Eds.), Situating Portfolios: Four Perspectives (pp. 338–356). Logan,

UT: Utah State University Press.

Fishman, A. R. (1987). Literacy and Cultural Context: A Lesson from the Amish. Language

Arts, 64(8), 842–854.

Ford, J. E., & Larkin, G. (1978). The Portfolio System: An End to Backsliding Writing Standards. College English, 39(8).

Foucault, M. (1995). Discipline & Punish: The Birth of the Prison. (A. Sheridan, Trans.)

(2nd ed.). New York: Vintage Books.

Freedman, S. W. (1993). Linking Large-Scale Testing and Classroom Portfolio

Assessments of Student Writing. Educational Assessment, 1(1), 27.

Freedman, S. W., Greenleaf, C., & Sperling, M. (1987). Response to Student Writing. Urbana,

IL: National Council of Teachers of English.

Freire, P. (1974). Pedagogy of the Oppressed. New York: The Seabury Press.

Gallagher, C. W. (2007). Reclaiming Assessment: A Better Alternative to the

Accountability Agenda. Portsmouth, NH: Heinemann.

Gallagher, C. W. (2011). Being there: (Re) making the assessment scene. College

Composition and Communication, 62(3), 450–476.

Gearhart, M., & Wolf, S. A. (1994). Engaging teachers in assessment of their students’

narrative writing: The role of subject matter knowledge. Assessing Writing, 1(1), 67–90.


Gee, J. P. (1991a). Discourse Systems and Aspirin Bottles: On Literacy. In C. Mitchell &

K. Weiler (Eds.), Rewriting Literacy: Culture and the Discourse of the Other (pp. 123–

135). Westport, Connecticut: Bergin and Garvey.

Gee, J. P. (1991b). The Narrativisation of Experience in the Oral Style. In C. Mitchell &

K. Weiler (Eds.), Rewriting Literacy: Culture and the Discourse of the Other (pp. 77–

102). Westport, Connecticut: Bergin and Garvey.

Gee, J. P. (1991c). What is Literacy? In C. Mitchell & K. Weiler (Eds.), Rewriting

Literacy: Culture and the Discourse of the Other (pp. 3–12). Westport, Connecticut:

Bergin and Garvey.

Gee, J. P. (2007). What video games have to teach us about learning and literacy. New

York: Palgrave Macmillan.

Gee, J. P. (2008). Social linguistics and literacies: ideology in discourses. Abingdon,

Oxon; New York: Routledge.

Gee, J., Hull, G., & Lankshear, C. (1996). The New Work Order: Behind the Language of the New Capitalism. Boulder, CO: Westview Press.

Gill, K. (1993). Process and portfolios in writing instruction. Urbana, IL: National

Council of Teachers of English.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: strategies for

qualitative research. New Brunswick, N.J.: Aldine Transaction.

Glazer, N. (2014). Formative plus Summative Assessment in Large Undergraduate

Courses: Why Both? International Journal of Teaching and Learning in Higher

Education, 26(2), 276–286.


Glazer, S. M., & Brown, C. S. (1993). Portfolios and Beyond: Collaborative Assessment

in Reading and Writing. Norwood, MA: Christopher-Gordon Publishers.

Godshalk, F., Swineford, F., & Coffman, W. (1966). The Measurement of Writing Ability (CEEB RM No. 6). Princeton, NJ: Educational Testing Service.

Goody, J. (1986). The Logic of Writing and the Organization of Society. Cambridge; New York: Cambridge University Press.

Goody, J., & Watt, I. (1968). The Consequences of Literacy. In J. Goody (Ed.), Literacy in Traditional Societies. Cambridge: Cambridge University Press.

Gorlewski, J. (2008). Research for the Classroom: Formative Assessment: Can You Handle the Truth? The English Journal, 98(2), 94–97.

Graves, D. H., & Sunstein, B. S. (1992). Portfolio portraits. Portsmouth, NH: Heinemann; Toronto: Irwin Publishing.

Greenhalgh, A. M. (1992). Voices in response: A postmodern reading of teacher response. College Composition and Communication, 43(3), 401–410.

Greve, C. (2014, March). Reading Beyond the Folder: Portfolios and Literacy. Presentation at the Conference on College Composition and Communication, Indianapolis, Indiana.

Greve, C. (2015, March). Reading Beyond the Folder: Toward a Theory on Portfolio Assessment. Presentation at the Conference on College Composition and Communication, Tampa, Florida.

Greve, C., Morris, W., & Huot, B. (Forthcoming). Use and Misuse of Writing Rubrics. In J. Liontas, M. Dellicarpini, & J. C. Riopel (Eds.), TESOL Encyclopedia of English Language Teaching. Wiley.

Hamp-Lyons, L. (1996). The Challenges of Second-Language Writing Assessment. In E. White, W. D. Lutz, & S. Kamusikiri (Eds.), Assessment of Writing: Politics, Policies, Practices (pp. 271–283). New York, NY: Modern Language Association of America.

Hamp-Lyons, L., & Condon, W. (1999). Assessing the Portfolio: Principles for Practice, Theory, and Research. Cresskill, NJ: Hampton Press.

Hanson, F. A. (1993). Testing Testing: Social Consequences of the Examined Life. Berkeley: University of California Press.

Hartwell, P. (1980). Dialect Interference in Writing: A Critical View. Research in the

Teaching of English, 14(2), 101–118.

Haswell, R. (2001). Beyond Outcomes: Assessment and Instruction Within a University

Writing Program. Westport, CT: Praeger.

Hawisher, G. E., & Moran, C. (1997). Responding to Writing On-Line. New Directions

for Teaching and Learning, 1997(69), 115–126.

Hawisher, G. E., & Selfe, C. L. (Eds.). (1989). Critical Perspectives on Computers and

Composition Instruction. New York: Teachers College Press.

Hawisher, G. E., & Selfe, C. L. (1991). The Rhetoric of Technology and the Electronic Writing Class. College Composition and Communication, 42(1), 55–65.

Hawisher, G., & Selfe, C. (1997). Wedding the Technologies of Writing Portfolios and

Computers: the Challenges of Electronic Classrooms. In K. B. Yancey & I. Weiser

(Eds.), Situating Portfolios: Four Perspectives (pp. 305–321). Logan, UT: Utah State

University Press.


Hayes, M. F., & Daiker, D. A. (1984). Using Protocol Analysis In Evaluating Responses To

Student Writing. Freshman English News, 13(2), 1–4.

Heath, S. B. (1982a). What No Bedtime Story Means: Narrative Skills at Home and School.

Language in Society, 11(1), 49–76.

Heath, S. B. (1982b). Protean Shapes in Literacy Events: Ever-Shifting Oral and Literate Traditions. In E. R. Kintgen, B. M. Kroll, & M. Rose (Eds.), Perspectives on Literacy (pp. 443–466). Carbondale: Southern Illinois University Press.

Heath, S. B. (1983). Ways with Words: Language, Life and Work in Communities and

Classrooms. Cambridge; New York: Cambridge University Press.

Heath, S. B. (1991). “It’s about winning!”: The language of knowledge in baseball. In L.

B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared

cognition (pp. 101–124). Washington, DC, US: American Psychological Association.

Heath, S. B., & Street, B. V. (2008). On ethnography: approaches to language and

literacy research. New York: Teachers College Press; NCRLL/National Conference on

Research in Language and Literacy.

Hester, V., O’Neill, P., Neal, M., Edgington, A., & Huot, B. (2007). Adding Portfolios to

the Placement Process. In P. O’Neill (Ed.), Blurring Boundaries: Developing

Writers, Researchers and Teachers (pp. 61–90). Cresskill, NJ: Hampton Press,

Inc.

Hewett, B. L. (2000). Characteristics of interactive oral and computer-mediated peer

group talk and its influence on revision. Computers and Composition, 17(3),

265–288.


Hodges, E. (1997). Negotiating the Margins: Some Principles for Responding to Our Students' Writing, Some Strategies for Helping Students Read Our Comments. New Directions for Teaching and Learning, 1997(69).

Howard, R.M. (1996). Memoranda to Myself: Maxims for the Online Portfolio.

Computers and Composition, 13(2), 155–167.

Hull, G., & Rose, M. (1990). “This Wooden Shack Place”: The Logic of an

Unconventional Reading. College Composition and Communication, 41(3), 287–

298.

Huot, B. (1988). The validity of holistic scoring: A comparison of the talk aloud

protocols of expert and novice holistic raters. (Doctoral dissertation). Indiana University

of Pennsylvania, Indiana, PA.

Huot, B. (1993). The Influence of Holistic Scoring Procedures on Reading and Rating

Student Essays. In M. Williamson & B. Huot (Eds.), Validating Holistic Scoring for

Writing Assessment: Theoretical and Empirical Foundations (pp. 206–236). Cresskill, NJ: Hampton Press.

Huot, B. (1994a). Beyond the Classroom: Using Portfolios to Assess Writing. In L. Black, D. A. Daiker, J. Sommers, & G. Stygall (Eds.), New Directions in Portfolio Assessment: Reflective Practice, Critical Theory, and Large-Scale Scoring (pp. 325–333). Portsmouth, NH: Boynton/Cook-Heinemann.

Huot, B. (1994b). A Survey of College and University Writing Placement Practices.

Writing Program Administration, 17 (3), 49-65.

Huot, B. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47(4), 549–566.


Huot, B. A. (2002). (Re)articulating writing assessment for teaching and learning. Logan: Utah

State University Press.

Huot, B. A., & O'Neill, P. (Eds.). (2009). Assessing Writing: A Critical Sourcebook. Boston: Bedford/St. Martin's; Urbana, IL: National Council of Teachers of English.


Inoue, A. (2004). Community-based assessment pedagogy. Assessing Writing, 9(3), 206–

238.

Jacoby, J., Heugh, S., Bax, C., & Branford-White, C. (2014). Enhancing learning

through formative assessment. Innovations in Education & Teaching

International, 51(1), 72–83.

Johnson, A. W. (2009). Objectifying Measures: The Dominance of High-Stakes Testing and the Politics of Schooling. Philadelphia, PA: Temple University Press.

Johnson, S. (2006). Everything Bad is Good for You: How Today's Popular Culture is Actually Making Us Smarter. New York: Riverhead Books.

Kirkpatrick, J., Renner, T., Kanae, L., & Goya, K. (2009). A Values-Driven Eportfolio Journey: Na Wa'a. In D. Cambridge, B. Cambridge, & K. B. Yancey (Eds.), Electronic Portfolios 2.0: Emergent Research on Implementation and Impact (pp. 97–102). Sterling, VA: Stylus.

Knowles, E. (2014). Untangling response to student writing: A look inside the classroom.

Presented at the Conference on College Composition and Communication, Indianapolis,

Indiana.

Knowles, E. (2015). Zooming in: The economy of response in the classroom. Presented

at the Conference on College Composition and Communication, Tampa, Florida.


Koretz, D., Stecher, B. M., Klein, S. P., & McCaffrey, D. F. (1995). The Vermont

Portfolio Assessment Program.

Kvale, S. (1996). InterViews: An Introduction to Qualitative Research Interviewing.

Thousand Oaks, CA: SAGE Publications, Inc.

Kynard, C. (2006). "Y'all are killin me up in here": Response theory from a Newjack

composition instructor/sistahgurl meeting her students on the page. Teaching

English in the Two-Year College, 34, 361–387.

Lakoff, G., & Johnson, M. (2003). Metaphors we live by. Chicago: University of Chicago

Press.

Larson, R. (2000). Revision as Self-Assessment. In J. B. Smith & K. B. Yancey (Eds.),

Self-Assessment & Development in Writing: A Collaborative Inquiry (pp. 97–103).

Cresskill, NJ: Hampton Press.

Lau, A. M. S. (2014). "Formative good, summative bad?" A review of the dichotomy in assessment literature. Journal of Further and Higher Education.

Lauer, C., McLeod, M., & Blythe, S. (2013). Online Survey Design and Development: A

Janus-Faced Approach. Written Communication, 30(3), 330–357.

Lawson, B., Ryan, S. S., & Winterowd, W. R. (1990). Encountering Student Texts:

Interpretive Issues in Reading Student Writing. Urbana, IL: National Council of Teachers of English.

Maddox, B. (2014). Globalising assessment: an ethnography of literacy assessment,

camels and fast food in the Mongolian Gobi. Comparative Education, 50(4), 474–489.

Maddox, B. (2015a). The neglected situation: assessment performance and interaction in

context. Assessment in Education: Principles, Policy & Practice, 22(4), 427–443.


Maddox, B. (2015b). Inside the Assessment Machine: The Life and Times of a Test Item.

In M. Hamilton, B. Maddox, & C. Addey (Eds.), Literacy as Numbers: Researching the

Politics and Practices of International Literacy Assessment (pp. 129–146). Cambridge: Cambridge University Press.

Maddox, B., Zumbo, B. D., Tay-Lim, B., & Qu, D. (2015). An Anthropologist

Among the Psychometricians: Assessment Events, Ethnography, and Differential Item

Functioning in the Mongolian Gobi. International Journal of Testing, 15(4), 291–309.


Miles, M. & Huberman, M. (1984). Qualitative Data Analysis: A Sourcebook of New

Methods. Thousand Oaks: SAGE Publications, Inc.

Mitchell, C., & Weiler, K. (Eds.). (1991). Rewriting Literacy: Culture and the Discourse of the Other (1st ed.). New York: Praeger.

Morrissette, J. (2011). Formative Assessment: Revisiting the Territory from the Point of

View of Teachers. McGill Journal of Education, 46(2), 247–265.

Moss, B. J. (Ed.). (1994). Literacy Across Communities. Cresskill, NJ: Hampton Press.

Moss, P. A., et al. (1992). Portfolios, Accountability, and an Interpretive Approach to Validity. Educational Measurement: Issues and Practice, 11(3), 12–21.

Moss, P. (2003). Reconceptualizing Validity for Classroom Assessment. Educational Measurement: Issues and Practice, 22(4), 13–25.

Murphy, S. (1994a). Portfolios and curriculum reform: Patterns in practice. Assessing

Writing, 1(2), 175–206.


Murphy, S. (1994b). Writing Portfolios in K-12 Schools: Implications for Linguistically

Diverse Students. In L. Black, D. A. Daiker, J. Sommers, & G. Stygall (Eds.),

New Directions in Portfolio Assessment: Reflective Practice, Critical Theory, and

Large-Scale Scoring (pp. 140–165). Portsmouth, NH: Boynton/Cook Publishers.

Murphy, S. (2000). A sociocultural perspective on teacher response: Is there a student in

the room? Assessing Writing, 7, 79-90.

Murphy, S. (2013). Some consequences of writing assessment. In A. Havnes & L. McDowell

(Eds.), Balancing dilemmas in assessment and learning in contemporary education (pp. 33-

49). Routledge.

Murphy, S., & Smith, M. A. (1992). Writing Portfolios: A Bridge from Teaching to Assessment

(1st ed.). Markham, ON: Pippin Publishing.

Murphy, S., & Ruth, L. (1993). The Field Testing of Writing Prompts Reconsidered. In M.

Williamson & B. Huot (Eds.), Validating Holistic Scoring for Writing Assessment:

Theoretical and Empirical Foundations (pp. 266–302). Cresskill, NJ: Hampton Press.

Murphy, S., & Underwood, T. (2000). Portfolio Practices: Lessons from Schools, Districts and

States. Norwood, MA: Christopher-Gordon Publishers.

New London Group. (1996). A Pedagogy of Multiliteracies: Designing Social

Futures. Harvard Educational Review, 66(1), 60–92.

Odell, L., Goswami, D., & Herrington, A. (1983). The discourse-based interview. In P.

Mosenthal, L. Tamor, & S. Walmsley (Eds.), Research on Writing (pp. 221–236). New

York: Longman.

O’Neill, P., & Fife, J. M. (1999). Listening to Students: Contextualizing Response to

Student Writing. Composition Studies, 27(2), 39–51.


O’Neill, P., & Moore, C. (2009). What College Writing Teachers Value And Why It

Matters. In M. Paretti & K. Powell (Eds.), Assessment of Writing (pp. 35–48).

Tallahassee, Florida: Association for Institutional Research.

O’Neill, P., Schendel, E., & Huot, B. (2002). Defining Assessment as Research: Moving

from Obligations to Opportunities. WPA: Writing Program Administration, 26(1), 10–26.

Ong, W. (1986). Writing Is a Technology that Restructures Thought. In G. Baumann

(Ed.), The Written Word: Literacy in Transition (pp. 23–50). New York: Oxford

University Press.

Parkes, J. (2007). Reliability as argument. Educational Measurement: Issues and

Practice, 26(4), 2–10.

Paulson, F. L., Paulson, P. R., & Meyer, C. (1991). What Makes a Portfolio a Portfolio? Educational Leadership, 48(5), 60–63.

Perry, J. (n.d.). Portfolios and the Promise Left Unfulfilled. Unpublished manuscript.

Phelps, L. W. (1998). Surprised by response: Student, teacher, editor, reviewer. Journal of Advanced Composition, 18, 247–273.

Phelps, L. W. (2000). Cyrano's nose: Variations on the theme of response. Assessing Writing, 7, 91–110.

Podis, J. M., & Podis, L. A. (1986). Improving our responses to student writing. Rhetoric Review, 5(1), 90–98.

Powell, K., & Takayoshi, P. (2012). Practicing Research in Writing Studies: Reflexive and Ethically Responsible Research. New York: Hampton Press.

Pula, J., & Huot, B. (1993). A Model of Background Influences on Holistic Raters. In M. Williamson & B. Huot (Eds.), Validating Holistic Scoring for Writing Assessment: Theoretical and Empirical Foundations (pp. 237–265). Cresskill, NJ: Hampton Press.

Purves, A. C., Quattrini, J. A., & Sullivan, C. I. (1995). Creating the Writing Portfolio. Lincolnwood, IL: NTC Publishing Group.

Rose, M. (1989). Lives on the Boundary: A Moving Account of the Struggles and Achievements

of America’s Educationally Underprepared. New York: Penguin Books.

Ruth, L., & Murphy, S. (1988). Designing Writing Tasks for the Assessment of Writing. Norwood, NJ: Ablex Publishing.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.

Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education: Principles, Policy & Practice, 5(1), 77–84.

Schon, D. A. (1990). Educating the Reflective Practitioner: Toward a New Design for

Teaching and Learning in the Professions (1st ed.). San Francisco: Jossey-Bass.

Scribner, S. (1984). Literacy in Three Metaphors. American Journal of Education, 93(1), 6–21.

Scribner, S., & Cole, M. (1978). Literacy without schooling: Testing for intellectual

effects. Harvard Educational Review, 48, 448–461.

Scribner, S., & Cole, M. (1981). Unpacking Literacy. In M. F. Whiteman (Ed.), Writing:

The Nature, Development, and Teaching of Written Communication (pp. 71–87).

Mahwah, NJ: Lawrence Erlbaum Associates, Inc.


Scriven, M. (1967). The Methodology of Evaluation. In R. Tyler, R. Gagne, & M. Scriven (Eds.), Perspectives of Curriculum Evaluation (pp. 39–83). Chicago: Rand McNally & Company.

Selfe, C. (1985). The Electronic Pen: Computers and the Composing Process. In J. L. Collins & E. A. Sommers (Eds.), Using Computers in the Teaching of Writing (pp. 55–66). Upper Montclair, NJ: Boynton/Cook Publishers.

Shepard, L. (2000). The Role Of Assessment In A Learning Culture. Educational

Researcher, 29(7), 4–14.

Shepard, L. (2006). Classroom Assessment. In Educational Measurement (4th ed., pp.

623–646). Westport, CT: Praeger Publishers.

Smagorinsky, P. (1989). The Reliability and Validity of Protocol Analysis. Written Communication, 6, 463–479.

Smagorinsky, P. (2008). The Method Section as Conceptual Epicenter in Constructing

Social Science Research Reports. Written Communication, 25(3), 389–411.

Smith, J. B., & Yancey, K. B. (Eds.). (2000). Self-Assessment & Development in Writing:

A Collaborative Inquiry. Cresskill, NJ: Hampton Press.

Smith, M. W., & Wilhelm, J. D. (2002). "Reading don't fix no Chevys": Literacy in the lives of young men. Portsmouth, NH: Heinemann.

Smith, S. P. (2002). Why Use Portfolios? One Teacher’s Response. In P. O’Neill & C.

Moore (Eds.), Practice in Context: Situating the Work of Writing Teachers (pp. 227–284). Urbana, IL: National Council of Teachers of English.


Smith, W. (1992). The Importance of Teacher Knowledge in College Composition Placement Testing. In B. Huot & P. O'Neill (Eds.), Assessing Writing: A Critical Sourcebook (pp. 179–202). Boston: Bedford/St. Martin's; Urbana, IL: National Council of Teachers of English.

Smith, W. (1993). Assessing the reliability and adequacy of using holistic scoring of essays as a college composition placement technique. In M. Williamson & B. Huot (Eds.), Validating Holistic Scoring for Writing Assessment: Theoretical and Empirical Foundations (pp. 142–205). Cresskill, NJ: Hampton Press.

Sperling, M. (1994). Constructing the perspective of teacher as reader: A framework for

studying response to student writing. Research in the Teaching of English, 28,

174-207.

Sperling, M., & Freedman, S.W. (1987). A good girl writes like a good girl: Written

response and clues to the teaching/learning process. Written Communication, 4,

343-369.

Spinuzzi, C. (2011, December). Transparency in Commenting: An Unintended Experiment With Google Docs [Web log post]. Retrieved from

http://clayspinuzzi.com/2011/12/transparency-in-commenting-an-unintended-experiment-

with-google-docs/

Straub, R. (1996). Teacher Response as Conversation: More than Casual Talk, an Exploration. Rhetoric Review, 14(2), 374–399.

Straub, R. (1997). Students' reaction to teacher comments: An exploratory study. Research in the Teaching of English, 31, 91–120.

Straub, R. (2000). The student, the text, and the classroom context: A case study of teacher response. Assessing Writing, 7(1), 23–55.

Straub, R., & Lunsford, R. F. (1995). Twelve readers reading: Responding to college student writing. Cresskill, NJ: Hampton Press.

Street, B. V. (1984). Literacy in theory and practice. Cambridge; New York: Cambridge University Press.

Street, B. V. (Ed.). (1993). Cross-Cultural Approaches to Literacy. Cambridge: Cambridge University Press.

Street, B. V. (1995). Social Literacies: Critical Approaches to Literacy in Development, Ethnography and Education. London; New York: Routledge.

Sunstein, B. S., & Lovell, J. H. (2000). The portfolio standard: How students can show us what they know and are able to do. Portsmouth, NH: Heinemann.

Taras, M. (2005). Assessment – Summative and Formative – Some Theoretical

Reflections. British Journal of Educational Studies, 53(4), 466–478.

Turley, E. D., & Gallagher, C. W. (2008). On the “Uses” of Rubrics: Reframing the Great

Rubric Debate. The English Journal, 97(4), 87–92.

Tuzi, F. (2004). The impact of e-feedback on the revisions of L2 writers in an academic writing course. Computers and Composition, 21(2), 217–235.

van Teijlingen, E., & Hundley, V. (2001). The Importance of Pilot Studies. Social Research Update, 1–4.

van Teijlingen, E., Rennie, A.-M., Hundley, V., & Graham, W. (2001). The importance of conducting and reporting pilot studies: The example of the Scottish Births Survey. Journal of Advanced Nursing, 34, 289–295.


Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds.). Cambridge, MA: Harvard University Press.

Vygotsky, L. S. (1986). Thought and Language (A. Kozulin, Ed.; Rev. ed.). Cambridge, MA: The MIT Press.


White, E. M., Lutz, W., & Kamusikiri, S. (Eds.). (1996). Assessment of Writing: Politics, Policies, Practices. New York: Modern Language Association of America.

White, E., Elliot, N., & Peckham, I. (2015). Very Like a Whale: The Assessment of Writing Programs. Logan: Utah State University Press.

Williams, J. (1981). The Phenomenology of Error. College Composition and Communication, 32, 152–168.

Williamson, M. (1994). The Worship of Efficiency: Untangling Theoretical and Practical Considerations in Writing Assessment. Assessing Writing, 1(2), 147–173.

Williamson, M. M., & Huot, B. A. (Eds.). (1993). Validating holistic scoring for writing assessment: Theoretical and empirical foundations. Cresskill, NJ: Hampton Press.

Wolf, D. P. (1989). Portfolio Assessment: Sampling Student Work. Educational

Leadership, 46(7), 35.

Yancey, K. B. (Ed.). (1992). Portfolios in the writing classroom: An introduction. Urbana, IL: National Council of Teachers of English.


Yancey, K. B. (1996). Dialogue, Interplay, and Discovery: Mapping the Role and the

Rhetoric of Reflection in Portfolio Assessment. In R. Calfee & P. Perfumo (Eds.),

Writing Portfolios in the Classroom: Policy and Practice, Promise and Peril (pp.

83–102). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Yancey, K. B. (1998). Reflection in the writing classroom. Logan, Utah: Utah State

University Press.

Yancey, K. B. (1999). Looking Back as We Look Forward: Historicizing Writing

Assessment. College Composition and Communication, 50(3), 483–503.

Yancey, K. B. (2001). Introduction: Digitized Student Portfolios. In B. Cambridge, S. Kahn, D. Tompkins, & K. B. Yancey (Eds.), Electronic Portfolios: Emerging Practices in Student, Faculty, and Institutional Learning (pp. 15–30). Washington, D.C.: American Association for Higher Education.

Yancey, K. B., & Weiser, I. (Eds.). (1997). Situating portfolios: Four perspectives. Logan, UT: Utah State University Press.

Young, R. (1981). Problems and the composing process. In C.H. Frederiksen & J.E. Dominic

(Eds.), Writing: The nature, development, and teaching of written composition (pp. 59–66).

Hillsdale, NJ: Erlbaum.

Zebroski, J. T. (1989). A Hero in the Classroom. In B. Lawson, S. S. Ryan, & W. R. Winterowd (Eds.), Encountering Student Texts (pp. 35–48). Urbana, IL: National Council of Teachers of English.

Zebroski, J. T. (1994). Thinking through theory: Vygotskian perspectives on the teaching

of writing. Portsmouth, NH: Boynton/Cook Publishers.
