Highland High School

Memorandum

Date: December 4, 2013

To: Highland High School Faculty

From: Dr. Patrick Weil, Principal

Re: Clarifying the Purpose and Use of Assessments as a Strategy to Improve Highland High School Student Achievement

While traveling the road of school improvement, as on any long journey, it is wise to pause and take stock of where we have been, where we currently are, and how we will progress to our destination. I sense that we are at this point in our process of focusing on assessment as our strategy for improving our students’ achievement. Based on your recent feedback, we need to revisit what we are doing and why we are doing it, and to introduce how to use the data you are generating at the classroom level to improve the learning and teaching of our students.

Addressing Misconceptions

I am aware of some misconceptions that have developed about the purpose and use of quarterly assessments, so let’s fill in some gaps. Through no fault of their own, teachers are looking at their quarterly data in terms of “did I meet my goal of whatever percent?” as opposed to asking and answering questions like:

·  What instruction did my students receive, or not receive, that is connected to these specific test questions?

·  What specific learning objectives are my students weak in that I need to incorporate into my next unit?

·  Are my students successful or unsuccessful at certain types of questions (higher-order thinking, multi-step, Bloom’s levels, Webb’s Depth of Knowledge)?

·  What lessons went well/not well based on the question data?

·  Where are my students now in terms of their skills and knowledge specified by the standards?

·  Where do my students need to be in order to demonstrate proficiency?

·  What interventions are necessary when my students don’t demonstrate proficiency?

·  What steps do I take if my students are under challenged?

To be clear, teachers’ use of the data generated by their quarterly assessments is strictly for the purpose of improving student achievement immediately (progress monitoring) at the classroom level, using individual and group analysis (teachers working together in their Professional Learning Communities, or PLCs). The data are not being used to make judgments about teacher performance.

What is apparent to me is that we need to build our capacity for data analysis and its use at the classroom level. For the December 4, 2013 Faculty Meeting, we have planned professional development designed to help teachers and their PLCs develop more refined skills in using data to improve individual student and classroom achievement.

Some Essential Questions

Some faculty members have raised questions about the value of our assessments. Some of the essential questions that teachers have recently asked include:

·  Why are we spending all this time on quarterly assessments?

·  What happens to the data generated by these assessments, and what valid purpose do they serve?

·  How do we keep learning at the forefront and not spend all of our time testing?

·  How does it help students?

·  How does all this extra work help me?

Assessment as a Strategy for Improving Student Achievement

It is a fact that the demographics of the student population we serve in the School Town of Highland are changing each year. An increasing number of students are coming to us with gaps in achievement. Another fact is that accountability is here to stay, regardless of which political party controls our government at any level. Remember that the birth mother of the accountability mechanism, No Child Left Behind, was a bipartisan animal. Yet another fact is that the government of the State of Indiana controls public school finance. Schools are no longer in a position to compete for new funding as priorities shift with our aging population. We can’t control how the pie is sliced. We are also challenged to advance our gifted students in order to make them “college ready.” So, what are we to do?

It is always important to remind each other that we do control the conditions for learning and how we instruct our students within the four walls of our school. We have many of the resources that enable us to set high academic standards and expectations for each and every one of our students. We have invested considerable time, energy, and effort into analyzing our standards and writing quarterly instructional objectives. We have written quarterly assessments designed to inform us about how our students are doing in learning the content and skills defined by those standards and objectives. Our PLCs are working on implementing a continuous cycle of improvement through the actions of Planning, Doing, Checking, and Acting, designed to take students from where they are and help them grow as much as we are able.

According to our school performance data, we do a pretty good job of this already. So, why bother? We must bother because we know that we will be held to even higher standards with more challenging students and scant additional funding.

Most significantly, the focus of accountability has shifted from 100% of students passing (NCLB) and from comparing groups of students year to year (Bennett’s PL 221 and A-F model) to measuring individual students’ annual academic growth. Therefore, we must act by focusing on individual student and classroom-level data and on what we learn from it, so we can respond with instruction that is developmentally appropriate and effective.

Our professional challenge is how to make learning “visible” by considering “teaching in terms of its impact on student learning” (Hattie, 2012). This requires a shift in our roles: teachers become learners, able to clearly identify the attributes that make a real difference in student learning, and teaching becomes visible to students. The design of the Professional Learning Community structure supports this changing role because, by its very nature, the PLC model promotes learning from and with fellow professionals. We also want students who learn to become their own teachers (a core attribute of a lifelong learner).

How do we learn from our students? One of the most powerful ways is through assessment (pre-assessment, formative assessment, summative assessment). Dylan Wiliam (2011; Black & Wiliam, 2009) proposes that assessments are formative when the evidence (or data) they generate is actually used to adapt teaching to meet the needs of the students. Formative assessment has two goals: (1) to determine areas of student strength and weakness and (2) to improve instruction based on that information. Wiliam also observes that “equally important as teachers’ use of assessment data to improve teaching is students’ use of assessment data to improve their own learning and that of their peers.” Consider, too, this statement from Carol Ann Tomlinson (2013):

In reality, the judicious use of formative assessment is little more than common sense. If we want to maximize student growth, why would we not articulate clear and robust (instructional objectives), teach with those (objectives) squarely in mind, check the status of our students relative to what we just explained or what they just practiced, and do something about what the assessment reveals? This sequence of teacher thinking and planning is at the core of effective differentiation. It’s simply fundamental to good teaching.

A number of experts agree that the effective use of formative assessment is among the most powerful tools for improving student achievement. John Hattie (2009) concluded that the effective use of formative assessment is one of the most highly ranked contributors to student achievement, with an effect size of 0.90. (Hattie considers effect sizes of 0.50 and above to be most powerful and effect sizes around 0.30 to be of little value. By way of comparison, effect sizes for commonly cited factors affecting achievement are: ability grouping, 0.12; class size, 0.21; homework, 0.29; quality of teaching, 0.44; teacher clarity, 0.75.)

A study by Yeh (2011) comparing the impact of formative assessment on student achievement with that of 22 other approaches (including increased requirements for teacher education, teacher experience, summer school, more rigorous math requirements, class-size reductions, more rigorous teacher-licensure exams, and exit exams) concluded that formative assessment was the most cost-effective approach to improving student achievement.

How you use an assessment determines whether it is formative or not. Teachers who use assessment as an integral part of instruction can accelerate their students’ learning by five to six months. Instead of grading in the traditional sense, teachers use formative assessments to determine where students fall in their mastery of learning objectives that are clearly and tightly aligned to the standards. These assessments help teachers and students alike understand where their mastery of content and skills is proficient, sufficient, or deficient.

Learning is improved by assessment when teachers share clear learning objectives with their students, when they involve students in self-assessment, and when they provide students with quality feedback. Hattie (2012) proposes that “feedback is more powerful when it is sought by the teacher about his or her teaching than by the student about his or her learning.” Quality assessments match the expectations and complexity of thinking identified in the standards, use the same terms that appear in the standards, align with or resemble the formatting of state assessments, have a clear purpose, have clear targets, have sound design, and pre-assess (we can teach better when we already know what students know). Quality assessments provide clear evidence of learning, are measurable, are actionable, and are valid (they measure what they are supposed to measure).

A Commitment to the Process

I firmly believe that our investment in the process of data-driven continuous improvement (Plan, Do, Check, Act) will pay off for us. It is grounded in the best educational research currently available. It challenges our current beliefs about how learning and teaching are conducted, but in the end the gains our students make will be worth the steep learning curve and the hard work we are investing in developing, administering, and evaluating the effectiveness of our own locally developed assessments. A continued commitment to the process is important because it is the best method for improving instruction for our students.

We have several choices when it comes to assessments. We can create our own rigorous assessments, we can adopt a commercially developed assessment series, or we can use the state’s old Core 40 tests. Whatever we use must accurately align with the standards we have chosen to focus on. Most importantly, we need a new approach to analyzing our results so that the data serve the purpose of driving our instruction.

The best available advice is for our faculty to develop our own standards-based assessments that mirror the PARCC and Smarter Balanced Assessments. Our PLCs have completed 1 ½ cycles of initial design and revision of quarterly assessments in every content area. We must continue our commitment. I remain convinced that we are following the correct path on our journey.

As a wise old superintendent once asked me, “Pat, if you’re going to jump out of a plane, don’t you want to pack your own parachute?” My answer to that question was (and still is), “YES!”

In spite of big external challenges, let’s answer the bell and take control of our own destiny. By doing so, we all win!

References

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge.

Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. New York: Routledge.

Tomlinson, C., & Moon, T. (2013). Assessment and student success in a differentiated classroom. Alexandria, VA: ASCD.

Yeh, S. (2011). The cost-effectiveness of 22 approaches for raising student achievement. Charlotte, NC: Information Age.
