Fixing Our Broken System Of Testing And Accountability: The Reauthorization Of ESEA

Comments

Peter Hofman

I watched the hearing and was very impressed with your testimony and your responses to questions. Effective assessment is an essential ingredient in helping all students succeed. Unfortunately, in many ways, NCLB has made assessment a four-letter word. It is a shame that poor assessment practices have cast all assessment in a bad light. To most people, standardized tests mean multiple-choice/selected-response tests, which have value in the right application but have come to dominate the world of educational assessment. Among the results are some of the ills you cited in your testimony: time spent on test prep instead of learning academic content, relatively shallow content consisting largely of basic skills and factual recall, and so on.

I'm a strong believer in the power and benefits of performance assessment, both in high-stakes testing (for its own sake, but also because of the positive impact it can have on classroom instruction and assessment) and particularly when curriculum-embedded. It is here, in conjunction with formative assessment practices, that we can have the greatest impact on student learning and outcomes. And it is these assessments that could, and I think should, not only contribute to accountability but ultimately comprise the largest, if not the sole, component of state assessment systems. While increasing the role of local educators in the process, they would still operate under the umbrella of the state, which would ensure validity, reliability, and comparability through various proven means. I think we need at least annual measures of student learning, but they must be effective measures, worthy of students' and teachers' time, that generate useful information.

Much, if not most, of the discussion around too much testing, which as I noted at times seems to condemn all assessment, ignores key contributors to the problem. Here are five (I have a feeling that not all apply to you and your school):

1. Most educators (and policy makers, for that matter) aren't "assessment literate." Most educator preparation programs, certification exams, evaluation systems, and professional development offerings ignore assessment literacy: the body of knowledge, skills, and beliefs educators need to identify, select, or create assessments optimally designed for formative and summative purposes, and, with a sound understanding of test quality and comparability issues, to analyze, evaluate, and use the quantitative and qualitative evidence these assessments generate to improve programs and specific instructional approaches that advance student learning. The lack of assessment literacy hinders the creation or selection of tests that can inform instruction. Ineffective tests produce inaccurate results, which lead to inappropriate interventions that don't work, forcing a repeat of the fruitless cycle and wasting time and resources. It will also pose challenges for schools and districts in carrying out effective audits of their tests.

2. Most tests are not accessible to all students. Some are, especially accountability tests, but even there, efforts to ensure access typically focus on students coded under IDEA as having disabilities, or on English language learners, overlooking even more students whose access needs go unmet. Accessibility of tests given during the school year is a far bigger problem from a learning perspective.
Inaccessible tests generate student frustration and inaccurate test results, again leading to inappropriate interventions that don't produce the desired effects, prompting another cycle of testing.

3. As I noted above, accountability tests, and the associated benchmark/interim tests (including adaptive assessments), are composed largely or even exclusively of multiple-choice/selected-response items (even the multi-state consortia tests have a large number of these items), and this represents several causes of the problem. In addition to those you cited, multiple-choice/selected-response items generally obscure why students picked the incorrect, let alone the correct, response option (notwithstanding distractor analyses). Lacking that certainty, teachers reviewing test results can't be sure what students need in order to improve learning, increasing the odds that interventions won't work, which in turn drives the need for more tests. Is there an alternative? Yes: performance-based assessments, in which students demonstrate what they know and can do through work products, presentations, and the like. The benefit: teachers (and others) can actually see student work, direct evidence of student learning as well as of misunderstandings and misconceptions, so there's no doubt about what students need to grow. Do these types of assessments take more time to develop, administer, and score? Sure. But in the long run they're more efficient, and research indicates they engage students, who rise to the challenge. I was glad to see how often performance assessment came up in last week's hearing (thank you). Just as testing time and cost explain the prevalence of selected-response items in accountability assessments, these same factors tend to limit the extent of performance-based items in such tests. This is certainly true of the multi-state consortia: both PARCC and Smarter Balanced cut back on the performance assessment components of their tests. One option for addressing this issue is to include curriculum-embedded performance assessments as part of a state's accountability system. The benefits are numerous, it is feasible from logistical and technical quality perspectives, and it would likely reduce the number of other tests that would have to be used.

4. The tremendous potential of formative assessment practices has been largely ignored, in part because of confusion sown in the marketplace by the name being co-opted to stand for (1) off-the-shelf tests or (2) simply frequent testing. In fact, only one step in this instructionally embedded process involves gathering evidence of student learning. The practices, individually and collectively, have been researched more than any other type of assessment. The results are rather astounding: the process is as effective as one-on-one tutoring and more effective than reducing class size. Moreover, these practices help all students grow but have a greater impact on low-achieving ones; that is, they can help close achievement gaps. All it takes is time and support to build educator capacity. The process has support from CCSSO and many well-regarded education experts, yet it is still used in only a tiny fraction of the nation's schools. It helps students and teachers identify and address misconceptions and misunderstandings during instruction and learning, lessening the need for traditional tests. And it can be used in all grades and subjects, for foundational knowledge and deeper learning alike.

5. One reason some people have cited for the large number of tests is the use of test results in educator evaluation. The issue here is that most evaluation systems that have been adopted use a formula designed to generate a single number purporting to denote the effectiveness of each educator. The formula approach is sold as "objective." Yet it flies in the face of performance review processes used throughout much, if not most, of our economy. Performance review is a personnel/human resources issue. It relies on supervisor judgment that takes into account context, numerous interactions over time, and myriad other factors. Creating a formula introduces unintended consequences on top of the many objections that have been raised about its limitations: if the variability of some or all of the factors in the formula other than student test scores is minimal, then the effective weight of the test scores will increase, perhaps dramatically (see the brief numeric sketch below). Adopting a more qualitative approach (yes, issues exist with previous systems, and supervisors will need training and coaching) should place student test results in a more reasonable framework and possibly reduce the amount of testing.

As noted, most people ignore these reasons. Despite the length of this comment, I've barely scratched the surface. I have the feeling you'd agree with much, if not most, of what I've written. Your testimony was an important step in educating people about sound assessment practices. I hope you keep speaking out, and perhaps add one or more of the points above to your message.
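To make the variance point in item 5 concrete, here is a minimal sketch in Python. All teacher counts, weights, and scores are invented for illustration; no actual evaluation system is being modeled. It shows how a nominally 50/50 formula can end up driven almost entirely by test scores when the other component barely varies across educators:

```python
# Hypothetical sketch: when one component of a weighted evaluation formula
# barely varies across teachers, the other component dominates the spread of
# composite scores, regardless of the nominal weights. Numbers are invented.

import statistics

# Invented ratings for five teachers, each on a 1-5 scale.
observation = [3.9, 4.0, 3.8, 4.0, 3.9]   # supervisors rate nearly everyone alike
test_growth = [1.5, 4.5, 2.0, 5.0, 3.0]   # test-based scores vary widely

W_OBS, W_TEST = 0.5, 0.5                   # nominal weights: 50/50

composite = [W_OBS * o + W_TEST * t for o, t in zip(observation, test_growth)]
print("composite ratings:", composite)

# Ignoring any covariance between the two components, the variance of each
# weighted component approximates its contribution to the spread of the
# composite, i.e., to what actually distinguishes one teacher from another.
var_obs = statistics.pvariance([W_OBS * o for o in observation])
var_test = statistics.pvariance([W_TEST * t for t in test_growth])

print(f"variance from observations: {var_obs:.4f}")   # ~0.0014
print(f"variance from test scores:  {var_test:.4f}")  # ~0.4650
print(f"share of spread from tests: {var_test / (var_obs + var_test):.1%}")
```

With these invented numbers, over 99 percent of the differences among composite ratings comes from the test component, even though its nominal weight is only 50 percent; that is the sense in which the effective weight of test scores can increase dramatically.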
