
Teacher Quality

  • How Relationships Drive School Improvement—And Actionable Data Foster Strong Relationships

    Written on April 20, 2017

    Our guest authors today are Elaine Allensworth, Molly Gordon and Lucinda Fickel. Allensworth is Lewis-Sebring Director of the University of Chicago Consortium on School Research; Gordon is Senior Research Analyst at the University of Chicago Consortium on School Research; and Fickel is Associate Director of Policy at the University of Chicago Urban Education Institute. Elaine Allensworth explores this topic further in Teaching in Context: The Social Side of Education Reform edited by Esther Quintero (Harvard Education Press: 2017). 

    As researchers at the UChicago Consortium on School Research, we believe in using data to support school improvement, such as data on students’ performance in school (attendance, grades, behavior, test scores) and surveys of students and teachers about their school experiences. But data does nothing on its own. In the quarter-century that our organization has been conducting research on Chicago Public Schools, one factor has emerged time and time again as both vital for making good use of data and the key element in school improvement: relationships.

    Squishy and amorphous as it might initially sound, there is actually solid empirical grounding not only for the importance of relationships for student learning, but also for the organizational factors that foster strong relationships. In 2010, the Consortium published Organizing Schools for Improvement, which drew on a decade of administrative and survey data to examine a framework called the 5Essentials (Bryk et al. 2010). The book details findings that elementary/middle schools strong on the 5Essentials—strong leaders, professional capacity, parent-community ties, instructional guidance, and a student-centered learning climate—were highly likely to improve, while others showed little change or fell behind.

  • Fix Schools, Not Teachers

    Written on April 18, 2017

    This post was originally published at the Harvard Education Press blog.

    Both John and Jasmine are fifth-grade teachers. Jasmine has a lot of experience under her belt, has been recognized as an excellent educator and, as a content expert in math and science, her colleagues seek her out as a major resource at her school. John has been teaching math and science for two years. His job evaluations show room for improvement but he isn’t always sure how to get there. Due to life circumstances, they both switch schools the following year. John starts working at a school where faculty routinely work collaboratively, which is a rather new experience for him. In Jasmine’s new school, teachers are friendly but they work independently and don’t function as a learning community like in her old school.

    After a year John’s practice has improved considerably; he attributes much of it to the culture of his new school, which is clearly oriented toward professional learning. Jasmine’s instruction continues to be strong but she misses her old school, being sought out by her colleagues for advice, and the mutual learning that she felt resulted from those frequent professional exchanges.

    This story helps to illustrate the limitations of how teachers’ knowledge and skills are often viewed: as rather static and existing in a vacuum, unaffected by the contexts where teachers work. Increasing evidence suggests that understanding teaching and supporting its improvement requires a recognition that the context of teachers’ work, particularly its interpersonal dimension, matters a great deal. Teachers’ professional relations and interactions with colleagues and supervisors can constrain or support their learning and, consequently, that of their students.

  • Teacher Evaluations And Turnover In Houston

    Written on March 30, 2017

    We are now entering a time period in which we might start to see a lot of studies released about the impact of new teacher evaluations. This incredibly rapid policy shift, perhaps the centerpiece of the Obama Administration’s education efforts, was sold based on illustrations of the importance of teacher quality.

    The basic argument was that teacher effectiveness is perhaps the most important factor under schools’ control, and the best way to improve that effectiveness was to identify and remove ineffective teachers via new teacher evaluations. Without question, there was a logic to this approach, but dismissing or compelling the exits of low performing teachers does not occur in a vacuum. Even if a given policy causes more low performers to exit, the effects of this shift can be attenuated by turnover among higher performers, not to mention other important factors, such as the quality of applicants (Adnot et al. 2016).

    A new NBER working paper by Julie Berry Cullen, Cory Koedel, and Eric Parsons addresses this dynamic directly by looking at the impact of a new evaluation system in Houston, Texas, on teacher turnover. It is an important piece of early evidence on one new evaluation system, but the results also speak more broadly to how these systems work.

  • New Teacher Evaluations And Teacher Job Satisfaction

    Written on February 15, 2017

    Job satisfaction among teachers is a perennially popular topic of conversation in education policy circles. There is good reason for this. For example, whether or not teachers are satisfied with their work has been linked to their likelihood of changing schools or professions (e.g., Ingersoll 2001).

    Yet much of the discussion of teacher satisfaction consists of advocates’ speculation that their policy preferences will make for a more rewarding profession, whereas opponents’ policies are sure to disillusion masses of educators. This was certainly true of the debate surrounding the rapid wave of teacher evaluation reform over the past ten or so years.

    A paper just published in the American Educational Research Journal directly addresses the impact of new evaluation systems on teacher job satisfaction. It is, therefore, not only among the first analyses to examine the impact of these systems, but also the first to look at their effect on teachers’ attitudes.

  • New Evidence On Teaching Quality And The Achievement Gap

    Written on November 17, 2016

    It is an extensively documented fact that low-income students score more poorly on standardized tests than do their higher income peers. This so-called “achievement gap” has persisted for generations and is still one of the most significant challenges confronting the American educational system.

    Some people tend to overstate -- while others tend to understate -- the degree to which this gap is attributable to differences in teacher (and school) effectiveness between lower and higher income students (with income usually defined in terms of students’ eligibility for subsidized lunch assistance). As discussed below, the evidence thus far suggests that lower income students are more likely than higher income students to have less “effective” teachers -- with effectiveness defined in terms of the ability to help raise student test scores, or value-added -- although the magnitude of these discrepancies varies by study. There are also some compelling theories as to the possible mechanisms behind these (often modest) discrepancies, most notably the fact that schools in low-income neighborhoods tend to have fewer resources, as well as more trouble recruiting and retaining highly qualified, experienced teachers.

    The Mathematica Policy Research organization recently released a very large, very important study that addresses these issues directly. It focuses on shedding additional light on the magnitude of any measurable differences in access to effective teaching among students of different incomes (the “Effective Teaching Gap”), as well as the way in which hiring, mobility, and retention might contribute to these gaps. The analysis uses data on teachers in grades 4-8 or 6-8 (depending on data availability) over five years (2008-09 to 2012-13) in 26 districts across the nation.

  • When Our Teachers Learn, Our Students Learn

    Written on November 1, 2016

    Our guest authors today are Mark D. Benigni, Ed.D., Superintendent of the Meriden Public Schools in Connecticut and co-chairperson of the Connecticut Association of Urban Superintendents, and Erin Benham, President of the Meriden Federation of Teachers and a member of the Connecticut State Department of Education Board of Directors. The authors seek to understand how teacher learning improves student learning outcomes.

    Our students’ success and ability to graduate college and career ready from our public schools must be society's primary educational objective. The challenge lies in how we create neighborhood public schools where student learning and teacher learning are valued and supported. How do we ensure our students' and staff's satisfaction and growth? And, in essence, how do we create schools where students and staff want to be?

    Around the country, some districts are opting for market-based reforms such as privately supported charter schools or online school options. In Meriden we took a different approach and decided to use collaboration as a springboard for innovation and improvement. The school district and teachers' union have been strong partners for almost seven years. Such trust and partnership have made possible the reforms described in the rest of this post.

    Collaboration facilitated development of a weekly early-release day for Professional Learning Communities to meet. During this time, teachers review individual student academic data with their data teams. However, the paucity of non-academic information about students emerged as an important area for improvement. We launched a three-phased approach to address climate and culture in our schools. Our climate suite includes: a School Climate Survey completed by students, staff, and families; a Getting to Know You Survey completed by students in the spring, with results shared in the fall with receiving teachers; and an MPS Cares online portal for students to request assistance and support.

  • Social And Emotional Skills In School: Pivoting From Accountability To Development

    Written on October 25, 2016

    Our guest authors today are David Blazar and Matthew A. Kraft. Blazar is a Lecturer on Education and Postdoctoral Research Fellow at Harvard Graduate School of Education and Kraft is an Assistant Professor of Education and Economics at Brown University.

    With the passage of the Every Student Succeeds Act (ESSA) in December 2015, Congress required that states select a nonacademic indicator with which to assess students’ success in school and, in turn, hold schools accountable. We believe that broadening what it means to be a successful student and school is good policy. Students learn and grow in multifaceted ways, only some of which are captured by standardized achievement tests. Measures such as students’ effort, initiative, and behavior also are key indicators for their long-term success (see here). Thus, by gathering data on students’ progress on a range of measures, both academic and what we refer to as “social and emotional” development, teachers and school leaders may be better equipped to help students improve in these areas.

    In the months following the passage of ESSA, questions about the use of social and emotional skills in accountability systems have dominated the debate. What measures should districts use? Is it appropriate to use these measures in high-stakes settings if they are susceptible to potential biases and can be easily coached or manipulated? Many others have written about this important topic before us (see, for example, here, here, here, and here). Like some of them, we agree that including measures of students’ social and emotional development in accountability systems, even with very small associated weights, could serve as a strong signal that schools and educators should value and attend to developing these skills in the classroom. We also recognize concerns about the use of measures that really were developed for research purposes rather than for large-scale, high-stakes testing with repeated administrations.

  • A Few Reactions To The Final Teacher Preparation Accountability Regulations

    Written on October 19, 2016

    The U.S. Department of Education (USED) has just released the long-anticipated final regulations for teacher preparation (TP) program accountability. These regulations will guide states, which are required to design their own systems for assessing TP program performance, with full implementation in 2018-19. The earliest year in which stakes (namely, eligibility for federal grants) will be attached to the ratings is 2021-22.

    Among the provisions receiving attention is the softening of the requirement regarding the use of test-based productivity measures, such as value-added and other growth models (see Goldhaber et al. 2013; Mihaly et al. 2013; Koedel et al. 2015). Specifically, the final regulations allow greater “flexibility” in how and how much these indicators must count toward final ratings. For the reasons that Cory Koedel and I laid out in this piece (and I will not reiterate here), this is a wise decision. Although it is possible that value-added estimates will eventually play a significant role in these TP program accountability systems, the USED timeline provides insufficient time for the requisite empirical groundwork.

    Yet this does not resolve the issues facing those who must design these systems, since putting partial brakes on value-added for TP programs also puts increased focus on the other measures which might be used to gauge program performance. And, as is often the case with formal accountability systems, the non-test-based bench is not particularly deep.

  • The Details Matter In Teacher Evaluations

    Written on September 22, 2016

    Throughout the process of reforming teacher evaluation systems over the past 5-10 years, perhaps the most contentious and widely discussed issue was the importance, or weights, assigned to different components. Specifically, there was a great deal of debate about the proper weight to assign to test-based teacher productivity measures, such as estimates from value-added and other growth models.

    Some commentators, particularly those more enthusiastic about test-based accountability, argued that the new teacher evaluations somehow were not meaningful unless value-added or growth model estimates constituted a substantial proportion of teachers’ final evaluation ratings. Skeptics of test-based accountability, on the other hand, tended toward a rather different viewpoint – that test-based teacher performance measures should play little or no role in the new evaluation systems. Moreover, virtually all of the discussion of these systems’ results, once they were finally implemented, focused on the distribution of final ratings, particularly the proportions of teachers rated “ineffective.”

    A recent working paper by Matthew Steinberg and Matthew Kraft directly addresses and informs this debate. Their very straightforward analysis shows just how consequential these weighting decisions are for the distribution of final ratings, as are the choices of where to set the cutpoints for the final rating categories (e.g., how many points a teacher needs to be given an “effective” versus “ineffective” rating).

  • Teachers' Opinions Of Teacher Evaluation Systems

    Written on June 17, 2016

    The primary test of the new teacher evaluation systems implemented throughout the nation over the past 5-10 years is whether they improve teacher and ultimately student performance. Although the kinds of policy evaluations that will address these critical questions are just beginning to surface (e.g., Dee and Wyckoff 2015), among the most important early indicators of how well the new systems are working is their credibility among educators. Put simply, if teachers and administrators don’t believe in the systems, they are unlikely to respond productively to them.

    A new report from the Institute of Education Sciences (IES) provides a useful little snapshot of teachers’ opinions of their evaluation systems using a nationally representative survey. It is important to bear in mind that the data are from the 2011-12 Schools and Staffing Survey (SASS) and the 2012-13 Teacher Follow Up Survey, a time in which most of the new evaluations in force today were either still on the drawing board, or in their first year or two of implementation. But the results reported by IES might still serve as a useful baseline going forward.

    The primary outcome in this particular analysis is a survey item querying whether teachers were “satisfied” with their evaluation process. And almost four in five respondents either strongly or somewhat agreed that they were satisfied with their evaluation. Of course, satisfaction with an evaluation system does not necessarily signal anything about its potential to improve or capture teacher performance, but it certainly tells us something about teachers’ overall views of how they are evaluated.



DISCLAIMER

This web site and the information contained herein are provided as a service to those who are interested in the work of the Albert Shanker Institute (ASI). ASI makes no warranties, either express or implied, concerning the information contained on or linked from shankerblog.org. The visitor uses the information provided herein at his/her own risk. ASI, its officers, board members, agents, and employees specifically disclaim any and all liability from damages which may result from the utilization of the information provided herein. The content in the Shanker Blog may not necessarily reflect the views or official policy positions of ASI or any related entity or organization.