Education Research

  • Subgroup-Specific Accountability, Teacher Job Assignments, And Teacher Attrition: Lessons For States

    Written on April 5, 2017

    Our guest author today is Matthew Shirrell, assistant professor of educational leadership and administration in the Graduate School of Education and Human Development at the George Washington University.

    Racial/ethnic gaps in student achievement persist, despite a wide variety of interventions designed to address them (see Reardon, Robinson-Cimpian, & Weathers, 2015). The No Child Left Behind Act of 2001 (NCLB) took a novel approach to closing these achievement gaps, requiring that schools make yearly improvements not only in overall student achievement, but also in the achievement of students of various subgroups, including racial/ethnic minority subgroups and students from economically disadvantaged families.

    Evidence is mixed on whether NCLB’s “subgroup-specific accountability” accomplished its goal of narrowing racial/ethnic and other achievement gaps. Research on the impacts of the policy, however, has largely neglected the effects of this policy on teachers. Understanding any effects on teachers is important to gaining a more complete picture of the policy’s overall impact; if the policy increased student achievement but resulted in the turnover or attrition of large numbers of teachers, for example, these benefits and costs should be weighed together when assessing the policy’s overall effects.

    In a study just published online in Education Finance and Policy (and supported by funding from the Albert Shanker Institute), I explore the effects of NCLB’s subgroup-specific accountability on teachers. Specifically, I examine whether teaching in a school that was held accountable for a particular subgroup’s performance in the first year of NCLB affected teachers’ job assignments, turnover, and attrition.

    READ MORE
  • Teacher Evaluations And Turnover In Houston

    Written on March 30, 2017

    We are now entering a time period in which we might start to see a lot of studies released about the impact of new teacher evaluations. This incredibly rapid policy shift, perhaps the centerpiece of the Obama Administration’s education efforts, was sold based on illustrations of the importance of teacher quality.

    The basic argument was that teacher effectiveness is perhaps the most important factor under schools’ control, and the best way to improve that effectiveness was to identify and remove ineffective teachers via new teacher evaluations. Without question, there was a logic to this approach, but dismissing or compelling the exits of low performing teachers does not occur in a vacuum. Even if a given policy causes more low performers to exit, the effects of this shift can be attenuated by turnover among higher performers, not to mention other important factors, such as the quality of applicants (Adnot et al. 2016).

    A new NBER working paper by Julie Berry Cullen, Cory Koedel, and Eric Parsons, addresses this dynamic directly by looking at the impact on turnover of a new evaluation system in Houston, Texas. It is an important piece of early evidence on one new evaluation system, but the results also speak more broadly to how these systems work.

    READ MORE
  • New Teacher Evaluations And Teacher Job Satisfaction

    Written on February 15, 2017

    Job satisfaction among teachers is a perennially popular topic of conversation in education policy circles. There is good reason for this. For example, whether or not teachers are satisfied with their work has been linked to their likelihood of changing schools or professions (e.g., Ingersoll 2001).

    Yet much of the discussion of teacher satisfaction consists of advocates’ speculation that their policy preferences will make for a more rewarding profession, whereas opponents’ policies are sure to disillusion masses of educators. This was certainly true of the debate surrounding the rapid wave of teacher evaluation reform over the past ten or so years.

    A paper just published in the American Educational Research Journal directly addresses the impact of new evaluation systems on teacher job satisfaction. It is among the first analyses to examine the impact of these systems, and the first to look at their effect on teachers’ attitudes.

    READ MORE
  • Our Request For Simple Data From The District Of Columbia

    Written on December 2, 2016

    For our 2015 report, “The State of Teacher Diversity in American Education,” we requested data on teacher race and ethnicity between roughly 2000 and 2012 from nine of the largest school districts in the nation: Boston; Chicago; Cleveland; District of Columbia; Los Angeles; New Orleans; New York; Philadelphia; and San Francisco.

    Only one of these districts failed to provide us with data that we could use to conduct our analysis: the District of Columbia.

    To be clear, the data we requested are public record. Most of the eight other districts to which we submitted requests complied in a timely fashion. A couple of them took months to fill the request and required a little follow-up. But all of them gave us what we needed. We were actually able to get charter school data for virtually all of these eight cities (usually through the state).

    Even New Orleans, which, during the years for which we requested data, was destroyed by a hurricane and underwent a comprehensive restructuring of its entire school system, provided the data.

    But not DC.

    READ MORE
  • New Evidence On Teaching Quality And The Achievement Gap

    Written on November 17, 2016

    It is an extensively documented fact that low-income students score more poorly on standardized tests than do their higher income peers. This so-called “achievement gap” has persisted for generations and is still one of the most significant challenges confronting the American educational system.

    Some people tend to overstate -- while others tend to understate -- the degree to which this gap is attributable to differences in teacher (and school) effectiveness between lower and higher income students (with income usually defined in terms of students’ eligibility for subsidized lunch assistance). As discussed below, the evidence thus far suggests that lower income students are more likely than higher income students to have less “effective” teachers -- with effectiveness defined in terms of the ability to help raise student test scores, or value-added -- although the magnitude of these discrepancies varies by study. There are also some compelling theories as to the possible mechanisms behind these (often modest) discrepancies, most notably the fact that schools in low-income neighborhoods tend to have fewer resources, as well as more trouble recruiting and retaining highly qualified, experienced teachers.

    The Mathematica Policy Research organization recently released a very large, very important study that addresses these issues directly. It focuses on shedding additional light on the magnitude of any measurable differences in access to effective teaching among students of different incomes (the “Effective Teaching Gap”), as well as the way in which hiring, mobility, and retention might contribute to these gaps. The analysis uses data on teachers in grades 4-8 or 6-8 (depending on data availability) over five years (2008-09 to 2012-13) in 26 districts across the nation.

    READ MORE
  • The Details Matter In Teacher Evaluations

    Written on September 22, 2016

    Throughout the process of reforming teacher evaluation systems over the past 5-10 years, perhaps the most contentious and widely discussed issue was the importance, or weights, assigned to different components. Specifically, there was a great deal of debate about the proper weight to assign to test-based teacher productivity measures, such as estimates from value-added and other growth models.

    Some commentators, particularly those more enthusiastic about test-based accountability, argued that the new teacher evaluations somehow were not meaningful unless value-added or growth model estimates constituted a substantial proportion of teachers’ final evaluation ratings. Skeptics of test-based accountability, on the other hand, tended toward a rather different viewpoint – that test-based teacher performance measures should play little or no role in the new evaluation systems. Moreover, virtually all of the discussion of these systems’ results, once they were finally implemented, focused on the distribution of final ratings, particularly the proportions of teachers rated “ineffective.”

    A recent working paper by Matthew Steinberg and Matthew Kraft directly addresses and informs this debate. Their very straightforward analysis shows just how consequential these weighting decisions, as well as choices of where to set the cutpoints for final rating categories (e.g., how many points does a teacher need to be given an “effective” versus “ineffective” rating), are for the distribution of final ratings.

    READ MORE
  • An Alternative Income Measure Using Administrative Education Data

    Written on September 16, 2016

    The relationship between family background and educational outcomes is well documented and the topic, rightfully, of endless debate and discussion. A student’s background is most often measured in terms of family income (even though it is actually the factors associated with income, such as health, early childhood education, etc., that are the direct causal agents).

    Most education analyses rely on a single income/poverty indicator – i.e., whether or not students are eligible for federally-subsidized lunch (free/reduced-price lunch, or FRL). For instance, income-based achievement gaps are calculated by comparing test scores between students who are eligible for FRL and those who are not, while multivariate models almost always use FRL eligibility as a control variable. Similarly, schools and districts with relatively high FRL eligibility rates are characterized as “high poverty.” The primary advantages of FRL status are that it is simple and collected by virtually every school district in the nation (collecting actual income would not be feasible). Yet it is also a notoriously crude and noisy indicator. In addition to the fact that FRL eligibility is often called “poverty” even though the cutoff is by design 85 percent higher than the federal poverty line, FRL rates, like proficiency rates, mask a great deal of heterogeneity. Families of two students who are FRL eligible can have quite different incomes, as could two families of students who are not eligible. As a result, FRL-based estimates such as achievement gaps might differ quite a bit from those calculated using actual family income from surveys.

    A new working paper by Michigan researchers Katherine Michelmore and Susan Dynarski presents a very clever means of obtaining a more accurate income/poverty proxy using the same administrative data that states and districts have been collecting for years.

    READ MORE
  • On Focus Groups, Elections, and Predictions

    Written on August 11, 2016

    Focus groups, a method in which small groups of subjects are questioned by researchers, are widely used in politics, marketing, and other areas. In education policy, focus groups, particularly those composed of teachers or administrators, are often used to design or shape policy. And, of course, during national election cycles, they are particularly widespread, and there are even television networks that broadcast focus groups as a way to gauge the public’s reaction to debates or other events.

    There are good reasons for using focus groups. Analyzing surveys can provide information regarding declaratory behaviors and issues’ rankings at a given point in time, and correlations between these declarations and certain demographic and social variables of interest. Focus groups, on the other hand, can help map out the issues important to voters (which can inform survey question design), as well as investigate what reactions certain presentations (verbal or symbolic) evoke (which can, for example, help frame messages in political or informational campaigns).

    Both polling/surveys and focus groups provide insights that the other method alone could not. Neither of them, however, can answer questions about why certain patterns occur or how likely they are to occur in the future. That said, having heard some of the commentary about focus groups, and particularly having seen them being broadcast live and discussed on cable news stations, I feel strongly compelled to comment, as I do whenever data are used improperly or methodologies are misinterpreted.

    READ MORE
  • A Myth Grows In The Garden State

    Written on July 15, 2016

    New Jersey Governor Chris Christie recently announced a new "fairness funding" plan to provide every school district in his state roughly the same amount of per-pupil state funding. This would represent a huge change from the current system, in which more state funds are allocated to the districts that serve a larger proportion of economically disadvantaged students. Thus, the Christie proposal would result in an increase in state funding for middle class and affluent districts, and a substantial decrease in money for poorer districts. According to the Governor, the change would reduce the property tax burden on many districts by replacing some of their revenue with state money.

    This is a very bad idea. For one thing, NJ state funding of education is already about 7-8 percent lower than it was in 2008 (Leachman et al. 2015). And this plan would, most likely, cut revenue in the state’s poorest districts by dramatic amounts, absent an implausible increase in property tax rates. It is perfectly reasonable to have a discussion about how education money is spent and allocated, and/or about tax structure. But it is difficult to grasp how serious people could actually conceive of this particular idea. And it’s actually a perfect example of how dangerous it is when huge complicated bodies of empirical evidence are boiled down to talking points (and this happens on all “sides” of the education debate).

    Put simply, Governor Christie believes that “money doesn’t matter” in education. He and his advisors have been told that how much you spend on schools has little real impact on results. This is also a talking point that, in many respects, coincides with an ideological framework of skepticism toward government and government spending, which Christie shares.

    READ MORE
  • New Research Report: Are U.S. Schools Inefficient?

    Written on June 7, 2016

    At one point or another we’ve all heard some version of the following talking points: 1) “Spending on U.S. education has doubled or tripled over the past few decades, but performance has remained basically flat”; or 2) “The U.S. spends more on education than virtually any other nation and yet still gets worse results.” If you pay attention, you will hear one or both of these statements frequently, coming from everyone from corporate CEOs to presidential candidates.

    The purpose of both of these statements is to argue that U.S. education is inefficient (that is, it gets very little bang for the buck), and that spending more money will not help.

    Now, granted, these sorts of pseudo-empirical talking points almost always omit important nuances, yet, in some cases, they can still provide important information. But, putting aside the actual relative efficiency of U.S. schools, these particular statements about U.S. education spending and performance are so rife with oversimplification that they fail to provide much, if any, useful insight into U.S. educational efficiency or the policy that affects it. Our new report, written by Rutgers University Professor Bruce D. Baker and Rutgers Ph.D. student Mark Weber, explains why and how this is the case. Baker and Weber’s approach is first to discuss why the typical presentations of spending and outcome data, particularly those comparing nations, are wholly unsuitable for the purpose of evaluating U.S. educational efficiency vis-à-vis that of other nations. They then go on to present a more refined analysis of the data by adjusting for student characteristics, inputs such as class size, and other factors. Their conclusions will most likely be unsatisfying for all “sides” of the education debate.

    READ MORE

