Education Research

  • Interpreting Effect Sizes In Education Research

    Written on March 12, 2019

    Interpreting “effect sizes” is one of the trickier checkpoints on the road between research and policy. Effect sizes, put simply, are statistics measuring the size of the association between two variables of interest, often controlling for other variables that may influence that relationship. For example, a research study may report that participating in a tutoring program was associated with a 0.10 standard deviation increase in math test scores, even after controlling for other factors, such as student poverty and grade level.

    But what does that mean, exactly? Is 0.10 standard deviations a large effect or a small effect? This is not a simple question, even for trained researchers, and answering it inevitably entails a great deal of subjective human judgment. Matthew Kraft has an excellent little working paper that pulls together some general guidelines and a proposed framework for interpreting effect sizes in education. 

    Before discussing the paper, though, we need to mention what may be one of the biggest problems with the interpretation of effect sizes in education policy debates: They are often ignored completely.
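
    To make the arithmetic concrete, here is a minimal sketch of how a simple standardized effect size (a Cohen’s d-style estimate) is computed: the difference between two group means, divided by the pooled standard deviation. The scores below are invented for illustration, not taken from any study discussed here, and real analyses would typically estimate the effect within a regression that adjusts for covariates such as poverty and grade level, which this sketch omits.

    ```python
    # Minimal sketch: a standardized effect size expresses a mean difference
    # in pooled standard deviation units. All scores below are invented for
    # illustration; they are not data from any actual study.
    import statistics
    from math import sqrt

    tutored = [265, 237, 257, 245, 253, 249]  # hypothetical scores, tutored students
    control = [262, 234, 257, 245, 253, 249]  # hypothetical scores, comparison students

    n1, n2 = len(tutored), len(control)
    var1, var2 = statistics.variance(tutored), statistics.variance(control)

    # Pooled standard deviation of the two groups
    pooled_sd = sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))

    effect_size = (statistics.mean(tutored) - statistics.mean(control)) / pooled_sd
    print(f"Estimated effect size: {effect_size:.2f} standard deviations")  # ~0.10 here
    ```

    Whether an estimate like 0.10 standard deviations counts as large is exactly the judgment call discussed above.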

    READ MORE
  • The Offline Implications Of The Research About Online Charter Schools

    Written on February 27, 2019

    It’s rare to find an educational intervention with as unambiguous a research track record as online charter schools. Now, to be clear, it’s not a large body of research by any stretch, its conclusions may change in time, and the online charter sub-sector remains relatively small and concentrated in a few states. For now, though, the results seem incredibly bad (Zimmer et al. 2009; Woodworth et al. 2015). In virtually every state where these schools have been studied, across virtually all student subgroups, and in both reading and math, the estimated impact of online charter schools on student testing performance is negative and large in magnitude.

    Predictably, and not without justification, those who oppose charter schools in general are particularly vehement when it comes to online charter schools – they should, according to many of these folks, be closed down, even outlawed. Charter school supporters, on the other hand, tend to acknowledge the negative results (to their credit) but make less drastic suggestions, such as greater oversight, including selective closure, and stricter authorizing practices.

    Regardless of your opinion on what to do about online charter schools’ poor (test-based) results, these schools are truly an interesting phenomenon for a few reasons.

    READ MORE
  • We Need To Reassess School Discipline

    Written on November 2, 2018

    It has been widely documented that, in American schools, students of color are disproportionately punished for nonviolent behaviors, and are targeted for exclusionary discipline within schools more often than their White peers. Exclusionary discipline is defined as students being removed from their learning environment, whether by in-school suspension, out-of-school suspension, or expulsion.

    In a national study, Sullivan et al. (2013) found that “Black students were more than twice as likely as White students to be suspended, whereas Hispanic and Native American students were 10 and 20 percent more likely to be suspended.” Out of all the racial minority groups, Asians had the lowest suspension rates across the board. Across all the racial groups, “males were twice as likely as female students to be suspended, and Black males had the highest rates of all subgroups.”

    One reason that students of color are at a performance disadvantage relative to their White counterparts is, put simply, that they are being removed from the classroom much more often. This is true nationally, but it seems to be a particularly pronounced issue in the Commonwealth of Virginia. The Center for Public Integrity released a 2015 study demonstrating that schools in Virginia “referred students to law enforcement agencies at a rate nearly three times the national rate” (Ferriss, 2015). According to the U.S. Department of Education, Virginia’s Black student population, which is 23 percent of all students, received 59 percent of short-term suspensions and 43 percent of expulsions (Lum, 2018).

    READ MORE
  • Weaning Educational Research Off Of Steroids

    Written on September 25, 2018

    Our guest authors today are Hunter Gehlbach and Carly D. Robinson. Gehlbach is an associate professor of education and associate dean at the University of California, Santa Barbara’s Gevirtz Graduate School of Education, as well as Director of Research at Panorama Education. Robinson is a doctoral candidate at Harvard’s Graduate School of Education.

    Few people confuse academics with elite athletes. As a species, academics are rarely noted for their blinding speed, raw power, or outrageously low resting heart rates. Nobody wants to see a calendar of scantily clad professors. Unfortunately, recent years have surfaced one commonality between these two groups—a commonality no academic will embrace. And one with huge implications for educational policymakers’ and practitioners’ professional lives.

    In the same way that a 37-year-old Barry Bonds did not really break the single-season home run record—he relied on performance-enhancing drugs—a substantial amount of educational research has undergone similar “performance enhancements” that make the results too good to be true.

    To understand the crux of the issue, we invite readers to wade into the weeds (only a little!) to see what research “on steroids” looks like and why it matters. By doing so, we hope to reveal possibilities for how educational practitioners and policymakers can collaborate with researchers to correct the problem and avoid making practice and policy decisions based on flawed research.

    READ MORE
  • The Teacher Diversity Data Landscape

    Written on September 20, 2018

    This week, the Albert Shanker Institute released a new research brief, authored by myself and Klarissa Cervantes. It summarizes what we found when we contacted all 51 state education agencies (including the District of Columbia) and asked whether data on teacher race and ethnicity were being collected, and whether and how they were made available to the public. This survey was begun in late 2017 and completed in early 2018.

    The primary reason behind this project is the growing body of research suggesting that all students, and especially students of color, benefit from a teaching force that reflects the diverse society in which they must learn to live, work and prosper. ASI’s previous work has also documented that a great many districts should turn their attention to recruiting and retaining more teachers of color (see our 2015 report). Data are a basic requirement for achieving this goal – without data, states and districts are unable to gauge the extent of their diversity problem, target support and intervention to address that problem, and monitor the effects of those efforts. Unfortunately, the federal government does not require that states collect teacher race and ethnicity data, which means the responsibility falls to individual states. Moreover, statewide data are often insufficient – teacher diversity can vary widely within and between districts. Policymakers, administrators, and the public need detailed data (at least district-by-district and preferably school-by-school), which should be collected annually and be made easily available.

    The results of our survey are generally encouraging. The vast majority of state education agencies (SEAs), 45 out of 51, report that they collect at least district-by-district data on teacher race and ethnicity (and all but two of these 45 collect school-by-school data). This is good news (and, frankly, better results than we anticipated). There are, however, areas of serious concern.

    READ MORE
  • Why Teacher Evaluation Reform Is Not A Failure

    Written on August 23, 2018

    The RAND Corporation recently released an important report on the impact of the Gates Foundation’s “Intensive Partnerships for Effective Teaching” (IPET) initiative. IPET was a very thorough and well-funded attempt to improve teaching quality in schools in three districts and four charter management organizations (CMOs). The initiative was multi-faceted, but its centerpiece was the implementation of multi-measure teacher evaluation systems and the linking of ratings from those systems to professional development and high stakes personnel decisions, including compensation, tenure, and dismissal. This policy, particularly the inclusion in teacher evaluations of test-based productivity measures (e.g., value-added scores), has been among the most controversial issues in education policy throughout the past 10 years.

    The report is extremely rich and there are a lot of interesting findings in there, so I would encourage everyone to read it for themselves (at least the executive summary), but the headline finding was that IPET had no discernible effect on student outcomes, namely test scores and graduation rates, in the districts that participated, vis-à-vis similar districts that did not. Given that IPET was so thoroughly designed and implemented, and that it was well-funded, it can potentially be viewed as a "best case scenario" test of the type of evaluation reform that most states have enacted. Accordingly, critics of these reforms, who typically focus their opposition on the high stakes use of evaluation measures, particularly value-added and other test-based measures, have portrayed the findings as vindication of their opposition.

    This reaction has merit. The most important reason is that evaluation reform was portrayed by advocates as a means to immediate and drastic improvements in student outcomes. This promise was misguided from the outset, and evaluation reform opponents are (and were) correct in pointing this out. At the same time, however, it would be wise not to dismiss evaluation reform as a whole, for several reasons, a few of which are discussed below.

    READ MORE
  • We Can't Graph Our Way Out Of The Research On Education Spending

    Written on April 17, 2018

    The graph below was recently posted by U.S. Education Department (USED) Secretary Betsy DeVos, as part of her response to the newly released scores on the 2017 National Assessment of Educational Progress (NAEP), administered every two years and often called the “nation’s report card.” It seems to show a massive increase in per-pupil education spending, along with a concurrent flat trend in scores on the fourth grade reading version of NAEP. The intended message is that spending more money won’t improve testing outcomes. Or, in the more common phrasing these days, "we can't spend our way out of this problem."

    Some of us call it “The Graph.” Versions of it have been used before. And it’s the kind of graph that doesn’t need to be discredited, because it discredits itself. So, why am I bothering to write about it? The short answer is that I might be unspeakably naïve. But we’ll get back to that in a minute.

    First, let’s very quickly run through the graph. In terms of how it presents the data, it is horrible practice. The double y-axes, with spending on the left and NAEP scores on the right, are a textbook example of what you might call motivated scaling (and that's being polite). The NAEP scores plotted range from a minimum of 213 in 2000 to a maximum of 222 in 2017, but the score axis inexplicably extends all the way up to 275. In contrast, the spending scale extends from just below the minimum observation ($6,000) to just above the maximum ($12,000). In other words, the graph is deliberately scaled to produce the desired visual effect (increasing spending, flat scores). One could very easily rescale the graph to produce the opposite impression.
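
    To make the scaling point concrete, here is a rough sketch (assuming matplotlib) of how the same two series can tell opposite visual stories depending only on the axis limits. The year-by-year values are invented approximations for illustration; only the rough ranges come from the discussion above.

    ```python
    # Illustrative sketch of "motivated scaling" with dual y-axes. The values
    # below are invented approximations, not the actual NAEP or spending data.
    import matplotlib.pyplot as plt

    years    = [2000, 2002, 2005, 2007, 2009, 2011, 2013, 2015, 2017]
    naep     = [213, 217, 218, 220, 220, 220, 221, 222, 222]              # approx. scale scores
    spending = [6200, 6900, 7800, 8600, 9300, 9900, 10500, 11200, 11800]  # approx. per-pupil $

    fig, (ax_left, ax_right) = plt.subplots(1, 2, figsize=(10, 4))

    # Left panel: the trick. The spending axis hugs its data, while the score
    # axis is stretched far above the data, so the NAEP line looks flat.
    ax_left.plot(years, spending, color="tab:blue")
    ax_left.set_ylim(6000, 12000)
    ax_left_scores = ax_left.twinx()
    ax_left_scores.plot(years, naep, color="tab:red")
    ax_left_scores.set_ylim(200, 275)
    ax_left.set_title("Score axis stretched: scores look flat")

    # Right panel: both axes scaled to their own data ranges; the same
    # scores now show a visible (if modest) increase.
    ax_right.plot(years, spending, color="tab:blue")
    ax_right.set_ylim(6000, 12000)
    ax_right_scores = ax_right.twinx()
    ax_right_scores.plot(years, naep, color="tab:red")
    ax_right_scores.set_ylim(210, 225)
    ax_right.set_title("Score axis fit to data: scores rise")

    plt.tight_layout()
    plt.show()
    ```

    Nothing about the underlying numbers changes between the two panels; only the limits on the right-hand (score) axis do.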

    READ MORE
  • What Happened To Teacher Quality?

    Written on March 15, 2018

    Starting around 2005 and up until a few years ago, education policy discourse and policymaking were dominated by the issue of improving “teacher quality.” We haven’t really heard much about it in the past couple of years, or at least not nearly as much. One of the major reasons is that the vast majority of states have enacted policies ostensibly designed to improve teacher quality.

    Thanks in no small part to the Race to the Top grant program, and the subsequent ESEA waiver program, virtually all states reformed their teacher evaluation systems, the “flagship” policy of the teacher quality push. Many of these states also tied their new evaluation results to high stakes personnel decisions, such as granting tenure, dismissals, layoffs, and compensation. Predictably, the details of these new systems vary quite a bit, both within and between states. Many advocates are unsatisfied with how the new policies were designed, and one could write a book on all the different issues. Yet it would be tough to deny that this national policy effort was among the fastest shifts in recent educational history, particularly given the controversy surrounding it.

    So, what happened to all the attention to teacher quality? It was put into practice. The evidence on its effects is already emerging, but this will take a while, and so it is still a quiet time in teacher quality land, at least compared to the previous 5-7 years. Even so, there are already many lessons out there, too many for a post. Looking back, though, one big picture lesson – and definitely not a new one – is about how the evaluation reform effort stands out (in a very competitive field) for the degree to which it was driven by the promise of immediate, large results.

    READ MORE
  • What Do Schools Fostering A Teacher “Growth Mindset” Look Like?

    Written on January 31, 2018

    Our guest authors today are Stefanie Reinhorn, Susan Moore Johnson, and Nicole Simon. Reinhorn is an independent consultant working with school systems on Instructional Rounds and school improvement. Johnson is the Jerome T. Murphy Research Professor at the Harvard Graduate School of Education. Simon is a director in the Office of K-16 Initiatives at the City University of New York. The authors are researchers at The Project on the Next Generation of Teachers at the Harvard Graduate School of Education. This piece is adapted from the authors’ chapter in Teaching in Context: The Social Side of Education Reform edited by Esther Quintero (Harvard Education Press, 2017).

    Carol Dweck’s theories about motivation and development have become mainstream in schools since her book, Mindset, was published in 2006. It is common to hear administrators, teachers, parents, and even students talk about helping young learners adopt a “growth mindset”: expecting and embracing the idea of developing knowledge and skills over time, rather than assuming individuals are born with fixed abilities. Yet school leaders and teachers scarcely talk about how to adopt a growth mindset for themselves—one that assumes that educators, not only the students they teach, can improve with support and practice. Many teachers find it hard to imagine working in a school with a professional culture designed to cultivate their development, rather than one in which their effectiveness is judged and addressed with rewards and sanctions. However, these schools do exist.

    In our research (see here, here, and here*), we selected and studied six high-performing, high-poverty urban schools so that we could understand how these schools were beating the odds. Specifically, we wondered what they did to attract and develop teachers, and how teachers experienced working there. These schools, all located in one Massachusetts city, included: one traditional district school; two district turnaround schools; two state charter schools; and one charter-sponsored restart school. Based on interviews with 142 teachers and administrators, we concluded that all six schools fostered and supported a “growth mindset” for their educators.

    READ MORE
  • The Social Side Of Capability: Improving Educational Performance By Attending To Teachers’ And School Leaders’ Interactions About Instruction

    Written on January 25, 2018

    Our guest authors today are Matthew Shirrell, James P. Spillane, Megan Hopkins, and Tracy Sweet. Shirrell is an Assistant Professor of Educational Leadership and Administration in the Graduate School of Education and Human Development at George Washington University. Spillane is the Spencer T. and Ann W. Olin Professor in Learning and Organizational Change at the School of Education and Social Policy at Northwestern University. Hopkins is Assistant Professor of Education Studies at the University of California, San Diego. Sweet is an Assistant Professor in the Measurement, Statistics and Evaluation program in the Department of Human Development and Quantitative Methodology at the University of Maryland. This piece is adapted from the authors’ chapter in Teaching in Context: The Social Side of Education Reform edited by Esther Quintero (Harvard Education Press, 2017).

    The last two decades have witnessed numerous educational reforms focused on measuring the performance of teachers and school leaders. Although these reforms have produced a number of important insights, efforts to measure teacher and school leader performance have often overlooked the fact that performance is not simply an individual matter, but also a social one. Theory and research dating back to the last century suggest that individuals use their social relationships to access resources that can improve their capability and, in turn, their performance. Scholars refer to such real or potential resources accessed through relationships as “social capital,” and research in schools has demonstrated the importance of this social capital to a variety of key school processes and outcomes, such as instructional improvement and student performance.

    We know that social relationships are the necessary building blocks of this social capital; we also know that social relationships within schools (as in other settings) don’t arise simply by chance. Over the last decade, we have studied the factors that predict social relationships both within and between schools by examining interactions about instruction among school and school system staff. As suggested by social capital theory, such interactions are important because they facilitate access to social resources such as advice and information. Thus, understanding the predictors of these interactions can help us determine what it might take to build social capital in our schools and school systems. In this post, we briefly highlight two major insights from our work; for more details, see our chapter in Teaching in Context.

    READ MORE
