What Makes Teacher Collaboration Work?

Today’s guest authors are David Sherer and Johanna Barmore. Sherer is a doctoral candidate at the Harvard Graduate School of Education, where he specializes in research on policy implementation and the social dynamics of K-12 school reform. Barmore is a former teacher and a current doctoral student at the Harvard Graduate School of Education. She studies how policy affects teachers' instructional practice, as well as how teachers learn to improve instruction, with a focus on teacher education.

You’ve probably attended meetings that were a waste of your time. Perhaps there was no agenda. Perhaps the facilitator of the meeting dominated the conversation. Perhaps people arrived late or the wrong people were in the room in the first place. Maybe the team ran in place and no one had any good ideas. Whatever the reason, it’s common for teamwork to feel ineffective. Good teamwork does not just “happen.” Organizational researchers study teams with a goal of understanding the conditions that foster effective meetings and, more broadly, effective collaboration (see here for a review).

Meetings can feel like a waste of time in schools, just as they can in other workplaces. However, compared to researchers in other fields, such as management (see, e.g., Cohen & Bailey, 1997), educational scholars have paid less attention to the conditions that foster productive collaborative work. Educational researchers and practitioners have long advocated that collaboration between teachers should be a cornerstone of efforts to improve instruction – indeed, teachers themselves often cite collaboration with colleagues as one of the key ways they learn. And yet, we know many teams flounder rather than flourish. So why are some teams more productive than others?

Evidence From A Teacher Evaluation Pilot Program In Chicago

The majority of U.S. states have adopted new teacher evaluation systems over the past 5-10 years. Although these new systems remain among the most contentious issues in education policy today, there is still only minimal evidence on their impact on student performance or other outcomes. This is largely because good research takes time.

A new article, published in the journal Education Finance and Policy, is among the handful of analyses examining the preliminary impact of teacher evaluation systems. The researchers, Matthew Steinberg and Lauren Sartain, take a look at the Excellence in Teaching Project (EITP), a pilot program carried out in Chicago Public Schools starting in the 2008-09 school year. A total of 44 elementary schools participated in EITP in the first year (cohort 1), while an additional 49 schools (cohort 2) implemented the new evaluation systems the following year (2009-10). Participating schools were randomly selected, which permits researchers to gauge the impact of the evaluations experimentally.

The results of this study are important in themselves, and they also suggest some more general points about new teacher evaluations and the growing body of evidence surrounding them.

Who Are (And Should Be) The Teaching Experts?

Our guest author today is Bryan Mascio, who taught for over ten years in New Hampshire, primarily working with students who had been unsuccessful in traditional school settings. Bryan is now a doctoral student at the Harvard Graduate School of Education, where he conducts research on the cognitive aspects of teaching, and works with schools to support teachers in improving relationships with their students.

How do we fix teaching? This question is on the minds of many reformers, researchers, politicians, and parents. Every expert has their own view of the problem, their own perspective on what success should look like, and their own solutions to offer. The plethora of op-eds, reports, articles, and memoranda can be mind-boggling. It is important to take a step back and see whether we all even consider teaching expertise to be the same thing. Just as importantly, where does that expertise reside, and where should it?

In a New York Times op-ed, “Teachers Aren’t Dumb”, Dr. Daniel Willingham explains that teachers aren’t the problem – it’s just how they are trained. As a teacher, I appreciate a respected person from outside of the profession coming to our defense, and I do agree that we need to take a hard look at teacher preparation programs.  I worry, though, that a call to focus more on the “nuts and bolts” of teaching – in contrast to the current emphasis on educational philosophy and theories of development – could create an alarming pendulum swing.

This recommendation is a common message, promoted both by academic researchers and by fast-track teacher preparation programs. It sees academics and researchers as the generators and holders of the most important expertise, and then asks them to give direction to teachers. By conflating different kinds of expertise, it inadvertently lays a path toward treating teachers as technicians rather than as true professionals.

The Role Of Teacher Diversity In Improving The Academic Performance Of Students Of Color

Last month, the Albert Shanker Institute released a report on the state of teacher diversity, which garnered a fair amount of press attention – see here, here, here, and here. (For a copy of the full report, see here.) This is the second of three posts, which are all drawn from a research review published in the report. The first post can be found here. Together, they help to explain why diversity in the teaching force—or lack thereof—should be a major concern.

It has long been argued that there is a particular social and emotional benefit to children of color, and especially those children from high-poverty neighborhoods, from knowing—and being known and recognized by—people who look like themselves who are successful and in positions of authority. But there is also a growing body of evidence to suggest that students derive concrete academic benefits from having access to demographically similar teachers.

For example, in one important study, Stanford professor Thomas Dee reanalyzed test score data from Tennessee’s Project STAR class size experiment, still one of the largest U.S. studies to employ the random assignment of students and teachers. Dee found that a one-year same-race pairing of students and teachers significantly increased the math and reading test scores of both Black and White students by roughly 3 to 4 percentile points. These effects were even stronger for poor Black students in racially segregated schools (Dee, 2004).

Recent Evidence On The New Orleans School Reforms

A new study of New Orleans (NOLA) schools since Katrina, published by the Education Research Alliance (ERA), has caused a predictable stir in education circles (the results are discussed in broad strokes in this EdNext article, while the full paper is forthcoming). The study’s authors, Doug Harris and Matthew Larsen, compare testing outcomes before and after the hurricanes that hit the Gulf Coast in 2005, in districts that were affected by those storms. The basic idea, put simply, is to compare NOLA schools to those in other storm-affected districts, in order to assess the general impact of the drastic educational change undertaken in NOLA, using the other schools/districts as a kind of control group.
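For readers who want to see that comparative logic in the simplest possible terms, here is a minimal sketch with entirely made-up numbers (this is not the study’s data, model, or code): the pre-to-post change in NOLA’s average outcomes is measured against the corresponding change in the other storm-affected districts, and the difference between those two changes serves as the estimated effect. The invented figures below are chosen to echo the roughly 15-point gap reported in the results that follow.

```python
# Minimal, hypothetical sketch of the comparative logic (made-up numbers,
# not the study's data): the estimated effect is NOLA's pre-to-post change
# minus the comparison districts' pre-to-post change.
import pandas as pd

df = pd.DataFrame({
    "district":  ["NOLA", "NOLA", "Comparison", "Comparison"],
    "period":    ["pre",  "post", "pre",        "post"],
    "avg_score": [40.0,    58.0,   45.0,         48.0],  # illustrative percentile averages
})

pivot = df.pivot(index="district", columns="period", values="avg_score")
change = pivot["post"] - pivot["pre"]           # within-group change over time
effect = change["NOLA"] - change["Comparison"]  # difference between the two changes
print(effect)  # 15.0 with these invented numbers
```

The actual analysis is, of course, far more elaborate (student-level data, controls, multiple years), but the core idea is this kind of relative comparison rather than a simple before/after look at NOLA alone.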

The results, in brief, indicate that: 1) aggregate testing results after the storms rose more quickly in NOLA vis-à-vis the comparison districts, with the difference in 2012 being equivalent to roughly 15 percentile points; and 2) there was, however, little discernible difference between the trajectories of NOLA students who returned after the storm and those of their peers in other storm-affected districts (though this latter group could only be followed for a short period, all of which occurred during these cohorts' middle school years). Harris and Larsen also address potential confounding factors, including population change and trauma, finding little or no evidence that these factors generate bias in their results.

The response to this study included the typical mix of thoughtful, measured commentary and reactionary advocacy (from both “sides”). And, at this point, so much has been said and written about the study, and about New Orleans schools in general, that I am hesitant to join the chorus (I would recommend in particular this op-ed by Doug Harris, as well as his presentation at our recent event on New Orleans).

The Story Behind The Story: Social Capital And The Vista Unified School District

Our guest author today is Devin Vodicka, superintendent of Vista Unified, a California school district serving over 22,000 students that was recently accepted into the League of Innovative Schools. Dr. Vodicka participates in numerous state and national leadership groups, including the Superintendents Technical Working Group of the U.S. Education Department.

Transforming a school district is challenging and complex work, often requiring shifts in paradigms and historical perspective while maintaining or improving performance. Here, I’d like to share how we approached change at Vista Unified School District (VUSD) and to describe the significant transformation we’ve been undergoing – driven by data, focused on relationships, and based in deep partnerships. Although Vista has been hard at work over many years, this particular chapter starts in July of 2012, when I was hired.

When I became superintendent, the district was facing numerous challenges: Declining enrollment, financial difficulties, strained labor relations, significant turnover in the management ranks, and unresolved lawsuits were all areas in need of attention. The school board charged me and my team with transforming the district, which serves large numbers of linguistically, culturally, and economically diverse students. While there is still significant room for improvement, much has changed in the past three years, generally trending in a positive direction. Below is the story of how we did it.

Recent Evidence On Teacher Experience And Productivity

The idea that teachers’ test-based productivity does not improve after their first few years in the classroom is, it is fair to say, the “conventional wisdom” among many in the education reform arena. It has been repeated endlessly, and used to advocate forcefully for recent changes in teacher personnel policies, such as those regarding compensation, transfers, and layoffs. 

Following a few other recent analyses (e.g., Harris and Sass 2011; Wiswall 2013; Ladd and Sorensen 2013), a new working paper by researchers John Papay and Matthew Kraft examines this claim about the relationship between experience and (test-based) performance. The authors compare the various approaches by which the productivity returns to experience have been estimated in the literature, and put forth a new one. The paper did receive some attention, and will hopefully have some impact on the policy debate, as well as on future work on this topic.

It might nevertheless be worthwhile to take a closer look at the “nuts and bolts” of this study, both because it is interesting (at least in my opinion) and policy relevant, and because it illustrates some important lessons about the relationship between research and policy – specifically, that what we think we know is not always as straightforward as it appears.

New School Climate Tool Facilitates Early Intervention On Social-Emotional Issues: Bullying And Suicide Prevention

Our guest author today is Dr. Alvin Larson, director of research and evaluation at Meriden Public Schools, a district that serves about 8,900 students in Meriden, CT. Dr. Larson holds a B.A. in Sociology, an M.Ed., an M.S. in Educational Research, and a Ph.D. in Educational Psychology. The intervention described below was made possible with support from Meriden's community, leadership, and education professionals.

For the most part, students' social-emotional concerns start small; if left untreated, though, they can become severe and difficult to manage. Inappropriate behaviors are not only harmful to the student who exhibits them; they can also serve to increase the social bruising of his/her peers and can be detrimental to the climate of the entire school. The problem is that many of these bruises are not directly observable – or not until they become scars. School psychologists and counselors are familiar with bruised students who act out overtly, but some research suggests that 4.3% of our students carry social-emotional scars of which counselors are unaware (Larson, AERA 2014). To develop a more preventative approach, foster pro-social attitudes and a positive school climate, we need to be able to identify and support the students with hidden bruises as well as intervene with pre-bullies early in their school careers.

Since 2011, Connecticut’s Local Education Agencies (LEAs) have been required to purchase or develop a student school climate survey. The rationale is that anti-social attitudes and a negative school climate are associated with lower academic achievement and current behavior problems, as well as with future criminal behavior (DeLisi et al. 2013; Hawkins et al. 2000) and suicidal ideation (King et al. 2001). There are hundreds of anonymous school climate surveys, but none of them was designed to provide the kind of information that we need to help individual students.

New Policy Brief: The Evidence On The Florida Education Reform Formula

The State of Florida is well known in the U.S. as a hotbed of education reform. The package of policies spearheaded by then-Governor Jeb Bush during the late 1990s and early 2000s focused, in general, on test-based accountability, competition, and choice. As a whole, these policies have come to be known as the “Florida Formula for education success,” or simply the “Florida Formula.”

The Formula has received a great deal of attention, including a coordinated campaign to advocate (in some cases, successfully) for its export to other states. The campaign and its supporters tend to employ as their evidence changes in aggregate testing results, most notably unadjusted increases in proficiency rates on Florida’s state assessment and/or cohort changes on the National Assessment of Educational Progress. This approach, for reasons discussed in the policy brief, violates basic principles of causal inference and policy evaluation. Using this method, one could provide evidence that virtually any policy or set of policies “worked” or “didn’t work,” often in the same place and time period.
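To see the general problem in the simplest terms, here is a purely hypothetical illustration (the numbers and groups are invented, and this is not an argument taken from the brief itself): an aggregate proficiency rate can rise between two cohorts even when no group of students actually improves, simply because the mix of students taking the test changes.

```python
# Hypothetical illustration: the aggregate proficiency rate rises with zero
# real improvement, purely because the composition of test-takers changes.
group_rates = {"group_a": 0.80, "group_b": 0.40}   # proficiency in both years (unchanged)
shares_year1 = {"group_a": 0.50, "group_b": 0.50}  # share of test-takers, year 1
shares_year2 = {"group_a": 0.70, "group_b": 0.30}  # share of test-takers, year 2

def overall_rate(shares):
    return sum(shares[g] * group_rates[g] for g in group_rates)

print(overall_rate(shares_year1))  # 0.60
print(overall_rate(shares_year2))  # 0.68 -- an 8-point "gain" with no real improvement
```

Unadjusted aggregate changes of this kind can just as easily mask real effects as manufacture illusory ones, which is why they cannot, by themselves, tell us whether a policy worked.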

Fortunately, we needn’t rely on these crude methods, as there is quite a bit of high-quality evidence pertaining to several key components of the Formula, and it provides a basis for tentative conclusions regarding their short- and medium-term (mostly test-based) impact. Today we published a policy brief, the purpose of which is to summarize this research in a manner that is fair and accessible to policymakers and the public.

How Effective Are Online Credit Recovery Programs?

Credit recovery programs in the U.S. have proliferated rapidly since the enactment of No Child Left Behind (NCLB), particularly in states that are home to a large number of urban schools with high dropout rates (Balfanz and Legters 2004).

Although definitions vary somewhat, credit recovery is any method by which students can earn missed credits in order to graduate on time (Watson and Gemin 2008). Online credit recovery is a common form of these programs, but others include mixed online/in-person instruction and in-person instruction (McCabe and Andrie 2012). At least three major school districts – Boston, Chicago, and New York City – offer credit recovery programs, as do several states, including Missouri and Wisconsin. Private companies such as Plato, Pearson, Apex, and Kaplan have also tried to fill this niche, charging between $175 and $1,200 per student per credit. Online credit recovery represents approximately half of all instruction in the $2 billion online education industry.

Yet, despite the rising presence of online credit recovery programs, there is scant evidence of their effectiveness in increasing high school graduation rates, or of their impact on other outcomes of interest.