Social Capital Matters As Much As Human Capital – A Message To Skeptics

In recent posts (here and here), we have been arguing that social capital -- social relations and the resources that can be accessed through them (e.g., support, knowledge) -- is an enormously important component of educational improvement. In fact, I have suggested that understanding and promoting social capital in schools may be as promising as focusing on personnel (or human capital) policies such as teacher evaluation, compensation and so on. 

My sense is that many teachers and principals support this argument, but I am also very interested in making the case to those who may disagree. I doubt very many people would disagree with the idea that relationships matter, but perhaps there are more than a few skeptics when it comes to how much they matter, and especially to whether or not social capital can be as powerful and practical a policy lever as human capital.

In other words, there are most likely those who view social capital as something that cannot really be leveraged cost-effectively through policy intervention to any significant effect, in no small part because it focuses on promoting things that already happen and/or that cannot be mandated. For example, teachers already spend time together, and cannot/should not be required to do so more often, at least not to an extent that would make a difference for student outcomes (although the same could be said of almost any policy).

Lost In Citation

The so-called Vergara trial in California, in which the state’s tenure and layoff statutes were deemed unconstitutional, already has its first “spin-off," this time in New York, where a newly formed organization, the Partnership for Educational Justice (PEJ), is among the groups spearheading the effort.

Upon first visiting PEJ’s new website, I was immediately (and predictably) drawn to the “Research” tab. It contains five statements (which, I guess, PEJ would characterize as “facts”). Each argument is presented in the most accessible form possible, typically accompanied by one citation (or two at most). I assume that the presentation of evidence in the actual trial will be a lot more thorough than that offered on this webpage, which seems geared toward the public rather than the more extensive evidentiary requirements of the courtroom (also see Bruce Baker’s comments on many of these same issues surrounding the New York situation).

That said, I thought it might be useful to review the basic arguments and evidence PEJ presents -- not really in the context of whether they will “work” in the lawsuit (a judgment I am unqualified to make), but rather because they're very common. It has also been my observation that advocates on both “sides” of the education debate tend to be fairly good at using data and research to describe problems and/or situations, yet sometimes fall a bit short when it comes to evidence-based discussions of what to do about them (including the essential task of acknowledging when the evidence is still undeveloped). PEJ’s five bullet points, discussed below, are pretty good examples of what I mean.

Do Students Learn More When Their Teachers Work Together?

** Reprinted here in the Washington Post

Debates about how to improve educational outcomes for students often involve two "camps": those who focus on the impact of "in-school factors" on student achievement, and those who focus on "out-of-school factors." Many in-school factors are discussed, but improving the quality of individual teachers (or teachers' human capital) is almost always touted as the main strategy for school improvement. Out-of-school factors are also numerous, but proponents of this view tend to emphasize addressing broad systemic problems such as poverty and inequality.

Social capital -- the idea that relationships have value, that social ties provide access to important resources like knowledge and support, and that a group's performance can often exceed the sum of its individual members' performances -- is something that rarely makes it into the conversation. But why does social capital matter?

Research suggests that teachers' social capital may be just as important to student learning as their human capital. In fact, some studies indicate that if school improvement policies addressed teachers' human and social capital simultaneously, they would go a long way toward mitigating the effects of poverty on student outcomes. Sounds good, right? The problem is: current policy does not resemble this approach. Researchers, commentators and practitioners have shown and lamented that many of the strategies leveraged to increase teachers' human capital often do so at the expense of social capital in our schools. In other words, these approaches are moving us one step forward and two steps back.

What Is A Standard Deviation?

Anyone who follows education policy debates might hear the term “standard deviation” fairly often. Most people have at least some idea of what it means, but I thought it might be useful to lay out a quick, (hopefully) clear explanation, since it’s useful for the proper interpretation of education data and research (as well as data and research in other fields).

Many outcomes or measures, such as height or blood pressure, follow what’s called a “normal distribution." Simply put, this means that such measures tend to cluster around the mean (or average), and taper off in both directions the further one moves away from the mean (due to its shape, this is often called a “bell curve”). In practice, and especially when samples are small, distributions are imperfect -- e.g., the bell is messy or a bit skewed to one side -- but in general, with many measures, there is clustering around the average.

Let’s use test scores as our example. Suppose we have a group of 1,000 students who take a test (scored 0-20). A simulated score distribution is presented in the figure below (called a "histogram").
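For readers who want to see the mechanics, here is a minimal sketch in Python of how such a simulated distribution might be generated. The mean of 10 and standard deviation of 3 are illustrative choices of mine, not necessarily the parameters behind the original figure.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulate 1,000 test scores on a 0-20 scale: draw from a normal
# distribution (assumed mean 10, SD 3), then round and clip to the range.
scores = np.clip(np.round(rng.normal(loc=10, scale=3, size=1_000)), 0, 20)

print(f"mean: {scores.mean():.2f}")
print(f"standard deviation: {scores.std():.2f}")

# Histogram: most scores cluster near the mean and taper off symmetrically.
plt.hist(scores, bins=np.arange(-0.5, 21.5, 1), edgecolor="black")
plt.xlabel("Test score (0-20)")
plt.ylabel("Number of students")
plt.title("Simulated score distribution (n = 1,000)")
plt.show()
```

Running this produces the familiar bell shape: scores pile up around 10 and thin out toward 0 and 20, which is exactly the clustering the standard deviation summarizes.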

Estimated Versus Actual Days Of Learning In Charter School Studies

One of the purely presentational aspects that separates the new “generation” of CREDO charter school analyses from the old is that the more recent reports convert estimated effect sizes from standard deviations into a “days of learning” metric. You can find similar approaches in other reports and papers as well.

I am very supportive of efforts to make interpretation easier for those who aren’t accustomed to thinking in terms of standard deviations, so I like the basic motivation behind this. I do have concerns about this particular conversion -- specifically, that it overstates things a bit -- but I don’t want to get into that issue here. If we just take CREDO’s “days of learning" conversion at face value, my primary, far simpler reaction to hearing that a given charter school sector's impact is equivalent to a given number of additional "days of learning" is to wonder: Does this charter sector actually offer additional “days of learning," in the form of longer school days and/or years?
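For a rough sense of the arithmetic behind such conversions, here is a hedged sketch in Python. It assumes the commonly cited rule of thumb that one 180-day school year of learning corresponds to roughly 0.25 standard deviations; the exact factor used in any particular CREDO report may differ.

```python
# Illustrative conversion from an effect size (in standard deviations)
# to "days of learning." The conversion factor below is an assumption
# based on a common rule of thumb, not CREDO's exact figure.
DAYS_PER_YEAR = 180
SD_PER_YEAR = 0.25  # assumed: ~0.25 SD of growth per 180-day school year

def effect_to_days(effect_sd: float) -> float:
    """Convert an effect size in SDs to equivalent days of learning."""
    return effect_sd * (DAYS_PER_YEAR / SD_PER_YEAR)

print(effect_to_days(0.01))  # ~7 days
print(effect_to_days(0.10))  # ~72 days
```

Under this assumed factor, even a seemingly small effect size translates into a headline-friendly number of "days," which is part of the metric's appeal (and, arguably, its risk).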

This matters to me because I (and many others) have long advocated moving past the charter versus regular public school “horserace” and trying to figure out why some charters seem to do very well and others do not. Additional time is one of the more compelling observable possibilities, and while the two are not perfectly comparable, extra instructional time fits nicely with the "days of learning" expression of effect sizes. Take New York City charter schools, for example.

Revisiting The Widget Effect

In 2009, The New Teacher Project (TNTP) released a report called “The Widget Effect." You would be hard-pressed to find many recent publications from an advocacy group that have had a larger influence on education policy and the debate surrounding it. To this day, the report is mentioned regularly by advocates and policy makers.

The primary argument of the report was that teacher performance “is not measured, recorded, or used to inform decision making in any meaningful way." More specifically, the report shows that most teachers received “satisfactory” or equivalent ratings, and that evaluations were not tied to most personnel decisions (e.g., compensation, layoffs, etc.). From these findings and arguments comes the catchy title – a “widget” is a fictional product commonly used in situations (e.g., economics classes) where the specific product doesn’t matter. Thus, treating teachers like widgets means treating them all as if they’re the same.

Given the influence of “The Widget Effect," as well as how different the teacher evaluation landscape is now compared to when it was released, I decided to read it closely. Having done so, I think it’s worth discussing a few points about the report.

Matching Up Teacher Value-Added Between Different Tests

The U.S. Department of Education has released a very short, readable report on the comparability of value-added estimates using two different tests in Indiana – one of them norm-referenced (the Measures of Academic Progress test, or MAP), and the other criterion-referenced (the Indiana Statewide Testing for Educational Progress Plus, or ISTEP+, which is also the state’s official test for NCLB purposes).

The research design here is straightforward: fourth and fifth grade students in 46 schools across 10 districts in Indiana took both tests, their teachers’ value-added scores were calculated on each, and the two sets of estimates were compared. Since both sets of scores were based on the same students and teachers, this allows a direct comparison of teachers’ value-added estimates between the two tests. The results are not surprising, and they square with similar prior studies (see here, here, here, for example): The estimates based on the two tests are moderately correlated. Depending on the grade/subject, the correlations are between 0.4 and 0.7. If you’re not used to interpreting correlation coefficients, consider that only around one-third of teachers were in the same quintile (fifth) on both tests, and another 40 or so percent were one quintile higher or lower. So, most teachers were within one quintile, about a quarter of teachers moved two or more quintiles, and a small percentage moved from top to bottom or vice-versa.
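If it helps intuition, a quick simulation shows how a moderate correlation translates into this kind of quintile movement. This is an illustrative sketch, not a re-analysis of the Indiana data; the correlation of 0.5 and the bivariate normal assumption are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5  # assumed correlation, within the 0.4-0.7 range reported

# Simulate paired value-added estimates for 10,000 hypothetical teachers,
# drawn from a bivariate normal distribution with correlation rho.
cov = [[1.0, rho], [rho, 1.0]]
scores = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

# Assign quintiles (0-4) on each "test" using the 20/40/60/80 percentiles.
q1 = np.digitize(scores[:, 0], np.quantile(scores[:, 0], [0.2, 0.4, 0.6, 0.8]))
q2 = np.digitize(scores[:, 1], np.quantile(scores[:, 1], [0.2, 0.4, 0.6, 0.8]))

move = np.abs(q1 - q2)
print(f"same quintile on both tests: {np.mean(move == 0):.0%}")
print(f"moved exactly one quintile:  {np.mean(move == 1):.0%}")
print(f"moved two or more quintiles: {np.mean(move >= 2):.0%}")
```

With a correlation of about 0.5, the simulated shares come out in the same ballpark as the figures above: roughly a third of teachers land in the same quintile, a similar share move one quintile, and the rest move two or more.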

Although, as mentioned above, these findings are in line with prior research, it is worth remembering why this “instability” occurs (and what can be done about it).

Opportunity To Churn: Teacher Assignments Within New York City Schools

Virtually all discussions of teacher turnover focus on teachers leaving schools and/or the profession. However, a recent working paper by Allison Atteberry, Susanna Loeb and James Wyckoff, which was presented at this month’s CALDER conference, reaches a very interesting conclusion using data from New York City: There is actually more movement within NYC schools than between them.*

Specifically, the authors show that, during the years for which they had data (1997-2002 and 2004-2010), over 50 percent of teachers in any given year exhibited some form of movement (including leaving the profession or switching schools), but two-thirds of these moves were within schools – i.e., teachers changing grades or subjects. Moreover, they find that these within-school moves, like moves between schools or out of the profession, appear to have a negative impact on testing outcomes, one that is very modest but statistically discernible in both math and reading.
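As a quick back-of-the-envelope check on what those shares imply (using the approximate figures above, not the paper's exact numbers):

```python
# Decompose the reported shares: ~50% of teachers move in a given year,
# and ~2/3 of those moves are within schools. Figures are approximations
# from the summary above.
moved = 0.50
within_share = 2 / 3

within = moved * within_share          # ~33%: changed grade/subject in-school
between = moved * (1 - within_share)   # ~17%: switched schools or left teaching

print(f"within-school moves:   ~{within:.0%} of all teachers per year")
print(f"between/exit moves:    ~{between:.0%} of all teachers per year")
```

In other words, in a typical year roughly one in three teachers changed grade or subject within their school, about twice the share who switched schools or left teaching altogether.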

There are a couple of interesting points related to these main findings.

The Year In Research On Market-Based Education Reform: 2013 Edition

In the three most discussed and controversial areas of market-based education reform – performance pay, charter schools and the use of value-added estimates in teacher evaluations – 2013 saw the release of a couple of truly landmark reports, in addition to the normal flow of strong work coming from the education research community (see our reviews from 2010, 2011 and 2012).*

In one sense, this building body of evidence is critical and even comforting, given not only the rapid expansion of charter schools, but also and especially the ongoing design and implementation of new teacher evaluations (which, in many cases, include performance-based pay incentives). In another sense, however, there is good cause for anxiety. Although one must try policies before knowing how they work, the sheer speed of policy change in the U.S. right now means that policymakers are making important decisions on the fly, and there is a great deal of uncertainty as to how this will all turn out.

Moreover, while 2013 was without question an important year for research in these three areas, it also illustrated an obvious point: Proper interpretation and application of findings is perhaps just as important as the work itself.

Being Kevin Huffman

In a post earlier this week, I noted how several state and local education leaders, advocates and especially the editorial boards of major newspapers used the recently-released NAEP results inappropriately – i.e., to argue that recent reforms in states such as Tennessee and D.C. are “working." I also discussed how this illustrates a larger phenomenon in which many people seem to expect education policies to generate immediate, measurable results in terms of aggregate student test scores, which I argued is both unrealistic and dangerous.

Mike G. from Boston, a friend whose comments I always appreciate, agrees with me, but asks a question that I think gets to the pragmatic heart of the matter. He wonders whether individuals in high-level education positions have any alternative. For instance, Mike asks, what would I suggest to Kevin Huffman, who is the head of Tennessee’s education department? Insofar as Huffman’s opponents “would use any data…to bash him if it’s trending down," would I advise him to forgo using the data in his favor when they show improvement?*

I have never held a high-level leadership position. My political experience and skills are (and I’m being charitable here) underdeveloped, and I have no doubt that many more seasoned folks in education would disagree with me. But my answer is: Yes, I would advise him to forgo using the data in this manner. Here’s why.