• In Census Finance Data, Most Charters Are Not Quite Public Schools

    Last month, the U.S. Census Bureau released its annual public K-12 school finance report (and accompanying datasets). The data, which are for FY 2009 (there’s always a lag in finance data), show that spending increased roughly two percent from the previous year. This represents much slower growth than usual.

    These data are a valuable resource that has rightfully gotten a lot of attention. But there’s a serious problem within them, which, while slightly technical, hasn’t received any attention at all: The vast majority of public charter schools are not included in the data.

    To gather its data, the Census Bureau relies on reporting from “government entities.” Some charter schools fit this description neatly, such as those operated by governments or government-affiliated bodies, including states, districts, counties, and public universities. But most charter schools are operated by private organizations (mostly non-profits), and finance figures for these schools are not included in the report (the Census classifies them as “private charter schools”).

    What does this mean? Well, for one thing, it means that the overall spending figures (total dollar amounts) are a bit understated. Charters account for only a relatively small proportion of all public school enrollments (around 5-6 percent); still, given the huge amounts of money we’re dealing with here (the U.S. spends roughly $600 billion a year), we’re talking about quite a bit in absolute terms. Perhaps more important is the potential effect on per-pupil spending figures – the way that education financing is usually expressed.
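
    To illustrate the arithmetic, here is a rough back-of-the-envelope sketch using round, hypothetical numbers loosely based on the figures above (roughly $600 billion in total spending and a 5-6 percent charter enrollment share); the charter per-pupil amount is an assumption for illustration only, not an actual Census figure. Depending on whether charter students are counted in the denominator, the reported per-pupil figure can land noticeably above or below the all-inclusive average.

    ```python
    # Hypothetical round numbers for illustration only -- not actual Census figures.
    total_spending = 600e9          # all public K-12 spending, charters included
    total_enrollment = 49e6         # roughly all public school students
    charter_enrollment = total_enrollment * 0.055   # ~5-6 percent in charters
    charter_per_pupil = 9_000       # assumed charter per-pupil spending (hypothetical)

    # If spending by privately run charters is left out of the reported totals:
    reported_spending = total_spending - charter_enrollment * charter_per_pupil

    print(reported_spending / total_enrollment)                         # charter students kept in denominator
    print(reported_spending / (total_enrollment - charter_enrollment))  # charter students excluded
    print(total_spending / total_enrollment)                            # all-inclusive average
    ```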

  • Another Tiananmen Anniversary: Will There Be A Reckoning?

    This Saturday, June 4, 2011, marks the 22nd anniversary of the 1989 Tiananmen Square massacre, in which thousands of pro-democracy activists were killed, injured, or imprisoned by Chinese authorities. This year’s Tiananmen anniversary comes at a time of greatly increased political repression in China. According to the Congressional-Executive Commission on China (CECC), “Chinese authorities have launched a broad crackdown against rights defenders, reform advocates, lawyers, petitioners, writers, artists, and Internet bloggers in what international observers have described as one of the harshest crackdowns in years.”

    Over the last several months, activist groups such as Chinese Human Rights Defenders (CHRD) have repeatedly tried to draw attention to this harsh renewal of repression in China. In an article entitled “Missing before Action” in the March issue of Foreign Policy magazine, a CHRD writer noted that hundreds of Chinese human rights activists, lawyers, and pro-democracy dissidents from across the country have been affected by the crackdown. Police have used “violence, arbitrary detention, ‘disappearances,’ and other forms of harassment and intimidation” to put a damper on any nascent protest movement. Other dissidents -- or non-dissident citizens walking the streets -- have been picked up for questioning.

    Although authorities began tightening the political screws in the period leading up to the 2008 Beijing Olympics, it appears that the recent democratic uprisings in the Middle East have given added impetus to this policy.

  • When It Comes To How We Use Evidence, Is Education Reform The New Welfare Reform?

    ** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post

    In the mid-1990s, after a long and contentious debate, the U.S. Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, which President Clinton signed into law. It is usually called the “Welfare Reform Act,” as it effectively ended the Aid to Families with Dependent Children (AFDC) program (which is what most people mean when they say “welfare,” even though it was [and its successor is] only a tiny part of our welfare state). Established during the New Deal, AFDC was mostly designed to give assistance to needy young children (it was later expanded to include support for their parents/caretakers as well).

    In place of AFDC was a new program – Temporary Assistance for Needy Families (TANF). TANF gave block grants to states, which were directed to design their own “welfare” programs. Although the states were given considerable leeway, their new programs were to have two basic features: first, for welfare recipients to receive benefits, they had to be working; and second, there was to be a time limit on benefits, usually 3-5 years over a lifetime, after which individuals were no longer eligible for cash assistance (states could exempt a proportion of their caseload from these requirements). The general idea was that time limits and work requirements would “break the cycle of poverty”; recipients would be motivated (read: forced) to work, and in doing so, would acquire the experience and confidence necessary for a bootstrap-esque transformation.

    There are several similarities between the bipartisan welfare reform movement of the 1990s and the general thrust of the education reform movement happening today. For example, there is the reliance on market-based mechanisms to “cure” longstanding problems, and the unusually strong liberal-conservative alliance of the proponents. Nevertheless, while calling education reform “the new welfare reform” might be a good sound bite, it would also take the analogy way too far.

    My intention here is not to draw a direct parallel between the two movements in terms of how they approach their respective problems (poverty/unemployment and student achievement), but rather in terms of how we evaluate their success in doing so. In other words, I am concerned that our public debate will assess the success or failure of education reform using the same flawed and misguided methods that many used for welfare reform.

  • The Ethics of Testing Children Solely To Evaluate Adults

    The recent New York Times article, “Tests for Pupils, but the Grades Go to Teachers,” alerts us to an emerging paradox in education – the development and use of standardized student testing solely as a means to evaluate teachers, not students. “We are not focusing on teaching and learning anymore; we are focusing on collecting data,” says one mother quoted in the article. Now, let’s see: collecting data on minors that is not explicitly for their benefit – does this ring a bell?

    In the world of social/behavioral science research, such an enterprise – collecting data on people, especially on minors – would inevitably require approval from an institutional review board (IRB). For those not familiar, an IRB is a committee that oversees research involving people and is responsible for ensuring that studies are designed in an ethical manner. Even to conduct a seemingly harmless interview on political attitudes or to observe a group studying in a public library, the researcher would almost certainly be required to go through a series of steps to safeguard participants and ensure that the norms governing ethical research are observed.

    Very succinctly, IRBs’ mission is to see that (1) the risk-benefit ratio of conducting the research is favorable; (2) any suffering or distress that participants may experience during or after the study is understood, minimized, and addressed; and (3) research participants agree to participate freely and knowingly – usually, subjects are asked to sign an informed consent form, which includes a description of the study’s risks and benefits, a discussion of how confidentiality will be guaranteed, a statement on the voluntary nature of involvement, and a clarification that refusal or withdrawal at any time will involve no penalty or loss of benefits. When the research involves minors, parental consent and sometimes child assent are needed.

    In short, IRB procedures exist to protect people. To my knowledge, student evaluation procedures and standardized testing are exempt from this sort of scrutiny. So the real question is: Should they be? Perhaps not.

  • Value-Added In Teacher Evaluations: Built To Fail

    With all the controversy and acrimonious debate surrounding the use of value-added models in teacher evaluation, few seem to be paying much attention to the implementation details in those states and districts that are already moving ahead. This is unfortunate, because most new evaluation systems that use value-added estimates are literally being designed to fail.

    Much of the criticism of value-added (VA) focuses on systematic bias, such as that stemming from non-random classroom assignment (also here). But the truth is that most of the imprecision of value-added estimates stems from random error. Months ago, I lamented the fact that most states and districts incorporating value-added estimates into their teacher evaluations were not making any effort to account for this error. Everyone knows that there is a great deal of imprecision in value-added ratings, but few policymakers seem to realize that there are relatively easy ways to mitigate the problem.
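
    One relatively simple approach – offered here as a hedged sketch of the general idea, not a description of any particular state’s system – is to attach a confidence interval to each value-added estimate and treat teachers whose intervals overlap the average as statistically indistinguishable from it, rather than assigning a high or low rating based on a noisy point estimate alone.

    ```python
    # Sketch: rate teachers on value-added only when the estimate is statistically
    # distinguishable from the average (zero). Estimates and standard errors below
    # are hypothetical, in student-level standard deviation units.

    def classify(estimate, std_error, z=1.96):
        """Return a category, treating a 95% interval that includes zero as 'average'."""
        lower, upper = estimate - z * std_error, estimate + z * std_error
        if lower > 0:
            return "above average"
        if upper < 0:
            return "below average"
        return "average"  # too imprecise to distinguish from the mean

    teachers = {"A": (0.15, 0.10), "B": (0.05, 0.12), "C": (-0.30, 0.11)}
    for name, (est, se) in teachers.items():
        print(name, classify(est, se))   # A: average, B: average, C: below average
    ```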

    This is the height of foolishness. Policy is details. The manner in which one uses value-added estimates is at least as important as – perhaps even more important than – the properties of the models themselves. By ignoring error when incorporating these estimates into evaluation systems, policymakers virtually guarantee that most teachers will receive incorrect ratings. Let me explain.

  • As Membership Has Declined, Have Attitudes Toward Unions Changed Too?

    The sharp decline in U.S. union membership over the past 30-40 years is well known, but does it reflect a change in attitudes towards organized labor? In other words, is decreasing union membership accompanied by decreasing support for labor?

    Of course, if attitudes have in fact changed, they might be both exogenous (membership declines because support decreases, leading to fewer unionization drives and less political support) and endogenous (support decreases because membership declines, as fewer people are exposed to unions and to the benefits of membership) to unionization levels. And, to some degree, attitudes and membership likely change independently of each other.

    In any case, it’s worth taking a look at how attitudes towards labor have changed over the past few decades. In the graph below, I present simple trend data from the General Social Survey (GSS), which has been administered annually or biennially since 1972. In each wave, the GSS asks about respondents’ confidence in a number of major societal institutions, including organized labor. Granted, there is a difference between having confidence in unions and supporting them per se, but I think it’s safe to assume that the former is a decent indicator of the latter.
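
    For readers who want to reproduce a trend like this, here is a minimal sketch. It assumes a GSS extract saved as a CSV with YEAR and CONLABOR (confidence in organized labor) columns, with CONLABOR coded 1 = a great deal, 2 = only some, 3 = hardly any; the file name and exact coding are assumptions to check against your own extract.

    ```python
    # Sketch: share of GSS respondents expressing "a great deal" of confidence in
    # organized labor, by survey year. File name and variable coding are assumed.
    import pandas as pd

    gss = pd.read_csv("gss_extract.csv")              # hypothetical extract
    gss = gss[gss["CONLABOR"].isin([1, 2, 3])]        # drop missing/refused codes

    trend = (gss.assign(great_deal=gss["CONLABOR"].eq(1))
                .groupby("YEAR")["great_deal"]
                .mean())

    print(trend)          # proportion per survey year; trend.plot() to graph it
    ```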

  • The High Cost Of Caring

    The field of early childhood education (ECE) is riddled with contradictions. Bluntly, when those we love the most—our children—are at the most consequential stage of their cognitive, social, and emotional development, we leave them in the hands of the people we pay the least. According to the latest data from the U.S. Bureau of Labor Statistics, for example, childcare workers earn about 4 percent less than animal caretakers—$20,940 and $21,830 per year, respectively.

    I am far from the first to make this embarrassing comparison; more than a decade ago, Marcy Whitebook provided an extensive overview. Unfortunately, the comparisons still hold.

    Over the intervening years, there have been many determined efforts to regulate and improve the working conditions of early childhood educators, including raising the qualifications and wages for the profession. Indeed, the demand for worthy salaries is often discussed in combination with workforce development efforts. In other words, we want early childhood workers to be both better trained and better paid. While this may seem to be a perfectly reasonable approach, it suggests that the low wages are a result of inadequate qualifications. Perhaps. But I believe that this obscures another important explanation for these workers’ persistently meager pay.

  • A Response To Joel Klein

    Our guest author today is Edith (Eadie) Shanker, Albert Shanker’s widow and a retired New York City teacher.

    A few months ago, in the Wall Street Journal (WSJ), Joel Klein invoked Al Shanker’s name as an educator in support of today’s charter school “reform” efforts. Klein wanted the public to believe that Al was the originator of the charter school concept (he wasn’t) and that he would today be supportive of the charter school “reform” ideology now being spread around New York City and the country as a panacea for low student achievement. Conveniently, Klein did not indicate that Al denounced the idea of charters when it became clear that the concept had changed and was being hijacked by corporate and business interests. In Al’s view, such hijacking would result in the privatization of public education and, ultimately, its destruction – all without improving student outcomes.

    Now, in his recent Atlantic magazine article, Klein trots out a quotation attributed to Al (said in jest if at all) to support the stereotype that, as a union leader, Al cared only about “protecting” the union’s members, including “bad” teachers. Using this alleged quotation – “when school children start paying union dues, that’s when I’ll start representing the interests of children” – Klein plays fast and loose with Al’s reputation not only as a union leader but also as a sterling educator. (To be a true expert on Al’s views on how to improve education for children – and how to be a union leader – Klein could check out 27 years’ worth of his “Where We Stand” columns in the New York Times.)

  • What Do Teachers Really Think About Education Reform?

    There has recently been a lot of talk about teachers’ views on education policy. Many teachers have been quite vocal in their opposition to certain policies (also here) and many more have expressed their views democratically – through their unions – especially in states where teachers have collective bargaining rights.

    We should listen carefully to these views, but it’s also important to bear in mind that there are millions of public school teachers out there, with a wide variety of opinions on any particular education policy, and not all of their voices might be getting through.

    So, the question remains: How do most teachers feel about the current wave of education policy reforms spreading throughout states and districts, including (but not at all limited to) merit pay, eliminating tenure, and incorporating test-based measures into teacher evaluations?

    The obvious way to learn more about teachers’ views on these policies is, of course, a survey. Unfortunately, useful national surveys are quite rare. In order to get accurate estimates, you need an unusually large number of teachers to take the survey (a deliberate “oversample”), and they must be randomly sampled (lest there be selection bias). In my last post, I suggested that states/districts conduct their own teacher surveys. In the meantime, some national evidence is already available, and if the data make one thing clear, it’s that we need more. When it comes to supporting or opposing different policies, teachers’ opinions, like everyone’s, depend a great deal on the details.
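
    To see why sample size matters so much, here is a quick, hedged illustration of the standard margin-of-error formula for a simple random sample answering a yes/no question; real teacher surveys involve clustering and weighting that would widen these intervals, so treat the numbers as a best case.

    ```python
    # Approximate 95% margin of error for a sample proportion (worst case, p = 0.5),
    # ignoring design effects from clustering or weighting.
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 400, 1600, 6400):
        print(f"n = {n:5d}: +/- {margin_of_error(n):.1%}")
    # Roughly +/- 9.8%, 4.9%, 2.5%, 1.2% -- quadrupling the sample halves the error.
    ```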

  • To Understand The Impact Of Teacher-Focused Reforms, Pay Attention To Teachers

    You don’t need to be a policy analyst to know that huge changes in education are happening at the state and local levels right now – teacher performance pay, the restriction of teachers’ collective bargaining rights, the incorporation of heavily-weighted growth model estimates in teacher evaluations, the elimination of tenure, etc. Like many, I am concerned about the possible consequences of some of these new policies (particularly about their details), as well as about the apparent lack of serious efforts to monitor them.

    Our “traditional” gauge of “what works” – cross-sectional test score gains – is totally inadequate, even under ideal circumstances. Even assuming high-quality tests that are closely aligned with what has been taught, raw test scores alone cannot account for changes in the student population over time and are subject to measurement error. There is also no way to know whether fluctuations in test scores (even fluctuations that are real) are the result of any particular policy (or lack thereof).

    Needless to say, test scores can (and will) play some role, but I for one would like to see more states and districts commissioning reputable, independent researchers to perform thorough, longitudinal analyses of their assessment data (which would at least mitigate the measurement issues). Even so, there is really no way to know how these new, high-stakes test-based policies will influence the validity of testing data, and, as I have argued elsewhere, we should not expect large, immediate testing gains even if policies are working well. If we rely on these data as our only yardstick of how various policies are working, we will be getting a picture that is critically incomplete and potentially biased.

    What are the options? Well, we can’t solve all the measurement and causality issues mentioned above, but insofar as the policy changes are focused on teacher quality, it makes sense to evaluate them in part by looking at teacher behavior and characteristics, particularly in those states with new legislation. Here are a few suggestions.