• Higher Education: Soaring Rhetoric, Skyrocketing Costs

    Over the past several years, the mantra of “college for all” has become ubiquitous, with Americans told that a college education is no longer a luxury, but a necessity, for anyone who aspires to a middle-class life in the 21st century economy. And indeed, many studies confirm that people with a post-secondary education enjoy lower unemployment rates and higher wages over time.

    Simultaneously – sometimes in the same articles – we learn that soaring tuition rates have put college out of the reach of many, if not most, families. In fact, for the past few decades, college costs have been rising faster than health care costs. In the last year or so, the news is that students who tried to borrow their way around this seemingly intractable problem have only dug themselves a deeper hole. Outstanding student loans have reached – or soon will reach – the $1 trillion mark.

    The average student graduates college with a debt burden of nearly $25,000; others, especially those with professional degrees, are buckling under debt loads in the six figures. Since student debt generally cannot be discharged in bankruptcy, even unemployed and underemployed graduates can expect to carry this debt with them for years, perhaps decades, to come. With a slow economy exacerbating the problem, it’s no surprise that the national student loan default rate for 2009 (the last year for which data are available) was 8.8 percent and rising. At for-profit schools, the rate was 15 percent.

  • The Relatively Unexplored Frontier Of Charter School Finance

    Do charter schools do more – get better results – with less? If you ask this question, you’ll probably get very strong answers, ranging from the affirmative to the negative, often depending on the person’s overall view of charter schools. The reality, however, is that we really don’t know.

    Actually, despite media coverage that often runs ahead of the limited available evidence, researchers don’t even have a good handle on how much charter schools spend, to say nothing of whether how and how much they spend leads to better outcomes. Reporting of charter financial data is incomplete, imprecise and inconsistent. It is difficult to disentangle the financial relationships between charter management organizations (CMOs) and the schools they run, as well as those between charter schools and their “host” districts.

    A new report published by the National Education Policy Center, with support from the Shanker Institute and the Great Lakes Center for Education Research and Practice, examines spending between 2008 and 2010 among charter schools run by major CMOs in three states – New York, Texas and Ohio. The results suggest that relative charter spending in these states, like test-based charter performance overall, varies widely. Perhaps more importantly, the findings make it clear that significant barriers remain to accurate spending comparisons between charter and regular public schools, which severely hinder rigorous efforts to examine the cost-effectiveness of these schools.

  • Teachers And Their Unions: A Conceptual Border Dispute

    One of the segments from “Waiting for Superman” that stuck in my head is the following statement by Newsweek reporter Jonathan Alter:

    It’s very, very important to hold two contradictory ideas in your head at the same time. Teachers are great, a national treasure. Teachers’ unions are, generally speaking, a menace and an impediment to reform.

    The distinction between teachers and their unions (as well as those of other workers) has been a matter of political and conceptual contention for a long time. On one “side,” the common viewpoint, as characterized by Alter’s slightly hyperbolic line, is “love teachers, don’t like their unions.” On the other “side,” criticism of teachers’ unions is often called “teacher bashing.”

    So, is there any distinction between teachers and teachers’ unions? Of course there is.

  • The Test-Based Evidence On New Orleans Charter Schools

    Charter schools in New Orleans (NOLA) now serve over four out of five students in the city – the largest market share of any big city in the nation. As of the 2011-12 school year, most of the city’s schools (around 80 percent), charter and regular public, are overseen by the Recovery School District (RSD), a statewide agency created in 2003 to take over low-performing schools, which assumed control of most NOLA schools in Katrina’s aftermath.

    Around three-quarters of these RSD schools (50 out of 66) are charters. The remainder of NOLA’s schools are overseen either by the Orleans Parish School Board (which is responsible for 11 charters and six regular public schools, and which retains taxing authority for all parish schools) or by the Louisiana Board of Elementary and Secondary Education (which is directly responsible for three charters, and also supervises the RSD).

    New Orleans is often held up as a model for the rapid expansion of charter schools in other urban districts, based on the argument that charter proliferation since 2005-06 has generated rapid improvements in student outcomes. There are two separate claims potentially embedded in this argument. The first is that the city’s schools perform better than they did pre-Katrina. The second is that NOLA’s charters have outperformed the city’s dwindling supply of traditional public schools since the hurricane.

    Although I tend strongly toward the viewpoint that whether charter schools “work” is far less important than why – that is, the specific policies and practices behind any effects – it might nevertheless be useful to quickly address both of the claims above, given all the attention paid to charters in New Orleans.

  • The Allure Of Teacher Quality

    Those following education know that policy focused on “teacher quality” has been by far the dominant paradigm for improving schools over the past few years. Some (but not nearly all) components of this all-hands-on-deck effort are perplexing to many teachers, and have generated quite a bit of pushback. No matter one’s opinion of this approach, however, what drives it is the tantalizing allure of variation in teacher quality.

    Fueled by the ever-increasing availability of detailed test score datasets linking teachers to students, the research literature on teachers’ test-based effectiveness has grown rapidly, in both size and sophistication. Analysis after analysis finds that, all else being equal, the variation in teachers’ estimated effects on students’ test growth – the difference between the “top” and “bottom” teachers – is very large. In any given year, some teachers’ students make huge progress, others’ very little. Even if part of this estimated variation is attributable to confounding factors, the discrepancies are still larger than those of almost any other measured “input” within the jurisdiction of education policy. The underlying assumption here is that “true” teacher quality varies to a degree that is at least somewhat comparable in magnitude to the spread of the test-based estimates.
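
    To see why the spread of estimates need not match the spread of underlying quality, here is a minimal simulation sketch in Python, using entirely hypothetical variance figures (not estimates from any real dataset): because a single year’s estimate is the “true” effect plus noise, the observed variation mechanically overstates the true variation.

    ```python
    # A minimal simulation sketch, with hypothetical numbers, of how estimation
    # error inflates the observed spread of teacher effect estimates relative
    # to the spread of "true" teacher quality.
    import numpy as np

    rng = np.random.default_rng(0)

    n_teachers = 10_000
    sd_true = 0.15   # assumed SD of "true" teacher effects (test-score SD units)
    sd_error = 0.15  # assumed SD of single-year estimation error

    true_effects = rng.normal(0.0, sd_true, n_teachers)
    estimates = true_effects + rng.normal(0.0, sd_error, n_teachers)

    print(f"SD of true effects:       {true_effects.std():.3f}")
    print(f"SD of one-year estimates: {estimates.std():.3f}")
    # The estimates' SD approaches sqrt(sd_true**2 + sd_error**2), about 0.21 here,
    # so the spread of estimates overstates the spread of true quality.
    ```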

    Perhaps that's the case, but it does not, by itself, help much. The key question is whether and how we can measure teacher performance at the individual level and, more importantly, influence the distribution – that is, raise the ceiling, the middle and/or the floor. The variation hangs out there like a drug to which we’re addicted, but haven’t really figured out how to administer. If there were some way to harness it efficiently, the potential benefits could be considerable. The focus of current education policy is in large part an effort to do anything and everything to figure this out. And, as might be expected given the scale of the task, progress has been slow.

  • Value-Added Versus Observations, Part Two: Validity

    In a previous post, I compared value-added (VA) and classroom observations in terms of reliability – the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren’t useful unless they are valid – that is, unless they’re measuring what we want them to measure.

    Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional – in a research context, validity is less about a measure itself than about the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they’re being used.

    Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.

  • Becoming A 21st Century Learner

    Think about something you have always wanted to learn or accomplish but never did, such as speaking a foreign language or playing an instrument. Now think about what stopped you. There are probably a variety of factors, but chances are they have little to do with technology.

    Electronic devices are becoming cheaper, easier to use, and more intuitive. Much of the world’s knowledge is literally at our fingertips, accessible from any networked gadget. Yet sustained learning does not always follow. It is often noted that developing digital skills/literacy is fundamental to 21st century learning, but is that all that’s missing? I suspect not. In this post, I take a look at university courses available to anyone with an internet connection (a.k.a. massive open online courses, or MOOCs) and ask: What attributes or skills make some people (but not others) better equipped to take advantage of these and similar educational opportunities brought about by advances in technology?

    In the last few months, Stanford University’s version of MOOCs has attracted considerable attention (also here and here), leading some to question the U.S. higher education model as we know it – and even to envision its demise. But what is really novel about the Stanford MOOCs? Why did 160,000 students from 190 countries sign up for the course “Introduction to Artificial Intelligence”?

  • Jobs And Freedom: Why Labor Organizing Should Be A Civil Right

    Our guest authors today are Norman Hill and Velma Murphy Hill. Norman Hill, staff coordinator of the historic 1963 March on Washington for Jobs and Freedom, is president emeritus of the A. Philip Randolph Institute. Velma Hill, a former vice president of the American Federation of Teachers (AFT), is also the former civil and human rights director for the Service Employees International Union (SEIU). They are currently working on a memoir, entitled Climbing Up the Rough Side of the Mountain.

    Richard D. Kahlenberg and Moshe Z. Marvit have done a great service by writing Why Labor Organizing Should Be a Civil Right: Rebuilding a Middle-Class Democracy by Enhancing Worker Voice, an important work with the potential to become the basis for a strong coalition on behalf of civil rights, racial equality and economic justice.

    In the United States, worker rights and civil rights have a deep and historic connection. What is slavery, after all, if not the abuse of worker rights taken to its ultimate extreme? A. Philip Randolph, the founder and president of the Brotherhood of Sleeping Car Porters, recognized this link and, as far back as the 1920s, spoke passionately about the need for a black-labor alliance. Civil rights activist Bayard Rustin, Randolph’s protégé and an adviser to Martin Luther King, Jr., joined his mentor as a forceful, early advocate for a black-labor coalition.

  • Value-Added Versus Observations, Part One: Reliability

    Although most new teacher evaluations are still in various phases of pre-implementation, it’s safe to say that classroom observations and/or value-added (VA) scores will be the most heavily weighted components of teachers’ final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many – perhaps most – teachers strongly prefer the former (observations, especially peer observations) over the latter (VA).

    One of the most common arguments against VA is that the scores are error-prone and unstable over time – i.e., that they are unreliable. And it’s true that the scores fluctuate between years (also see here), with much of this instability due to measurement error, rather than “real” performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class.
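
    As a rough illustration of this reliability point, the following sketch (again using hypothetical variance figures, not results from any actual value-added model) shows how measurement error alone can make scores fluctuate from year to year, even when every teacher’s “true” performance is unchanged.

    ```python
    # A minimal simulation sketch (hypothetical variances): year-to-year
    # instability of value-added estimates produced by measurement error alone,
    # with every teacher's "true" performance held constant across both years.
    import numpy as np

    rng = np.random.default_rng(1)

    n_teachers = 10_000
    sd_true, sd_error = 0.15, 0.15  # assumed SDs, in test-score SD units

    true_effects = rng.normal(0.0, sd_true, n_teachers)  # identical in both years
    year1 = true_effects + rng.normal(0.0, sd_error, n_teachers)
    year2 = true_effects + rng.normal(0.0, sd_error, n_teachers)

    r = np.corrcoef(year1, year2)[0, 1]
    print(f"Year-to-year correlation of estimates: {r:.2f}")
    # Approaches sd_true**2 / (sd_true**2 + sd_error**2) = 0.5 here: estimates
    # fluctuate noticeably even though no teacher's true performance changed.
    ```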

    These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe “true” teacher performance, it’s tough to say which is “better” or “worse,” despite the certainty with which both “sides” often present their respective cases. And the fact that both entail some level of measurement error doesn’t by itself speak to whether they should be part of evaluations.*

    Nevertheless, many states and districts have already made the choice to use both measures, and in these places, the existence of imprecision is less important than how to deal with it. Viewed from this perspective, VA and observations are in many respects more alike than different.

  • There's No One Correct Way To Rate Schools

    Education Week reports on the growth of websites that attempt to provide parents with help in choosing schools, including rating schools according to testing results. The most prominent of these sites is GreatSchools.org. Its test-based school ratings could not be more simplistic – they are essentially just percentile rankings of schools’ proficiency rates as compared to all other schools in their states (the site also provides warnings about the data, along with a bunch of non-testing information).
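
    To make the description above concrete, here is a minimal sketch (Python, with made-up data) of this kind of rating: each school’s proficiency rate is simply converted into a percentile rank among the schools in its state. Note that nothing in the computation adjusts for student characteristics or growth, which is part of what makes the measure so easy to produce and so limited as an indicator of school effectiveness.

    ```python
    # A minimal sketch with made-up data: a rating formed by converting each
    # school's proficiency rate into a percentile rank among schools in its state.
    import pandas as pd

    schools = pd.DataFrame({
        "school": ["A", "B", "C", "D", "E"],
        "state": ["NY", "NY", "NY", "OH", "OH"],
        "proficiency_rate": [0.42, 0.68, 0.55, 0.73, 0.31],  # hypothetical
    })

    # Percentile rank within each state, scaled to 0-100.
    schools["rating"] = (
        schools.groupby("state")["proficiency_rate"]
        .rank(pct=True)
        .mul(100)
        .round()
    )
    print(schools)
    ```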

    This is the kind of indicator that I have criticized when reviewing states’ school/district “grading systems.” And it is indeed a poor measure, albeit one that is widely available and easy to understand. But it’s worth quickly discussing the fact that such criticism is conditional on how the ratings are employed – there is a difference between using testing data to rate schools for parents versus for high-stakes accountability purposes.

    In other words, the utility and proper interpretation of data vary by context, and there's no one "correct way" to rate schools. The optimal design might differ depending on the purpose for which the ratings will be used. In fact, the reasons why a measure is problematic in one context might very well be a source of strength in another.