• Labor In High School Textbooks: Bias, Neglect And Invisibility

    The nation has just celebrated Labor Day, yet few Americans have any idea why. As high school students, most were taught little about unions—their role, their accomplishments, and how and why they came to exist.

    This is one of the conclusions of a new report, released today by the Albert Shanker Institute in cooperation with the American Labor Studies Center. The report, "American Labor in U.S. History Textbooks: How Labor’s Story Is Distorted in High School History Textbooks," reviews some of the nation’s most frequently used high school U.S. history textbooks for their treatment of unions. The authors paint a disturbing picture, concluding that the history of the U.S. labor movement and its many contributions to the American way of life are "misrepresented, downplayed or ignored." Students—and all Americans—deserve better.

    Unfortunately, this is not a new problem. As the report notes, "spotty, inadequate, and slanted coverage" of the labor movement dates at least to the New Deal era. Scholars began documenting the problem as early as the 1960s. As this and previous textbook reviews have concluded, our history textbooks have essentially "taken sides" in the intense political debate around unions—the anti-union side.

    The impact of these textbook distortions has been amplified by young people’s exposure to news media that are sometimes thoughtless and sometimes hostile in their reporting on, and their attitudes toward, labor. This is especially troubling at a time when membership in private sector unions is shrinking rapidly and the right of public sector unions to exist is hotly contested.

  • Grand Bargaining

    With Labor Day upon us, I’ve found myself thinking about three apparently unrelated pieces of sociological research, and how all point to the role of laws, policies, and institutions as "signalers" of the social values that we share.

    First, in an unpublished paper, Stanford University’s Cristobal Young examines the role of unemployment insurance in encouraging prolonged job search effort. Second, in a talk earlier this month at the annual meeting of the American Sociological Association, Shelley Correll (also at Stanford) discussed how greater awareness of laws such as the Family and Medical Leave Act (FMLA) makes it harder for employers to discriminate against those who take such leave. Third, a recent article by Bruce Western (Harvard University) and Jake Rosenfeld (University of Washington) argues that unions contribute to a moral economy that reduces wage inequality for all workers, not just union members.

    I think that these three pieces of scholarship tell a similar story: policies, laws, and institutions have effects beyond their primary intended purposes. Unemployment benefits are more than the money one receives when jobless; laws pertaining to employment rights are more than rules enforced by the imposition of sanctions; and unions are more than organizations seeking to improve their members’ wages and working conditions. These policies, programs, and institutions also have a symbolic importance: they signal a consensus about what we value and desire as a society, which simultaneously shapes the lens through which we judge our own behavior and that of others.

  • Predicaments Of Reform

    Our guest author today is David K. Cohen, John Dewey Collegiate Professor of Education and professor of public policy at the University of Michigan, and a member of the Shanker Institute’s board of directors. This is a response to Michael Petrilli, who recently published a post on the Fordham Institute’s blog that referred to Cohen’s new book.

    Dear Mike:

    Thank you for considering my book Teaching And Its Predicaments (Harvard University Press, 2011), and for your intelligent discussion of the issues. I write to continue the conversation. 

    You are right to say that I see the incoherence of U.S. public education as a barrier to more quality and less inequality, but I do not "look longingly" at Asia or Finland, let alone take them as models for what Americans should do to improve schools. 

    In my 2009 book (The Ordeal Of Equality: Did Federal Regulation Fix The Schools?), Susan L. Moffitt and I recounted the great difficulties that the "top-down" approach to coherence, with which you associate my work, encountered as Title I of the 1965 ESEA was refashioned to leverage much greater central influence on schooling. Susan and I concluded that increased federal regulation had not fixed the schools, and had caused some real damage along with some important constructive effects. We did not see central coherence as The Answer.

  • Quality Control, When You Don't Know The Product

    Last week, New York State’s Supreme Court issued an important ruling on the state’s teacher evaluations. The aspect of the ruling that got the most attention was the proportion of evaluations – or “weight” – that could be assigned to measures based on state assessments (in the form of estimates from value-added models). Specifically, the Court ruled that these measures can make up only 20 percent of a teacher’s evaluation, rather than the up to 40 percent for which Governor Cuomo and others had been pushing. Under the decision, the other 20 percent must consist entirely of alternative test-based measures (e.g., local assessments).

    Joe Williams, head of Democrats for Education Reform, one of the flagship organizations of the market-based reform movement, called the ruling “a slap in the face” and “a huge win for the teachers unions." He characterized the policy impact as follows: “A mediocre teacher evaluation just got even weaker."

    This statement illustrates perfectly the strange reasoning that seems to be driving our debate about evaluations.

  • Charter And Regular Public School Performance In "Ohio 8" Districts, 2010-11

    Every year, the state of Ohio releases an enormous amount of district- and school-level performance data. Since Ohio has among the largest charter school populations in the nation, the data provide an opportunity to examine performance differences between charters and regular public schools in the state.

    Ohio’s charters are concentrated largely in the urban “Ohio 8” districts (sometimes called the “Big 8”): Akron; Canton; Cincinnati; Cleveland; Columbus; Dayton; Toledo; and Youngstown. Charter coverage varies considerably between the “Ohio 8” districts, but it is, on average, about 20 percent, compared with roughly five percent across the whole state. I will therefore limit my quick analysis to these districts.

    Let’s start with the measure that gets the most attention in the state: Overall “report card grades." Schools (and districts) can receive one of six possible ratings: Academic emergency; academic watch; continuous improvement; effective; excellent; and excellent with distinction.

    These ratings represent a weighted combination of four measures. Two of them measure performance “growth," while the other two measure “absolute” performance levels. The growth measures are AYP (yes or no) and value-added (whether schools meet, exceed, or come in below the growth expectations set by the state’s value-added model). The first “absolute” performance measure is the state’s “performance index," which is calculated from the percentage of a school’s students who fall into each of the four NCLB categories: advanced, proficient, basic and below basic. The second is the number of “state standards” that schools meet as a percentage of the number of standards for which they are “eligible." For example, the state requires 75 percent proficiency in all the grade/subject tests that a given school administers, and schools are “awarded” a “standard met” for each grade/subject in which at least three-quarters of their students score at or above the proficiency cutoff (state standards also include targets for attendance and a couple of other non-test outcomes).
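
    To make the mechanics of those two “absolute” measures concrete, here is a minimal sketch in Python. The category weights, proficiency rates, and student shares are purely illustrative assumptions, not Ohio’s official values.

    ```python
    # Minimal sketch of the two "absolute" report card measures described above.
    # All weights and numbers are hypothetical, not Ohio's official values.

    def performance_index(shares, weights=None):
        """Weighted combination of the share of students in each test category."""
        if weights is None:
            # Illustrative weights: higher categories count more toward the index.
            weights = {"advanced": 1.2, "proficient": 1.0, "basic": 0.6, "below_basic": 0.3}
        return 100 * sum(shares[cat] * weights[cat] for cat in weights)

    def standards_met_share(proficiency_rates, threshold=0.75):
        """Share of eligible grade/subject tests with at least 75 percent proficient."""
        met = sum(1 for rate in proficiency_rates if rate >= threshold)
        return met / len(proficiency_rates)

    # A hypothetical school:
    shares = {"advanced": 0.25, "proficient": 0.45, "basic": 0.20, "below_basic": 0.10}
    rates = [0.81, 0.76, 0.68, 0.90]          # proficiency rates by grade/subject test

    print(round(performance_index(shares), 1))  # 90.0 under these illustrative weights
    print(standards_met_share(rates))           # 0.75 (3 of 4 standards met)
    ```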

    The graph below presents the raw breakdown in report card ratings for charter and regular public schools.

  • What Americans Think About Teachers Versus What They're Hearing

    The recent Gallup/PDK education survey found that 71 percent of surveyed Americans “have trust and confidence in the men and women who are teaching children in public schools." Although this finding received a fair amount of media attention, it is not at all surprising. Polls have long indicated that teachers are among the most trusted professionals in the U.S., up there with doctors, nurses and firefighters.

    (Side note: The teaching profession also ranks among the most prestigious U.S. occupations – both in analyses of survey data and in polls [though see here for an argument that occupational prestige scores are obsolete].)

    What was rather surprising, on the other hand, was the Gallup/PDK result for the question about what people are hearing about teachers in the news media. Respondents were asked, “Generally speaking, do you hear more good stories or bad stories about teachers in the news media?"

    Over two-thirds (68 percent) said they heard more bad stories than good ones. A little over a quarter (28 percent) said the opposite.

  • Certainty And Good Policymaking Don't Mix

    The use of value-added and other growth model estimates in teacher evaluations has probably been the most controversial and oft-discussed issue in education policy over the past few years.

    Many people (including a large proportion of teachers) oppose using student test scores in their evaluations, arguing that the measures are neither valid nor reliable, and that their use will incentivize perverse behavior, such as cheating or competition between teachers. Advocates, on the other hand, argue that student performance should be a vital part of teachers’ performance evaluations, and that the growth model estimates, while imperfect, represent the best available option.

    I am sympathetic to both views. In fact, in my opinion, there are only two unsupportable positions in this debate: certainty that using these measures in evaluations will work, and certainty that it won’t. Unfortunately, that’s often how the debate has proceeded – two deeply entrenched sides convinced of their absolutist positions, and resolved that any nuance in or compromise of their views will only preclude the success of their efforts. You’re with them or against them. The problem is that it's the nuance – the details – that determines policy effects.

    Let’s be clear about something: I'm not aware of a shred of evidence – not a shred – that the use of growth model estimates in teacher evaluations improves performance of either teachers or students.

  • Our Annual Testing Data Charade

    Every year, around this time, states and districts throughout the nation release their official testing results. Schools are closed and reputations are made or broken by these data. But this annual tradition is, in some places, becoming a charade.

    Most states and districts release two types of assessment data every year (by student subgroup, school and grade): average scores (“scale scores”), and the percentage of students who score at each performance level – proficient, advanced, basic and below basic. The latter – the rates – are of course derived from the scores; that is, they tell us the proportion of students whose scale scores were at or above the minimum necessary to be considered proficient, advanced, etc.
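
    As a rough sketch of that relationship, consider the short Python snippet below; the cutoff value and the scale scores are hypothetical, since each state sets its own scale and cut points.

    ```python
    # Minimal sketch of how proficiency rates are derived from scale scores.
    # The cutoff and scores below are hypothetical; each state sets its own.

    PROFICIENT_CUTOFF = 650   # hypothetical minimum scale score for "proficient"

    def proficiency_rate(scale_scores, cutoff=PROFICIENT_CUTOFF):
        """Percent of students whose scale score is at or above the proficiency cutoff."""
        n_proficient = sum(1 for score in scale_scores if score >= cutoff)
        return 100 * n_proficient / len(scale_scores)

    scores = [612, 655, 648, 701, 663]     # hypothetical scale scores for five students
    print(sum(scores) / len(scores))       # average scale score: 655.8
    print(proficiency_rate(scores))        # proficiency rate: 60.0 (3 of 5 at or above 650)
    ```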

    Both types of data are cross-sectional. They don’t follow individual students over time, but rather give a “snapshot” of aggregate performance among two different groups of students (for example, third graders in 2010 compared with third graders in 2011). Calling the change in these results “progress” or “gains” is inaccurate; they are cohort changes, and might just as well be chalked up to differences in the characteristics of the students (especially when changes are small). Even averaged across an entire school or district, there can be huge differences in the groups compared between years – not only is there often considerable student mobility in and out of schools/districts, but every year, a new cohort enters at the lowest tested grade, while a whole other cohort exits at the highest tested grade (except for those retained).
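
    A toy example (all numbers hypothetical) makes the point: the two averages below describe two different groups of third graders, so the difference between them is a cohort change, not growth achieved by any individual student.

    ```python
    # Toy illustration of the cohort-change point above; all numbers are hypothetical.
    # The 2010 and 2011 third-grade averages describe two different groups of students.

    grade3_2010 = [652, 640, 671, 660]   # scale scores for one cohort of third graders
    grade3_2011 = [648, 669, 655, 672]   # scores for an entirely different cohort a year later

    def mean(scores):
        return sum(scores) / len(scores)

    change = mean(grade3_2011) - mean(grade3_2010)
    print(change)   # 5.25 points: a cohort difference, not a "gain" made by any student
    ```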

    For these reasons, any comparisons between years must be done with extreme caution, but the most common way – simply comparing proficiency rates between years – is in many respects the worst. A closer look at this year’s New York City results illustrates this perfectly.

  • Teachers' Preparation Routes And Policy Views

    In a previous post, I lamented the scarcity of survey data measuring what teachers think of different education policy reforms. A couple of weeks ago, the National Center for Education Information (NCEI) released the results of its teacher survey (conducted every five years), which provides a useful snapshot of teachers’ opinions on different policies (albeit not at the level of detail that one might wish).

    There are too many interesting results to review in one post, and I encourage you to take a look at the full set yourself. There was, however, one thing about the survey tabulations that I found particularly striking, and that was the high degree to which policy opinions differed between traditionally-certified teachers and those who entered teaching through alternative certification (alt-cert).

    In the figure below, I reproduce data from the NCEI report’s battery of questions about whether teachers think different policies would “improve education." Respondents are divided by preparation route – traditional and alternative.