Education

  • Words Reflect Knowledge

    Written on November 8, 2013

    I was fascinated when I started to read about the work of Betty Hart and Todd Risley on the early language differences between children growing up in different socioeconomic circumstances. But it took me a while to realize that we care about words primarily because of what words indicate about knowledge. This is important because it means that we must focus on teaching children about a wide range of interesting “stuff” – not just vocabulary for vocabulary’s sake. So, if words are the tip of the iceberg, what lies underneath? This metaphor inspired me to create the short animation below. Check it out!

    READ MORE
  • The Word Gap

    Written on October 31, 2013

    ** Reprinted here in the Washington Post

    It is now well established that children’s oral language development is crucial to their academic success; researchers have documented profound differences in word learning and the acquisition of content knowledge between children living in poverty and those from more economically advantaged homes. By the time they enter school, children from advantaged backgrounds may know as many as 15,000 more words than their less affluent peers. This early language gap puts children at risk for other all-too-familiar gaps, such as those in high school graduation, arrest and incarceration, post-secondary education, and lifetime earnings. So, what can we do to prevent this “early catastrophe”?

    If a child suffers from malnutrition, simply giving him/her more food might not be sufficient to alleviate the problem. A better approach would be to figure out which specific foods and supplements best provide the vitamins and nutrients that are needed, and then deliver these to the child. Recent press coverage on the “word gap,” spurred by initiatives such as Too Small to Fail and Thirty Million Words, suffers from a similar failing.

    Don’t get me wrong, the initiatives themselves are hugely important and have done a truly commendable job of focusing public attention on a chronic and chronically overlooked problem. It’s just that the messages that have, thus far, made their way forward are predominantly about quantity – i.e., exposing children to more words and more talk – paying comparatively less attention to qualitative aspects, such as the nature and especially the content of adult-child interactions.

    READ MORE
  • Getting Teacher Evaluation Right

    Written on October 30, 2013

    Linda Darling-Hammond’s new book, Getting Teacher Evaluation Right, is a detailed, practical guide about how to improve the teaching profession. It leverages the best research and best practices, offering actionable, illustrated steps to getting teacher evaluation right, with rich examples from the U.S. and abroad.

    Here I offer a summary of the book’s main arguments and conclude with a couple of broad questions prompted by the book. But, before I delve into the details, here’s my quick take on Darling-Hammond’s overall stance.

    We are at a crossroads in education; two paths lie before us. The first seems shorter, easier and more straightforward. The second seems long, winding and difficult. The big problem is that the first path does not really lead to where we need to go; in fact, it is taking us in the opposite direction. So, despite appearances, steadier progress will be made if we take the more difficult route. This book is a guide on how to get teacher evaluation right, not how to do it quickly or with minimal effort. So, in a way, the big message or takeaway is: There are no shortcuts.

    READ MORE
  • Innovating To Strengthen Youth Employment

    Written on October 22, 2013

    Our guest author today is Stan Litow, Vice President of Corporate Citizenship and Corporate Affairs at IBM, President of the IBM Foundation, and a member of the Shanker Institute’s board of directors. This essay was originally published in innovations, an MIT Press journal.

    The financial crisis of 2008 exposed serious weaknesses in the world’s economic infrastructure. As a former aide to a mayor of New York and as deputy chancellor of the New York City Public Schools (the largest public school system in the United States), my chief concern—and a significant concern to IBM and other companies interested in global economic stability—has been the impact of global economic forces on youth employment.

    Across the United States and around the world, youth unemployment is a staggering problem, and one that is difficult to gauge with precision. One factor that makes it difficult to judge accurately is that many members of the youth population have yet to enter the workforce, making it hard to count those who are unable to get jobs. What we do know is that the scope of the problem is overwhelming. Youth unemployment in countries such as Greece and Spain is estimated at over 50 percent, while in the United States the rate may be 20 percent, 30 percent, or higher in some cities and states. Why is this problem so daunting? Why does it persist? And, most important, how can communities, educators, and employers work together to address it?

    READ MORE
  • Incentives And Behavior In DC's Teacher Evaluation System

    Written on October 17, 2013

    A new working paper, published by the National Bureau of Economic Research, is the first high-quality assessment of one of the new teacher evaluation systems sweeping across the nation. The study, by Thomas Dee and James Wyckoff, both highly respected economists, focuses on the first three years of IMPACT, the evaluation system put into place in the District of Columbia Public Schools in 2009.

    Under IMPACT, each teacher receives a point total based on a combination of test-based and non-test-based measures (the formula varies between teachers who are and are not in tested grades/subjects). These point totals are then sorted into one of four categories – highly effective, effective, minimally effective and ineffective. Teachers who receive a highly effective (HE) rating are eligible for salary increases, whereas teachers rated ineffective are dismissed immediately, and those rated minimally effective (ME) for two consecutive years can also be terminated. The study’s design exploits that incentive structure by, put very simply, comparing teachers whose scores fell directly above the ME and HE thresholds with those whose scores fell directly below them, to see whether the two groups differed in terms of retention and subsequent performance. The basic idea is that these teachers are all very similar in terms of their measured performance, so any differences in outcomes can be (cautiously) attributed to the system’s incentives.
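
    To make that comparison concrete, here is a minimal, hypothetical sketch in Python of a threshold-based comparison of this kind. The cutoff value, the variable names, and the data are all invented for illustration; this is not the Dee and Wyckoff model.

```python
# Illustrative sketch only -- invented data and cutoff, not the actual study.
# Idea: teachers scoring just above and just below a rating threshold are
# very similar, so differences in later outcomes hint at incentive effects.
import pandas as pd

# Hypothetical teacher records: IMPACT score and whether the teacher returned.
teachers = pd.DataFrame({
    "impact_score":       [243, 247, 249, 251, 253, 256, 244, 252, 248, 255],
    "returned_next_year": [  0,   0,   1,   1,   1,   1,   0,   1,   1,   1],
})

ME_THRESHOLD = 250   # hypothetical cutoff for the "minimally effective" band
BANDWIDTH = 10       # only compare teachers within 10 points of the cutoff

near = teachers[(teachers.impact_score - ME_THRESHOLD).abs() <= BANDWIDTH]
just_below = near[near.impact_score < ME_THRESHOLD]
just_above = near[near.impact_score >= ME_THRESHOLD]

print("Retention just below the cutoff:", just_below.returned_next_year.mean())
print("Retention just above the cutoff:", just_above.returned_next_year.mean())
```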

    The short answer is that there were meaningful differences.

    READ MORE
  • Comparing Teacher And Principal Evaluation Ratings

    Written on October 15, 2013

    The District of Columbia Public Schools (DCPS) has recently released the first round of results from its new principal evaluation system. Like the system used for teachers, the principal ratings are based on a combination of test and non-test measures. And the two systems use the same final rating categories (highly effective, effective, minimally effective and ineffective).

    It was perhaps inevitable that there would be comparisons of their results. In short, principal ratings were substantially lower, on average. Roughly half of principals received one of the two lowest ratings (minimally effective or ineffective), compared with around 10 percent of teachers.

    Some wondered whether this discrepancy by itself means that DC teachers perform better than principals. Of course not. It is difficult to compare the performance of teachers versus that of principals, but it’s unsupportable to imply that we can get a sense of this by comparing the final rating distributions from two evaluation systems.

    READ MORE
  • Thoughts On Using Value Added, And Picking A Model, To Assess Teacher Performance

    Written on October 7, 2013

    Our guest author today is Dan Goldhaber, Director of the Center for Education Data & Research and a Research Professor in Interdisciplinary Arts and Sciences at the University of Washington Bothell.

    Let me begin with a disclosure: I am an advocate of experimenting with using value added, where possible, as part of a more comprehensive system of teacher evaluation. The reasons are pretty simple (though articulated in more detail in a brief, which you can read here). The most important reason is that value-added information about teachers appears to be a better predictor of future success in the classroom than other measures we currently use. This is perhaps not surprising when it comes to test scores, certainly an important measure of what students are getting out of schools, but research also shows that value added predicts very long-run outcomes, such as college-going and labor market earnings. Shouldn’t we be using valuable information about likely future performance when making high-stakes personnel decisions?

    It almost goes without saying, but it’s still worth emphasizing, that it is impossible to avoid making high-stakes decisions. Policies that explicitly link evaluations to outcomes such as compensation and tenure are new, but even in the absence of such policies that are high-stakes for teachers, the stakes are high for students, because some of them are stuck with ineffective teachers when evaluation systems suggest, as is the case today, that nearly all teachers are effective.

    READ MORE
  • Are There Low Performing Schools With High Performing Students?

    Written on October 3, 2013

    I write often (probably too often) about the difference between measures of school performance and student performance, usually in the context of school rating systems. The basic idea is that schools cannot control the students they serve, and so absolute performance measures, such as proficiency rates, tell you more about the students a school or district serves than about how effective it is in improving outcomes (which is better captured by growth-oriented indicators).
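
    As a toy illustration of that distinction, the sketch below computes a proficiency rate (an absolute, status measure) alongside average score growth for the same hypothetical school; the cutoff and all of the scores are invented.

```python
# Toy illustration of status vs. growth measures -- all numbers invented.
# A school serving lower-scoring students can post a modest proficiency rate
# while still producing substantial year-to-year growth.
PROFICIENCY_CUTOFF = 300  # hypothetical scale-score bar for "proficient"

# (last year's score, this year's score) for each student in a hypothetical school
scores = [(250, 290), (260, 295), (270, 310), (255, 300), (265, 305)]

proficiency_rate = sum(this >= PROFICIENCY_CUTOFF for _, this in scores) / len(scores)
average_growth = sum(this - last for last, this in scores) / len(scores)

print(f"Proficiency rate (status measure): {proficiency_rate:.0%}")  # 60%
print(f"Average growth (points):           {average_growth:.1f}")    # 40.0
```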

    Recently, I was asked a simple question: Can a school with very high absolute performance levels ever actually be considered a “bad school”?

    This is a good question.

    READ MORE
  • Underlying Issues In The DC Test Score Controversy

    Written on October 1, 2013

    In the Washington Post, Emma Brown reports on a behind-the-scenes decision about how to score last year’s new, more difficult tests in the District of Columbia Public Schools (DCPS) and the District’s charter schools.

    To make a long story short, the choice faced by the Office of the State Superintendent of Education, or OSSE, which oversees testing in the District, was about how to convert test scores into proficiency rates. The first option, put simply, was to convert them such that the proficiency bar was more “aligned” with the Common Core, thus resulting in lower aggregate proficiency rates in math, compared with last year’s (in other states, such as Kentucky and New York, rates declined markedly). The second option was to score the tests while "holding constant" the difficulty of the questions, in order to facilitate comparisons of aggregate rates with those from previous years.
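
    To see why that choice matters, here is a minimal, hypothetical sketch of how the same scale scores can yield very different aggregate proficiency rates depending on where the proficiency cut score is set; the scores and cut scores below are invented, not DC’s actual values.

```python
# Hypothetical illustration: identical raw scores, different proficiency rates,
# depending entirely on where the proficiency cut score is placed.
scale_scores = [410, 425, 440, 455, 470, 485, 500, 515, 530, 545]

def proficiency_rate(scores, cut_score):
    """Share of students scoring at or above the proficiency cut score."""
    return sum(s >= cut_score for s in scores) / len(scores)

old_cut = 450  # hypothetical cut comparable to last year's test
new_cut = 500  # hypothetical, more demanding cut "aligned" with the new standards

print(f"Rate under the old cut: {proficiency_rate(scale_scores, old_cut):.0%}")  # 70%
print(f"Rate under the new cut: {proficiency_rate(scale_scores, new_cut):.0%}")  # 40%
```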

    OSSE chose the latter option (according to some, in a manner that was insufficiently transparent). The end result was a modest increase in proficiency rates (which DC officials absurdly called “historic”).

    READ MORE
  • Selection Versus Program Effects In Teacher Prep Value-Added

    Written on September 24, 2013

    There is currently a push to evaluate teacher preparation programs based in part on the value-added of their graduates. Predictably, this is a highly controversial issue, and the research supporting it is, to be charitable, still underdeveloped. At present, the evidence suggests that the differences in effectiveness between teachers trained by different prep programs may not be particularly large (see here, here, and here), though there may be exceptions (see this paper).

    In the meantime, there’s an interesting little conflict underlying the debate about measuring preparation programs’ effectiveness, one that’s worth pointing out. For the purposes of this discussion, let’s put aside the very important issue of whether the models are able to account fully for where teaching candidates end up working (i.e., bias in the estimates based on school assignments/preferences), as well as (valid) concerns about judging teachers and preparation programs based solely on testing outcomes. All that aside, any assessment of preparation programs using the test-based effectiveness of their graduates is picking up on two separate factors: how well they prepare their candidates, and who applies to their programs in the first place.

    In other words, programs that attract and enroll highly talented candidates might look good even if they don’t do a particularly good job preparing teachers for their eventual assignments. But does that really matter?
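
    A small, entirely invented simulation can make that confound concrete: two hypothetical programs are given exactly the same training effect, yet the one with the stronger applicant pool looks better on its graduates’ value-added.

```python
# Invented simulation -- not an estimate of any real program's effects.
# Both programs add the same amount through training; they differ only in
# the talent of the applicants they attract.
import random

random.seed(0)
TRAINING_EFFECT = 0.05  # identical for both programs, by construction

def simulate_graduate_value_added(applicant_pool_mean, n=10000):
    """Graduate 'value-added' = applicant talent + common training effect + noise."""
    return [random.gauss(applicant_pool_mean, 0.20) + TRAINING_EFFECT + random.gauss(0, 0.10)
            for _ in range(n)]

program_a = simulate_graduate_value_added(applicant_pool_mean=0.10)  # stronger applicants
program_b = simulate_graduate_value_added(applicant_pool_mean=0.00)  # weaker applicants

print("Program A mean graduate value-added:", round(sum(program_a) / len(program_a), 3))
print("Program B mean graduate value-added:", round(sum(program_b) / len(program_b), 3))
```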

    READ MORE
