Feeling Socially Connected Fuels Intrinsic Motivation And Engagement

Our "social side of education reform" series has emphasized that teaching is a cooperative endeavor, and as such is deeply influenced by the quality of a school's social environment -- i.e., trusting relationships, teamwork and cooperation. But what about learning? To what extent are dispositions such as motivation, persistence and engagement mediated by relationships and the social-relational context?

This is, of course, a very complex question, which can't be addressed comprehensively here. But I would like to discuss three papers that provide some important answers. In terms of our "social side" theme, the studies I will highlight suggest that efforts to improve learning should include and leverage social-relational processes, such as how learners perceive (and relate to) -- how they think they fit into -- their social contexts. Finally, this research, particularly the last paper, suggests that translating this knowledge into policy may be less about top-down, prescriptive regulations and more about what Stanford psychologist Gregory M. Walton has called "wise interventions" -- i.e., small but precise strategies that target recursive processes (more below).

The first paper, by Lucas P. Butler and Gregory M. Walton (2013), describes the results of two experiments testing whether the perceived collaborative nature of an individually performed activity would increase preschoolers' enjoyment of, and persistence on, that activity.

Multiple Measures And Singular Conclusions In A Twin City

A few weeks ago, the Minneapolis Star Tribune published teacher evaluation results for the district’s public school teachers in 2013-14. This decision generated a fair amount of controversy, but it’s worth noting that the Tribune, unlike the Los Angeles Times and New York City newspapers a few years ago, did not publish scores for individual teachers, only totals by school.

The data once again provide an opportunity to look at how results vary by student characteristics. This was indeed the focus of the Tribune’s story, which ran under the headline: “Minneapolis’ worst teachers are in the poorest schools, data show.” Conclusions of this type, which simply take the results of new evaluations at face value, have characterized the discussion since the first new systems came online. Though understandable, they are also frustrating and a potential impediment to the policy process. At this early point, “the city’s teachers with the lowest evaluation ratings” is not the same thing as “the city’s worst teachers.” Moreover, as discussed in a previous post, the systematic variation in evaluation results by student characteristics, which the Tribune uses to draw conclusions about the distribution of the city’s “worst teachers,” could just as easily be viewed as one of the many ways to assess the properties, and even the validity, of the results themselves.

So, while there are no clear-cut "right" or "wrong" answers here, let’s take a quick look at the data and what they might tell us.

Redesigning Florida's School Report Cards

The Foundation for Excellence in Education, an organization that advocates for education reform in Florida -- particularly the set of policies sometimes called the “Florida Formula” -- recently announced a competition to redesign the “appearance, presentation and usability” of the state’s school report cards. Winners of the competition will share prize money totaling $35,000.

The contest seems like a great idea. Improving the manner in which education data are presented is, of course, a laudable goal, and an open competition could attract a diverse group of talented people. As regular readers of this blog know, I am not opposed to sensibly designed test-based accountability policies; my primary concern about school rating systems is the quality and interpretation of the measures used therein. So, while I support the idea of a competition to improve the design of the report cards, I am hoping that the end result won't just be a very attractive, clever instrument devoted to the misinterpretation of testing data.

In this spirit, I would like to submit four simple graphs that illustrate, as clearly as possible and using the latest data from 2014, what Florida’s school grades are actually telling us. Since the scoring and measures vary a bit between different types of schools, let’s focus on elementary schools.

Building And Sustaining Research-Practice Partnerships

Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the second of two posts on research-practice partnerships -- read part one here; both posts are part of The Social Side of Education Reform Shanker Blog series.

In my first post on research-practice partnerships, I highlighted the need for partnerships and pointed to some potential benefits of long-term collaborations between researchers and practitioners. But how do you know when an arrangement between researchers and practitioners is a research-practice partnership? Where can people go to learn about how to form and sustain research-practice partnerships? Who funds this work?

In this post I answer these questions and point to some resources researchers and practitioners can use to develop and sustain partnerships.

The Common Core And Failing Schools

In observing all the recent controversy surrounding the Common Core State Standards (CCSS), I have noticed a frequent criticism from one of the anti-CCSS camps, particularly since the first rounds of results from CCSS-aligned tests have started to be released: that the standards will be used to label more schools as “failing,” thus ramping up the test-based accountability regime in U.S. public education.

As someone who is very receptive to a sensible, well-designed dose of test-based accountability, but sees so little of it in current policy, I am more than sympathetic to concerns about the proliferation and misuse of high-stakes testing. On the other hand, anti-CCSS arguments that focus on testing or testing results are not really arguments against the standards per se. They also strike me as ironic, as they are based on the same flawed assumptions that critics of high-stakes testing should be opposing.

Standards themselves are about students. They dictate what students should know at different points in their progression through the K-12 system. Testing whether students meet those standards makes sense, but how we use those test results is not dictated by the standards. Nor do standards require us to set bars for “proficient,” “advanced,” etc., using the tests.

Regular Public And Charter Schools: Is A Different Conversation Possible?

Uplifting Leadership, Andrew Hargreaves' new book with coauthors Alan Boyle and Alma Harris, is based on a seven-year international study, and illustrates how leaders from diverse organizations were able to lift up their teams by harnessing and balancing qualities that we often view as opposites, such as dreaming and action, creativity and discipline, measurement and meaningfulness, and so on.

Chapter three, Collaboration With Competition, was particularly interesting to me and relevant to our series, "The Social Side of Reform." In that series, we've been highlighting research that emphasizes the value of collaboration and considers extreme competition to be counterproductive. But is that always the case? Can collaboration and competition live under the same roof and, in combination, promote systemic improvement? Could, for example, different types of schools serving (or competing for) the same students work in cooperative ways for the greater good of their communities?

Hargreaves and colleagues believe that establishing this environment is difficult but possible, and that it has already happened in some places. In fact, Al Shanker was one of the first proponents of a model that bears some similarity. In this post, I highlight some ideas and illustrations from Uplifting Leadership and tie them to Shanker's own vision of how charter schools, conceived as idea incubators and, eventually, as innovations within the public school system, could potentially lift all students and the entire system, from the bottom up, one group of teachers at a time.

The Fatal Flaw Of Education Reform

In the most simplistic portrayal of the education policy landscape, one of the “sides” is a group of people referred to as “reformers.” Though far from monolithic, these people tend to advocate for test-based accountability, charters/choice, overhauling teacher personnel rules, and other related policies, with a particular focus on high expectations, competition and measurement. They also frequently see themselves as being in opposition to teachers’ unions.

Most of the “reformers” I have met and spoken with are not quite so easy to categorize. They are also thoughtful and open to dialogue, even when we disagree. And, at least in my experience, there is far more common ground than one might expect.

Nevertheless, I believe that this “movement” (to whatever degree you can characterize it in those terms) may be doomed to stall out in the long run, not because their ideas are all bad, and certainly not because they lack the political skills and resources to get their policies enacted. Rather, they risk failure for a simple reason: They too often make promises that they cannot keep.

Why Teachers And Researchers Should Work Together For Improvement

Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. This is the first of two posts on research-practice partnerships; both are part of The Social Side of Education Reform series.

Policymakers are asking a lot of public school teachers these days, especially when it comes to the shifts in teaching and assessment required to implement new, ambitious standards for student learning. Teachers want and need more time and support to make these shifts. A big question is: What kinds of support and guidance can educational research and researchers provide?

Unfortunately, that question is not easy to answer. Most educational researchers spend much of their time answering questions that are of more interest to other researchers than to practitioners. Even if researchers did focus on questions of interest to practitioners, teachers and teacher leaders need answers more quickly than researchers can provide them. And when researchers and practitioners do try to work together on problems of practice, it takes a while for them to get on the same page about what those problems are and how to solve them. It’s almost as if researchers and practitioners occupy two different cultural worlds.

Research And Policy On Paying Teachers For Advanced Degrees

There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).

The third typical factor that determines teachers’ salaries is their level of education. Usually, teachers receive a permanent raise for acquiring additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts also award raises for accumulating a certain number of credits toward a master’s and/or a Ph.D., as well as for earning the Ph.D. itself). The raise for receiving a master’s degree varies, but, to give an idea, it averages about 10 percent over the base salary of bachelor’s-only teachers.
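Just to make the structure of a typical schedule concrete, here is a minimal sketch in code. All of the figures (the base salary, the size of step raises, the top step, and the Ph.D. bump) are hypothetical, chosen for illustration only; the roughly 10 percent master's raise is the average cited above. Real schedules are negotiated lookup tables that vary district by district.

```python
# A minimal sketch of how a typical teacher salary schedule works.
# All numbers are hypothetical except the ~10% master's raise cited above.

BASE_SALARY = 45_000   # hypothetical base for a new, bachelor's-only teacher
STEP_RAISE = 0.02      # hypothetical 2 percent raise per year of experience
TOP_STEP = 20          # step raises end once a teacher reaches the top step

# Permanent raises for education beyond the bachelor's degree.
EDUCATION_BUMPS = {"BA": 0.00, "MA": 0.10, "PhD": 0.15}  # PhD figure hypothetical

def base_salary(years_of_experience: int, degree: str = "BA") -> float:
    """Return a teacher's base salary for a given step and education level."""
    step = min(years_of_experience, TOP_STEP)  # no raises past the top step
    return BASE_SALARY * (1 + STEP_RAISE * step) * (1 + EDUCATION_BUMPS[degree])

# Example: a teacher with 10 years of experience and a master's degree.
print(f"${base_salary(10, 'MA'):,.0f}")  # -> $59,400
```

Under these made-up numbers, the master's degree alone is worth about $5,400 a year at step 10, which is why, compounded over a career, the cost of these raises is substantial.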

This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere.

A Quick Look At The ASA Statement On Value-Added

Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it's useful to have ASA add their perspective to the debate on this issue, but also because their statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.

Some of these folks claimed that the ASA supported their viewpoint -- i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates, and urged caution, but I think that the statement rather explicitly reaches a more nuanced conclusion: That value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but this must be done carefully and appropriately.*

Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures, which are not always so widely acknowledged.
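For readers less familiar with these models, a generic value-added specification looks something like the sketch below. This is an illustrative setup only -- not the particular models the ASA reviewed, and not any state's actual system, which differ in their controls and estimation details.

```latex
% A generic value-added model (illustrative sketch):
\[
A_{it} = \lambda A_{i,t-1} + X_{it}\boldsymbol{\beta} + \tau_{j(i,t)} + \varepsilon_{it}
\]
% A_{it}:      test score of student i in year t
% A_{i,t-1}:   the student's prior-year score
% X_{it}:      observed student characteristics
% \tau_{j(i,t)}: the estimated "value-added" of the student's teacher j
% \varepsilon_{it}: error term
```

In this notation, the "moderate stability" point concerns how much the estimated teacher effects (the tau terms) move across years and across model specifications, while "potential for bias" refers to the possibility that the controls fail to capture everything about students that affects scores and is correlated with how they are assigned to teachers.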