New School Climate Tool Facilitates Early Intervention On Social-Emotional Issues: Bullying And Suicide Prevention

Our guest author today is Dr. Alvin Larson, director of research and evaluation at Meriden Public Schools, a district that serves about 8,900 students in Meriden, CT. Dr. Larson holds a B.A. in Sociology, an M.Ed., an M.S. in Educational Research, and a Ph.D. in Educational Psychology. The intervention described below was made possible with support from Meriden’s community, leadership, and education professionals.

For the most part, students' social-emotional concerns start small; left untreated, though, they can become severe and difficult to manage. Inappropriate behaviors are not only harmful to the student who exhibits them; they can also deepen the social bruising of that student's peers and damage the climate of the entire school. The problem is that many of these bruises are not directly observable, at least not until they become scars. School psychologists and counselors are familiar with bruised students who act out overtly, but some research suggests that 4.3 percent of our students carry social-emotional scars of which counselors are unaware (Larson, AERA 2014). To take a more preventive approach, and to foster pro-social attitudes and a positive school climate, we need to identify and support students with hidden bruises, and to intervene with pre-bullies early in their school careers.

Since 2011, Connecticut’s Local Education Agencies (LEAs) have been required to purchase or develop a student school climate survey. The rationale is that anti-social attitudes and a negative school climate are associated with lower academic achievement and current behavior problems, as well as with future criminal behavior (DeLisi et al., 2013; Hawkins et al., 2000) and suicidal ideation (King et al., 2001). There are hundreds of anonymous school climate surveys, but none of them was designed to provide the kind of information we need to help individual students.

A Big Open Question: Do Value-Added Estimates Match Up With Teachers' Opinions Of Their Colleagues?

A recent article about the implementation of new teacher evaluations in Tennessee details some of the complicated issues with which state officials, teachers, and administrators are dealing as they adapt to the new system. One of these issues is somewhat technical: whether the various components of the evaluations, most notably principal observations and test-based productivity measures (e.g., value-added), tend to “match up.” That is, whether teachers who score high on one measure tend to do similarly well on the other (see here for more on this issue).

In discussing this type of validation exercise, the article notes:

If they don't match up, the system's usefulness and reliability could come into question, and it could lose credibility among educators.

Value-added and other test-based measures of teacher productivity may have a credibility problem among many (but definitely not all) teachers, but I don’t think it’s due to, or can be helped much by, whether these estimates match up with observations or other measures being incorporated into states’ new systems. I’m all for this type of research (see here and here), but I’ve never seen what I think would be an extremely useful study for addressing the credibility issue among teachers: one that looked at the relationship between value-added estimates and teachers’ opinions of each other.
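
To make the “match up” question concrete, here is a minimal sketch, entirely my own illustration (the numbers and the simple Pearson calculation come from neither the Tennessee system, the article, nor any actual study), of how one might check whether two evaluation components agree for the same set of teachers, whether the second component is observation scores or, as suggested above, colleagues’ ratings of one another.

```python
# Illustrative only: hypothetical scores for ten teachers, not real data.
# "Matching up" is read here as the correlation between each teacher's
# value-added estimate and that same teacher's score on a second measure.

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical value-added estimates (in test-score standard deviations).
value_added = [0.12, -0.05, 0.30, 0.01, -0.20, 0.15, 0.08, -0.10, 0.22, 0.00]
# Hypothetical scores on a second measure (e.g., a 1-4 observation rubric).
second_measure = [3.1, 2.6, 3.8, 2.9, 2.2, 3.4, 3.0, 2.5, 3.6, 2.8]

print(f"Correlation between the two measures: {pearson(value_added, second_measure):.2f}")
```

A correlation near zero would suggest the two components are capturing different things (or that one or both are noisy); a strong positive correlation would suggest they are at least partly tracking the same underlying quality.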

The Faulty Logic Of Using Student Surveys In Accountability Systems

In a recent post, I discussed the questionable value of student survey data to inform teacher evaluation models. Not only is there little research support for such surveys, but the very framing of the idea often reflects faulty reasoning.

A quote from a recent Educators 4 Excellence white paper helps to illustrate the point:

For a system that aims to serve students, young people’s interests are far too often pushed aside. Students’ voices should be at the forefront of the education debate today, especially when it comes to determining the effectiveness of their teacher.

This sounds noble, but, seriously, why should students’ opinions be "at the forefront of the education debate"? Are students’ needs better served when we ask them directly what they need? The research on this is fairly clear: no, not really.

Student Surveys of Teachers: Be Careful What You Ask For

Many believe that current teacher evaluation systems are a formality, a bureaucratic process that tells us little about how to improve classroom instruction. In New York, for example, 40 percent of all teacher evaluations must consist of student achievement data by 2013. Additionally, some are proposing the inclusion of alternative measures, such as “independent outside observations” and “student surveys.” Here, I focus on the latter.

Educators 4 Excellence (E4E), an “organization of education professionals who seek to provide an independent voice for educators in the debate surrounding education reform,” recently released a teacher evaluation white paper proposing that student surveys account for 10 percent of teacher evaluations.

The paper quotes a teacher saying: “For a system that aims to serve students, young people’s interests are far too often pushed aside. Students’ voices should be at the forefront of the education debate today, especially when it comes to determining the effectiveness of their teacher.” The authors argue that “the presence of effective teachers […] can be determined, in part, by the perceptions of the students that interact with them.” They also write that “student surveys offer teachers immediate and qualitative feedback, recognize the importance of student voice […].” In rare cases, the paper concedes, “students could skew their responses to retaliate against teachers or give high marks to teachers who they like, regardless of whether those teachers are helping them learn.”
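
As a purely hypothetical illustration of what a 10 percent weight would mean in practice (the 10 percent figure comes from the E4E proposal and the 40 percent achievement figure from the New York requirement mentioned above; the remaining 50 percent observation share and all of the component scores below are placeholders I invented), a composite evaluation score under such a scheme is simply a weighted average of its components.

```python
# Hypothetical composite teacher evaluation score.
# Weights: 40% student achievement (the New York figure cited above) and
# 10% student surveys (the E4E proposal); the 50% observation share and
# every component score below are invented placeholders for illustration.

weights = {
    "student_achievement": 0.40,
    "student_surveys": 0.10,
    "observations": 0.50,
}

# Component scores, each assumed to be normalized to a 0-100 scale.
scores = {
    "student_achievement": 72.0,
    "student_surveys": 85.0,
    "observations": 68.0,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"

composite = sum(weights[k] * scores[k] for k in weights)
print(f"Composite score: {composite:.1f}")  # 0.4*72 + 0.1*85 + 0.5*68 = 71.3
```

Even a 10 percent component can move a teacher across a rating threshold in a scheme like this, which is one reason the reliability of the survey component matters.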

But student evaluations are not new.