Selection Versus Program Effects In Teacher Prep Value-Added


Nice post, Matt. If one is evaluating teacher prep programs (or any programs, really), "value-added" — nonexperimental estimates of impact based on OLS with a few covariates — is probably one of the weaker designs. It's better than nothing, but not as good as an experimental or quasi-experimental design that takes selection seriously.
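To make the selection concern concrete, here is a minimal, purely hypothetical simulation of a value-added-style OLS estimate. All names and numbers are invented for illustration: a program's recruits are assumed to be more talented on average (selection), and the program also adds a genuine training effect; a simple regression of gains on a program dummy and a covariate recovers the sum of the two, not the training effect alone.

```python
# Toy sketch (assumed setup, not any actual study's model): OLS "value-added"
# conflating selection (who a program recruits) with training (what it adds).
import numpy as np

rng = np.random.default_rng(0)
n = 5000

program_a = rng.integers(0, 2, n)             # 1 = Program A, 0 = comparison
talent = rng.normal(0.3 * program_a, 1.0)     # assumed selection effect = 0.3
training = 0.2 * program_a                    # assumed true training effect = 0.2

# Observed student-achievement gain attributed to each teacher.
prior_score = rng.normal(0, 1, n)             # a typical covariate
gain = talent + training + 0.1 * prior_score + rng.normal(0, 1, n)

# Value-added-style OLS: gains on a program dummy plus the covariate.
X = np.column_stack([np.ones(n), program_a, prior_score])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)

# The program coefficient picks up selection + training (about 0.5 here),
# because talent is unobserved and correlated with program membership.
print(round(float(beta[1]), 2))
```

The point is not that the estimate is useless — it does summarize "getting quality people into classrooms" — but that it cannot, by itself, separate recruitment from preparation.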

I agree that the distinction between selection and preparation effects matters for some purposes, but I think you may be overstating it a bit. Even if the goal is to "hold prep programs accountable for their results," arguably who you're able to consistently recruit is part of your "results" as a prep program. From a public policy point of view, much of what you care about is not "program value-added" per se, but rather "getting quality people into classrooms." A prep program could accomplish that by attracting talented people or by doing an especially good job of training the less talented, but ultimately, if the teachers it's putting out are consistently below average, there's a public policy case to be made for "punishing" it (in one way or another).

