This commentary addresses concerns about the application of value-added modeling, commonly used to evaluate individual teachers, to graduates of teacher preparation programs. The authors raise questions of validity when outcome-based performance metrics for individual teachers are used to measure the effectiveness of teacher preparation programs in a value-added analysis, and they take particular issue with small sample sizes as a specific weakness in these studies. Koedel and Parsons accept that outcome-based accountability for teacher preparation programs has a place in raising teacher quality and that it will eventually result in significant improvements to the quality of teacher training, but they argue that decision makers should remain skeptical about using outcome-based rankings until current issues are resolved.
Their critique focuses on research ranking teacher preparation programs in the state of Missouri, which reveals differences in quality that are small and largely undetectable among most programs using current methods. The authors express concern that ranking programs may lead to poor decision-making, ranging from employers over-valuing particular programs when hiring to regulators prematurely denouncing or praising programs, when in fact the differences in rankings, although they may be statistically significant, are far from socially significant.