Quality Teachers Evaluation

Feedback is a research-based practice essential for improving performance. In schools it takes the form of formal evaluations, systematic observation, and coaching. Annual evaluation is the practice most preferred by principals, who view it as an opportunity to assess performance and improve a teacher’s skills. However, research suggests that, in reality, evaluation falls far short as a tool for staff improvement. It does a poor job of measuring teacher quality, and because it is heavily associated with accountability, it is ill suited to a development role. Since evaluations are annual events, feedback is delivered too infrequently to support continuous, sustained improvement. When evaluations do address instruction, they frequently rely on inferior sources of data such as short, unscheduled “walk-throughs.” Research shows that, when using such sources, principals are poor judges of teachers’ skills. Furthermore, reviews of evaluations identify inflated ratings for teachers across the board; even teachers in failing schools receive unjustifiably high scores. Teachers are better served by feedback practices based on formal observations and coaching, with the strongest research support for coaching.

Teacher Evaluation Overview

Cleaver, S., Detrich, R. & States, J. (2018). Overview of Teacher Evaluation. Oakland, CA: The Wing Institute. https://www.winginstitute.org/assessment-summative.

As students progress through school, many elements—home experiences, classroom instruction, and internal factors—influence their eventual outcomes. In the school environment, a teacher’s skills, strengths, and abilities have as much influence on student learning as student background (Wenglinsky, 2002). Put another way, teachers matter: teachers who are effective contribute to positive student outcomes and achievement (Johnson & Zwick, 1990; Nye, Konstantopoulos, & Hedges, 2004; Sanders, Wright, & Horn, 1997), so it is important to understand what effective teachers do that influences student outcomes. Equally important is providing teachers with information and feedback they can use to become better practitioners. That is where teacher evaluation comes in.

Teacher Evaluation

Teacher evaluation is conducted to ensure teacher quality and to promote professional learning with the goal of improving future performance (Danielson, 2010). A basic definition of teacher evaluation is the formal process used to review teacher performance and effectiveness in the classroom (Sawchuk, 2015). However, this definition is an oversimplification. In practice, teacher evaluation involves understanding and agreeing on the inputs (e.g., the practices that define quality teaching), outputs (e.g., student achievement measures), and methods of evaluation (e.g., student assessment data, teacher observation rubrics). The elements of evaluation are rarely agreed on (Goe, Bell, & Little, 2008). This overview provides information about teacher evaluation as it relates to collecting information about teacher practice and using it to improve student outcomes.

Teacher Evaluation for Improvement and Accountability

Teacher evaluation serves two purposes: improvement and accountability. Evaluation provides teachers with information that can improve their practice and serve as a starting point for professional development; for example, using information from teacher evaluations to set a plan of study for professional learning community (PLC) meetings. Evaluation provides accountability when information gained from the evaluation is used to guide decisions regarding bonuses, firing, and other human resource decisions (Santiago & Benavides, 2009).

There is an inherent tension between these two purposes. On one hand, when evaluation is framed as improvement but also carries accountability consequences, teachers may be reluctant to provide accurate information because of the risk of revealing weaknesses. On the other hand, when the focus is on accountability, teachers may feel insecure about their work (Santiago & Benavides, 2009). Goals around improvement may hinder the ability to use evaluation for accountability decisions, while goals around accountability may prevent or obscure improvement efforts. If the teacher evaluation process becomes too cumbersome or aversive for either the teacher or the evaluator, the process itself will be in jeopardy.

Summative and Formative Evaluation

Teacher evaluation can serve a summative or formative purpose. Summative evaluation provides a conclusive judgment of a teacher’s performance to determine how well that individual has done his or her work (Marzano, 2012). In this type of evaluation, a supervisor evaluates a teacher using a combination of measures that may include student test scores, lesson plans and artifacts, and rating scales or rubrics. Teachers are not involved in the process, and the results are used for accountability decisions such as pay awards or dismissal (Marzano, 2012).

            Formative evaluation provides ongoing information about teacher practice with the goal of providing feedback that helps teachers improve. Teachers are often involved in the process through self-reflection or self-assessment. The results of the evaluation may be used to give teachers feedback, and to make decisions regarding the professional development or coaching support that teachers receive (Sayavedra, 2014).  

History and Current State of Teacher Evaluation

In the early 20th century, the framework of scientific management, or the idea that every task can be broken down into its best and most efficient method, was applied to education (Marzano, Frontier, & Livingston, 2011). This started a focus on examining teacher behavior, providing suggestions for feedback, and evaluating effectiveness in the classroom (Marzano et al., 2011). Since World War II, the role of evaluation has evolved. Clinical supervision, popular in the 1960s and 1970s, was the first major trend. It involved a pre-observation conference, teacher observation, reflection, and analysis with a focus on classroom behaviors that directly impacted learning. In the 1980s, the Hunter lesson design, also called mastery teaching, was incorporated into observation and evaluation so that administrators observed a specific lesson sequence: anticipatory set, objective and purpose, input, model, checking for understanding, guided practice, and independent practice (Hunter, 1984).

            In the mid-1980s, alternatives to clinical supervision and mastery teaching were proposed. In these alternatives, the teacher became a core element in evaluation and principals were expected to differentiate observation and evaluation depending on teachers’ needs and experience (Marzano et al., 2011). Throughout the 1980s and 1990s, there was a shift away from structured observation, along with a move toward formal teacher evaluation (Marzano et al., 2011).

            One of these shifts was prompted by a RAND group study of 32 districts across the United States (Wise, Darling-Hammond, McLaughlin, & Bernstein, 1984). The RAND study concluded that there were four primary concerns regarding then-current evaluation: (a) Principals were not committed or able to provide accurate evaluations, (b) teachers were not open to receiving feedback, (c) evaluation practices were not uniform, and (d) evaluators were not trained (Wise et al., 1984). The RAND study also outlined the following recommendations for evaluation:

  • Evaluation systems should align with goals without being overly prescriptive.
  • Principals need time, training, and oversight to implement evaluations effectively.
  • An evaluation system should align with the overarching purpose (and a district may need multiple evaluations to align with multiple goals).
  • Resources need to be provided and allocated effectively.
  • Teachers need to be involved in the design, monitoring, and implementation of evaluation systems.

Throughout the 20th century, teacher evaluation was a district-level initiative, more focused on teacher behavior and administrative supervision. In the 21st century, teacher evaluation has become a focus of national policy, and the emphasis has shifted to evaluation of teacher quality and student achievement (Marzano et al., 2011).

In the late 2000s, two reports critiqued the teacher evaluation system and set the stage for the current conversation. First, Toch and Rothman’s report Rush to Judgment characterized teacher evaluation as “superficial and capricious” (2008, p. 1) and concluded that it did not measure student learning. And, despite No Child Left Behind requirements, Toch and Rothman found only 14 states that required annual teacher evaluations. Similarly, Weisberg, Sexton, Mulhern, and Keeling (2009), in The Widget Effect, found that fewer than 1% of 15,000 teachers in 12 districts and four states were rated “unsatisfactory” and that little action was taken based on results from teacher evaluations. The authors argued that districts were treating teachers as widgets, or interchangeable parts in a system, rather than as individual professionals with the potential to have an important impact on instructional effectiveness and student outcomes.

            This increased concern about how teacher evaluations were being conducted and used, along with legislation around teacher quality, focused state legislature attention on teacher evaluation (Goe, Holdheide, & Miller, 2011). The current conversation still focuses on how teacher evaluations are conducted; the impact of teacher evaluation on teacher effectiveness and student outcomes; and how results are used, for example, in professional development (Sawchuk, 2015).

Relevant Issues in Teacher Evaluation

Current issues in teacher evaluation revolve around core questions on how to design and implement an evaluation, including what framework to use, what to measure, and how to collect data.

Framework

A framework outlines the guiding principles for a teacher evaluation. It lends credibility to the system and assures that evaluators can confidently ascertain the quality of teachers (Danielson, 2010). That framework should include:

  • A clear definition of good teaching that is agreed on by everyone involved (Danielson, 2010).
  • An understanding of the purpose of the evaluation, which may be information gathering, accountability, or improvement, or any combination of the three (Goe et al., 2008).
  • A clear purpose that provides information about whether the evaluation is formative or summative, and how the results will be used (Goe et al., 2008).
  • An understanding of who is involved and how, the tools that will be used, and the stakeholders involved (Santiago & Benavides, 2009).

Measurement

Teacher quality is measured both quantitatively (e.g., student test scores) and qualitatively (e.g., notes on teacher professionalism). An analysis of 120 studies (Goe et al., 2008) identified qualitative elements of effective teachers:

  • Positive contribution to academic, attitudinal, and social outcomes for students
  • Comprehensive lesson planning, progress monitoring, and the capacity to adapt and evaluate instruction
  • Diversity and civic-mindedness
  • Collaboration with stakeholders (e.g., parents, administrators), particularly for students who are at risk (e.g., those with individualized education programs, or IEPs)

Once the elements that will be measured are clear, how to measure each aspect must be considered. While summative evaluations should include a comprehensive variety of measures that can provide a full picture of a teacher’s effectiveness, formative evaluations may include any range of measures used to collect enough information to serve the purpose of the evaluation. The measures used in formative evaluation may also be more teacher focused, including self-assessment, observation, peer mentoring, and coaching. When coaching and peer mentoring are used, it is important to consider training evaluators in how to deliver feedback that leads to improved teacher performance.

Another consideration for measurement is the reliability and validity of tools. Reliability is the degree to which a tool produces consistent and stable results. Tools used to measure teacher practices must be reliable and valid; they must provide information that is consistent across multiple evaluators and that measures teacher practice without measuring other factors at the same time. Likewise, tools used to gauge student outcomes must be valid, meaning that the scores must accurately measure the outcome without measuring anything else (Goe et al., 2008).
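To make inter-rater reliability concrete, the agreement between two evaluators scoring the same lessons on a four-level rubric can be quantified with a chance-corrected statistic such as Cohen’s kappa. The sketch below uses invented ratings purely for illustration; it is not data from any study cited here.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: share of lessons where the two raters match
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters independently pick the same level
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[k] / n) * (freq_b[k] / n) for k in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores (1 = unsatisfactory ... 4 = distinguished)
a = [3, 3, 2, 4, 3, 2, 1, 3, 4, 2]
b = [3, 2, 2, 4, 3, 2, 1, 3, 3, 2]
print(round(cohens_kappa(a, b), 2))  # → 0.71
```

Here the raters agree on 8 of 10 lessons, and kappa of about 0.71 indicates substantial agreement beyond chance; a reliable observation tool should yield kappa values in this range or higher across trained evaluators.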

            Blanton et al. (2003) outlined additional criteria that inform the usefulness of a measurement tool:

  • The ability to capture all aspects of a teacher’s effectiveness
  • The ability to capture the range of activities in a teacher’s work
  • Usefulness of the scores for a specific purpose
  • Feasibility, including the cost, training required, and other considerations
  • Credibility or the trust that the stakeholders have in the measure

Charlotte Danielson Framework for Teaching.

A common measure used for teacher evaluation is the Charlotte Danielson Framework for Teaching (Danielson, 1996, 2007), which includes an extensive rubric over four domains: planning and preparation, classroom environment, instruction, and professional responsibilities. Across these four domains, the rubric incorporates 76 elements of teaching broken into four levels of performance (unsatisfactory, basic, proficient, and distinguished). Over time and two iterations (1996 and 2007), the Danielson framework has become the primary tool for capturing teaching and learning (Marzano et al., 2011). The Danielson Framework for Teaching (Danielson, 1996) was intended to do three things:

  • Acknowledge the difficulty and complexity of teaching as a profession.
  • Create a language for professional engagement.
  • Provide a structure for teacher assessment and reflection.

Research conducted on the Danielson framework indicates acceptable reliability and validity (Lash, Tran, & Huang, 2016). When scores vary, the variance is largely attributable to the teacher rather than to other variables (Kane & Staiger, 2012; Kane, Taylor, Tyler, & Wooten, 2011). In other words, when a score differs from one evaluation to the next, such as when a teacher advances in the area of planning and preparation from fall to winter, the difference occurs because the teacher changed his or her practice, not because the tool was unclear. The reliability of achievement growth scores, by contrast, varies (Kane & Staiger, 2012; Lash et al., 2016). One study that used evaluations from 156 teachers across 18 high-poverty charter schools in the mid-Atlantic concluded that using multiple measures across a school year (in this case, three separate observations using the Danielson framework) provided a reliable measure (Kettler & Reddy, 2017).

Value-Added Measures

Value-added measures are a way to take into account the various conditions and factors that contribute to student achievement, across multiple years of teaching, and in comparison with other teachers. This way of calculating a teacher’s effectiveness was developed in the 2000s using statistical models that could estimate how much an individual teacher contributed to student learning (Goe et al., 2008).
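The core statistical idea can be sketched as a regression that attributes each student’s current score, after controlling for prior achievement, to an effect for the student’s teacher. This is a deliberately simplified illustration using simulated data, not any district’s actual model, and the teacher effects and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_teacher = 300
true_effects = [-0.3, 0.0, 0.4]  # hypothetical teacher effects, in SD units

teacher_ids, prior, current = [], [], []
for t, effect in enumerate(true_effects):
    p = rng.normal(0, 1, n_per_teacher)                        # prior-year scores
    c = 0.7 * p + effect + rng.normal(0, 0.5, n_per_teacher)   # current-year scores
    teacher_ids += [t] * n_per_teacher
    prior.append(p)
    current.append(c)
prior, current = np.concatenate(prior), np.concatenate(current)

# Design matrix: prior score plus one indicator column per teacher
dummies = [(np.array(teacher_ids) == t).astype(float) for t in range(len(true_effects))]
X = np.column_stack([prior] + dummies)
coefs, *_ = np.linalg.lstsq(X, current, rcond=None)
estimated = coefs[1:]  # value-added estimates, one per teacher
print(np.round(estimated, 2))
```

With enough students per teacher, the estimated effects recover the simulated ones; in practice the models are far more elaborate (multiple years, student covariates, shrinkage), which is part of why their interpretation is contested.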

Because they are removed from the immediate classroom experience and seem disconnected from what happens in classrooms, value-added measures are controversial (Goe et al., 2008). However, these measures do show reliability. A study by the Bill and Melinda Gates Foundation (2010) found that teachers whose students showed gains on one assessment were likely to show gains on related assessments that measured conceptual understanding. For example, a math teacher whose students scored high on the state math assessment was likely to have students who also demonstrated deep knowledge of the core principles of math. The correlation between teacher value-added measures on state tests and deeper understanding was higher for math (0.54) than for reading (0.37). Thus, teachers who produce strong value-added scores on state tests may also be developing students’ overarching skills and depth of knowledge about the subject.

As a summative measure, value-added measures provide an overarching look at a teacher’s impact over time. Yet, as a formative tool, they do not provide information about what high-performing teachers do that makes a difference in student learning (Goe et al., 2008). While value-added models are useful for identifying trends that can be used to make system improvements, multiple reports have recommended against using them for individual personnel decisions (American Statistical Association, 2014; Darling-Hammond et al., 2012; Polikoff & Porter, 2014). Specifically, the American Statistical Association cautioned against value-added measures because, among other reasons, they are based on only one measure (standardized test scores), and the models may not capture all the factors that contribute to the effect a teacher has on student outcomes.

Continuum of Research and Impact on Student Outcomes

Teacher evaluation is an established practice directed by state and federal law. However, we do not know the exact or full impact of teacher evaluation practices on student outcomes (e.g., Stecher et al., 2018). Some research has attempted to connect the practice of teacher evaluation with changes in student outcomes. In three notable large-scale studies, teacher evaluation was the practice of assessing teachers using a valid and reliable tool and providing feedback. These studies produced mixed results on student or school-level outcomes.

A quasi-experimental study of mid-career elementary and middle school teachers in the Cincinnati Public Schools Teacher Evaluation System (TES) examined teachers before, during, and after a year-long evaluation. The 105 teachers involved in the study taught fourth- through eighth-grade math. Evaluations, consisting of multiple structured classroom observations by trained peers and administrators, were conducted between the 2003–2004 and 2009–2010 school years using a rubric based on the Danielson Framework for Teaching (Danielson, 1996, 2007). Student achievement was compared before, during, and after the teacher’s evaluation year. Teachers were more effective in advancing student achievement in math during the year they were evaluated and in the years afterward. Specifically, a student taught by a teacher who had been through TES scored 11% of a standard deviation (4.5 percentile points for a median student) higher in math than a student taught by the same teacher before the evaluation. The study did not identify what aspect of teacher practice accounted for the difference in student achievement. The findings support the use of teacher evaluation to encourage continued growth in mid-career teachers’ performance and a connection to student achievement. Notably, performance improvement was greatest for the teachers who were weakest at the start of the evaluation (those who received low initial scores or who were ineffective in improving student test scores the year before evaluation). Teacher evaluation was thus a way for the teachers who needed the most support, those who scored lowest on initial evaluations and likely received the most critical feedback, to receive development (Taylor & Tyler, 2012a, 2012b).
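The reported conversion from an effect size to percentile points can be checked against the standard normal distribution: a median student who moves up 0.11 standard deviations lands at roughly the 54th percentile, a gain of about 4.4 points, consistent with the approximately 4.5 points the study reports. A minimal check, using only the standard library:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

effect_size = 0.11  # gain in standard-deviation units (Taylor & Tyler)
percentile_gain = (normal_cdf(effect_size) - 0.5) * 100
print(round(percentile_gain, 1))  # → 4.4
```

Note that this percentile interpretation holds only for a student starting at the median; the same 0.11 SD shift translates to fewer percentile points for students in the tails of the distribution.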

In another large-scale study, the Chicago Public Schools’ Excellence in Teaching Project was a teacher evaluation program focused on increasing student learning through principal-teacher conversation. A pilot study included 44 elementary schools in 2008–2009 and an additional 48 schools in 2009–2010. Principals in the first cohort received a total of 50 hours of support across the school year, with training and development in the Danielson framework, best practices in teacher observation and evidence collection, coaching, and implementation. Principals who joined the project in the second year received significantly less support. This difference in support across the two cohorts may have impacted the results. Short-term positive effects on reading performance were found in high-achieving, low-poverty schools, and schools that were in the first cohort performed higher in reading and math than schools in the second cohort. This study suggests that teacher evaluation systems produce different effects at different schools, and that teacher observation can have an impact on school performance (Steinberg & Sartain, 2015).

The Gates Foundation has been extensively involved in teacher evaluation as it relates to student achievement outcomes (Barnum, 2018). In 2018, the Gates Foundation released a cumulative study that reflected its work in three districts (Stecher et al., 2018). The Intensive Partnerships for Effective Teaching initiative was focused on increasing student performance by improving teaching effectiveness. The project started in 2009–2010 in three school districts (Hillsborough County Public Schools in Florida, Memphis City Schools, and Pittsburgh Public Schools) and four charter management organizations. Across multiple years, teaching effectiveness measures collected using a rubric were used to improve staffing, identify areas of development, strengthen professional development, and structure teacher advancement and compensation. The researchers hypothesized that with a strong teaching effectiveness evaluation system in place, teaching quality would increase and lead to greater academic outcomes for students in low-income, minority schools. The final report (Stecher et al., 2018) noted that school sites had implemented the teacher effectiveness practices (evaluation using an observation rubric and subsequent decision making), but the anticipated gains in student achievement and graduation rates were not realized, particularly for low-income minority students. At the end of the project (2014–2015), student achievement, access to effective teaching, and graduation rates in sites that had participated in the initiative did not differ from those in sites that had not. The reason was unclear, although the researchers hypothesized that an exclusive focus on teacher effectiveness may not be enough and that other factors may need to be addressed to produce dramatic improvements in student outcomes.

Implications

Teacher evaluation is a best practice that can be used to inform decisions when implemented with transparent processes and strong measures. The process of teacher evaluation produces some change in teacher practice that can impact student outcomes during and after the evaluation period (Taylor & Tyler, 2012a, 2012b). However, teacher evaluation may have different impacts on schools with varying demographics and baseline achievement levels (Steinberg & Sartain, 2015). Finally, formative evaluation can provide clear, objective feedback and a structure for collecting and using data to show teachers how they are changing performance, and, in that way, serve as professional development to support low-performing teachers (Taylor & Tyler, 2012a, 2012b).

Cost-Benefit of Teacher Evaluation.

The cost-benefit of teacher evaluation encompasses many considerations including student learning outcomes, information gathered, and the ability to make decisions with the information (Peterson, 2000). It is likely that the benefits and costs will be specific to a school or district.  

For example, one study of the cost to start a teacher evaluation system across three districts found that it ranged from $8 to $115 per student, which equated to between 0.4% and 0.5% of total district spending, and between 1% and 1.3% of teacher compensation (Chambers, Brodziak de los Reyes, & O’Neil, 2013). The researchers concluded that their figures did not reflect all potential costs and that the cost of actual implementation might be higher.

Conclusion

Currently, teacher evaluation is understood as a form of professional development. The goal is to establish a rigorous and fair system that can be used to make decisions related to hiring, firing, and promotion, and that can improve teacher practice and student learning (Bill and Melinda Gates Foundation, 2012). This is no easy task as evidenced by the mixed results for large-scale studies that have examined the impact of teacher evaluation on student achievement (Stecher et al., 2018; Steinberg & Sartain, 2015; Taylor & Tyler, 2012a, 2012b).

As a practice, teacher evaluation is an established way to gather information about how teachers are performing in the classroom and is already incorporated into the expectations and day-to-day work of school administrators. With current measures (e.g., the Danielson Framework for Teaching), it is possible to collect reliable and valid data related to teacher performance and use that data to design professional development targeted at teacher needs. With rigorous measures and quality implementation, teacher evaluation, especially formative evaluation, is a tool that, ideally, can be used to improve teacher quality over time.

Citations

American Statistical Association. (2014, April 8). ASA statement on using value-added models for educational assessment. Retrieved from https://www.scribd.com/document/217916454/ASA-VAM-Statement-1 

Barnum, M. (2018, June 21). The Gates Foundation bet big on teacher evaluation. The report it commissioned explains how those efforts fell short. Chalkbeat. Retrieved from https://www.chalkbeat.org/posts/us/2018/06/21/the-gates-foundation-bet-big-on-teacher-evaluation-the-report-it-commissioned-explains-how-those-efforts-fell-short/

Bill and Melinda Gates Foundation. (2010). Learning about teaching: Initial findings from the measures of effective teaching project. Retrieved from https://docs.gatesfoundation.org/documents/preliminary-findings-research-paper.pdf

Bill and Melinda Gates Foundation. (2012). Gathering feedback on teaching: Combining high-quality observation with student surveys and achievement gains. Retrieved from http://k12education.gatesfoundation.org/resource/gathering-feedback-on-teaching-combining-high-quality-observations-with-student-surveys-and-achievement-gains-2/

Blanton, L. P., Sindelar, P. T., Correa, V., Harman, M., McDonnell, J., & Kuhel, K. (2003). Conceptions of beginning teacher quality: Models for conducting research (COPSSE Doc. No. RS-6). Gainesville, FL: Center on Personnel Studies in Special Education (COPSSE), University of Florida. Retrieved from http://copsse.education.ufl.edu//docs/RS-6/1/RS-6.pdf

Chambers, J., Brodziak de los Reyes, I., & O’Neil, C. (2013). How much are districts spending to implement teacher evaluation systems? Case studies of Hillsborough County Public Schools, Memphis City Schools, and Pittsburgh Public Schools. Santa Monica, CA: RAND Corporation. Retrieved from https://www.rand.org/content/dam/rand/pubs/working_papers/WR900/WR989/RAND_WR989.pdf

Danielson, C. (1996, 2007). Enhancing professional practice: A framework for teaching (1st and 2nd eds.). Alexandria, VA: ASCD.

Danielson, C. (2010). Evaluations that help teachers learn. Educational Leadership, 68(4), 35–39. Retrieved from http://www.ascd.org/publications/educational-leadership/dec10/vol68/num04/Evaluations-That-Help-Teachers-Learn.aspx

Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation: Popular modes of evaluating teachers are fraught with inaccuracies and inconsistencies, but the field has identified better approaches. Phi Delta Kappan, 93(6), 8–15. Retrieved from https://www.edweek.org/ew/articles/2012/03/01/kappan_hammond.html

Goe, L., Bell, C., & Little, O. (2008). Approaches to evaluating teacher effectiveness: A research synthesis. Washington, DC: National Comprehensive Center for Teacher Quality. Retrieved from https://eric.ed.gov/?id=ED521228

Goe, L., Holdheide, L., & Miller, T. (2011). A practical guide to designing comprehensive teacher evaluation systems: A tool to assist in the development of teacher evaluation systems. Washington, DC: National Comprehensive Center for Teacher Quality. Retrieved from https://files.eric.ed.gov/fulltext/ED520828.pdf

Hunter, M. (1984). Knowing, teaching, and supervising. In P. Hosford (Ed.), Using what we know about teaching (pp. 169–192). Alexandria, VA: ASCD.

Johnson, E. G., & Zwick, R. (1990). Focusing the new design: The NAEP 1988 technical report. Journal of Educational and Behavioral Studies, 17, 95–109.

Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Seattle, WA: Bill and Melinda Gates Foundation.

Kane, T. J., Taylor, E. S., Tyler, J. H., & Wooten, A. L. (2011). Identifying effective classroom practices using achievement data. Journal of Human Resources, 46(3), 587–613.

Kettler, R. J., & Reddy, L. A. (2017). Using observational assessment to inform professional development decisions: Alternative scoring for the Danielson Framework for Teaching. Assessment for Effective Intervention, 1–12.

Lash, A., Tran, L., & Huang, M. (2016). Examining the validity of ratings from a classroom observation instrument for use in a district’s teacher evaluation system (REL 2016-135). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory West.

Marzano, R. J. (2012). Teacher evaluation: What’s fair? What’s effective? The two purposes of teacher evaluation. Educational Leadership, 70(3), 14–19. Retrieved from http://www.ascd.org/publications/educational-leadership/nov12/vol70/num03/The-Two-Purposes-of-Teacher-Evaluation.aspx

Marzano, R., Frontier, T., & Livingston, D. (2011). Effective supervision: Supporting the art and science of teaching. Alexandria, VA: ASCD.

Nye, B., Konstantopoulos, S., & Hedges, L. V. (2004). How large are teacher effects? Educational Evaluation and Policy Analysis, 26(3),237–257.

Peterson, K. D. (2000). Teacher evaluation: A comprehensive guide to new directions and practices (2nd ed.). Thousand Oaks, CA: Corwin Press.

Polikoff, M. S., & Porter, A. C. (2014). Instructional alignment as a measure of teacher quality. Educational Evaluation and Policy Analysis, 64(3), 212–225. Retrieved from http://www.aera.net/Newsroom/Recent-AERA-Research/Instructional-Alignment-as-a-Measure-of-Teaching-Quality

Sanders, W. L., Wright, S. P., & Horn, S. P. (1997). Teacher and classroom context effects on student achievement: Implications for teacher evaluation. Journal of Personnel Evaluation and Education, 11(1), 57–67.

Santiago, P., & Benavides, F. (2009). Teacher evaluation: A conceptual framework and examples of country practices. Organisation for Economic Co-operation and Development (OECD). Retrieved from http://www.oecd.org/education/school/44568106.pdf

Sawchuk, S. (2015, September 3). Teacher evaluation: An issue overview. Education Week. Retrieved from www.edweek.org/ew/section/multimedia/teacher-performance-evaluation-issue-overview.html

Sayavedra, M. (2014). Teacher evaluation. ORTESOL Journal, 31, 1–9.

Stecher, B. M., Holtzman, D. J., Garet, M. S., Hamilton, L. S., Engberg, J., Steiner, E. D.,…Chambers, J. (2018). Improving teaching effectiveness: Final report: The Intensive Partnerships for Effective Teaching through 2015–2016. Santa Monica, CA: RAND Corporation. Retrieved from https://www.rand.org/pubs/research_reports/RR2242.html

Steinberg, M. P., & Sartain, L. (2015). Does teacher evaluation improve school performance? Experimental evidence from Chicago’s Excellence in Teaching project. Education Finance and Policy, 10(4), 535–572.

Taylor, E. S., & Tyler, J. H. (2012a). Can teacher evaluation improve teaching? Evidence of systematic growth in the effectiveness of midcareer teachers. Education Next, 12(4). Retrieved from http://educationnext.org/can-teacher-evaluation-improve-teaching/

Taylor, E. S., & Tyler, J. H. (2012b). The effect of evaluation on teacher performance. American Economic Review, 102(7), 3628–3651.

Toch, T., & Rothman, R. (2008). Rush to judgment: Teacher evaluation in public education. Washington, DC: Education Sector. Retrieved from https://eric.ed.gov/?id=ED502120

Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. New York, NY: The New Teacher Project. Retrieved from https://tntp.org/publications/view/the-widget-effect-failure-to-act-on-differences-in-teacher-effectiveness

Wenglinsky, H. (2002). The link between teacher classroom practices and student academic performance. Education Policy Analysis Archives, 10(12).

Wise, A. E., Darling-Hammond, L., Tyson-Bernstein, H., & McLaughlin, M. W. (1984). Teacher evaluation: A study of effective practices. Santa Monica, CA: RAND Corporation. Retrieved from https://www.rand.org/pubs/reports/R3139.html

 

 

Publications

TITLE
SYNOPSIS
CITATION
Overview of Teacher Evaluation

Teachers contribute to student achievement. As a practice, teacher evaluation has developed over time. Today, the focus of teacher evaluation is to determine the impact of teaching on student outcomes and to support professional development. Research on teacher evaluation has produced mixed results. This overview provides information about teacher evaluation as it relates to collecting information about teacher practice and using it to improve student outcomes. The history of teacher evaluation and current research findings and implications are included.

Cleaver, S., Detrich, R. & States, J. (2018). Overview of Teacher Evaluation. Oakland, CA: The Wing Institute. https://www.winginstitute.org/assessment-summative.

Introduction: Proceedings from the Wing Institute’s Fifth Annual Summit on Evidence-Based Education: Education at the Crossroads: The State of Teacher Preparation.

This article shared information about the Wing Institute and demographics of the Summit participants. It introduced the Summit topic, sharing performance data on past efforts of school reform that focused on structural changes rather than teaching improvement. The conclusion is that the system has spent enormous resources with virtually no positive results. The focus needs to be on teaching improvement.

Keyworth, R., Detrich, R., & States, J. (2012). Introduction: Proceedings from the Wing Institute’s Fifth Annual Summit on Evidence-Based Education: Education at the Crossroads: The State of Teacher Preparation. In Education at the Crossroads: The State of Teacher Preparation (Vol. 2, pp. ix–xxx). Oakland, CA: The Wing Institute.

Effective Teachers Make a Difference

This analysis examines the available research on effective teaching, how to impart these skills, and how best to transition teachers from preservice to the classroom, with an emphasis on improving student achievement. It reviews current preparation practices and examines the research evidence on how well they are preparing teachers.

States, J., Detrich, R., & Keyworth, R. (2012). Effective Teachers Make a Difference. In Education at the Crossroads: The State of Teacher Preparation (Vol. 2, pp. 1–46). Oakland, CA: The Wing Institute.

Performance Feedback in Education: On Who and For What
This paper reviews the importance of feedback in education through the lens of the scientific model of behavior change (antecedent, behavior, consequence).
Daniels, A. (2013). Feedback in Education: On Whom and for What. In Performance Feedback: Using Data to Improve Educator Performance (Vol. 3, pp. 77-95). Oakland, CA: The Wing Institute.

 

Data Mining

TITLE
SYNOPSIS
CITATION
What is the relationship between teacher working conditions and school performance?
This inquiry looks at the effect of time on the job and the quality of a teacher's skills.
Keyworth, R. (2010). What is the relationship between teacher working conditions and school performance? Retrieved from what-is-relationship-between882.
Are teacher preparation programs teaching formative assessment?
This probe looks at research on teacher preparation programs' efforts to provide teachers with instruction in formative assessment.
States, J. (2010). Are teacher preparation programs teaching formative assessment? Retrieved from are-teacher-preparation-programs.

 

Presentations

TITLE
SYNOPSIS
CITATION
Performance Feedback: Use It or Lose It

This paper examines the importance of performance feedback systems at the school, staff, and student levels in achieving desired outcomes over time.

Keyworth, R. (2011). Performance Feedback: Use It or Lose It [Powerpoint Slides]. Retrieved from 2011-aba-presentation-randy-keyworth.

Performance Feedback in Education: On Who and For What
This paper reviews the importance of feedback in education through the lens of the scientific model of behavior change (antecedent, behavior, consequence).
Daniels, A. (2011). Performance Feedback in Education: On Who and For What [Powerpoint Slides]. Retrieved from 2011-wing-presentation-aubrey-daniels.
Care Enough to Count: Measuring Teacher Performance
What teachers do with students is important. It should be measured to assure that they are doing the important things.
Detrich, R. (2013). Care Enough to Count: Measuring Teacher Performance [Powerpoint Slides]. Retrieved from 2013-aba-presentation-karen-hager.
ROKs: Remote Observation Kits
This paper presents a teacher coaching model using high quality audio and video technology to address the needs of teacher training in remote areas.
Hager, K. (2013). ROKs: Remote Observation Kits [Powerpoint Slides]. Retrieved from 2013-wing-presentation-karen-hager.
Teacher Induction: Where the Rubber Meets the Road
The paper examines one of the most critical components of teacher training: an on-the-job, ongoing system of coaching and performance feedback to improve skill acquisition, generalization, and maintenance.
Keyworth, R. (2010). Teacher Induction: Where the Rubber Meets the Road [Powerpoint Slides]. Retrieved from 2010-aba-presentation-randy-keyworth.
Teacher Coaching: The Missing Link in Teacher Professional Development
Research suggests that coaching is one of the most effective strategies in training teachers. This paper identifies the critical practice elements of coaching and their absence in teacher training.
Keyworth, R. (2013). Teacher Coaching: The Missing Link in Teacher Professional Development [Powerpoint Slides]. Retrieved from 2013-calaba-presentation-randy-keyworth.
Project AIM: Assess, Improve & Maintain Effective Teaching Practices
This paper shares a model for teacher assessment and professional development that addresses the needs of large school districts in an effective and efficient manner.
Lewis, T. (2013). Project AIM: Assess, Improve & Maintain Effective Teaching Practices [Powerpoint Slides]. Retrieved from 2013-wing-presentation-teri-lewis.
TITLE
SYNOPSIS
CITATION
Pushing the horizons of student teacher supervision: Can a bug-in-ear system be an effective plug-and-play tool for a novice electronic coach to use in student teacher supervision?

The National Council for Accreditation of Teacher Education has called for strengthening teacher preparation by incorporating more fieldwork. Supervision with effective instructional feedback is an essential component of meaningful fieldwork, and immediate feedback has proven more efficacious than delayed feedback. Rock and her colleagues have developed the wireless Bug-in-Ear (BIE) system to provide immediate, online feedback from a remote location (electronic coaching or e-coaching), and they have pioneered the use of BIE e-coaching (BIE2 coaching) in coaching teachers in graduate education. Other research has also documented successful use of the BIE system with teachers. This case study explored the use of the BIE tool for undergraduate student teacher supervision in the hands of a novice BIE2 coach, including the ease with which BIE equipment can be set up and operated by a novice coach and naïve users in the classroom. The findings provide support for the use of BIE2 coaching as a tool for undergraduate student teacher supervision, based on the changed behaviors during reading instruction exhibited by two out of three student teacher participants.

Almendarez, M. B., Zigmond, N., Hamilton, R., Lemons, C., Lyon, S., McKeown, M., Rock, M. (2012). Pushing the horizons of student teacher supervision: Can a bug-in-ear system be an effective plug-and-play tool for a novice electronic coach to use in student teacher supervision? ProQuest Dissertations and Theses.


Enhancing Adherence to a Problem Solving Model for Middle-School Pre-Referral Teams: A Performance Feedback and Checklist Approach

This study looks at the use of performance feedback and checklists to improve middle-school teams' problem solving.

Bartels, S. M., & Mortenson, B. P. (2006). Enhancing adherence to a problem-solving model for middle-school pre-referral teams: A performance feedback and checklist approach. Journal of Applied School Psychology, 22(1), 109-123.

Do Principals Know Good Teaching When They See It?

This article examines the effectiveness and related issues of current methods of principal evaluation of teachers.

Burns, M. (2011). Do principals know good teaching when they see it? Educational Policy, 19(1), 155–180.

The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood

Are teachers' impacts on students' test scores ("value-added") a good measure of their quality? This question has sparked debate largely because of disagreement about (1) whether value-added (VA) provides unbiased estimates of teachers' impacts on student achievement and (2) whether high-VA teachers improve students' long-term outcomes. We address these two issues by analyzing school district data from grades 3-8 for 2.5 million children linked to tax records on parent characteristics and adult outcomes. We find no evidence of bias in VA estimates using previously unobserved parent characteristics and a quasi-experimental research design based on changes in teaching staff. Students assigned to high-VA teachers are more likely to attend college, attend higher-ranked colleges, earn higher salaries, live in higher SES neighborhoods, and save more for retirement. They are also less likely to have children as teenagers. Teachers have large impacts in all grades from 4 to 8. On average, a one standard deviation improvement in teacher VA in a single grade raises earnings by about 1% at age 28. Replacing a teacher whose VA is in the bottom 5% with an average teacher would increase the present value of students' lifetime income by more than $250,000 for the average classroom in our sample. We conclude that good teachers create substantial economic value and that test score impacts are helpful in identifying such teachers.

 

Chetty, R., Friedman, J. N., & Rockoff, J. E. (2011). The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood (Working Paper 17699). Cambridge, MA: National Bureau of Economic Research.

Effects of immediate performance feedback on implementation of behavior support plans, 2005

The purpose of this study is to examine the effects of feedback on treatment integrity for implementing behavior support plans.

Codding, R. S., Feinberg, A. B., Dunn, E. K., & Pace, G. M. (2005). Effects of immediate performance feedback on implementation of behavior support plans. Journal of Applied Behavior Analysis, 38(2), 205-219.

Leading for Instructional Improvement: How Successful Leaders Develop Teaching and Learning Expertise

This book shows how principals and other school leaders can develop the skills necessary to support teachers in delivering high-quality instruction, introducing principals to a five-part model of effective instruction.

Fink, S., & Markholt, A. (2011). Leading for instructional improvement: How successful leaders develop teaching and learning expertise. John Wiley & Sons.

Effective Instructional Time Use for School Leaders: Longitudinal Evidence from Observations of Principals

This study examines principals’ time spent on instructional functions. The results show that traditional walk-throughs have little impact, but that principals who provide coaching and evaluation and who focus on educational programs can make a difference.

Grissom, J. A., Loeb, S., & Master, B. (2013). Effective Instructional Time Use for School Leaders: Longitudinal Evidence from Observations of Principals. Educational Researcher, 42(8), 433-444.

Visible learning

This influential book is the result of 15 years research that includes over 800 meta-analyses on the influences on achievement in school-aged students. This is a great resource for any stakeholder interested in conducting a serious search of evidence behind common models and practices used in schools.

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London, England: Routledge.

Visible Learning for Teachers: Maximizing Impact on Learning

This book takes over fifteen years of rigorous research into education practices and provides teachers in training and in-service teachers with concise summaries of the most effective interventions and offers practical guidance to successful implementation in classrooms.

Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. Routledge.

The Power of Feedback

This paper provides a conceptual analysis of feedback and reviews the evidence related to its impact on learning and achievement.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

Impact of performance feedback delivered via electronic mail on preschool teachers’ use of descriptive praise.

This paper examined the effects of a professional development intervention that included data-based performance feedback delivered via electronic mail (e-mail) on preschool teachers’ use of descriptive praise and whether increased use of descriptive praise was associated with changes in classroom-wide measures of child engagement and challenging behavior. 

Hemmeter, M. L., Snyder, P., Kinder, K., & Artman, K. (2011). Impact of performance feedback delivered via electronic mail on preschool teachers’ use of descriptive praise. Early Childhood Research Quarterly, 26(1), 96–109.

Can Principals Identify Effective Teachers? Evidence on Subjective Performance Evaluation in Education

This paper examines how well principals can distinguish between more and less effective teachers. To put principal evaluations in context, we compare them with the traditional determinants of teacher compensation-education and experience-as well as value-added measures of teacher effectiveness.

Jacob, B. A., & Lefgren, L. (2008). Can principals identify effective teachers? Evidence on subjective performance evaluation in education. Journal of Labor Economics, 26(1), 101-136.

Measuring What Matters: A Stronger Accountability Model for Teacher Education
This report proposes an accountability system to regulate teacher preparation programs in essential areas: student learning, classroom teaching skills, graduates' commitment to the profession, feedback from graduates and employers, and tests of teacher knowledge and skills.
Crowe, E. (2010). Measuring What Matters: A Stronger Accountability Model for Teacher Education. Online Submission.
Approaches to Evaluating Teacher Effectiveness: A Research Synthesis
This research synthesis examines how teacher effectiveness is currently measured (i.e., formative vs. summative evaluation).
Goe, L., Bell, C., & Little, O. (2008). Approaches to Evaluating Teacher Effectiveness: A Research Synthesis. National Comprehensive Center for Teacher Quality.
A Practical Guide to Designing Comprehensive Teacher Evaluation Systems: A Tool to Assist in the Development of Teacher Evaluation Systems
This guide is a tool designed to assist states and districts in constructing high-quality teacher evaluation systems in an effort to improve teaching and learning.
Goe, L., Holdheide, L., & Miller, T. (2011). A Practical Guide to Designing Comprehensive Teacher Evaluation Systems: A Tool to Assist in the Development of Teacher Evaluation Systems. National Comprehensive Center for Teacher Quality.
Supporting Principals in Implementing Teacher Evaluation Systems
With so much emphasis being placed on improving teacher performance, the National Association of Elementary School Principals and the National Association of Secondary School Principals have developed recommendations to help principals evaluate teachers more effectively.
Grissom, J. A., Loeb, S., & Master, B. (2013). Effective Instructional Time Use for School Leaders: Longitudinal Evidence from Observations of Principals. Educational Researcher, 42(8), 433-444.
2011 State Teacher Policy Yearbook: National Summary
This is a national analysis of each state’s performance against and progress toward a set of 36 specific, research-based teacher policy goals aimed at helping states build a comprehensive policy of teacher effectiveness.
Jacobs, S., Brody, S., Doherty, K., & Michele, K. (2011). 2011 State Teacher Policy Yearbook: National Summary. National Council on Teacher Quality.
Toward effective supervision: An operant analysis and comparison of managers at work, 1986
This study finds that performance monitoring is the factor that separated good managers from ineffective managers.
Komaki, J. L. (1986). Toward effective supervision: An operant analysis and comparison of managers at work. Journal of Applied Psychology, 71(2), 270.
Beyond effective supervision: Identifying key interactions between superior and subordinate
This paper examines the effects of supervisory performance monitoring.
Komaki, J. L., & Citera, M. (1990). Beyond effective supervision: Identifying key interactions between superior and subordinate. The Leadership Quarterly, 1(2), 91-105.
Development of an operant-based taxonomy and observational index of supervisory behavior, 1986
This paper provides a taxonomy and observational instrument for seven categories of supervisory behavior.
Komaki, J. L., Zlotnick, S., & Jensen, M. (1986). Development of an operant-based taxonomy and observational index of supervisory behavior. Journal of Applied Psychology, 71(2), 260.
A National View of Certification of School Principals: Current and Future Trends
This paper focuses on two questions: (a) What patterns in certification currently exist across the states? and (b) What might these current patterns indicate for the future of school principal certification?
LeTendre, B. G., & Roberts, B. (2005). A national view of certification of school principals: Current and future trends. Paper presented at the University Council for Educational Administration Convention, Nashville, TN.
Implementing Data-Informed Decision Making in Schools-Teacher Access, Supports and Use
This paper documents education data systems and data-informed decision making in districts and schools. It examines implementation and the practices involving the use of data to improve instruction.
Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing Data-Informed Decision Making in Schools: Teacher Access, Supports and Use. US Department of Education.
School, Teacher, and Leadership Impacts on Student Achievement
This brief is a meta-analysis of quantitative research on teacher, school, and leadership practices.
Miller, K. (2003). School, teacher, and leadership impacts on student achievement. Policy Briefs. Mid-Continent Research for Education and Learning. Leadership for School Improvement, Aurora, Colorado.
TITLE
SYNOPSIS
CALDER: Longitudinal Data in Education Research
CALDER is a National Research and Development Center that uses longitudinal state and district data on students and teachers to examine the effects of real policies and practices on students' learning gains over time.
Center for Educational Leadership
The Center for Educational Leadership provides research and training in teaching effectiveness and school leadership.
Consortium for Policy Research in Education (CPRE)
CPRE looks at issues of teacher compensation, school finance, and principal evaluation for PK20.
Council of Chief State School Officers (CCSSO)
CCSSO is a nonpartisan, nationwide, nonprofit organization of public officials who head departments of elementary and secondary education in the states, provides leadership, advocacy, and technical assistance on major educational issues.
Joyce Foundation
The Joyce Foundation invests in and focuses on today's most pressing problems while also informing the public policy decisions critical to creating opportunity and achieving long-term solutions. The work is based on sound research and is focused on where it can add the most value.
K-12 Education: Gates Foundation
K-12 Education works to make sure tools, curriculum, and supports are designed using teacher insights.
Measures of Effective Teaching (MET) Project
The MET project was designed to find out how teacher evaluation methods could best be used to improve teaching quality.
National Council on Teacher Quality (NCTQ)

The National Council on Teacher Quality works to achieve fundamental changes in the policy and practices of teacher preparation programs, school districts, state governments, and teachers unions.

New Teacher Center
The New Teacher Center provides research, policy analyses, training and support for improving new teacher support and induction.
Stanford Center for Opportunity Policy in Education (SCOPE)
The Stanford Center for Opportunity Policy in Education (SCOPE) was founded in 2008 to foster research, policy, and practice to advance high quality, equitable education systems in the United States and internationally.