Categories for Quality Leadership
October 4, 2021
Overview of Professional Judgment. Educators make many decisions regarding services for students. Even when there is abundant evidence to guide their decisions, educators must use their judgment about what is appropriate in a given situation. Only on rare occasion does the available evidence perfectly match the service context of concern to the educator. To bridge the gap between research and local circumstance, the educator must make a series of judgments such as defining the problem, determining which evidence is relevant, and deciding which features of the local context are likely to require adaptations to the selected evidence-based intervention. Professional judgment is a cornerstone of evidence-based practice, as are best available evidence, stakeholder values, and the context in which services are provided. In this definition of evidence-based practice, the integration of these variables influences decisions. No one cornerstone can be substituted for the others. Judgment must be informed and constrained by the best available evidence, stakeholder values, and context.
Citation: Guinness, K., & Detrich, R. (2021). Overview of Professional Judgment. Oakland, CA: The Wing Institute. https://www.winginstitute.org/evidence-based-decision-making-professional-judgment.
August 31, 2021
Principal Evaluation. The field of principal evaluation, while attracting increased research interest in recent years, lags behind teacher evaluation in the conclusions that can be drawn about effective practice. Prior to Race to the Top and ESEA waivers, principal evaluation was implemented inconsistently: evaluation systems lacked valid and reliable instruments, had a tenuous relationship with leadership standards, failed to include measures of student and school outcomes, and mixed their purposes (e.g., sometimes providing formative information to help principals improve, other times summative information for personnel decisions). Today’s evaluation systems have evolved to incorporate multiple measures of principal performance, evaluate principals on research-based principles of effective leadership, often include student outcomes (though this remains controversial), and are used both to help principals improve and to hold them accountable for their performance. Ongoing and more frequent observations, often conducted by the principal supervisor, who frequently also serves as a coach/mentor and directs the principal toward needed professional learning, show promise as an effective practice. Using the results of principal evaluations for personnel decisions, such as offering incentives through pay-for-performance programs, has yielded mixed results and warrants further research attention.
Citation: Donley, J., Detrich, R., States, J., & Keyworth, R. (2021). Principal Evaluation. Oakland, CA: The Wing Institute. https://www.winginstitute.org/quality-leadership-principal-evaluation
August 25, 2021
Driven by Data: Using Licensure Tests to Build a Strong, Diverse Teacher Workforce. Essential to improving educational outcomes for students is ensuring that well-prepared teachers are in every classroom. Teacher preparation programs are primarily responsible for preparing candidates. One measure of how well institutions are preparing teachers is the percentage of candidates who pass state licensure tests. The National Council on Teacher Quality (https://www.nctq.org/) recently released a report examining the pass rates of elementary education teacher candidates by state and by de-identified teacher preparation institution, with data disaggregated by race/ethnicity and socioeconomic status. Different states have different standards, rely on different methods to assess performance, and set different criteria for passing scores. Thirty-four states provided complete data for this report, eight provided partial data, and nine states provided no data. Based on the available data, nationally 55% of teacher candidates failed the exam on their first try. The data vary considerably across states and across institutions within and across states. One of the conclusions of this report is that elementary teacher candidates, regardless of race and ethnicity, are “too often poorly prepared and supported to pass their state licensure tests.” The authors of the report identified a number of issues with how states are currently assessing teacher competency. The report concludes with a number of recommendations for improving teacher preparation programs so that more teachers pass the licensure test. These data are directly relevant to the competency implementation driver in the Active Implementation Frameworks. Implementation efforts are not likely to be successful if competent personnel are not available to implement the innovation. Competency is primarily the responsibility of the teacher preparation programs.
These programs would be well served to attend to the recommendations of this report. In addition, education policy makers should review their state’s current methods for assessing the competency of teacher candidates.
Citation: Putman, H. & Walsh, K. (2021). Driven by Data: Using Licensure Tests to Build a Strong, Diverse Teacher Workforce. Washington, D.C.: National Council on Teacher Quality
August 25, 2021
Principals’ Perceptions of Influence Over Decisions at Their Schools in 2017-2018. Under contemporary models of principal leadership, principals are expected to be the instructional leaders in their schools. At least two questions emerge from this expectation: (1) Do principals have the ability to influence instructional decisions in their schools? (2) Do principals have the necessary training to base instructional decisions on the best available evidence? A recent report from the National Center for Education Statistics at IES (July 2021) assessed the degree to which principals in traditional public schools, private schools, and charter schools felt they had influence over decisions across a number of domains of school leadership. The degree of influence they reported was related to the type of school in which they worked. Particularly interesting, from the perspective of principals as instructional leaders, is that only 39% of principals in traditional public schools felt they had influence over establishing curriculum. This figure is considerably lower than for principals in private schools (69%) and public charter schools (59%). This raises the question of whether traditional public school principals are committed to the curriculum choices that are made; if they are not, one has to wonder whether they will be champions for the curriculum and effective instructional leaders. In terms of the Active Implementation Frameworks, effective implementation of the model of principals as instructional leaders requires that principals be involved in the identification of usable innovations and the actual implementation of the innovation in their schools, and that they have access to data about the effectiveness of the innovation.
In addition, if principals are to be effective instructional leaders, the competency drivers of selection, training, and coaching need to be present so that principals will have the necessary skills to function in those roles. Finally, the leadership drivers of technical skills and adaptive leadership skills are necessary to adapt an instructional practice to a particular organizational and school context. It would be interesting to see which practices in private schools and public charter schools differ from those in traditional public schools, such that their principals report greater influence over curriculum decisions. This article addresses only whether principals have influence over curriculum decisions; it does not address the extent to which principals have the skills to base decisions on the best available evidence. Please see the Wing Institute paper on Best Available Evidence (https://www.winginstitute.org/evidence-based-decision-making-evidence) for more on this topic.
Citation: National Center for Education Statistics at IES. (2021). Principals’ Perceptions of Influence Over Decisions at Their Schools in 2017-2018.
June 18, 2021
Cost-Effectiveness Analysis: A Component of Evidence-Based Education. Including cost-effectiveness data in the evaluation of programs is the next step in the evolution of evidence-based practice. Evidence-based practice is grounded in three complementary elements: best available evidence, professional judgment, and client values and context. To apply cost-effectiveness data fully, school administrators will have to rely on all three of these elements. The function of cost-effectiveness data is to guide decisions about how limited financial resources should be spent to produce the best educational outcomes. To do so, decision makers must choose between options with known cost-effectiveness ratios while working within budget constraints. In this article, I discuss some of the considerations that have to be addressed in the decision-making process and the implications of including cost-effectiveness analyses in data-based decision making.
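The core decision the article describes, choosing among options with known cost-effectiveness ratios under a budget constraint, can be sketched in a few lines. This is a minimal illustration, not the article's method; the program names, costs, and effect sizes below are invented for the example.

```python
# Hypothetical illustration: ranking interventions by cost-effectiveness
# and funding them in rank order until a fixed per-student budget runs out.

def cost_effectiveness_ratio(cost_per_student, effect_size):
    """Dollars per unit of effect (e.g., per 1 SD of achievement gain).
    Lower is better."""
    return cost_per_student / effect_size

# (name, cost per student in dollars, effect size in SD units) -- all invented
programs = [
    ("Tutoring", 1200.0, 0.30),
    ("Summer program", 800.0, 0.10),
    ("Instructional coaching", 600.0, 0.20),
]

# Rank programs from cheapest to most expensive per unit of effect.
ranked = sorted(programs, key=lambda p: cost_effectiveness_ratio(p[1], p[2]))

# Fund programs in rank order within the budget.
budget = 2000.0
funded, remaining = [], budget
for name, cost, effect in ranked:
    if cost <= remaining:
        funded.append(name)
        remaining -= cost

print(funded)  # → ['Instructional coaching', 'Tutoring']
```

A greedy ranking like this is only a starting point; as the article notes, professional judgment and local context still shape the final decision, since the option with the best ratio is not always the best fit for a given school.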
Citation: Detrich, R. (2020). Cost-effectiveness analysis: A component of evidence-based education. School Psychology Review, 1-8.
June 18, 2021
How could evidence-based reform advance education? This article presents a definition of and rationale for evidence-based reform in education, and a discussion of the current state of evidence-based research, focusing on China, the U.S., and the UK. The article suggests ways in which Chinese, U.S., UK, and other scholars might improve the worldwide quality of evidence-based reform in education. One example of such collaboration is an agreement among the Chinese University of Hong Kong, Nanjing Normal University, and Johns Hopkins University to work together on Chinese and English versions of the website Best Evidence in Brief, and a collaboration between Johns Hopkins and the ECNU Review of Education at East China Normal University.
The Wing Institute would like to acknowledge the contributions of Robert Slavin to the field of education. Our condolences go out to Robert Slavin’s family on the loss of one of America’s premier proponents of evidence-based education; he passed away on April 24, 2021. Robert Slavin was an education researcher who sought to translate the science of learning into effective teaching practices. Dr. Slavin was a distinguished professor at Johns Hopkins University’s School of Education, where he directed the Center for Research and Reform in Education.
Citation: Slavin, R. E., Cheung, A. C., & Zhuang, T. (2021). How could evidence-based reform advance education? ECNU Review of Education, 4(1), 7-24.
June 18, 2021
Evidence-Based Policies in Education: Initiatives and Challenges in Europe. This article examines the state of progress of evidence-based educational policies in Europe and identifies organizations for the generation and dissemination of evidence. Further, it discusses some of the most relevant challenges facing the development of evidence-informed education policies in Europe.
Citation: Pellegrini, M., & Vivanet, G. (2020). Evidence-based policies in education: Initiatives and challenges in Europe. ECNU Review of Education, 2096531120924670.
June 17, 2021
A Cost Analysis of the Innovation–Decision Process of an Evidence-Based Practice in Schools. The translation of evidence-based practices (EBPs) to improve students’ social, emotional, behavioral, and academic outcomes into authentic school settings has posed significant challenges for both researchers and practitioners. Among the many barriers to the adoption and use of EBPs are their associated costs. This study presents a framework for integrating diffusion of innovation theory into an economic evaluation from a societal perspective, which allows the costs of all phases, from adoption through implementation of EBPs, to be captured for all stakeholders.
Citation: Barrett, C. A., Pas, E. T., & Johnson, S. L. (2020). A Cost Analysis of the Innovation–Decision Process of an Evidence-Based Practice in Schools. School Mental Health, 12(3), 638-649.
Link: A Cost Analysis of the Innovation–Decision Process of an Evidence-Based Practice in Schools
April 14, 2021
Cost-Effectiveness of Instructional Coaching: Implementing a Design-Based, Continuous Improvement Model to Advance Teacher Professional Development. Schools devote substantial resources to teacher professional development each year. Yet studies show much of this investment is directed toward ineffective short-term workshops that have little impact on instructional change or student outcomes. At the same time, more intensive job-embedded forms of professional learning, such as instructional coaching, require substantially more resources than traditional professional development. The authors report the results of a two-year study assessing the cost-effectiveness of instructional coaching through a design-based, continuous improvement research model. Their findings suggest that coaching programs can become more cost-effective over time, as coaches and teachers refine their work together.
Citation: Knight, D. S., & Skrtic, T. M. (2020). Cost-Effectiveness of Instructional Coaching: Implementing a Design-Based, Continuous Improvement Model to Advance Teacher Professional Development. Journal of School Leadership, 1052684620972048.
April 12, 2021
Does Teacher Learning Last? Understanding How Much Teachers Retain Their Knowledge After Professional Development. Teacher professional development (PD) is seen as a promising intervention to improve teacher knowledge, instructional practice, and ultimately student learning. While research finds instances of significant program effects on teacher knowledge, little is known about how long these effects last. If teachers forget what is learned, the contribution of the intervention will be diminished. Using a large-scale data set, this study examines the sustainability of gains in teachers’ content knowledge for teaching mathematics (CKT-M). Results show that there is a negative rate of change in CKT after teachers complete the training, suggesting that the average score gain from the program is lost in just 37 days.
Citation: Liu, S., & Phelps, G. (2020). Does Teacher Learning Last? Understanding How Much Teachers Retain Their Knowledge After Professional Development. Journal of Teacher Education, 71(5), 537-550.