Categories for Implementation
December 18, 2017
(1) An Evaluation of a Learner Response System (2) The Effects of Financial Incentives on Standardized Testing (3) Do Teacher Observations Make Any Difference to Student Performance?
Commentary: This piece reports on three studies of practices that did not produce positive results and highlights the issue of publication bias in educational research. Powerful contingencies shape the publication process in ways that do not always serve the best interests of science. For example, promotion and tenure committees do not give published replication studies the same weight as original research. Journals also generally do not publish studies that show no effect, resulting in the "file drawer problem": studies that show no effect may be experimentally rigorous, but because they did not demonstrate an effect they are relegated to the researcher's file drawer. The one exception is a study showing that a widely accepted intervention is not effective. These contingencies produce a publication bias favoring original research that demonstrates a positive effect, which can lead systematic reviews of the evidence for an intervention to overestimate its effectiveness. Publishing in peer-reviewed journals is a critical safeguard of research quality, but these biases undermine that function. Replication is a fundamental cornerstone of science; replication studies demonstrate the robustness of a finding. The bias against publishing null results is a bit more complicated. Some null results are unimportant: demonstrating that a car will not run if gas is put in the tires tells us little; the important demonstration is the one showing a positive relation between where the gas is put and whether the car runs. Other null results are important because they show that a variable experimentally demonstrated to affect student behavior does not have that effect in a replication study or under a particular set of conditions.
- An Evaluation of a Learner Response System: A Learner Response System (LRS) is an increasingly popular classroom feedback tool in which teachers and pupils use electronic handheld devices to provide immediate feedback during lessons. Given that feedback has been found to be a powerful tool in learning, it is not surprising that LRSs are being adopted. The important question remains: do LRSs increase student performance? This study tested a Learner Response System using Promethean handsets to assess whether it improves student outcomes. It found no evidence that math or reading outcomes improved after two years of using the system.
- The Effects of Financial Incentives on Standardized Testing: Standardized testing is increasingly used to hold educators accountable, and incentives are often offered as a way to improve student test performance. This study examines the impact of incentives for students, parents, and tutors on standardized test results. The researchers provided incentives on specially designed tests that measure the same skills as the official state standardized tests; performance on the official tests was not incentivized. The study found substantial improvement on the incentivized tests, but the gains did not generalize to the official tests. This calls into question how to use incentives effectively so they actually produce the desired outcomes.
- Do Teacher Observations Make Any Difference to Student Performance? Research strongly suggests that feedback obtained through direct observation of performance can be a powerful tool for improving teachers' skills. This study examines a peer teacher observation method used in England. It found no evidence that teacher observation improved student language or math scores.
(1) Education Endowment Foundation (2017). Learner Response System. Retrieved from https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/learner-response-system/.
(2) List, J. A., Livingston, J. A., & Neckermann, S. (2016). "Do Students Show What They Know on Standardized Tests?" Working paper. Available at: http://works.bepress.com/jeffrey_livingston/19/
(3) Education Endowment Foundation (2017). Teacher Observation. Retrieved from https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/teacher-observation/.
December 4, 2017
Framework for Improving Education Outcomes
A multitiered system of support (MTSS) is a framework for organizing service delivery. At the core of MTSS is the adoption and implementation of a continuum of evidence-based interventions that improve academic and behavioral outcomes for all students. MTSS is a data-based decision-making approach built on frequent screening of all students' progress and intervention for students who are not making adequate progress.
Citation: States, J., Detrich, R., & Keyworth, R. (2017). Multitiered System of Support Overview. Oakland, CA: The Wing Institute.
October 4, 2017
The usual approach to determining whether an intervention is effective for a student is to review student outcome data; however, this is only part of the task. Student data can be understood only if we know something about how well the intervention was implemented. Without treatment integrity data, no judgments can be made about the effectiveness of an intervention: poor outcomes can be a function of an ineffective intervention or of poor implementation. Without treatment integrity data, there is a risk that an intervention will be judged ineffective when, in fact, the quality of implementation was so inadequate that it would be unreasonable to expect positive outcomes.
Citation: Detrich, R., States, J., & Keyworth, R. (2017). Treatment Integrity in the Problem Solving Process. Oakland, CA: The Wing Institute.
October 4, 2017
Student achievement scores in the United States remain stagnant despite constant reform. New initiatives arise promising hope, only to disappoint after being adopted, implemented, and quickly found wanting. The cycle of reform followed by failure has had a demoralizing effect on schools, making new reform efforts more problematic. These efforts frequently fail because implementing new practices is far more challenging than expected and requires that greater attention be paid to implementation. A fundamental factor leading to failure is inattention to treatment integrity. When innovations are not implemented as designed, it should not be a surprise that anticipated benefits are not forthcoming. The question is, what strategies can educators employ to increase the likelihood that practices will be implemented as designed?
Strategies designed to increase treatment integrity fall into two categories: antecedent-based strategies and consequence-based strategies. Antecedent-based strategies involve any setting event or environmental factor that happens prior to implementing the new practice and that increases the likelihood of success as well as eliminates setting events or environmental considerations that decrease the likelihood of success. Consequence-based strategies are designed to impact actions that happen after implementation of the new practice and that are likely to increase or decrease treatment integrity.
Citation: Detrich, R., States, J., & Keyworth, R. (2017). Approaches to Increasing Treatment Integrity. Oakland, CA: The Wing Institute.
October 4, 2017
Historically, treatment integrity has been defined as implementation of an intervention as planned (Gresham, 1989). More recently, treatment integrity has been reimagined as multidimensional (Dane & Schneider, 1998). In this conceptualization, four dimensions are relevant to practice: (a) exposure (dosage), (b) adherence, (c) quality of delivery, and (d) student responsiveness. It is important to understand that these dimensions do not stand alone but rather interact to impact the ultimate effectiveness of an intervention. Educators should assess all dimensions of treatment integrity to ensure that an intervention is being implemented as intended.
Citation: Detrich, R., States, J., & Keyworth, R. (2017). Dimensions of Treatment Integrity Overview. Oakland, CA: The Wing Institute.
October 4, 2017
For the best chance of a positive impact on educational outcomes, two conditions must be met: (a) effective interventions must be adopted, and (b) those interventions must be implemented with sufficient quality (treatment integrity) to ensure benefit. To date, the emphasis in education has been on identifying effective interventions, with less concern for implementing them. The research on the implementation of interventions is not encouraging: treatment integrity scores are often very low and, in practice, implementation is rarely assessed. If an intervention with a strong research base is not implemented with a high level of treatment integrity, then students do not actually experience the intervention and there is no reason to assume they will benefit from it. Under these circumstances, it is not possible to know whether poor outcomes are the result of an ineffective intervention or of poor implementation. Historically, treatment integrity has been defined as implementing an intervention as prescribed. More recently, it has been conceptualized as having multiple dimensions, among them dosage and adherence, which must be measured to ensure that implementation is occurring at adequate levels.
Citation: Detrich, R., States, J., & Keyworth, R. (2017). Overview of Treatment Integrity. Oakland, CA: The Wing Institute.
August 29, 2017
Restructuring Environmental Contingencies and Enhancing Self-Managed Supervision
The results of the Wing Institute's 2016-17 research grant are now available on our web site. Laura Kern submitted the selected study, which examines the effects of a brief training on active supervision and self-management and the use of a simple self-management strategy (e.g., a checklist and Direct Behavior Rating Scales) to change adult behavior.
Three research questions were addressed related to recess supervisor and student behaviors:
- What are the effects of a brief training on self-management on recess supervisors’ active supervision behaviors?
- What are the effects of increasing active supervision on students’ problematic behavior during recess?
- Will any increase in recess supervisors' use of self-management be maintained when direct behavior rating scales are used alone as part of a self-management strategy for adult active supervision?
Link: Go to Research section
May 12, 2017
Treatment Integrity Strategies Overview
Student achievement scores in the United States remain stagnant despite repeated attempts to reform the education system. New initiatives promising hope arise, only to disappoint after being adopted, implemented, and quickly found wanting. The cycle of reform followed by failure has had a demoralizing effect on schools, making new reform efforts problematic. These efforts frequently fail because implementing new practices is far more challenging than expected and requires that greater attention be paid to how initiatives are implemented. Treatment integrity is increasingly recognized as an essential component of effective implementation in an evidence-based education model that produces results, and inattention to treatment integrity is seen as a primary reason new initiatives fail. The question remains, what strategies can educators employ to increase the likelihood that practices are implemented as designed? The Wing Institute overview on the topic of Treatment Integrity Strategies examines the practice elements indispensable for maximizing treatment integrity.
Citation: States, J., Detrich, R. & Keyworth, R. (2017). Overview of Treatment Integrity Strategies. Oakland, CA: The Wing Institute. http://www.winginstitute.org/effective-instruction-treatment-integrity-strategies.
March 1, 2017
Sage Spotlight on Data Visualization
The February issue of Sage Publishing's newsletter, Sage Methods Minute, presents useful guidance on using data visualization to make effective decisions. The newsletter offers a lecture, an interview, and a webinar on this important but often neglected topic. Productive data-based decisions rely on the effective use of analytics and the acquisition, interpretation, and communication of meaningful patterns in data. In an increasingly complicated world in which vast quantities of data are available, it is essential that educators become astute in presenting data adapted to different audiences and in identifying deceptive data so they are able to make wise decisions in the service of educating children. The Sage Spotlight newsletter on visualization includes Tailoring Data Visualization to Reach Different Audiences by Tom Schenk; Textbooks in Data Visualization: 60 Seconds with Andy Kirk; and Webinar: Learn the Essentials of Data Visualization by Andy Kirk and Stephanie Evergreen. For those interested in additional resources on this topic, the works of Edward Tufte, professor emeritus of political science, statistics, and computer science at Yale University, and Howard Wainer, adjunct professor of statistics at the Wharton School of the University of Pennsylvania, provide insight into how to deliver information that communicates your message.
Sage February Newsletter: http://info.sagepub.com/q/17I2b2bhfM2Fc8adzqeF1h/wv
Edward Tufte: https://www.edwardtufte.com/tufte/index
Howard Wainer: https://www.amazon.com/Howard-Wainer/e/B000AP7SUU
February 8, 2017
The Tennessee Department of Education combines coaching, instruction in evidence-based reading practices, and a multitiered system of supports in a new initiative called Read to be Ready. The initiative trains teachers in the best ways to teach children literacy skills. Ample evidence supports the importance of students reading at grade level: effective reading has been shown to be a reliable indicator of future success in school and adulthood. The initiative is designed to increase literacy by coaching teachers on how to use evidence-based reading practices. For the past 20 years, much attention has been paid to explicit phonics instruction as a way to improve students' reading scores. This initiative builds on those efforts by also requiring explicit comprehension instruction to build skills for deriving meaning, analyzing the logic of argumentation, generating conclusions, and interpreting content.