Latest News

Does Professional Development Impact Data-based Decision Making?

April 29, 2022

At the core of evidence-based education is data-based decision making.  Once an empirically-supported intervention has been adopted, it is necessary to monitor student performance to determine whether the program is effective for an individual student.  Educators report needing assistance in determining what to do with student performance data, and because many training programs are not sufficient, external support is often necessary for educators to navigate the decision-making process successfully.

A recent meta-analysis by Gesel and colleagues (2021) examined the impact of professional development on teachers' knowledge, skill, and self-efficacy in data-based decision making.  Knowledge was assessed by a multiple-choice test to determine whether teachers understood the concepts of data-based decision making; it was not a measure of teachers' application of that knowledge.  Skill was a direct measure of how well teachers applied their knowledge of data-based decision making.  In most instances, this was assessed under ideal conditions with intense support from researchers and consultants.  Self-efficacy was a measure of teachers' confidence to implement data-based decision making.  The overall effect size for the combined measures was 0.57, which is generally considered a moderate effect; however, the effect sizes for the individual measures varied substantially (knowledge, -0.02 to 2.28; skill, -1.25 to 1.96; self-efficacy, -0.08 to 0.78).  These ranges suggest that the average effect size of 0.57 does not adequately reflect the effects of professional development.  The variability could be a function of the specific training methods used in the individual studies, but the training methods were not described in this meta-analysis.  It should be noted that all of the studies in this meta-analysis were conducted with intensive support from researchers and consultants.  It is not clear how well the results generalize to the more typical conditions found in teacher preparation programs and professional development.
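To put these numbers in context, the standardized mean-difference effect size commonly reported in meta-analyses takes the general form below (this generic formula is offered for orientation; it is not necessarily the exact estimator used by Gesel and colleagues):

effect size = (mean of trained group - mean of comparison group) / pooled standard deviation

On this scale, the overall effect of 0.57 indicates that, on the combined measures, teachers who received professional development scored roughly half a pooled standard deviation higher than comparison teachers, while the negative values in the ranges above represent studies in which trained teachers actually scored lower.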

Given the importance of data-based decision making to student progress, there is considerable work to be done to identify effective and efficient training methods, and it appears that we are a long way from this goal.  Ultimately, the goal is for data-based decision making to be standard practice in every classroom in the United States.  This will require identifying the critical skills necessary and the most effective methods for teaching those skills.

Citation:
Gesel, S. A., LeJeune, L. M., Chow, J. C., Sinclair, A. C., & Lemons, C. J. (2021). A meta-analysis of the impact of professional development on teachers’ knowledge, skill, and self-efficacy in data-based decision-making. Journal of Learning Disabilities, 54(4), 269-283.


Does Contextual Fit of Interventions Improve Outcomes for Students?

April 29, 2022

Contextual fit refers to the extent to which the procedures of a selected program are consistent with the knowledge, skills, resources, and administrative support of those who are expected to implement the plan.  Packaged curricula and social programs are developed without a specific context in mind; however, implementing a program in a particular context often requires adapting the program or the setting to increase the fidelity of implementation.  One challenge in improving contextual fit is determining which features of the program or the environment need to be adapted.

A recent study by Monzalve and Horner (2021) addressed this question.  The authors developed the Contextual Fit Enhancement Protocol to identify the components of a behavior support plan to adapt.  The logic of the study was that increasing contextual fit would increase fidelity of implementation, which in turn would improve student outcomes.  Four student-teacher dyads were recruited.  To be included in the study, a student had to have an existing behavior support plan that was judged technically adequate but was being implemented with low fidelity.  During baseline, no changes were made to the plan, and researchers measured the percentage of support plan components implemented as well as student behavior.  Following baseline, researchers met with the team responsible for implementing the plan and reviewed the Contextual Fit Enhancement Protocol.  During this meeting, the goals and procedures of the plan were confirmed, the contextual fit of the current plan was assessed, specific adaptations to the plan were made to increase contextual fit, and an action plan for implementing the revised plan was developed.  Researchers continued to measure fidelity of implementation and student behavior.  After at least five sessions of implementing the revised plan, the implementation team met with the researcher to re-rate the original and revised plans for contextual fit.  Items that were rated low were again reviewed and adapted.  Following the review of the Contextual Fit Enhancement Protocol and the revised plan, fidelity of implementation increased substantially and student problem behavior decreased.

There are two important implications of this study.  First, because intervention is complex, there is no reason to assume that the initial version of a plan, or even a revised version, will get everything right.  This is an iterative process, and periodic reappraisal of the plan is necessary.  Second, student behavior is a function of both the technical adequacy of the plan and how well that plan is implemented.  If a plan is technically adequate, is a good contextual fit, and is implemented with high levels of fidelity (even if less than 100%), then positive student outcomes will most likely be achieved.

Citation:

Monzalve, M., & Horner, R. H. (2021). The impact of the contextual fit enhancement protocol on behavior support plan fidelity and student behavior. Behavioral Disorders, 46(4), 267-278.


What is the Effect of Contextual Fit on Quality of Implementation?

March 3, 2022

Kendra Guinness of the Wing Institute at Morningside provides an excellent summary of the importance of contextual fit and how it can enhance the implementation of evidence-based practices. Practices are often validated under conditions very different from usual practice settings. In establishing the scientific support for an intervention, researchers often work closely with the research site, providing close supervision and feedback, assuring that all necessary resources are available, and training the implementers on the components of the intervention. In usual practice settings, the intervention is often implemented without all of the necessary resources, and training and feedback are limited. As a result, the program as developed may not be a good fit with the local circumstances of a school or classroom. In this overview, Ms. Guinness defines contextual fit, describes its key features, and summarizes the empirical evidence supporting it.

Briefly, contextual fit is the match between the strategies, procedures, or elements of an intervention and the values, needs, skills, and resources available in the setting. One of the best empirical demonstrations of the importance of contextual fit is research by Benazzi et al. (2006). Behavior support plans were developed in three different ways: (1) by behavior support teams without a behavior specialist, (2) by behavior support teams with a behavior specialist, and (3) by behavior specialists alone. The plans were rated for technical adequacy and contextual fit. For technical adequacy, plans developed by a behavior specialist alone or by teams that included a behavior specialist were rated highest. For contextual fit, plans developed by teams, with or without a behavior specialist, were rated higher than plans developed by behavior specialists alone.

Additional evidence of the importance of contextual fit comes from research by Monzalve and Horner (2021), who evaluated the effect of the Contextual Fit Enhancement Protocol. First, they had teachers implement a behavior support plan without feedback from researchers and measured fidelity of implementation and the level of student problem behavior. Subsequently, the researchers met with the implementation team, reviewed the goals and procedures of the plan, identified adaptations to improve contextual fit, and planned next steps for implementing the revised behavior support plan. Before the team meeting, the intervention plan was implemented with 15% fidelity and student problem behavior occurred during 46% of the observation period. Following the meeting, fidelity of implementation increased to 83% and problem behavior was reduced to 16% of the observation period.

These data clearly suggest that intervention does not occur in a vacuum and that variables other than the components of the intervention influence its implementation and student outcomes. Much more needs to be learned about adapting interventions to fit a particular context without reducing the effectiveness of the intervention.

Citation: 

Guinness, K. (2022). Contextual Fit Overview. Original paper for the Wing Institute.

References:

Benazzi, L., Horner, R. H., & Good, R. H. (2006). Effects of behavior support team composition on the technical adequacy and contextual fit of behavior support plans. Journal of Special Education, 40(3), 160–170.
Monzalve, M., & Horner, R. H. (2021). The impact of the contextual fit enhancement protocol on behavior support plan fidelity and student behavior. Behavioral Disorders, 46(4), 267–278. https://doi.org/10.1177/0198742920953497


Is Real-Time Performance Feedback Effective?

March 3, 2022

Performance feedback is often considered a necessary part of training educators. The challenge is to provide the feedback in a timely manner so that it positively impacts skill acquisition. Often, the feedback is delayed by hours or days, which may limit its impact. Real-time performance feedback is considered optimal but may be infeasible in many educational contexts.

One option is to use technology such as "bug in the ear" to deliver feedback in real time. Sinclair and colleagues (2020) conducted a review to determine whether feedback delivered via technology could be considered empirically supported. Twenty-three studies met inclusion criteria; twenty-two were single-case designs and one was a group design. The reported finding was that real-time performance feedback is an effective method for increasing educators' skill acquisition. The authors cautioned that this type of feedback is an intensive intervention and is not feasible for training all teachers. They suggested that it should be considered when other training methods have not proven effective.

In this context, it becomes feasible to support those educators who have not benefited from less intensive interventions. If real-time feedback is considered part of a multi-tiered system of support for educators, it can play an important role in training. It can improve educators' performance and perhaps reduce turnover because it allows educators to develop the skills to be successful.

Citation:

Sinclair, A. C., Gesel, S. A., LeJeune, L. M., & Lemons, C. J. (2020). A review of the evidence for real-time performance feedback to improve instructional practice. The Journal of Special Education, 54(2), 90-100.


What Does it Take to Assure High-Quality Implementation?

March 3, 2022

A fundamental assumption of evidence-based practice is that interventions will produce benefit only if treatment integrity is high. High levels of treatment integrity cannot be assumed in the usual course of practice in education; they must be planned for and routinely monitored. Often, schools lack the time and resources to do this, so effective interventions fail to produce the expected benefits for students. The standard "train and hope" approach is not sufficient to assure adequate levels of treatment integrity. The question becomes: what is sufficient? George Noell, Kristin Gansle, and Veronica Gulley (2021) recently addressed this question. Teachers were assigned to either a Weekly Follow-up condition consisting of a weekly consultation meeting or an Integrated Support condition that included social influence, planning, and performance feedback. After an initial four-week consultation period in which problems were identified, intervention plans were developed, and staff were trained to implement them, teachers in each group were followed for four additional weeks to determine their level of treatment integrity and the effects on student outcomes (either behavioral or academic). Implementation scores for participants in the Weekly Follow-up condition were relatively low the first week and declined across the remaining weeks.

Participants in the Integrated Support group had high levels of treatment integrity the first week and scores decreased very little across the rest of the study. Students in the Integrated Support group had much greater improvements in behavior than students in the Weekly Follow-up condition. 

The authors reported that three school climate variables were related to plan implementation and child outcomes in the Integrated Support condition. For treatment plan implementation, the variables were (1) student relations, (2) resources, and (3) time. For child outcomes, the only school climate factor was time. No school climate variables influenced outcomes in the Weekly Follow-up condition, at either the level of treatment plan implementation or child outcomes.

These data highlight the importance of continuously monitoring implementation and supporting educators as they implement intervention plans. Failure to do so results in very limited outcomes for students, does not use implementers' time effectively, and yields a very poor return on investment. Separating the monitoring of implementation from intervention will almost always result in poor outcomes for students.

The challenge for schools is to reconfigure services so that monitoring treatment integrity is considered part of services, because it generates the best outcomes for students.

Citation:

Noell, G., Gansle, K., & Gulley, V. (2021). The impact of integrated support and context on treatment implementation and child outcomes following behavioral consultation. Behavior Modification, 01454455211054020.


What is the Cost of Adopting Unsupported Programs?

March 3, 2022

Even though there is increasing support for schools adopting programs that have strong empirical support, for various reasons schools continue to adopt programs that have limited or no empirical support. An often unanswered question is: what are the costs of implementing programs with limited or no scientific support when well-supported programs are available? The challenge for schools is to adopt programs that will produce the greatest benefits for students and to do so in a way that is cost-effective. A cost-benefit analysis is one approach to identifying the costs and benefits of a particular program; essentially, it is a ratio of benefits to costs. Cost-benefit analysis is under-utilized in public education. Recently, Scheibel, Zane, and Zimmerman (2022) applied a cost analysis to programs for children with autism that are unproven or have limited scientific support. Specifically, they evaluated the costs of implementing the Rapid Prompting Method (no empirical support) and Floortime Therapy (emerging effectiveness data), both of which are frequently adopted in programs for children on the autism spectrum. The authors reported that implementing interventions with a limited research base or no evidentiary support can pose significant costs to schools with varying likelihood of benefit to children. In addition to the direct costs of these programs, there may be opportunity costs for failing to implement interventions with stronger empirical support.
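For readers unfamiliar with the method, the core calculation is straightforward (actual analyses, including the one by Scheibel and colleagues, involve considerably more work in identifying and monetizing benefits and costs):

benefit-cost ratio = total monetized benefits / total program costs

A ratio greater than 1 indicates that a program returns more than it costs. A program with little or no demonstrated benefit drives the numerator toward zero, so nearly every dollar spent on it is pure cost, in addition to the opportunity cost of forgoing a better-supported program.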

The methods for completing these types of cost analyses are complex; however, there is great value to schools when they employ these cost-benefit methods to improve outcomes for students and achieve a greater return on their investment in effective programs. This study is one example of how these analyses can be conducted. Both researchers and public-school administrators would be well-served if cost-effectiveness analyses were more frequently utilized when evaluating programs.

Citation:

Scheibel, G., Zane, T. L., & Zimmerman, K. N. (2022). An economic evaluation of emerging and ineffective interventions: Examining the role of cost when translating research into practice. Exceptional Children, 00144029211073522.


What Variables Influence Educators’ Adoption Decisions?

March 3, 2022

In recent years, federal laws such as the No Child Left Behind Act and the Every Student Succeeds Act have encouraged the use of scientifically supported interventions. To accomplish this, it is necessary that educators adopt programs that have empirical support, yet little is known about the variables that influence educators' adoption decisions. Pinkelman and colleagues (2022) recently published a small qualitative study that asked district-level and school-level administrators about the variables that influenced their most recent adoption decision. The results are interesting. Three general themes emerged from the analysis: (1) Establishing Need, (2) Identifying Options, and (3) Elements of Program Selection.

Establishing Need refers to school-level or district-level factors considered in adoption decisions. There were three subthemes within Establishing Need: (1) Data (both informal and formal), (2) Time Cycle, and (3) Academic Content Domains.

Within the Data subtheme, 90% of the participants reported using informal data to determine the need for adoption, making it the most frequently cited means of determining need. Informal data included input from stakeholders through meetings, conversations, and anecdotal commentary. Formal data were mentioned by 55% of the participants as a means of establishing need and were defined as empirical data used to assess an academic or behavioral construct, including test scores, surveys, school climate data, universal screening data, and student performance data.

The Time Cycle subtheme, mentioned by 35% of the participants, refers to changes over time such as a district's schedule for rotating the adoption of new programs, expiring program licenses, changes in standards, or the availability of current resources.

Academic Content Domains refers to academic subjects such as reading, math, and science. Thirty-five percent of the participants indicated that district priorities regarding academic content influenced the need for new programs. Collectively, the data regarding Establishing Need suggest that variables other than evidence about the effectiveness of current programs, or evidence about adoption options, drive the perceived need for new programs.

When identifying adoption options, 85% of the participants reported relying on word of mouth, which included talking to colleagues and other education professionals. Fifty-five percent of the participants also mentioned marketing efforts by publishers, and 50% initiated an independent search through web searches and reading articles. The only indication that participants relied on empirical evidence of effectiveness when making adoption decisions must be inferred from the reference to reading articles. These data suggest that variables such as word of mouth play an important, and understudied, role in adoption decisions.

The third major theme regarding variables influencing adoption decisions is Elements of Program Selection. Within this theme there are four subthemes: (1) Alignment, (2) Teacher Factors, (3) Cost, and (4) Supplemental Curriculum Materials.

Seventy percent of the participants referenced alignment with Common Core standards and agreement with the district's values as a factor in adopting a program. Seventy percent also identified Teacher Factors as influencing decisions, including teacher buy-in, the time required to implement, and the training required for implementers. Cost was noted by 70% of the participants, and 60% mentioned the availability of online supplemental materials as influencing decisions.

All of these data suggest that adopting a program is a more complex process than simply considering effectiveness data. The news from this study is that effectiveness data do not seem to be a primary source of influence over adoption decisions. Implementation scientists should consider these data when developing processes to influence adoption. This is a small-scale study and should be replicated at a much larger scale to determine if these results are representative across settings.

Link to article: https://link.springer.com/content/pdf/10.1007/s43477-022-00039-2.pdf

Citation: 

Pinkelman, S. E., Rolf, K. R., Landon, T., Detrich, R., McLaughlin, C., Peterson, A., & McKnight-Lizotte, M. (2022). Curriculum Adoption in US Schools: An Exploratory, Qualitative Analysis. Global Implementation Research and Applications, 1-11.


Are Tier 1 Interventions Being Implemented with Integrity?

December 17, 2021

At the core of any multi-tiered system of support (MTSS; e.g., School-wide Positive Behavior Intervention or Response to Intervention) is the requirement that the Tier 1, or universal, intervention be implemented with adequate fidelity to benefit most students.  If Tier 1 interventions are not implemented with fidelity, too many students will receive more intensive Tier 2 and Tier 3 interventions, and the increased intensity of intervention will unnecessarily strain school resources.  It is important to remember that MTSS are frameworks; ultimately, the benefit to students depends on adopting empirically-supported interventions and then implementing them well.  Without fidelity measures, it is not possible to know whether a failure to respond to an intervention reflects a problem with the intervention or poor implementation.  Often, interventions are abandoned for apparent lack of effectiveness when, in fact, the intervention was not implemented with fidelity.

Fidelity is a complex construct that can be measured at different levels and different frequencies, and each measure yields different types of information.  Until recently, little was known about how researchers measure fidelity.  This situation has been partially resolved by a recent review by Buckman et al. (2021), which examined how researchers assessed treatment integrity, the frequency with which it was evaluated, and the level at which it was measured (school or individual implementer).

Buckman and colleagues reported that school-level measures were reported about twice as often as individual-level measures and were typically assessed once or twice per year.  Treatment integrity measured at the school level tells us how well the overall system is functioning with respect to the implementation of the intervention.  Data at this level do not indicate whether all students are receiving a well-implemented intervention or whether some students are not receiving the intervention as planned.  Measuring treatment integrity at the level of an individual teacher indicates whether students in a particular teacher's classroom are receiving a well-implemented intervention.  Individual-level measures are essential for data-based decision-making when determining whether a student should receive more intensive services at Tier 2.  Low levels of fidelity would suggest that, rather than increasing the intensity of service for a student, it would be wise to invest in improving the individual teacher's implementation of the intervention.

Finally, Buckman and colleagues discussed the limitations of assessing treatment integrity once or twice a year.  Such infrequent measurement does not tell us whether implementation with integrity is occurring consistently.  The challenge of assessing more frequently is that it places a high demand on resources.  Considerably more research is required to develop effective and efficient methods for evaluating treatment integrity.

Link: https://link.springer.com/article/10.1007/s43494-021-00044-4

Citation:

Buckman, M. M., Lane, K. L., Common, E. A., Royer, D. J., Oakes, W. P., Allen, G. E., … & Brunsting, N. C. (2021). Treatment integrity of primary (Tier 1) prevention efforts in tiered systems: Mapping the literature. Education and Treatment of Children, 44(3), 145-168.


What is Necessary to Successfully Implement School-wide Positive Behavioral Interventions and Supports?

December 17, 2021

School-wide Positive Behavioral Interventions and Supports (SWPBIS) is one of the most widely adopted frameworks for supporting prosocial behavior in schools; however, it is not uncommon for schools to abandon it before fully implementing it.  A recent review by Fox and colleagues (2021) sought to understand the facilitators of and barriers to implementing SWPBIS.  Facilitators of implementation included adequate resources, strong implementation fidelity, effective SWPBIS team functioning, and meaningful collection and use of data.  The most common barriers identified by participants were staff beliefs that conflict with the philosophy of SWPBIS, poor implementation fidelity, and lack of resources.  Less frequently cited barriers included lack of supportive leadership, lack of staff buy-in, and school characteristics (e.g., school size, elementary versus high school).

The good news in this review is that many of the barriers can be addressed by assuring that the facilitators of implementation are well established; for example, developing systems that promote high levels of implementation fidelity addresses the barrier of poor implementation fidelity.  More challenging is resolving the conflict between teachers' beliefs and the core philosophy of SWPBIS.  It may be worth examining the roots of these beliefs to understand their basis and how, specifically, they are inconsistent with SWPBIS.  To some extent, it may be possible to incorporate teachers' competing beliefs into the specific practices embedded in SWPBIS without compromising its core features.  In other instances, there may be so much resistance to SWPBIS practices that implementation efforts should not be initiated until teachers' concerns have been addressed to their satisfaction.  Unless a substantial majority of teachers and administrators are willing to support the SWPBIS initiative, implementation will not be successful.  This highlights the critical role that exploration and adoption play in implementation.

Link to article: https://link.springer.com/article/10.1007/s43494-021-00056-0

Citation:

Fox, R. A., Leif, E. S., Moore, D. W., Furlonger, B., Anderson, A., & Sharma, U. (2021). A systematic review of the facilitators and barriers to the sustained implementation of school-wide positive behavioral interventions and supports. Education and Treatment of Children, 1-22.


How Well are We Preparing Novice Teachers in Classroom Management?

December 17, 2021

Classroom teachers consistently report classroom management as a significant area of concern.  This is especially true for early-career teachers, and teachers often report that it is one of the most common reasons for leaving the profession.  Highly rigorous, practical, and effective pre-service and professional development training approaches are necessary to address classroom behavior challenges.  A recent systematic review by Hirsch and colleagues (2021) examined the literature on classroom management training to determine the current status of professional development for classroom teachers.  Ultimately, the authors identified eight experimental studies that met inclusion criteria.  There were several interesting findings from this review.  Across the studies reviewed, few participants reported having received prior training in classroom management.  As the authors discuss, these results are not surprising, since relatively few states have policy requirements for classroom teachers to receive instruction in classroom management.  Stevenson and colleagues (2020) proposed three steps for improving instruction in classroom management: (1) pre-service coursework must include a course on explicit, evidence-based, culturally and contextually relevant classroom management skills; (2) fieldwork should incorporate explicit support and coaching on classroom management; and (3) state departments of education should require training that aligns with the best practices of classroom management to support the needs of teachers and students.  If these three recommendations were acted on, teachers would likely be better prepared to address the behavioral challenges in their classrooms.

A second finding from the Hirsch et al. (2021) systematic review was that there is considerable evidence supporting practice-based professional development over the standard "train and hope" approach (Stokes & Baer, 1977).  There are seven critical features of practice-based professional development, and all of the studies reviewed incorporated at least some of them.  A somewhat surprising finding was that the length of training ranged from 15 minutes to four days.  Such brief training was likely possible because the researchers used practice-based professional development that included coaching and feedback to teach the new skills.

Hirsch and colleagues made a strong argument for the increased use of technology to support professional development, ranging from low-tech methods to telehealth.  Telehealth makes it possible for teachers in rural communities to access high-quality professional development. Creating more effective and efficient professional development is necessary to scale it up.

As Hirsch et al. make clear, considerably more research on professional development is necessary; eight articles are a small evidence base for making policy and practice recommendations.

Link to Article: https://link.springer.com/article/10.1007/s43494-021-00042-6

Citation:

Hirsch, S. E., Randall, K., Bradshaw, C., & Lloyd, J. W. (2021). Professional learning and development in classroom management for novice teachers: A systematic review. Education and Treatment of Children, 44(4), 291-307.

References

Stevenson, N. A., VanLone, J., & Barber, B. R. (2020). A commentary on the misalignment of teacher education and the need for classroom behavior management skills. Education and Treatment of Children, 43(4), 393-404.

Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10(2), 349-367.