Category: Implementation

Does Contextual Fit of Interventions Improve Outcomes for Students?

April 29, 2022

Contextual fit refers to the extent to which the procedures of a selected program are consistent with the knowledge, skills, resources, and administrative support of those who are expected to implement the plan.  Packaged curricula and social programs are developed without a specific context in mind; implementing a program in a particular context therefore often requires adapting the program or the setting to increase fidelity of implementation.  One challenge in improving contextual fit is determining which features of the program or the environment need to be adapted to improve fit.

A recent study by Monzalve and Horner (2021) addressed this question.  The authors developed the Contextual Fit Enhancement Protocol to identify the components of a behavior support plan to adapt.  The logic of the study was that increasing contextual fit would increase fidelity of implementation, which in turn would improve student outcomes.  Four student-teacher dyads were recruited.  To be included in the study, a student had to have an existing behavior support plan that was judged technically adequate but was being implemented with low fidelity.  During baseline, no changes were made to the plan; the percentage of support plan components implemented was measured, as was student behavior.  Following baseline, researchers met with the team responsible for implementing the plan and reviewed the Contextual Fit Enhancement Protocol.  During this meeting, the goals and procedures of the plan were confirmed, the contextual fit of the current plan was assessed, specific adaptations to the plan were made to increase contextual fit, and an action plan for implementing the revised plan was developed.  Researchers continued to measure fidelity of implementation and student behavior.  After at least five sessions of implementing the revised plan, the implementation team met with the researchers to re-rate the original plan and the revised plan for contextual fit.  Items that were rated low were again reviewed and adapted.  Following the review of the Contextual Fit Enhancement Protocol and the revised plan, fidelity of implementation increased substantially and student problem behavior decreased.
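
As a minimal sketch of the fidelity measure used here (the checklist items are hypothetical, not taken from the study), fidelity of implementation is simply the percentage of plan components observed to be in place:

    # Hypothetical behavior support plan checklist; the components and the
    # observations are invented for illustration, not from Monzalve & Horner.
    plan_components = {
        "precorrection delivered before transitions": True,
        "behavior-specific praise delivered": True,
        "reinforcer provided for each earned interval": False,
        "planned ignoring of minor disruptions": False,
        "daily progress note sent home": False,
    }

    implemented = sum(plan_components.values())  # True counts as 1
    fidelity = implemented / len(plan_components)
    print(f"Fidelity of implementation: {fidelity:.0%}")  # 40%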

There are two important implications of this study.  First, because intervention is complex, there is no reason to assume that the initial version of a plan, or even a revised version, will get everything right.  This is an iterative process, and periodic reappraisal of the plan is necessary.  Second, student behavior is a function of both the technical adequacy of the plan and how well that plan is implemented.  If a plan is technically adequate, is a good contextual fit, and is implemented with a high level of fidelity (even if less than 100%), then positive student outcomes will most likely be achieved.

Citation:

Monzalve, M., & Horner, R. H. (2021). The impact of the contextual fit enhancement protocol on behavior support plan fidelity and student behavior. Behavioral Disorders, 46(4), 267–278. https://doi.org/10.1177/0198742920953497


What is the Effect of Contextual Fit on Quality of Implementation?

March 3, 2022

Kendra Guinness of the Wing Institute at Morningside provides an excellent summary of the importance of contextual fit and how it can enhance the implementation of evidence-based practices. Practices are often validated under conditions very different from typical practice settings. In establishing the scientific support for an intervention, researchers often work very closely with the research site, providing close supervision and feedback, assuring that all necessary resources are available, and training the implementers on the components of the intervention. In typical practice settings, the intervention is often implemented without all of the necessary resources, and training and feedback are limited. As a result, the program as developed is not a good fit with the local circumstances of a school or classroom. In this overview, Ms. Guinness defines contextual fit, describes its key features, and summarizes the empirical evidence supporting it.

Briefly, contextual fit is the match between the strategies, procedures, or elements of an intervention and the values, needs, skills, and resources available in the setting. One of the best empirical demonstrations of the importance of contextual fit is research by Benazzi et al. (2006). Behavior support plans were developed in three different ways: (1) by behavior support teams without a behavior specialist, (2) by behavior support teams with a behavior specialist, and (3) by behavior specialists alone. The plans were rated for technical adequacy and contextual fit. For technical adequacy, the plans developed by a behavior specialist alone or by a team that included a behavior specialist were rated highest. For contextual fit, plans developed by teams, with or without a behavior specialist, were rated higher than plans developed by behavior specialists alone.

Additional evidence of the importance of contextual fit comes from research by Monzalve and Horner (2021), who evaluated the effect of the Contextual Fit Enhancement Protocol. First, they had teachers implement a behavior support plan without feedback from researchers and measured fidelity of implementation and the level of student problem behavior. The researchers then met with the implementation team, reviewed the goals and procedures of the plan, identified adaptations to improve contextual fit, and planned next steps for implementing the revised behavior support plan. Before the team meeting, the intervention plan was implemented with 15% fidelity and student problem behavior occurred during 46% of the observation period. Following the meeting, fidelity of implementation increased to 83% and problem behavior was reduced to 16% of the observation period.

These data clearly suggest that intervention does not occur in a vacuum and that variables beyond the components of the intervention influence its implementation and student outcomes. Much more needs to be learned about adapting interventions to fit a particular context without reducing their effectiveness.

Citation: 

Guinness, K. (2022). Contextual Fit Overview. Original paper for the Wing Institute.

References:

Benazzi, L., Horner, R. H., & Good, R. H. (2006). Effects of behavior support team composition on the technical adequacy and contextual fit of behavior support plans. Journal of Special Education, 40(3), 160–170.
Monzalve, M., & Horner, R. H. (2021). The impact of the contextual fit enhancement protocol on behavior support plan fidelity and student behavior. Behavioral Disorders, 46(4), 267–278. https://doi.org/10.1177/0198742920953497


What Does It Take to Assure High-Quality Implementation?

March 3, 2022

A fundamental assumption of evidence-based practice is that interventions will produce benefit only if treatment integrity is high. High levels of treatment integrity cannot be assumed in the usual course of practice in education; integrity must be planned for and routinely monitored. Often there is not the time or the resources to do that in schools, so effective interventions fail to produce the expected benefits for students. The standard “train and hope” approach is not sufficient to assure adequate levels of treatment integrity. The question becomes: what is sufficient? George Noell, Kristin Gansle, and Veronica Gulley (2021) recently addressed this question. Teachers were assigned to either a Weekly Follow-up condition (a weekly follow-up consultation meeting) or an Integrated Support condition that included social influence, planning, and performance feedback. After an initial four-week consultation period in which problems were identified, intervention plans were developed, and staff were trained to implement them, teachers in each group were followed for four additional weeks to determine their level of treatment integrity and the effects on student outcomes (behavioral or academic). Implementation scores for participants in the Weekly Follow-up condition were relatively low the first week and declined across the rest of the four weeks.

Participants in the Integrated Support group had high levels of treatment integrity the first week and scores decreased very little across the rest of the study. Students in the Integrated Support group had much greater improvements in behavior than students in the Weekly Follow-up condition. 

The authors reported that three school climate variables were related to plan implementation and child outcomes in the Integrated Support condition. For treatment plan implementation, the variables were (1) student relations, (2) resources, and (3) time. For child outcomes, the only school climate factor was time. No school climate variables influenced outcomes in the Weekly Follow-up condition, either for treatment plan implementation or for child outcomes.

These data highlight the importance of continuously monitoring implementation and supporting educators as they implement intervention plans. Failure to do so results in very limited outcomes for students, does not use implementers’ time effectively, and yields a very poor return on investment. Separating the monitoring of implementation from the intervention itself will almost always result in poor outcomes for students.

The challenge for schools is to reconfigure services so that monitoring treatment integrity is considered part of service delivery, because it is what generates the best outcomes for students.

Citation:

Noell, G., Gansle, K., & Gulley, V. (2021). The impact of integrated support and context on treatment implementation and child outcomes following behavioral consultation. Behavior Modification, 01454455211054020.


What is the Cost of Adopting Unsupported Programs?

March 3, 2022

Even though there is increasing support for schools adopting programs with strong empirical backing, for various reasons schools continue to adopt programs that have little or no empirical support. An often unanswered question is: what are the costs of implementing programs with limited or no scientific support when well-supported programs are available? The challenge for schools is to adopt programs that will produce the greatest benefits for students and to do so in a way that is cost-effective. A cost-benefit analysis is one approach to identifying the costs and benefits of a particular program; essentially, it is a ratio of benefits to costs. Cost-benefit analysis is under-utilized in public education. Recently, Scheibel, Zane, and Zimmerman (2022) applied a cost analysis to programs for children with autism that are unproven or have limited scientific support. Specifically, they evaluated the costs of implementing the Rapid Prompting Method (no empirical support) and Floortime Therapy (emerging effectiveness data), both of which are frequently adopted in programs for children on the autism spectrum. The authors reported that implementing interventions with a limited research base, or programs with no evidentiary support, can pose significant costs to schools with varying likelihood of benefit to children. In addition to the direct costs of these programs, there may be opportunity costs for failing to implement interventions with stronger empirical support.
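
As a rough illustration of the underlying arithmetic (all figures below are hypothetical, not drawn from Scheibel et al.), a benefit-cost ratio simply divides monetized benefits by costs:

    # Sketch of a benefit-cost ratio (BCR); the dollar figures are invented
    # for illustration and do not come from the study.
    def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
        """Return monetized benefits divided by costs; a ratio above 1.0
        means the benefits exceed the costs."""
        return total_benefits / total_costs

    # Suppose a well-supported program costs $20,000 per year and yields an
    # estimated $50,000 in monetized student benefit, while an unsupported
    # program costs $15,000 and yields $5,000.
    print(benefit_cost_ratio(50_000, 20_000))  # 2.5
    print(benefit_cost_ratio(5_000, 15_000))   # ~0.33

On these hypothetical numbers, the cheaper program is still the worse investment, and choosing it also carries the opportunity cost of the forgone, better-supported alternative.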

The methods for completing these types of cost analyses are complex; however, there is great value to schools when they employ these cost-benefit methods to improve outcomes for students and achieve a greater return on their investment in effective programs. This study is one example of how these analyses can be conducted. Both researchers and public-school administrators would be well-served if cost-effectiveness analyses were more frequently utilized when evaluating programs.

Citation:

Scheibel, G., Zane, T. L., & Zimmerman, K. N. (2022). An economic evaluation of emerging and ineffective interventions: Examining the role of cost when translating research into practice. Exceptional Children, 00144029211073522.


Are Tier 1 Interventions Being Implemented with Integrity?

December 17, 2021

At the core of any multi-tiered system of support (MTSS; e.g., School-wide Positive Behavioral Interventions and Supports or Response to Intervention) is the requirement that the Tier 1, or universal, intervention be implemented with adequate fidelity to benefit most students.  If Tier 1 interventions are not implemented with fidelity, too many students will receive the more intensive Tier 2 and Tier 3 interventions, and the increased intensity of intervention will unnecessarily strain school resources.  It is important to remember that MTSS are frameworks; ultimately, the benefit to students depends on adopting empirically-supported interventions and then implementing them well.  Without fidelity measures, it is not possible to know whether a failure to respond to an intervention reflects a problem with the intervention or poor implementation.  Interventions are often abandoned for apparent lack of effectiveness when, in fact, the intervention was not implemented with fidelity.

Fidelity is a complex construct that can be measured at different levels and at different frequencies, and each measure yields different types of information.  Until now, we have not known how researchers measure fidelity.  This situation has been partially resolved by a recent review by Buckman et al. (2021), which examined how researchers assessed treatment integrity, how frequently it was evaluated, and at what level (school or individual implementer).

Buckman and colleagues reported that school-level measures were reported about twice as often as individual-level measures and were typically assessed once or twice per year.  Treatment integrity measured at the school level tells us how well the overall system is functioning with respect to implementation of the intervention.  Data at this level do not indicate whether all students are receiving a well-implemented intervention or whether some students are not receiving the intervention as planned.  Measuring treatment integrity at the level of the individual teacher indicates whether students in a particular classroom are receiving a well-implemented intervention.  Individual-level measures are essential for data-based decision-making when determining whether a student should receive more intensive services at Tier 2.  Low fidelity would suggest that, rather than increasing the intensity of service for the student, it would be wise to invest in improving the individual teacher’s implementation of the intervention.
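
A minimal sketch of why the level of measurement matters (the fidelity scores are hypothetical, not from the review): a respectable school-level average can mask a classroom where the intervention is barely implemented.

    # Hypothetical fidelity scores (fraction of Tier 1 components implemented),
    # one per teacher; the numbers are invented for illustration only.
    fidelity_by_teacher = {
        "Teacher A": 0.95,
        "Teacher B": 0.90,
        "Teacher C": 0.85,
        "Teacher D": 0.40,  # these students are not receiving the plan as designed
    }

    school_average = sum(fidelity_by_teacher.values()) / len(fidelity_by_teacher)
    print(f"School-level fidelity: {school_average:.0%}")  # 78% -- looks adequate

    # Individual-level data flag where to invest in implementation support
    # before escalating any one student to Tier 2.
    low_fidelity = {t: f for t, f in fidelity_by_teacher.items() if f < 0.80}
    print(f"Classrooms below 80%: {low_fidelity}")  # {'Teacher D': 0.4}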

Finally, Buckman and colleagues discussed the limitations of assessing treatment integrity only once or twice a year.  Such infrequent measurement does not tell us whether implementation with integrity is occurring consistently.  The challenge of assessing more frequently is that it places a high demand on resources.  Considerably more research is required to develop effective and efficient methods for evaluating treatment integrity.

Link: https://link.springer.com/article/10.1007/s43494-021-00044-4

Citation:

Buckman, M. M., Lane, K. L., Common, E. A., Royer, D. J., Oakes, W. P., Allen, G. E., … & Brunsting, N. C. (2021). Treatment integrity of primary (Tier 1) prevention efforts in tiered systems: Mapping the literature. Education and Treatment of Children, 44(3), 145–168.


What is Necessary to Successfully Implement School-wide Positive Behavioral Interventions and Supports?

December 17, 2021

School-wide Positive Behavioral Interventions and Supports (SWPBIS) is one of the most widely adopted frameworks for supporting prosocial behavior in schools; however, it is not uncommon for schools to abandon it before fully implementing it.  A recent review by Fox and colleagues (2021) sought to understand the facilitators of and barriers to implementing SWPBIS.  The facilitators identified were adequate resources, strong fidelity of implementation, effective SWPBIS team functioning, and meaningful collection and use of data.  The most common barriers identified by participants in the reviewed studies were staff beliefs that conflict with the philosophy of SWPBIS, poor implementation fidelity, and lack of resources.  Less frequently cited barriers included lack of supportive leadership, lack of staff buy-in, and school characteristics (e.g., school size, elementary versus high school).

The good news in this review is that many of the barriers can be addressed by assuring that the facilitators of implementation are well established.  For example, developing systems that promote high levels of implementation fidelity addresses the barrier of poor implementation fidelity.  More challenging is resolving the conflict between teachers’ beliefs and the core philosophy of SWPBIS.  It may be worth examining the roots of these beliefs to understand their basis and how, specifically, they are inconsistent with SWPBIS.  To some extent, it may be possible to incorporate teachers’ competing beliefs into the specific practices embedded in SWPBIS without compromising its core features.  In other instances, there may be so much resistance to SWPBIS practices that implementation efforts should not be initiated until teachers’ concerns have been addressed to their satisfaction.  Unless a substantial majority of teachers and administrators are willing to support the SWPBIS initiative, implementation will not be successful.  This highlights the critical role that the exploration and adoption stages play in implementation.

Link: https://link.springer.com/article/10.1007/s43494-021-00056-0

Citation

Fox, R. A., Leif, E. S., Moore, D. W., Furlonger, B., Anderson, A., & Sharma, U. (2021). A systematic review of the facilitators and barriers to the sustained implementation of school-wide positive behavioral interventions and supports. Education and Treatment of Children, 1–22.


What Do Teachers Think about Praise?

November 5, 2021

Praise is generally recognized as an empirically-supported approach to improving student behavior (Simonsen, Fairbanks, Briesch, Myers, & Sugai, 2008); however, in spite of the research evidence, praise is often under-utilized in classrooms (Floress & Jenkins, 2015; Gable, Hendrickson, Shores, & Young, 1983; Sutherland, Wehby, & Copeland, 2000), highlighting the research-to-practice gap.  Why don’t teachers use praise more often and more consistently?  Shernoff and colleagues (2020) attempted to answer this question.  They recruited 41 teachers who had identified praise as a professional development goal to participate in a coaching program aimed at increasing praise.  During the study, the teachers slowly increased the frequency and quality of their praise over a three-month period, suggesting that practice change takes time and that implementing praise may be more complex than is generally assumed.  After the study was completed, the teachers were asked about facilitators (helpful factors) and barriers (obstacles) to using praise.  Facilitators included being able to give students feedback without criticizing them, positive student reactions, and deliberate planning and reminders (planning how to use praise in the context of a specific lesson).  Barriers included praise interfering with instruction, conflicts with teachers’ education, training, and beliefs, and the context-dependent nature of praise.

When the baseline level is very low, using praise in a classroom is effectively an innovation.  From an implementation science perspective, the process leading to adoption can be complex and influenced by factors that are unrelated to the intervention itself.  For example, if an innovation conflicts with a teacher’s education, training, and beliefs, then the innovation will likely be met with resistance.  One way to reduce resistance is to have someone who is credible to the teacher champion the intervention, rather than outside consultants, trainers, or researchers; often the most credible person to a teacher is another teacher.  All of this highlights that introducing seemingly simple interventions is not a simple process.

Citation: Shernoff, E. S., Lekwa, A. L., Reddy, L. A., & Davis, W. (2020). Teachers’ use and beliefs about praise: A mixed-methods study. School Psychology Review, 49(3), 256–274.

Link: https://www.tandfonline.com/doi/pdf/10.1080/2372966X.2020.1732146?casa_token=qvVOyAJiz80AAAAA:sbPix3zyGYx7sc4Vos6V_DX3mIUzqnqp1eYGeSqaGSMVewTmnzNlPZEO1ZUO_7I4Tbs5sjL0V3c2

References:

  • Floress, M. T., & Jenkins, L. N. (2015). A preliminary investigation of kindergarten teachers’ use of praise in general education classrooms. Preventing School Failure: Alternative Education for Children and Youth, 59(4), 253–262. doi:10.1177/0198742917709472.
  • Gable, R. A., Hendrickson, J. M., Shores, R. E., & Young, C. C. (1983). Teacher-handicapped child classroom interactions. Teacher Education and Special Education: The Journal of the Teacher Education Division of the Council for Exceptional Children, 6(2), 88–95. doi:10.1177/019874299301800405.
  • Simonsen, B., Fairbanks, S., Briesch, A., Myers, D., & Sugai, G. (2008). Evidence-based practices in classroom management: Considerations for research to practice. Education and Treatment of Children, 31(3), 351–380. doi:10.1353/etc.0.0007.
  • Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effect of varying rates of behavior-specific praise on the on-task behavior of students with EBD. Journal of Emotional and Behavioral Disorders, 8(1), 2–8. doi:10.1177/106342660000800101.


What are Effective Behavior Management Strategies for Disruptive Behavior?

November 5, 2021

Disruptive behavior is one of the biggest challenges facing classroom teachers today.  Many of the students with the most disruptive behavior are classified as having emotional and behavioral disorders (EBD) or are at risk of developing them.  These students take up a disproportionate amount of classroom time, reducing time spent on instruction.  Generally, these students have not been responsive to class-wide behavior management approaches and require more individualized and intensive intervention.  This raises the question: what practices will benefit these students?  A recent review by Riden and colleagues (2021) attempted to answer this question through a systematic review of the literature.  They identified eight practices that met criteria to be considered evidence-based: (1) check-in/check-out, (2) functional assessment-based intervention, (3) group contingencies, (4) peer-mediated interventions, (5) self-management, (6) self-regulated strategy development for writing, (7) token economies, and (8) video modeling.  Another eleven practices were identified as promising: (1) praise, (2) opportunities to respond, (3) behavior contracting, (4) cooperative learning, (5) goal setting, (6) the good behavior game, (7) high-probability requests, (8) instructional choice, (9) self-determination, (10) social skills, and (11) time out.  It is important to recognize that practices described as promising may well be effective, but the empirical evidence base is not yet strong enough to warrant inclusion as evidence-based.  These practices should be considered when selecting approaches for addressing significant behavior problems.

These data are important because they can guide educators about which practices to adopt when addressing the behavior problems posed by disruptive students.

Citation: Riden, B. S., Kumm, S., & Maggin, D. M. (2021). Evidence-based behavior management strategies for students with or at risk of EBD: A mega review of the literature. Remedial and Special Education, 07419325211047947.

Link: https://journals-sagepub-com.dist.lib.usu.edu/doi/pdf/10.1177/07419325211047947?casa_token=_igJ42XflS4AAAAA:kGoB5TX6fn2kB-uMYFpNHhU3pa1GWEjyThRiasmNOcf2XcUAjpdr8WZd3f4eiTLxdDHSS2UcApQf


Can We Make Coaching More Cost-Effective?

November 5, 2021

One of the great challenges in education is training all staff to implement interventions.  There is considerable reliance on paraprofessionals, especially in special education, to support students, yet many paraprofessionals have minimal training in educational practices.  In surveys of paraprofessionals, one of the most frequently cited concerns is lack of training and support in behavior management (Mason et al., 2021).  In many cases, the training that does occur follows the traditional didactic model, and there is little evidence that it produces the outcomes it is supposed to yield.  An alternative training model that holds great promise is coaching; however, coaching often relies on outside coaches, which makes it cost-prohibitive for many districts.  A recent report by Sallese and Vannest (2021) offers an alternative that may make coaching more cost-effective.  In their research, classroom teachers coached the paraprofessionals working in their classrooms to increase the use of behavior-specific praise, which has been identified as an evidence-based component of classroom behavior management (Simonsen, Fairbanks, Briesch, Myers, & Sugai, 2008) but is often difficult to increase and maintain over time.  Because many teachers report little or no pre-service or in-service training focused on paraprofessional training and support (Douglas, Chapin, & Nolan, 2016), the teachers were provided a manual to guide their coaching efforts.  The components of the coaching package included self-monitoring, performance feedback, goal setting, modeling, and action planning.  In this study, all four of the paraprofessionals who received coaching increased their rate of behavior-specific praise.  In addition, 100% of the participants agreed that the procedures were appropriate and feasible in terms of the time and effort required to implement them.

This was a small-scale study, but it holds promise as a method for coaching implementers to carry out effective practices.  From an implementation perspective, it offers a cost-effective approach to increasing the internal capacity of a system to implement adopted practices.  Building internal capacity is critical if effective interventions are to be sustained over generations of implementers.

Citation: Sallese, M. R., & Vannest, K. J. (2021). Effects of a manualized teacher-led coaching intervention on paraprofessional use of behavior-specific praise. Remedial and Special Education, 07419325211017298.

Link: https://journals-sagepub-com.dist.lib.usu.edu/doi/pdf/10.1177/07419325211017298?casa_token=SxqgzIctDbYAAAAA:m6SYJpowkUZud_eynTS6oZUX0Bbn2ExL87kZ7PO8fzmZz7Di2CSKl08A9KO2Wv1h32uaf68TQuja

References:

  • Douglas, S. N., Chapin, S. E., & Nolan, J. F. (2016). Special education teachers’ experiences supporting and supervising paraeducators: Implications for special and general education settings. Teacher Education and Special Education, 39(1), 60–74. https://doi.org/gf86tz
  • Mason, R. A., Gunersel, A. B., Irvin, D. W., Wills, H. P., Gregori, E., An, Z. G., & Ingram, P. B. (2021). From the frontlines: Perceptions of paraprofessionals’ roles and responsibilities. Teacher Education and Special Education, 44(2), 97–116. https://doi.org/fwn6
  • Simonsen, B., Fairbanks, S., Briesch, A., Myers, D., & Sugai, G. (2008). Evidence-based practices in classroom management: Considerations for research to practice. Education and Treatment of Children, 31(3), 351–380.


Can We Close the Research to Practice Gap?

November 5, 2021

One of the persistent problems in education is the gap between what we know about effective educational practices and the practices frequently used in public schools, many of which lack empirical support.  The challenge for all educators is: how do we close the gap?  The flow of research to practice is often perceived as one-way: researchers develop effective interventions and disseminate them to practitioners, who are expected to adopt them (Ringeisen, Henderson, & Hoagwood, 2003).  Ringeisen et al. argue that this is not likely to result in widespread adoption of effective practices.  McLaughlin and colleagues (1997) have argued that having an array of effective practices is not sufficient for closing the research-to-practice gap.  In many instances, the practices developed by researchers are not a good contextual fit for school settings because the training and experience required of implementers is unreasonable within the school setting, the resources necessary for implementation are not present, and the time demands are unrealistic.  If the dominant model of disseminating empirically-supported interventions is not narrowing the research-to-practice gap, what should we do?  The goal is important, but we need effective alternatives to the common approach.

A recent report from the William T. Grant Foundation (Farrell, Penuel, Coburn, Daniel, & Steup, 2021), entitled Research-Practice Partnerships in Education: The State of the Field, offers one such alternative.  The authors define research-practice partnerships as “intentionally organized to connect diverse forms of expertise and shift power relations in the research endeavor to ensure that all partners have a say in the joint work.”  This is a significant shift from the usual practice in the development and dissemination of effective practices.  Five principles are associated with these partnerships: (1) they are long-term collaborations, (2) they work toward educational improvement or equitable transformation, (3) they feature engagement with research as a leading activity, (4) they are intentionally organized to bring together a diversity of expertise, and (5) they employ strategies to shift power relations in research endeavors to ensure that all participants have a say.  This is an important shift: practitioners are now partners with researchers.  It is a movement away from the researcher-as-expert model toward a model in which practitioners are equally expert as researchers, each in different domains of improving educational practices.

If practitioners are involved in guiding research from the beginning, then the resulting practices are more likely to be seen as usable when educators consider interventions to adopt.  The development of research-practice partnerships has the potential to increase the adoption of empirically-supported practices.

Citation: Farrell, C.C., Penuel, W.R., Coburn, C., Daniel, J., & Steup, L. (2021). Research-practice partnerships in education: The state of the field. William T. Grant Foundation.

Link: http://wtgrantfoundation.org/research-practice-partnerships-in-education-the-state-of-the-field

References:

McLaughlin, M. J., Leone, P. E., Meisel, S., & Henderson, K. (1997). Strengthen school and community capacity. Journal of Emotional and Behavioral Disorders, 5(1), 15–24.

Ringeisen, H., Henderson, K., & Hoagwood, K. (2003). Context matters: Schools and the “research to practice gap” in children’s mental health. School Psychology Review, 32(2), 153–168.