Categories for Decision Making

How effective is Schoolwide Positive Behavior Interventions and Supports?

December 5, 2018

A Review of Schoolwide Positive Behavior Interventions and Supports as a Framework for Reducing Disciplinary Exclusions

Schoolwide positive behavior interventions and supports (SWPBIS) is implemented in more than 23,000 schools. Several reviews have examined the impact of SWPBIS, including a meta-analysis of single-case design research. To date, however, there has been no review of randomized controlled trials (RCTs) on the effects of SWPBIS implementation in reducing disciplinary exclusions, including office discipline referrals and suspensions. The purpose of this study is to conduct a systematic meta-analysis of RCTs on SWPBIS. Ninety schools, including both elementary and high schools, met the criteria for inclusion. A statistically significant large treatment effect (g = −.86) was found for reducing school suspensions. No treatment effect was found for office discipline referrals.
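
For readers unfamiliar with the statistic, the Hedges' g reported above is a standardized mean difference with a small-sample correction. A minimal sketch of how such an effect size is computed (the data below are hypothetical suspension rates, not figures from the study):

```python
import math

def hedges_g(treatment, control):
    """Standardized mean difference (Cohen's d) with Hedges' small-sample correction."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' bias correction
    return d * correction

# Hypothetical suspensions per 100 students: SWPBIS schools vs. comparison schools
swpbis = [4, 6, 5, 3, 7]
comparison = [9, 11, 8, 10, 12]
print(round(hedges_g(swpbis, comparison), 2))  # prints -2.86; negative g = fewer suspensions
```

A negative g, as in the study's −.86, indicates the treatment group had fewer suspensions than the comparison group.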

Citation: Gage, N.A., Whitford, D.K. and Katsiyannis, A., 2018. A review of schoolwide positive behavior interventions and supports as a framework for reducing disciplinary exclusions. The Journal of Special Education, p.0022466918767847.

Link: Schoolwide Positive Behavior Interventions and Supports as a Framework for Reducing Disciplinary Exclusions



What does one study tell us about publication bias in published education research?

November 14, 2018

Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-Review

The purpose of this study is to estimate the extent to which publication bias is present in education and special education journals. Meta-analyses are increasingly used as the basis for educational decisions, yet research suggests that publication bias persists in published meta-analyses. The data reveal that 58% of meta-analyses did not test for possible publication bias. This paper shows that published studies were associated with significantly larger effect sizes than unpublished studies (d = 0.64). The authors suggest that meta-analyses report the effect sizes of published and unpublished studies separately in order to address publication bias.

Citation: Chow, J. C., & Ekholm, E. (2018). Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-review.




What steps can be taken to improve the quality of research?

October 15, 2018

Sharing successes and hiding failures: ‘reporting bias’ in learning and teaching research

An examination of current practices and standards in education research strongly supports the need for improvement. One issue that requires attention is reporting bias. When researchers selectively publish significant positive results and omit non-significant or negative results, the research literature is skewed; this is called ‘reporting bias’, and it can lead a study to tell a different story from the realities it is supposed to represent, causing both practitioners and researchers to develop an inaccurate understanding of the efficacy of an intervention. The authors identify potential reporting bias in a recent high-profile higher education meta-analysis. The paper examines factors that lead to bias and offers specific recommendations to journals, funders, ethics committees, and universities designed to reduce reporting bias.

Citation: Dawson, P., & Dawson, S. L. (2018). Sharing successes and hiding failures: ‘reporting bias’ in learning and teaching research. Studies in Higher Education, 43(8), 1405-1416.




How can open science increase confidence and the overall quality of special education research?

August 23, 2018

Promoting Open Science to Increase the Trustworthiness of Evidence in Special Education

The past two decades have seen an explosion of research to help special educators improve the lives of individuals with disabilities. At the same time, society is wrestling with the challenges of a post-truth age in which the public has difficulty discerning what to believe and what to consider untrustworthy. In this environment it becomes ever more important that researchers find ways to increase special educators’ confidence in the available knowledge base of practices that will reliably produce positive outcomes. This paper offers methods to increase that confidence through transparency, openness, and reproducibility of the research made available to special educators. To accomplish this, the authors propose that researchers in special education adopt emerging open science reforms such as preprints, data and materials sharing, preregistration of studies and analysis plans, and Registered Reports.

Citation: Cook, B. G., Lloyd, J. W., Mellor, D., Nosek, B. A., & Therrien, W. (2018). Promoting Open Science to Increase the Trustworthiness of Evidence in Special Education.




Why We Cling to Ineffective Practices.

April 3, 2018

Why Do School Psychologists Cling to Ineffective Practices? Let’s Do What Works.

This article examines the impact of poor decision making in school psychology, with a focus on determining eligibility for special education. Effective decision making depends on selecting and correctly using measures that yield reliable scores and valid conclusions, but measures judged by traditional standards of psychometric adequacy often come up short. The author suggests specific ways in which school psychologists might overcome barriers to using effective assessment and intervention practices in schools in order to produce better results.

Citation: VanDerHeyden, A. M. (2018, March). Why Do School Psychologists Cling to Ineffective Practices? Let’s Do What Works. In School Psychology Forum: Research in Practice (Vol. 12, No. 1, pp. 44-52). National Association of School Psychologists.




What is the Campbell Collaboration and how does the organization support educators to make informed evidence-based decisions?

March 15, 2018

The Campbell Collaboration: Providing Better Evidence for a Better World

News Summary: This paper recounts the history and development of the Campbell Collaboration, a “nonprofit organization with the mission of helping people make well-informed decisions about the effects of interventions in the social, behavioral, and educational domains.” The paper looks at the organization’s efforts to build a world library of accurate, synthesized evidence to inform policy and practice and improve human well-being worldwide. The Education section of the Campbell research library produces reviews on issues in early childhood, elementary, secondary, and postsecondary education. Topics range from academic programs, teacher qualifications, and testing to a wide variety of school-based interventions. Campbell systematic reviews and related evidence syntheses provide unbiased summaries of bodies of empirical evidence. The Campbell Collaboration has recently implemented changes designed to significantly increase the production, dissemination, and use of rigorous syntheses of research. Following the acquisition of new funding, the Campbell Collaboration embarked on a process of reform culminating in the appointment of a new Board of Directors and the creation of an annual members conference.

Citation: Littell, J. H., & White, H. (2018). The Campbell Collaboration: Providing better evidence for a better world. Research on Social Work Practice, 28(1), 6-12.




Addressing issues of publication bias and the importance of publishing null findings in education research.

January 31, 2018

Introduction to Special Issue: Null Effects and Publication Bias in Learning Disabilities Research

This paper addresses null effects and publication bias, two impediments to improving our knowledge of what works and what doesn’t in education. Despite great progress over the past twenty years in establishing empirical evidence for interventions and instructional practices, more needs to be done to identify not only what works but also what research can tell us about inaccurate evidence that can lead us down a blind alley. This element of the scientific process has often been overlooked in the body of published research. Therrien and Cook examine how the contingencies that control publication limit our knowledge by excluding studies showing that practices do not produce positive outcomes, or showing the conditions under which practices work. The paper highlights the fact that not all negative results are equal. One instance is when results across studies are mixed, some positive and some negative; negative effects in these situations can help identify the boundary conditions for where and when a practice can be used effectively. Null effects research is also valuable when popular opinion holds something to be true (for example, that sugar increases hyperactivity) but rigorous research reveals no significant cause-and-effect relationship.

Citation: Therrien, W. J., & Cook, B. G. (2018). Introduction to Special Issue: Null Effects and Publication Bias in Learning Disabilities Research. Learning Disabilities Research and Practice. DOI: 10.1111/ldrp.12163




The Importance and Dilemma of Publishing Studies That Do Not Produce Positive Results

December 18, 2017

(1) An Evaluation of a Learner Response System (2) The Effects of Financial Incentives on Standardized Testing (3) Do Teacher Observations Make Any Difference to Student Performance?

Commentary: This piece reports on three studies of practices that did not produce positive results and highlights the issue of publication bias in educational research. Powerful contingencies shape the publication process in ways that do not always serve the best interests of science. For example, promotion and tenure committees do not give published replication studies the same weight as original research, even though replication is a fundamental cornerstone of science and replication studies demonstrate the robustness of a finding. Journals also generally do not publish studies that show no effect, resulting in the “file drawer problem”: a study may be experimentally rigorous, but because it did not demonstrate an effect it is relegated to the researcher’s file drawer. The only common exception is a study showing that a widely accepted intervention is not effective. These contingencies produce a publication bias toward original research that demonstrates a positive effect, which can lead systematic reviews of the evidence for an intervention to overestimate its effectiveness. Publishing in peer-reviewed journals is a critical safeguard of research quality, but these contingencies create potential biases. The case against publishing null results is a bit more complicated. Some null results are unimportant: demonstrating that a car will not run if gas is put in the tires tells us little; the important demonstration is the positive relation between where the gas was put and the car actually running. Other null results are important because they show that a variable experimentally demonstrated to affect student behavior does not have that effect in a replication study or under a particular set of conditions.

News Summary:

  • An Evaluation of a Learner Response System: A Learner Response System (LRS) is a classroom feedback tool that is becoming increasingly popular. With an LRS, teachers and pupils use electronic handheld devices to provide immediate feedback during lessons. Given that feedback has been found to be a powerful tool in learning, it is not surprising that LRSs are being adopted. The important question remains: do LRSs increase student performance? This study tested a Learner Response System using Promethean handsets to assess whether it improves student outcomes. It found no evidence that math or reading outcomes improved after two years of using the system.


  • The Effects of Financial Incentives on Standardized Testing: Standardized testing is increasingly used to hold educators accountable, and incentives are often offered as a way to improve student test performance. This study examines the impact of incentives for students, parents, and tutors on standardized test results. The researchers provided incentives on specially designed tests that measure the same skills as the official state standardized tests; performance on the official tests was not incentivized. The study finds substantial improvement in performance on the incentivized tests, but the gains did not generalize to the official tests. This calls into question how to use incentives so they actually produce the desired outcomes.


  • Do Teacher Observations Make Any Difference to Student Performance? Research strongly suggests that feedback obtained through direct observation of performance can be a powerful tool for improving teachers’ skills. This study examines a peer teacher observation method used in England. The study found no evidence that Teacher Observation improved student language and math scores.


(1) Education Endowment Foundation (2017). Learner Response System. Education Endowment Foundation. Retrieved

(2) List, J. A., Livingston, J. A., & Neckermann, S. (2016). Do Students Show What They Know on Standardized Tests? Working paper. Available at:

(3) Education Endowment Foundation (2017). Teacher Observation. Education Endowment Foundation. Retrieved







Multitiered System of Support (MTSS) Overview

December 4, 2017

Framework for Improving Education Outcomes

Multitiered system of support (MTSS) is a framework for organizing service delivery. At its core is the adoption and implementation of a continuum of evidence-based interventions that result in improved academic and behavioral outcomes for all students. MTSS is a data-based decision-making approach built on frequent screening of all students’ progress and intervention for students who are not making adequate progress.
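
As an illustration only, the screen-then-intervene logic at the heart of MTSS can be sketched in a few lines. The percentile cutoffs and student data below are hypothetical placeholders, not part of the Wing Institute framework or any published standard:

```python
# Hypothetical MTSS screening sketch; cutoffs are illustrative assumptions.
def assign_tier(screening_percentile):
    """Map a universal-screening percentile to a level of support."""
    if screening_percentile >= 25:
        return "Tier 1"  # core instruction is adequate for this student
    if screening_percentile >= 10:
        return "Tier 2"  # targeted small-group intervention
    return "Tier 3"      # intensive, individualized intervention

# Screen all students, then flag those needing intervention
students = {"Student A": 60, "Student B": 18, "Student C": 5}
tiers = {name: assign_tier(pct) for name, pct in students.items()}
print(tiers)  # prints {'Student A': 'Tier 1', 'Student B': 'Tier 2', 'Student C': 'Tier 3'}
```

In practice the decision rules combine multiple measures and repeated progress monitoring rather than a single cutoff, but the data-based structure is the same.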

Citation: States, J., Detrich, R., & Keyworth, R. (2017). Multitiered System of Support Overview. Oakland, CA: The Wing Institute.




Treatment Integrity in the Problem Solving Process (Wing Institute Paper)

October 4, 2017

The usual approach to determining whether an intervention is effective for a student is to review student outcome data; however, this is only part of the task. Student data can be understood only if we know how well the intervention was implemented. Without treatment integrity data, student outcome data are largely meaningless: no judgment can be made about the effectiveness of the intervention, because poor outcomes can be a function of an ineffective intervention or of poor implementation. Without treatment integrity data, there is a risk that an intervention will be judged ineffective when, in fact, the quality of implementation was so inadequate that it would be unreasonable to expect positive outcomes.
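
The paper's core logic, that outcome data are interpretable only alongside integrity data, amounts to a simple decision table. The sketch below is illustrative; the 80% threshold is a hypothetical placeholder, not a standard from the paper:

```python
def judge_intervention(outcome_met, integrity_pct, integrity_threshold=80):
    """Interpret student outcomes only in light of how well the plan was implemented.

    integrity_pct: percentage of intervention steps delivered as planned.
    The default threshold is an illustrative assumption, not a published standard.
    """
    if integrity_pct < integrity_threshold:
        # Low integrity: outcomes tell us nothing about the intervention itself.
        return "inconclusive: improve implementation before judging the intervention"
    if outcome_met:
        return "intervention effective"
    return "intervention ineffective as designed"

# A poor outcome with poor implementation does NOT condemn the intervention
print(judge_intervention(outcome_met=False, integrity_pct=50))
```

Running the example prints the "inconclusive" branch: the same poor outcome leads to opposite conclusions depending on the integrity data, which is exactly the paper's point.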

Citation: Detrich, R., States, J., & Keyworth, R. (2017). Treatment Integrity in the Problem Solving Process. Oakland, CA: The Wing Institute.