Categories for Decision Making

What is the Campbell Collaboration, and how does the organization support educators in making informed, evidence-based decisions?

March 15, 2018

The Campbell Collaboration: Providing Better Evidence for a Better World

News Summary: This paper recounts the history and development of the Campbell Collaboration, a “nonprofit organization with the mission of helping people make well-informed decisions about the effects of interventions in the social, behavioral, and educational domains.” The paper looks at the organization’s efforts to build a world library of accurate, synthesized evidence to inform policy and practice and improve human well-being worldwide. The Education section of the Campbell research library produces reviews on issues in early childhood, elementary, secondary, and postsecondary education. Topics range from academic programs, teacher qualifications, and testing to a wide variety of school-based interventions. Campbell systematic reviews and related evidence syntheses provide unbiased summaries of bodies of empirical evidence. The Campbell Collaboration has recently implemented changes in its practices designed to significantly increase the production, dissemination, and use of rigorous syntheses of research. Following the acquisition of new funding, the organization embarked on a process of reform culminating in the appointment of a new Board of Directors and the creation of an annual members conference.

Citation: Littell, J. H., & White, H. (2018). The Campbell Collaboration: Providing better evidence for a better world. Research on Social Work Practice, 28(1), 6-12.

Link: http://journals.sagepub.com/doi/full/10.1177/1049731517703748


Addressing issues of publication bias and the importance of publishing null findings in education research.

January 31, 2018

Introduction to Special Issue: Null Effects and Publication Bias in Learning Disabilities Research

This paper addresses null effects and publication bias, two important impediments to improving our knowledge of what works and what doesn’t in education. Despite great progress over the past twenty years in establishing empirical evidence for interventions and instructional practices, more needs to be accomplished in identifying not only what works but also what research can tell us about practices that do not, so that inaccurate evidence does not lead us down a blind alley. This key element of the scientific process has often been overlooked in the body of published research. Therrien and Cook examine how the contingencies that control publication are limiting our knowledge by excluding results suggesting that practices don’t produce positive outcomes, as well as results clarifying the conditions under which practices work. The paper highlights the fact that not all negative results are equal. One instance is when research results are mixed, with some studies revealing positive results and others negative outcomes; negative effects in these situations can help identify the boundary conditions for where and when a practice can be used effectively. Another benefit of null-effects research arises when popular opinion holds something to be true (for example, that sugar increases hyperactivity) but rigorous research reveals no significant cause-and-effect relationship.

Citation: Therrien, W. J., & Cook, B. G. (2018). Introduction to special issue: Null effects and publication bias in learning disabilities research. Learning Disabilities Research & Practice. https://doi.org/10.1111/ldrp.12163

Link: https://www.researchgate.net/publication/322700698_Introduction_to_Special_Issue_Null_Effects_and_Publication_Bias_in_Learning_Disabilities_Research


The Importance and Dilemma of Publishing Studies That Do Not Produce Positive Results

December 18, 2017

(1) An Evaluation of a Learner Response System (2) The Effects of Financial Incentives on Standardized Testing (3) Do Teacher Observations Make Any Difference to Student Performance?

Commentary: This piece reports on three examples of studies of practices that did not produce positive results and highlights the issue of publication bias in educational research. Powerful contingencies shape the publication process in ways that do not always work in the best interest of science. For example, promotion and tenure committees do not give published replication studies the same weight as original research. Also, journals generally do not publish studies that show no effect, resulting in the “file drawer problem”; the main exception is a study showing that a widely accepted intervention is not effective. Studies that show no effect may be experimentally rigorous, but because they did not demonstrate an effect they are relegated to the researcher’s file drawer. These contingencies produce a publication bias in favor of original research that demonstrates a positive effect, which can lead systematic reviews of the evidence for an intervention to overestimate its effectiveness. Publishing in peer-reviewed journals is a critical safeguard of research quality, but these contingencies introduce potential biases. Replication is a fundamental cornerstone of science: replication studies demonstrate the robustness of a finding. The bias against publishing null results is a bit more complicated, because some null results are unimportant. For example, demonstrating that a car will not run if gas is put in the tires tells us little; the informative demonstration is the one showing a positive relation between where the gas is put and whether the car runs. Other null results are important because they show that a variable experimentally demonstrated to affect student behavior does not have that effect in a replication study or under a particular set of conditions.
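
To make the overestimation concrete, here is a minimal simulation sketch, not drawn from any of the papers above, of how censoring null results inflates the apparent effect size in the published literature. The true effect, sample size, and significance filter are illustrative assumptions.

```python
# Sketch of the "file drawer problem": if only positive, significant-looking
# results are published, the published literature overestimates the true effect.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed true standardized effect (Cohen's d)
N_PER_GROUP = 30    # assumed sample size per study arm
N_STUDIES = 2000    # number of simulated studies

# Sampling error of d is roughly sqrt(2 / n per group) for small effects.
SE = (2 / N_PER_GROUP) ** 0.5

def simulate_study_effect() -> float:
    """Return the observed effect size for one simulated study."""
    return random.gauss(TRUE_EFFECT, SE)

effects = [simulate_study_effect() for _ in range(N_STUDIES)]

# Crude publication filter: only observed effects more than ~1.96 standard
# errors above zero (i.e., "statistically significant") get published.
published = [d for d in effects if d > 1.96 * SE]

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean of all studies:         {statistics.mean(effects):.2f}")
print(f"Mean of 'published' studies: {statistics.mean(published):.2f}")
print(f"Share of studies published:  {len(published) / len(effects):.0%}")
```

Running the sketch shows the mean of the “published” studies landing well above the true effect, which is exactly the distortion a systematic review inherits when the file drawer stays closed.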

News Summary:

  • An Evaluation of a Learner Response System: A Learner Response System (LRS) is a classroom feedback tool that is becoming increasingly popular. LRS refers to the practice of teachers and pupils using electronic handheld devices to provide immediate feedback during lessons. Given that feedback has been found to be a powerful tool in learning, it is not surprising that LRSs are being adopted. The important question remains: do LRSs increase student performance? This study tested a Learner Response System using Promethean handsets and found no evidence that math or reading outcomes improved after two years of using the system.
  • The Effects of Financial Incentives on Standardized Testing: Standardized testing has increasingly been used to hold educators accountable, and incentives are often offered as a way to improve student test performance. This study examines the impact of incentives for students, parents, and tutors on standardized test results. The researchers provided incentives on specially designed tests that measure the same skills as the official state standardized tests; performance on the official tests was not incentivized. The study finds substantial improvement in performance on the incentivized tests, but the gains did not generalize to the official tests. This calls into question how to use incentives so they actually produce the desired outcomes.
  • Do Teacher Observations Make Any Difference to Student Performance? Research strongly suggests that feedback obtained through direct observation of performance can be a powerful tool for improving teachers’ skills. This study examines a peer teacher observation method used in England and found no evidence that Teacher Observation improved student language and math scores.

Citation:

(1) Education Endowment Foundation. (2017). Learner Response System. Retrieved from https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/learner-response-system/

(2) List, J. A., Livingston, J. A., & Neckermann, S. (2016). Do students show what they know on standardized tests? Working paper. Available at: http://works.bepress.com/jeffrey_livingston/19/

(3) Education Endowment Foundation. (2017). Teacher Observation. Retrieved from https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/teacher-observation/

Link:

(1) https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/learner-response-system/

(2) http://works.bepress.com/jeffrey_livingston/19/

(3) https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/teacher-observation/


Multitiered System of Support (MTSS) Overview

December 4, 2017

Framework for Improving Education Outcomes

Multitiered system of support (MTSS) is a framework for organizing service delivery. At the core of MTSS is the adoption and implementation of a continuum of evidence-based interventions that result in improved academic and behavioral outcomes for all students. MTSS relies on data-based decision making: progress is screened frequently for all students, and intervention is provided for students who are not making adequate progress.
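
As a minimal sketch, not taken from the overview, the screening-and-tiering decision rule might look like the following; the benchmark and cutoff values, field names, and three-tier structure are illustrative assumptions.

```python
# Sketch of MTSS screening logic: every student is screened, and students
# below hypothetical benchmarks are matched to more intensive tiers of support.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    screening_score: float  # e.g., words read correctly per minute

def assign_tier(student: Student, benchmark: float = 100.0,
                intensive_cutoff: float = 70.0) -> int:
    """Return 1 (core instruction), 2 (targeted support), or 3 (intensive support)."""
    if student.screening_score >= benchmark:
        return 1  # making adequate progress under core instruction
    if student.screening_score >= intensive_cutoff:
        return 2  # below benchmark: add targeted small-group intervention
    return 3      # well below benchmark: add intensive individualized intervention

roster = [Student("A", 112.0), Student("B", 84.5), Student("C", 58.0)]
for s in roster:
    print(f"Student {s.name}: Tier {assign_tier(s)}")
```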

Citation: States, J., Detrich, R., & Keyworth, R. (2017). Multitiered System of Support Overview. Oakland, CA: The Wing Institute.

Link: https://winginstitute.org/school-programs-multi-tiered-systems


Treatment Integrity in the Problem Solving Process (Wing Institute Paper)

October 4, 2017

The usual approach to determining if an intervention is effective for a student is to review student outcome data; however, this is only part of the task. Outcome data can be understood only if we know how well the intervention was implemented. Student data without treatment integrity data are largely meaningless: without knowing how well an intervention was implemented, no judgment can be made about its effectiveness. Poor outcomes can be a function of an ineffective intervention or of poor implementation. Without treatment integrity data, there is a risk that an intervention will be judged ineffective when, in fact, the quality of implementation was so inadequate that it would be unreasonable to expect positive outcomes.

Citation: Detrich, R., States, J., & Keyworth, R. (2017). Treatment Integrity in the Problem Solving Process. Oakland, CA: The Wing Institute.

Link: https://www.winginstitute.org/treatment-integrity-problem-solving


Approaches to Increasing Treatment Integrity (Wing Institute Paper)

October 4, 2017

Student achievement scores in the United States remain stagnant despite constant reform. New initiatives arise promising hope, only to disappoint after being adopted, implemented, and quickly found wanting. This cycle of reform followed by failure has had a demoralizing effect on schools, making new reform efforts more problematic. These efforts frequently fail because implementing new practices is far more challenging than expected and requires that greater attention be paid to implementation. A fundamental factor leading to failure is inattention to treatment integrity. When innovations are not implemented as designed, it should not be a surprise that anticipated benefits are not forthcoming. The question is: What strategies can educators employ to increase the likelihood that practices will be implemented as designed?

Strategies designed to increase treatment integrity fall into two categories: antecedent-based and consequence-based. Antecedent-based strategies arrange setting events or environmental factors before a new practice is implemented, increasing the likelihood of success and eliminating conditions that decrease it. Consequence-based strategies address events that follow implementation of the new practice and that are likely to increase or decrease treatment integrity.

Citation: Detrich, R., States, J., & Keyworth, R. (2017). Approaches to Increasing Treatment Integrity. Oakland, CA: The Wing Institute.

Link: https://www.winginstitute.org/treatment-integrity-strategies


Dimensions of Treatment Integrity Overview (Wing Institute Paper)

October 4, 2017

Historically, treatment integrity has been defined as implementation of an intervention as planned (Gresham, 1989). More recently, treatment integrity has been reimagined as multidimensional (Dane & Schneider, 1998). This conceptualization identifies four dimensions relevant to practice: (a) exposure (dosage), (b) adherence, (c) quality of delivery, and (d) student responsiveness. These dimensions do not stand alone; they interact to determine the ultimate effectiveness of an intervention. Educators should assess all four dimensions to assure that an intervention is being implemented as intended.
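
As a minimal sketch, not taken from the paper, the four dimensions could be recorded and summarized for a single intervention session as follows; the field names, rating scales, and example values are illustrative assumptions.

```python
# Sketch: recording and summarizing the four dimensions of treatment
# integrity (exposure/dosage, adherence, quality of delivery, responsiveness).
from dataclasses import dataclass

@dataclass
class IntegritySession:
    steps_planned: int        # adherence: protocol steps scheduled
    steps_delivered: int      # adherence: protocol steps actually delivered
    minutes_planned: float    # exposure (dosage) planned
    minutes_delivered: float  # exposure (dosage) delivered
    quality_rating: int       # quality of delivery, e.g., 1-5 observer rating
    engagement_rating: int    # student responsiveness, e.g., 1-5 rating

def summarize(s: IntegritySession) -> dict:
    """Summarize each dimension; percentages for adherence and dosage."""
    return {
        "adherence_pct": 100.0 * s.steps_delivered / s.steps_planned,
        "dosage_pct": 100.0 * s.minutes_delivered / s.minutes_planned,
        "quality": s.quality_rating,
        "responsiveness": s.engagement_rating,
    }

session = IntegritySession(steps_planned=10, steps_delivered=8,
                           minutes_planned=30, minutes_delivered=25,
                           quality_rating=4, engagement_rating=3)
print(summarize(session))
```

Tracking the dimensions separately matters because, as the paper notes, they interact: high adherence with low dosage or low quality of delivery can still undermine an intervention's effect.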

Citation: Detrich, R., States, J., & Keyworth, R. (2017). Dimensions of Treatment Integrity Overview. Oakland, CA: The Wing Institute.

Link: https://www.winginstitute.org/treatment-integritydimensions


Overview of Treatment Integrity (Wing Institute paper)

October 4, 2017

For the best chance of a positive impact on educational outcomes, two conditions must be met: (a) effective interventions must be adopted, and (b) those interventions must be implemented with sufficient quality (treatment integrity) to ensure benefit. To date, the emphasis in education has been on identifying effective interventions, with less concern for how they are implemented. The research on implementation is not encouraging: treatment integrity scores are often very low, and in practice implementation is rarely assessed. If an intervention with a strong research base is not implemented with a high level of treatment integrity, then students do not actually experience the intervention and there is no reason to assume they will benefit from it. Under these circumstances, it is not possible to know whether poor outcomes are the result of an ineffective intervention or of poor implementation. Historically, treatment integrity has been defined as implementing an intervention as prescribed. More recently, it has been conceptualized as having multiple dimensions, among them dosage and adherence, which must be measured to ensure that implementation is occurring at adequate levels.

Citation: Detrich, R., States, J., & Keyworth, R. (2017). Overview of Treatment Integrity. Oakland, CA: The Wing Institute.

Link: https://www.winginstitute.org/evidence-based-decision-making-treatment-integrity


Predatory Journals: How do you know whom to trust?

September 6, 2017

Beall’s List of Predatory Journals and Publishers

This news item offers a list of questionable scholarly open-access publishers. In an era in which we are bombarded with volumes of research, it becomes ever more challenging to decide which journals and publishers are reputable. This website reviews, assesses, and provides guidelines for deciding which are trustworthy, whether you want to submit articles, serve as an editor, or serve on an editorial board. The list consists mostly of open-access journals, although a few non-open-access publishers whose practices match those of predatory publishers have been added.

Citation: Beall, J. (2012). Predatory publishers are corrupting open access. Nature, 489(7415), 179.

Web Site: Beall’s List of Predatory Journals and Publishers


How to Use Expenditure-to-Performance Ratios to Boost Student Achievement Cost Effectively

August 1, 2017

A Guide to Calculating District Expenditure-to-Performance Ratios Using Publicly Available Data

Efficient use of educational resources is a perennial challenge for school systems, and maximizing the impact of education interventions on student achievement is an important goal for every district. This guide examines expenditure-to-performance ratios as an indicator that can help school systems decide which interventions make sense when education dollars are at a premium. It describes how states and districts can use publicly available data on district expenditures and student academic achievement to calculate six district-level expenditure-to-performance ratios.
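
As a minimal sketch, not taken from the REL guide, one such ratio could be computed as per-pupil spending divided by an achievement measure; the guide describes six specific ratios, while the function, district names, and figures below are illustrative assumptions.

```python
# Sketch: a district-level expenditure-to-performance ratio from public data.
# Lower values suggest more achievement per dollar spent.
def expenditure_to_performance_ratio(total_expenditure: float,
                                     enrollment: int,
                                     pct_proficient: float) -> float:
    """Per-pupil spending per percentage point of students scoring proficient."""
    per_pupil = total_expenditure / enrollment
    return per_pupil / pct_proficient

# Hypothetical districts: (total spending in $, enrollment, % proficient)
districts = {
    "District A": (120_000_000, 10_000, 62.0),
    "District B": (95_000_000, 9_500, 48.0),
}
for name, (spend, enrollment, proficient) in districts.items():
    ratio = expenditure_to_performance_ratio(spend, enrollment, proficient)
    print(f"{name}: ${ratio:,.0f} per pupil per percentage point proficient")
```

Comparing such ratios across districts (or years) is what lets decision makers flag where spending and achievement are out of line, which is the use case the guide addresses.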

Citation: Ryan, S., Lavigne, H. J., Zweig, J. S., & Buffington, P. J. (2017). A guide to calculating district expenditure-to-performance ratios using publicly available data. (REL 2017-179). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast & Islands.

Link: http://files.eric.ed.gov/fulltext/ED572599.pdf