Categories for Decision Making

Are more high-quality education studies produced today than thirty years ago?

January 28, 2019

High-Quality Education Research

A recent article published in Best Evidence in Brief examines the quantity and quality of education research. Robert Slavin highlights the progress made over the past 30 years in delivering the evidence that education practitioners need to make informed decisions. His conclusions are based on three studies: Effective Programs for Struggling Readers: A Best-Evidence Synthesis; A Synthesis of Quantitative Research on Reading Programs for Secondary Students; and Effective Programs in Elementary Mathematics: A Best-Evidence Synthesis. The research found that the number of rigorous randomized or quasi-experimental studies in elementary reading for struggling readers, secondary reading, and elementary math rose significantly over the past 20 years. Despite these important gains, the most recent trend may be heading in the wrong direction. Given the importance of research in developing an effective evidence-based culture in education, educators need to diligently support the production of the types of research (including replication studies) essential to building a robust evidence base.

Citations:

Slavin, R. (2019). Replication. [Blog post]. Retrieved from https://robertslavinsblog.wordpress.com/2019/01/24/replication/

Baye, A., Inns, A., Lake, C., & Slavin, R. E. (2018). A synthesis of quantitative research on reading programs for secondary students. Reading Research Quarterly.

Inns, A., Lake, C., Pellegrini, M., & Slavin, R. (2018). Effective programs for struggling readers: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

Pellegrini, M., Inns, A., & Slavin, R. (2018). Effective programs in elementary mathematics: A best-evidence synthesis. Paper presented at the annual meeting of the Society for Research on Educational Effectiveness, Washington, DC.

Link: For copies of the papers presented at the annual meeting, please contact sdavis@SuccessForAll.org, and for the Baye article, go to https://ila.onlinelibrary.wiley.com/doi/abs/10.1002/rrq.229

Why do evidence-based practices frequently fail to produce positive results?

January 22, 2019

Citation: Gonzalez, N. (2018). When evidence-based literacy programs fail. Phi Delta Kappan, 100(4), 54–58. https://doi.org/10.1177/0031721718815675

How can teachers overcome obstacles to executing effective classroom management?

January 22, 2019

Barriers to Implementing Classroom Management and Behavior Support Plans: An Exploratory Investigation. Ample evidence supports effective classroom management’s place in maximizing student achievement. Unfortunately, sustained implementation of classroom management strategies too often fails. This study examines obstacles encountered by 33 educators and suggests interventions to overcome impediments to the effective delivery of classroom management interventions or behavior support plans. Having the right classroom management plan isn’t enough if you can’t deliver the strategies to the students in the classroom.

Citation: Collier‐Meek, M. A., Sanetti, L. M., & Boyle, A. M. (2019). Barriers to implementing classroom management and behavior support plans: An exploratory investigation. Psychology in the Schools, 56(1), 5–17.

Link: https://onlinelibrary.wiley.com/doi/pdf/10.1002/pits.22127

The Myth of Learning Styles

December 6, 2018

The Learning Styles Educational Neuromyth: Lack of Agreement Between Teachers’ Judgments, Self-Assessment, and Students’ Intelligence. The idea of learning styles (LS) has been overwhelmingly embraced by teachers and the public for over forty years. International surveys of teachers have shown that more than 90% believe that grouping students into categories, such as auditory, visual, or kinesthetic learners, or concrete versus abstract learners, will enhance student achievement. This study examined the hypothesis that teachers’ and students’ assessments of preferred LS correspond. The study found no relationship between pupils’ self-assessments and teachers’ assessments: the two sets of answers simply didn’t match. The study suggests that teachers cannot accurately assess the LS of their students. This is important because if teachers cannot accurately identify which style is preferred, they cannot assign the appropriate curriculum to each student. For a thorough summary of the research on this topic, Daniel Willingham’s article “Does Tailoring Instruction to Learning Styles Help Students Learn?” offers arguments for and against LS. At this time the preponderance of evidence finds learning styles to have no basis in fact, despite the very strong and persistent preference teachers and the public have for the concept.

Citation: Papadatou-Pastou, M., Gritzali, M., & Barrable, A. (2018). The Learning Styles Educational Neuromyth: Lack of Agreement Between Teachers’ Judgments, Self-Assessment, and Students’ Intelligence. Frontiers in Education, 3, 105. doi: 10.3389/feduc.2018.00105

Link: https://www.frontiersin.org/articles/10.3389/feduc.2018.00105/full

What obstacles do teachers face in using data for decision making?

December 5, 2018

Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Underlying many accountability policies is the assumption that standardized test data and other common sources of data will be used to make decisions that result in changes to instructional practices. This study examines longitudinal data from nine high schools nominated as leading practitioners of Continuous Improvement (CI) practices. The researchers compared continuous improvement best practices to teachers’ actual use of data in making decisions. The study found teachers to be receptive, but significant obstacles interfered with using data effectively to change instruction. The analysis showed that cultural values and practices inconsistent with accountability policies and continuous improvement practices impeded implementation. The researchers identify barriers to the use of testing and other data that help to account for the less-than-successful results. Given the current understanding of the importance of implementation science in the effective application of any new practice, these findings are not a surprise. As our colleague Ronnie Detrich is quoted as saying, “Implementation is where great ideas go to die.”

Citation: Ingram, D., Louis, K. S., & Schroeder, R. G. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106(6), 1258–1287.

Link: Accountability policies and teacher decision making: Barriers to the use of data to improve practice

How effective is Schoolwide Positive Behavior Interventions and Supports?

December 5, 2018

A Review of Schoolwide Positive Behavior Interventions and Supports as a Framework for Reducing Disciplinary Exclusions. Schoolwide positive behavior interventions and supports (SWPBIS) is implemented in more than 23,000 schools. Several reviews have examined the impact of SWPBIS, including a meta-analysis of single-case design research. However, to date, there has been no review of randomized controlled trials (RCTs) on the effects of SWPBIS implementation in reducing disciplinary exclusions, including office discipline referrals and suspensions. The purpose of this study was to conduct a systematic meta-analysis of RCTs on SWPBIS. Ninety schools, including both elementary and high schools, met the criteria for inclusion in this study. A statistically significant, large treatment effect (g = −.86) was found for reducing school suspensions. No treatment effect was found for office discipline referrals.

Citation: Gage, N. A., Whitford, D. K., & Katsiyannis, A. (2018). A review of schoolwide positive behavior interventions and supports as a framework for reducing disciplinary exclusions. The Journal of Special Education. Advance online publication.

Link: Schoolwide Positive Behavior Interventions and Supports as a Framework for Reducing Disciplinary Exclusions

What does one study tell us about publication bias in studies published in the field of education?

November 14, 2018

Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-Review

The purpose of this study is to estimate the extent to which publication bias is present in education and special education journals. Meta-analyses are increasingly used as the basis for making educational decisions, yet research suggests that publication bias continues to exist in published meta-analyses. The data reveal that 58% of meta-analyses did not test for possible publication bias. This paper shows that published studies were associated with significantly larger effect sizes than unpublished studies (d = 0.64). The authors suggest that meta-analyses report effect sizes of published and unpublished studies separately in order to address publication bias.

Citation: Chow, J. C., & Ekholm, E. (2018). Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-Review.

Link: https://link.springer.com/article/10.1007/s10648-018-9437-7

What steps can be taken to improve the quality of research?

October 15, 2018

Sharing successes and hiding failures: ‘reporting bias’ in learning and teaching research

An examination of current practices and standards in education research strongly supports the need for improvement. One issue that requires attention is reporting bias, which can lead a study to tell a different story from the realities it is supposed to represent. When researchers selectively publish significant positive results and omit non-significant or negative results, the research literature is skewed; this ‘reporting bias’ can cause both practitioners and researchers to develop an inaccurate understanding of an intervention’s efficacy. Potential reporting biases are identified in a recent high-profile higher education meta-analysis. The paper examines the factors that lead to bias and offers specific recommendations to journals, funders, ethics committees, and universities designed to reduce reporting bias.

Citation: Dawson, P., & Dawson, S. L. (2018). Sharing successes and hiding failures: ‘Reporting bias’ in learning and teaching research. Studies in Higher Education, 43(8), 1405–1416.

Link: https://srhe.tandfonline.com/doi/pdf/10.1080/03075079.2016.1258052?needAccess=true

How can open science increase confidence in and the overall quality of special education research?

August 23, 2018

Promoting Open Science to Increase the Trustworthiness of Evidence in Special Education

The past two decades have seen an explosion of research to help special educators improve the lives of individuals with disabilities. At the same time, society is wrestling with the challenges of a post-truth age in which the public has difficulty discerning what to believe and what to consider untrustworthy. In this environment it becomes ever more important that researchers find ways to increase special educators’ confidence in the available knowledge base of practices that will reliably produce positive outcomes. This paper offers methods to increase confidence through transparency, openness, and reproducibility of the research made available to special educators. To accomplish this, the authors propose that researchers in special education adopt emerging open science reforms such as preprints, data and materials sharing, preregistration of studies and analysis plans, and Registered Reports.

Citation: Cook, B. G., Lloyd, J. W., Mellor, D., Nosek, B. A., & Therrien, W. (2018). Promoting Open Science to Increase the Trustworthiness of Evidence in Special Education.

Link: https://osf.io/zqr69

Why We Cling to Ineffective Practices

April 3, 2018

Why Do School Psychologists Cling to Ineffective Practices? Let’s Do What Works.

This article examines the impact of poor decision making in school psychology, with a focus on determining eligibility for special education. Effective decision making depends upon the selection and correct use of measures that yield reliable scores and valid conclusions, but traditionally used measures often fall short of psychometric adequacy. The author suggests specific ways in which school psychologists might overcome barriers to using effective assessment and intervention practices in schools in order to produce better results.

Citation: VanDerHeyden, A. M. (2018, March). Why Do School Psychologists Cling to Ineffective Practices? Let’s Do What Works. In School Psychology Forum: Research in Practice (Vol. 12, No. 1, pp. 44–52). National Association of School Psychologists.

Link: https://www.researchgate.net/publication/324065605_Why_Do_School_Psychologists_Cling_to_Ineffective_Practices_Let%27s_Do_What_Works