This special issue of Strategies is devoted to highlighting this knowledge base and corresponding practices. In this issue you will find an in-depth case study of a school district engaged in systemic improvement, using the principles and practices of what Dr. Jackson calls the Pedagogy of Confidence®.
Destination: Equity. (2015). Strategies, 17(1) . Retrieved from https://www.aasa.org/uploadedFiles/Resources/Other_Resources/STRATEGIES-SEPT-15-FINAL.pdf
The focus of the book is that the purpose of statistics is to organize a useful argument from quantitative evidence, using a form of principled rhetoric. Five criteria, described by the acronym MAGIC (magnitude, articulation, generality, interestingness, and credibility), are proposed as crucial features of a persuasive, principled argument.
Abelson, R. P. (2012). Statistics as principled argument. Psychology Press.
Value-added assessment proves that very good teaching can boost student learning and that family background does not determine a student's destiny. Students taught by highly effective teachers several years in a row earn higher test scores than students assigned to particularly ineffective teachers.
American Education Research Association (AERA). (2004). Teachers matter: Evidence from value-added assessments. Research Points, 2(2). Retrieved from http://www.aera.net/ Portals/38/docs/Publications/Teachers%20Matter.pdf
This article explores factors influencing the sustained use of Peer Assisted Learning Strategies (PALS) in math in one elementary school.
Baker, S., Gersten, R., Dimino, J. A., & Griffiths, R. (2004). The sustained use of research-based instructional practice: A case study of peer-assisted learning strategies in mathematics. Remedial and Special Education, 25(1), 5-24.
The science wars have been raging for decades, raising many questions about the power of science. This book not only helps resolve many current debates about science, but it is also a major contribution to explaining science in terms of a powerful philosophical system.
Baldwin, J. D. (2015). Ending the science wars. Routledge.
Using professional self-regulation in medicine as a model, the National Commission on Teaching and America's Future has proposed sweeping changes in how teachers are trained and licensed, claiming that the reforms are well-grounded in research. This paper argues that the research literature offers far less support for the Commission's recommendations than is claimed.
Ballou, D., & Podgursky, M. (2000). Reforming Teacher Preparation and Licensing: What is the Evidence?. Teachers College Record, 102(1), 5-27.
The CJAA funded the nation’s first statewide experiment concerning research-based programs for juvenile justice. The question here was whether they work when applied statewide in a “real world” setting. This report indicates that the answer to this question is yes, when the programs are competently delivered.
Barnoski, R., & Aos, S. (2004). Outcome evaluation of Washington State’s research-based programs for juvenile offenders. Olympia, WA: Washington State Institute for Public Policy, 460.
This research finds starting school later is associated with reduced suspensions and higher course grades. These studies suggest disadvantaged students may especially benefit from delayed starting times.
Bastian, K. C., & Fuller, S. C. (2018). Answering the Bell: High School Start Times and Student Academic Outcomes. AERA Open, 4(4), 2332858418812424.
After reviewing relevant scientific literature, the author concludes that these are myths with little or no evidence to support them. The author suggests 4 ways to improve the quality and effectiveness of services.
Bickman, L. (1999). Practice makes perfect and other myths about mental health services. American Psychologist, 54(11), 965.
In the present article, it is argued that rules and conventions for generalizing in group-statistical research are different from those applying to single-subject research.
Birnbrauer, J. S. (1981). External validity and experimental investigation of individual behaviour. Analysis and Intervention in Developmental Disabilities, 1(2), 117-132.
This book has three main goals: to take stock of progress in the development of data-analysis procedures for single-subject research; to clearly explain errors of application and consider them within the context of new theoretical and empirical information of the time; and to closely examine new developments in the analysis of data from single-subject or small n experiments.
Busk, P. L., Serlin, R. C., Kratochwill, T. R., & Levin, J. R. (1992). Single-case research design and analysis: New directions for psychology and education.
The authors discuss the emergence of the evidence-based practice movement and the challenges of integrating what we know from scientific research into daily practice with children and families.
Buysse, V., & Wesley, P. W. (2006). Evidence-Based Practice: How Did It Emerge and What Does It Mean for the Early Childhood Field?. Zero to Three (J), 27(2), 50-55.
In this perspective, the author challenges us to accept the responsibility of moving education forward by doing more than paying lip service to the translation of research into practice.
Carnine, D. (1999). Campaigns for moving research into practice. Remedial and Special Education, 20(1), 2-35.
This essay provides examples from reading and math curricula, describes how experts have, for ideological reasons, shunned some solutions that do display robust evidence of efficacy, then examines how public impatience has forced other professions to “grow up” and accept accountability and scientific evidence.
Carnine, D. (2000). Why education experts resist effective practices (Report of the Thomas B. Fordham Foundation). Washington, DC: Thomas B. Fordham Foundation.
This report presents case studies of the efforts by three school districts, Hillsborough County Public Schools (HCPS), Memphis City Schools (MCS), and Pittsburgh Public Schools (PPS), to launch, implement, and operate new teacher evaluation systems as part of a larger reform effort called the Partnership Sites to Empower Effective Teaching.
Chambers, J., Brodziak de los Reyes, I., & O'Neil, C. (2013). How Much are Districts Spending to Implement Teacher Evaluation Systems?.
The work of several such task forces and other groups reviewing empirically supported treatments (ESTs) in the United States, United Kingdom, and elsewhere is summarized here, along with the lists of treatments that have been identified as ESTs.
Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual review of psychology, 52(1), 685-716.
A multilevel model of leadership, empowerment, and performance was tested using a sample of 62 teams, 445 individual members, 62 team leaders, and 31 external managers from 31 stores of a Fortune 500 company. Leader-member exchange and leadership climate related differently to individual and team empowerment and interacted to influence individual empowerment.
Chen, G., Kirkman, B. L., Kanfer, R., Allen, D., & Rosen, B. (2007). A multilevel study of leadership, empowerment, and performance in teams. Journal of Applied Psychology, 92(2), 331–346.
These guidelines emphasized the dimensions of 1) efficacy and 2) effectiveness. A model is provided that proposes how evidence, however defined, will ultimately connect with practice.
Chorpita, B. F. (2003). The frontier of evidence-based practice.
This article details the context and findings of a review conducted by a state-established panel convened to examine the efficacy and effectiveness of child treatments for Anxiety Disorders, Depression, Attention Deficit Hyperactivity Disorder, Conduct and Oppositional Disorders, and Autistic Disorder.
Chorpita, B. F., Yim, L. M., Donkervoet, J. C., Arensdorf, A., Amundsen, M. J., McGee, C., ... & Morelli, P. (2002). Toward large‐scale implementation of empirically supported treatments for children: A review and observations by the Hawaii Empirical Basis to Services Task Force. Clinical Psychology: Science and Practice, 9(2), 165-190.
The purpose of this study is to estimate the extent to which publication bias is present in education and special education journals. This paper shows that published studies were associated with significantly larger effect sizes than unpublished studies (d = 0.64). The authors suggest that meta-analyses report effect sizes of published and unpublished studies separately in order to address issues of publication bias.
Chow, J. C., & Ekholm, E. (2018). Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-review.
This paper describes opportunities, challenges, and cautions in response to T. R. Kratochwill and K. C. Stoiber's vision and other critical issues for the evidence-based intervention (EBI) movement in school psychology.
Christenson, S. L., Carlson, C., & Valdez, C. R. (2002). Evidence-based interventions in school psychology: Opportunities, challenges, and cautions. School Psychology Quarterly, 17(4), 466.
This report aims to assess the extent to which reports of RCTs published in five general medical journals have discussed new results in light of all available evidence.
Clarke, M., & Chalmers, I. (1998). Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents?. JAMA, 280(3), 280-282.
This article describes the use of evidence-based practice along with a multi-stakeholder consensus process to design the psychosocial rehabilitation components in a benefit package of publicly funded mental health services in Texas.
Cook, J. A., Toprac, M., & Shore, S. E. (2004). Combining evidence-based practice with stakeholder consensus to enhance psychosocial rehabilitation services in the Texas benefit design initiative. Psychiatric Rehabilitation Journal, 27(4), 307.
This journal attempts to fill the chasm by helping doctors find the information that will ensure they can provide optimum management for their patients.
Davidoff, F., Haynes, B., Sackett, D., & Smith, R. (1995). Evidence based medicine.
This paper examines data on 39 charter schools and correlates these data with school effectiveness. The authors find that class size, per-pupil expenditure, teacher certification, and teacher training are not correlated with school effectiveness. In stark contrast, frequent teacher feedback, the use of data to guide instruction, high-dosage tutoring, increased instructional time, and high expectations explain approximately 45 percent of the variation in school effectiveness.
Dobbie, W., & Fryer Jr, R. G. (2013). Getting beneath the veil of effective schools: Evidence from New York City. American Economic Journal: Applied Economics, 5(4), 28-60.
The purpose of evidence-based medicine (EBM) is to enable patients, through the process of collaboration with their health care providers, to take advantage of the best available scientific evidence when they are making health care decisions.
Drake, R. E., Rosenberg, S. D., Teague, G. B., Bartels, S. J., & Torrey, W. C. (2003). Fundamental principles of evidence-based medicine applied to mental health care. Psychiatric Clinics of North America.
This article focuses on the most fundamental question regarding evidence-based practice: What is evidence? To address this question, the authors first review several of the definitions, criteria, and strategies that have been used to define scientific evidence.
Drake, R. E., Latimer, E. S., Leff, H. S., McHugo, G. J., & Burns, B. J. (2004). What is evidence?. Child and Adolescent Psychiatric Clinics of North America, 13, 717-728.
This report discusses how to use research findings as a base to support stronger teacher preparation programs.
Dynarski, M. (2014). Moving Teacher Preparation into the Future. Brookings Institute. Retrieved from https://www.brookings.edu/research/moving-teacher-preparation-into-the-future/
The largest Elementary and Secondary Education Act (ESEA) expenditure by far is for its Title I program. This report tries to follow the money to see whether Title I funds are spent effectively and whether ESEA achieves its objectives. The report suggests that focusing effective interventions on the neediest students may provide a way forward that is consistent with fiscal realities.
Dynarski, M., & Kainz, K. (2015). Why federal spending on disadvantaged students (Title I) doesn’t work. Brookings Institution. Retrieved from https://www.brookings.edu/research/why-federal-spending-on-disadvantaged-students-title-i-doesnt-work/
This book compares what actually occurred since publication of A System of Logic with some of the more probable scenarios of what could have happened if education had been framed as a science that resides on a logical-empirical base.
Engelmann, S., & Carnine, D. (2016). Could John Stuart Mill have saved our schools?. Attainment Company Inc.
The purpose of clinical research is to answer this question: Would a new treatment, when added to the existing range of treatment options available in practice, help patients?
Essock, S. M., Drake, R. E., Frank, R. G., & McGuire, T. G. (2003). Randomized controlled trials in evidence-based mental health care: getting the right answer to the right question. Schizophrenia Bulletin, 29(1), 115-123.
The purpose of this paper is to identify the forces that influence how developmental research is prioritized and evaluated and how these influences are changing as we enter the new millennium.
Fabes, R. A., Martin, C. L., Hanish, L. D., & Updegraff, K. A. (2000). Criteria for evaluating the significance of developmental research in the twenty‐first century: Force and counterforce. Child development, 71(1), 212-221.
This paper identifies and discusses some of the more pressing challenges and associated ethical dilemmas of implementing EBP in social work, along with strategies to manage them, in the hope of affirming that the process of EBP is both feasible and practicable.
Farley, A. (2009). The challenges of implementing evidence based practice: ethical considerations in practice, education, policy, and research. Social Work & Society, 7(2), 246-259.
In this article, which draws on a recently released National Research Council report, the authors argue that the primary emphasis should be on nurturing and reinforcing a scientific culture of educational research.
Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational researcher, 31(8), 4-14.
This research examines the impact of longer school days on student achievement, attempting to fill gaps in the evidence base on this topic. Although the study finds positive outcomes for additional reading instruction, it notes that achieving maximum results requires pairing evidence-based reading instruction practices with the additional instruction time.
Figlio, D., Holden, K. L., & Ozek, U. (2018). Do students benefit from longer school days? Regression discontinuity evidence from Florida’s additional hour of literacy instruction. Economics of Education Review, 67, 171-183.
This paper explains a three-stage process of Pilot Research, Formal Evaluation, and Scaling Up. The authors also discuss several misconceptions about empirical research and researchers.
Fuchs, D., & Fuchs, L. S. (1998). Researchers and teachers working together to adapt instruction for diverse learners. Learning Disabilities Research & Practice.
This text provides a comprehensive introduction to educational research. This textbook has been revised to reflect a balance of both quantitative and qualitative research methods
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction. Longman Publishing.
This article discusses critical issues related to conducting high-quality intervention research using experimental and quasi-experimental group designs.
Gersten, R., Baker, S., & Lloyd, J. W. (2000). Designing high-quality research in special education: Group experimental design. The Journal of Special Education, 34(1), 2-18.
Using the teacher‐centered systemic reform model as a framework, the authors explore the connection between chemistry instructors’ beliefs about teaching and learning and self‐efficacy beliefs, and their enacted classroom practices.
Gibbons, R. E., Villafañe, S. M., Stains, M., Murphy, K. L., & Raker, J. R. (2018). Beliefs about learning and enacted instructional practices: An investigation in postsecondary chemistry education. Journal of Research in Science Teaching, 55(8), 1111-1133.
This study examines the implementation of Leveled Literacy Intervention (LLI) for struggling readers that had been proven to work in early grades. The findings highlight the importance of considering context and implementation, in addition to evidence of effectiveness, when choosing an intervention program. Not only do schools need to adopt programs supported by evidence, but equally educators need to implement them consistently and effectively if students are to truly benefit from an intervention.
Gonzalez, N. (2018). When evidence-based literacy programs fail. Phi Delta Kappan, 100(4), 54–58. https://doi.org/10.1177/0031721718815675
This article maintains that intelligence tests contribute little if any information useful for the planning, implementation, and evaluation of instructional interventions for children. This argument is supported by the virtual absence of empirical evidence supporting the existence of aptitude × treatment interactions.
Gresham, F. M., & Witt, J. C. (1997). Utility of intelligence tests for treatment planning, classification, and placement decisions: Recent empirical findings and future directions. School Psychology Quarterly, 12(3), 249.
In this issue you will find both a brief introduction to the new Empirically Supported Interventions Section and the first of a two-part substantive discussion of vital issues pertaining to this topic. A companion piece further extending this analysis will follow shortly in a subsequent issue.
Gutkin, T. B. (2000). Empirically supported interventions: Initiating a new standing section in School Psychology Quarterly. School Psychology Quarterly, 15(1), 1.
The manual offers not just a summary of the articles in JAMA, but modified and expanded material. It clearly explains the principles of EBM and provides guidelines for accessing and evaluating scientific articles.
Guyatt, G., Rennie, D., Meade, M., & Cook, D. (Eds.). (2002). Users' guides to the medical literature: a manual for evidence-based clinical practice (Vol. 706). Chicago: AMA press.
This study examines adoption and implementation of the US Department of Education's new policy, the “Principles of Effectiveness,” from a diffusion of innovations theoretical framework. In this report, the authors evaluate adoption in relation to Principle 3: the requirement to select research-based programs.
Hallfors, D., & Godette, D. (2002). Will the “principles of effectiveness” improve prevention practice? Early findings from a diffusion study. Health Education Research, 17(4), 461–470.
To inform selection of evidence-based interventions to be implemented in classroom settings, the authors conducted a systematic review with meta-analysis of single-case design studies to evaluate intervention effectiveness, evidence-based status, and moderators of effects for four intervention types when implemented with students with ADHD in classroom settings.
Harrison, J. R., Soares, D. A., Rudzinski, S., & Johnson, R. (2019). Attention Deficit Hyperactivity Disorders and Classroom-Based Interventions: Evidence-Based Status, Effectiveness, and Moderators of Effects in Single-Case Design Research. Review of Educational Research, 0034654319857038.
This paper reports on an analysis of state statutes and department of education regulations in fifty states for changes in teacher evaluation since the passage of the No Child Left Behind Act of 2001.
Hazi, H. M., & Rucinski, D. A. (2009). Teacher evaluation as a policy target for improved student learning: A fifty-state review of statute and regulatory action since NCLB. Education Policy Analysis Archives, 17, 5.
The Percentage of Proficient Students (PPS) has become a ubiquitous statistic under the No Child Left Behind Act. The author demonstrates that the PPS metric offers only limited and unrepresentative depictions of large-scale test score trends, gaps, and gap trends. The author shows how the statistical shortcomings of these depictions extend to shortcomings of policy, from exclusively encouraging score gains near the proficiency cut score to shortsighted comparisons of state and national testing results. The author proposes alternatives for large-scale score reporting and argues that a distribution-wide perspective on results is required for any serious analysis of test score data, including “growth”-related results under the recent Growth Model Pilot Program.
Ho, A. D. (2008). The problem with “proficiency”: Limitations of statistics and policy under No Child Left Behind. Educational researcher, 37(6), 351-360.
This outstanding textbook presents innovative interventions for youth with severe emotional and behavioral disorders. Community Treatment for Youth is designed to fill a gap between the knowledge base and clinical practice through its presentation of theory, practice parameters, training requirements, and research evidence.
Hoagwood, K., Burns, B. J., & Weisz, J. R. (2002). A profitable conjunction: From science to service in children’s mental health. Community treatment for youth: Evidence-based interventions for severe emotional and behavioral disorders, 327-338.
This report, preceded as it was by the seminal report of the Surgeon General on Mental Health (2000) and followed by the Surgeon General’s Youth Violence (2001) and Culture, Race and Ethnicity Reports (2002), represented a critical shift in federal health priorities.
Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework: I. From evidence-based practices to evidence-based policies. Journal of School Psychology, 41(1), 3-21.
This document presents a set of criteria to be used in evaluating treatment guidelines that have been promulgated by health care organizations, government agencies, professional associations, or other entities. The purpose of treatment guidelines is to educate health care professionals and health care systems about the most effective treatments available.
Hollon, D., Miller, I. J., & Robinson, E. (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57(12), 1052-1059.
This article allows readers to determine if a specific study is a credible example of single-subject research and if a specific practice or procedure has been validated as “evidence-based” via single-subject research.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2008). The use of single-subject research to identify evidence-based practice in special education. Advances in Evidence-Based Education, 1(1), 67-87.
The purpose of this manuscript is to propose core features that may apply to any practice or set of practices that claims to be evidence-based in relation to School-wide Positive Behavior Support (SWPBS).
Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 1.
This book presents clear and functional techniques for deciding what students with learning disabilities should be taught and how. This book can also function as a tool to assist pre-service teachers (students) with deciding how to teach and what to teach to regular/non-special education children.
Howell, K. W. (1993). Curriculum-based evaluation: Teaching and decision making. Cengage Learning.
This study examines longitudinal data from nine high schools nominated as leading practitioners of Continuous Improvement (CI) practices. The researchers compared continuous improvement best practices to teachers' actual use of data in making decisions. The study found teachers to be receptive, but also found significant obstacles interfering with the effective use of data to change instruction.
Ingram, D., Louis, K. S., & Schroeder, R. G. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106(6), 1258-1287.
This piece describes the widely held perception among education leaders that we already know how to help teachers improve, and that we could achieve our goal of great teaching in far more classrooms if we just applied what we know more widely.
Jacob, A., & McGovern, K. (2015). The mirage: Confronting the hard truth about our quest for teacher development. Brooklyn, NY: TNTP. https://tntp.org/assets/documents/TNTP-Mirage_2015.pdf.
This book summarizes how science works, why it offers hope to educators, how science has been neglected and abused in education, and what the author thinks science now tells us, and doesn't tell us, about several issues in education.
Kauffman, J. M. (2011). Toward a science of education: The battle between rogue and real science. Full Court Press.
By focusing on clinical practice and what can be changed, this book offers suggestions for improvement of patient care and advises how clinical work can contribute directly and in new ways to the accumulation of knowledge.
Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. Oxford University Press.
This article discusses key issues in identifying evidence-based treatments for children and adolescents. Among the issues discussed are obstacles in transporting treatments from research to clinical services, the weak criteria for delineating whether a treatment is evidence based, and barriers to training therapists.
Kazdin, A. E. (2004). Evidence-based treatments: Challenges and priorities for practice and research. Child and Adolescent Psychiatric Clinics, 13(4), 923-940.
This book provides up-to-date, in-depth information about the use of single-case experimental designs in educational research across a range of educational settings and students.
Kennedy, C. H. (2005). Single-case designs for educational research. Pearson/A & B.
This paper is an overview of issues related to evidence-based practice and the role that the school psychology profession can play in developing and disseminating evidence-based interventions (EBIs).
Kratochwill, T. R., & Shernoff, E. S. (2003). Evidence-based practice: Promoting evidence-based interventions in school psychology. School Psychology Quarterly, 18(4), 389.
The task force on interventions by the American Psychological Association (APA, Task Force on Promotion and Dissemination of Psychological Procedures, 1995) stimulated considerable enthusiasm among many about the role of ESIs in practice.
Kratochwill, T. R., & Stoiber, K. C. (2000). Diversifying theory and science: Expanding the boundaries of empirically supported interventions in school psychology. Journal of School Psychology, 38(4), 349-358.
The authors present the conceptual, philosophical, and methodological basis for the Procedural and Coding Manual for Review of Evidence-Based Interventions.
Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: Conceptual foundations of the Procedural and Coding Manual of Division 16 and the Society for the Study of School Psychology Task Force. School Psychology Quarterly, 17(4), 341.
The authors conducted a comprehensive review of research to identify the impact of coaching on changes in preservice and in-service teachers’ implementation of evidence-based practices.
Kretlow, A. G., & Bartholomew, C. C. (2010). Using coaching to improve the fidelity of evidence-based practices: A review of studies. Teacher Education and Special Education, 33(4), 279-299.
This report recommends thirteen specific steps the federal government can take to develop new methods to define and measure such outcomes, use federal resources to build and apply evidence of what works, and help colleges and universities invest in student outcomes.
Kvaal, J., & Bridgeland, J. (2018). Moneyball for Higher Education: How Federal Leaders Can Use Data and Evidence to Improve Student Outcomes. Retrieved from https://results4america.org/tools/moneyball-higher-education-federal-leaders-can-use-data-evidence-improve-student-outcomes/
In this discussion, we examine the relationship between science and education and delineate four reasons for characterizing science as an uninvited guest in schools.
Landrum, T. J., & Tankersley, M. (2004). Science in the schoolhouse: An uninvited guest. Journal of Learning Disabilities, 37(3), 207-212.
What does it mean to take a scientific approach to instructional productivity? This chapter hopes to contribute to that discussion by examining the role scientific assessment can play in enhancing educational productivity.
Layng, T. J., Stikeleather, G., & Twyman, J. S. (2006). Scientific formative evaluation: The role of individual learners in generating and predicting successful educational outcomes. The scientific basis of educational productivity, 29-44.
Populations and study samples can change over time—sometimes dramatically so. We illustrate this important point by presenting data from 5 randomized control trials of the efficacy of Kindergarten Peer-Assisted Learning Strategies, a supplemental, peer-mediated reading program.
Lemons, C. J., Fuchs, D., Gilbert, J. K., & Fuchs, L. S. (2014). Evidence-based practices in a changing world: Reconsidering the counterfactual in education research. Educational Researcher, 43(5), 242-252.
Dear Colleague Letter: Resource Comparability is a letter written by the United States Department of Education. The letter was meant to call attention to disparities that persist in access to educational resources, and to help address those disparities and comply with the legal obligation to provide students with equal access to these resources without regard to race, color, or national origin (it addresses legal obligations under Title VI of the Civil Rights Act of 1964). The letter builds on prior work shared by the U.S. Department of Education on this critical topic.
Lhamon, C. E. (2014). Dear colleague letter: Resource comparability. Washington, DC: US Department of Education, Office for Civil Rights. Retrieved from http://www2.ed.gov/about/offices/list/ocr/letters/colleague-resourcecomp-201410.pdf
The authors lay out each step of meta-analysis from problem formulation through statistical analysis and the interpretation of results.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Sage Publications, Inc.
The College Board recently released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Petrilli argued that falling SAT scores show that high schools need more reform. For Burris, the declining scores were evidence of the failure of policies her organization opposes. This article points out that the SAT was never meant to measure national achievement and provides a detailed explanation of why.
Loveless, T. (2015). No, the sky is not falling: Interpreting the latest SAT scores. Brown Center Chalkboard. Retrieved from https://www.brookings.edu/blog/brown-center-chalkboard/2015/10/01/no-the-sky-is-not-falling-interpreting-the-latest-sat-scores/
The Millennium Challenge Corporation (MCC) sponsors rigorous independent evaluations of its funded projects to build scientifically valid evidence about "what works." On October 29, the nonprofit, nonpartisan Coalition for Evidence-Based Policy, in collaboration with MCC, hosted a forum with leaders of the development policy and research community on MCC's evidence-based approach.
Lyon, R. L. (2002, November). Rigorous evidence: The key to progress in education. Forum of the Coalition for Evidence-Based Policy, Washington, DC.
This meta-analysis covers all major domains in which deliberate practice has been investigated, in search of empirical evidence. The authors conclude that deliberate practice is important, but not as important as has been argued.
Macnamara, B. N., Hambrick, D. Z., & Oswald, F. L. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, 25(8), 1608-1618.
This paper discusses the benefits of using brief experimental analyses to aid in treatment selection, identifies the forms of treatment that are most appropriate for this type of analysis, and describes key design elements for comparing 2 or more treatments efficiently.
Martens, B. K., Eckert, T. L., Bradley, T. A., & Ardoin, S. P. (1999). Identifying effective treatments from a brief experimental analysis: Using single-case design elements to aid decision making. School Psychology Quarterly, 14(2), 163.
How does classroom management affect student achievement? What techniques do teachers find most effective? How important are schoolwide policies and practices in setting the tone for individual classroom management? In this follow-up to What Works in Schools, Robert J. Marzano analyzes research from more than 100 studies on classroom management to discover the answers to these questions and more. He then applies these findings to a series of "Action Steps"--specific strategies.
Marzano, R. J., Marzano, J. S., & Pickering, D. (2003). Classroom management that works: Research-based strategies for every teacher. Alexandria, VA: Association for Supervision and Curriculum Development (ASCD).
This paper reports evidence-based research and offers suggestions based on studies that include theoretical work, qualitative analysis, statistical analysis, and randomized experiments that could provide strong causal evidence of the effects of teacher preparation on student learning.
Meadows, L., & Theodore, K. (2012). Teacher preparation programs: Research and promising practices. Retrieved from http://www.sedl.org/txcc/resources/briefs/number_11/
In this article we discuss guidelines and algorithms as a means of addressing the complexity of pharmacologic treatment of people with severe mental illnesses and disseminating relevant research findings.
Mellman, T. A., Miller, A. L., Weissman, E. M., Crismon, M. L., Essock, S. M., & Marder, S. R. (2001). Evidence-based pharmacologic treatment for people with severe mental illness: a focus on guidelines and algorithms. Psychiatric Services, 52(5), 619-625.
This study documents the implementation of research-based strategies to minimize the occurrence of reading difficulties in a first-grade population. Three strategies were implemented.
Menzies, H. M, Mahdavi, J. N., & Lewis, J. L. (2008). Early intervention in reading: From research to practice. Remedial and Special Education, 29(2), 67-77.
This report is concerned with only one of the many causes and dimensions of the problem, but it is the one that undergirds American prosperity, security, and civility.
National Commission on Excellence in Education. (1984). A nation at risk: The full account. Cambridge, MA: USA Research.
The Center for Education of the National Research Council (NRC) has undertaken a series of activities to address issues related to the quality of scientific education research. In 2002, the NRC released Scientific Research in Education (National Research Council, 2002), a report designed to articulate the nature of scientific education research and to guide efforts aimed at improving its quality. The book describes the similarities and differences between scientific inquiry in education and scientific inquiry in other fields and disciplines, and provides a number of examples to illustrate these ideas.
National Research Council. (2002). Scientific research in education. National Academies Press.
This paper enters the debate about how U.S. schools might address long-standing disparities in educational and economic opportunities while improving educational outcomes for all students, offering a vision and an argument for realizing that vision based on lessons learned from 60 years of education research and reform efforts. The central points draw on a much more extensive treatment of these issues published in 2015. The aim is to spark fruitful discussion among educators, policymakers, and researchers.
O'Day, J. A., & Smith, M. S. (2016). Equality and Quality in US Education: Systemic Problems, Systemic Solutions. Policy Brief. Education Policy Center at American Institutes for Research.
In this article, implementation is proposed as the link between evidence-based practices and positive outcomes. Strategies for promoting implementation through “enlightened professional development” are proposed.
Odom, S. L. (2009). The tie that binds: Evidence-based practice, implementation science, and outcomes for children. Topics in Early Childhood Special Education, 29(1), 53-61.
The purpose of this study was to examine the strength of scientific evidence from single-subject research underlying the Division of Early Childhood (DEC) Recommended Practices.
Odom, S. L., & Strain, P. S. (2002). Evidence-based practice in early intervention/early childhood special education: Single-subject design research. Journal of Early Intervention, 25(2), 151-160.
The school year and day length have varied over time and across localities depending on the particular needs of the community. Proponents argue that extending time will have learning and nonacademic benefits. Opponents suggest increased time is not guaranteed to lead to more effective instruction and suggest other costs.
Patall, E. A., Cooper, H., & Allen, A. B. (2010). Extending the school day or school year: A systematic review of research (1985–2009). Review of educational research, 80(3), 401-436.
This enlightening book contains papers (presented as chapters) commissioned from nationally recognized scholars, which examine topics related to ethics, culture, science, and philosophy that have a direct bearing on the future of special education.
Paul, J. L. (1997). Foundations of special education: Basic knowledge informing research and practice in special education. Pacific Grove: Brooks.
This chapter is followed by a discussion entitled "Dissemination of What, and to Whom?" by B. S. Kohlenberg.
Persons, J. B. (1995). Why practicing psychologists are slow to adopt empirically-validated treatments. In S. C. Hayes, V. M. Follette, R. M. Dawes, & K. E. Grady (Eds.), Scientific standards of psychological practice: Issues and recommendations (pp. 141-157). Reno, NV, US: Context Press
This award-winning twelve-volume reference covers every aspect of the ever-fascinating discipline of psychology and represents the most current knowledge in the field. This ten-year revision now covers discoveries based in neuroscience, clinical psychology's new interest in evidence-based practice and mindfulness, and new findings in social, developmental, and forensic psychology.
Pianta, R. C., Hamre, B., Stuhlman, M., Reynolds, W. M., & Miller, G. E. (2003). Handbook of psychology: Educational psychology.
The effects of active participation on student learning of simple probability were investigated using 20 fifth-grade classes randomly assigned to level of treatment. It was concluded that active student participation exerts a positive influence on fifth-grade student achievement of relatively unique instructional material.
Pratton, J., & Hales, L. W. (1986). The effects of active participation on student learning. The Journal of Educational Research, 79(4), 210-215.
This article suggests that policymakers focus less on international tests and more on how states compare to each other when trying to improve schools. It also shows why it is not worthwhile to compare schools across countries where conditions differ.
Rabinovitz, J. (2015, October). Report urges educators to avoid using international tests to make policy. Stanford Graduate School of Education. Retrieved from https://ed.stanford.edu/news/national-test-superior-international-ones-assessing-us-schools-says-report
Teacher turnover occurs during and at the end of the school year, although documentation of within-year turnover currently rests on anecdotal evidence.
Redding, C., & Henry, G. T. (2018). New evidence on the frequency of teacher turnover: Accounting for within-year turnover. Educational Researcher, 47(9), 577-593.
In this paper, we analyze racial differences in the math section of the general SAT test, using publicly available College Board population data for all of the nearly 1.7 million college-bound seniors in 2015 who took the SAT. The evidence for a stubborn race gap on this test provides a snapshot of the extraordinary magnitude of racial inequality in contemporary American society. Standardized tests are often seen as mechanisms for meritocracy, ensuring fairness in terms of access. But test scores reflect accumulated advantages and disadvantages in each day of life up to the one on which the test is taken. Race gaps on the SAT hold up a mirror to racial inequities in society as a whole. Equalizing educational opportunities and human capital acquisition earlier is the only way to ensure fairer outcomes.
Reeves, R. V., & Halikias, D. (2017). Race gaps in SAT scores highlight inequality and hinder upward mobility. Brookings. Retrieved from https://www.brookings.edu/research/race-gaps-in-sat-scores-highlight-inequality-and-hinder-upward-mobility/
In this study, using mixed methods, we investigated the longer term effects of eCoaching through advanced online bug-in-ear (BIE) technology.
Rock, M. L., Schumacker, R. E., Gregg, M., Howard, P. W., Gable, R. A., & Zigmond, N. (2014). How are they now? Longer term effects of eCoaching through online bug-in-ear technology. Teacher Education and Special Education, 37(2), 161-181.
The criteria for empirically supported treatments, as described by Lonigan, Elbert, and Johnson (this issue), were applied to reports of eight treatment efficacy studies published in peer-reviewed journals.
Rogers, S. J. (1998). Empirically supported comprehensive treatments for young children with autism. Journal of clinical child psychology, 27(2), 168-179.
Current systems for listing empirically supported therapies (ESTs) provide recognition to treatment packages, many of them proprietary and trademarked, without regard to the principles of change believed to account for their effectiveness.
Rosen, G. M., & Davison, G. C. (2003). Psychology should list empirically supported principles of change (ESPs) and not credential trademarked therapies or other treatment packages. Behavior modification, 27(3), 300-312.
This What Works Clearinghouse practice guide provides educators and administrators with four evidence-based recommendations for reducing dropout rates in middle and high schools. The guide offers specific strategies; examples of how to implement the practices; advice on how to overcome obstacles; and a summary of the supporting evidence.
Rumberger, R. W., et al. (2017). Educator’s Practice Guide: Preventing Dropout in Secondary School. IES National Center for Education and Evaluation and Regional Assistance.
Casting a wide net through history and culture, Sagan examines and authoritatively debunks such celebrated fallacies of the past as witchcraft, faith healing, demons, and UFOs. And yet, disturbingly, in today's so-called information age, pseudoscience is burgeoning with stories of alien abduction, channeling past lives, and communal hallucinations commanding growing attention and respect.
Sagan, C. (2011). The demon-haunted world: Science as a candle in the dark. Ballantine Books.
This report breaks out key steps in the school identification and improvement process, focusing on (1) a diagnosis of school needs; (2) a plan to improve schools; and (3) evidence-based interventions that work.
School Interventions That Work: Targeted Support for Low-Performing Students. (2017). Alliance for Excellent Education. Retrieved from https://all4ed.org/reports-factsheets/schoolinterventions/
This paper illustrates the application of the Task Force on Evidence-Based Interventions in School Psychology coding criteria using a single-participant research design study.
Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: An illustration of Task Force coding criteria using single-participant research design. School Psychology Quarterly, 17(4), 390.
This study examined the degree to which school psychology programs provided training in Evidence-Based Interventions (EBIs), examined the contextual factors that interfere with EBI training, and whether students are taught to apply the criteria developed by Divisions 12, 16, and 53 of the APA when evaluating outcome research.
Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2003). Training in Evidence-Based Interventions (EBIs): What are school psychology programs teaching?. Journal of School Psychology, 41(6), 467-483.
Curriculum-Based Measurement and Special Services for Children is a concise and convenient guide to CBM that demonstrates why it is a valuable assessment procedure, and how it can be effectively utilized by school professionals.
Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. Guilford Press.
Developed specifically to overcome problems with traditional standardized instruments--and widely used in both general and special education settings throughout the US--curriculum-based measurement (CBM) comprises brief assessment probes of reading, spelling, written expression, and mathematics that serve both to quantify student performance and to bolster academic achievement.
Shinn, M. R. (Ed.). (1998). Advanced applications of curriculum-based measurement. Guilford Press.
The purpose of this chapter is to explain why categorical assessment and identification for students with severe achievement needs is indefensible, and then to provide a viable alternative that expedites educators' assessment and decision-making when they are confronted with students with severe achievement needs.
Shinn, M., Good, R., & Parker, C. (1998). Noncategorical special education services with students with severe achievement deficits. Functional and noncategorical identification and intervention in special education, 65-83.
Replication has taken on more importance recently because the ESSA evidence standards only require a single positive study. To meet the strong, moderate, or promising standards, programs must have at least one “well-designed and well-implemented” study using randomized (strong), matched (moderate), or correlational (promising) designs and finding significantly positive outcomes.
Slavin, R. (2019). Replication. [Blog post]. Retrieved from https://robertslavinsblog.wordpress.com/2019/01/24/replication/
This paper points out three prominent points of impact in addressing the poor performance of America's fourth-graders on national examinations of reading proficiency.
Smartt, S. M., & Reschly, D. J. (2007). Barriers to the Preparation of Highly Qualified Teachers in Reading. TQ Research & Policy Brief. National Comprehensive Center for Teacher Quality.
In this provocative and headline-making book, Michael Specter confronts the widespread fear of science and its terrible toll on individuals and the planet.
Specter, M. (2009). Denialism: How irrational thinking hinders scientific progress, harms the planet, and threatens our lives. New York: Penguin Press.
In this book the author describes six teaching myths that prevent reform in education.
Snider, V. (2006). Myths and misconceptions about teaching: What really happens in the classroom. Lanham, MD: Rowman & Littlefield.
The Stanford Education Data Archive (SEDA) is an initiative aimed at harnessing data to help scholars, policymakers, educators, and parents learn how to improve educational opportunity for all children. The data are publicly available here, so that anyone can obtain detailed information about American schools, communities, and student success.
Stanford Education Data Archive. Stanford Center for Education Policy Analysis. Retrieved from https://cepa.stanford.edu/seda/overview
The purpose of this chapter is to present a combined research- and practice-based framework for integrating a comprehensive MTSS model with EBP, and thus, optimize the results stemming from school improvement efforts.
Stoiber, K. C., & Gettinger, M. (2016). Multi-tiered systems of support and evidence-based practices. In Handbook of response to intervention (pp. 121-141). Springer, Boston, MA.
Familiarity with Evidence-based Medicine (EBM) terminology has extended into the popular press, as evidenced by a recent article in the Times describing the number needed to treat. But all this leads to the question, “What's the E for EBM?”
Straus, S. E. (2004). What's the E for EBM?
Increasingly, school services are being guided by a problem-solving approach and are evaluated by the achievement of positive outcomes. This shift is explored here in 96 chapters and 11 appendices. The volume provides a comprehensive reference relating contemporary research and thought to quality professional services.
Thomas, A., & Grimes, J. (Eds.). (1995). Best practices in school psychology III. Washington, DC: National Association of School Psychologists.
This chapter chronicles some of the major steps school psychology has taken toward adopting science as the basis of practice. Each step has yielded benefits for students as well as practice challenges to be overcome.
Tilly, W. D. (2008). The evolution of school psychology to science-based practice: Problem solving and the three-tiered model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology–5(pp. 17–36). Bethesda, MD: National Association of School Psychologists.
Student engagement is critical to academic success. Active Student Response (ASR) teaching techniques are an effective way to improve student engagement and are an important component of evidence-based practice. This report provides techniques and strategies to enhance engagement through ASR. Key terms are appended.
Tincani, M., & Twyman, J. S. (2016). Enhancing Engagement through Active Student Response. Center on Innovations in Learning, Temple University.
Extensive empirical research, summarized in several reviews and codified in practice guidelines, recommendations, and algorithms, demonstrates that several pharmacological and psychosocial interventions are effective in improving the lives of persons with severe mental illnesses.
Torrey, W. C., Drake, R. E., Dixon, L., Burns, B. J., Flynn, L., Rush, A. J., ... & Klatzker, D. (2001). Implementing evidence-based practices for persons with severe mental illnesses. Psychiatric services, 52(1), 45-50.
The purpose of the conference was to engage a group of citizens in a thoughtful, meaningful dialogue about issues of prevention, identification, recognition, and referral of children with mental health needs to appropriate, evidence-based treatments or services.
US Department of Health and Human Services. (2000). Report of the Surgeon General's Conference on Children's Mental Health: A national action agenda.
The Innovation Journey presents the results of a major longitudinal study that examined the process of innovation from concept to implementation of new technologies, products, processes, and administrative arrangements.
Van de Ven, A. H., Polley, D. E., Garud, R., & Venkataraman, S. (1999). The innovation journey. New York: Oxford University Press.
This article describes the emergence and influence of evidence-based practice and data-based decision making in educational systems, and the ways in which evidence-based practice (EBP) and response to intervention (RtI) can be used to improve the efficacy, efficiency, and equity of educational services.
VanDerHeyden, A., & Harvey, M. (2013). Using data to advance learning outcomes in schools. Journal of Positive Behavior Interventions, 15(4), 205-213.
The Child Task Force report represents an important initial step in this direction. Here they offer both praise and critique, suggesting a number of ways the task force process and product may be improved.
Weisz, J. R., & Hawley, K. M. (1998). Finding, evaluating, refining, and applying empirically supported treatments for children and adolescents. Journal of Clinical Child Psychology, 27(2), 206-216.
The Society of Clinical Psychology's task forces on psychological intervention developed criteria for evaluating clinical trials, applied those criteria, and generated lists of empirically supported treatments. Building on this strong base, the task force's successor, the Committee on Science and Practice, now pursues a three-part agenda.
Weisz, J. R., Hawley, K. M., Pilkonis, P. A., Woody, S. R., & Follette, W. C. (2000). Stressing the (other) three Rs in the search for empirically supported treatments: Review procedures, research quality, relevance to practice and the public interest. Clinical Psychology: Science and Practice, 7(3), 243-258.
U.S. public policy has increasingly been conceived, debated, and evaluated through the lenses of politics and ideology. The fundamental question -- Will the policy work? -- too often gets short shrift or is ignored altogether. A remedy is evidence-based policy: a rigorous approach that draws on careful data collection, experimentation, and both quantitative and qualitative analysis to determine what the problem is, in which ways it can be addressed, and the probable impacts of each of these ways.
Wesley, P. W., & Buysse, V. (2006). Making the case for evidence- based policy. In V. Buysse & P. W. Wesley (Eds.), Evidence-based practice in the early childhood field (pp. 117–159). Washington, DC: Zero to Three.
This article describes a systematic process for finding and resolving problems with classroom-based behavioral interventions in schools.
Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363-383.