Samantha Cleaver, PhD
Every day, school leaders and teachers engage in nonstop decision making (Klein, 2021). They make far-reaching decisions, such as which reading curriculum to purchase, and split-second decisions, such as how to address misbehavior.
All this decision making can become overwhelming. In The Washington Post, veteran teacher Julie Rine Holderbaum explained that while teaching has never been easy, the decisions she now makes every day have changed and feel more significant (Strauss, 2022). “Now it feels like kids need so much more from us than academic lessons, and it places more importance on every move a teacher makes.”
Research on how daily in-the-moment decisions impact student achievement is minimal, but we do know how larger trends impact decision making and student achievement. The purpose of this overview is to provide information about the research around decision making in education. Important questions include:
- What is the impact of educational trends (accountability, data-based decision making, evidence-based practices) on decision making and student achievement?
- What are the limitations of decision making?
- How did decisions made during the COVID-19 pandemic impact student achievement?
Educational Trends in Decision Making
In recent decades, decision making in education has been influenced by three key trends: accountability, data-based decision making, and evidence-based practices. The question is, what impact has each had on student achievement?
The No Child Left Behind Act of 2001 (NCLB) started the current era of federal accountability. However, accountability efforts existed long before (Torres, 2021). These efforts connected school information (e.g., student test scores) with direct or indirect rewards and sanctions; they focused on holding schools accountable for the achievement of students who had previously been overlooked, namely students from low socioeconomic backgrounds, students of color, and students with disabilities (Petrilli, 2019).
The pre-NCLB accountability efforts produced mixed results (Hanushek & Raymond, 2005; Jacob, 2005). Hanushek and Raymond found that, when provided by a state, rewards and consequences for test results were associated with significant increases in National Assessment of Educational Progress (NAEP) reading and math scores. However, these gains were not universal: Hispanic students demonstrated the largest gains, White students demonstrated lesser gains, and gains for Black students were statistically insignificant. Accountability also impacted how schools were perceived; schools were viewed not through what they provided (e.g., libraries, courses, books), but through standardized test scores (Petrilli, 2019).
Jacob (2005) used student-level data to study an accountability policy implemented in Chicago in 1996-1997. While math and reading achievement did increase after the accountability policy started, for younger students there was no increase in performance on a state assessment. Furthermore, an item-level analysis indicated that the achievement gains may have been the result of test-specific skills and student effort. Jacob (2005) also found that teachers’ behaviors shifted; they increased referrals to special education, retained students, and shifted instruction away from non-tested subjects (social studies and science).
NCLB made accountability efforts federal law. It required annual assessments of students, set targets for student proficiency, made test data public, and imposed sanctions on schools that did not meet targets (Petrilli, 2019). This clear focus on accountability, as well as transparency around which schools did and did not make adequate yearly progress, or AYP, had a dramatic impact on decision making.
How does accountability impact decision making?
Accountability provisions impact how districts, schools, and teachers make decisions. Initial concerns about NCLB provisions were that teachers would spend more time on tested subjects at the expense of art, social studies, and other subjects and on teaching test prep, and that they would focus on kids who were “on the bubble,” that is, those who were close to the proficiency cut for state tests and might most impact test scores (Neal & Schanzenbach, 2010).
These concerns seem to have come to fruition. Pedulla et al. (2003) surveyed a nationally representative sample of teachers and found that 34% of teachers who worked in states with high-stakes testing shifted their instruction toward tested subjects (e.g., math and reading), compared with 17% of teachers in states with moderate-stakes testing. The stakes designation reflected the consequences that states attached to test results: high-stakes states used accountability measures to determine consequences for students, teachers, and schools (e.g., student promotion, school accreditation); low-stakes states attached no consequences to accountability results; and moderate-stakes states fell in between, imposing some consequences but not the most severe. This finding supports the concern that accountability efforts would change how schools allocate their instructional time, meaning that students received less instruction in non-tested areas, such as the arts or social studies (Jennings & Rentner, 2006; Supovitz, 2009).
Teachers also increased instructional time spent on test preparation. Pedulla et al. (2003) found that teachers in states with high-stakes testing reported spending more time on test preparation activities: 30% in high-stakes states compared with 12% of teachers in low- or moderate-stakes states. In 2005, the RAND Corporation collected data from three states (California, Georgia, and Pennsylvania) and found the same trend; teachers were narrowing the curriculum and focusing on test preparation, particularly for “bubble kids,” or those close to the proficiency cut (Hamilton et al., 2007).
Other unintended consequences of accountability that relate to decision making include the following:
- parents moving students away from schools that were identified as low-performing, which increased school segregation (Davis et al., 2015);
- difficulties recruiting teachers to and retaining them in lower performing schools (Clotfelter et al., 2004);
- reclassification of lower performing students into non-testable categories, such as special education (Figlio & Getzler, 2006); and
- school leaders and individual teachers changing test results (Jacob & Levitt, 2003), as in the 2008 Atlanta cheating scandal, in which test scores of students in 58 of 84 elementary and middle schools were tampered with (Chen, 2022; Dewan, 2010).
In the two decades since NCLB was passed, accountability measures have demonstrably influenced the decisions teachers make in the classroom and, in turn, the student experience (Dee & Jacob, 2010b).
How does accountability impact student achievement?
Accountability has had an impact on teacher decision making, but what about the effect on student achievement? Dee and Jacob (2010a) attempted to isolate the impact of NCLB’s accountability measures on student achievement by comparing trends in student achievement in states that had accountability experience prior to NCLB with states that did not have prior experience. The sample included 39 states for fourth-grade math and 37 states for fourth-grade reading, as well as 38 states for eighth-grade math and 34 states for eighth-grade reading. Samples within the data resembled the nation in terms of demographics, socioeconomic trends, and pre-NCLB trends in NAEP test scores.
By 2007, the accountability provisions in NCLB had increased fourth-grade math achievement by approximately 7.2 scale points (0.23 standard deviations). Because NCLB assessments vary in rigor from state to state, Dee and Jacob (2010a) benchmarked results against the NAEP assessment: NCLB accountability increased the percentage of students performing at or above the basic level in math by 10 percentage points among fourth graders and 6 percentage points among eighth graders; there was no significant impact on reading scores.
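As a rough aid to interpreting figures like these, the relationship between a scale-point gain and a standardized effect size is simple arithmetic: divide the raw gain by the standard deviation of the score distribution. The sketch below is purely illustrative; the ~31-point standard deviation is inferred from the numbers reported above, not a figure stated in the source.

```python
# Effect size = raw score gain divided by the standard deviation
# of the score distribution (here, NAEP scale points).
def effect_size(gain_points: float, score_sd: float) -> float:
    return gain_points / score_sd

# A 7.2-point gain reported as 0.23 standard deviations implies a
# score SD of about 7.2 / 0.23 (an inference, not a stated figure).
implied_sd = 7.2 / 0.23
print(round(implied_sd, 1))                    # 31.3
print(round(effect_size(7.2, implied_sd), 2))  # 0.23
```

The same division explains the pandemic-era figures later in this overview, where score declines are reported directly in standard-deviation units.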
Political support for accountability measures was strong throughout the 2000s, and accountability is now the norm in schools (Petrilli, 2019). Students continue to take annual assessments, and states report school data to the public. Looking forward, some researchers have suggested applying accountability measures to school resources, teacher skill, and other factors to increase the relevance of accountability (Bae, 2018).
Data-Based Decision Making
In addition to focusing on accountability, NCLB and the 2015 Every Student Succeeds Act (ESSA) made data-based decision making an expected part of teaching. Teachers use data to set a goal for student learning, check to see if the goal is being met, and adjust teaching as appropriate (Van der Kleij et al., 2015). The overarching idea behind data-based decision making is to reinforce education as an evidence-based discipline. A common analogy: A teacher uses data to decide how to approach a problem with a student (e.g., how to approach reading intervention) just as a doctor uses data to write an appropriate prescription (Schildkamp et al., 2019).
How does using data impact decision making?
In a survey by the Data Quality Campaign (2018), teachers reported using a variety of data (e.g., attendance, behavior, test data) in decision making. A majority of teachers (89%) used data to personalize learning. The biggest challenge that teachers identified was a lack of time; 57% of teachers reported not having enough time in the day to work with data.
Even though data collection is now a ubiquitous part of the school day, collecting more data does not mean that it will be used, or used effectively (Marsh et al., 2006). Additionally, there are concerns that an abundance of data will narrow education, encouraging teachers to “teach to the test” (Marsh et al., 2006).
Simply having access to data does not guarantee improvements in student outcomes. The way that teachers use data is an equally important factor. Marsh et al. (2006) determined that data-based decision making was most effective when teachers were able to adjust instruction based on what they gleaned from the data.
Keuning et al. (2019) found that teachers trained in data-based decision making produced better results in math and spelling skills compared with teachers who had not been trained. Carlson et al. (2011) found similar results: teachers trained in data-based decision making produced higher student achievement in math and reading. And providing teachers with training and ongoing support to better implement data-based decision making has been shown to produce effects on knowledge, skill, and self-efficacy, though the effects on teacher skill were the smallest (Deno, 2014; Gesel et al., 2021; Stecker et al., 2005).
Finally, motivation is a key factor in how teachers engage with data. Vanlommel et al. (2016) surveyed 408 teachers at 52 Flemish primary schools and found that teachers made limited use of the data they had from standardized assessments when making classroom decisions. Autonomous motivation (engaging in data analysis because it aligns with one’s beliefs and values) had a significant positive effect on teachers’ data use.
How does data-based decision making impact student achievement?
Research on data-based decision making has produced mixed results. Schildkamp et al. (2019) reviewed 11 data-based decision-making interventions that were studied with rigorous designs (most used a randomized controlled trial or a quasi-experimental design; quasi-experimental designs study interventions but do not use randomization). The interventions demonstrated some evidence of impact on student achievement, and Schildkamp et al. (2019) concluded that, when done well, data-based decision making resulted in positive impacts on student achievement.
Staman et al. (2017) studied a two-year implementation of an intervention that used data-based decision making in 42 schools; teachers used the results of half-year interim assessments to plan instruction. There were no main effects (the direct impact of one independent variable on a dependent variable) on student achievement. However, interaction effects (when the effect of one variable depends on the value of another) were seen for students who had low levels of achievement prior to the intervention and those from lower socioeconomic households. Teachers also struggled to translate their work with student data into classroom-level instruction.
In a meta-analysis, Gesel et al. (2021) reviewed 28 studies (26 teacher samples) that examined the impact of professional development on data-driven decision making. Across the studies, teachers’ use of data-based decision making was found to have a positive effect on student outcomes. The researchers identified a significant effect of professional development in data-based decision making on teacher-level outcomes, including teacher skill. However, Gesel et al. (2021) also identified that additional research was needed to determine the outcomes of data-based decision making when researcher supports were removed.
How does tiered intervention impact student achievement?
One way that data-based decision making has become integrated into schools is through tiered intervention: response to intervention (RTI) and multitiered system of supports (MTSS). Both RTI and MTSS are approaches to using data in providing intervention and supports.
Mellard et al. (2012) studied the implementation of RTI in five schools that each delivered a full year of tiered instruction in reading. Three of the schools, which started the school year above average, maintained and even increased their scores by spring. In a fourth school, students began the year with scores that averaged far below the norm and made significant gains, approaching the normative mean of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessment. However, in the fifth school, students’ achievement decreased.
A large study commissioned by the Department of Education examined RTI across Grades 1 through 3 in 146 schools in 13 states during the 2011–2012 school year. A total of 86% of schools reported full implementation of RTI; 55% of those schools focused on reading intervention for students who were not reading at grade level (Balu et al., 2015). The study showed that students in Grade 1 who received the intervention had lower scores in spring than students who had not received the intervention. There was no effect on students in Grades 2 and 3.
In the Balu study, schools were selected based on how clear their decision rules for moving students into tiers 2 and 3 were and how well they followed their own rules for placing students in intervention tiers. During the study, not all schools followed their own rules, and some changed the rules due to resource limitations, teacher judgment, or other factors, which resulted in changes in how students were assigned to tiered interventions. This meant that the group of students who ended up in intervention may not have been the group that would have benefited the most, or that would have seen the best results. The study design (a quasi-experimental regression discontinuity design that measures the impact of an intervention without randomization) also limited the generalizability of the findings.
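To make the regression discontinuity logic concrete: the design compares students scoring just below an assignment cutoff (who receive the intervention) with students just above it (who do not), on the assumption that the two groups are otherwise similar. The small simulation below uses entirely hypothetical numbers and a clean cutoff rule of the kind the Balu study expected schools to follow; it is a sketch of the method, not a reanalysis of any study.

```python
import random

random.seed(0)
CUTOFF = 40  # screening score below which students enter the intervention

# Hypothetical students: a screening score, tier assignment by cutoff,
# and an outcome that tracks the screening score plus a simulated
# 5-point intervention effect and some noise.
students = []
for _ in range(5000):
    screen = random.uniform(0, 100)
    treated = screen < CUTOFF
    outcome = screen + (5 if treated else 0) + random.gauss(0, 2)
    students.append((screen, treated, outcome))


def mean(values):
    return sum(values) / len(values)


# Compare students in a narrow band around the cutoff, adjusting each
# group's mean outcome by its mean screening score to account for the
# running variable.
band = [(s, t, o) for (s, t, o) in students if abs(s - CUTOFF) < 5]
below = [(s, o) for (s, t, o) in band if t]       # treated, just under cutoff
above = [(s, o) for (s, t, o) in band if not t]   # untreated, just over cutoff
effect = (mean([o for _, o in below]) - mean([s for s, _ in below])) - (
    mean([o for _, o in above]) - mean([s for s, _ in above])
)
print(round(effect, 1))  # close to the simulated 5-point effect
```

When schools override or change their cutoff rules mid-study, as happened in the Balu study, the clean below/above comparison breaks down, which is one reason the design limited the generalizability of the findings.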
After reviewing the Balu study, Fuchs and Fuchs (2017) pointed out that RTI was not implemented consistently across the school sites. As a result, the study may raise more questions about the implementation of RTI than about the intervention’s ability to produce results.
Tiered instruction—RTI or MTSS—is still a new way to approach problem solving and decision making. Additional research is needed to determine the full impact of data-based decision making on various aspects of student achievement, as well as how teachers are actually using data to make decisions throughout the school day.
Evidence-Based Practices
Evidence-based practices (EBPs) are methods that high-quality research has shown to be effective and to result in meaningful student outcomes (Cook et al., 2012). Put another way, EBPs are practices that produce the intended student outcomes and are supported by a high-quality evidence base. In theory, EBPs support teacher decision making because teachers and leaders will confidently implement practices that they expect will produce positive outcomes in student skills.
For example, What Works Clearinghouse (2010) compiled research on Sound Partners, a phonics-based tutoring program that provides explicit instruction in reading foundational skills, including phonological awareness, phonics, word reading, and writing (Vadasy et al., 2004). Four studies met the What Works Clearinghouse evidence standards; three additional studies met standards with reservations. Based on these seven studies, the extent of evidence for Sound Partners on beginning readers is medium to large for alphabetics, fluency, and comprehension. This means that teachers using Sound Partners with fidelity and consistency can be confident that the intervention will likely produce positive outcomes for students who need to build alphabetic, fluency, and comprehension skills.
EBPs are especially important for students with disabilities. When these students are taught using a combination of teacher content knowledge and EBPs, their expected outcomes increase (Cook et al., 2008, 2012).
Federal laws—the 2004 Individuals with Disabilities Education Act (IDEA) and the 2015 Every Student Succeeds Act (ESSA)—require teachers to use EBPs to the greatest extent possible. In particular, ESSA gives states and schools the freedom to choose which interventions to implement, as long as they are evidence based. The goal is for EBPs to make teachers’ efforts more likely to increase achievement and to reduce achievement gaps (Lam et al., 2016). In particular, under ESSA, schools in the lowest performing 5% of schools and schools with persistently underperforming subgroups of students must provide targeted support using EBPs.
How do mandates around EBPs impact decision making?
ESSA gives states and districts flexibility in the use of EBPs for decision making (U.S. Department of Education, 2016). Even with flexibility, putting EBPs into practice is more difficult than it seems. Many interventions have sufficient evidence but fail upon implementation in school settings. Similarly, many programs lacking research support have become commonplace in schools.
One high-profile example comes from Lucy Calkins, whose Units of Study curriculum was used by an estimated one quarter of the 67,000 elementary schools nationwide in 2022—despite lacking substantial research support and promoting practices that are not in accordance with research on phonics. The concern is that Calkins’ methods, which are popular but are neither research based nor EBPs, do not support student learning (Goldstein, 2022).
In short, even with mandates and resources, such as What Works Clearinghouse, there is no straight line to ubiquitous implementation of EBPs. Educators who are aware of EBPs may use them, while others may rely on what they are given by their principals or districts or what peers and friends recommend.
How do EBPs impact student achievement?
Each EBP will have a different impact on student achievement depending on the intervention and the fidelity of implementation. The primary challenge in evaluating the effect of an EBP on student achievement is understanding how teachers choose practices to implement.
When choosing a practice, educators may rely on insufficient information or may lack access to relevant research (Rosenthal & DiMatteo, 2002; Slavin, 1984). Leaders or teachers may not have the time to research EBPs before the budget cycle forces a decision.
Pinkelman et al. (2022) studied how education administrators selected curriculum. They found that administrators looked at informal and formal data, timelines, and district priorities. The majority (90%) of administrators indicated that they used informal data, including anecdotal information, to determine what to adopt. A smaller majority (55%) used formal data, such as empirical data, test scores, and student performance data. Participants also indicated that their schedule for adopting new programs, in response to expiring licenses or changing standards, influenced their choices. Finally, administrators considered the program itself, including alignment with standards and district values, teacher buy-in, cost, and training needed for implementation.
To identify which options to choose:
- 85% of participants said they relied on word of mouth and recommendations from colleagues,
- 55% mentioned publisher marketing materials,
- 55% stated that they searched websites or articles for information, and
- no mention was made of reviewing empirical evidence aside from reading articles.
Administrators did consider effectiveness data; however, more research is needed on how administrators make decisions in general. This is especially important during periods of change, such as the current period in reading instruction, when administrators, teachers, and parents are revisiting reading curricula and approaches as they learn more about the Science of Reading (Pinkelman et al., 2022).
Additional information on evidence-based instruction can be found in the following overviews:
Effective Instruction Overview (https://winginstitute.org/effective-instruction-overview)
Instructional Delivery (https://winginstitute.org/effective-instruction-delivery)
Best Available Evidence Overview (https://winginstitute.org/evidence-based-decision-making-evidence)
Limitations of Decision Making
Limitations can influence decision making both before and after a decision is made. Before a decision, bias comes into play; after a decision, implementation factors (buy-in and treatment fidelity) influence how the decision ultimately impacts students.
Bias is a preference for or against something that is not based in fact, and it can occur at various points in the decision-making process. For example, publication bias occurs when studies with positive findings are more likely to be published in peer-reviewed journals than are studies with negative findings (Easterbrook et al., 1991). Publication bias can affect which studies get attention and which programs teachers and leaders choose from. Knowing which practices do not work is as important as knowing which do, and the fact that studies with negative effects are less likely to be published can impact decision making (Chow & Ekholm, 2018; Torgerson, 2006).
Confirmation bias is the tendency to focus on evidence that aligns with what we already think. In education, this could be teachers choosing a curriculum based on their existing ideas about reading rather than external evidence or information. At the district or system level, operating with bias may stall policy making and have broader implications when poorly navigated decisions impact teachers and students (The Decision Lab, 2023).
Treatment fidelity, or implementing an intervention as planned, is more difficult and time-consuming than might be expected (Detrich, 2014; Elliot & Mihalic, 2004). When an intervention does not produce the anticipated benefits in a real-world context, rather than reflect on fidelity of implementation, teachers may conclude that the intervention itself is ineffective (Detrich, 2014). One way to address this failure is to consider interventions that have enough research support to be implemented at scale in real-world contexts and to focus on integrity of implementation (Detrich, 2014).
For more information, see Treatment Fidelity Overview, https://winginstitute.org/evidence-based-decision-making-treatment-integrity.
Buy-in, or motivation to implement a strategy, increases the likelihood that a practice will be implemented with fidelity (Detrich, 2014). For example, schoolwide positive behavior support (SWPBS) is a data-based decision-making approach to school behavior. Because motivation and buy-in are key to the success of SWPBS, schools are encouraged to secure agreement from a majority of personnel to prioritize student behavior before implementing the intervention (Sugai & Horner, 2006).
School leaders can increase the likelihood of treatment fidelity by considering stakeholder buy-in during the decision-making process. This involves identifying interventions that:
- are compatible with the beliefs, values, and experiences of those who will implement it,
- solve a problem for the implementer,
- have an advantage over the current practice, and
- have the support of opinion leaders (Rogers, 2003; Detrich, 2014).
Impact of Decisions Made During the COVID-19 Pandemic
During the COVID-19 pandemic, the public had a front-row seat to how decision making impacts students. At the start of the pandemic, schools across the country closed, teachers pivoted to online teaching, and educational decisions moved front and center. Now, in spring of 2023, three years after the initial lockdown, learning has returned to the classroom, and findings about how those decisions impacted students are starting to emerge. The primary question is, how did decisions made during the pandemic impact student learning?
It is not yet possible to definitively say how many of these decisions impacted students; however, the decision to close schools has been examined.
In 2022, NAEP test scores showed an overall negative trend; fourth-grade students’ scores declined in math and reading, falling the most in 30 years. The New York Times reported that the pandemic had “erased two decades of progress” (Mervosh, 2022). The cause of the decline was not clear, but one hypothesis was that the decision to close schools had an influence.
Did states that closed schools for longer periods see a greater decrease in test scores? There was no clear correlation between regions that closed schools for longer and lower test scores (Greene, 2022). There was a correlation between student test scores and a family’s ability to provide access to resources; students whose families could provide them with resources (e.g., tutoring, computer and internet) scored higher than students whose families could not provide the same access (Greene, 2022).
Kuhfeld et al. (2022) analyzed data from the nationwide NWEA MAP testing system across the COVID-19-affected school years and found that student test scores in Grades 3 through 8 had fallen in math (0.2-0.27 standard deviations) and reading (0.09-0.18 standard deviations) compared with prepandemic scores. Achievement gaps increased, particularly during the 2020–2021 school year (0.1-0.2 standard deviations). Furthermore, these declines were more substantial than those seen after other events that disrupt education, such as natural disasters.
While there was not a direct correlation between closing schools and student learning, other decisions at the district and teacher level have yet to be analyzed: specifically, the impact of how teachers engaged students through virtual learning, how teachers collaborated with families to maintain learning at home, and how schools worked to support students’ mental health.
Cost of Decision Making
The cost of decision making is difficult to quantify. Restructuring a high school around an evidence-based community-schools model is much more expensive than implementing the Sound Partners intervention at one elementary school, but both decisions could produce important results.
Analyzing the cost of decision making involves reviewing inputs and outcomes. School-level inputs may include the cost of teachers’ time and any data systems or assessment systems used (e.g., a subscription to Salesforce to capture student behavior and attendance data; a subscription to a testing system, such as i-Ready or NWEA MAP). At the district level, inputs may include hiring additional teachers to reduce class sizes, providing additional staff for community services, and supplying additional facilities to rework a high school. Each district’s inputs will differ, as will the subsequent outcomes in terms of scale and benefit.
Recommendations for Decision Making
A productive decision-making process requires that leaders do the following:
- Consider stakeholder bias, which is most likely to occur early in the process.
- Secure buy-in by
- identifying interventions that are in line with teacher beliefs, values, and experiences,
- choosing an intervention that solves a problem for the teacher,
- choosing an intervention that has an advantage over what is currently being done, and
- choosing an intervention that has the support of leaders in the setting (Rogers, 2003; Detrich, 2014).
Accountability
The behavioral ideas behind accountability can influence how teachers and school leaders make decisions (Gill et al., 2016). Policy makers should do the following:
- Engage teachers in the process of decision making at all levels (Sarafidou & Chatziioannidis, 2013).
- Involve multiple forms of accountability to create a system that encourages transparency and continuous improvement (Gill et al., 2016).
- Include focuses on equity and well-being in addition to student achievement when setting goals for data and accountability (Datnow & Park, 2018).
Data-Based Decision Making
Mandinach and Schildkamp (2021) made the following recommendations for using data in decision making:
- Start with goals for how the data will be used and not for the data itself.
- Evaluate the data that is available: Is the necessary data accessible? What data sources are missing?
- Use multiple data sources to capture the diverse needs of students; include formal and informal sources.
- Where appropriate, involve students in data analysis and use.
- Balance accountability and improvement in the use of data.
- Align data and decisions that need to be made; the data should be actionable and able to inform practice.
- Know what data literacy and proficiency look like for teachers, and help teachers work toward data proficiency.
Evidence-Based Practices
Torres et al. (2012) outlined these steps to consider when choosing and implementing EBPs:
- Identify the parameters (e.g., age of students, number of students, time) and teacher characteristics.
- Use a reputable source (e.g., What Works Clearinghouse, CEEDAR Center, Best Evidence Encyclopedia, National Center on Intensive Intervention).
- Identify the desired outcome from an EBP.
- After choosing an EBP, break it down like a recipe to make sure all the components are present.
- If an EBP requires more training, materials, or instruction to implement, acquire them before implementation.
- Track implementation fidelity; some EBPs, such as Sound Partners, include fidelity measures in the materials.
- If adapting the EBP, do so with caution and thought.
- Monitor student outcomes; either use a program’s monitoring function or create your own.
- Use progress monitoring data to track student progress toward the ultimate goal that was set when choosing the practice.
In a single day, teachers and school leaders make seemingly countless decisions ranging from inconsequential to highly impactful. Daily decisions are shaped by broad trends in decision making, including accountability, data-based decision making, and evidence-based practices. Many factors limit how decisions affect student outcomes, from buy-in to fidelity of implementation. Understanding the factors that influence decision making and how best to make decisions that result in higher implementation fidelity can have a positive impact on student achievement.
Bae, S. (2018). Redesigning systems of school accountability: A multiple measures approach to accountability and support. Education Policy Analysis Archives, 26(8). http://dx.doi.org/10.14507/epaa.26.2920
Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading (NCEE 2016-4000). U.S. Department of Education. https://ies.ed.gov/ncee/pubs/20164000/pdf/20164000.pdf
Carlson, D., Borman, G. D., & Robinson, M. (2011). A multistate district-level cluster randomized trial of the impact of data-driven reform on reading and mathematics achievement. Educational Evaluation and Policy Analysis, 33(3), 378–398. https://doi.org/10.3102/0162373711412765
Chen, G. (2022, February 22). When teachers cheat: The standardized test controversies. Public School Review. https://www.publicschoolreview.com/blog/when-teachers-cheat-the-standardized-test-controversies
Chow, J. C., & Ekholm, E. (2018). Do published studies yield larger effect sizes than unpublished studies in education and special education? A meta-review. Educational Psychology Review, 30(3), 727-744. https://doi.org/10.1007/s10648-018-9437-7
Clotfelter, C. T., Ladd, H. F., Vigdor, J. L., & Diaz, R. A. (2004). Do school accountability systems make it more difficult for low-performing schools to attract and retain high-quality teachers? Journal of Policy Analysis and Management, 23(2), 251–271. https://onlinelibrary.wiley.com/doi/abs/10.1002/pam.20003
Cook, B. G., Smith, G. J., & Tankersley, M. (2012). Evidence-based practices in special education. In K. R. Harris, S. Graham, T. Urdan, C. B. McCormick, G. M. Sinatra, & J. Sweller (Eds.), APA educational psychology handbook, Vol. 1: Theories, constructs, and critical issues (pp. 495-527). American Psychological Association.
Cook, B. G., Tankersley, M., Cook, L., & Landrum, T. J. (2008). Evidence-based practices in special education: Some practical considerations. Intervention in School and Clinic, 44(2), 69-75.
Data Quality Campaign. (2018, September 12). Teachers see the power of data—but don’t have enough time to use it. https://dataqualitycampaign.org/resource/teachers-see-the-power-of-data-but-has-have-enough-time-to-use-it/
Datnow, A., & Park, V. (2018). Opening or closing doors for students? Equity and data use in schools. Journal of Educational Change, 19(2). https://doi.org/10.1007/s10833-018-9323-6
Davis, T., Bhatt, R., & Schwarz, K. (2015). School segregation in the era of accountability. Social Currents, 2(3), 239–259. https://doi.org/10.1177/2329496515589852
The Decision Lab. (2023). Why do we favor our existing beliefs? The confirmation bias explained. https://thedecisionlab.com/biases/confirmation-bias
Dee, T. S., & Jacob, B. A. (2010a). Evaluating NCLB: Accountability has produced substantial gains in math skills but not in reading. Education Next, 10(3). https://www.educationnext.org/evaluating-nclb/
Dee, T. S., & Jacob, B. A. (2010b). The impact of No Child Left Behind on students, teachers, and schools. The Brookings Institution. https://www.brookings.edu/wp-content/uploads/2010/09/2010b_bpea_dee.pdf
Deno, S. L. (2014). Reflections on progress monitoring and databased intervention. In B. G. Cook, M. Tankersley, & T. J. Landrum (Eds.), Special education past, present, and future: Perspectives from the field (Vol. 27, pp. 171–194). Emerald Group Publishing.
Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258–271.
Dewan, S. (2010, August 2). Cheating inquiry in Atlanta largely vindicates schools. The New York Times. https://www.nytimes.com/2010/08/03/education/03georgia.html?_r=1
Easterbrook, P. J., Gopalan, R., Berlin, J. A., & Matthews, D. R. (1991). Publication bias in clinical research. The Lancet, 337(8746), 867–872. https://www.sciencedirect.com/science/article/pii/014067369190201Y
Elliot, D. S., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5(1), 47–53.
Figlio, D. N., & Getzler, L. S. (2006). Accountability, ability and disability: Gaming the system? In T. J. Gronberg & D. W. Jansen (Eds.), Improving school accountability: Advances in applied microeconomics, Vol. 14 (pp. 35–49). Emerald Group Publishing. http://dx.doi.org/10.1016/s0278-0984(06)14002-x
Fuchs, D., & Fuchs, L. S. (2017). Critique of the national evaluation of response to intervention: A case for simpler frameworks. Exceptional Children, 83(3), 255–268. https://journals.sagepub.com/doi/abs/10.1177/0014402917693580
Gesel, S. A., LeJeune, L. M., Chow, J. C., Sinclair, A. C., & Lemons, C. J. (2021). A meta-analysis of the impact of professional development on teachers’ knowledge, skill, and self-efficacy in data-based decision making. Journal of Learning Disabilities, 54(4), 269-283. https://files.eric.ed.gov/fulltext/EJ1298555.pdf
Gill, B. P., Lerner, J. S., & Meosky, P. (2016). Reimagining accountability in K–12 education. Behavioral Policy, 2(1), 57–70. https://behavioralpolicy.org/wp-content/uploads/2017/05/BSP_vol1is1_Gill.pdf
Goldstein, D. (2022, May 22). In the fight over how to teach reading, this guru makes a major retreat. The New York Times. https://www.nytimes.com/2022/05/22/us/reading-teaching-curriculum-phonics.html
Greene, P. (2022, September 24). Did closing school buildings cause test scores to drop? Looking for evidence. Forbes. https://www.forbes.com/sites/petergreene/2022/09/24/did-closing-school-buildings-cause-test-scores-to-drop-looking-for-evidence/?sh=6ccde44424c4
Hamilton, L. S., Stecher, B. M., Marsh, J. A., McCombs, J. S., Robyn, A., Russell, J. L., Naftel, S., & Barney, H. (2007). Standards-based accountability under No Child Left Behind: Experiences of teachers and administrators in three states. RAND Corporation.
Hanushek, E. & Raymond, M. (2004). The effect of school accountability systems on the level and distribution of student achievement. Journal of the European Economic Association, 2(2–3), 406–415. http://dx.doi.org/10.1162/154247604323068096
Hanushek, E., & Raymond, M. (2005). Does school accountability lead to improved student performance? Journal of Policy Analysis and Management, 24(2), 297–327.
Jacob, B. (2005). Accountability, incentives and behavior: The impact of high-stakes testing in the Chicago Public Schools. Journal of Public Economics, 89(5–6), 843–877.
Jacob, B., & Levitt, S. (2003). Rotten apples: An investigation of the prevalence and predictors of teacher cheating. Quarterly Journal of Economics, 118(3), 843–877. https://academic.oup.com/qje/article-abstract/118/3/843/1943009
Jennings, J., & Rentner, D. S. (2006). Ten big effects of the No Child Left Behind Act on public schools. Phi Delta Kappan, 88(2), 110–113. http://dx.doi.org/10.1177/003172170608800206
Keuning, T., van Geel, M., Visscher, A., & Fox, J.-P. (2019). Assessing and validating effects of a data-based decision-making intervention on student growth for mathematics and spelling. Journal of Educational Measurement, 56(4), 757–792.
Klein, A. (2021, December 6). 1,500 decisions a day (at least!) How teachers cope with a dizzying array of questions. Education Week. https://www.edweek.org/teaching-learning/1-500-decisions-a-day-at-least-how-teachers-cope-with-a-dizzying-array-of-questions/2021/12
Kuhfeld, M., Soland, J., & Lewis, K. (2022). Test score patterns across three COVID-19-impacted school years (EdWorkingPaper No. 22-521). Annenberg Institute at Brown University. https://edworkingpapers.com/sites/default/files/ai22-521.pdf
Lam, L., Mercer, C., Podolsky, A., & Darling-Hammond, L. (2016). Evidence-based intervention: A guide for states. Learning Policy Institute. https://learningpolicyinstitute.org/sites/default/files/product-files/Evidence_Based_Interventions_Guide_for_States.pdf
Mandinach, E. B., & Schildkamp, K. (2021). Misconceptions about data-based decision making in education: An exploration of the literature. Studies in Educational Evaluation, 69. https://www.sciencedirect.com/science/article/pii/S0191491X1930416X
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-based decision making in education: Evidence from recent RAND research. RAND Corporation. https://eric.ed.gov/?id=ED605450
Mellard, D. F., Frey, B. B., & Woods, K. I. (2012). School-wide student outcomes of response to intervention frameworks. Learning Disabilities, 10(2), 17–32. https://files.eric.ed.gov/fulltext/EJ998223.pdf
Mervosh, S. (2022, September 1). The pandemic erased two decades of progress in math and reading. The New York Times. https://www.nytimes.com/2022/09/01/us/national-test-scores-math-reading-pandemic.html
Neal, D., & Schanzenbach, D. W. (2010). Left behind by design: Proficiency counts and test-based accountability. Review of Economics and Statistics 92(2), 263–283.
Pedulla, J. J., Abrams, L. M., Madaus, G. F., Russell, M. K., Ramos, M. A., & Miao, J. (2003). Perceived effects of state-mandated testing programs on teaching and learning: Findings from a national survey of teachers. National Board on Educational Testing and Public Policy.
Petrilli, M. J. (2019, September 25). A new era of accountability in education has barely just begun. Fordham Institute. https://fordhaminstitute.org/national/commentary/new-era-accountability-education-has-barely-just-begun
Pinkelman, S. E., Rolf, K. R., Landon, T., Detrich, R., McLaughlin, C., Peterson, A., & McKnight-Lizotte, M. (2022). Curriculum adoption in U.S. schools: An exploratory, qualitative analysis. Global Implementation Research and Applications, 2(1), 1–11.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
Rosenthal, R., & DiMatteo, M. R. (2002). Meta-analysis. In H. Pashler & J. Wixted (Eds.), Stevens’ handbook of experimental psychology, Vol. 4: Methodology in experimental psychology (pp. 391-428). John Wiley.
Sarafidou, J., & Chatziioannidis, G. (2013). Teacher participation in decision making and its impact on school and teachers. International Journal of Educational Management, 27(2), 170–183. https://www.emerald.com/insight/content/doi/10.1108/09513541311297586/full/html
Schildkamp, K., Poortman, C. L., Ebbeler, J., & Pieters, J. M. (2019). How school leaders can build effective data teams: Five building blocks for a new wave of data-informed decision making. Journal of Educational Change, 20(3), 283–325. https://doi.org/10.1007/s10833-019-09345-3
Slavin, R. E. (1984). Meta-analysis in education: How has it been used? Educational Researcher, 13(8), 6–15.
Staman, L., Timmermans, A. C., & Visscher, A. J. (2017). Effects of a data-based decision making intervention on student achievement. Studies in Educational Evaluation, 55, 58-67. https://www.sciencedirect.com/science/article/abs/pii/S0191491X17300329?via%3Dihub
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42(8), 795–819. https://doi.org/10.1002/pits.20113
Strauss, V. (2022, January 4). A teacher’s brain on a typical school day. The Washington Post. https://www.washingtonpost.com/education/2022/01/04/a-teachers-brain-at-school/
Sugai, G., & Horner, R. H. (2006). A promising approach for expanding and sustaining school-wide positive behavior support. School Psychology Review, 35(2), 245–259.
Supovitz, J. (2009). Can high stakes testing leverage educational improvement? Prospects from the last decade of testing and accountability reform. Journal of Educational Change, 10(2–3), 211–227. http://dx.doi.org/10.1007/s10833-009-9105-2
Torgerson, C. J. (2006). Publication bias: The Achilles’ heel of systematic reviews? British Journal of Educational Studies, 54(1), 89–102. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8527.2006.00332.x
Torres, C., Farley, C. A., & Cook, B. G. (2012). A special educator’s guide to successfully implementing evidence-based practices. TEACHING Exceptional Children, 45(1), 64–73.
Torres, R. (2021). Does test-based school accountability have an impact on student achievement and equity in education? A panel approach using PISA. (OECD Education Working Papers, No. 250). https://doi.org/10.1787/0798600f-en
U.S. Department of Education. (2016). Non-regulatory guidance: Using evidence to strengthen education investments. https://www2.ed.gov/policy/elsec/leg/essa/guidanceuseseinvestment.pdf
Vadasy, P. F., Wayne, S. K., O’Connor, R. E., Jenkins, J. R., Pool, K., Firebaugh, M., & Peyton, J. (2004). Sound Partners: A supplementary one-to-one tutoring program in phonics-based early reading skills. Sopris West.
Van der Kleij, F. M., Vermeulen, J. A., Schildkamp, K., & Eggen, T. J. H. M. (2015). Integrating data-based decision making, assessment for learning, and diagnostic testing in formative assessment. Assessment in Education: Principles, Policy & Practice, 22(3), 324–343. https://research.utwente.nl/en/publications/integrating-data-based-decision-making-assessment-for-learning-an
Vanlommel, K., Vanhoof, J., & Van Petegem, P. (2016). Data use by teachers: The impact of motivation, decision-making style, supportive relationships and reflective capacity. Educational Studies, 42(1), 36–53.
What Works Clearinghouse. (2010). Sound Partners. U.S. Department of Education. https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_soundpartners_092110.pdf
What Can Implementation Science Offer During COVID-19?
The COVID-19 outbreak has highlighted the significance and value of implementation knowledge and implementation capacity. To assist during this difficult time, GIS has gathered resources to help schools weather these challenges.
This special issue of Strategies is devoted to highlighting this knowledge base and corresponding practices. In this issue you will find an in-depth case study of a school district engaged in systemic improvement, using the principles and practices of what Dr. Jackson calls the Pedagogy of Confidence®.
Statistics as Principled Argument
The book’s central thesis is that the purpose of statistics is to organize a useful argument from quantitative evidence, using a form of principled rhetoric. Five criteria, described by the acronym MAGIC (magnitude, articulation, generality, interestingness, and credibility), are proposed as crucial features of a persuasive, principled argument.
Abelson, R. P. (2012). Statistics as principled argument. Psychology Press.
Teachers Matter: Evidence from Value-Added Assessments.
Value-added assessment proves that very good teaching can boost student learning and that family background does not determine a student's destiny. Students taught by highly effective teachers several years in a row earn higher test scores than students assigned to particularly ineffective teachers.
American Educational Research Association (AERA). (2004). Teachers matter: Evidence from value-added assessments. Research Points, 2(2). http://www.aera.net/Portals/38/docs/Publications/Teachers%20Matter.pdf
Can evidence-based prevention programs be sustained in community practice settings? The Early Risers' advanced-stage effectiveness trial
This study evaluated institutional sustainability of the Early Risers “Skills for Success” conduct problems prevention program.
August, G. J., Bloomquist, M. L., Lee, S. S., Realmuto, G. M., & Hektner, J. M. (2006). Can evidence-based prevention programs be sustained in community practice settings? The Early Risers’ advanced-stage effectiveness trial. Prevention Science, 7(2), 151-165.
The sustained use of research-based instructional practice: A case study of peer-assisted learning strategies in mathematics
This article explores factors influencing the sustained use of Peer Assisted Learning Strategies (PALS) in math in one elementary school.
Baker, S., Gersten, R., Dimino, J. A., & Griffiths, R. (2004). The sustained use of research-based instructional practice: A case study of peer-assisted learning strategies in mathematics. Remedial and Special Education, 25(1), 5-24.
Ending the Science Wars
The science wars have been raging for decades, raising many questions about the power of science. The book not only helps resolve many current debates about science, but it is also a major contribution to explaining science in terms of a powerful philosophical system.
Baldwin, J. D. (2015). Ending the science wars. Routledge.
Reforming teacher preparation and licensing: What is the evidence?
Using professional self-regulation in medicine as a model, the National Commission on Teaching and America's Future has proposed sweeping changes in how teachers are trained and licensed, claiming that the reforms are well-grounded in research. This paper argues that the research literature offers far less support for the Commission's recommendations than is claimed.
Ballou, D., & Podgursky, M. (2000). Reforming teacher preparation and licensing: What is the evidence? Teachers College Record, 102(1), 5-27.
Outcome evaluation of Washington State's research-based programs for juvenile offenders.
The CJAA funded the nation’s first statewide experiment concerning research-based programs for juvenile justice. The question here was whether they work when applied statewide in a “real world” setting. This report indicates that the answer to this question is yes— when the programs are competently delivered.
Barnoski, R., & Aos, S. (2004). Outcome evaluation of Washington State’s research-based programs for juvenile offenders. Olympia, WA: Washington State Institute for Public Policy.
Identifying and Implementing Education Practices Supported by Rigorous Evidence: A User Friendly Guide.
This Guide seeks to provide assistance to educational practitioners in evaluating whether an educational intervention is backed by rigorous evidence of effectiveness, and in implementing evidence-based interventions in their schools or classrooms.
Baron, J. (2004). Identifying and Implementing Education Practices Supported by Rigorous Evidence: A User Friendly Guide. Journal for Vocational Special Needs Education, 26, 40-54.
Answering the Bell: High School Start Times and Student Academic Outcomes.
This research finds that starting school later is associated with fewer suspensions and higher course grades, and suggests that disadvantaged students may especially benefit from delayed start times.
Bastian, K. C., & Fuller, S. C. (2018). Answering the Bell: High School Start Times and Student Academic Outcomes. AERA Open, 4(4), 2332858418812424.
Using Resource and Cost Considerations to Support Educational Evaluation: Six Domains.
The focus of this essay is on which economic methods can complement and enhance impact evaluations. The authors propose the use of six domains to link intervention effectiveness to the best technique needed to determine which practice is the most cost-effective choice.
Belfield, C. R., & Brooks Bowden, A. (2019). Using Resource and Cost Considerations to Support Educational Evaluation: Six Domains. Educational Researcher, 48(2), 120-127.
Practice makes perfect and other myths about mental health services.
After reviewing relevant scientific literature, the author concludes that these are myths with little or no evidence to support them. The author suggests 4 ways to improve the quality and effectiveness of services.
Bickman, L. (1999). Practice makes perfect and other myths about mental health services. American Psychologist, 54(11), 965.
External validity and experimental investigation of individual behavior
In the present article, it is argued that rules and conventions for generalizing in group-statistical research are different from those applying to single-subject research.
Birnbrauer, J. S. (1981). External validity and experimental investigation of individual behaviour. Analysis and Intervention in Developmental Disabilities, 1(2), 117-132.
Effects of teacher professional development on gains in student achievement: How meta analysis provides scientific evidence useful to education leaders
The Council of Chief State School Officers (CCSSO) was awarded a grant from the National Science Foundation to conduct a meta analysis study with the goal of providing state and local education leaders with scientifically-based evidence regarding the effects of teacher professional development on improving student learning. The analysis focused on completed studies of effects of professional development for K-12 teachers of science and mathematics.
Blank, R. K., & De Las Alas, N. (2009). The effects of teacher professional development on gains in student achievement: How meta analysis provides scientific evidence useful to education leaders. Council of Chief State School Officers.
Evidence-Based Programs and Cultural Competence
This was a historic meeting among developers of evidence-based programs, leaders of various cultural, racial, and ethnic professional associations, and representatives of family associations. Evidence-based program implementation and cultural competence in human services have had parallel paths with limited intersection and dialogue.
Blase, K. A., & Fixsen, D. L. (2003). Evidence-based programs and cultural competence. Tampa, FL: National Implementation Research Network, Louis de la Parte Florida Mental Health Institute, University of South Florida.
Special education teachers' views of research-based practices
Focus groups with teachers of students with learning disabilities (n = 30) and teachers of students with emotional/behavior disorders (n = 19) were conducted to examine the teachers’ perspectives about educational research and the extent to which they found research findings to be useful.
Boardman, A. G., Argüelles, M. E., Vaughn, S., Hughes, M. T., & Klingner, J. (2005). Special education teachers' views of research-based practices. The Journal of Special Education, 39(3), 168-180.
Qualitative Studies in Special Education
An overview of the many types of studies that fall into the qualitative design genre is provided. Strategies that qualitative researchers use to establish their studies as credible and trustworthy are listed and defined.
Brantlinger, E., Jimenez, R., Klingner, J., Pugach, M., & Richardson, V. (2005). Qualitative studies in special education. Exceptional children, 71(2), 195-207.
Single-case research design and analysis : new directions for psychology and education
This book has three main goals: to take stock of progress in the development of data-analysis procedures for single-subject research; to clearly explain errors of application and consider them within the context of new theoretical and empirical information of the time; and to closely examine new developments in the analysis of data from single-subject or small n experiments.
Busk, P. L., Serlin, R. C., Kratochwill, T. R., & Levin, J. R. (1992). Single-case research design and analysis: New directions for psychology and education.
Evidence-Based Practice: How Did It Emerge and What Does It Mean for the Early Childhood Field?.
The authors discuss the emergence of the evidence-based practice movement and the challenges of integrating what we know from scientific research into daily practice with children and families.
Buysse, V., & Wesley, P. W. (2006). Evidence-based practice: How did it emerge and what does it mean for the early childhood field? Zero to Three, 27(2), 50-55.
Campaigns for Moving Research Into Practice
In this perspective, the author challenges us to accept the responsibility of moving education forward by doing more than paying lip service to the translation of research into practice.
Carnine, D. (1999). Campaigns for moving research into practice. Remedial and Special Education, 20(1), 2-35.
Why education experts resist effective practices: Report of the Thomas B. Fordham Foundation
This essay provides examples from reading and math curricula, describes how experts have, for ideological reasons, shunned some solutions that do display robust evidence of efficacy, then examines how public impatience has forced other professions to “grow up” and accept accountability and scientific evidence.
Carnine, D. (2000). Why education experts resist effective practices (Report of the Thomas B. Fordham Foundation). Washington, DC: Thomas B. Fordham Foundation.
How much are districts spending to implement teacher evaluation systems: Case studies of Hillsborough County Public Schools, Memphis City Schools, and Pittsburgh Public Schools.
This report presents case studies of the efforts by three school districts, Hillsborough County Public Schools (HCPS), Memphis City Schools (MCS), and Pittsburgh Public Schools (PPS), to launch, implement, and operate new teacher evaluation systems as part of a larger reform effort called the Partnership Sites to Empower Effective Teaching.
Chambers, J., Brodziak de los Reyes, I., & O'Neil, C. (2013). How much are districts spending to implement teacher evaluation systems?
Empirically Supported Psychological Interventions: Controversies and Evidence
The work of several such task forces and other groups reviewing empirically supported treatments (ESTs) in the United States, United Kingdom, and elsewhere is summarized here, along with the lists of treatments that have been identified as ESTs
Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual review of psychology, 52(1), 685-716.
A multilevel study of leadership, empowerment, and performance in teams
A multilevel model of leadership, empowerment, and performance was tested using a sample of 62 teams, 445 individual members, 62 team leaders, and 31 external managers from 31 stores of a Fortune 500 company. Leader-member exchange and leadership climate-related differently to individual and team empowerment and interacted to influence individual empowerment.
Chen, G., Kirkman, B. L., Kanfer, R., Allen, D., & Rosen, B. (2007). A multilevel study of leadership, empowerment, and performance in teams. Journal of Applied Psychology, 92(2), 331–346.
The Frontier of Evidence-Based Practice
These guidelines emphasize the dimensions of (1) efficacy and (2) effectiveness. A model is provided that proposes how evidence, however defined, will ultimately connect with practice.
Chorpita, B. F. (2003). The frontier of evidence-based practice.
Mapping Evidence-Based Treatments for Children and Adolescents: Application of the Distillation and Matching Model to 615 Treatments From 322 Randomized Trials
This is the 1st study to aggregate evidence-based treatment protocols empirically according to their constituent treatment procedures, and the results point both to the overall organization of therapy procedures according to matching factors and to gaps in the current child and adolescent treatment literature.
Chorpita, B. F., & Daleiden, E. L. (2009). Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of consulting and clinical psychology, 77(3), 566.
Evidence-based practice in the broader context: How can we really use evidence to inform decisions
This article examines the evidence-based practice decision-making heuristic in the broader context of clinical decision making.
Chorpita, B. F., & Starace, N. K. (2010). Evidence-based practice in the broader context: How can we really use evidence to inform decisions. Journal of Evidence-Based Practices for Schools, 11(1), 47-61.
Driving with Roadmaps and Dashboards: Using Information Resources to Structure the Decision Models in Service Organizations
This paper illustrates the application of design principles for tools that structure clinical decision-making
Chorpita, B. F., Bernstein, A., Daleiden, E. L., & Research Network on Youth Mental Health. (2008). Driving with roadmaps and dashboards: Using information resources to structure the decision models in service organizations. Administration and Policy in Mental Health and Mental Health Services Research, 35(1-2), 114-123.
Toward Large-Scale Implementation of Empirically Supported Treatments for Children: A Review and Observations by the Hawaii Empirical Basis to Services Task Force
This article details the context and findings of a review conducted by a state-established panel established to examine the efficacy and effectiveness of child treatments for Anxiety Disorders, Depression, Attention Deficit Hyperactivity Disorder, Conduct and Oppositional Disorders, and Autistic Disorder
Chorpita, B. F., Yim, L. M., Donkervoet, J. C., Arensdorf, A., Amundsen, M. J., McGee, C., ... & Morelli, P. (2002). Toward large‐scale implementation of empirically supported treatments for children: A review and observations by the Hawaii Empirical Basis to Services Task Force. Clinical Psychology: Science and Practice, 9(2), 165-190.
Do Published Studies Yield larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-Review
The purpose of this study is to estimate the extent to which publication bias is present in education and special education journals. Published studies were associated with significantly larger effect sizes than unpublished studies (d = 0.64). The authors suggest that meta-analyses report effect sizes of published and unpublished studies separately in order to address publication bias.
Chow, J. C., & Ekholm, E. (2018). Do published studies yield larger effect sizes than unpublished studies in education and special education? A meta-review. Educational Psychology Review, 30(3), 727-744.
Evidence-based interventions in school psychology: Opportunities, challenges, and cautions.
This paper describes opportunities, challenges, and cautions in response to T. R. Kratochwill and K. C. Stoiber's vision and other critical issues for the evidence-based intervention (EBI) movement in school psychology.
Christenson, S. L., Carlson, C., & Valdez, C. R. (2002). Evidence-based interventions in school psychology: Opportunities, challenges, and cautions. School Psychology Quarterly, 17(4), 466.
Discussion Sections in Reports of Controlled Trials Published in General Medical Journals: Islands in Search of Continents?
This study assessed the extent to which reports of RCTs published in five general medical journals discussed new results in light of all available evidence.
Clarke, M., & Chalmers, I. (1998). Discussion sections in reports of controlled trials published in general medical journals: Islands in search of continents? JAMA, 280(3), 280-282.
Combining evidence-based practice with stakeholder consensus to enhance psychosocial rehabilitation services in the Texas benefit design initiative
This article describes the use of evidence-based practice along with a multi-stakeholder consensus process to design the psychosocial rehabilitation components in a benefit package of publicly funded mental health services in Texas.
Cook, J. A., Toprac, M., & Shore, S. E. (2004). Combining evidence-based practice with stakeholder consensus to enhance psychosocial rehabilitation services in the Texas benefit design initiative. Psychiatric Rehabilitation Journal, 27(4), 307.
Disproportionality reduction in exclusionary school discipline: A best-evidence synthesis
A substantial body of empirical literature shows that students who are African American, Latinx, or American Indian/Alaskan Native, and students who are male, diagnosed with disabilities, or from low socioeconomic backgrounds are more likely to experience exclusionary discipline practices in U.S. schools. Though there is a growing commitment to mitigating discipline disparities through alternative programming, disproportionality in the application of harmful discipline practices clearly persists.
Cruz, R. A., Firestone, A. R., & Rodl, J. E. (2021). Disproportionality reduction in exclusionary school discipline: A best-evidence synthesis. Review of Educational Research, 91(3), 397-431.
From Data to Wisdom: Quality Improvement Strategies Supporting Large-scale Implementation of Evidence-Based Services
The goal of this article is to illustrate various strategies that the Hawaii Child and Adolescent Mental Health Division (CAMHD) adopted to increase the use of empirical evidence to improve the quality of services and outcomes for youth.
Daleiden, E. L., & Chorpita, B. F. (2005). From data to wisdom: Quality improvement strategies supporting large-scale implementation of evidence-based services. Child and Adolescent Psychiatric Clinics, 14(2), 329-349.
Teachers See the Power of Data – But Don’t Have Enough Time to Use It
The Data Quality Campaign’s first teacher poll – commissioned in 2018 – uncovered this important finding and allows for a better understanding of educators’ opinions of data.
Evidence based medicine
This journal attempts to fill the chasm by helping doctors find the information that will ensure they can provide optimum management for their patients.
Davidoff, F., Haynes, B., Sackett, D., & Smith, R. (1995). Evidence based medicine.
Best practices in teachers' professional development in the United States
This paper discusses best practices in teachers’ professional development (PD) in the United States. We begin by presenting a conceptual framework for effective professional development, which suggests five key features that make professional development effective—content focus, active learning, coherence, sustained duration, and collective participation.
Desimone, L. M., & Garet, M. S. (2015). Best practices in teachers' professional development in the United States.
Cost-effectiveness analysis: A component of evidence-based education
Including cost-effectiveness data in the evaluation of programs is the next step in the evolution of evidence-based practice. Evidence-based practice is grounded in three complementary elements: best available evidence, professional judgment, and client values and context. To fully apply the cost-effectiveness data, school administrators will have to rely on all three of these elements. The function of cost-effectiveness data is to guide decisions about how limited financial resources should be spent to produce the best educational outcomes. To do so, it is necessary for decision makers to choose between options with known cost-effectiveness ratios while working within the budget constraints. In this article, I discuss some of the considerations that have to be addressed in the decision-making process and implications of including cost-effectiveness analyses in data-based decision making.
Detrich, R. (2020). Cost-effectiveness analysis: A component of evidence-based education. School Psychology Review, 1-8.
Getting beneath the veil of effective schools: Evidence from New York City
This paper examines data on 39 charter schools and correlates these data with school effectiveness. We find that class size, per-pupil expenditure, teacher certification, and teacher training are not correlated with school effectiveness. In stark contrast, frequent teacher feedback, the use of data to guide instruction, high-dosage tutoring, increased instructional time, and high expectations together explain approximately 45 percent of the variation in school effectiveness.
Dobbie, W., & Fryer Jr, R. G. (2013). Getting beneath the veil of effective schools: Evidence from New York City. American Economic Journal: Applied Economics, 5(4), 28-60.
Fundamental Principles of Evidence-based Medicine Applied to Mental Health Care
The purpose of evidence-based medicine (EBM) is to enable patients, through collaboration with their health care providers, to take advantage of the best available scientific evidence when making health care decisions.
Drake, R. E., Rosenberg, S. D., Teague, G. B., Bartels, S. J., & Torrey, W. C. (2003). Fundamental principles of evidence-based medicine applied to mental health care. Psychiatric Clinics of North America.
What is Evidence?
This article focuses on the most fundamental question regarding evidence-based practice: What is evidence? To address this question, the authors first review several of the definitions, criteria, and strategies that have been used to define scientific evidence.
Drake, R. E., Latimer, E. S., Leff, H. S., McHugo, G. J., & Burns, B. J. (2004). What is evidence? Child and Adolescent Psychiatric Clinics of North America, 13, 717-728.
Moving Teacher Preparation into the Future
This report discusses how to use research findings as a base to support stronger teacher preparation programs.
Why federal spending on disadvantaged students (Title I) doesn’t work
The largest Elementary and Secondary Education Act (ESEA) expenditure by far is for its Title I program. This report tries to follow the money to see whether Title I funds are spent effectively and whether ESEA achieves its objectives. It suggests that focusing effective interventions on the neediest students may provide a way forward consistent with fiscal realities.
Developing evidence-based practice: The role of case-based research.
The authors argue that important evidence about best practice comes from case-based research, which builds knowledge in a clinically useful manner and complements what is achieved by multivariate research methods.
Edwards, D. J., Dattilio, F. M., & Bromley, D. B. (2004). Developing evidence-based practice: The role of case-based research. Professional Psychology: Research and Practice, 35(6), 589.
Planning, Implementing and Evaluating Evidence-Based Interventions
This section includes tools and resources that can help school leaders, teachers, and other stakeholders be more strategic in their decision-making about planning, implementing, and evaluating evidence-based interventions to improve the conditions for learning and facilitate positive student outcomes.
Elliott, S. N., Witt, J. C., & Kratochwill, T. R. (1991). Selecting, implementing, and evaluating classroom interventions. Interventions for achievement and behavior problems, 99-135.
Evidence-based kernels: Fundamental units of behavioral influence
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior.
Embry, D. D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical child and family psychology review, 11(3), 75-113.
Could John Stuart Mill have saved our schools?
This book compares what actually occurred since publication of A System of Logic with some of the more probable scenarios of what could have happened if education had been framed as a science that resides on a logical-empirical base.
Engelmann, S., & Carnine, D. (2016). Could John Stuart Mill have saved our schools? Attainment Company Inc.
Randomized controlled trials in evidence-based mental health care: Getting the right answer to the right question
The purpose of clinical research is to answer this question: Would a new treatment, when added to the existing range of treatment options available in practice, help patients?
Essock, S. M., Drake, R. E., Frank, R. G., & McGuire, T. G. (2003). Randomized controlled trials in evidence-based mental health care: getting the right answer to the right question. Schizophrenia Bulletin, 29(1), 115-123.
Criteria for evaluating the significance of developmental research in the twenty-first century: Force and counterforce
The purpose of this paper is to identify the forces that influence how developmental research is prioritized and evaluated and how these influences are changing as we enter the new millennium.
Fabes, R. A., Martin, C. L., Hanish, L. D., & Updegraff, K. A. (2000). Criteria for evaluating the significance of developmental research in the twenty‐first century: Force and counterforce. Child development, 71(1), 212-221.
The challenges of implementing evidence based practice: Ethical considerations in practice, education, policy, and research.
This paper identified and discussed some of the more pressing challenges and associated ethical dilemmas of implementing EBP in social work and strategies to manage them, in the hopes of affirming that the process of EBP is both feasible and practicable.
Farley, A. (2009). The challenges of implementing evidence based practice: ethical considerations in practice, education, policy, and research. Social Work & Society, 7(2), 246-259.
Scientific culture and educational research.
In this article, which draws on a recently released National Research Council report, the authors argue that the primary emphasis should be on nurturing and reinforcing a scientific culture of educational research.
Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational researcher, 31(8), 4-14.
Do students benefit from longer school days? Regression discontinuity evidence from Florida’s additional hour of literacy instruction
This research examines the impact of longer school days on student achievement and attempts to fill gaps in the evidence base on this topic. Although the study finds positive outcomes for additional reading instruction, pairing evidence-based reading instruction practices with the additional instructional time is important for achieving maximum results.
Figlio, D., Holden, K. L., & Ozek, U. (2018). Do students benefit from longer school days? Regression discontinuity evidence from Florida’s additional hour of literacy instruction. Economics of Education Review, 67, 171-183.
Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future.
A growing number of evidence-based psychotherapies hold the promise of substantial benefits for children, families, and society. For the benefits of evidence-based programs to be realized on a scale sufficient to be useful to individuals and society, evidence-based psychotherapies need to be put into practice outside of controlled clinical trials.
Fixsen, D. L., Blase, K. A., Duda, M. A., Naoom, S. F., & Van Dyke, M. (2010). Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future.
Implementation Research: A synthesis of literature
Over the past decade, the science related to developing and identifying "evidence-based practices and programs" has improved; however, the science related to implementing these programs with fidelity and good outcomes for consumers lags far behind. As a field, we have discovered that all the paper in file cabinets plus all the manuals on the shelves do not equal real-world transformation of human service systems through innovative practice.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., Wallace, F., Burns, B., ... & Shern, D. (2005). Implementation research: A synthesis of the literature.
Standards of evidence: Criteria for efficacy, effectiveness and dissemination
Ever-increasing demands for accountability, together with the proliferation of lists of evidence-based prevention programs and policies, led the Society for Prevention Research to charge a committee with establishing standards for identifying effective prevention programs and policies.
Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., ... & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention science, 6(3), 151-175.
Evidence-Based Assessment of Learning Disabilities in Children and Adolescents
The reliability and validity of 4 approaches to the assessment of children and adolescents with learning disabilities (LD) are reviewed. The authors identify serious psychometric problems that affect the reliability of models based on aptitude-achievement discrepancies and low achievement.
Fletcher, J. M., Francis, D. J., Morris, R. D., & Lyon, G. R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 506-522.
Researchers and teachers working together to adapt instruction for diverse learners
This paper explains a three-stage process of Pilot Research, Formal Evaluation, and Scaling Up, and then discusses several misconceptions about empirical research and researchers.
Fuchs, D., & Fuchs, L. S. (1998). Researchers and teachers working together to adapt instruction for diverse learners. Learning Disabilities Research & Practice.
A bridge too far? Challenges in evaluating principal effectiveness
This research has profound implications for states and districts implementing principal evaluation systems, particularly those making high-stakes decisions about principals based on statistical estimates of principal effectiveness. Indeed, such statistical estimates should be used not for making judgments or decisions about principals but rather as a screening tool to identify where states and districts should focus more in-depth and accurate strategies to evaluate principal effectiveness.
Fuller, E. J., & Hollingworth, L. (2014). A bridge too far? Challenges in evaluating principal effectiveness. Educational Administration Quarterly, 50(3), 466-499.
An Evidence-Based Review and Meta-Analysis of Active Supervision.
This paper synthesizes and evaluates 12 studies to calculate the effect size of Active Supervision on student conduct.
Gage, N. A., Haydon, T., MacSuga-Gage, A. S., Flowers, E., & Erdy, L. (2020). An Evidence-Based Review and Meta-Analysis of Active Supervision. Behavioral Disorders, 0198742919851021.
Educational research: An introduction
This text provides a comprehensive introduction to educational research. This textbook has been revised to reflect a balance of both quantitative and qualitative research methods
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction. Longman Publishing.
Sorting Out the Roles of Research in the Improvement of Practice
This paper discusses the effectiveness of research-based educational approaches in the improvement of practice.
Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities Research & Practice, 16(1), 45-50.
Designing high quality research in special education
This article discusses critical issues related to conducting high-quality intervention research using experimental and quasi-experimental group designs.
Gersten, R., Baker, S., & Lloyd, J. W. (2000). Designing high-quality research in special education: Group experimental design. The Journal of Special Education, 34(1), 2-18.
Factors enhancing sustained use of research-based instructional practices
This article reviews key findings from school-reform studies of the 1980s and explains their relevance to special education. It also highlights significant findings from more recent studies that help elucidate and flesh out the earlier findings.
Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of learning disabilities, 33(5), 445-456.
Beliefs about learning and enacted instructional practices: An investigation in postsecondary chemistry education
Using the teacher‐centered systemic reform model as a framework, the authors explore the connection between chemistry instructors’ beliefs about teaching and learning and self‐efficacy beliefs, and their enacted classroom practices.
Gibbons, R. E., Villafañe, S. M., Stains, M., Murphy, K. L., & Raker, J. R. (2018). Beliefs about learning and enacted instructional practices: An investigation in postsecondary chemistry education. Journal of Research in Science Teaching, 55(8), 1111-1133.
Sustaining Teacher Training in a Shifting Environment
This brief is one in a series aimed at providing K-12 education decision makers and advocates with an evidence base to ground discussions about how to best serve students during and following the novel coronavirus pandemic. Student teaching placements influence teacher effectiveness. If student teaching experiences are constrained by the pandemic, teacher candidates may lose valuable experiences and schools may lose the opportunity to shape and evaluate prospective hires.
Goldhaber, D., & Ronfeldt, M. (2020). Sustaining Teacher Training in a Shifting Environment. Brief No. 7. EdResearch for Recovery Project.
Policy Implications for Implementing Evidence-Based Practices
The authors describe the policy and administrative-practice implications of implementing evidence-based services, particularly in public-sector settings. They review the observations of the contributors to the evidence-based practices series published throughout 2001 in Psychiatric Services.
Goldman, H. H., Ganju, V., Drake, R. E., Gorman, P., Hogan, M., Hyde, P. S., & Morgan, O. (2001). Policy implications for implementing evidence-based practices. Psychiatric Services, 52(12), 1591-1597.
When Evidence-based Literacy Programs Fail.
This study examines the implementation of Leveled Literacy Intervention (LLI) for struggling readers that had been proven to work in early grades. The findings highlight the importance of considering context and implementation, in addition to evidence of effectiveness, when choosing an intervention program. Not only do schools need to adopt programs supported by evidence, but educators also need to implement them consistently and effectively if students are to truly benefit from an intervention.
Utility of intelligence tests for treatment planning, classification, and placement decisions: Recent empirical findings and future directions.
This article maintains that intelligence tests contribute little if any information useful for the planning, implementation, and evaluation of instructional interventions for children. This argument is supported by the virtual absence of empirical evidence supporting the existence of aptitude × treatment interactions.
Gresham, F. M., & Witt, J. C. (1997). Utility of intelligence tests for treatment planning, classification, and placement decisions: Recent empirical findings and future directions. School Psychology Quarterly, 12(3), 249.
Generalizability of multiple measures of treatment integrity: Comparisons among direct observations, permanent products, and self-report
The concept of treatment integrity is an essential component to data-based decision making within a response-to-intervention model. Although treatment integrity is a topic receiving increased attention in the school-based intervention literature, relatively few studies have been conducted regarding the technical adequacy of treatment integrity assessment methods.
Gresham, F. M., Dart, E. H., & Collins, T. A. (2017). Generalizability of Multiple Measures of Treatment Integrity: Comparisons Among Direct Observation, Permanent Products, and Self-Report. School Psychology Review, 46(1), 108-121.
Empirically supported interventions: Initiating a new standing section in School Psychology Quarterly.
In this issue you will find both a brief introduction to the new Empirically Supported Interventions Section and the first of a two-part substantive discussion of vital issues pertaining to this topic. A companion piece further extending this analysis will follow shortly in a subsequent issue.
Gutkin, T. B. (2000). Empirically supported interventions: Initiating a new standing section in School Psychology Quarterly. School Psychology Quarterly, 15(1), 1.
Users' guides to the medical literature: A manual for evidence-based clinical practice.
The manual offers not just a summary of the articles in JAMA but modified and expanded material. It clearly explains the principles of EBM and provides guidelines for accessing and evaluating scientific articles.
Guyatt, G., Rennie, D., Meade, M., & Cook, D. (Eds.). (2002). Users' guides to the medical literature: a manual for evidence-based clinical practice (Vol. 706). Chicago: AMA press.
Will the “principles of effectiveness” improve prevention practice? Early findings from a diffusion study
This study examines adoption and implementation of the US Department of Education's new policy, the "Principles of Effectiveness," from a diffusion of innovations theoretical framework. In this report, we evaluate adoption in relation to Principle 3: the requirement to select research-based programs.
Hallfors, D., & Godette, D. (2002). Will the “principles of effectiveness” improve prevention practice? Early findings from a diffusion study. Health Education Research, 17(4), 461–470.
At a loss for words: How a flawed idea is teaching millions of kids to be poor readers.
For decades, schools have taught children the strategies of struggling readers, using a theory about reading that cognitive scientists have repeatedly debunked. And many teachers and parents don't know there's anything wrong with it.
Hanford, E. (2019). At a loss for words: How a flawed idea is teaching millions of kids to be poor readers. APM Reports. https://www.apmreports.org/story/2019/08/22/whats-wrong-how-schools-teach-reading
Attention Deficit Hyperactivity Disorders and Classroom-Based Interventions: Evidence-Based Status, Effectiveness, and Moderators of Effects in Single-Case Design Research
To inform selection of evidence-based interventions to be implemented in classroom settings, the current systematic review with meta-analysis of single-case design studies was conducted to evaluate intervention effectiveness, evidence-based status, and moderators of effects for four intervention types when implemented with students with ADHD in classroom settings.
Harrison, J. R., Soares, D. A., Rudzinski, S., & Johnson, R. (2019). Attention Deficit Hyperactivity Disorders and Classroom-Based Interventions: Evidence-Based Status, Effectiveness, and Moderators of Effects in Single-Case Design Research. Review of Educational Research, 0034654319857038.
Teacher evaluation as a policy target for improved student learning: A fifty-state review of statute and regulatory action since NCLB
This paper reports on the analysis of state statutes and department of education regulations in fifty states for changes in teacher evaluation in use since the passage of No Child Left Behind Act of 2001.
Hazi, H. M., & Rucinski, D. A. (2009). Teacher evaluation as a policy target for improved student learning: A fifty-state review of statute and regulatory action since NCLB. Education Policy Analysis Archives, 17, 5.
Testing a longitudinal model of distributed leadership effects on school improvement
A central premise in the literature on leadership highlights its central role in organizational change. In light of the strength of this conceptual association, it is striking to note the paucity of large-scale empirical studies that have investigated how leadership impacts performance improvement in organizations over time. Indeed evidence-based conclusions concerning the impact of leadership on organizational change are drawn largely from case studies and cross-sectional surveys.
Heck, R. H., & Hallinger, P. (2010). Testing a longitudinal model of distributed leadership effects on school improvement. The leadership quarterly, 21(5), 867-885.
Psychometrics of direct observation
Direct observation plays an important role in the assessment practices of school psychologists and in the development of evidence-based practices in general and special education. The defining psychometric features of direct observation are presented, the contributions to assessment practice reviewed, and a specific proposal is offered for evaluating the psychometric merit of direct observation in both practitioner developed and commercial/research specific applications.
Hintze, J. M. (2005). Psychometrics of direct observation. School Psychology Review, 34(4), 507-519.
The Problem with "Proficiency": Limitations of Statistics and Policy Under No Child Left Behind
The Percentage of Proficient Students (PPS) has become a ubiquitous statistic under the No Child Left Behind Act. The author demonstrates that the PPS metric offers only limited and unrepresentative depictions of large-scale test score trends, gaps, and gap trends. The author shows how the statistical shortcomings of these depictions extend to shortcomings of policy, from exclusively encouraging score gains near the proficiency cut score to shortsighted comparisons of state and national testing results. The author proposes alternatives for large-scale score reporting and argues that a distribution-wide perspective on results is required for any serious analysis of test score data, including “growth”-related results under the recent Growth Model Pilot Program.
Ho, A. D. (2008). The problem with “proficiency”: Limitations of statistics and policy under No Child Left Behind. Educational researcher, 37(6), 351-360.
A profitable conjunction: From science to service in children's mental health
This outstanding textbook presents innovative interventions for youth with severe emotional and behavioral disorders. Community Treatment for Youth is designed to fill a gap between the knowledge base and clinical practice through its presentation of theory, practice parameters, training requirements, and research evidence.
Hoagwood, K., Burns, B. J., & Weisz, J. R. (2002). A profitable conjunction: From science to service in children's mental health. Community treatment for youth: Evidence-based interventions for severe emotional and behavioral disorders, 327-338.
School psychology: a public health framework I. From evidence-based practices to evidence-based policies
This report, preceded as it was by the seminal report of the Surgeon General on Mental Health (2000) and followed by the Surgeon General’s Youth Violence (2001) and Culture, Race and Ethnicity Reports (2002), represented a critical shift in federal health priorities.
Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework: I. From evidence-based practices to evidence-based policies. Journal of School Psychology, 41(1), 3-21.
Criteria for Evaluating Treatment Guidelines
This document presents a set of criteria to be used in evaluating treatment guidelines that have been promulgated by health care organizations, government agencies, professional associations, or other entities. The purpose of treatment guidelines is to educate health care professionals and health care systems about the most effective treatments available.
Hollon, D., Miller, I. J., & Robinson, E. (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57(12), 1052-1059.
The Use of Single-Subject Research to Identify Evidence-Based Practice in Special Education
This article helps readers determine whether a specific study is a credible example of single-subject research and whether a specific practice or procedure has been validated as "evidence-based" via single-subject research.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2008). The use of single-subject research to identify evidence-based practice in special education. Advances in Evidence-Based Education, 1(1), 67-87.
Examining the evidence base for school-wide positive behavior support.
The purposes of this manuscript are to propose core features that may apply to any practice or set of practices that proposes to be evidence-based in relation to School-wide Positive Behavior Support (SWPBS).
Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 1.
Is school-wide positive behavior support an evidence-based practice
The purpose of this document is to lay out the current evidence assessing SWPBIS and the considerations that may be relevant for state, district and national decision-makers.
Horner, R. H., Sugai, G., & Lewis, T. (2015). Is school-wide positive behavior support an evidence-based practice. Positive Behavioral Interventions and Supports.
Curriculum-based evaluation: Teaching and decision making
This book presents clear and functional techniques for deciding what students with learning disabilities should be taught and how. It can also serve as a tool to help pre-service teachers decide how and what to teach to general education students.
Howell, K. W. (1993). Curriculum-based evaluation: Teaching and decision making. Cengage Learning.
Accountability policies and teacher decision making: Barriers to the use of data to improve practice
This study examines longitudinal data from nine high schools nominated as leading practitioners of Continuous Improvement (CI) practices. The researchers compared continuous improvement best practices to teachers' actual use of data in making decisions. The study found teachers to be receptive but also found that significant obstacles interfered with the effective use of data to change instruction.
Ingram, D., Louis, K. S., & Schroeder, R. G. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106(6), 1258-1287.
Using office discipline referral data for decision making about student behavior in elementary and middle schools: An empirical evaluation of validity
This evaluation used Messick's construct validity as a conceptual framework for an empirical study assessing the validity of use, utility, and impact of office discipline referral (ODR) measures for data-based decision making about student behavior in schools.
Irvin, L. K., Horner, R. H., Ingram, K., Todd, A. W., Sugai, G., Sampson, N. K., & Boland, J. B. (2006). Using office discipline referral data for decision making about student behavior in elementary and middle schools: An empirical evaluation of validity. Journal of Positive Behavior Interventions, 8(1), 10-23.
The mirage: Confronting the hard truth about our quest for teacher development
This piece describes the widely held perception among education leaders that we already know how to help teachers improve, and that we could achieve our goal of great teaching in far more classrooms if we just applied what we know more widely.
Jacob, A., & McGovern, K. (2015). The mirage: Confronting the hard truth about our quest for teacher development. Brooklyn, NY: TNTP. https://tntp.org/assets/documents/TNTP-Mirage_2015.pdf.
Effectiveness and implementation of evidence-based practices in residential care settings
Evidence-based psychosocial interventions and respective outcome studies, published from 1990 to 2012, were identified through a multi-phase search process, involving the review of four major clearinghouse websites and relevant electronic databases. To be included, effectiveness had to have been previously established through a comparison group design regardless of the setting, and interventions tested subsequently with youth in RCS.
James, S., Alemi, Q., & Zepeda, V. (2013). Effectiveness and implementation of evidence-based practices in residential care settings. Children and Youth Services Review, 35(4), 642-656.
Toward a Science of Education: The Battle Between Rogue and Real Science
This book summarizes how science works, why it offers hope to educators, how science has been neglected and abused in education, and what the author thinks science now tells us, and does not tell us, about several issues in education.
Kauffman, J. M. (2011). Toward a science of education: The battle between rogue and real science. Full Court Press.
Comparative outcome studies of psychotherapy: Methodological issues and strategies.
Considers design issues and strategies raised by comparative outcome studies, including the conceptualization, implementation, and evaluation of alternative treatments; assessment of treatment-specific processes and outcomes; and evaluation of the results. It is argued that addressing these and other issues may increase the yield from comparative outcome studies and may attenuate controversies regarding the adequacy of the demonstrations.
Kazdin, A. E. (1986). Comparative outcome studies of psychotherapy: Methodological issues and strategies. Journal of Consulting and Clinical Psychology, 54(1), 95.
Psychotherapy for Children and Adolescents: Directions for Research and Practice
By focusing on clinical practice and what can be changed, this book offers suggestions for improvement of patient care and advises how clinical work can contribute directly and in new ways to the accumulation of knowledge.
Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. Oxford University Press.
Evidence-Based Treatments: Challenges and Priorities for Practice and Research
This article discusses key issues in identifying evidence-based treatments for children and adolescents. Among the issues discussed are obstacles in transporting treatments from research to clinical services, the weak criteria for delineating whether a treatment is evidence based, and barriers to training therapists.
Kazdin, A. E. (2004). Evidence-based treatments: Challenges and priorities for practice and research. Child and Adolescent Psychiatric Clinics, 13(4), 923-940.
Single-Case Designs for Educational Research
This book provides up-to-date, in-depth information about the use of single-case experimental designs in educational research across a range of educational settings and students.
Kennedy, C. H. (2005). Single-case designs for educational research. Pearson/A & B.
Sustaining Research-Based Practices in Reading: A 3-Year Follow Up
This study examined the extent to which the reading instructional practices learned by a cohort of teachers who participated in an intensive, yearlong professional development experience during the 1994-1995 school year have been sustained and modified over time.
Klingner, J. K., Vaughn, S., Tejero Hughes, M., & Arguelles, M. E. (1999). Sustaining research-based practices in reading: A 3-year follow-up. Remedial and Special Education, 20(5), 263-287.
Knowledge and public policy: The search for meaningful indicators
This book addresses the question of what it takes to develop social indicators that genuinely influence important public decisions.
Dubious “Mozart effect” remains music to many Americans’ ears.
Scientists have discredited claims that listening to classical music enhances intelligence, yet this so-called "Mozart Effect" has actually exploded in popularity over the years.
Krakovsky, M. (2005). Dubious “Mozart effect” remains music to many Americans’ ears. Stanford, CA: Stanford Report
Evidence-based practice: Promoting evidence-based interventions in school psychology
This report presents an overview of issues related to evidence-based practice and the role that the school psychology profession can play in developing and disseminating evidence-based interventions.
Kratochwill, T. R., & Shernoff, E. S. (2003). Evidence-based practice: Promoting evidence-based interventions in school psychology. School Psychology Quarterly, 18(4), 389.
Diversifying Theory and Science: Expanding the Boundaries of Empirically Supported Interventions in School Psychology
The task force on interventions of the American Psychological Association (APA Task Force on Promotion and Dissemination of Psychological Procedures, 1995) stimulated considerable enthusiasm among many about the role of empirically supported interventions (ESIs) in practice.
Kratochwill, T. R., & Stoiber, K. C. (2000). Diversifying theory and science: Expanding the boundaries of empirically supported interventions in school psychology. Journal of School Psychology, 38(4), 349-358.
Evidence-based interventions in school psychology: Conceptual foundations of the Procedural and Coding Manual of Division 16 and the Society for the Study of School Psychology Task Force.
The authors present the conceptual, philosophical, and methodological basis for the Procedural and Coding Manual for Review of Evidence-Based Interventions.
Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: Conceptual foundations of the Procedural and Coding Manual of Division 16 and the Society for the Study of School Psychology Task Force. School Psychology Quarterly, 17(4), 341.
Using Coaching to Improve the Fidelity of Evidence-Based Practices: A Review of Studies
The authors conducted a comprehensive review of research to identify the impact of coaching on changes in preservice and in-service teachers’ implementation of evidence-based practices.
Kretlow, A. G., & Bartholomew, C. C. (2010). Using coaching to improve the fidelity of evidence-based practices: A review of studies. Teacher Education and Special Education, 33(4), 279-299.
Moneyball for Higher Education: How Federal Leaders Can Use Data and Evidence to Improve Student Outcomes
This report recommends thirteen specific steps the federal government can take to develop new methods to define and measure such outcomes, use federal resources to build and apply evidence of what works, and help colleges and universities invest in student outcomes.
Science in the schoolhouse: An uninvited guest.
In this discussion, we examine the relationship between science and education and delineate four reasons for characterizing science as an uninvited guest in schools.
Landrum, T. J., & Tankersley, M. (2004). Science in the schoolhouse: An uninvited guest. Journal of Learning Disabilities, 37(3), 207-212.
Scientific Formative Evaluation: The Role of Individual Learners in Generating and Predicting Successful Educational Outcomes
What does it mean to take a scientific approach to instructional productivity? This chapter aims to contribute to that discussion by examining the role scientific assessment can play in enhancing educational productivity.
Layng, T. J., Stikeleather, G., & Twyman, J. S. (2006). Scientific formative evaluation: The role of individual learners in generating and predicting successful educational outcomes. The scientific basis of educational productivity, 29-44.
Leading school turnaround: How successful school leaders transform low-performing schools
Leading School Turnaround offers new perspectives and concrete, evidence-based guidelines for the educational leaders and administrators faced with the challenge of turning our low-performing schools around. Using the tools outlined in this groundbreaking book, school leaders can guide their schools to higher levels of achievement and sustained academic success.
Leithwood, K., Harris, A., & Strauss, T. (2010). Leading school turnaround: How successful leaders transform low-performing schools. John Wiley & Sons.
The Reading Wars
An old disagreement over how to teach children to read -- whole-language versus phonics -- has re-emerged in California, in a new form. Previously confined largely to education, the dispute is now a full-fledged political issue there, and is likely to become one in other states.
Lemann, N. (1997). The reading wars. The Atlantic Monthly, 280(5), 128–133.
Evidence-Based Practices in a Changing World: Reconsidering the Counterfactual in Educational Research.
Populations and study samples can change over time—sometimes dramatically so. We illustrate this important point by presenting data from 5 randomized control trials of the efficacy of Kindergarten Peer-Assisted Learning Strategies, a supplemental, peer-mediated reading program.
Lemons, C. J., Fuchs, D., Gilbert, J. K., & Fuchs, L. S. (2014). Evidence-based practices in a changing world: Reconsidering the counterfactual in education research. Educational Researcher, 43(5), 242-252.
Educational/Psychological Intervention Research Circa 2012
The chapter focuses on the historically perceived poor methodological rigor and low scientific credibility of most educational/psychological intervention research.
Levin, J. R., & Kratochwill, T. R. (2012). Educational/psychological intervention research circa 2012. Handbook of Psychology, Second Edition, 7.
Dear Colleague Letter: Resource Comparability
This Dear Colleague letter, written by the U.S. Department of Education, calls attention to disparities that persist in access to educational resources and is meant to help schools address those disparities and comply with their legal obligation, under Title VI of the Civil Rights Act of 1964, to provide students with equal access to these resources without regard to race, color, or national origin. The letter builds on the Department's prior work on this critical topic.
Lhamon, C. E. (2014). Dear colleague letter: Resource comparability. Washington, DC: US Department of Education, Office for Civil Rights. http://www2.ed.gov/about/offices/list/ocr/letters/colleague-resourcecomp-201410.pdf
Practical Meta-Analysis
The authors lay out each step of meta-analysis from problem formulation through statistical analysis and the interpretation of results.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Sage Publications, Inc.
Building a practically useful theory of goal setting and task motivation: A 35-year odyssey
The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed.
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57(9), 705.
No, The Sky is Not Falling: Interpreting the Latest SAT Scores
The College Board recently released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Petrilli argued that falling SAT scores show that high schools need more reform. For Burris, the declining scores were evidence of the failure of policies her organization opposes. This article points out that the SAT was never meant to measure national achievement and provides a detailed explanation.
Rigorous Evidence: Key to Progress Against World Poverty?
The Millennium Challenge Corporation (MCC) is sponsoring rigorous independent evaluations of its funded projects to build scientifically valid evidence about "what works." On October 29, the nonprofit, nonpartisan Coalition for Evidence-Based Policy, in collaboration with MCC, hosted a forum with leaders of the development policy and research community on MCC's evidence-based approach.
Lyon, R. L. (2002, November). Rigorous evidence: The key to progress in education. In forum of the Coalition for Evidence Based Policy, Washington, DC.
Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions: A Meta-Analysis
This meta-analysis covers all major domains in which deliberate practice has been investigated in search of empirical evidence. The authors conclude that deliberate practice is important, but not as important as has been argued.
Macnamara, B. N., Hambrick, D. Z., & Oswald, F. L. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, 25(8), 1608-1618.
Practice and Research in Special Education
This article considers possible reasons that research knowledge is not used more extensively in special education practice and suggests issues to be addressed in solving this problem.
Malouf, D. B., & Schiller, E. P. (1995). Practice and research in special education. Exceptional Children, 61(5), 414-424.
Identifying effective treatments from a brief experimental analysis: Using single-case design elements to aid decision making.
This paper discusses the benefits of using brief experimental analyses to aid in treatment selection, identifies the forms of treatment that are most appropriate for this type of analysis, and describes key design elements for comparing 2 or more treatments efficiently.
Martens, B. K., Eckert, T. L., Bradley, T. A., & Ardoin, S. P. (1999). Identifying effective treatments from a brief experimental analysis: Using single-case design elements to aid decision making. School Psychology Quarterly, 14(2), 163.
Classroom management that works: Research-based strategies for every teacher
How does classroom management affect student achievement? What techniques do teachers find most effective? How important are schoolwide policies and practices in setting the tone for individual classroom management? In this follow-up to What Works in Schools, Robert J. Marzano analyzes research from more than 100 studies on classroom management to discover the answers to these questions and more. He then applies these findings to a series of "Action Steps," specific strategies.
Marzano, R. J., Marzano, J. S., & Pickering, D. (2003). Classroom management that works: Research-based strategies for every teacher. Alexandria, VA: Association for Supervision and Curriculum Development (ASCD).
How to reverse the assault on science.
We should stop being so embarrassed by uncertainty and embrace it as a strength rather than a weakness of scientific reasoning
McIntyre, L. (2019, May 22). How to reverse the assault on science. Scientific American. https://blogs.scientificamerican.com/observations/how-to-reverse-the-assault-on-science1/
Teacher Preparation Programs: Research and Promising Practices
This paper reports evidence-based research and offers suggestions based on studies that include theoretical work, qualitative analysis, statistical analysis, and randomized experiments that could provide strong causal evidence of the effects of teacher preparation on student learning.
Evidence-based Pharmacologic Treatment for People with Severe Mental Illness: a focus on guidelines and algorithms.
In this article we discuss guidelines and algorithms as a means of addressing the complexity of pharmacologic treatment of people with severe mental illnesses and disseminating relevant research findings.
Mellman, T. A., Miller, A. L., Weissman, E. M., Crismon, M. L., Essock, S. M., & Marder, S. R. (2001). Evidence-based pharmacologic treatment for people with severe mental illness: a focus on guidelines and algorithms. Psychiatric Services, 52(5), 619-625.
Getting intentional about principal evaluations
Under new frameworks, districts have better aligned their evaluations with their school-leadership standards and developed nuanced rubrics for evidence-collection and evaluation ratings. They have also altered the role of principal supervisors so that they spend more time in schools working with principals.
Mendels, P. (2017). Getting Intentional about Principal Evaluations. Educational Leadership, 74(8), 52-56.
Early intervention in reading: From research to practice
This study documents the implementation of research-based strategies to minimize the occurrence of reading difficulties in a first-grade population. Three strategies were implemented.
Menzies, H. M., Mahdavi, J. N., & Lewis, J. L. (2008). Early intervention in reading: From research to practice. Remedial and Special Education, 29(2), 67-77.
Leveling the Playing Field: Creating Funding Equity Through Student-Based Budgeting
The authors trace the district's process of moving to a system of student-based budgeting: funding children rather than staff members and weighting the funding according to schools' and students' needs.
Miles, K. H., Ware, K., & Roza, M. (2003). Leveling the playing field: Creating funding equity through student-based budgeting. Phi Delta Kappan, 85(2), 114-119.
A Cost Allocation Model for Shared District Resources: A Means for Comparing Spending Across Schools
This paper addresses one key driver of spending variation between schools: shared district resources.
Miller, L. J., Roza, M., & Swartz, C. (2004). A cost allocation model for shared district resources: A means for comparing spending across schools. Developments in school finance, 69.
Whole language lives on: The illusion of “balanced” reading instruction.
This position paper contends that the whole language approach to reading instruction has been disproved by research and evaluation but still pervades textbooks for teachers, instructional materials for classroom use, some states' language-arts standards and other policy documents, teacher licensing requirements and preparation programs, and the professional context in which teachers work.
Moats, L. C. (2000). Whole language lives on: The illusion of “balanced” reading instruction. Washington, DC: DIANE Publishing.
A Nation at Risk: The Full Account
This report is concerned with only one of the many causes and dimensions of the problem, but it is the one that undergirds American prosperity, security, and civility.
National Commission on Excellence in Education. (1984). A nation at risk: The full account. Cambridge, MA: USA Research.
Scientific Research in Education
This book describes the similarities and differences between scientific inquiry in education and scientific inquiry in other fields and disciplines and provides a number of examples to illustrate these ideas.
National Research Council. (2002). Scientific research in education. National Academies Press.
Advancing Scientific Research in Education
The Center for Education of the National Research Council (NRC) has undertaken a series of activities to address issues related to the quality of scientific education research. In 2002, the NRC released Scientific Research in Education (National Research Council, 2002), a report designed to articulate the nature of scientific education research and to guide efforts aimed at improving its quality.
National Research Council. (2004). Advancing scientific research in education. National Academies Press.
Making sense of implementation theories, models and frameworks.
The chapter describes five categories of theoretical approaches that achieve three overarching aims: process models, which are aimed at describing and/or guiding the process of translating research into practice; determinant frameworks, classic theories and implementation theories, which are aimed at understanding and/or explaining what influences implementation outcomes; and evaluation frameworks, which are aimed at evaluating implementation. Awareness of how the approaches differ is important to facilitate the selection of relevant approaches.
Nilsen, P. (2020). Making sense of implementation theories, models, and frameworks. In Implementation Science 3.0 (pp. 53-79). Springer, Cham.
Equality and Quality in U.S. Education: Systemic Problems, Systemic Solutions. Policy Brief
This paper enters the debate about how U.S. schools might address long-standing disparities in educational and economic opportunities while improving the educational outcomes for all students, offering a vision and an argument for realizing that vision based on lessons learned from 60 years of education research and reform efforts. The central points draw on a much more extensive treatment of these issues published in 2015. The aim is to spark fruitful discussion among educators, policymakers, and researchers.
O'Day, J. A., & Smith, M. S. (2016). Equality and Quality in US Education: Systemic Problems, Systemic Solutions. Policy Brief. Education Policy Center at American Institutes for Research.
The tie that binds: Evidence-based practice, implementation science, and outcomes for children.
In this article, implementation is proposed as the link between evidence-based practices and positive outcomes. Strategies for promoting implementation through “enlightened professional development” are proposed.
Odom, S. L. (2009). The tie that binds: Evidence-based practice, implementation science, and outcomes for children. Topics in Early Childhood Special Education, 29(1), 53-61.
Evidence-Based Practice in Early Intervention/Early Childhood Special Education: Single-Subject Design Research
The purpose of this study was to examine the strength of scientific evidence from single-subject research underlying the Division of Early Childhood (DEC) Recommended Practices.
Odom, S. L., & Strain, P. S. (2002). Evidence-based practice in early intervention/early childhood special education: Single-subject design research. Journal of Early Intervention, 25(2), 151-160.
Research in special education: Scientific methods and evidence-based practices
This article sets the context for the development of research quality indicators and guidelines for evidence of effective practices provided by different methodologies.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71(2), 137-148.
Implementation of parent management training at the national level: The case of Norway
This article describes early aspects of the nationwide implementation of an evidence‐based program (EBP) in Norway and the design for studying program fidelity over time.
Ogden, T., Forgatch, M. S., Askeland, E., Patterson, G. R., & Bullock, B. M. (2005). Implementation of parent management training at the national level: The case of Norway. Journal of Social Work Practice, 19(3), 317-329.
Why Trust Science?
Naomi Oreskes offers a bold and compelling defense of science, revealing why the social character of scientific knowledge is its greatest strength, and the greatest reason we can trust it.
Oreskes, N. (2019). Why trust science? Princeton, NJ: Princeton University Press.
Extending the school day or school year: A systematic review of research
The school year and day length have varied over time and across localities depending on the particular needs of the community. Proponents argue that extending time will have learning and nonacademic benefits. Opponents suggest that increased time is not guaranteed to lead to more effective instruction and point to other costs.
Patall, E. A., Cooper, H., & Allen, A. B. (2010). Extending the school day or school year: A systematic review of research (1985–2009). Review of Educational Research, 80(3), 401-436.
Foundations of special education: basic knowledge informing research and practice in special education
This enlightening book contains papers (presented as chapters) commissioned from nationally recognized scholars, which examine topics related to ethics, culture, science, and philosophy that have a direct bearing on the future of special education.
Paul, J. L. (1997). Foundations of special education: Basic knowledge informing research and practice in special education. Pacific Grove: Brooks.
Evidence-Based Policies in Education: Initiatives and Challenges in Europe
This article examines the state of progress of evidence-based educational policies in Europe and identifies organizations for the generation and dissemination of evidence. Further, it discusses some of the most relevant challenges facing the development of evidence-informed education policies in Europe.
Pellegrini, M., & Vivanet, G. (2020). Evidence-based policies in education: Initiatives and challenges in Europe. ECNU Review of Education, 2096531120924670.
Why practicing psychologists are slow to adopt empirically-validated treatments.
This chapter examines why practicing psychologists are slow to adopt empirically validated treatments. A discussion of the chapter, entitled "Dissemination of What, and to Whom?" by B. S. Kohlenberg, follows it.
Persons, J. B. (1995). Why practicing psychologists are slow to adopt empirically-validated treatments. In S. C. Hayes, V. M. Follette, R. M. Dawes, & K. E. Grady (Eds.), Scientific standards of psychological practice: Issues and recommendations (pp. 141-157). Reno, NV, US: Context Press
Handbook of Psychology: Educational psychology
This award-winning twelve-volume reference covers every aspect of the ever-fascinating discipline of psychology and represents the most current knowledge in the field. This ten-year revision now covers discoveries based in neuroscience, clinical psychology's new interest in evidence-based practice and mindfulness, and new findings in social, developmental, and forensic psychology.
Pianta, R. C., Hamre, B., Stuhlman, M., Reynolds, W. M., & Miller, G. E. (2003). Handbook of psychology: Educational psychology.
The effects of active participation on student learning.
The effects of active participation on fifth-grade students' learning of simple probability were investigated using 20 fifth-grade classes randomly assigned to level of treatment. It was concluded that active student participation exerts a positive influence on fifth-grade student achievement of relatively unique instructional material.
Pratton, J., & Hales, L. W. (1986). The effects of active participation on student learning. The Journal of Educational Research, 79(4), 210-215.
Report Urges Educators to Avoid Using International Test to Make Policy
This article suggests that policymakers focus less on international tests and more on how states compare with each other when trying to improve schools. It also shows why it is not worthwhile to compare schools across countries where conditions are different.
Music and spatial task performance.
This research paper reports on testing the hypothesis that music and spatial task performance are causally related. Two complementary studies are presented that replicate and explore previous findings.
Rauscher, F. H., Shaw, G. L., & Ky, C. N. (1993). Music and spatial task performance. Nature, 365(6447), 611.
Practical statistics for educators.
The focus of the book is on essential concepts in educational statistics, understanding when to use various statistical tests, and how to interpret results. This book introduces education students and practitioners to the use of statistics in education, and basic concepts in statistics are explained in clear language.
Ravid, R. (2019). Practical statistics for educators. Rowman & Littlefield Publishers.
New evidence on the frequency of teacher turnover: Accounting for within-year turnover.
Teacher turnover occurs during and at the end of the school year, although documentation of within-year turnover currently rests on anecdotal evidence.
Redding, C., & Henry, G. T. (2018). New evidence on the frequency of teacher turnover: Accounting for within-year turnover. Educational Researcher, 47(9), 577-593.
Race Gaps in SAT Scores Highlight Inequality and Hinder Upward Mobility
In this paper, we analyze racial differences in the math section of the general SAT test, using publicly available College Board population data for all of the nearly 1.7 million college-bound seniors in 2015 who took the SAT. The evidence for a stubborn race gap on this test provides a snapshot of the extraordinary magnitude of racial inequality in contemporary American society. Standardized tests are often seen as mechanisms for meritocracy, ensuring fairness in terms of access. But test scores reflect accumulated advantages and disadvantages in each day of life up to the one on which the test is taken. Race gaps on the SAT hold up a mirror to racial inequities in society as a whole. Equalizing educational opportunities and human capital acquisition earlier is the only way to ensure fairer outcomes.
What is a conflict of interest?
This page describes conflicts of interest and what should be done about them.
Resources for Research Ethics Education. (2001). What is a conflict of interest? San Diego, CA: University of California, San Diego. http://research-ethics.org/topics/conflicts-of-interest/
How are they now? Longer term effects of eCoaching through online bug-in-ear technology.
In this study, using mixed methods, we investigated the longer term effects of eCoaching through advanced online bug-in-ear (BIE) technology.
Rock, M. L., Schumacker, R. E., Gregg, M., Howard, P. W., Gable, R. A., & Zigmond, N. (2014). How are they now? Longer term effects of eCoaching through online bug-in-ear technology. Teacher Education and Special Education, 37(2), 161-181.
Empirically supported comprehensive treatments for young children with autism
The criteria for empirically supported treatments, as described by Lonigan, Elbert, and Johnson (this issue), were applied to reports of eight treatment efficacy studies published in peer-reviewed journals.
Rogers, S. J. (1998). Empirically supported comprehensive treatments for young children with autism. Journal of Clinical Child Psychology, 27(2), 168-179.
Conflicts of interest in research: Looking out for number one means keeping the primary interest front and center
This review will briefly address the nature of conflicts of interest in research, including the importance of both financial and non-financial conflicts, and the potential effectiveness and limits of various strategies for managing such conflicts.
Romain, P. L. (2015). Conflicts of interest in research: Looking out for number one means keeping the primary interest front and center. Current Reviews in Musculoskeletal Medicine, 8(2), 122–127.
Psychology should list empirically supported principles of change (ESPs) and not credential trademarked therapies or other treatment packages
Current systems for listing empirically supported therapies (ESTs) provide recognition to treatment packages, many of them proprietary and trademarked, without regard to the principles of change believed to account for their effectiveness.
Rosen, G. M., & Davison, G. C. (2003). Psychology should list empirically supported principles of change (ESPs) and not credential trademarked therapies or other treatment packages. Behavior Modification, 27(3), 300-312.
Evaluating problem-solving teams in K–12 schools: Do they work?
Teams and other collaborative structures have become commonplace in American schools, although historically school staff members functioned more independently from one another. In this article, we describe the growing influence of collaboration and teaming in a variety of school contexts, but focus on the empirical literature on problem-solving teams as reflecting the state of research and practice in the schools.
Rosenfield, S., Newell, M., Zwolski Jr, S., & Benishek, L. E. (2018). Evaluating problem-solving teams in K–12 schools: Do they work? American Psychologist, 73(4), 407.
A Systematic Review of Teacher-Delivered Behavior-Specific Praise on K–12 Student Performance
The authors conducted a systematic literature review to explore this low-intensity, teacher-delivered strategy, behavior-specific praise (BSP), applying Council for Exceptional Children (CEC) quality indicators and standards to determine whether BSP can be considered an evidence-based practice (EBP).
Royer, D. J., Lane, K. L., Dunlap, K. D., & Ennis, R. P. (2019). A systematic review of teacher-delivered behavior-specific praise on K–12 student performance. Remedial and Special Education, 40(2), 112-128.
Preventing Dropout in Secondary Schools
This What Works Clearinghouse practice guide provides educators and administrators with four evidence-based recommendations for reducing dropout rates in middle and high schools. The guide offers specific strategies; examples of how to implement the practices; advice on how to overcome obstacles; and a summary of the supporting evidence.
Rumberger, R. W., et al. (2017). Educator’s Practice Guide: Preventing Dropout in Secondary School. IES National Center for Education and Evaluation and Regional Assistance.
The Demon-Haunted World: Science as a Candle in the Dark
Casting a wide net through history and culture, Sagan examines and authoritatively debunks such celebrated fallacies of the past as witchcraft, faith healing, demons, and UFOs. And yet, disturbingly, in today's so-called information age, pseudoscience is burgeoning with stories of alien abduction, channeling past lives, and communal hallucinations commanding growing attention and respect.
Sagan, C. (2011). The demon-haunted world: Science as a candle in the dark. Ballantine Books.
The purpose and practices of leadership assessment as perceived by select public middle and elementary school principals in the Midwest
The purpose of this study was to explore the purpose and practices of leadership evaluation as perceived by principals. The researcher wanted to identify the perceived purposes and practices of leadership evaluation as described by nine public school principals, and to respond to the apparent need expressed by administrators to receive substantive feedback.
Sanders, K. (2008). The purpose and practices of leadership assessment as perceived by select public middle and elementary school principals in the Midwest. Aurora University.
Supporting successful interventions in schools: Tools to plan, evaluate, and sustain effective implementation
Evidence-based interventions benefit learners only when they are implemented fully. Yet many educators struggle with successful implementation. Step-by-step procedures are presented for assessing existing implementation efforts and using a menu of support strategies to promote intervention fidelity.
Sanetti, L. M. H., & Collier-Meek, M. A. (2019). Supporting successful interventions in schools: Tools to plan, evaluate, and sustain effective implementation. Guilford Publications.
Fidelity of implementation in the field of learning disabilities
Decades of research and billions of dollars have been spent to develop and evaluate evidence-based interventions and develop multitiered systems of support (MTSS) toward the goal of more effectively delivering interventions and improving student outcomes. Available evidence, however, suggests interventions are often adopted slowly and delivered with poor fidelity, resulting in uninspiring outcomes for students.
Sanetti, L. M. H., & Luh, H. J. (2019). Fidelity of implementation in the field of learning disabilities. Learning Disability Quarterly, 42(4), 204-216.
Treatment integrity of interventions with children in School Psychology International from 1995–2010
Over the past two decades, the role of school psychologists internationally has shifted from a more narrow focus on assessment to a broader emphasis on problem solving and delivering intervention services via consultation. Defining interventions is important for replication and translation of practice. Further, to make valid, data-based decisions about intervention effectiveness, school psychologists need to consider student outcomes in light of treatment integrity data.
Sanetti, L. M. H., Dobey, L. M., & Gallucci, J. (2014). Treatment integrity of interventions with children in School Psychology International from 1995–2010. School Psychology International, 35(4), 370-383.
The role of performance feedback and implementation of evidence-based practices for preservice special education teachers and student outcomes: A review of the literature
Given the importance of evidence-based practices (EBPs) for improving outcomes for students with disabilities, it is key that preservice special education teachers have the opportunity to implement EBPs with high levels of fidelity during their teacher preparation program. For this reason, the authors conducted a systematic review of the literature to answer the question: Does providing performance feedback improve preservice special education teachers’ fidelity of implementation of EBPs and outcomes for students with disabilities?
Schles, R. A., & Robertson, R. E. (2019). The role of performance feedback and implementation of evidence-based practices for preservice special education teachers and student outcomes: A review of the literature. Teacher Education and Special Education, 42(1), 36-48.
Toward Effective Quality Assurance in Evidence-Based Practice: Links Between Expert Consultation, Therapist Fidelity, and Child Outcomes
This study validated a measure of expert clinical consultation and examined the association between consultation, therapist adherence, and youth outcomes in community-based settings.
Schoenwald, S. K., Sheidow, A. J., & Letourneau, E. J. (2004). Toward effective quality assurance in evidence-based practice: Links between expert consultation, therapist fidelity, and child outcomes. Journal of Clinical Child and Adolescent Psychology, 33(1), 94-104.
School Interventions That Work: Targeted Support for Low-Performing Students
This report breaks out key steps in the school identification and improvement process, focusing on (1) a diagnosis of school needs; (2) a plan to improve schools; and (3) evidence-based interventions that work.
Evidence-based interventions in school psychology: An illustration of Task Force coding criteria using single-participant research design.
This paper illustrates the application of the Task Force on Evidence-Based Interventions in School Psychology coding criteria using a single-participant research design study.
Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: An illustration of Task Force coding criteria using single-participant research design. School Psychology Quarterly, 17(4), 390.
Training in Evidence-Based Interventions (EBIs): What are school psychology programs teaching?
This study examined the degree to which school psychology programs provided training in Evidence-Based Interventions (EBIs), the contextual factors that interfere with EBI training, and whether students are taught to apply the criteria developed by Divisions 12, 16, and 53 of the APA when evaluating outcome research.
Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2003). Training in Evidence-Based Interventions (EBIs): What are school psychology programs teaching? Journal of School Psychology, 41(6), 467-483.
Curriculum-based Measurement: Assessing Special Children
Curriculum-Based Measurement and Special Services for Children is a concise and convenient guide to CBM that demonstrates why it is a valuable assessment procedure, and how it can be effectively utilized by school professionals.
Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. Guilford Press.
Advanced Applications of Curriculum-based Measurement
Developed specifically to overcome problems with traditional standardized instruments--and widely used in both general and special education settings throughout the US--curriculum-based measurement (CBM) comprises brief assessment probes of reading, spelling, written expression, and mathematics that serve both to quantify student performance and to bolster academic achievement.
Shinn, M. R. (Ed.). (1998). Advanced applications of curriculum-based measurement. Guilford Press.
Noncategorical special education services with students with severe achievement deficits
The purpose of this chapter is to explain why categorical assessment and identification of students with severe achievement needs is indefensible, and then to provide a viable alternative that expedites educators' assessment and decision-making process when they are confronted with such students.
Shinn, M., Good, R., & Parker, C. (1998). Noncategorical special education services with students with severe achievement deficits. Functional and noncategorical identification and intervention in special education, 65-83.
Roles and responsibilities of researchers and practitioners for translating research to practice
This paper outlines the best practices for researchers and practitioners translating research to practice as well as recommendations for improving the process.
Shriver, M. D. (2007). Roles and responsibilities of researchers and practitioners for translating research to practice. Journal of Evidence-Based Practices for Schools, 8(1), 1-30.
Bridging the great divide: Linking research to practice in scholarly publications
This article defines what constitutes a research-to-practice study and differentiates the terms research to practice and evidence-based practice.
Shriver, M. D., & Watson, T. S. (2005). Bridging the great divide: Linking research to practice in scholarly publications. Journal of Evidence-Based Practices for Schools.
The Shame of American Education
Recent analyses of American schools and proposals for school reform have missed an essential point: Most current problems could be solved if students learned twice as much in the same time and with the same effort.
Skinner, B. F. (1984). The shame of American education. American Psychologist, 39(9), 947.
Replication
Replication has taken on more importance recently because the ESSA evidence standards only require a single positive study. To meet the strong, moderate, or promising standards, programs must have at least one “well-designed and well-implemented” study using randomized (strong), matched (moderate), or correlational (promising) designs and finding significantly positive outcomes.
Slavin, R. (2019). Replication. [Blog post]. Retrieved from https://robertslavinsblog.wordpress.com/2019/01/24/replication/
Effective programs in elementary mathematics: A best-evidence synthesis
This article reviews research on the achievement outcomes of three types of approaches to improving elementary mathematics: mathematics curricula, computer-assisted instruction (CAI), and instructional process programs.
Slavin, R. E., & Lake, C. (2008). Effective programs in elementary mathematics: A best-evidence synthesis. Review of Educational Research, 78(3), 427-515.
How could evidence-based reform advance education?
This article presents a definition and rationale for evidence-based reform in education, and a discussion of the current state of evidence-based research, focusing on China, the U.S., and the UK. The article suggests ways in which Chinese, U.S., UK, and other scholars might improve the worldwide quality of evidence-based reform in education.
Slavin, R. E., Cheung, A. C., & Zhuang, T. (2021). How could evidence-based reform advance education? ECNU Review of Education, 4(1), 7-24.
Barriers to the Preparation of Highly Qualified Teachers in Reading. TQ Research & Policy Brief.
This brief identifies three prominent points of impact in addressing the poor performance of America's fourth-graders on national examinations of reading proficiency.
Smartt, S. M., & Reschly, D. J. (2007). Barriers to the Preparation of Highly Qualified Teachers in Reading. TQ Research & Policy Brief. National Comprehensive Center for Teacher Quality.
Denialism: How Irrational Thinking Hinders Scientific Progress, Harms the Planet, and Threatens Our Lives
In this provocative and headline-making book, Michael Specter confronts the widespread fear of science and its terrible toll on individuals and the planet.
Specter, M. (2010). Denialism: How irrational thinking hinders scientific progress, harms the planet, and threatens our lives. Penguin Books.
Instruction of Students with Severe Disabilities, 7th Edition
Comprehensively succinct and advanced in its scope, this widely adopted text addresses the full-range of curriculum and instructional topics involved in educating individuals with moderate, severe, and multiple disabilities.
Snell, M. E., & Brown, F. E. (2011). Instruction of Students with Severe Disabilities: Pearson New International Edition. Pearson Higher Ed.
Myths and Misconceptions about Teaching: What Really Happens in the Classroom.
In this book the author describes six teaching myths that prevent reform in education.
Snider, V. (2006). Myths and Misconceptions about Teaching: What Really Happens in the Classroom. Rowman & Littlefield.
Digest of Education Statistics 2010
The 2010 edition of the Digest of Education Statistics is the 46th in a series of publications initiated in 1962. The Digest includes a selection of data from many sources, both government and private, and draws especially on the results of surveys and activities carried out by the National Center for Education Statistics (NCES).
Snyder, T. D., & Dillow, S. A. (2010). Digest of Education Statistics 2010. Washington, DC: U.S. Department of Education. Retrieved from http://nces.ed.gov/pubs2011/2011015.pdf
Generalizability and Decision Studies of a Treatment Adherence Instrument.
Observational measurement of treatment adherence has long been considered the gold standard. However, little is known about either the generalizability of the scores from extant observational instruments or the sampling needed. Results suggested that reliable cognitive–behavioral therapy adherence studies require at least 10 sessions per patient, assuming 12 patients per therapist and two coders—a challenging threshold even in well-funded research. Implications, including the importance of evaluating alternatives to observational measurement, are discussed.
Southam-Gerow, M. A., Bonifay, W., McLeod, B. D., Cox, J. R., Violante, S., Kendall, P. C., & Weisz, J. R. (2020). Generalizability and decision studies of a treatment adherence instrument. Assessment, 27(2), 321-333.
Stanford Education Data Archive
The Stanford Education Data Archive (SEDA) is an initiative aimed at harnessing data to help scholars, policymakers, educators, and parents learn how to improve educational opportunity for all children. The data are publicly available so that anyone can obtain detailed information about American schools, communities, and student success.
Why Education Practices Fail
This paper examines a range of education failures: common mistakes in how new practices are selected, implemented, and monitored. The goal is not a comprehensive listing of all education failures but rather to provide education stakeholders with an understanding of the importance of vigilance when implementing new practices.
States, J., & Keyworth, R. (2020). Why Practices Fail. Oakland, CA: The Wing Institute. https://www.winginstitute.org/roadmap-overview
Conflict of interest in the debate over calcium-channel antagonists.
The debate about the safety of calcium-channel antagonists provided an opportunity to study financial conflicts of interest in medicine. This project was designed to examine the relation between authors' published positions on the safety of calcium-channel antagonists and their financial interactions with the pharmaceutical industry.
Stelfox, H. T., Chua, G., O'Rourke, K., & Detsky, A. S. (1998). Conflict of interest in the debate over calcium-channel antagonists. New England Journal of Medicine, 338(2), 101–106.
Multi-tiered systems of support and evidence-based practices
The purpose of this chapter is to present a combined research- and practice-based framework for integrating a comprehensive MTSS model with EBP, and thus, optimize the results stemming from school improvement efforts.
Stoiber, K. C., & Gettinger, M. (2016). Multi-tiered systems of support and evidence-based practices. In Handbook of response to intervention (pp. 121-141). Springer, Boston, MA.
What's the E for EBM?
Familiarity with Evidence-based Medicine (EBM) terminology has extended into the popular press, as evidenced by a recent article in the Times describing the number needed to treat. But all this leads to the question, “What's the E for EBM?”
Straus, S. E. (2004). What's the E for EBM?
Best practices in school psychology III.
Increasingly, school services are being guided by a problem solving approach and are evaluated by the achievement of positive outcomes. This shift is explored here in 96 chapters and 11 appendices. The volume provides a comprehensive reference relating contemporary research and thought to quality professional services.
Thomas, A., & Grimes, J. (Eds.). (1995). Best practices in school psychology III. Washington, DC: National Association of School Psychologists.
Evaluating the Quality of Evidence From Correlational Research for Evidence-Based Practice
The present article proposes some quality indicators for evaluating correlational research in efforts to inform evidence-based practice.
Thompson, B., Diamond, K. E., McWilliam, R., Snyder, P., & Snyder, S. W. (2005). Evaluating the quality of evidence from correlational research for evidence-based practice. Exceptional Children, 71(2), 181-194.
Best Practices in School Psychology as a Problem-Solving Enterprise.
This chapter presents the conceptual and operational underpinnings of a problem-solving special education system designed to improve educational results for students with disabilities.
Tilly, W. D., III (2002). Best practices in school psychology as a problem-solving enterprise. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV. Bethesda, MD: National Association of School Psychologists.
The evolution of school psychology to science-based practice: Problem solving and the three-tiered model.
This chapter chronicles some of the major steps school psychology has taken toward adopting science as the basis of practice. Each step has yielded benefits for students as well as practice challenges to be overcome.
Tilly, W. D. (2008). The evolution of school psychology to science-based practice: Problem solving and the three-tiered model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology–5(pp. 17–36). Bethesda, MD: National Association of School Psychologists.
Enhancing engagement through active student response.
Student engagement is critical to academic success. Active student response (ASR) teaching techniques are an effective way to improve student engagement and are an important component of evidence-based practice. This report provides techniques and strategies to enhance engagement through ASR. Key terms are appended.
Tincani, M., & Twyman, J. S. (2016). Enhancing Engagement through Active Student Response. Center on Innovations in Learning, Temple University.
Publication bias: The Achilles’ heel of systematic reviews?
This paper describes the problem of publication bias and its history in a number of fields, with special reference to the area of educational research.
Torgerson, C. J. (2006). Publication bias: The Achilles' heel of systematic reviews? British Journal of Educational Studies, 54(1), 89-102.
Implementing Evidence-Based Practices for Persons With Severe Mental Illnesses
Extensive empirical research, summarized in several reviews and codified in practice guidelines, recommendations, and algorithms, demonstrates that several pharmacological and psychosocial interventions are effective in improving the lives of persons with severe mental illnesses.
Torrey, W. C., Drake, R. E., Dixon, L., Burns, B. J., Flynn, L., Rush, A. J., ... & Klatzker, D. (2001). Implementing evidence-based practices for persons with severe mental illnesses. Psychiatric Services, 52(1), 45-50.
Identifying research-based practices for response to intervention: Scientifically-based instruction
This paper examines the types of research to consider when evaluating programs, how to know what “evidence” to use, and continuums of evidence (quantity of the evidence, quality of the evidence, and program development).
Twyman, J. S., & Sota, M. (2008). Identifying research-based practices for response to intervention: Scientifically based instruction. Journal of Evidence-Based Practices for Schools, 9(2), 86-101.
Data point: Adult literacy in the United States.
Using the data from the Program for the International Assessment of Adult Competencies (PIAAC), this Data Point summarizes the number of U.S. adults with low levels of English literacy and describes how they differ by nativity status and race/ethnicity.
U.S. Department of Education. (2019). Data point: Adult literacy in the United States. https://nces.ed.gov/datapoints/2019179.asp
National Assessment of Educational Progress (NAEP).
The National Assessment of Educational Progress (NAEP) is the largest nationally representative and continuing assessment of what America's students know and can do in various subject areas.
U.S. Department of Education. (2020). National Assessment of Educational Progress (NAEP). https://nces.ed.gov/nationsreportcard/
Conflict of interest in research.
This website contains information regarding the Committee process, including the regulations, laws, policies, and guidelines that govern disclosures and conflict of interest.
University of California, San Francisco. (2013). Conflict of interest in research. https://coi.ucsf.edu
Report of the Surgeon General's Conference on Children's Mental Health: A National Action Agenda.
The purpose of the conference was to engage a group of citizens in a thoughtful, meaningful dialogue about issues of prevention, identification, recognition, and referral of children with mental health needs to appropriate, evidence-based treatments or services.
US Department of Health and Human Services. (2000). Report of the Surgeon General's Conference on Children's Mental Health: A national action agenda.
The Innovation Journey
The Innovation Journey presents the results of a major longitudinal study that examined the process of innovation from concept to implementation of new technologies, products, processes, and administrative arrangements.
Van de Ven, A. H., Polley, D. E., Garud, R., & Venkataraman, S. (1999). The Innovation Journey. New York: Oxford University Press.
Using data to advance learning outcomes in schools
This article describes the emergence and influence of evidence-based practice and data-based decision making in educational systems, and the ways in which evidence-based practice (EBP) and response to intervention (RtI) can be used to improve the efficacy, efficiency, and equity of educational services.
VanDerHeyden, A., & Harvey, M. (2013). Using data to advance learning outcomes in schools. Journal of Positive Behavior Interventions, 15(4), 205-213.
Finding, evaluating, refining, and applying empirically supported treatments for children and adolescents
The Child Task Force report represents an important initial step in this direction. Here they offer both praise and critique, suggesting a number of ways the task force process and product may be improved.
Weisz, J. R., & Hawley, K. M. (1998). Finding, evaluating, refining, and applying empirically supported treatments for children and adolescents. Journal of Clinical Child Psychology, 27(2), 206-216.
Stressing the (other) three Rs in the search for empirically supported treatments: Review procedures, research quality, relevance to practice and the public interest
The Society of Clinical Psychology's task forces on psychological intervention developed criteria for evaluating clinical trials, applied those criteria, and generated lists of empirically supported treatments. Building on this strong base, their successor, the Committee on Science and Practice, now pursues a three-part agenda.
Weisz, J. R., Hawley, K. M., Pilkonis, P. A., Woody, S. R., & Follette, W. C. (2000). Stressing the (other) three Rs in the search for empirically supported treatments: Review procedures, research quality, relevance to practice and the public interest. Clinical Psychology: Science and Practice, 7(3), 243-258.
Making the case for evidence-based policy
U.S. public policy has increasingly been conceived, debated, and evaluated through the lenses of politics and ideology. The fundamental question -- Will the policy work? -- too often gets short shrift or even ignored. A remedy is an evidence-based policy--a rigorous approach that draws on careful data collection, experimentation, and both quantitative and qualitative analysis to determine what the problem is, which ways it can be addressed, and the probable impacts of each of these ways.
Wesley, P. W., & Buysse, V. (2006). Making the case for evidence-based policy. In V. Buysse & P. W. Wesley (Eds.), Evidence-based practice in the early childhood field (pp. 117–159). Washington, DC: Zero to Three.
What Works Clearinghouse: Procedures Handbook, Version 4.1
This What Works Clearinghouse Procedures Handbook, Version 4.1, provides a detailed description of the procedures used by the WWC in the systematic review process.
What Works Clearinghouse. (2020). What Works Clearinghouse procedures handbook, version 4.1. U.S. Department of Education, Institute of Education Sciences.
Evidence-Based Education (EBE)
This slide show presents what is EBE and what are EBE goals in education.
Whitehurst, G. J. (2002). Evidence-based education (EBE). Washington, DC.
Troubleshooting Behavioral Interventions: A Systematic Process for Finding and Eliminating Problems
This article describes a systematic process for finding and resolving problems with classroom-based behavioral interventions in schools.
Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363-383.
The Cost-Effectiveness of Five Policies for Improving Student Achievement
This article compares the relative cost-effectiveness of five policies (rapid assessment, increased spending, voucher programs, charter schools, and accountability), using best-evidence estimates drawn from available data regarding their effectiveness and costs and a conservative methodology for calculating the relative effectiveness of rapid assessment.
Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 28(4), 416-436.
A Logical and Empirical Analysis of Current Practices in Classifying Students as Handicapped
Two studies were conducted to examine the extent to which the category "learning disabilities" (LD) meets the major criterion for classification systems, specifically that the category demonstrates at least one universal and one specific characteristic.
Ysseldyke, J., Algozzine, B., & Epps, S. (1983). A logical and empirical analysis of current practice in classifying students as handicapped. Exceptional Children, 50(2), 160-166.
Using a curriculum-based instructional management system to enhance math achievement in urban schools
More than two-thirds of students living in U.S. low-income urban areas have not demonstrated basic levels of math achievement. Teachers are confronted with a difficult task of meeting the needs of an increasingly academically diverse population of urban students. There is a well-confirmed knowledge base on effective instruction, but teachers need massive amounts of information for effective, sustainable improvement and data-driven decision making.
Ysseldyke, J., Spicuzza, R., Kosciolek, S., Teelucksingh, E., Boys, C., & Lemkuil, A. (2003). Using a curriculum-based instructional management system to enhance math achievement in urban schools. Journal of Education for Students Placed at Risk, 8(2), 247-265.
An empirical review of peer-mediated interventions: Implications for young children with autism spectrum disorders
Peer-mediated instruction and intervention (PMII) is a systematic, evidence-based method for addressing the social-communication needs of children with autism spectrum disorder (ASD). Despite existing research on this practice, gaps remain in the implementation of PMII. The purpose of this empirical review was to examine recent applications of this evidence-based practice and systematically assess the quality of the analytic approaches implemented.
Zagona, A. L., & Mastergeorge, A. M. (2018). An empirical review of peer-mediated interventions: Implications for young children with autism spectrum disorders. Focus on Autism and Other Developmental Disabilities, 33(3), 131-141.