Categories for Decision Making

Does a Systematic Decision-Making Process Facilitate Adoption?

April 29, 2022

Education decision makers have to consider many variables when adopting an intervention. In addition to evidence of effectiveness, they must weigh local context, the capacity of the school to implement the program, resource availability, and stakeholder values. Without a decision-making framework, the task is complex enough that some decision makers will likely fall back on processes influenced by personal biases rather than a systematic approach. Several decision-making frameworks are available to guide the process, but many have not been empirically evaluated. Hollands and colleagues (2019) evaluated a cost-utility framework as a tool to guide decisions. This approach relies on multiple sources of evidence to identify the values of the decision makers, the “experiential evidence” of stakeholders who have implemented similar interventions, the problem the alternative solutions are intended to solve, and the criteria for evaluating each dimension of a decision.
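
To make the mechanics of such a framework concrete, here is a minimal sketch in Python of how stakeholder-weighted criterion ratings and costs might be combined to compare alternatives. The criteria, weights, ratings, and costs are all hypothetical, and the sketch illustrates the general cost-utility idea rather than the authors' actual instrument.

```python
# Hypothetical decision criteria with stakeholder-assigned importance weights
# (weights sum to 1.0). All names and numbers are invented for illustration.
weights = {"effectiveness": 0.4, "feasibility": 0.3, "stakeholder_support": 0.3}

# Each alternative gets a 1-10 rating on every criterion plus an estimated cost.
alternatives = {
    "Intervention A": {"ratings": {"effectiveness": 8, "feasibility": 5,
                                   "stakeholder_support": 7}, "cost": 20_000},
    "Intervention B": {"ratings": {"effectiveness": 6, "feasibility": 9,
                                   "stakeholder_support": 6}, "cost": 12_000},
}

for name, info in alternatives.items():
    # Weighted utility: sum of weight * rating across the criteria.
    utility = sum(weights[c] * r for c, r in info["ratings"].items())
    # Cost-utility ratio: dollars per point of utility (lower is better).
    ratio = info["cost"] / utility
    print(f"{name}: utility = {utility:.1f}, cost-utility ratio = ${ratio:,.0f}")
```

In this toy run, Intervention B wins on the cost-utility ratio even though Intervention A has the higher effectiveness rating, which is exactly the kind of trade-off such a framework is designed to surface.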

In this project, users evaluated the framework in three phases. In the first phase, principals, assistant principals, teacher leaders, and teachers enrolled in a principal preparation program were assigned to small groups to implement the first six steps of the decision-making framework. Although performance on the individual steps varied considerably, approximately one-third of the groups completed each step within the available time. The authors suggested that factors such as the complexity of the decision, the alignment of the group members' visions, and the emergence of a leader to keep the process moving forward influenced performance on each of the six steps.

In the second phase of the project, participants were surveyed about the usefulness of the cost-utility decision-making framework. A large majority of the participants had a positive view of the process and thought it would be valuable to apply in their day-to-day work. A few participants noted that the process was time consuming, which may limit the framework's application.

In the final phase of the study, three assistant principals were selected to apply the cost-utility framework in their own schools. Two of the three participants reported that although the process was time consuming, it helped clarify the decision options and the stakeholders to be involved in the decision. The third participant was not able to reach a decision within the available time. This participant also reported that some decisions were imposed by district administration, subverting the cost-utility decision-making process.

This framework appears to have potential value for guiding decision making in the complex environments of public schools. The time-consuming nature of the process suggests that educators may need additional coaching and support as they develop competence in applying the framework. Streamlining the steps in the process would be a significant step toward increasing the usability of the tool.

Citation:
Hollands, F., Pan, Y., & Escueta, M. (2019). What is the potential for applying cost-utility analysis to facilitate evidence-based decision making in schools? Educational Researcher, 48(5), 287-295.


Does Professional Development Impact Data-based Decision Making?

April 29, 2022

At the core of evidence-based education is data-based decision making. Once an empirically supported intervention has been adopted, it is necessary to monitor student performance to determine whether the program is effective for individual students. Educators report needing assistance in determining what to do with student performance data. External support is often necessary for educators to successfully navigate the decision-making process because many training programs are not sufficient.

A recent meta-analysis by Gesel and colleagues (2021) examined the impact of professional development on teachers' knowledge, skill, and self-efficacy in data-based decision making. Knowledge was assessed with a multiple-choice test to determine whether teachers understood the concepts of data-based decision making; it was not a measure of teachers' application of that knowledge. Skill was a direct measure of how well teachers applied their knowledge of data-based decision making. In most instances, it was assessed under ideal conditions with intense support from researchers and consultants. Self-efficacy was a measure of teachers' confidence in implementing data-based decision making. The overall effect size for the combined measures was 0.57, which is generally considered a moderate effect; however, the effect sizes for the individual measures varied considerably (knowledge: -0.02 to 2.28; skill: -1.25 to 1.96; self-efficacy: -0.08 to 0.78). These ranges suggest that the average effect size of 0.57 does not adequately reflect the effects of professional development. The variability could be a function of the specific training methods used in the individual studies, but those methods were not described in this meta-analysis. It should be noted that all of the studies in this meta-analysis were conducted with intensive support from researchers and consultants. It is not clear whether the results generalize to the more typical conditions found in teacher preparation programs and professional development.
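
To illustrate the arithmetic point, here is a minimal sketch in Python using invented study-level effect sizes (not values reported by Gesel et al.), showing how widely scattered estimates can still average out to a "moderate" 0.57:

```python
# Hypothetical study-level effect sizes for a single outcome. These are
# invented for illustration; Gesel et al. (2021) do not report this list.
effect_sizes = [-0.08, 0.21, 0.45, 0.57, 0.83, 1.44]

mean_es = sum(effect_sizes) / len(effect_sizes)
es_range = max(effect_sizes) - min(effect_sizes)

print(f"Mean effect size: {mean_es:.2f}")     # one 'moderate' summary number...
print(f"Range of estimates: {es_range:.2f}")  # ...concealing wide variability
```

With these made-up numbers the mean comes out to exactly 0.57, yet the individual estimates span more than 1.5 effect-size units, mirroring the wide ranges reported in the meta-analysis.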

Given the importance of data-based decision making to student progress, there is considerable work to be done to identify effective and efficient training methods, and it appears that we are a long way from this goal. Ultimately, the goal is for data-based decision making to be standard practice in every classroom in the United States. This will require identifying the critical skills and the most effective methods for teaching them.

Citation:
Gesel, S. A., LeJeune, L. M., Chow, J. C., Sinclair, A. C., & Lemons, C. J. (2021). A meta-analysis of the impact of professional development on teachers’ knowledge, skill, and self-efficacy in data-based decision-making. Journal of Learning Disabilities, 54(4), 269-283.


What Variables Influence Educators’ Adoption Decisions?

March 3, 2022

In recent years, federal legislation such as the No Child Left Behind Act and the Every Student Succeeds Act has encouraged the use of scientifically supported interventions. To accomplish this, educators must adopt programs that have empirical support, yet little is known about the variables that influence educators' adoption decisions. Pinkelman and colleagues (2022) recently published a small qualitative study that asked district-level and school-level administrators about the variables that influenced their most recent adoption decision. The results are interesting. Three general themes emerged from the analysis: (1) Establishing Need, (2) Identifying Options, and (3) Elements of Program Selection.

Establishing Need refers to school-level or district-level factors considered in adoption decisions. There were three subthemes within Establishing Need: (1) Data, both informal and formal; (2) Time Cycle; and (3) Academic Content Domains.

Within the Data subtheme, 90% of the participants reported using informal data to determine the need for adoption, making it the most frequently cited means of determining need. Informal data included input from stakeholders through meetings, conversations, and anecdotal commentary. Formal data were mentioned by 55% of the participants as a means of establishing need and were defined as empirical data used to assess an academic or behavioral construct, including test scores, surveys, school climate data, universal screening data, and student performance data.

The subtheme Time Cycle refers to changes over time such as a district’s schedule for rotating the adoption of new programs, expiring program licenses, changes in standards, or availability of current resources. Thirty-five percent of the participants mentioned this. 

Academic Content Domains refers to academic subjects such as reading, math, and science. Thirty-five percent of the participants indicated that district priorities regarding academic content influenced the need for new programs. Collectively, these data suggest that factors other than evidence about the effectiveness of current programs, or of the adoption options, drive the perceived need for new programs.

When identifying adoption options, 85% of the participants reported relying on word of mouth, which included talking to colleagues and other education professionals. Fifty-five percent of the participants also mentioned marketing efforts by publishers, and 50% initiated an independent search through web searches and reading articles. The only reference to relying on empirical evidence of effectiveness can be inferred from the mention of reading articles. These data suggest that variables such as word of mouth play an important, and understudied, role in adoption decisions.

The third major theme regarding variables influencing adoption decisions is Elements of Program Selection. Within this theme there are four subthemes: (1) Alignment, (2) Teacher Factors, (3) Cost, and (4) Supplemental Curriculum Materials.

Seventy percent of the participants referenced alignment with Common Core standards and agreement with the district's values as a factor in adopting a program. Seventy percent also identified Teacher Factors as influencing decisions, including considerations such as teacher buy-in, the time required to implement, and the training required for implementers. Cost was noted by 70% of the participants, and 60% mentioned the availability of online supplemental materials as influencing decisions.

All of these data suggest that adopting a program is a more complex process than simply considering effectiveness data. A key finding of this study is that effectiveness data do not appear to be a primary influence on adoption decisions. Implementation scientists should consider these data when developing processes to influence adoption. This was a small-scale study and should be replicated at a much larger scale to determine whether the results are representative across settings.

Link to article: https://link.springer.com/content/pdf/10.1007/s43477-022-00039-2.pdf

Citation: 

Pinkelman, S. E., Rolf, K. R., Landon, T., Detrich, R., McLaughlin, C., Peterson, A., & McKnight-Lizotte, M. (2022). Curriculum adoption in US schools: An exploratory, qualitative analysis. Global Implementation Research and Applications, 1-11.


How Effective Are the Most Commonly Adopted Reading Programs?

December 17, 2021

One of the most important decisions educators make is which reading curriculum to adopt. The consequences of that decision can have profound implications for students: adopting a curriculum not based on the science of reading is likely to produce a generation of poor readers. Education Week recently covered a report from EdReports finding that two of the most commonly adopted reading curricula failed to meet its new review standards. The review covered both K-2 and grades 3-8 for Fountas and Pinnell Classroom and Units of Study from the Teachers College Reading and Writing Project. Neither program met expectations for text quality or alignment to standards. In 2019, the EdWeek Research Center reported that 44% of K-2 early reading and special education teachers used Fountas and Pinnell's Leveled Literacy Intervention, a companion intervention to Fountas and Pinnell Classroom.

Additionally, 16% of teachers reported using Units of Study for Teaching Reading. Taken together (44% plus 16%), roughly 60% of the K-2 early reading and special education teachers surveyed were teaching reading with curricula that do not meet standards for reading instruction. This is distressing given the importance of early reading to students' educational trajectories.

Link for Ed Week article: https://www.edweek.org/teaching-learning/new-curriculum-review-gives-failing-marks-to-popular-early-reading-programs/2021/11

References

Kurtz, H., Lloyd, S., Harwin, A., Chen, V., & Furuya, Y. (2020). Early Reading Instruction: Results of a National Survey. Editorial Projects in Education.


Can We Close the Research to Practice Gap?

November 5, 2021

One of the persistent problems in education is the gap between what we know about effective educational practices and the practices that are frequently used in public schools. Many common practices do not have empirical support. The challenge for all educators is how to close the gap. The flow of research to practice is often perceived as one-way: researchers develop effective interventions and disseminate them to practitioners, who are expected to adopt them (Ringeisen, Henderson, & Hoagwood, 2003). Ringeisen et al. argue that this is not likely to result in widespread adoption of effective practices. McLaughlin and colleagues (1997) have argued that having an array of effective practices is not sufficient for closing the research-to-practice gap. In many instances, the practices developed by researchers are not a good contextual fit for school settings: the training and experience requirements for implementers are unreasonable, the resources necessary for implementation are not present, and the time demands of implementation are unrealistic.

If the dominant model of disseminating empirically supported interventions is not closing the research-to-practice gap, what should we do? The goal is important, but we need effective alternatives to the common approach. Recently, the William T. Grant Foundation released a report entitled Research-Practice Partnerships in Education: The State of the Field (Farrell, Penuel, Coburn, Daniel, & Steup, 2021). In it, the authors define research-practice partnerships as “intentionally organized to connect diverse forms of expertise and shift power relations in the research endeavor to ensure that all partners have a say in the joint work.” This is a significant shift from usual practice in the development and dissemination of effective practices. Five principles characterize these partnerships: (1) they are long-term collaborations; (2) they work toward educational improvement or equitable transformation; (3) they feature engagement with research as a leading activity; (4) they are intentionally organized to bring together a diversity of expertise; and (5) they employ strategies to shift power relations in research endeavors to ensure that all participants have a say. This is an important shift: practitioners become partners with researchers. It is a movement away from the researcher-as-expert model to a model in which practitioners are as expert as researchers, each in different domains of improving educational practice.

If practitioners are involved from the beginning in guiding research, then the resulting practices are more likely to be seen as usable by educators considering interventions to adopt. The development of research-practice partnerships has the potential to increase the adoption of empirically supported practices.

Citation: Farrell, C.C., Penuel, W.R., Coburn, C., Daniel, J., & Steup, L. (2021). Research-practice partnerships in education: The state of the field. William T. Grant Foundation.

Link: http://wtgrantfoundation.org/research-practice-partnerships-in-education-the-state-of-the-field

References:
McLaughlin, M. J., Leone, P. E., Meisel, S., & Henderson, K. (1997). Strengthen school and community capacity. Journal of Emotional and Behavioral Disorders, 5(1), 15-24.
Ringeisen, H., Henderson, K., & Hoagwood, K. (2003). Context matters: Schools and the “research to practice gap” in children’s mental health. School Psychology Review, 32(2), 153-168.


How can educators effectively incorporate professional judgment into the decision-making process?

October 4, 2021

Overview of Professional Judgment. Educators make many decisions regarding services for students. Even when there is abundant evidence to guide their decisions, educators must use their judgment about what is appropriate in a given situation. Only on rare occasion does the available evidence perfectly match the service context of concern to the educator. To bridge the gap between research and local circumstance, the educator must make a series of judgments such as defining the problem, determining which evidence is relevant, and deciding which features of the local context are likely to require adaptations to the selected evidence-based intervention. Professional judgment is a cornerstone of evidence-based practice, as are best available evidence, stakeholder values, and the context in which services are provided. In this definition of evidence-based practice, the integration of these variables influences decisions. No one cornerstone can be substituted for the others. Judgment must be informed and constrained by the best available evidence, stakeholder values, and context.

Citation: Guinness, K., & Detrich, R. (2021). Overview of Professional Judgment. Oakland, CA: The Wing Institute. https://www.winginstitute.org/evidence-based-decision-making-professional-judgment

Link: https://www.winginstitute.org/evidence-based-decision-making-professional-judgment


What are common criticisms of data-based decision making?

July 7, 2021

Misconceptions about data-based decision making in education: An exploration of the literature. Research on data-based decision making has proliferated around the world, fueled by policy recommendations and the diverse data that are now available to educators to inform their practice. Yet, many misconceptions and concerns have been raised by researchers and practitioners. This paper surveys and synthesizes the landscape of the data-based decision-making literature to address the identified misconceptions and then to serve as a stimulus to changes in policy and practice as well as a roadmap for a research agenda.

Citation: Mandinach, E. B., & Schildkamp, K. (2021). Misconceptions about data-based decision making in education: An exploration of the literature. Studies in Educational Evaluation, 69, 100842.

Link: https://www.sciencedirect.com/science/article/pii/S0191491X1930416X


Should financing be a component of evidence-based decision making?

June 18, 2021

Cost-Effectiveness Analysis: A Component of Evidence-Based Education. Including cost-effectiveness data in the evaluation of programs is the next step in the evolution of evidence-based practice. Evidence-based practice is grounded in three complementary elements: best available evidence, professional judgment, and client values and context. To fully apply the cost-effectiveness data, school administrators will have to rely on all three of these elements. The function of cost-effectiveness data is to guide decisions about how limited financial resources should be spent to produce the best educational outcomes. To do so, it is necessary for decision makers to choose between options with known cost-effectiveness ratios while working within the budget constraints. In this article, I discuss some of the considerations that have to be addressed in the decision-making process and implications of including cost-effectiveness analyses in data-based decision making.
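
As a concrete illustration of the kind of comparison the article describes, here is a minimal sketch in Python of choosing among options using cost-effectiveness ratios under a budget constraint. The program names, costs, effects, and budget are all invented for illustration; this is not the article's method, just the general logic.

```python
# Hypothetical programs: (name, cost per student in dollars, expected effect).
# All names and numbers are invented for illustration.
programs = [
    ("Program A", 300.0, 0.25),
    ("Program B", 600.0, 0.60),
    ("Program C", 120.0, 0.08),
]

budget_per_student = 400.0

# Cost-effectiveness ratio: dollars spent per unit of effect (lower is better).
# Only options that fit within the budget are considered.
affordable = [(name, cost / effect)
              for name, cost, effect in programs
              if cost <= budget_per_student]

best_name, best_ratio = min(affordable, key=lambda pair: pair[1])
print(f"Most cost-effective affordable option: {best_name} "
      f"(${best_ratio:,.0f} per unit of effect)")
```

In this toy example, Program B has the best raw cost-effectiveness ratio but is excluded by the budget constraint, which is exactly the kind of constrained trade-off decision makers face.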

Citation: Detrich, R. (2020). Cost-effectiveness analysis: A component of evidence-based education. School Psychology Review, 1-8.

Link: https://www.tandfonline.com/doi/abs/10.1080/2372966X.2020.1827864


What can educators do to promote evidence-based education reform?

June 18, 2021

How could evidence-based reform advance education? This article presents a definition of and rationale for evidence-based reform in education, along with a discussion of the current state of evidence-based research, focusing on China, the U.S., and the UK. The article suggests ways in which Chinese, U.S., UK, and other scholars might improve the worldwide quality of evidence-based reform in education. One indicator of this international collaboration is an agreement among the Chinese University of Hong Kong, Nanjing Normal University, and Johns Hopkins University to work together on Chinese and English versions of the website Best Evidence in Brief, and a collaboration between Johns Hopkins and the ECNU Review of Education at East China Normal University.

The Wing Institute would like to acknowledge the contributions of Robert Slavin to the field of education. Our condolences go out to Robert Slavin's family on the loss of one of America's premier proponents of evidence-based education; he passed away on April 24, 2021. Robert Slavin was an education researcher who sought to translate the science of learning into effective teaching practices. Dr. Slavin was a distinguished professor at Johns Hopkins University's School of Education, where he directed the Center for Research and Reform in Education.

Citation: Slavin, R. E., Cheung, A. C., & Zhuang, T. (2021). How could evidence-based reform advance education? ECNU Review of Education, 4(1), 7-24.

Link: https://journals.sagepub.com/doi/full/10.1177/2096531120976060


How can educators and policy-makers overcome challenges to building an evidence-based education culture?

June 18, 2021

Evidence-Based Policies in Education: Initiatives and Challenges in Europe. This article examines the state of progress of evidence-based educational policies in Europe and identifies organizations for the generation and dissemination of evidence. Further, it discusses some of the most relevant challenges facing the development of evidence-informed education policies in Europe.

Citation: Pellegrini, M., & Vivanet, G. (2020). Evidence-based policies in education: Initiatives and challenges in Europe. ECNU Review of Education, 2096531120924670.

Link: https://journals.sagepub.com/doi/pdf/10.1177/2096531120924670