
Student Formative Assessment

Effective ongoing assessment, referred to in the education literature as formative assessment or progress monitoring, is indispensable in promoting teacher and student success. Feedback through formative assessment is ranked at or near the top of practices known to significantly raise student achievement. For decades, formative assessment has been found to be effective in clinical settings and, more important, in typical classroom settings. Formative assessment produces substantial results at a cost significantly below that of other popular school reform initiatives such as smaller class size, charter schools, accountability, and school vouchers. It also serves as a practical diagnostic tool available to all teachers. A core component of formal and informal assessment procedures, formative assessment allows teachers to quickly determine if individual students are progressing at acceptable rates and provides insight into where and how to modify and adapt lessons, with the goal of making sure that students do not fall behind.


Formative Assessment

For teachers, few skills are as important or powerful as formative assessment (also known as progress monitoring and rapid assessment). This process of frequent, ongoing feedback on the effects of instruction gives teachers insight into when and how to adjust instruction to maximize learning. The assessment data are used to verify student progress and serve as indicators for adjusting interventions when insufficient progress has been made or when a particular concept has been mastered (VanDerHeyden, 2013). For the past 30 years, formative assessment has been found to be effective in typical classroom settings. The practice has shown power across student ages, treatment durations, and frequencies of measurement, as well as with students with special needs (Hattie, 2009).

Summative assessment is another important assessment tool commonly used in schools and should not be confused with formative assessment. Formative assessment and summative assessment play important but very different roles in an effective model of education. Both are integral in gathering the information necessary for maximizing student success, but they differ in important ways (see Figure 1).

Summative assessment evaluates the overall effectiveness of teaching at the end of a class, a semester, or the school year. This type of assessment is used to determine what students know and do not know at a particular point in time. It is most often associated with standardized tests such as state achievement assessments, but it is also commonly used by teachers to assess the overall progress of students when determining grades (Geiser & Santelices, 2007). Since the advent of No Child Left Behind, summative assessment has increasingly been used to hold schools and teachers accountable for student progress, and its use is likely to continue under the Every Student Succeeds Act.

In contrast, formative assessment is a practical diagnostic tool for routinely determining student progress. Formative assessment allows teachers to quickly ascertain if individual students are progressing at acceptable rates and provides insight into when and how to modify and adapt lessons, with the goal of making sure all students are progressing satisfactorily.

Comparing Formative Assessment and Summative Assessment

 Summative and Formative Assessment Table

Figure 1. Comparing two types of assessment

Both formative assessment and summative assessment are essential components of information gathering, but they should be used for the purposes for which they were designed.

Figure 2 offers a data display examining the relative impact of formative assessment and summative assessment (the latter in the form of high-stakes testing). Research shows a clear advantage for formative assessment in improving student performance.


Assessment Impact

Figure 2. Comparison of formative assessment and summative assessment impact on student achievement

Research consistently lists formative assessment in the top tier of variables that make a difference in improving student achievement (Hattie, 2009; Marzano, 1998). In 1986, Fuchs and Fuchs conducted the first comprehensive quantitative examination of formative assessment. They found that it had an impressive 0.90 effect size on student achievement. Figure 3 provides the effect size of formative assessment, gleaned from multiple studies over more than 40 years of research on the topic.
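
For readers less familiar with the metric, the effect sizes cited throughout this overview are standardized mean differences: the achievement gain of students receiving formative assessment expressed in standard deviation units relative to comparison students. A minimal statement of the calculation, assuming the usual pooled-standard-deviation form, is:

```latex
d = \frac{\bar{X}_{\text{formative assessment}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}}
```

On this scale, an effect size of 0.90 means that the average student receiving formative assessment scores about 0.9 standard deviation above the average comparison student, roughly the difference between the 50th and the 82nd percentile.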

Formative Assessment Impact

Figure 3. Effect size of formative assessment

At its core, formative assessment uses feedback to improve student performance. It furnishes teachers with indicators of each student’s progress, which can be used to determine when and how to adjust instruction to maximize learning. Feedback is ranked at or near the top of practices known to significantly raise student achievement (Kluger & DeNisi, 1996; Marzano, Pickering, & Pollock, 2001; Walberg, 1999). It is not surprising that data-based decision-making approaches such as response to intervention (RtI) and positive behavior interventions and supports (PBIS) depend heavily on formative assessment.

Another important feature of well-designed formative assessment is the incorporation of grade-level norms into the assessment process. Grade-level norms are a valuable yardstick that enables teachers to efficiently compare each student’s performance against normed standards (McLaughlin & Shepard, 1995). In addition to showing whether a student met or missed a target, grade-level norms give teachers a clear picture of whether students are meeting important goals in the standards and help them quickly identify struggling students who need more intensive support, as sketched in the example below.
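
To illustrate the screening function that grade-level norms serve, the short sketch below flags students whose scores fall below a chosen percentile of a norm table. The norm values, the 25th-percentile cutoff, and the function names are hypothetical, offered only to show the kind of comparison a teacher or assessment tool would make.

```python
# Hypothetical example: screen a class against grade-level norms.
# The norm values and the 25th-percentile cutoff are illustrative
# assumptions, not published benchmarks.

GRADE_3_FALL_NORMS = {   # oral reading fluency, words correct per minute
    10: 44,              # 10th percentile
    25: 59,              # 25th percentile
    50: 79,              # 50th percentile (median)
    75: 99,              # 75th percentile
}

def needs_support(score, norms=GRADE_3_FALL_NORMS, cutoff_percentile=25):
    """Flag a student whose score falls below the chosen normative cutoff."""
    return score < norms[cutoff_percentile]

class_scores = {"Ana": 72, "Ben": 51, "Cara": 85, "Dev": 40}
flagged = [name for name, wcpm in class_scores.items() if needs_support(wcpm)]
print(flagged)   # students to consider for more intensive support
```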

Fuchs and Fuchs conducted the first extensive quantitative examination of formative assessment in 1986. This meta-analysis added considerably to the knowledge base by identifying the essential practice elements that increase the impact of ongoing formative assessment. An effect of this magnitude is equivalent to raising student achievement in an average nation such as the United States to that of the top five nations (Black & Wiliam, 1998). As can be seen in Figure 4, Fuchs and Fuchs reported that the impact of formative assessment is significantly enhanced by the cumulative effect of three practice elements. The practice begins with collecting data weekly (0.26 effect size). When teachers interact with the collected data by graphing them, the effect size increases to 0.70. Adding decision rules to aid teachers in analyzing the graphed data increases the effect size to 0.90.
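
To make the third practice element concrete, the sketch below shows one simple kind of decision rule that can be applied to graphed progress-monitoring data: compare the slope of a student’s weekly scores with the slope of the aim line drawn from baseline to the year-end goal. The specific thresholds (1.25 and 0.75) and function names are illustrative assumptions, not the rules evaluated in the Fuchs and Fuchs meta-analysis.

```python
import numpy as np

def trend_slope(scores):
    """Least-squares slope of weekly scores (e.g., words read correctly per minute)."""
    weeks = np.arange(len(scores))
    slope, _intercept = np.polyfit(weeks, scores, 1)
    return slope

def decision_rule(scores, baseline, goal, total_weeks):
    """Compare observed weekly growth with the growth needed to reach the goal."""
    aim_slope = (goal - baseline) / total_weeks        # growth required per week
    actual_slope = trend_slope(scores)                 # growth actually observed
    if actual_slope >= 1.25 * aim_slope:               # well above the aim line
        return "raise the goal"
    if actual_slope < 0.75 * aim_slope:                # well below the aim line
        return "change the intervention"
    return "continue current instruction"

# Eight weeks of (hypothetical) oral reading fluency scores for one student
weekly_scores = [42, 44, 43, 47, 49, 48, 52, 55]
print(decision_rule(weekly_scores, baseline=42, goal=90, total_weeks=30))
```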

Fuchs and Fuchs Graph

Figure 4. Impact of formative assessment on student achievement

 

Why Is Formative Assessment Important?

 

Much has been said about the importance of selecting evidence-based practices for use in schools. One of the most common failures in building an evidence-based culture is overreliance on selecting interventions and underreliance on managing the interventions (VanDerHeyden & Tilly, 2010). Adopting an evidence-based practice, although an important first step, does not guarantee that the practice will produce the desired results. Even if every action leading up to implementation is flawless, if the intervention is not implemented as designed, it will likely fail and learning will not occur (Detrich, 2014). A growing body of research is now available to help teachers identify and overcome obstacles to implementing practices accurately (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Witt, Noell, LaFleur, & Mortenson, 1997). Formative assessment and treatment integrity checks constitute the basic tool kit enabling schools to avoid or quickly remedy failures during implementation.

The fact is, not all practices produce positive outcomes for all students. In medicine, not all patients respond positively to a given treatment. The same holds true in education: Not all students respond identically to an education intervention. Given the possibility that even good practices may produce poor outcomes, it is incumbent on educators to monitor student progress frequently. Formal and routine sampling of student performance significantly reduces the likelihood that struggling students will fall through the cracks.

Common informal sampling methods, such as having students answer questions by raising their hands, are not sufficient. It is imperative that teachers have a clear understanding of each student’s progress toward mastery of standards. This is important not just for the lesson at hand but also for future success. A systematically planned curriculum builds on learned skills across a school year. Skills learned in one assignment are very often the foundation skills needed for success in subsequent lessons. Today’s failure may increase the possibility of failure tomorrow. For example, students who fall behind in reading by the third grade have been found to have poorer academic success, including a significantly greater likelihood of dropping out of high school (Celio & Harvey, 2005; Lesnick, Goerge, Smithgall, & Gwynne, 2010).

It is only through ongoing monitoring that problems can be identified early and adjustments made to teaching strategies to ensure greater success for all students. In this way, formative assessment guides teachers on when and how to improve instructional delivery and make effective adjustments to the curriculum. This is necessary for helping struggling students as well as adapting instruction for gifted students.

In addition to its notable impact on achievement, formative assessment offers an impressive return on investment compared with other popular reform practices. In a cost-effectiveness analysis of frequently adopted education interventions, Yeh (2007) found that formative assessment (which he referred to as rapid assessment) outperformed other common reform practices. The advantage for formative assessment was striking compared with a 10% increase in spending, vouchers, charter schools, and high-stakes testing (see Figure 5).

 

Yeh Graph 

Figure 5. Return on investment of common education interventions

The data display in Figure 5 compares cost-effectiveness analyses of formative assessment, from Yeh (2007) and The Wing Institute, with six common structural interventions.

Yeh compared the costs and outcomes of alternative practices to aid education decision makers in selecting economical and productive options (Levin, 1988; Levin & McEwan, 2002). Educational cost-effectiveness analyses are designed to assess key outcomes, such as student achievement, relative to the monetary resources needed to achieve them. Cost-effectiveness analyses provide a practical and systematic framework that permits educators to compare the real impact of interventions more effectively.
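
In its simplest form, such a comparison reduces to a ratio of the achievement gain a practice produces to its cost per student. The formula and figures below are illustrative only, intended to show the arithmetic rather than Yeh’s actual estimates:

```latex
\text{cost-effectiveness ratio} = \frac{\text{effect size (standard deviations)}}{\text{cost per student}}
```

For example, a practice yielding an effect size of 0.30 at $300 per student delivers 1.0 standard deviation per $1,000 spent, whereas one yielding 0.20 at $1,000 per student delivers only 0.2 standard deviation per $1,000; the first is five times as cost-effective even though its raw effect is only 1.5 times larger.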

Although the structural interventions identified in Figure 5 are designed to address an array of different issues affecting schools, a fair comparison can be made because all the interventions aim to improve student achievement. In the end, decision makers need to know which approaches produce the greatest benefit for the dollars invested. A given practice may be very effective, but if it costs more than the resources available for implementation, it is of little use to the average school.

Summary

It is clear from years of rigorous research that formative assessment produces important results. It is also true that ongoing assessment carried out through the school year is necessary for teachers to grasp when and how to adjust instruction and curriculum to meet the varied needs of struggling students as well as gifted students. Finally, cost-effectiveness research reveals that formative assessment is not only effective but also one of the most cost-effective interventions available to schools for boosting student performance.

Citations

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.

Bloom, B. S. (1976). Human characteristics and school learning. New York, NY: McGraw-Hill.

Celio, M. B., & Harvey, J. (2005). Buried treasure: Developing a management guide from mountains of school data. Seattle, WA: University of Washington, Center on Reinventing Public Education.

Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258–271.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, the National Implementation Research Network.

Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199–208.

Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes (Research and Occasional Paper Series CSHE. 6.07). Berkeley, CA: University of California, Berkeley, Center for Studies in Higher Education.

Haller, E. P., Child, D. A., & Walberg, H. J. (1988). Can comprehension be taught? A quantitative synthesis of “metacognitive” studies. Educational Researcher, 17(9), 5–8.

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.

Kavale, K. A. (2005). Identifying specific learning disability: Is responsiveness to intervention the answer? Journal of Learning Disabilities, 38(6), 553–562.

Kluger, A. N., & DeNisi, A. S. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.

Lesnick, J., Goerge, R., Smithgall, C., & Gwynne, J. (2010). Reading on grade level in third grade: How is it related to high school performance and college enrollment? Chicago, IL: Chapin Hall at the University of Chicago.

Levin, H. M. (1988). Cost-effectiveness and educational policy. Educational Evaluation and Policy Analysis, 10(1), 51–69.

Levin, H. M., & McEwan, P. J. (Eds.). (2002). Cost-effectiveness and educational policy. Larchmont, NY: Eye on Education.

Marzano, R. J. (1998). A theory-based meta-analysis of research on instruction. Aurora, CO: Mid-Continent Regional Educational Laboratory.

Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.

McLaughlin, M. W., & Shepard, L. A. (1995). Improving education through standards-based reform. A report by the National Academy of Education Panel on Standards-Based Education Reform. Palo Alto, CA: Stanford University Press.

Scheerens, J., & Bosker, R. J. (1997). The foundations of educational effectiveness. Oxford, UK: Pergamon.

VanDerHeyden, A. (2013). Are we making the differences that matter in education? In R. Detrich, R. Keyworth, & J. States (Eds.), Advances in evidence-based education: Vol 3. Performance feedback: Using data to improve educator performance (pp. 119–138). Oakland, CA: The Wing Institute. http://www.winginstitute.org/uploads/docs/Vol3Ch4.pdf

VanDerHeyden, A. M., & Tilly, W. D. (2010). Keeping RtI on track: How to identify, repair and prevent mistakes that derail implementation. Horsham, PA: LRP Publications.

Walberg, H. J. (1999). Productive teaching. In H. C. Waxman & H. J. Walberg (Eds.), New directions for teaching, practice, and research (pp. 75–104). Berkeley, CA: McCutchen.

Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693–696.

Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 28(4), 416–436.

 

Publications

TITLE
SYNOPSIS
CITATION
Treatment Integrity: Fundamental to Education Reform

To produce better outcomes for students, two things are necessary: (1) effective, scientifically supported interventions, and (2) implementation of those interventions with high integrity. Typically, much greater attention has been given to identifying effective practices. This review focuses on features of high-quality implementation.

Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258-271.

Evidence-Based Education and Best Available Evidence: Decision-Making Under Conditions of Uncertainty

Evidence-based practice is a framework for decision making. Even with high-quality evidence, there are likely to be sources of uncertainty that practitioners must confront.

Detrich, R., Slocum, T. A., & Spencer, T. D. (2013). Evidence-based education and best available evidence: decision-making under conditions of uncertainty. Evidence-Based Practices, 26, 21.

Response to intervention: What it is and what it is not

Response to Intervention is a framework for determining the intensity of services that are necessary for a student to benefit from instruction. This paper addresses some of the misconceptions about RtI.

Detrich, R., States, J., & Keyworth, R. (2008). Response to intervention: What it is and what it is not. Journal of Evidence-Based Practices for Schools, 9(2), 60-83.

Overview of Formative Assessment

Effective ongoing assessment, referred to in the education literature as formative assessment or progress monitoring, is indispensable in promoting teacher and student success. Feedback through formative assessment is ranked at or near the top of practices known to significantly raise student achievement. For decades, formative assessment has been found to be effective in clinical settings and, more important, in typical classroom settings. Formative assessment produces substantial results at a cost significantly below that of other popular school reform initiatives such as smaller class size, charter schools, accountability, and school vouchers. It also serves as a practical diagnostic tool available to all teachers. A core component of formal and informal assessment procedures, formative assessment allows teachers to quickly determine if individual students are progressing at acceptable rates and provides insight into where and how to modify and adapt lessons, with the goal of making sure that students do not fall behind.

 

States, J., Detrich, R. & Keyworth, R. (2017). Overview of Formative Assessment. Oakland, CA: The Wing Institute. http://www.winginstitute.org/student-formative-assessment.

Introduction: Proceedings from the Wing Institute’s Sixth Annual Summit on Evidence-Based Education: Performance Feedback: Using Data to Improve Educator Performance.

This book is compiled from the proceedings of the sixth summit entitled “Performance Feedback: Using Data to Improve Educator Performance.” The 2011 summit topic was selected to help answer the following question: What basic practice has the potential for the greatest impact on changing the behavior of students, teachers, and school administrative personnel?

States, J., Keyworth, R. & Detrich, R. (2013). Introduction: Proceedings from the Wing Institute’s Sixth Annual Summit on Evidence-Based Education: Performance Feedback: Using Data to Improve Educator Performance. In Education at the Crossroads: The State of Teacher Preparation (Vol. 3, pp. ix-xii). Oakland, CA: The Wing Institute.


Data Mining

TITLE
SYNOPSIS
CITATION
Are teacher preparation programs teaching formative assessment?
This probe looks at research on teacher preparation programs' efforts to provide teachers with instruction in formative assessment.
States, J. (2010). Are teacher preparation programs teaching formative assessment? Retrieved from are-teacher-preparation-programs.
What are the costs and benefits of five common educational interventions?
This analysis examines cost-effectiveness research from Stuart Yeh on common structural interventions in education.
States, J. (2010). What are the costs and benefits of five common educational interventions? Retrieved from what-are-costs-and.
Does Feedback Improve Performance?
This review summarizes the effect size of feedback in improving both student and teacher performance.
States, J. (2011). Does Feedback Improve Performance? Retrieved from does-feedback-improve-performance.
Does professional development make a difference in student performance?
This analysis looks at a systematic review of the effects of teacher professional development on student achievement.
States, J. (2011). Does professional development make a difference in student performance? Retrieved from does-professional-development-make.
What Is the Effect of Formative Evaluation on Student Achievement?
This review examines the effect size of the practice elements that comprise formative assessment.
States, J. (2011). What Is the Effect of Formative Evaluation on Student Achievement? Retrieved from what-is-effect-of869.

 

Presentations

TITLE
SYNOPSIS
CITATION
A Systems Approach to Feedback: What You Need to Know and Who Needs
This paper looks at feedback as a powerful systems approach to improving the performance of both students and school faculty.
States, J. (2011). A Systems Approach to Feedback: What You Need to Know and Who Needs [Powerpoint Slides]. Retrieved from 2011-calaba-presentation-jack-states.
Feedback as Education Reform: What We Know
This paper examines the power of feedback as a strategy for improving student performance. Types of feedback are explored, building from student and teacher performance data that can be aggregated to create a systemwide feedback tool.
States, J. (2011). Feedback as Education Reform: What We Know [Powerpoint Slides]. Retrieved from 2011-aba-presentation-jack-states.
TITLE
SYNOPSIS
CITATION


A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules.

This article reviews the decision rules for curriculum-based measures of oral reading fluency. It concluded that the rules were most often based on expert opinion.

Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51(1), 1-18. 

 

Assessment and classroom learning. Assessment in Education: principles, policy & practice

This is a review of the literature on classroom formative assessment. Several studies show firm evidence that innovations designed to strengthen the frequent feedback that students receive about their learning yield substantial learning gains.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: principles, policy & practice, 5(1), 7-74.

Human characteristics and school learning

This paper theorizes that variations in learning and the level of learning of students are determined by the students' learning histories and the quality of instruction they receive.

Bloom, B. (1976). Human characteristics and school learning. New York: McGraw-Hill.

Who Leaves? Teacher Attrition and Student Achievement

The goal of this paper is to estimate the extent to which there is differential attrition based on teachers' value-added to student achievement.

Boyd, D., Grossman, P., Lankford, H., Loeb, S., & Wyckoff, J. (2008). Who leaves? Teacher attrition and student achievement. Working Paper No. 14022. Cambridge, MA: National Bureau of Economic Research. Retrieved from https://www.nber.org/papers/w14022

Explaining the short careers of high-achieving teachers in schools with low-performing students

This paper examines New York City elementary school teachers’ decisions to stay in the same school, transfer to another school in the district, transfer to another district, or leave teaching in New York state during the first five years of their careers.

Boyd, D., Lankford, H., Loeb, S., & Wyckoff, J. (2005). Explaining the short careers of high-achieving teachers in schools with low-performing students. American Economic Review, 95(2), 166-171.

The narrowing gap in New York City teacher qualifications and its implications for student achievement in high-poverty schools.

By estimating the effect of teacher attributes using a value-added model, the analyses in this paper predict that observable qualifications of teachers resulted in average improved achievement for students in the poorest decile of schools of .03 standard deviations.

Boyd, D., Lankford, H., Loeb, S., Rockoff, J., & Wyckoff, J. (2008). The narrowing gap in New York City teacher qualifications and its implications for student achievement in high-poverty schools. Journal of Policy Analysis and Management, 27(4), 793-818.

Buried Treasure: Developing a Management Guide From Mountains of School Data

This report provides a practical “management guide” for an evidence-based key indicator data decision system for school districts and schools.

Celio, M. B., & Harvey, J. (2005). Buried treasure: Developing a management guide from mountains of school data. Seattle, WA: University of Washington, Center on Reinventing Public Education.

Varying Intervention Delivery in Response to Intervention: Confronting and Resolving Challenges With Measurement, Instruction, and Intensity

Daly, E. J., III, Martens, B. K., Barnett, D., Witt, J. C., & Olson, S. C. (2007). Varying intervention delivery in response to intervention: Confronting and resolving challenges with measurement, instruction, and intensity. School Psychology Review, 36(4), 562-581.

Developments in Curriculum-Based Measurement

Curriculum-based measurement is a type of formative assessment.  It is used to screen for students who are not progressing and to identify how well students are responding to interventions.

Deno, S. L. (2003). Developments in Curriculum-Based Measurement. Journal of Special Education, 37(3), 184-192.

Developing Curriculum-Based Measurement Systems For Data Based Special Education Problem Solving

This article reviews the advantages of curriculum-based measurement as part of a data-based problem solving model.

Deno, S. L., & Fuchs, L. S. (1987). Developing Curriculum-Based Measurement Systems For Data Based Special Education Problem Solving. Focus on Exceptional Children, 19(8), 1-16.

Treatment Integrity: Fundamental to Education Reform

To produce better outcomes for students, two things are necessary: (1) effective, scientifically supported interventions, and (2) implementation of those interventions with high integrity. Typically, much greater attention has been given to identifying effective practices. This review focuses on features of high-quality implementation.

Detrich, R. (2014). Treatment integrity: Fundamental to education reform. Journal of Cognitive Education and Psychology, 13(2), 258-271.

Primary and secondary prevention of behavior difficulties: Developing a data-informed problem-solving model to guide decision making at a school-wide level

This article describes using formative assessment as a foundational tool in a data-based problem-solving approach to addressing social behavior problems.

Ervin, R. A., Schaughency, E., Matthews, A., Goodman, S. D., & McGlinchey, M. T. (2007). Primary and secondary prevention of behavior difficulties: Developing a data-informed problem-solving model to guide decision making at a school-wide level. Psychology in the Schools, 44(1), 7-18.

Single‐track year‐round education for improving academic achievement in U.S. K‐12 schools: Results of a meta‐analysis

This systematic review synthesizes the findings from 30 studies that compared the performance of students at schools using single-track year-round calendars to the performance of students at schools using a traditional calendar.

Fitzpatrick, D., & Burns, J. (2019). Single-track year-round education for improving academic achievement in US K-12 schools: Results of a meta-analysis. Campbell Systematic Reviews, 15(3), e1053.

Implementation Research: A Synthesis of the Literature

This is a comprehensive literature review of the topic of Implementation examining all stages beginning with adoption and ending with sustainability.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network.

Paradigmatic distinctions between instructionally relevant measurement models

This article compares and contrasts mastery level measures (grades) with curriculum-based measurement (global outcome measure).

Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57(6), 488-500.

Effects of Systematic Formative Evaluation: A Meta-Analysis

In this meta-analysis of studies that utilized formative assessment, the authors report an effect size of .70.

Fuchs, L. S., & Fuchs, D. (1986). Effects of Systematic Formative Evaluation: A Meta-Analysis. Exceptional Children, 53(3), 199-208.

Use of curriculum-based measurement in identifying students with disabilities

Curriculum-based measurement is recommended as an assessment method to identify students who require special education services.

Fuchs, L. S., & Fuchs, D. (1997). Use of curriculum-based measurement in identifying students with disabilities. Focus on Exceptional Children, 1.

Effects of Curriculum-Based Measurement on Teachers' Instructional Planning

This study examines the effect of formative assessment on teachers’ instructional planning.

Fuchs, L. S., Fuchs, D., & Stecker, P. M. (1989). Effects of Curriculum-Based Measurement on Teachers’ Instructional Planning. Journal of Learning Disabilities, 22(1).

Validity of High-School Grades in Predicting Student Success beyond the Freshman Year: High-School Record vs. Standardized Tests as Indicators of Four-Year College Outcomes

High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well.

Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes. Research & Occasional Paper Series: CSHE.6.07. Center for Studies in Higher Education.

The Importance and Decision-Making Utility of a Continuum of Fluency-Based Indicators of Foundational Reading Skills for Third-Grade High-Stakes Outcomes.

The authors contrast the functions of high-stakes testing with prevention-based assessment. The authors also show the value of using formative assessment to estimate performance on high-stakes tests.

Good, R.H., III., Simmons, D. C., & Kame’enui, E. J. (2001). The Importance and Decision-Making Utility of a Continuum of Fluency-Based Indicators of Foundational Reading Skills for Third-Grade High-Stakes Outcomes. Scientific Studies of Reading, 5(3), 257-288.

A Building-Based Case Study of Evidence-Based Literacy Practices: Implementation, Reading Behavior, and Growth in Reading Fluency, K--4.

Curriculum-based measures were used to evaluate student progress across multiple years following the introduction of selected evidence-based practices.

Greenwood, C. R., Tapia, Y., Abbott, M., & Walton, C. (2003). A Building-Based Case Study of Evidence-Based Literacy Practices: Implementation, Reading Behavior, and Growth in Reading Fluency, K--4. Journal of Special Education, 37(2), 95.

Can comprehension be taught? A quantitative synthesis of “metacognitive” studies

This quantitative review examines 20 studies to establish an effect size of .71 for the impact of “metacognitive” instruction on reading comprehension.

Haller, E. P., Child, D. A., & Walberg, H. J. (1988). Can comprehension be taught? A quantitative synthesis of “metacognitive” studies. Educational researcher, 17(9), 5-8.

Teacher training, teacher quality and student achievement

The authors study the effects of various types of education and training on the ability of teachers to promote student achievement.

Harris, D. N., & Sass, T. R. (2011). Teacher training, teacher quality and student achievement. Journal of Public Economics, 95(7–8), 798-812.

Visible learning: A synthesis of over 800 meta-analyses relating to achievement

Hattie’s book is designed as a meta-meta-study that collects, compares, and analyzes the findings of many previous studies in education. Hattie focuses on schools in the English-speaking world, but most aspects of the underlying story should be transferable to other countries and school systems as well. Visible Learning is nothing less than a synthesis of more than 50,000 studies covering more than 80 million pupils. Hattie uses the statistical measure effect size to compare the impact of many influences on students’ achievement, e.g., class size, holidays, feedback, and learning strategies.

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.

 

Identifying Specific Learning Disability: Is Responsiveness to Intervention the Answer?

Responsiveness to intervention (RTI) is being proposed as an alternative model for making decisions about the presence or absence of specific learning disability. The author argues that many questions about RTI remain unanswered and that radical changes in proposed regulations are not warranted at this time.

Kavale, K. A. (2005). Identifying specific learning disability: Is responsiveness to intervention the answer? Journal of Learning Disabilities, 38(6), 553-562.

Pulling back the curtain: Revealing the cumulative importance of high-performing, highly qualified teachers on students’ educational outcome

This study examines the relationship between two dominant measures of teacher quality, teacher qualification and teacher effectiveness (measured by value-added modeling), in terms of their influence on students’ short-term academic growth and long-term educational success (measured by bachelor’s degree attainment).

Lee, S. W. (2018). Pulling back the curtain: Revealing the cumulative importance of high-performing, highly qualified teachers on students’ educational outcome. Educational Evaluation and Policy Analysis, 40(3), 359–381.

Reading on grade level in third grade: How is it related to high school performance and college enrollment.

This study uses longitudinal administrative data to examine the relationship between third-grade reading level and four educational outcomes: eighth-grade reading performance, ninth-grade course performance, high school graduation, and college attendance.

Lesnick, J., Goerge, R., Smithgall, C., & Gwynne, J. (2010). Reading on grade level in third grade: How is it related to high school performance and college enrollment. Chicago: Chapin Hall at the University of Chicago, 1, 12.

Cost-effectiveness and educational policy.

This article provides a summary of approaches to measuring the fiscal impact of practices in educational policy.

Levin, H. M., & McEwan, P. J. (2002). Cost-effectiveness and educational policy. Larchmont, NY: Eye on Education.

Do Pay-for-Grades Programs Encourage Student Academic Cheating? Evidence from a Randomized Experiment

Using a randomized controlled trial in 11 Chinese primary schools, the authors studied the effects of pay-for-grades programs on academic cheating. They randomly assigned 82 classrooms to treatment or control conditions and used a statistical algorithm to determine the occurrence of cheating.

Li, T., & Zhou, Y. (2019). Do pay-for-grades programs encourage student academic cheating? Evidence from a randomized experiment. Frontiers of Education in China, 14(1), 117-137.

A Theory-Based Meta-Analysis of Research on Instruction.

This research synthesis examines instructional research in a functional manner to provide guidance for classroom practitioners.

Marzano, R. J. (1998). A theory-based meta-analysis of research on instruction. Aurora, CO: Mid-Continent Regional Educational Laboratory.

 

Classroom Instruction That Works: Research Based Strategies For Increasing Student Achievement

This book examines research-based classroom instruction strategies for increasing student engagement and achievement.

Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.

Improving education through standards-based reform.

This report offers recommendations for the implementation of standards-based reform and outlines possible consequences for policy changes. It summarizes both the vision and intentions of standards-based reform and the arguments of its critics.

McLaughlin, M. W., & Shepard, L. A. (1995). Improving education through standards-based reform. A report by the National Academy of Education Panel on Standards-Based Education Reform. Stanford, CA: National Academy of Education, Stanford University.

Building Capacity to Implement and Sustain Effective Practices to Better Serve Children

This article provides an overview of contextual factors across the levels of an educational system that influence implementation.

Schaughency, E., & Ervin, R. (2006). Building Capacity to Implement and Sustain Effective Practices to Better Serve Children. School Psychology Review, 35(2), 155-166. Retrieved from http://eric.ed.gov/?id=EJ788242


The Foundations of Educational Effectiveness

This book looks at research and theoretical models used to define educational effectiveness, with the intent of providing educators with evidence-based options for implementing school improvement initiatives that make a difference in student performance.

Scheerens, J., & Bosker, R. J. (1997). The foundations of educational effectiveness. Oxford, UK: Pergamon.

Overview of Formative Assessment

Effective ongoing assessment, referred to in the education literature as formative assessment or progress monitoring, is indispensable in promoting teacher and student success. Feedback through formative assessment is ranked at or near the top of practices known to significantly raise student achievement. For decades, formative assessment has been found to be effective in clinical settings and, more important, in typical classroom settings. Formative assessment produces substantial results at a cost significantly below that of other popular school reform initiatives such as smaller class size, charter schools, accountability, and school vouchers. It also serves as a practical diagnostic tool available to all teachers. A core component of formal and informal assessment procedures, formative assessment allows teachers to quickly determine if individual students are progressing at acceptable rates and provides insight into where and how to modify and adapt lessons, with the goal of making sure that students do not fall behind.

 

States, J., Detrich, R. & Keyworth, R. (2017). Overview of Formative Assessment. Oakland, CA: The Wing Institute. http://www.winginstitute.org/student-formative-assessment.

Progress Monitoring as Essential Practice Within Response to Intervention

Response to Intervention depends on regular, routine monitoring of student progress.  This paper describes a multi-component approach to monitoring progress.

Stecker, P. M., Fuchs, D., & Fuchs, L. S. (2008). Progress Monitoring as Essential Practice Within Response to Intervention. Rural Special Education Quarterly, 27(4), 10-17.

Using Curriculum-Based Measurement to Improve Student Achievement: Review of Research

This article reviews the efficacy of curriculum-based measurement as a methodology for enhancing student achievement in reading and math.  Variables that contribute to the benefit of curriculum-based measurement are discussed.

Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using Curriculum-Based Measurement to Improve Student Achievement: Review of Research. Psychology in the Schools, 42(8), 795-819.

Using Progress-Monitoring Data to Improve Instructional Decision Making

To determine the effectiveness of instruction, teachers require data about its effects. With these data, teachers are able to adjust instruction when progress is not being made.

Stecker, P. M., Lembke, E. S., & Foegen, A. (2008). Using Progress-Monitoring Data to Improve Instructional Decision Making. Preventing School Failure, 52(2), 48-58.

Keeping RTI on track: How to identify, repair and prevent mistakes that derail implementation

Keeping RTI on Track is a resource to assist educators in overcoming the biggest problems associated with false starts or implementation failure. Each chapter in this book calls attention to a common error, describing how to avoid the pitfalls that lead to false starts, how to determine when you're in one, and how to get back on the right track.

VanDerHeyden, A. M., & Tilly, W. D. (2010). Keeping RTI on track: How to identify, repair and prevent mistakes that derail implementation. Horsham, PA: LRP Publications.

Productive teaching

This literature review examines the impact of various instructional methods.

Walberg H. J. (1999). Productive teaching. In H. C. Waxman & H. J. Walberg (Eds.) New directions for teaching, practice, and research (pp. 75-104). Berkeley, CA: McCutchen Publishing.

Teacher use of interventions in general education settings: Measurement and analysis of the independent variable

This study evaluated the effects of performance feedback on increasing the quality of implementation of interventions by teachers in a public school setting.

Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693-696.

Can Rapid Assessment Moderate the Consequences of High-Stakes Testing

The author makes the case that rapid assessment can identify struggling students who can then be provided with intensive instruction so that their performance on high-stakes tests improves.

Yeh, S. S. (2006). Can Rapid Assessment Moderate the Consequences of High-Stakes Testing. Education & Urban Society, 39(1), 91-112.

High-Stakes Testing: Can Rapid Assessment Reduce the Pressure

The author reports data suggesting that the systematic use of formative assessment can reduce the pressure teachers experience from high-stakes testing.

Yeh, S. S. (2006). High-stakes testing: Can rapid assessment reduce the pressure? Teachers College Record, 108(4).

The Cost-Effectiveness of Five Policies for Improving Student Achievement

This study compares the effect size and return on investment of rapid assessment, increased spending, voucher programs, charter schools, and increased accountability.

Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 28(4), 416-436.


The Cost-Effectiveness of Comprehensive School Reform and Rapid Assessment

The author compares the effectiveness of comprehensive school reform relative to rapid progress monitoring. Progress monitoring results in much greater benefit than comprehensive school reform.

Yeh, S. S. (2008). The Cost-Effectiveness of Comprehensive School Reform and Rapid Assessment. Education Policy Analysis Archives, 16(13), 1-32.

The Cost-Effectiveness of Replacing the Bottom Quartile of Novice Teachers Through Value-Added Teacher Assessment

The authors examine the effectiveness of replacing low-performing teachers relative to using formative assessment as a means of improving student outcomes.

Yeh, S. S., & Ritter, J. (2009). The Cost-Effectiveness of Replacing the Bottom Quartile of Novice Teachers Through Value-Added Teacher Assessment. Journal of Education Finance, 34(4), 426-451. 
