
How effective are year-round schools in overcoming students’ summer break performance loss?

October 1, 2019

Single-track year-round education for improving academic achievement in U.S. K-12 schools: Results of a meta-analysis. This Campbell Collaboration systematic meta-analysis examines the impact of reducing summer breaks on student academic performance. Research shows that students lose ground in math and reading over summer breaks. Year-round education (YRE) redistributes school days to shorten the summer break. This study finds that students at single-track YRE schools show modestly higher achievement in both math and reading, and the size of that improvement is similar to estimates of summer learning loss. The effects were similar across student groups. Given that summer learning loss is thought to be greater among students from disadvantaged groups, it was unexpected that the estimated impacts for low-income and minority students were about the same magnitude as, or smaller than, those for the full sample.

Citation: Fitzpatrick, D., & Burns, J. (2019). Single-track year-round education for improving academic achievement in US K-12 schools: Results of a meta-analysis. Campbell Systematic Reviews, 15(3), e1053.

Link: https://onlinelibrary.wiley.com/doi/full/10.1002/cl2.1053 and https://onlinelibrary.wiley.com/doi/epdf/10.1002/cl2.1053

 


 

What is the impact of cash incentives for grades on student cheating?

September 30, 2019

Do Pay-for-Grades Programs Encourage Student Academic Cheating? Evidence from a Randomized Experiment. Pay-for-grades programs are designed to increase student academic performance. One claim made by opponents of such incentive systems is that monetary incentives may lead to academic cheating. This randomized controlled study of 11 Chinese primary schools examines the effects of pay-for-grades programs on academic fraud. The study found widespread cheating among students in both the control and experimental groups, but no overall increase in cheating among students in the pay-for-grades program. The authors conclude that educators need to remain alert for academic dishonesty, especially on standardized tests, but that using moderate incentives to encourage student learning did not lead to increased gaming of the system.

Citation: Li, T., & Zhou, Y. (2019). Do Pay-for-Grades Programs Encourage Student Academic Cheating? Evidence from a Randomized Experiment. Frontiers of Education in China, 14(1), 117-137.

Link: https://link.springer.com/article/10.1007/s11516-019-0005-9

 


 

A look at John Hattie’s latest work

September 27, 2019

Visible Learning Insights. This book by John Hattie and Klaus Zierer is written for teachers, education researchers, and anyone interested in the latest research on the efficacy of education practices. It offers an overview of 1,400 meta-analyses and builds on the work Hattie began with Visible Learning, published over a decade ago, which provided educators with a synthesis of 800 meta-analyses.

Citation: Hattie, J., & Zierer, K. (2019). Visible Learning Insights. Routledge.

Link: https://www.routledge.com/Visible-Learning-Insights-1st-Edition/Hattie-Zierer/p/book/9781138549692

 


 

Teacher Retention Analysis Overview (Wing Institute Original paper)

August 27, 2019

Matching the supply of teachers to demand is a constantly evolving challenge. During recessions, schools are forced to lay off teachers; as economic conditions improve, schools regain resources and rehire personnel. Currently, American schools face the most severe shortages in special education; science, technology, engineering, and math (STEM); and bilingual education. Shortages vary across the country and are most acute in areas with lower wages and in poor schools. Starting in the 1980s, schools began filling vacancies with under-qualified personnel hired on emergency or temporary credentials. A 35% drop in pre-service enrollment and high teacher attrition currently constrain the supply. Candidates and veteran teachers leave teaching because of low compensation, stressful working conditions, and a perceived decline in respect for the profession. The demand side is influenced primarily by fluctuations in population, finances, and education policy. Matching supply to demand is a challenge, but it can be accomplished through better planning, less volatile funding sources, and improved working conditions, including better pay and effective training.

Citation: Donley, J., Detrich, R., Keyworth, R., & States, J. (2019). Teacher Retention Analysis Overview. Oakland, CA: The Wing Institute. https://www.winginstitute.org/teacher-retention-turnover-analysis

Link: https://www.winginstitute.org/teacher-retention-turnover-analysis

 


 

Teacher Retention Overview (Wing Institute Original Paper)

August 6, 2019

Teacher turnover has been a persistent challenge; while the national rate has hovered around 16% in recent decades, more teachers are leaving the profession, contributing to teacher shortages in hard-to-staff subjects and schools. Higher attrition rates, coupled with disproportionate teacher movement away from schools in economically disadvantaged communities, have resulted in inequitable distributions of high-quality teachers across schools. Teacher turnover is costly and has largely negative consequences for school operations, staff collegiality, and student learning. Turnover rates are highest among minority teachers working in high-need schools, beginning teachers, and those who are alternatively certified; higher rates are also found for teachers of math, science, and English as a foreign language, and for special education teachers. Teachers are less likely to stay in schools with poor working conditions, particularly those led by principals perceived to be less effective, and in schools where they are paid less. Teacher retention may be improved through combinations of targeted financial incentives and improved working conditions (e.g., better principal preparation), and through better supports for early-career teachers such as effective induction and mentoring programs. Linking financial incentives with enhanced leadership opportunities and career paths also offers potential for retaining effective teachers in the classrooms where they are most needed.

Citation: Donley, J., Detrich, R., Keyworth, R., & States, J. (2019). Teacher Retention. Oakland, CA: The Wing Institute. https://www.winginstitute.org/quality-teachers-retention

Link:  https://www.winginstitute.org/quality-teachers-retention

 


 

Do later school starting times offer a cost-effective method for improving student performance?

July 29, 2019

Answering the Bell: High School Start Times and Student Academic Outcomes. Research on health and sleep has encouraged educators and policymakers to consider delaying school start times as an intervention with the potential to improve achievement and other relevant student outcomes. To date, studies of later start times show mixed results, although enough evidence exists to suggest that moving back the start of the school day can help improve lagging student performance. This research finds that starting school later is associated with reduced suspensions and higher course grades, and suggests that disadvantaged students may especially benefit from delayed start times. The study attempts to fill a research gap, as much of the earlier work on later start times relied on small samples. To obtain the larger sample needed to confirm previous research, Bastian and Fuller use statewide student-level data from North Carolina to estimate start-time effects for all students and for traditionally disadvantaged students. Statewide achievement results were mixed, with both positive and negative associations found between start times and high school students’ test scores. Bastian and Fuller call for further research to increase confidence that later start times predictably produce the desired outcomes. Studies of sufficient rigor, conducted with multiple populations and across different settings, are needed to address remaining questions and possible unintended consequences associated with changing start times.

Citation: Bastian, K. C., & Fuller, S. C. (2018). Answering the Bell: High School Start Times and Student Academic Outcomes. AERA Open, 4(4), 2332858418812424.

Link: https://journals.sagepub.com/doi/pdf/10.1177/2332858418812424

 


 

Does research on gaps in student learning caused by summer breaks hold up?

July 23, 2019

Do Test Score Gaps Grow Before, During, or Between the School Years? Measurement Artifacts and What We Can Know in Spite of Them. Achievement gaps affecting students of lower socioeconomic status (SES) and students of color continue to concern educators and the public. One of the more influential studies to examine this issue was the Beginning School Study (BSS) of students in Baltimore City Public Schools begun in 1982 (Alexander and Entwisle, 2003). The authors found that an achievement gap already exists when students enter elementary school. More importantly, they concluded that the discrepancy in performance widened after each summer break, tripling in size by the end of middle school.

A more recent study, published in 2019 by von Hippel and Hamrock, offers evidence to counter Alexander and Entwisle’s claims, suggesting that the growing gap is an artifact of the tests and measurement methods used in the earlier research. Von Hippel and Hamrock conclude that the scaling method, Thurstone scaling (frequently used in the 1960s and 1970s), is flawed and is responsible for the original findings. Thurstone scaling has since been replaced in research by more effective methods such as item response theory (IRT). When the data from the study were reanalyzed using IRT, the gaps shrank. The new study concludes that gaps are already significant by the time children start school and remain relatively stable through graduation.

The von Hippel and Hamrock research examined test score gaps for a range of comparisons: between boys and girls; between black, white, and Hispanic children; between children of more- and less-educated mothers; between children in poor and nonpoor families; and between high-poverty and low-poverty schools. The researchers wanted to know whether gaps grow faster during summer or during the school year, but they were unable to answer this question because the results were inconclusive. Von Hippel and Hamrock did find, however, that the growth in gaps from kindergarten to eighth grade is substantially smaller than the gaps that already exist when children enter school.

Von Hippel and Hamrock highlight two measurement artifacts that skewed Alexander and Entwisle’s results: test score scaling and changes in test content. Scaling is the mathematical method that transforms right and wrong answers into a test score. Not all scales produce the same results, which has important implications for whether and when score gaps appear to grow. Along with concluding that the gap between SES groups tripled between first and eighth grade, Alexander and Entwisle found that it was during summer vacations that the gap grew each year. Von Hippel and Hamrock note that the BSS used CAT Form C, a “fixed-form” paper test: in first grade, all BSS children took a test containing the same unvarying set of questions in both fall and spring. This makes sense when you want to know whether students are meeting learning expectations within a specific grade.

But Alexander and Entwisle wanted to understand the impact of summer breaks on learning, not learning during the school year. To obtain this information, they compared the first-grade test taken at the end of the school year with the second-grade test given in the fall. Measuring the impact of summer break by using the spring first-grade test and then switching to the second-grade test in the fall confounds summer learning with differences between the two test forms. Von Hippel and Hamrock propose that changing the test form may have distorted the results. Alexander and Entwisle’s was not the only seasonal learning study to use fixed forms that changed after the summer; fixed-form tests were common in research from the 1960s into the 1990s. Von Hippel and Hamrock’s study suggests the summer learning literature as a whole was potentially vulnerable to artifacts related to scaling and changes of test form.
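To make the scaling and form-change artifacts concrete, the sketch below is a minimal illustration, not from the study itself: the student abilities, item difficulties, and the simple Rasch IRT model are assumptions chosen for demonstration. It shows how a fixed ability gap between two hypothetical students can look very different on the percent-correct scale depending on which fixed form is administered.

```python
# Illustrative sketch only: hypothetical abilities and item difficulties under a
# simple Rasch IRT model, not data or methods from von Hippel and Hamrock.
import numpy as np

def expected_percent_correct(theta, difficulties):
    """Expected percent correct under a Rasch model: P = 1 / (1 + exp(-(theta - b)))."""
    b = np.asarray(difficulties, dtype=float)
    return 100 * np.mean(1.0 / (1.0 + np.exp(-(theta - b))))

# Two hypothetical students whose ability gap on the IRT (logit) scale is fixed at 1.0.
theta_high, theta_low = 0.5, -0.5

# Two hypothetical fixed forms: one well targeted to these students, one very easy.
on_target_form = [-1.0, -0.5, 0.0, 0.5, 1.0]
very_easy_form = [-4.0, -3.5, -3.0, -2.5, -2.0]

for name, form in [("on-target form", on_target_form), ("very easy form", very_easy_form)]:
    hi = expected_percent_correct(theta_high, form)
    lo = expected_percent_correct(theta_low, form)
    print(f"{name}: {hi:.0f}% vs {lo:.0f}%  ->  raw-score gap = {hi - lo:.0f} points")

# The ability gap never changes, yet the raw-score gap shrinks on the easy form
# (a ceiling effect). Switching forms between spring and fall can therefore make
# a gap appear to grow or shrink even when underlying learning does not change.
```

Under these illustrative assumptions, the gap is roughly 22 percentage points on the well-targeted form but only about 5 points on the very easy form, even though the two students’ abilities never change.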

Fixed-form tests have largely been replaced by adaptive tests, which are less vulnerable to artifacts that might affect estimates of summer learning. Adaptive tests do not ask the same questions of all students; instead, they adjust the difficulty of each question based on the student’s earlier responses. Hence, adaptive tests are a better tool for gauging the impact of summer on student achievement.
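A rough sense of how an adaptive test tracks a student’s performance is given by the sketch below. It is a minimal illustration assuming a hypothetical item bank, a simple Rasch model, and a basic maximum-likelihood ability estimate, rather than any particular operational testing system.

```python
# Minimal illustrative adaptive-testing loop; the item bank, true ability, and
# selection rule are hypothetical, not taken from any real assessment program.
import numpy as np

def rasch_p(theta, b):
    """Probability of a correct answer under a Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_theta(responses, difficulties, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood estimate of ability from answered items."""
    r = np.asarray(responses, dtype=float)
    b = np.asarray(difficulties, dtype=float)
    p = rasch_p(grid[:, None], b[None, :])
    log_lik = (r * np.log(p) + (1 - r) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(log_lik)]

rng = np.random.default_rng(0)
item_bank = np.linspace(-3, 3, 25)   # hypothetical calibrated item difficulties
true_theta = 0.8                     # the (unknown) ability being measured

asked, answers = [], []
theta_hat = 0.0                      # start from an average-ability guess
for _ in range(10):
    # Choose the unused item whose difficulty is closest to the current estimate,
    # so question difficulty rises or falls with the student's demonstrated performance.
    remaining = [i for i in range(len(item_bank)) if i not in asked]
    next_item = min(remaining, key=lambda i: abs(item_bank[i] - theta_hat))
    asked.append(next_item)
    answers.append(rng.random() < rasch_p(true_theta, item_bank[next_item]))
    theta_hat = estimate_theta(answers, item_bank[asked])

print(f"Estimated ability after 10 adaptive items: {theta_hat:+.2f} logits")
```

Because each student sees items matched to his or her current estimate, results land on a common ability scale rather than on the scale of one particular fixed form, which is why such tests are less exposed to the form-change artifact described above.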

The von Hippel and Hamrock study concludes that gaps grow fastest in early childhood. They find no evidence of gaps doubling between first grade and eighth grade, and some disparities even shrank. The apparent summer growth in gaps does not hold up when the flawed instrument is replaced with adaptive tests scored on IRT ability scales. Where summer learning gaps are present, most are small and not easily detectable. The conclusion is that gaps emerge mostly in the first five years of life, and that resources currently devoted to closing a summer learning gap that does not appear to exist should be redirected toward early childhood education. Von Hippel and Hamrock’s study suggests that giving early remedial instruction to students who are behind their peers when they enter kindergarten is the most efficacious way to improve overall performance.

Citation: von Hippel, P. T., & Hamrock, C. (2019). Do test score gaps grow before, during, or between the school years? Measurement artifacts and what we can know in spite of them. Sociological Science, 6, 43-80.

Link: https://www.sociologicalscience.com/download/vol-6/january/SocSci_v6_43to80.pdf

 


 

Active Student Responding (Wing Institute Original Paper)

May 8, 2019

Active Student Responding (ASR) is a powerful set of low-cost strategies teachers can use to improve student achievement. ASR occurs when a student answers questions or responds in other ways that communicate the student’s understanding of the content being taught during the lesson. The more opportunities the student has to respond, the more likely it is that the student is learning. Increasing active responses allows teachers to rapidly assess performance. As opportunities to respond increase, so do opportunities for praise and corrective feedback, which accelerates learning. Attending and being on task are insufficient indicators of whether learning is occurring; for a teacher to know whether a student is actually learning, a written, action-based, or oral response is required. The more opportunities students have to respond, the more quickly they master lessons. ASR strategies are designed to engage all students regardless of class size, and ASR avoids the common problem of having only high achievers answer questions while low achievers remain silent and escape detection. Examples of ASR strategies include guided notes, response slates, response cards, and choral responding.

Citation: States, J., Detrich, R., & Keyworth, R. (2019). Active Student Responding (ASR) Overview. Oakland, CA: The Wing Institute. https://www.winginstitute.org/instructional-delivery-student-respond

Link: https://www.winginstitute.org/instructional-delivery-student-respond

 


 

Does National Board for Professional Teaching Standards (NBPTS) certification have a positive effect on student outcomes?

April 17, 2019

What Works Clearinghouse Intervention Report: National Board for Professional Teaching Standards Certification. NBPTS was established in 1987 to foster “high and rigorous standards for what accomplished teachers should know and be able to do” (NBPTS mission statement). As a voluntary national system, NBPTS certifies that a teacher has taught in the field and meets certification requirements for best practices for instruction and pedagogy. The standards reflect five core propositions: (1) effective teachers are committed to students and their learning, (2) effective teachers know the subjects they teach and how to teach those subjects to students, (3) effective teachers manage and monitor student learning, (4) effective teachers think systematically about their practice and learn from experience, and (5) effective teachers are members of learning communities. 

The process requires teachers to pay a fee and can take from 3 months to several years to complete. School districts have come to view the process as a way to improve student achievement, allocating scarce resources in the form of performance compensation to encourage teachers who acquire certification. The What Works Clearinghouse review found NBPTS-certified teachers had mixed effects on mathematics achievement and no discernible effects on English language arts achievement for students in grades 3 through 8. 

Citation: Mathematica Policy Research. (2018). What Works Clearinghouse Intervention Report: National Board for Professional Teaching Standards Certification. Washington, DC: U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse. Retrieved from https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_nbpts_021318.pdf

Link: https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_nbpts_021318.pdf

 


 

How does attending a charter school impact success in college?

April 17, 2019

Do Charter Middle Schools Improve Students’ College Outcomes? This study examines the impact of charter schools on college enrollment. The National Center for Education Evaluation and Regional Assistance (NCEE) used college enrollment and completion data for students who, more than a decade earlier, had entered lotteries for admission to 31 charter middle schools across the United States. College outcomes were compared for 1,723 randomly selected “lottery winners” and 1,150 randomly selected “lottery losers”. The results show that admission to a charter middle school did not affect college outcomes. Additionally, the study finds no consistent relationship between a charter middle school’s impact on achievement and its impact on college outcomes.

Citation: Place, K., & Gleason, P. (2019). Do Charter Middle Schools Improve Students’ College Outcomes? (Study Highlights). Mathematica Policy Research.

Link: https://ies.ed.gov/ncee/pubs/20194005/index.asp