The Feasibility of Collecting School-Level Finance Data: An Evaluation of Data from the School-Level Finance Survey (SLFS) School Year 2014–15. Few things are more complicated, or more critical, than collecting accurate and meaningful data on school finances at the individual school level. It is complicated because of the sheer size of the education system, the diversity of spending categories, differing state laws and regulations governing finances, and accounting systems not designed for this task. It is critical because the education system places a high value on equitable and adequate funding for all students. Tracking spending at the individual school level is also a requirement of the recently enacted Every Student Succeeds Act.
This research and development report field-tested a new model for collecting finance data at the school level: the School-Level Finance Survey (SLFS). The pilot SLFS, collected for fiscal year (FY) 14 (school year 2013–14) and FY 15 (school year 2014–15), was designed to evaluate whether the survey is a viable, efficient, and cost-effective method of gathering comparable school-level finance data. The results suggest that, despite the inherent challenges, it is highly feasible to collect and report school-level finance data with acceptable accuracy. The report also projects improved response rates and increased availability of complete, accurate, and comparable school-level finance data as the number of states participating in the SLFS increases and the collection continues to expand.
Citation: Cornman, S.Q., Reynolds, D., Zhou, L., Ampadu, O., D’Antonio, L., Gromos, D., Howell, M., and Wheeler, S. (2019). The Feasibility of Collecting School-Level Finance Data: An Evaluation of Data From the School-Level Finance Survey (SLFS) School Year 2014–15 (NCES 2019-305). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Characteristics of Public and Private Elementary and Secondary School Principals in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look. The National Teacher and Principal Survey (NTPS) collects data from public and private K-12 schools, principals, and teachers across the United States, providing critical information on core topics such as school characteristics and services, principal and teacher demographics, and teacher preparation. The most recent 2017–18 report examined public (traditional), charter, and private school principals in terms of race/ethnicity, age, highest college degree, salary, years of experience (as a principal and at their current school), level of influence on decision-making, and experience with evaluations. A few of the more notable points include:
• Twenty-seven percent of school principals are 55 or older. This represents a significant number of principals who are likely to retire within five years.
• The average salary for school principals is $92,900.
• Over ninety percent (91.7%) of school principals have a master’s degree or higher.
• Almost half (44.3%) of school principals have fewer than three years of experience at their current schools.
• Seventy percent of school principals received evaluations in the survey year (79% in traditional public schools, 69% in charter schools, and 51% in private schools).
Citation: Taie, S., and Goldring, R. (2019). Characteristics of Public and Private Elementary and Secondary School Principals in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look (NCES 2019- 141). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Characteristics of Public and Private Elementary and Secondary Schools in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look. The National Teacher and Principal Survey (NTPS) collects data from public and private K-12 schools, principals, and teachers across the United States, providing critical information on core topics such as school characteristics and services, principal and teacher demographics, and teacher preparation. The most recent 2017–18 report examined public (traditional), charter, and private schools in terms of their participation in the federal free or reduced-price lunch programs (FRLP), special education, English-language learner (ELL) or limited-English proficient (LEP) services, extended school days, school start times, special emphasis schools, and minutes of instruction. One takeaway from the data is that public (traditional) and charter schools have almost identical statistics in these categories. Included in the data are the following:
Approximately 12% of all K-12 students have IEPs or formally identified disabilities: 13% in traditional public schools, 11% in charter schools, and 7.5% in private schools. Ten percent of all K-12 students required ELL/LEP services: 10.6% in traditional public schools, 10.2% in charter schools, and 2.6% in private schools.
The majority of public schools (96.6% of traditional public schools and 83.6% of charter schools) participated in the FRLP, with over half of all students receiving these services (55% of total students in each). Private schools were much less likely to participate: only 18.8% of private schools took part, and only 8.7% of their students received FRLP services.
Citation: Taie, S., and Goldring, R. (2019). Characteristics of Public and Private Elementary and Secondary Schools in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look (NCES 2019-140). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Teacher-Delivered Strategies to Increase Students’ Opportunities to Respond: A Systematic Methodological Review. This systematic review of the literature examines the evidence behind teacher-directed strategies to increase students’ opportunities to respond (OTR) during whole-group instruction. The results indicate that the teacher-directed OTR strategy of response cards in K-12 school settings is a potentially evidence-based practice according to the Council for Exceptional Children’s Standards for Evidence-Based Practices in Special Education.
Citation: Common, E. A., Lane, K. L., Cantwell, E. D., Brunsting, N. C., Oakes, W. P., Germer, K. A., & Bross, L. A. (2019). Teacher-delivered strategies to increase students’ opportunities to respond: A systematic methodological review. Behavioral Disorders, 0198742919828310.
Teacher turnover has been a persistent challenge; while the national rate has hovered at 16% in recent decades, more teachers are leaving the profession, contributing to teacher shortages in hard-to-staff subjects and schools. Higher attrition rates, coupled with disproportionate teacher movement away from schools in economically disadvantaged communities, have resulted in inequitable distributions of high-quality teachers across schools. Teacher turnover is quite costly and has primarily negative consequences for school operations, staff collegiality, and student learning. Turnover rates are highest among minority teachers working in high-need schools, beginning teachers, and those who are alternatively certified; higher rates are also found among teachers of math, science, and English as a foreign language, and among special education teachers. Teachers are less likely to be retained in schools with poor working conditions, particularly those led by principals perceived to be less effective, and in schools where they are paid less. Teacher retention may be improved with combinations of targeted financial incentives and improved working conditions (e.g., better principal preparation), and through stronger supports for early-career teachers via effective induction and mentoring programs. Linking financial incentives with enhanced leadership opportunities and career paths also offers potential for retaining effective teachers in the classrooms where they are most needed.
Answering the Bell: High School Start Times and Student Academic Outcomes. Research on health and sleep has encouraged educators and policymakers to consider delaying school start times as an intervention with the potential to improve achievement and other relevant student outcomes. To date, studies of later school start times show mixed results, although enough studies exist to suggest that moving back the start of the school day can help improve lagging student performance. This research finds that starting school later is associated with reduced suspensions and higher course grades, and suggests that disadvantaged students may especially benefit from delayed start times. The study attempts to fill a gap in the research on later start times, as much of the earlier work was conducted with small samples. To obtain the sample size needed to confirm previous research, Bastian and Fuller use statewide student-level data from North Carolina to estimate start-time effects for all students and for traditionally disadvantaged students. Statewide achievement results were mixed, with both positive and negative associations found between start times and high school students’ test scores. Bastian and Fuller counsel further research to increase confidence that later start times predictably produce the desired outcomes. Sufficiently rigorous studies, using multiple populations and conducted across different settings, are required to address remaining issues and possible unintended consequences of changing start times.
Citation: Bastian, K. C., & Fuller, S. C. (2018). Answering the Bell: High School Start Times and Student Academic Outcomes. AERA Open, 4(4), 2332858418812424.
Attention Deficit Hyperactivity Disorders and Classroom-Based Interventions: Evidence-Based Status, Effectiveness, and Moderators of Effects in Single-Case Design Research. Students with attention deficit hyperactivity disorder (ADHD) often struggle academically and behaviorally, and the issues associated with ADHD challenge teachers to meet these students’ needs. This meta-analysis of single-case design studies evaluates intervention effectiveness, evidence-based status, and moderators of effects for four intervention types (behavioral, instructional, self-management, and environmental) when implemented with students with ADHD in classroom settings. The study finds that interventions targeting academic learning strategies and behavioral challenges produced medium effect sizes. The instructional and self-management interventions examined were deemed evidence-based by What Works Clearinghouse standards.
Citation: Harrison, J. R., Soares, D. A., Rudzinski, S., & Johnson, R. (2019). Attention Deficit Hyperactivity Disorders and Classroom-Based Interventions: Evidence-Based Status, Effectiveness, and Moderators of Effects in Single-Case Design Research. Review of Educational Research, 0034654319857038.
Do Test Score Gaps Grow Before, During, or Between the School Years? Measurement Artifacts and What We Can Know in Spite of Them. Gaps in achievement between students of lower socioeconomic status (SES) and their peers, and between students of color and their peers, continue to concern educators and the public. One of the more influential studies to examine this issue was the Beginning School Study (BSS) of students in Baltimore City Public Schools, begun in 1982 (Alexander and Entwisle, 2003). The authors found that an achievement gap already exists when students enter elementary school. More importantly, they concluded that the discrepancy in performance widened after each summer break, tripling in size by the end of middle school.
A more recent study, published in 2019 by von Hippel and Hamrock, offers evidence countering the Alexander and Entwisle (2003) claims, suggesting that the growing gap is an artifact of the testing and measurement methods used in that research. Von Hippel and Hamrock conclude that the scaling method, Thurstone scaling (frequently used in the 1960s and 1970s), is flawed and is responsible for the original findings. Thurstone scaling has since been replaced in research by more effective methods such as item response theory (IRT). When the data from the study were reanalyzed using IRT, the gaps shrank. The new study concludes that gaps are already significant by the time children start school and remain relatively stable through graduation.
The von Hippel and Hamrock research examined test score gaps for a range of populations: between boys and girls; between black, white, and Hispanic children; between children whose mothers had different levels of education; between children in poor and nonpoor families; and between high-poverty and low-poverty schools. The researchers wanted to know whether gaps grow faster during the summer or during the school year, but the results were inconclusive. However, von Hippel and Hamrock did find that the growth in the performance gap from kindergarten to eighth grade is substantially smaller than the gap that already exists when children enter school.
Von Hippel and Hamrock highlight two measurement artifacts that skewed Alexander and Entwisle’s results: test score scaling and changes of test content. Scaling is the mathematical method that transforms right and wrong answers into a test score. Not all scales produce the same results, with important implications for whether and when score gaps appear. Along with concluding that the gap between SES populations tripled between first and eighth grade, Alexander and Entwisle found that it was during summer vacations that the gap really increased each year. Von Hippel and Hamrock note that the BSS used CAT Form C, a “fixed-form” paper test: in first grade, all BSS children took a test containing a fixed, unvarying set of questions in both fall and spring. This makes sense when you want to know whether students are meeting learning expectations within a specific grade.
But Alexander and Entwisle wanted to understand the impact of summer breaks on learning, not learning during a school year. To obtain this information, they compared a test designed for first graders, taken at the end of the school year, with the second-grade test given in the fall. Switching from the first-grade spring test to the second-grade fall test to measure the impact of summer break confounds summer learning with the change in test form. Von Hippel and Hamrock propose that changing the test form may have distorted the results. Alexander and Entwisle’s was not the only seasonal learning study to use fixed forms that changed after the summer; using fixed-form tests was common practice in research from the 1960s into the 1990s. Von Hippel and Hamrock’s study suggests the summer learning literature was potentially vulnerable to artifacts related to scaling and changes of test form.
Fixed-form tests have since been replaced by adaptive tests, which are less vulnerable to artifacts that might affect summer learning estimates. Adaptive tests do not ask the same questions of all students; instead, they measure ability by adjusting the difficulty of the questions based on the student’s earlier performance. Adaptive tests are therefore a better tool for gauging the impact of summer on student achievement.
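The IRT scoring and adaptive item selection described above can be illustrated with a minimal sketch. This is a generic textbook illustration, assuming the simplest one-parameter (Rasch) logistic model and a naive "closest-difficulty" selection rule; the function names and the item bank are hypothetical, not the actual instruments or algorithms used in the studies summarized here.

```python
import math

def p_correct(theta, b):
    """Rasch (one-parameter IRT) model: probability that a student
    with ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta_hat, unused_items):
    """Toy adaptive rule: serve the unused item whose difficulty is
    closest to the current ability estimate -- the item where
    p_correct is near 0.5 and thus most informative."""
    return min(unused_items, key=lambda b: abs(b - theta_hat))

# Hypothetical item bank of difficulties. A student currently
# estimated at theta = 0.0 is served the 0.5-difficulty item,
# rather than the same fixed question as every other student.
bank = [-2.0, -1.0, 0.5, 1.5]
item = next_item(0.0, bank)
```

Because each student's score is an ability estimate on a common scale rather than a count of right answers on one fixed form, comparisons across grades (and across the summer) are less sensitive to which form the student happened to take.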
The von Hippel and Hamrock study concludes that gaps grow fastest in early childhood. They find no evidence of a gap doubling between first grade and eighth grade, and some disparities even shrank. The summer gap growth does not hold up when the flawed instrument is replaced with adaptive tests scored using IRT ability scales; where summer learning gaps are present, most are small and not easily detectable. The conclusion is that gaps emerge mostly in the first five years of life. Resources currently used to address a summer learning gap that does not appear to exist should be redirected toward early childhood education. Von Hippel and Hamrock’s study suggests that giving early remedial instruction to students who are behind their peers when they enter kindergarten is the most efficacious way to improve overall performance.
Citation: von Hippel, P. T., & Hamrock, C. (2019). Do test score gaps grow before, during, or between the school years? Measurement artifacts and what we can know in spite of them. Sociological Science, 6, 43-80.
The Number of Low-Performing Schools by State in Three Categories (CSI, TSI, and ATSI), School Year 2018-19. The Every Student Succeeds Act (ESSA) gives individual states significant flexibility in how they identify “low-performing schools.” This decision is extremely important because identifying a school as low performing triggers mandates for states and districts to invest resources to improve it; the more schools identified, the bigger the responsibilities. ESSA defines three categories of low-performing schools. From most intensive to least, they are: Comprehensive Support and Improvement (CSI) schools, Targeted Support and Improvement (TSI) schools, and Additional Targeted Support and Improvement (ATSI) schools.
Ideally, each state would have consistent standards for identifying low-performing schools, but to date there is no formal system in place to monitor these new standards. This report, completed by the Center on Education Policy, attempts to provide an initial snapshot of the number and percentage of schools each state has identified as low performing. It has limitations: states are in the early stages of implementation and calibration, states offered varying degrees of cooperation, and some states had yet to complete implementation. Still, it provides an early look at a very diverse set of guidelines.
The following chart captures their results.
The data show a wide range of results in the percentage of schools identified as low performing. The overall range is 3% to 99%, with individual states spread fairly evenly in between. Eight states identified over 40% of their public schools as low performing, eleven states identified 20%–40%, fifteen states 11%–19%, and thirteen states 3%–10%. Even with the limitations noted above, these data suggest inconsistent standards across states.
Citation: Stark Rentner, D., Tanner, K., & Braun, M. (2019). The Number of Low-Performing Schools by State in Three Categories (CSI, TSI, and ATSI), School Year 2018-19. A Report of the Center on Education Policy.
Persistence, Retention, and Attainment of 2011–12 First-Time Beginning Postsecondary Students as of Spring 2017: First Look. This First Look report provides findings from the 2012/17 Beginning Postsecondary Students Longitudinal Study, a national survey of undergraduate students entering postsecondary education for the first time. It follows a cohort of beginning students over a six-year period, examining persistence, retention, and attainment (degrees conferred).
The overall graduation rate for first-time, full-time undergraduate students who began seeking a bachelor’s degree at a 4-year degree-granting institution in fall 2012 was 60 percent. That is, by 2017 approximately 60 percent of these students had completed a bachelor’s degree. The 6-year graduation rate was 59 percent at public institutions, 74 percent at private nonprofit institutions, and 14 percent at private for-profit institutions. Such low graduation rates are cause for concern for a host of reasons, not least because the 40 percent of students who did not graduate may have acquired student debt without the offsetting value of a degree.
The following data examine postsecondary graduation rates over the past seven years by institutional control: public, private nonprofit, and private for-profit.
Postsecondary graduation rates at public institutions have stayed virtually the same for seven years, with 57% of students graduating in 2011 and 59% in 2017. Private nonprofit institutions remained at 65–66% for the first six years before increasing by 8 percentage points in 2017. Private for-profit institutions fared the worst, declining steadily from 42% in 2011 to 14% in 2017.
Citation: Chen, X., Elliott, B.G., Kinney, S.K., Cooney, D., Pretlow, J., Bryan, M., Wu, J., Ramirez, N.A., and Campbell, T. (2019). Persistence, Retention, and Attainment of 2011–12 First-Time Beginning Postsecondary Students as of Spring 2017 (First Look) (NCES 2019-401). U.S. Department of Education. Washington, DC: National Center for Education Statistics.