Categories for External Influences

Latest Data on Characteristics of Public and Private Elementary and Secondary School Principals in the United States

August 29, 2019

Characteristics of Public and Private Elementary and Secondary School Principals in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look. The National Teacher and Principal Survey (NTPS) collects data from public and private K-12 schools, principals, and teachers across the United States.  It provides critical data on core topics such as school characteristics and services, principal and teacher demographics, and teacher preparation.  The most recent 2017–18 report examined public (traditional), charter, and private school principals in terms of race/ethnicity, age, highest college degree, salary, years of experience (as a principal and at their current school), level of influence on decision-making, and experience with evaluations.  A few of the more notable points include:

•     Twenty-seven percent of school principals are 55 or older. This represents a significant number of principals who are likely to retire within five years.

•     The average salary for school principals is $92,900.

•     Over ninety percent (91.7%) of school principals have a Master’s Degree or higher.

•     Almost half (44.3%) of school principals have less than three years of experience in their current schools.

•     Seventy percent of school principals received evaluations in the selected year (79% in traditional public schools, 69% in charter schools, and 51% in private schools).

Citation: Taie, S., and Goldring, R. (2019). Characteristics of Public and Private Elementary and Secondary School Principals in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look (NCES 2019-141). U.S. Department of Education. Washington, DC: National Center for Education Statistics. 

Link: https://nces.ed.gov/pubs2019/2019141.pdf

 


 

Latest Data on Characteristics of Public (traditional), Charter, and Private Elementary and Secondary Schools in the United States

August 15, 2019

Characteristics of Public and Private Elementary and Secondary Schools in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look.  The National Teacher and Principal Survey (NTPS) collects data from public and private K-12 schools, principals, and teachers across the United States.  It provides critical data on core topics such as school characteristics and services, principal and teacher demographics, and teacher preparation.  The most recent 2017–18 report examined public (traditional), charter, and private schools in terms of their participation in the federal free or reduced-price lunch program (FRLP), special education, English-language learners (ELLs) or limited-English proficient (LEP) students, extended school days, school start times, special emphasis schools, and minutes of instruction. One of the takeaways from the data is that public (traditional) and charter schools have almost identical statistics in these categories.  Included in this data are the following:  

  • Approximately 12% of all K-12 students have IEPs or formally identified disabilities: 13% in traditional public schools, 11% in charter schools, and 7.5% in private schools.  Ten percent of all K-12 students required ELL/LEP services: 10.6% in traditional public schools, 10.2% in charter schools, and 2.6% in private schools.
  • The majority of public schools (96.6% of traditional public schools and 83.6% of charter schools) participated in the FRLP, with over half of all students receiving these services (55% of total students in each). Private schools were much less likely to participate: only 18.8% of private schools took part, and 8.7% of their students received FRLP services.   

Citation:  Taie, S., and Goldring, R. (2019). Characteristics of Public and Private Elementary and Secondary Schools in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look (NCES 2019-140). U.S. Department of Education. Washington, DC: National Center for Education Statistics. 

Link:  https://nces.ed.gov/pubs2019/2019140.pdf

 


 

Which strategies for increasing students’ opportunities to respond produce the best results?

August 15, 2019

Teacher-Delivered Strategies to Increase Students’ Opportunities to Respond: A Systematic Methodological Review. This systematic review of the literature examines the evidence behind teacher-directed strategies to increase students’ opportunities to respond (OTR) during whole-group instruction. The results indicate that the teacher-directed OTR strategy of response cards is a potentially evidence-based practice in K-12 school settings according to the Council for Exceptional Children’s Standards for Evidence-Based Practices in Special Education.

Citation: Common, E. A., Lane, K. L., Cantwell, E. D., Brunsting, N. C., Oakes, W. P., Germer, K. A., & Bross, L. A. (2019). Teacher-delivered strategies to increase students’ opportunities to respond: A systematic methodological review. Behavioral Disorders, 0198742919828310.

Link: https://journals.sagepub.com/doi/abs/10.1177/0198742919828310

 


 

Teacher Retention Overview (Wing Institute Original Paper)

August 6, 2019

Teacher turnover has been a persistent challenge; while the national rate has hovered at 16% in recent decades, more teachers are leaving the profession, contributing to teacher shortages in hard-to-staff subjects and schools. Higher attrition rates coupled with disproportionate teacher movement away from schools in economically disadvantaged communities have resulted in inequitable distributions of high-quality teachers across schools. Teacher turnover is quite costly and has primarily negative consequences for school operations, staff collegiality, and student learning. Turnover rates are highest among minority teachers working in high-need schools, beginning teachers, and those who are alternatively certified; higher rates are also found for those teaching math, science, and English as a foreign language, and for special education teachers. Teachers are less likely to be retained in schools with poor working conditions, particularly those led by principals perceived to be less effective, and in schools where they are paid less. Teacher retention may be improved with combinations of targeted financial incentives and improved working conditions (e.g., better principal preparation), and with effective induction and mentoring programs that support early-career teachers. Linking financial incentives with enhanced leadership opportunities and career paths also offers potential for retaining effective teachers in classrooms where they are most needed. 

Citation: Donley, J., Detrich, R., Keyworth, R., & States, J. (2019). Teacher Retention. Oakland, CA: The Wing Institute.

Link:  https://www.winginstitute.org/quality-teachers-retention

 


 

Do later school starting times offer a cost-effective method for improving student performance?

July 29, 2019

Answering the Bell: High School Start Times and Student Academic Outcomes. Research in the area of health and sleep has encouraged educators and policymakers to look to delaying school start times as an intervention with the potential to improve achievement and other relevant student outcomes. At this time, studies of later school start times show mixed results, although enough studies exist to suggest that moving back the start of the school day can contribute to improving lagging student performance. This research finds that starting school later is associated with reduced suspensions and higher course grades, and these studies suggest disadvantaged students may especially benefit from delayed start times. Much of the earlier research on the topic was conducted with small sample sizes, a gap this study attempts to fill. To increase the sample size needed to confirm previous research, Bastian and Fuller use statewide student-level data from North Carolina to estimate start time effects for all students and for traditionally disadvantaged students. Statewide achievement results were mixed, with both positive and negative associations found between start times and high school students’ test scores. Bastian and Fuller call for further research to increase confidence that later start times predictably produce desired outcomes.  Studies of sufficient rigor, using multiple populations and different settings, are required to address remaining issues and possible unintended consequences associated with changing start times.  

Citation: Bastian, K. C., & Fuller, S. C. (2018). Answering the Bell: High School Start Times and Student Academic Outcomes. AERA Open, 4(4), 2332858418812424.

Link: https://journals.sagepub.com/doi/pdf/10.1177/2332858418812424

 


 

How effective are attention deficit hyperactivity disorder (ADHD) interventions?

July 29, 2019

Attention Deficit Hyperactivity Disorders and Classroom-Based Interventions: Evidence-Based Status, Effectiveness, and Moderators of Effects in Single-Case Design Research. Students with attention deficit hyperactivity disorder (ADHD) often struggle academically and behaviorally, and the issues associated with ADHD challenge teachers in meeting these students’ needs. This meta-analysis of single-case design studies evaluates intervention effectiveness, evidence-based status, and moderators of effects for four intervention types (behavioral, instructional, self-management, and environmental) when implemented with students with ADHD in classroom settings. The study suggests that interventions targeting academic learning strategies and behavioral challenges produced medium effect sizes. The instructional and self-management interventions examined in this study were deemed evidence-based by What Works Clearinghouse standards.

Citation: Harrison, J. R., Soares, D. A., Rudzinski, S., & Johnson, R. (2019). Attention Deficit Hyperactivity Disorders and Classroom-Based Interventions: Evidence-Based Status, Effectiveness, and Moderators of Effects in Single-Case Design Research. Review of Educational Research, 0034654319857038.

Link: https://journals.sagepub.com/doi/full/10.3102/0034654319857038

 


 

Does research on gaps in student learning caused by summer breaks hold up?

July 23, 2019

Do Test Score Gaps Grow Before, During, or Between the School Years? Measurement Artifacts and What We Can Know in Spite of Them. Gaps in achievement between students of lower socioeconomic status (SES) and students of color and their peers continue to concern educators and the public. One of the more influential studies to examine this issue was the Beginning School Study (BSS) of students in Baltimore City Public Schools in 1982 (Alexander and Entwisle, 2003). The authors found that an achievement gap exists at the time students enter elementary school. More importantly, they concluded that the discrepancy in performance widened after each summer break, tripling in size by the end of middle school. 

A more recent study, published in 2019 by von Hippel and Hamrock, offers evidence to counter the Alexander and Entwisle (2003) claims, suggesting that the growing gap is an artifact of the testing and measurement methods used in the earlier research. Von Hippel and Hamrock conclude that the scaling method, Thurstone scaling (frequently used in the 1960s and 1970s), is flawed and is responsible for the original findings. Thurstone scaling has subsequently been replaced in research by more effective methods such as item response theory (IRT). When the data from the study were reanalyzed using IRT, the gaps shrank. The new study concludes that gaps are already significant by the time children start school and remain relatively stable until graduation. 

The von Hippel and Hamrock research looked at test score gaps for a range of populations: between boys and girls; between black, white, and Hispanic children; between children whose mothers differ in education; between children in poor and nonpoor families; and between high-poverty and low-poverty schools. The researchers wanted to know whether gaps grow faster during summer or during the school year; they were unable to answer this question, as the results were inconclusive. However, von Hippel and Hamrock did find that the growth in the gap from kindergarten to eighth grade is substantially smaller than the gap that already exists when children enter school. 

Von Hippel and Hamrock highlight two measurement artifacts that skewed Alexander and Entwisle’s results: test score scaling and changes of test content. Scaling is a mathematical method that transforms right and wrong answers into a test score; not all scales produce the same results, which has important implications for whether and when score gaps appear. Along with concluding that a gap between SES populations tripled between first and eighth grade, Alexander and Entwisle found that it was during summer vacations that the gap increased each year. Von Hippel and Hamrock note that the BSS used CAT Form C, a “fixed-form” paper test. In first grade, all BSS children took a test that contained a fixed, unvarying set of questions in fall and spring. This makes sense when you want to know whether students are meeting learning expectations over a specific grade. 

But Alexander and Entwisle wanted to understand the impact of summer breaks on learning, not learning during a school year. To obtain this information, they used the first-grade test given at the end of the school year and compared it to the second-grade test given in the fall. Using the spring first-grade test and then switching to the second-grade test in the fall confounds the effect of summer break with the change in test form. Von Hippel and Hamrock propose that changing the test form may have distorted the results. Alexander and Entwisle’s was not the only seasonal learning study to use fixed forms that changed after the summer; using fixed-form tests was a common practice in research from the 1960s into the 1990s. Von Hippel and Hamrock’s study suggests the summer learning literature was potentially vulnerable to artifacts related to scaling and changes of test form. 

Fixed-form tests have since been replaced by adaptive tests, which are less vulnerable to artifacts that might distort estimates of summer learning. Adaptive tests do not ask the same questions of all students; instead, they adjust the difficulty of each question based on the student’s earlier performance. Hence, adaptive tests are a better tool for gauging the impact of summer on student achievement. 
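To make the distinction concrete, the sketch below simulates the basic logic of an adaptive test under a simple one-parameter (Rasch) IRT model. It is only an illustration of the general technique described above, not the scoring procedure used in the study; the item bank, the difficulty values, and the grid-search ability estimate are all simplifying assumptions for this example.

```python
import math
import random

def rasch_probability(ability, difficulty):
    """Probability of a correct answer under a one-parameter (Rasch) IRT model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses, lo=-4.0, hi=4.0, steps=200):
    """Rough maximum-likelihood ability estimate via grid search.

    `responses` is a list of (difficulty, correct) pairs for items already answered.
    """
    best_theta, best_loglik = 0.0, float("-inf")
    for i in range(steps + 1):
        theta = lo + (hi - lo) * i / steps
        loglik = 0.0
        for difficulty, correct in responses:
            p = rasch_probability(theta, difficulty)
            loglik += math.log(p if correct else 1.0 - p)
        if loglik > best_loglik:
            best_theta, best_loglik = theta, loglik
    return best_theta

def next_item(item_bank, theta, asked):
    """Choose the unasked item whose difficulty is closest to the current estimate."""
    remaining = [d for d in item_bank if d not in asked]
    return min(remaining, key=lambda d: abs(d - theta))

def simulate_adaptive_test(true_ability, item_bank, n_items=10, seed=0):
    """Administer n_items adaptively, re-estimating ability after each response."""
    rng = random.Random(seed)
    responses, asked = [], set()
    theta = 0.0  # start from an average ability estimate
    for _ in range(n_items):
        difficulty = next_item(item_bank, theta, asked)
        asked.add(difficulty)
        correct = rng.random() < rasch_probability(true_ability, difficulty)
        responses.append((difficulty, correct))
        theta = estimate_ability(responses)
    return theta

if __name__ == "__main__":
    # Hypothetical item bank with difficulties spread from -3 to +3 logits.
    bank = [d / 4.0 for d in range(-12, 13)]
    print(f"estimated ability: {simulate_adaptive_test(true_ability=1.2, item_bank=bank):.2f}")
```

Because each student sees items matched to their running ability estimate rather than a fixed form that changes between grades, this style of testing avoids the form-change confound that von Hippel and Hamrock identify in the older seasonal learning studies.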

The von Hippel and Hamrock study concludes that gaps grow fastest in early childhood. They find no evidence of a gap doubling between first grade and eighth grade, and some disparities even shrank. The summer gap growth does not hold up when the flawed instrument is replaced with adaptive tests scored using IRT ability scales; when summer learning gaps are present, most are small and not easily detectable. The conclusion is that gaps emerge mostly in the first five years of life, and resources currently used to solve a summer learning gap that does not appear to exist should be redirected toward early childhood education. Von Hippel and Hamrock’s study suggests that providing early remedial instruction to students who are behind their peers when they enter kindergarten is the most efficacious way to improve overall performance.

Citation: von Hippel, P. T., & Hamrock, C. (2019). Do test score gaps grow before, during, or between the school years? Measurement artifacts and what we can know in spite of them. Sociological Science, 6, 43-80.

Link: https://www.sociologicalscience.com/download/vol-6/january/SocSci_v6_43to80.pdf

 


 

States’ Identification of Low Performing Schools Under the Every Student Succeeds Act (ESSA)

May 31, 2019

The Number of Low-Performing Schools by State in Three Categories (CSI, TSI, and ATSI), School Year 2018-19. The Every Student Succeeds Act (ESSA) gives individual states significant flexibility in how they identify “low performing schools.” This decision is extremely important, as identifying a school as low performing triggers mandates for states and districts to invest resources to improve it. The more schools identified, the bigger the responsibilities. ESSA identifies three categories of low-performing schools. Going from most intensive to least, they are: Comprehensive Support and Improvement (CSI) schools, Targeted Support and Improvement (TSI) schools, and Additional Targeted Support and Improvement (ATSI) schools.

Ideally, each state would have consistent standards for identifying schools that are low performing, but to date there is no formal system in place to monitor these new standards. This report, completed by the Center on Education Policy, attempts to provide an initial snapshot of the number and percentage of schools each state has identified as low performing. It has limitations in that states are in the early stages of implementation and calibration, states offered varying degrees of cooperation, and some states had yet to complete implementation. Still, it does provide an early look at a very diverse set of guidelines.  

The following chart captures their results.  

[Chart: number and percentage of low-performing schools identified by each state; source: Center on Education Policy (2019)]

The data show a wide range of results in terms of the percentage of schools identified as low performing. The overall range is 3% to 99%, with individual states spread out fairly evenly in between: eight states identified over 40% of their public schools as low performing, eleven states 20%–40%, fifteen states 11%–19%, and thirteen states 3%–10%. Even with the limitations listed above, these data suggest inconsistent standards across states.

Citation: Stark Rentner, D., Tanner, K., & Braun, M. (2019). The Number of Low-Performing Schools by State in Three Categories (CSI, TSI, and ATSI), School Year 2018-19. A Report of the Center on Education Policy.

Link: http://educationresearchreport.blogspot.com/2019/05/number-of-low-performing-schools-by.html

 


 

U.S. College Graduation Rates

May 30, 2019

Persistence, Retention, and Attainment of 2011–12 First-Time Beginning Postsecondary Students as of Spring 2017: First Look. This First Look report provides findings from the 2012/17 Beginning Postsecondary Students Longitudinal Study, a national survey of undergraduate students entering postsecondary education for the first time.  It follows a cohort of beginning students over a six-year period, examining persistence, retention, and attainment (degrees conferred). 

The overall graduation rate for first-time, full-time undergraduate students who began seeking a bachelor’s degree at a 4-year degree-granting institution in fall 2012 was 60 percent; that is, by 2017 approximately 60 percent of these students had completed a bachelor’s degree. The 6-year graduation rate was 59 percent at public institutions, 74 percent at private nonprofit institutions, and 14 percent at private for-profit institutions.  Such low graduation rates are cause for concern for a host of reasons, not least because the 40% of students who do not graduate have acquired student debt that cannot be offset by the value of having a degree.

The following data examine postsecondary graduation rates over the past seven years by institutional control: public, private non-profit, and private for-profit. 

Data from The Condition of Education (2019, 2018, 2017, 2016, 2015, 2014, 2013)


Postsecondary graduation rates at public institutions have stayed virtually the same for seven years, with 57% of students graduating in 2011 and 59% in 2017.  Private non-profit institutions remained at a 65-66% graduation rate for the first six years, increasing by 8 percentage points in 2017.  Private for-profit institutions fared the worst, decreasing consistently from 42% in 2011 to 14% in 2017.  

Citation: Chen, X., Elliott, B.G., Kinney, S.K., Cooney, D., Pretlow, J., Bryan, M., Wu, J., Ramirez, N.A., and Campbell, T. (2019). Persistence, Retention, and Attainment of 2011–12 First-Time Beginning Postsecondary Students as of Spring 2017 (First Look) (NCES 2019-401). U.S. Department of Education. Washington, DC: National Center for Education Statistics. 

Link: https://nces.ed.gov/pubs2019/2019401.pdf

 


 

What is the impact of classroom management for ethnically diverse student populations?

May 15, 2019

Classroom management for ethnic–racial minority students: A meta-analysis of single-case design studies. Demographic changes in the United States support the need to examine the impact of evidence-based classroom management interventions for students from ethnically and racially diverse backgrounds. Research consistently shows that African American students receive harsher and more exclusionary discipline, and that African American, Latinx, and Native American children are subject to punitive consequences disproportionate to their numbers in the population. This meta-analysis of behavior management strategies includes 22 single-case design studies covering 838 students in K-12 classrooms. Half of the included studies met the What Works Clearinghouse design standards for single-case design methodology. The study finds that the behavior management strategies are highly effective for improving student conduct; interventions that used an individual or group contingency demonstrated large effects and were the most common strategies used. Few of the studies included diverse populations other than African American students, and the authors conclude that there is a need to increase the number of studies of diverse student populations when examining classroom management, as well as a need to improve the quality of available studies on classroom management strategies.

Citation: Long, A. C. J., Miller, F. G., & Upright, J. J. (2019). Classroom management for ethnic–racial minority students: A meta-analysis of single-case design studies. School Psychology, 34(1), 1-13. http://dx.doi.org/10.1037/spq0000305

Link: https://psycnet.apa.org/record/2018-65318-001?_ga=2.74983442.1820096213.1557858091-255501757.1557858091