PISA 2018 Results (Volume I): What Students Know and Can Do. Benchmark indicators are critical tools that help education stakeholders track their education system's performance over time, in comparison with other education systems at a similar level (state, national, international), and across student groups (ethnicity, disability, socioeconomic status, etc.). One of the most respected tools for benchmarking system performance is the Program for International Student Assessment (PISA), which tests 15-year-old students in reading, mathematics, and science across nearly 80 countries and educational systems. The results from the most recent testing (2018) were just released. The report contains an enormous amount of data; a summary of key findings follows.
Performance Over Time
U.S. test performance, despite small fluctuations, has been virtually flat over the past twelve to eighteen years (depending on the subject area). The reading score was 504 in 2000 and 505 in 2018. Math performance declined, dropping from 483 in 2003 to 478 in 2018. Science performance has held steady at 502 across the last four testing periods (2009–2018). Consistency is not inherently a bad thing, depending on how well a system is performing. However, PISA data suggest that the U.S. system is significantly underperforming compared with other international systems (see below). Consistency is also a problem when one considers that unprecedented investments in school reform efforts during this period (e.g., No Child Left Behind, School Improvement Grants, Race to the Top, and Every Student Succeeds) have failed to move the needle in any significant way.
Performance Compared to Other Countries
U.S. performance ranked thirteenth among participating nations in reading, thirty-eighth in math, and twelfth in science. These rankings represent a slight improvement over the 2015 results, but the change reflects lower scores among several top-performing nations, not U.S. improvement.
Performance Across Different Student Subgroups
One of the biggest takeaways from this report is the growing inequity between high- and low-performing students. The following chart shows the gap between the average scores of students at the 90th percentile of academic achievement and those at the 10th percentile.
Reading scores of the highest-performing students have increased over the last two tests from 614 to 643, while the scores of the lowest 10% have decreased from 378 to 361. The result is a widening gap between the top- and lowest-performing students. While improving the scores of the best-performing students is a laudable achievement, an education system must serve all of its students in the interest of equity.
Further analysis of the data shows a correlation between differences in student performance and socioeconomic status (SES). One of the metrics used to determine SES is whether students qualify for the National School Lunch Program.
These data show a direct correlation between a school's scores and the SES of its student body: the more low-SES students a school serves, the lower its PISA reading, math, and science scores.
It is almost impossible to document cause and effect with data at this level of analysis and control. Still, when making policy and program decisions, we must use the best available evidence. In this case, the best available evidence portrays an education system that, despite significant school improvement efforts, has shown little or no improvement over time, performs worse than a significant number of other nations' education systems, and continues to produce inequitable results.
A systematic review of single-case research on video analysis as professional development for special educators. Professional development is viewed as essential to providing teachers with the skills needed to be successful in the classroom. Research strongly supports the need to go beyond the typical in-service training commonly provided to teachers. Coaching and feedback have been found to be very effective in increasing the likelihood that training will be implemented in classrooms. The use of video has been offered as a cost-effective way for trainers to give teachers in training feedback based on their actual classroom use of the new skill(s).
Citation: Morin, K. L., Ganz, J. B., Vannest, K. J., Haas, A. N., Nagro, S. A., Peltier, C. J., … & Ura, S. K. (2019). A systematic review of single-case research on video analysis as professional development for special educators. The Journal of Special Education, 53(1), 3-14.
State Department of Education Support for Implementation Issues Faced by School Districts during the Curriculum Adoption Process. The results of this systematic review of the websites of all 50 of the departments of education in the United States show that relatively few states provide state-created curriculum evaluation tools in the areas of English/language arts and mathematics, and only one state provides a curriculum evaluation tool that thoroughly addresses issues of implementation. In the area of English/language arts, the implementation issue most commonly addressed is fit of an instructional program with the district. Evidence demonstrating the effectiveness of a curriculum and the district’s capacity to effectively implement a curriculum are the next two most frequently addressed implementation-related issues. In the area of mathematics, fit with the district is also the most commonly addressed implementation-related issue. The next two most frequently addressed implementation-related issues are supports for the personnel implementing the curriculum and the capacity of the district to successfully implement. Only one state provided a state-created evaluation tool that thoroughly addressed all aspects of implementation as defined by The Hexagon Tool. Interestingly, this tool was generic. It was not designed to be used with English/language arts or mathematics curricula specifically, but with a variety of innovations that districts may consider adopting.
Citation: Rolf, R. R. (2019). State Department of Education Support for Implementation Issues Faced by School Districts during the Curriculum Adoption Process. Oakland, CA: The Wing Institute. https://www.winginstitute.org/student-research-2019.
Do Pay-for-Grades Programs Encourage Student Cheating? Evidence from a randomized experiment. Pay-for-grades programs are designed to increase student academic performance. One claim of those opposing such incentive systems is that monetary incentives may lead to academic cheating. This randomized controlled study of 11 Chinese primary schools examined the effects of pay-for-grades programs on academic fraud. The study found widespread cheating among students in both the control and experimental groups, but no overall increase in cheating among students in the pay-for-grades program. The authors conclude that educators need to be on the lookout for academic dishonesty, especially on standardized tests, but that using moderate incentives to encourage student learning did not lead to increased gaming of the system.
Citation: Li, T., & Zhou, Y. (2019). Do Pay-for-Grades Programs Encourage Student Academic Cheating? Evidence from a Randomized Experiment. Frontiers of Education in China, 14(1), 117-137.
Research on informal teacher evaluation reveals that the predominant evaluation method is the walk-through, which ranges from a brief 2- to 3-minute snapshot to a longer observation. Studies support the important role principals play in instructional leadership but also suggest that principals are not good at identifying which teachers are the best instructors. Research finds that principals overwhelmingly understand the need to sample teacher performance but are rarely trained in how to do so.
Citation: Cleaver, S., Detrich, R., & States, J. (2019). Informal Teacher Evaluation. Oakland, CA: The Wing Institute. Retrieved from https://www.winginstitute.org/staff-informal.
The Feasibility of Collecting School-Level Finance Data: An Evaluation of Data from the School-Level Finance Survey (SLFS) School Year 2014–15. Few things are more complicated or more critical than collecting accurate and meaningful data on school finances at the individual school level. It is complicated because of the sheer size of the education system, the diversity of spending categories, differing state laws and regulations governing finances, and accounting systems not designed for this task. It is critical because the education system places high value on equitable and adequate funding for all students. Tracking spending at the individual school level is also a requirement of the recently enacted Every Student Succeeds Act.
This research and development report field-tested a new model for collecting finance data at the school level, the School-Level Finance Survey (SLFS). The pilot SLFS, collected for fiscal year (FY) 14 (school year 2013–14) and FY 15 (school year 2014–15), was designed to evaluate whether the survey is a viable, efficient, and cost-effective method of gathering comparable school-level finance data. The results suggest that, despite the inherent challenges, it is highly feasible to collect and report school-level finance data with acceptable accuracy. The report also projects improved response rates and increased availability of complete, accurate, and comparable school-level finance data as the number of states participating in the SLFS increases and the collection continues to expand.
Citation: Cornman, S.Q., Reynolds, D., Zhou, L., Ampadu, O., D’Antonio, L., Gromos, D., Howell, M., and Wheeler, S. (2019). The Feasibility of Collecting School-Level Finance Data: An Evaluation of Data From the School-Level Finance Survey (SLFS) School Year 2014–15 (NCES 2019-305). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Characteristics of Public and Private Elementary and Secondary School Principals in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look: The National Teacher and Principal Survey (NTPS) collects data from public and private K-12 schools, principals, and teachers across the United States. It provides critical data on core topics such as school characteristics and services, principal and teacher demographics, and teacher preparation. The most recent (2017-18) report examined traditional public, charter, and private school principals in terms of race/ethnicity, age, highest college degree, salary, years of experience (as a principal and at their current school), level of influence on decision-making, and experience with evaluations. A few of the more notable points include:
• Twenty-seven percent of school principals are 55 or older, a significant number who are likely to retire within five years.
• The average salary for school principals is $92,900.
• Over ninety percent (91.7%) of school principals have a Master’s Degree or higher.
• Almost half (44.3%) of school principals have fewer than three years of experience in their current schools.
• Seventy percent of school principals received evaluations during the survey year (79% in traditional public schools, 69% in charter schools, and 51% in private schools).
Citation: Taie, S., and Goldring, R. (2019). Characteristics of Public and Private Elementary and Secondary School Principals in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look (NCES 2019- 141). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Characteristics of Public and Private Elementary and Secondary Schools in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look. The National Teacher and Principal Survey (NTPS) collects data from public and private K-12 schools, principals, and teachers across the United States. It provides critical data on core topics such as school characteristics and services, principal and teacher demographics, and teacher preparation. The most recent (2017-18) report examined traditional public, charter, and private schools in terms of their participation in the federal free or reduced-price lunch program (FRLP), special education, English-language learners (ELLs) or limited-English proficient (LEP) students, extended school days, school start times, special emphasis schools, and minutes of instruction. One of the takeaways from the data is that traditional public and charter schools have almost identical statistics in these categories. Included in these data are the following:
Approximately 12% of all K-12 students have IEPs or formally identified disabilities: traditional public schools, 13%; charter schools, 11%; private schools, 7.5%. Ten percent of all K-12 students required ELL/LEP services: traditional public schools, 10.6%; charter schools, 10.2%; private schools, 2.6%.
The majority of public schools (96.6% of traditional public schools and 83.6% of charter schools) participated in the FRLP, with over half of all students receiving these services (55% of total students in each). Private schools were much less likely to participate, with only 18.8% of private schools participating and 8.7% of their students receiving FRLP.
Citation: Taie, S., and Goldring, R. (2019). Characteristics of Public and Private Elementary and Secondary Schools in the United States: Results From the 2017–18 National Teacher and Principal Survey First Look (NCES 2019-140). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
The Number of Low-Performing Schools by State in Three Categories (CSI, TSI, and ATSI), School Year 2018-19. The Every Student Succeeds Act (ESSA) gives individual states significant flexibility in how they identify “low-performing schools.” This decision is extremely important because identifying a school as low performing triggers mandates for states and districts to invest resources in improving it. The more schools identified, the bigger the responsibility. ESSA identifies three categories of low-performing schools. From most intensive to least, they are: Comprehensive Support and Improvement (CSI) schools, Targeted Support and Improvement (TSI) schools, and Additional Targeted Support and Improvement (ATSI) schools.
Ideally, each state would have consistent standards for identifying low-performing schools. To date, there is no formal system in place to monitor these new standards. This report, completed by the Center on Education Policy, attempts to provide an initial snapshot of the number and percentage of schools each state has identified as low performing. It has limitations: states are in the early stages of implementation and calibration, states offered varying degrees of cooperation, and some states had yet to complete implementation. Still, it provides an early look at a very diverse set of guidelines.
The following chart captures their results.
The data show a wide range in the percentage of schools identified as low performing. The overall range is 3% to 99%, with individual states spread fairly evenly in between. Eight states identified over 40% of their public schools as low performing; eleven states, 20%–40%; fifteen states, 11%–19%; and thirteen states, 3%–10%. Even with the limitations listed above, these data suggest inconsistent standards across states.
Citation: Stark Renter, D., Tanner, K., & Braun, M. (2019). The Number of Low-Performing Schools by State in Three Categories (CSI, TSI, and ATSI), School Year 2018-19. A Report of the Center on Education Policy.
An Investigation of Concurrent Validity of Fidelity of Implementation Measures at Initial Years of Implementation. Much of the effectiveness of newly introduced educational practices is lost within 18 months of their introduction in the classroom. Understanding why practices with solid research support fail is important to improving teacher effectiveness and student performance. Research suggests that practices implemented incorrectly are less likely to produce the desired outcomes. Research also finds that treatment fidelity (implementing practices as designed) begins to decline shortly after a new skill has been learned. This paper examines fidelity self-assessment and team-based fidelity measures during the first 4 years of implementation of School-Wide Positive Behavioral Interventions and Supports (SWPBIS). Results show strong positive correlations between fidelity self-assessments and a team-based measure of fidelity in each year of implementation.
Citation: Khoury, C. R., McIntosh, K., & Hoselton, R. (2019). An Investigation of Concurrent Validity of Fidelity of Implementation Measures at Initial Years of Implementation. Remedial and Special Education, 40(1), 25-31.