English-learning students across Texas have faced an uphill battle with the Texas English Language Proficiency Assessment System (TELPAS). In 2018, the test was redesigned, and the shift to automated computer scoring has led to a troubling trend — drastically lower scores that don’t seem to reflect students’ true abilities, according to a Texas Tribune analysis.
For years, TELPAS allowed educators to engage with students directly, assessing their English skills through human interaction. Since the introduction of computer scoring, however, many students who previously excelled are struggling to meet the necessary benchmarks. The result? A mere 10% of students are achieving the highest score in speaking, down from 50% before the redesign. The same automated scoring now used for portions of TELPAS is also being applied to STAAR grading, a reminder that automation is not always best.
These low scores carry consequences for students and for the school districts they attend. Students who don’t meet TELPAS standards are often stuck in remedial English courses, limiting their access to electives and advanced classes. This can hinder their academic growth and college prospects, a reality that many of our bilingual educators find deeply frustrating.
Additionally, the Texas Education Agency (TEA) uses TELPAS scores to calculate the A-F accountability ratings it has created. Not only has the agency built a flawed accountability system, it has also built flawed tests that do not accurately measure students’ English capabilities.
The TEA defends the changes, citing the need for standardized testing and faster results. However, the accuracy and fairness of the automated system remain in question. When educators challenge TELPAS scores, human rescoring often produces higher results, suggesting that the system may not be adequately capturing students’ true abilities.