Rigged Grading

A large study by the OECD concludes: “[T]he socio-economic composition of schools explains far more of the differences in student performance between schools than do other school factors that are more easily amenable to policy makers, such as school resources and school policies.” That is consistent with the Virginia data, which show economically disadvantaged (“ED”) students underperforming their more affluent peers (“Not ED”) by about twenty points in SOL pass rates.

The SOL reporting system ignores this issue and punishes schools with larger populations of less affluent students. Four Richmond elementary schools illustrate the situation.

Fox and Munford serve relatively affluent neighborhoods; both schools turn in superb SOL results. Cary and Obama have much tougher clienteles and do not show as well. Nonetheless, Cary and Obama get better ED pass rates than do Fox and Munford.

Hard to believe, eh? Well, look at this. And notice how the SOL average is affected by the %ED.

[Chart: all-student SOL reading pass rates, ED and Not ED pass rates, and %ED at Fox, Munford, Cary, and Obama]

The SOL average over all students punishes Cary and Obama for the relative poverty of the majority of their students, even though those students outscore the ED students at Fox and Munford. At the same time, Munford and Fox are rewarded for having small ED populations.

If we look at the averages of the ED and Not ED rates, a different pattern emerges.

[Chart: averages of the ED and Not ED reading pass rates at the four schools]

In terms of those averages, all four schools cluster near the 75% accreditation benchmark. This calculation rewards Cary and Obama for the superior performance of their tougher-to-teach students and recognizes the inferior results at Fox and Munford.
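
To make the arithmetic concrete, here is a minimal Python sketch of the two calculations. The pass rates and %ED figures below are invented for illustration, not the actual Richmond numbers. The all-student average is a weighted mean, so the ED share drives the result; the ED/Not ED average is a simple mean of the two group rates and ignores the ED share entirely.

```python
# Hypothetical pass rates, for illustration only (not the Richmond data).

def overall_average(ed_rate, not_ed_rate, pct_ed):
    """All-student pass rate: a weighted mean, so %ED drives the result."""
    return pct_ed * ed_rate + (1 - pct_ed) * not_ed_rate

def ed_not_ed_average(ed_rate, not_ed_rate):
    """Simple mean of the two group rates: independent of %ED."""
    return (ed_rate + not_ed_rate) / 2

# Two hypothetical schools: similar group performance, very different %ED.
affluent = dict(ed_rate=0.65, not_ed_rate=0.85, pct_ed=0.20)
poor = dict(ed_rate=0.70, not_ed_rate=0.80, pct_ed=0.80)

print(overall_average(**affluent))              # 0.81: looks "good"
print(overall_average(**poor))                  # 0.72: looks "bad"
print(ed_not_ed_average(0.65, 0.85))            # 0.75
print(ed_not_ed_average(0.70, 0.80))            # 0.75
```

Note that the "poor" school posts the higher ED pass rate (70% vs. 65%) yet the lower all-student average; that is the same inversion the charts show for Cary and Obama versus Fox and Munford.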

On the math tests, Cary escapes most of the SOL penalty by way of a very high ED pass rate, but Obama confirms the point. The Fox and Munford SOLs again shine, despite relatively lower ED pass rates.

[Chart: all-student SOL math pass rates, ED and Not ED pass rates, and %ED at the four schools]

The ED/Not ED average again tells a more balanced story.

[Chart: averages of the ED and Not ED math pass rates at the four schools]

The Board of Education bases its accreditation calculation on the SOL average over all students in a school or division. Indeed, the public measure of school quality is that same average. That rewards schools and divisions with fewer ED students, whether or not they get good results with those ED students, and penalizes schools and divisions with large ED populations, even when they get better-than-average results with their ED students.

A plot of division average reading pass rates and ED/Not ED averages vs. the %ED illuminates the difference between the SOL average pass rate and the ED/Not ED average.

[Scatter plot: division average reading pass rates (red) and ED/Not ED averages (green) vs. %ED, with fitted lines]

The line fitted to the SOL averages (red) suggests that pass rates decrease, on average, by about 2.5 points for each 10-point increase in the ED percentage. The R-squared of 23% suggests a modest correlation. Indeed, we have seen that the correlation derives almost entirely from the effect of the ED group’s generally lower scores.

The green points show the averages of the ED and Not ED pass rates. The slope of the fitted line is much smaller, minus 0.7 points per 10-point increase in %ED, and the R-squared of 2.3% denotes only a trace of correlation. Put another way, this average is almost entirely unrelated to the percentage of ED students in the division.
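
For anyone who wants to reproduce the fit, a sketch along the following lines, run against the division pass rates, yields those slopes and R-squared values. The data below are synthetic stand-ins, generated on the assumption that the two group rates are roughly independent of %ED; the fitting uses scipy's linregress.

```python
# Fit pass rates against %ED on synthetic division data
# (random stand-ins, not the actual SOL numbers).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 130                                   # roughly Virginia's division count
pct_ed = rng.uniform(20, 80, size=n)      # %ED per division

# Assume the group rates do not depend on %ED (the point at issue).
ed_rate = rng.normal(68, 6, size=n)       # ED pass rates, percent
not_ed_rate = rng.normal(88, 4, size=n)   # Not ED pass rates, percent

overall = (pct_ed * ed_rate + (100 - pct_ed) * not_ed_rate) / 100
group_avg = (ed_rate + not_ed_rate) / 2

for name, y in [("all-student average", overall),
                ("ED/Not ED average", group_avg)]:
    fit = linregress(pct_ed, y)
    # fit.slope * 10 = change in pass rate per 10-point rise in %ED
    print(f"{name}: {10 * fit.slope:+.1f} points per 10% ED, "
          f"R-squared = {fit.rvalue ** 2:.0%}")
```

Even though the group rates here are built to be independent of %ED, the all-student average comes out with a clearly negative slope and a substantial R-squared, while the ED/Not ED average fits a nearly flat line. That is the pattern in the plots.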

The math and science data tell the same story.

[Scatter plot: division math pass rates and ED/Not ED averages vs. %ED]

[Scatter plot: division science pass rates and ED/Not ED averages vs. %ED]

An ideal grading system would present a horizontal fitted line with an R-squared of zero. The ED/Not ED average comes close and, in any event, is vastly less unfair than the current system.

BTW: The Board of Education had an even better measure, the SGP (Student Growth Percentile), which was uncorrelated with economic status. The Board abandoned that system because it provided an accurate measure of teacher performance.

So we are left with a reporting system that punishes schools and divisions that serve larger populations of poorer students.

If that is a fair system, I am Santa Claus.