The poverty excuse for poor school performance again rears its head.
The 2018 SOL data tell us, yet again, to look elsewhere for the causes of poor school performance.
Before we look at those new data, let’s clear away some underbrush:
- It is beyond question that poor kids underperform on the SOL tests. For example, in the 2018 state average pass rate data, the “economically disadvantaged” (here abbreviated “ED”) students underperform their more affluent peers by 21.7 points on the reading tests and 20.0 points on the math.
- Correlation, however, does not imply causation. The cause of this underperformance may well be something related to poverty, not the ED itself.
- The data tell us that economic disadvantage has much lower correlation with SOL pass rates than do other factors. See below.
- Even with the flawed SOL yardstick, we can identify schools and divisions that perform well and that underperform. Also see below.
- The State Board of “Education” had, but has abandoned, a poverty-neutral measure of academic growth, the Student Growth Percentile. The SGP data showed large variations in division (here and here) and teacher performance.
- Poverty makes a perfect excuse for poor school performance because some portion of the population will always be less affluent than the citizenry in general. And, for sure, the schools can’t fix poverty, so they like to blame that external factor for their own failures.
- There are indications that even perfectly awful schools with large numbers of ED students can be improved.
Turning to the 2018 data, let’s start with the division pass rates on the reading tests vs. the percentage of the economically disadvantaged students.
A glance at the graph tells the same story as the statistics: As the ED percentage increases, the scores go down but there is a huge amount of scatter. Some divisions greatly outscore the trendline and some grossly underperform. Clearly, some factor(s) must have a much stronger relationship with SOL performance than the incidence of poverty in the student population.
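The trendline-and-scatter reading of that graph can be sketched in a few lines of code. The numbers below are illustrative only, not the actual 2018 division data; the point is how a fitted slope and an R² value quantify "scores go down, but with a huge amount of scatter":

```python
import numpy as np

# Illustrative division-level data (NOT the actual 2018 SOL numbers):
# percent economically disadvantaged vs. reading pass rate.
ed_pct    = np.array([20, 35, 45, 55, 60, 70, 80], dtype=float)
pass_rate = np.array([88, 84, 86, 72, 81, 65, 74], dtype=float)

# Least-squares trendline: pass_rate ~ slope * ed_pct + intercept
slope, intercept = np.polyfit(ed_pct, pass_rate, 1)

# R-squared measures how much of the scatter the trendline explains;
# a low value means other factors dominate the pass rates.
predicted = slope * ed_pct + intercept
ss_res = np.sum((pass_rate - predicted) ** 2)
ss_tot = np.sum((pass_rate - pass_rate.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.2f} points per 1% ED, R^2 = {r_squared:.2f}")
```

A division well above the trendline (predicted pass rate) is outperforming what its ED percentage alone would predict; one well below is underperforming it.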
Richmond is the gold square on the graph. The peer cities are the red diamonds, from the left: Hampton, Newport News, and Norfolk.
Richmond’s cry of “poverty” is falsified by these data: All but one of the 22 divisions with ED populations larger than Richmond’s outperformed Richmond.
The math data tell the same story.
Note: The ED percentages here and below are of the students taking the tests in question, so they differ between the two subjects. For example, Richmond is 68.2% on the reading tests and 67.1% on the math.
Of course, division averages do not separate out the performance of the schools with larger or smaller ED populations. So let’s take a look at the data by school.
Note: Some of the smaller schools are absent from these graphs because of the VDOE suppression rules as to small groups of students.
I’ve broken the data out by grade. First, reading, in the elementary grades:
These are modest correlations, especially in the non-ED data, with both groups showing roughly the same change with increasing ED percentage (between 1.6% and 2.0% decrease per 10% increase in ED).
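Those slopes translate directly into predicted pass-rate changes. A minimal sketch, assuming a slope at the low end of the quoted range (the 1.6-points-per-10%-ED figure):

```python
# Assumed slope at the low end of the quoted range: a 1.6-point drop
# in pass rate for every 10-point rise in ED percentage.
slope_per_ed_point = -1.6 / 10  # pass-rate points per 1% ED

def predicted_change(ed_increase_pct: float) -> float:
    """Predicted pass-rate change for a given rise in ED percentage."""
    return slope_per_ed_point * ed_increase_pct

print(predicted_change(10))  # roughly -1.6 points
print(predicted_change(25))  # roughly -4.0 points
```

Even a 25-point jump in ED percentage predicts only about a 4-point drop in the pass rate at that slope, which is why the scatter, not the trend, carries the story.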
On to middle school:
These are about the trends we might expect but with some better (albeit still modest) correlations. One interesting difference: It looks like the effect of increasing ED percentage is about a third larger on the non-ED than on the ED pass rates.
I’ll spare you the math data by school. They tell the same story but with lower scores and even more scatter.
The bottom line: At the school level, as at the division level, by far the largest effect on SOL pass rates relates to some factor(s) other than the relative number of economically disadvantaged students.
And we know what one of the other factors (probably the most important one) is: teacher effectiveness. For example, the SGP data showed for one year (I think it was 2014):
Only one of Richmond’s twenty-one sixth grade reading teachers produced an average student improvement better than the state average; none was more than two standard deviations above the statewide average. Six (or seven, depending on the rounding) were more than two standard deviations below the state average and four were more than three standard deviations below. The Richmond average was 1.5 standard deviations below the state average.
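Those standard-deviation comparisons are ordinary z-scores. A minimal sketch with made-up teacher averages (the actual Richmond SGP figures are not reproduced here, and the state mean and standard deviation below are assumed values):

```python
# Hypothetical teacher-average SGPs (NOT the actual Richmond data).
teacher_avgs = [28.0, 31.5, 35.0, 52.0, 40.0]
state_mean = 50.0  # assumed statewide average SGP
state_sd   = 7.0   # assumed std. deviation of teacher averages

# z-score: how many standard deviations a teacher's average sits
# above (positive) or below (negative) the state average.
z_scores = [(t - state_mean) / state_sd for t in teacher_avgs]

# Teachers more than two standard deviations below the state average.
below_two_sd = [t for t, z in zip(teacher_avgs, z_scores) if z < -2]
print(z_scores)
print(below_two_sd)
```

With these assumed numbers, three of the five hypothetical teachers fall more than two standard deviations below the state mean, the kind of tally the paragraph above reports for the real data.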
And the Richmond picture was even worse in 6th Grade math:
Note: State average there was 50.6.
Thus, not only is it futile to blame poverty for poor school performance, but we also know at least one place where we can improve learning: teacher performance.