It is commonplace to expect that an increasing population of poorer (the official euphemism is “economically disadvantaged”) students lowers average test scores. That certainly is the case among the Virginia school divisions.

The VDOE database reports SOL scores for the ED group, as well as for the disabled, limited English proficient (“LEP”), migrant, and homeless groups. VDOE’s data definitions of those groups are here.

The migrant and homeless populations are relatively small; many divisions report zero students, or fewer than the ten-student data-suppression cutoff. The LEP populations are larger, but VDOE still does not report data for many divisions. Accordingly, the analysis below deals only with the ED and disabled groups, along with all students and the students not in any of those defined groups (here, “no group”). As well, VDOE does not provide a datum for Lexington’s disabled enrollment (although it does show the pass rates), so I’ve omitted the Lexington data.

Let’s start with the reading pass rates by division, plotted v. the percentage of ED students.

As expected, the division average pass rates for all students (the blue diamonds) decrease with an increasing percentage of ED students. The fitted line shows a decent correlation (R-squared = 39%) and a slope corresponding to a 2.8% decrease in the pass rate for each 10% increase in the ED population.
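For readers who want to reproduce this kind of fit, here is a minimal sketch of the least-squares line and R-squared behind such a chart. The division numbers below are invented placeholders, not the actual VDOE data:

```python
# Sketch of the fit behind the chart: ordinary least squares of division
# pass rate on percent ED. These data points are made-up illustrations,
# not the actual VDOE division figures.
import numpy as np

pct_ed = np.array([10, 25, 40, 55, 70, 79], dtype=float)     # % ED students
pass_rate = np.array([88, 84, 80, 75, 71, 68], dtype=float)  # % passing

slope, intercept = np.polyfit(pct_ed, pass_rate, 1)
r = np.corrcoef(pct_ed, pass_rate)[0, 1]
r_squared = r ** 2

# A slope near -0.28 would correspond to the "2.8% decrease per 10% more
# ED students" reading of the fitted line.
print(f"slope = {slope:.3f} per 1% ED, R-squared = {r_squared:.2f}")
```

Excel’s trendline tool produces the same slope and R-squared for a given set of points.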

In contrast, the scores of the ED students themselves (yellow circles) decrease at less than a third of that rate, with a correlation approaching negligible (R-squared = 2.9%).

Thus, the decrease in the all-students average rate must come predominantly from the increasing proportion of lower-scoring ED students. Indeed, a little arithmetic shows that the increasing proportion of lower-scoring ED students slightly overestimates the decrease.

Note: The calculated line here was obtained from the two fitted lines. Thus, the calculated point at 80% ED was 20% of the All pass rate (blue line) + 80% of the ED rate (yellow line).
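The note’s arithmetic can be written out directly. This is a sketch of that weighted mix; the 75% and 68% rates are placeholders standing in for values read off the fitted lines, not the actual fit coefficients:

```python
def calculated_point(pct_ed, all_rate, ed_rate):
    """Weighted mix per the note: at pct_ed percent ED students, take
    (100 - pct_ed)% of the All fitted rate plus pct_ed% of the ED fitted
    rate, both read off their fitted lines at that ED percentage."""
    w = pct_ed / 100.0
    return (1 - w) * all_rate + w * ed_rate

# With placeholder fitted-line values of 75% (All) and 68% (ED) at 80% ED,
# the calculated point is 20% of 75 plus 80% of 68:
print(calculated_point(80, 75, 68))  # 0.2 * 75 + 0.8 * 68 = 69.4
```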

The 18% R-squared for the disabled group suggests that the disabled pass rates are related in some measure to the population of ED students. The VDOE data do not allow us to test the possibility that disabled students are found more often in the ED population.

The No Group population shows about half as much correlation as the disabled group, *ca*. 11%, but still some effect from increasing poverty.

Given that the no group students are __not__ members of the ED group (or any other), this (modest) effect of the ED population on the no group pass rates cannot come from averaging in the lower ED scores. We can speculate whether this No Group score decrease reflects the effect of increasing poverty on a division’s classroom environment, the distraction of teachers dealing with the larger ED group, or something else.

Overall, these data are consistent with the notion that more poverty in the district will be associated with lower pass rates.

Turning to the math tests:

This is the same story, told in slightly different numbers. Increasing poverty lowers all the scores, but the scores of the poor students themselves do not seem to be lowered significantly by increasing the percentage of poor students. The R-squared of the no group scores, however, rises to 15%.

Next, let’s look at the effect of increasing populations of students with disabilities.

First, notice that the largest disabled population is 21% (Craig County) while the largest ED population was 79% (Northampton County), so the scale here is much shorter.

The fitted lines suggest that the reading scores of the disabled and ED populations __increase__ with increasing populations of disabled students but note the very small R-squared values.

Of more interest is the behavior of the all-students scores with increasing disabled populations. At 20% disabled, the pass rate would be lowered by about 20% of the difference between the two groups’ rates, *ca*. 20% × 40% = 8%, which would put the pass rate near 72%. Using the intercept values for the group scores, the calculation produces 69%, hence the red Estimate line here:

If, instead of the intercepts, we use the fitted lines to calculate the 20% all students score, we still get 71%.
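That enrollment-weighted arithmetic is easy to check. A sketch, with placeholder group rates roughly 40 points apart as in the text (the actual values come from the fitted lines and intercepts in the charts):

```python
def all_students_estimate(pct_disabled, nondisabled_rate, disabled_rate):
    """Expected all-students pass rate if it were simply the
    enrollment-weighted average of the two groups' rates."""
    w = pct_disabled / 100.0
    return (1 - w) * nondisabled_rate + w * disabled_rate

# Placeholder rates 40 points apart: at 20% disabled the average is
# pulled down by about 20% * 40% = 8 points.
print(all_students_estimate(20, 80, 40))  # 0.8 * 80 + 0.2 * 40 = 72
```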

In short, increasing the disabled population does not decrease the all-students scores as much as that arithmetic says it should.

The 1.9% R-squared for the disabled data commands caution, but the data certainly are consistent with the disabled scores being artificially enhanced in the larger disabled populations, perhaps by the VAAP. Then again, it may be that the districts with larger disabled populations have more effective programs to teach those populations.

The math tests show much the same pattern and the R-squared values all run less than 1%.

Here, courtesy of Excel, is the correlation matrix for the reading data (these numbers are R, not R-squared):
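Excel’s CORREL (or the Data Analysis correlation tool) produces this matrix; the same table can be sketched in a few lines of code. The rates below are invented for illustration, not the VDOE reading data:

```python
import numpy as np

# Rows: divisions (invented pass rates); columns: All, ED, Disabled, No Group
rates = np.array([
    [88, 72, 48, 92],
    [84, 70, 45, 90],
    [80, 69, 44, 88],
    [75, 67, 40, 85],
    [71, 66, 39, 83],
], dtype=float)

labels = ["All", "ED", "Disabled", "No Group"]
corr = np.corrcoef(rates, rowvar=False)  # Pearson R between the columns

for name, row in zip(labels, corr):
    print(name.ljust(8), " ".join(f"{r:5.2f}" for r in row))
```

Squaring any entry gives the corresponding R-squared.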

And here is the matrix for the math data.

The pass rates correlate fairly strongly with each other (the strongest being the 71% R-squared for the math, All/ED correlation; the weakest being the 26% for the math No Group/Disab. case). The strong No Group to group correlations suggest that better teaching produces better results, both in the unimpaired and the impaired populations.